Dataset schema:
- id: string (length 11)
- channel: string (2 classes)
- channel_id: string (2 classes)
- title: string (12 to 100 characters)
- categories: sequence
- tags: sequence
- description: string (66 to 5k characters)
- text: string (577 to 90.4k characters)
- segments: list
Example row:
- id: z4lAlVRwbrc
- channel: Yannic Kilcher
- channel_id: UCZHmQk67mSJgfCCTn7xBfew
- title: Author Interview - Improving Intrinsic Exploration with Language Abstractions
- categories: [ "Science & Technology" ]
- tags: [ "" ]
#reinforcementlearning #ai #explained

This is an interview with Jesse Mu, first author of the paper. Original paper review: https://youtu.be/NeGJAUSQEJI

Exploration is one of the oldest challenges for reinforcement learning algorithms, with no clear solution to date. Especially in environments with sparse rewards, agents face significant challenges in deciding which parts of the environment to explore further. Providing intrinsic motivation in the form of a pseudo-reward is sometimes used to overcome this challenge, but it often relies on hand-crafted heuristics and can lead to deceptive dead ends. This paper proposes to use language descriptions of encountered states as a method of assessing novelty. In two procedurally generated environments, the authors demonstrate the usefulness of language, which is in itself highly concise and abstractive and therefore lends itself well to this task.

OUTLINE:
0:00 - Intro
0:55 - Paper Overview
4:30 - Aren't you just adding extra data?
9:35 - Why are you splitting up the AMIGo teacher?
13:10 - How do you train the grounding network?
16:05 - What about causally structured environments?
17:30 - Highlights of the experimental results
20:40 - Why is there so much variance?
22:55 - How much does it matter that we are testing in a video game?
27:00 - How does novelty interface with the goal specification?
30:20 - The fundamental problems of exploration
32:15 - Are these algorithms subject to catastrophic forgetting?
34:45 - What current models could bring language to other environments?
40:30 - What does it take in terms of hardware?
43:00 - What problems did you encounter during the project?
46:40 - Where do we go from here?

Paper: https://arxiv.org/abs/2202.08938

Abstract: Reinforcement learning (RL) agents are particularly hard to train when rewards are sparse. One common solution is to use intrinsic rewards to encourage agents to explore their environment. However, recent intrinsic exploration methods often use state-based novelty measures which reward low-level exploration and may not scale to domains requiring more abstract skills. Instead, we explore natural language as a general medium for highlighting relevant abstractions in an environment. Unlike previous work, we evaluate whether language can improve over existing exploration methods by directly extending (and comparing to) competitive intrinsic exploration baselines: AMIGo (Campero et al., 2021) and NovelD (Zhang et al., 2021). These language-based variants outperform their non-linguistic forms by 45-85% across 13 challenging tasks from the MiniGrid and MiniHack environment suites.
Authors: Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah Goodman, Tim Rocktäschel, Edward Grefenstette
Hello, this is an interview with Jesse Mu, who is the first author of the paper "Improving Intrinsic Exploration with Language Abstractions". This paper is really cool because it combines the knowledge that is inherent in language with the problem of exploration in reinforcement learning. I've made a comprehensive review of this paper in the last video, so be sure to check that out. Today, Jesse has seen the video, and we're able to dive right into the questions, criticisms, and anything that came up during the video. The interview was super valuable to me. I learned a lot. I hope you do too. If you like it, then please leave a like on the video. Tell me what you think in the comments, and tell me how I can make these videos better, above all else. And I'll see you around. Bye bye. Hi, everyone. Today, I'm here with Jesse Mu, who is the first author of the paper "Improving Intrinsic Exploration with Language Abstractions", which is a really cool paper. I've enjoyed reading it. I like the bringing of language into the reinforcement learning domain. I think it makes a lot of sense, and I was very happy to see this paper. Yeah, Jesse, welcome to the channel. Yeah, thanks for having me. So presumably the viewers here have already seen my little review of the paper. What would be, maybe for people who haven't seen that, or just in your words, your short elevator pitch of the paper itself? What would that be? Yeah. So the way that I would pitch the paper is that reinforcement learning for a while now has wrestled with perhaps the central problem, which is: how do we encourage exploration in these environments with more complex tasks and longer time horizons, where the extrinsic reward that you get from the environment is very sparse? So in the absence of extrinsic rewards, how do we encourage agents to explore? And typically the way we do so is we assume, and this is a very cognitively appealing intuition, that we should motivate an agent to achieve novelty in the environment. We should make it do things that it hasn't done before, encounter states that it hasn't seen before, et cetera. And then hopefully we'll enable the agent to acquire the skills that we actually want the agent to acquire in the environment. But the problem with this, of course, is how we define novelty. In a lot of scenarios, there are environments that can look very different, but they have the same underlying semantics. So the example I have in the paper is a kitchen: the appliances might be differently branded and differently colored, but ultimately every kitchen is a kitchen, and the way that you approach kitchens and the way that you operate in them is the same. And so the idea of this paper is that we should be using natural language as the measure for how we describe states and how we describe actions within states, and use kind of traditional approaches to exploration in reinforcement learning, but simply parameterize them with language rather than with state abstractions, which is usually the way in which exploration is done in these kinds of environments. And so what we do is we take existing state-of-the-art exploration methods and then see what happens when you swap in language as a component. Do you get better performance? And we show that in a variety of settings, at least in the kinds of RL environments that people have been looking at in recent work, we do see a gain in using language to parameterize exploration rather than states. Yeah.
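[Editor's note: to make the pitch concrete, here is a minimal sketch, not code from the paper, of the re-parameterization Jesse describes: a count-based novelty bonus keyed on an oracle language description of the state rather than on the raw state itself. The oracle message source and the 1/sqrt(N) bonus form are illustrative assumptions.]

    from collections import Counter

    class LangNoveltyBonus:
        """Count-based intrinsic reward keyed on language descriptions.

        Swapping the key from raw states to oracle messages is the core
        re-parameterization: two visually distinct states that map to the
        same description ("you see a red door") share one novelty count.
        """

        def __init__(self, scale=1.0):
            self.counts = Counter()
            self.scale = scale

        def bonus(self, message: str) -> float:
            self.counts[message] += 1
            # 1/sqrt(N) decay is one standard count-based choice.
            return self.scale / self.counts[message] ** 0.5

    # Hypothetical usage inside a rollout loop:
    # r_int = lang_bonus.bonus(oracle_message)  # instead of state_bonus(obs)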
I think it's very apt to describe it that way: it's not suggesting a new exploration algorithm, but simply the re-parameterization in terms of language. And coincidentally, these environments do come with these kinds of language annotations, which you focus on. I like that. So I think what I really liked about this paper is just the research mindset, in that a lot of other papers would have tried doing three things at the same time: you know, we have a language generator and we do this and we do that. And what you're, I think, doing correctly from a standpoint of research is that you keep pretty much everything constant: the algorithms constant, even the environments. You assume that you have a perfect language oracle, and you just add the language, which I really appreciate as, let's say, a reviewer. So I think this gets us right into my biggest, essentially, criticism of the paper, or what I called out: you add language to these algorithms, but you just said you swap in language. And to me, it felt more like it's not really a swapping in; it's more like you add language on top of what these algorithms are doing. And therefore, can't I just see your method as adding more data? Essentially, there are features that are available from the simulator, right, which the other methods just don't use; they just discard this part, and you add this part. Do you have an indication of how much of your effect is really due to language and how much of the effect is just due to the fact that you have more data available? Yeah, that's a great question. And it's definitely a point that I think a lot of people will fairly make against the paper: yeah, we're using extra data, right? And yeah, I think my verb "swap" was maybe only accurate for half of this paper, which is that in AMIGo, which is the first method that we look at, it really is a swap, right? So if you read the paper, the traditional AMIGo teacher network proposes coordinates, x-y positions, as goals. And here we're just completely eliminating that kind of goal specification, and we're moving towards language. So that can be seen as more of a swap. Although of course, in NovelD, which is the second method that we look at, it is definitely more of an addition, as you say, because we keep the existing novelty bonus, and we do have experiments that measure what happens if you only have the language novelty bonus by itself, and it doesn't do as well. So you're right: I would say that we explore this idea of swapping in language in a bit of the paper, but there are points where it's more of a bolt-on, and we're not super clearly looking at or distinguishing when it is okay to have language be a complete drop-in replacement versus just some additional information. So yeah, I think we're showing that in general, if you're trying to add language into these environments, you're seeing a gain, but how precisely that gain manifests still requires some more exploration, for sure. So I guess, more generally, to your comment on using extra data: yeah, I mean, I think we have some intuition that this data should help, right? It's a fairly clean linguistic signal, but how to use this data concretely is an open question, right?
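[Editor's note: to illustrate the "bolt-on" case just described, a hedged sketch of adding a language-novelty term on top of a NovelD-style state-novelty bonus. The callables `state_novelty` and `lang_novelty` and the weight `lam` are assumptions for illustration, not the paper's exact formulation.]

    def combined_intrinsic_reward(obs_prev, obs, message,
                                  state_novelty, lang_novelty, lam=0.5):
        """Additive variant: keep a state-based, NovelD-style bonus and
        add a language-novelty bonus on top, rather than swapping the
        state bonus out."""
        # NovelD-style term: reward only *increases* in state novelty
        # between consecutive states, clipped at zero.
        noveld_term = max(state_novelty(obs) - state_novelty(obs_prev), 0.0)
        # Language term, e.g. an episodic count over oracle messages.
        lang_term = lang_novelty(message)
        return noveld_term + lam * lang_term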
And so that's kind of where I view the contribution of this paper: even though we have some intuition that adding extra data will help, we actually need the equations written down, right? And here are two concrete ways in which we can operationalize this data for the purposes of actually getting better performance in your environment. And there are a lot of examples of this in machine learning, right? So you have some large language model, for example, and then you want to fine-tune it for some domain, or you want to fine-tune it on human preferences. Fundamentally, you're adding extra data for the purposes of getting something that works well on a task that you care about, and how to use that data is the open question. The other point that I would make is that we have some deep-seated intuition that this language should help. As you say, it's really high quality: it comes from an oracle, it comes from the game engine. But we actually still need that empirical verification that it works, right? And there are actually a lot of reasons why these experiments might not have worked out. For example, the language is oracle-generated, as I mentioned, but it is also very noisy. As I describe in the method section of the paper, most of the messages that you see in the environments are actually not necessary to complete the extrinsic task, and I kind of exhaustively show which of the messages do matter. And so it could be the case that, well, the language signal, at least in these environments, is too noisy, the state abstraction captures all of the factors of variation that you might care about in an environment, and so you don't ultimately need language, right? And that's an empirical question that we have to measure. And so I view this paper as providing that empirical verification, which in hindsight, I think, is a fairly straightforward intuition. It's something that I definitely thought would happen. But yeah, it's nice to see those results in writing. Yes, I think you're right. It's easy to look back and say, of course, all you do is this. But ever since people have thought about reinforcement learning, they've obviously thought about exploration methods, and intrinsic rewards are like as old as Schmidhuber himself. The fact is that new things are developed, and this is at least one of the first works really in the direction of incorporating language. There have been incorporations of language before, but this is a systematic addition of it to state-of-the-art methods. And it seems like, I am convinced, the method, at least the L-AMIGo method, is quite well outlined, I think, in these diagrams, with the contrast of the left being the original AMIGo and the right side being the language AMIGo. A question I had right here is that on the left side, you have this teacher network, and it simply outputs a coordinate to reach, and it has to pay attention to the fact that the coordinate is not too hard and not too easy, right? Therefore, it has to learn what a too-easy coordinate is, one that is, you know, close, but it also has to learn about maybe unreachable coordinates, or coordinates that are inside the walls, right? They can't be reached, or something like this.
However, on the right side, in the language AMIGo, you seem to split these two tasks out into one network that determines which goals can even be reached and one that then orders them, essentially. Why are you doing this? Is there a particular reason why one network couldn't do both at the same time? Yeah, so the reason why we split the AMIGo network up into two parts: as you say, we don't have to do this, and there are ablation studies in the appendix that show what happens if you get rid of the grounding and you just have a single network predicting both goal achievability and the actual goal that's given to the student, so kind of a goal difficulty network. It does fine in some environments, especially in MiniHack, but it doesn't do as well in other environments such as MiniGrid. And part of the reason, as you've described, is that at least in these environments, the coordinate space stays consistent across episodes. So you're right that there are some coordinates that are perhaps unreachable in certain environments and not in others, but there's much less variation than in the set of language goals that are achievable in an environment, because the environment will have different colored doors, for example. And so the goal "go to the red door" only makes sense in, let's say, half of your environments. So it's possible for the L-AMIGo teacher to hopefully learn this distinction just through the policy gradient method, basically just like AMIGo, but this is relatively sample-inefficient, because the problem is that when you propose a goal that's simply impossible in the environment and you get negative reward, that negative reward only comes after the student has tried to complete the goal for, let's say, a few hundred steps. And so it's a relatively sample-inefficient way of telling the teacher: hey, the student did not achieve this goal in the environment. And moreover, there are two possible sources of that negative reward, right? If the student never completed the goal, is it the case that it was just too difficult for the student, but it is achievable in practice? Or is it that the goal is simply never achievable in the first place in the environment? And those two failure cases are a little bit hard to distinguish. Whereas we have this more frequent source of supervision, which is simply that, as the student is randomly exploring in the environment, it's encountering a lot of goals, a lot of messages, because we have a language annotator, and if we just ignore that signal, that seems like something we should be using. And so we have this dual thing, where we have a grounding network, which is updated more frequently, from the messages that are seen by the student, and then the policy network, which is actually trained to satisfy the difficulty objective and actually get the student to complete goals in the environment. Can you go a little bit more into, because that was, I think, the only part that confused me a little bit, how exactly you train this grounding network? There is this notion of whatever the first language description encountered along a trajectory being sort of the positive sample, and then the rest being the negative samples.
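[Editor's note: before the grounding-training discussion continues, a minimal structural sketch of the split teacher just described: a grounding head estimating which language goals are currently achievable, and a policy head scoring goals for the difficulty objective. All module shapes, names, and the 0.5 masking threshold are illustrative assumptions, not the paper's exact architecture.]

    import torch
    import torch.nn as nn

    class SplitTeacher(nn.Module):
        def __init__(self, obs_dim, goal_dim, hidden=128):
            super().__init__()
            # Grounding head: P(goal is achievable | current state).
            # Updated frequently, from messages the student encounters.
            self.grounding = nn.Sequential(
                nn.Linear(obs_dim + goal_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))
            # Policy head: scores goals for the difficulty objective,
            # trained with policy gradients from the student's outcomes.
            self.policy = nn.Sequential(
                nn.Linear(obs_dim + goal_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))

        def propose(self, obs, goal_embeddings):
            # obs: (obs_dim,); goal_embeddings: (num_goals, goal_dim)
            x = torch.cat([obs.expand(len(goal_embeddings), -1),
                           goal_embeddings], dim=-1)
            achievable = torch.sigmoid(self.grounding(x)).squeeze(-1)
            scores = self.policy(x).squeeze(-1)
            # Mask out goals the grounding head deems unachievable,
            # then sample among the rest according to policy scores.
            masked = scores.masked_fill(achievable < 0.5, float("-inf"))
            return torch.distributions.Categorical(logits=masked).sample()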
And that kind of confused me, because it means the negative samples would also include goals that were encountered, just not as the first message. Could you maybe clarify? Maybe I didn't understand something right, or maybe I don't see the reasoning behind this exact choice. Yeah. So I think your intuition is correct; I think you've described it correctly. It is kind of a weird thing to do, which is that we are treating as negative samples basically all of the goals besides the first one that was achieved. And of course, that incorrectly treats as negative samples goals that were achieved later. So negative samples are noisily generated, as I say; in the limit, this noise should even out, though. We're just noisily generating negative samples here; you can compare that to a setting where we had a more oracle sense of when a goal is truly infeasible in an environment. And so what happens is, just in general, a goal is going to appear in this negative sample term more and more often as we train the network. But because we're downweighting all possible goals in the space, the idea is that hopefully this noise of incorrectly classifying a goal as unachievable in an environment evens out over time. And so, yeah, it's a little bit tricky, because we don't have an oracle saying, oh, you can't achieve this goal in an environment; we only know that, well, the student just didn't happen to achieve the goal in this episode. So I could imagine other ways in which you try to come up with some heuristic that better captures this idea of unachievability, but this is what we came up with, which seems to work reasonably well in practice. An alternative way that you can interpret this is that we're not really measuring true achievability, like, is this at all possible in an environment? What we're really trying to have the grounding network capture is: what are the goals that the student tends to reach? Which goals are feasible at the current stage of training? With the current policy, what goals can it reach? And that's really what we need: to propose goals that, at least for now, are eventually reachable by the student. And that doesn't mean that the other goals are unachievable for all possible students in all possible environments, but at least for the current stage of the training process, this is a reasonable target. I can imagine that this may require an adjustment, or that this breaks down, in environments that are more causally structured. For example, if I always have to go through the green door before I reach the red door, then in any trajectory that I do, the green door would always be the first goal, and therefore my grounding network would never recognize the red door as a reachable goal, because that's always going to be at least the second goal, right? So I guess, depending on the environment, it's not hard to make a change to this, obviously, in that case, but I guess that's one thing that might have to be adjusted a little bit to the environment at hand. Yeah, that's a great point. There are settings where you might just want to run it without the grounding network, and obviously, that's a simpler version, so it should be fairly easy to experiment with that.
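[Editor's note: a hedged sketch of the grounding update just described: along each episode, the first message achieved is treated as the positive example and all other candidate goals as noisy negatives. The `grounding_net(obs, goal_embeddings)` signature and the binary cross-entropy form are assumptions for illustration.]

    import torch
    import torch.nn.functional as F

    def grounding_update(grounding_net, optimizer, init_obs,
                         first_achieved_idx, goal_embeddings):
        """One update: binary cross-entropy with the first goal achieved
        in the episode as the positive and all other goals as negatives.

        Negatives are noisy (a goal achieved later is wrongly labeled 0),
        but over many episodes truly achievable goals keep reappearing as
        positives, so the noise should even out in the limit.
        """
        logits = grounding_net(init_obs, goal_embeddings)   # (num_goals,)
        targets = torch.zeros_like(logits)
        targets[first_achieved_idx] = 1.0
        loss = F.binary_cross_entropy_with_logits(logits, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()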
And also, in the setting that you described, what will happen is, like you say, the go-to-the-green-door goal will get a lot of weight, but hopefully that can be counteracted to some degree by the policy network, which will learn to not put any weight on it once it realizes that it's getting absolutely zero reward for that setting. But I agree that this introduces some weird training dynamics that we don't really want, and it might be cleaner just to remove the grounding network entirely. As you say, you've looked at my paper review a little bit; I didn't go too much into the experimental results as such, and I didn't go into the appendix at all, because, honestly, I haven't read the appendix, though I think I probably should. But is there anything that you want to highlight specifically about the experimental results, or maybe something that you did in the appendix, which also has a lot of experiments in it? Things that you think people should take away from the paper, from the experiment section? Yeah, so the broad takeaway is, and I think you mentioned this in the review: we're in these kinds of deep RL environments, and the individual training runs are just incredibly noisy, and that can sometimes make it rather difficult to get a sense of, oh, is my method actually working better than others? But there has been some great recent work from, I think, a team at Mila, which won an outstanding paper award at NeurIPS last year, called "Deep Reinforcement Learning at the Edge of the Statistical Precipice". And the basic idea is: we're compute-constrained, we have these environments, they're very high variance, but even despite all of this, what are the statistical best practices that we can follow to really see whether or not our methods are making a measurable and replicable difference in the environments that we're testing? They have a lot of good recommendations, which we try to subscribe to as closely as possible in this setting. So these training curves here give you a qualitative sense of not only the ultimate performance attained by any of the models, but also the differences in sample efficiency that we see. It could be the case that, well, ultimately, both AMIGo and L-AMIGo reach the same asymptotic performance, but L-AMIGo just gets there faster or more reliably, and that's something that you can look at in these graphs. But I think the more statistically rigorous way of verifying that language is giving a gain in the environments is in the subsequent figure, which is figure four, which should be right below this one, I think. And this is really us trying to statistically verify: is there an effect happening here? So these are bootstrap confidence intervals, with five runs in each experimental condition, and we're plotting the 95 percent confidence intervals for the interquartile mean of models across tasks. This is kind of the mean performance, assuming that you drop some of the outliers, because, again, these runs are very high variance. And this is a statistical recommendation from the authors of that deep RL paper. And we show that, yes, the individual runs here naturally have really high variance.
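[Editor's note: the aggregation Jesse describes comes from Agarwal et al.'s "statistical precipice" recommendations; a simplified NumPy stand-in is sketched below, with the runs-by-tasks array shape and bootstrap parameters as assumptions, not the authors' actual evaluation code.]

    import numpy as np

    def iqm(scores):
        """Interquartile mean: mean of the middle 50% of values."""
        flat = np.sort(np.asarray(scores).ravel())
        n = len(flat)
        return flat[n // 4 : n - n // 4].mean()

    def bootstrap_ci(scores, n_boot=5000, alpha=0.05, seed=0):
        """Percentile-bootstrap confidence interval for the IQM,
        resampling whole runs with replacement."""
        rng = np.random.default_rng(seed)
        scores = np.asarray(scores)        # shape: (num_runs, num_tasks)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(scores), size=len(scores))
            stats.append(iqm(scores[idx]))
        lo, hi = np.percentile(stats, [100 * alpha / 2,
                                       100 * (1 - alpha / 2)])
        return iqm(scores), (lo, hi)

    # e.g. five runs per condition over 13 tasks, normalized per task:
    # point, (lo, hi) = bootstrap_ci(np.random.rand(5, 13))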
But as you begin to look at the runs in aggregate across both the MiniGrid and MiniHack environment suites, we begin to see a trend, and it's clear that overall we're seeing a good effect of language in these environments. These are obviously aggregate, overall metrics. When we look at the plots themselves, there is quite considerable variance, even in the ranks of the methods. Do you have an intuition of which of the language methods works better in what kinds of environments, and in what kinds of environments language might even hurt, and why? Yeah. So the trend that I try to highlight in the paper is that in larger environments, language exploration does better. And the reason why you might expect this is that in larger environments, AMIGo and NovelD suffer from this problem of increased noise. There are a lot more coordinates, for example, that you can propose which essentially describe the same semantic action. Say you want to get the agent into one room of this maze; because the environment is larger, now there are four or five different coordinates that all kind of mean the same thing. Whereas as you increase the size of the environment, the set of language goals stays relatively more consistent. It's kind of one of those complexity analyses, almost like the space complexity of the goal space. And so you can see this trend happen a bit, for example, in the Wand of Death task, so WOD; this is in the top right corner here. We have WOD medium and WOD hard, where in WOD medium, AMIGo actually outperforms L-AMIGo; it gets you to higher performance quicker. Whereas in WOD hard, AMIGo is actually not able to learn at all. And it's fundamentally the same task, but the only difference between these environments is that in WOD hard, the room is a lot bigger: instead of a narrow corridor, you actually have to search for the Wand of Death, that's the task, in some room beforehand. And you can see that simply increasing the size of the possible coordinate space results in both traditional NovelD and traditional AMIGo doing much worse in this environment. And I think that shows that these state-based exploration methods are very brittle to the size of your state space. You can increase your state space almost infinitely, and it'll make these methods perform worse, even if the underlying semantics of your environment haven't changed. Do you have an idea, do you have a feeling maybe, whether this is a property of the world in general? Like, let's say I as a human am put into a small environment or a big environment; would my descriptions in language also not grow very much? Or is it a property of just game developers? You know, I add a few extra rooms, I can reuse the same language, I just kind of tile. I mean, the biggest games are procedurally generated, like Minecraft; there, it's really just the same thing over and over. But even in these big open-world games, like Grand Theft Auto or so, the same textures are reused, and the same cars, and the same NPC characters, right? Is this a property of the world or of the video game developers? Yeah, so this is a really deep and almost philosophical question.
It's something that I think about a lot. You can certainly say, and this is a totally valid statement, that there are a lot of language actions that you can describe in our world, and even in the video game world, which just describe these kind of infinitely complex and nested sequences of actions that have absolutely nothing to do with the extrinsic task. I could tell you to run at the wall six times, do a 360, and then continue hitting the wall eight times. And that's an incredibly difficult goal, for which you can imagine a very structured curriculum to get to that point, of just infinitely bumping your head against the wall, which maybe satisfies the difficulty threshold of L-AMIGo but is absolutely orthogonal to the task that we care about. And I can imagine that there are settings where the language is kind of useless and doesn't end up giving you any gains. And so there's this open question that we haven't really touched on sufficiently in this paper, which is: how good does the language have to be in order to get this to work? So as I say, the language is oracle, it's from the game developers, but it is also noisy. There are a lot of actions, like running into walls or trying to throw stones at a minotaur, that are ultimately useless in the environment. The argument we're making here is that hopefully the noisiness of language scales a little bit less than the noisiness of your state space, but there are still a lot of edge cases and unexplored territory here. More philosophically, if you think about our world and our environment, there are a lot of ways that we can describe actions that are not particularly useful in the world that you and I inhabit. I mean, I can again tell you to do handstands and hit a wall, and walk around and write endless trivial things in the dust. But at the same time, there's a lot of our action space in the real world that we simply don't have language descriptions for. Take every single precise movement of my hand and my arm: I could presumably come up with some language to describe it, oh, I'm actuating this joint by 0.03 degrees, and think of how many joints there are in my hand. There's endless complexity in the possible action space just from moving a hand that in language we have absolutely no words for. And so it's a really tough question. We have a lot of ways of describing useless actions in the world, but at the same time, it's very clear that the language that we do use to describe the world operates at a higher level of abstraction than the kinds of actions that RL agents typically have access to, for example, actuating some sort of limb. You make a good point in the paper that language is a strong prior over what is essentially important to humans. Of course, I can say "do three backflips and then do eight of that" and so on, but that's a fairly complex sentence in itself. If I can describe something with a short piece of language, usually that is something that matters to some human somewhere; otherwise it wouldn't be mapped to a short string. But that brings me to a different question.
And that is the question of goals. In these environments, there's always a goal, right? There is one reward at the end that you need to reach. I can imagine, though, that novelty, or how important a state is in general, is really dependent on your goal. Whether I circumvent the minotaur from below or above might not be important if I just want to reach whatever goal is behind it, but it might be really important for a different task. Likewise, as a human, whether I move from here to there by walking forward or backward doesn't matter if I want to get to the fridge, but it matters a lot if I'm dancing. So how does that interplay with these language things? What do you do when the language almost needs to incorporate a piece of the goal that you want to reach in order to be useful or not? Yeah, so I think trying to filter the language descriptions that you have down to language that is relevant for your task is going to be important if we scale this up to environments where it's clear that using unfiltered language is not helping. And again, as I mentioned, the robustness of these kinds of exploration methods to the noisiness or relevance of your language signal is still an open question. If we do have task descriptions, so extrinsic task descriptions like "your job is to defeat the minotaur", then it's really intuitive that we should be able to use that as a signal for weighting how relevant a subgoal or language description that we encounter is, weighting how useful it is for the extrinsic task. So if the extrinsic goal is combat, then we should be prioritizing combat-related messages; if the extrinsic goal is buying something, then we should promote acquiring money, and things like that. And so a natural extension of this is to a multitask setting, where you have task descriptions, and the task descriptions ought to heavily filter which subgoals should be relevant for the task. When you include task descriptions, there are some more comparisons to related work. There's some related work, which we mention in the paper, where, let's imagine, you're doing basically hierarchical reinforcement learning: you have some extrinsic goal, and then you want to explicitly decompose the extrinsic goal into subgoals that you want to complete in order. Those are certainly relevant methods to look at when you start thinking about multitask or goal-conditioned settings. But this is a slightly different focus, where we're not trying to identify subgoals that need to be completed on the way to some extrinsic goal; there's still this exploration component, which is a bit of a different use of language than the hierarchical stuff. But certainly, I would say that there are people who have looked at language-conditioned RL and hierarchical RL who think a lot and very deeply about this problem of proposing subgoals that are relevant for the extrinsic goal, assuming you have some structured description of what the extrinsic goal is.
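[Editor's note: one hedged way to realize the task-conditioned weighting Jesse proposes: scale the language novelty bonus by each message's semantic similarity to the extrinsic task description. The embedding model, the squashing, and the multiplicative weighting are illustrative assumptions; the paper does not evaluate this.]

    import numpy as np
    from sentence_transformers import SentenceTransformer  # assumed dependency

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def relevance_weight(message: str, task_description: str) -> float:
        """Cosine similarity between a message and the extrinsic task
        description, squashed to [0, 1]. The idea: combat-related
        messages dominate when the task is combat, shop-related ones
        when the task is buying something."""
        m, t = model.encode([message, task_description])
        cos = float(np.dot(m, t) / (np.linalg.norm(m) * np.linalg.norm(t)))
        return 0.5 * (cos + 1.0)

    # Hypothetical usage:
    # r_int = relevance_weight(msg, "defeat the minotaur") * lang_bonus(msg)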
Although I can imagine you run into, let's say, the more abstract version of the exploration problem, which is that without an outside signal, I don't really know what to do, and there is no clear, let's say, gradient towards the goal; otherwise, the exploration problem in RL would be relatively easy. Now, when we say, well, we'll just filter out all the messages that don't have anything to do with our combat goal, we could run into the exact same thing again, where maybe in order to acquire a weapon, I first need money, right? That's not directly related to my combat goal. So there is another exploration problem again, on top of the thing we introduced. I guess maybe we can hope that if we introduce enough levels, the highest level of abstraction will have a small number of states, so that random exploration works. But it's kind of funny that the problems repeat, or replicate. Yeah, it's really tricky, and that's essentially just a deeper, or more nested, failure case of not knowing what's novel and not knowing what's relevant for your goal. So if you're prioritizing words that have combat in them because your extrinsic goal is combat, but you first need to buy something, then your semantics, your measure of novelty or relevance, is just not good enough. So that's going to be a fundamental problem in exploration: whether it's states or language, how do we know when a state is relevant for the ultimate task? Yeah. And I guess humans aren't very much different, right? I mean, science is a really hard process; that kind of exploration takes millions of humans and hundreds of years. So we can't fault our RL agents here for not doing that great of a job. Here, I found these plots to be really cool: the analysis, sort of the evolution of what the teachers propose. And because these are language, it's quite insightful and understandable what's happening in the algorithm. My surprise was a little bit: aren't these things kind of subject to catastrophic forgetting or things like this? I can imagine, right, if I train these things online and they're at some difficulty level, all of a sudden they forget that reaching the red door is really easy. Have you ever thought about whether that is a problem? Was it ever a problem? Did you encounter it? Or why don't we encounter it? Yeah. So I expect that that is a problem that happens with these agents. I don't think we really tried to precisely measure whether or not catastrophic forgetting is a problem. I think the fact is that we evaluate in environments where we are not testing the agents continuously for mastery of all of the skills they have learned in the curriculum proposed by the teacher. And so this problem of, oh, you forgot how to open a door of a specific color, is not an issue as long as the student is still quite good at completing whatever goals it needs to complete to achieve the extrinsic goal that is currently being set by the teacher. So if you forget things that were at the very beginning of training, that's not a big deal, so long as whatever path the teacher is leading you on is something that will eventually get you to the extrinsic goal that we care about. And I think that happens to be the case in these environments, because there is only one extrinsic goal, and because we're not testing the agent for mastery of every single skill, from low-level to high-level abstractions.
But if we were in a setting where being able to complete those lower-level goals on a dime, doing context switching like that, were more important, then we would have to deal with this problem of catastrophic forgetting. An important point here is that we really don't care about how well the student is able to follow instructions proposed by the teacher. We hope that that property emerges so that we can complete the extrinsic goal, but we're never actually trying to learn a student that can follow instructions; we never evaluate it exclusively in an instruction-following setting. Because if we think ahead a little bit, and I'm going to scroll down to the environments, just because maybe this will inspire us a little bit: if we think a little bit beyond this work, here you have this oracle language descriptor, and you say also in the outlook on future work that this is something you're obviously trying to get rid of, because very few environments actually have such a built-in language description, or an easily accessible one. So we might have to regress to something else. So I want to think about three different external models that we could bring in, and I wonder what you think of each of them, like how these could fit in. The first would be something like GPT-3, just a pure language model. How could that help us, maybe in combination with these things? Because we need some starting point, right? But how could a pre-trained language model that knows something about the world help us? Then something like CLIP, maybe something that can take an image and language and say whether they're good together or not, or maybe even a captioning model. And maybe something like DALL-E, something that takes language and generates an image. In this cloud of models, what possibilities do we have to bring them in, sort of to replace this oracle thing with learned systems? They don't even need to be learned online, right? They can be pre-trained; I'm probably much more excited about that. Yeah, these are, I think, going to be the most fun questions to look at in language-conditioned RL going forward: taking the boom in pre-trained models and large language models and bringing these into concrete and actionable gains in reinforcement learning. It's funny that you mention what I would describe as almost a gradation, starting from ungrounded language models like GPT-3, which are trained on text-only corpora, and asking whether those can actually help in these environments, which I would say are fundamentally grounded: they're grounded in some visual or perceptual world. Can ungrounded language models still result in gains in these settings? My intuition is, yeah, they probably still can. Because even if you don't exactly know what it means to acquire a wand or kill a minotaur in some environment, because you don't know what a minotaur looks like or what a wand looks like, there is, as I mentioned, this idea of priors: GPT has strong priors on sensible sequences of actions.
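[Editor's note: a hedged sketch of using an ungrounded LM's priors in this way: score candidate goal descriptions by their likelihood under a task-conditioned prompt, so that sensible steps outrank nonsense. GPT-2 stands in for GPT-3 here, and the prompt format is an assumption.]

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    @torch.no_grad()
    def goal_prior(goal: str, task: str = "defeat the minotaur") -> float:
        """Average per-token log-likelihood of a candidate goal under a
        task prompt; higher means the LM finds it a more sensible step."""
        text = f"To {task}, a sensible next step is to {goal}."
        ids = tok(text, return_tensors="pt").input_ids
        out = lm(ids, labels=ids)   # loss = mean negative log-likelihood
        return -out.loss.item()

    goals = ["pick up the wand", "run into the wall eight times"]
    ranked = sorted(goals, key=goal_prior, reverse=True)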
So insofar as these environments are testing sequences of actions that humans have an intuition for, you know, it's some fantasy world, but we have some intuition: oh, in order to defeat the minotaur, we need to get a weapon first; we probably look around for a weapon; maybe there's a shop; maybe we can buy a weapon from the shop, right? Video games are testing very deep-seated commonsense knowledge that we have, which hopefully generalizes to these fantasy worlds, and GPT certainly contains a lot of that information. So you might imagine we should reward or filter the kinds of descriptions that we see down to those that seem like sensible narratives that GPT-3 would generate. A sensible sequence of actions along the way to defeating the minotaur is collecting a wand, buying it, and things like that. And I think you actually already see some examples of this happening in more goal-conditioned or instruction-following RL. There's been some recent work from, I know, teams at Berkeley, maybe Google as well, that are looking at using pre-trained language models, which are not necessarily even grounded, they're just, you know, GPT-3, using them to construct sensible plans, action plans or subgoals, for completing certain tasks. So in some home environment, for example, maybe my task is to get a cup of coffee, and then the goal of GPT is: even though I don't really know what my environment looks like, I don't know what kitchen you're in, I know that sensibly this should include finding a mug and then heating up the kettle, and things like that. And so we already see some promising use of ungrounded models for improving grounded decision-making settings. Yeah, did you want to comment on that? Or I can also... No, no, that's cool. I think I've even had at least one of these works here on the channel, in this home environment. That was also really cool to see. Obviously, these models know a lot about the world, right? And I think people overestimate, or maybe underestimate, well, whatever. The thing is, if we humans look at a board like this, like at a MiniHack board, we see a map, right? We see paths to walk on and stuff like this, even if we've never played a video game. These are such strong priors built into us, and we sometimes think: why can't that dumb computer just walk around the wall, right? And I think these large models are a way we can really get that knowledge from the human world into this RL world. So yeah, I think that's a great outlook. Also with the models that combine images and text, I feel that could add a lot of value to the RL world, at least for the RL environments that are like human environments. Of course, there's reinforcement learning for computer chip design and things like this; I don't think those are necessarily going to profit that much from it. But yeah, really cool. So, you're at Stanford? Or did you do the work at Stanford, or were you at some internship? Yeah, I did it while I was at an internship last fall, so this is fall 2021, and I continued to work on it a little bit while at Stanford. But it was mostly in collaboration with some people at FAIR, or Meta, I guess now, in London. Reinforcement learning is notoriously also kind of hardware-intensive.
Although this work right here seems like maybe not that much? Could you describe a little bit what it takes to investigate a project like this? Yeah, unfortunately, I think even for these environments, it's fairly hardware-intensive. It's certainly still feasible, I think, on, let's say, a more academically sized compute budget, but to be able to run the experimentation needed to iterate quickly, you really do benefit from industry-level scale, which is one of the unfortunate things about this kind of research: it is a little bit less accessible to people in smaller compute settings. Maybe the typical kind of RL environments you think of as compute-heavy are the ones that are in 3D simulation, that need physics, soft joint contact, and all of these things to model, and those are really expensive. Compared to that, these are more symbolic grid worlds. The whole point of why MiniHack, or NetHack, was chosen as a reinforcement learning test bed is that the code base is written entirely in C and is very optimized, so you can run simulations very quickly on modern hardware. But that being said, it's still relatively compute-expensive. Again, just the amount of experience needed by state-of-the-art deep RL methods, even with extrinsic or intrinsic exploration bonuses, is still very large. So for example, for one of these runs, we would typically have, let's say, 40 CPU actors collecting experience at the same time in parallel, and then one or two GPU learner threads in the background updating from this experience. So even just a single computational experiment here needs non-trivial hardware, for sure. Yeah. And ideally you want to do that in parallel, right? Because you want to try out a bunch of things and repeat them a bunch of times, because one experiment really tells you almost nothing. If it succeeds, it's good, but if it fails, you never know unless you repeat it a bunch of times. Yeah, but I mean, it's still not the most extreme thing, right? Like, two GPUs or so, and a bunch of CPUs; as you say, that's still academically doable, which I find cool. Could you maybe tell us a bit about the process of researching this? Did everything work out as planned from the beginning, or where was your starting point, and what changed about your plan during the research? Like, maybe something didn't work out, or so. I feel it's always good for people to hear that other people encounter problems, and how they get around those problems. Yeah, so that's a great question. The intuition that I think my collaborators and I started with was fairly sensible: language is clearly going to help in these environments, it has some nice parallels to human exploration, and so let's just see whether or not language will work in these environments. What's funny, though, is that we actually started out the project less with the more abstract question of "does language help exploration?" and more with the very concrete question of "how do we improve upon AMIGo?". So: how do we improve upon an existing state-of-the-art algorithm for exploration? Let's propose something that we argue is better than everything.
We're going to propose a state-of-the-art exploration method called L-AMIGo, which will get 100 percent accuracy in all these environments, and none of the existing methods will work. That's kind of the narrative that you set up for yourself when you're starting research: I'm going to build something that's new and that's the best. However, I think the focus of this paper and the story have shifted considerably, and I think they shifted for the better, actually. Part of this shift happened because we implemented L-AMIGo, and it was working fine and it worked better than AMIGo, so we were quite excited. But at the same time, the field is moving so fast, and at NeurIPS last year, some researchers came out with this method called NovelD. We ran NovelD, and it also did really well, and in some environments, it totally blew AMIGo out of the water, and L-AMIGo too. And part of our thinking was: well, OK, now we can't really say, oh, we have L-AMIGo and it's the best model in these environments and you should only use this. At first I thought, you know, this is derailing our narrative: we're not proposing anything new, we're not proposing anything state of the art, so what's the point? But I think after some juggling and shuffling, we realized that what we're really interested in is the scientific question of: does language help exploration? So take existing method X and then do X plus language. And this question can be answered agnostic to the specific method that we actually use. And so it was at that juncture that we decided: OK, let's actually look at NovelD closely, and let's imagine adding language to NovelD as well, and do we see the same kind of results? And so I think this is an outcome of the paper that was changed on the fly, but that I'm very happy with, which is that we're not trying to claim that we have a method that is state of the art, or that is best, or that anyone should be using our method. We are very agnostic to the particular choice of method. We're trying to answer a more abstract question, which is: when does language help exploration? And I think this is a little bit more egalitarian: we're not saying that our method is better than anyone else's, and we also don't have to exhaustively compare to a lot of existing work. We're just saying that if you take whatever method you have and you add language, you do better, and here are two examples where that happens. Cool. And it is a good way to preempt some reviewers from saying that you didn't train on ImageNet and that's bad. Yeah. Is there anything else that you want to get out to viewers? Maybe a way they can get started, if that's possible, or anything that you'd like them to know? Yeah, I think we've discussed a lot of these higher-level ideas, where one holy grail is that we have CLIP generating descriptions, or OpenAI's GPT-3, and then we're evaluating in these really high-dimensional spaces with actual motor joints, and we're going to show how language helps in these MuJoCo-style, really deep RL, realistic environments, and maybe you can transfer to the real world. I think that's the broad vision, but I think it is still very far away. Even in this paper, we abstracted away a lot of the difficulty of the problem. We're assuming that we have oracle language annotations.
We're only looking at these kinds of symbolic grid worlds, and although it's tempting to dive in and say, okay, now let's straightforwardly extend this to a real-world environment where I have to actually move my coffee mug to make coffee and tea, I think we're still quite far away from that broad vision of household-enabled robots in RL, and it is probably not the most beginner-friendly way of starting. There are just so many deep problems that need to be solved jointly, from perception to action to planning, before we even consider how we better incorporate language into the mix. And so I think the way to build upon this work is just these very small, progressive relaxations of the assumptions that I, and many of the other people who have worked in this space, have made. So again, let's just imagine we get rid of the oracle language annotator and we train a model to describe states in these simple environments. We didn't really explore that, but that's a very sensible way to extend this kind of work while keeping the environments and the models fixed. And this goes back to the very beginning, when you mentioned that the way in which we approached this paper was to keep everything fixed and then just look at one very small change and see how that results in different performance in our environment. I think that's really just the way to go. It's very slow, it's very incremental work, but hopefully it's getting us more towards that guiding star of eventually having these models that operate in realistic environments and use pre-trained language models to help exploration. Cool. Jesse, thank you very much for being here. This was awesome. Thanks, I had a lot of fun.
[ { "start": 0, "end": 10.56, "text": " Hello, this is an interview with Jesse Mu, who is the first author of the paper improving" }, { "start": 10.56, "end": 13.84, "text": " intrinsic exploration with language abstractions." }, { "start": 13.84, "end": 18.44, "text": " This paper is really cool because it combines the knowledge that is inherent in language" }, { "start": 18.44, "end": 22.28, "text": " with the problem of exploration in reinforcement learning." }, { "start": 22.28, "end": 27.76, "text": " I've made a comprehensive review of this paper in the last video, so be sure to check that" }, { "start": 27.76, "end": 28.76, "text": " out." }, { "start": 28.76, "end": 34.64, "text": " Today, Jesse has seen the video and we're able to dive right into the questions, criticisms" }, { "start": 34.64, "end": 37, "text": " and anything that came up during the video." }, { "start": 37, "end": 39.6, "text": " The interview was super valuable to me." }, { "start": 39.6, "end": 40.6, "text": " I learned a lot." }, { "start": 40.6, "end": 41.760000000000005, "text": " I hope you do too." }, { "start": 41.760000000000005, "end": 44.7, "text": " If you like, then please leave a like on the video." }, { "start": 44.7, "end": 47, "text": " Tell me what you think in the comments." }, { "start": 47, "end": 51.040000000000006, "text": " Tell me how I can make these videos better above all else." }, { "start": 51.040000000000006, "end": 52.040000000000006, "text": " And I'll see you around." }, { "start": 52.040000000000006, "end": 53.040000000000006, "text": " Bye bye." }, { "start": 53.040000000000006, "end": 54.040000000000006, "text": " Hi, everyone." }, { "start": 54.04, "end": 60.48, "text": " Today, I'm here with Jesse Mu, who is the first author of the paper improving intrinsic" }, { "start": 60.48, "end": 64.8, "text": " exploration with language abstractions, which is a really cool paper." }, { "start": 64.8, "end": 66.32, "text": " I've enjoyed reading it." }, { "start": 66.32, "end": 71.6, "text": " I like the bringing language into the reinforcement learning domain." }, { "start": 71.6, "end": 75.48, "text": " I think it makes a lot of sense and I was very happy to see this paper." }, { "start": 75.48, "end": 77.36, "text": " Yeah, Jesse, welcome to the channel." }, { "start": 77.36, "end": 79.36, "text": " Yeah, thanks for having me." }, { "start": 79.36, "end": 87.08, "text": " So I've presumably the viewers here have already seen my little review of the paper." }, { "start": 87.08, "end": 92.72, "text": " What would be your maybe for people who haven't seen that or just in your words, your like" }, { "start": 92.72, "end": 95.44, "text": " short elevator pitch of the paper itself?" }, { "start": 95.44, "end": 97.03999999999999, "text": " What would that be?" }, { "start": 97.03999999999999, "end": 98.03999999999999, "text": " Yeah." }, { "start": 98.03999999999999, "end": 105, "text": " So the way that I would pitch the paper is that reinforcement learning for a while now" }, { "start": 105, "end": 111.88, "text": " has wrestled with perhaps the central problem, which is how do we encourage exploration in" }, { "start": 111.88, "end": 117.44, "text": " these environments with more complex tasks and longer time horizons where the extrinsic" }, { "start": 117.44, "end": 119.76, "text": " reward that you get from the environment is very sparse." }, { "start": 119.76, "end": 125.12, "text": " So in the absence of extrinsic rewards, how do we encourage agents to explore?" 
}, { "start": 125.12, "end": 130.02, "text": " And typically the way we do so is we assume and this is a very cognitively appealing intuition" }, { "start": 130.02, "end": 133.8, "text": " that we should motivate an agent to achieve novelty in the environment." }, { "start": 133.8, "end": 137.44, "text": " We should make it do things that it hasn't done before, encounter states that it hasn't" }, { "start": 137.44, "end": 138.64000000000001, "text": " seen before, et cetera." }, { "start": 138.64000000000001, "end": 142.84, "text": " And then hopefully we'll enable the agent to acquire the skills that we actually want" }, { "start": 142.84, "end": 145.08, "text": " the agent to acquire in the environment." }, { "start": 145.08, "end": 149.36, "text": " But the problem with this, of course, is how we define novelty." }, { "start": 149.36, "end": 153.84, "text": " In a lot of scenarios, there are environments that can look very different, but they have" }, { "start": 153.84, "end": 155.32000000000002, "text": " the same underlying semantics." }, { "start": 155.32000000000002, "end": 159.32000000000002, "text": " So the example I have in the paper is like a kitchen and the appliances might be differently" }, { "start": 159.32000000000002, "end": 163.24, "text": " branded and differently colored, but ultimately every kitchen is a kitchen." }, { "start": 163.24, "end": 167.48000000000002, "text": " And the way that you approach kitchens and the way that you operate in them is the same." }, { "start": 167.48000000000002, "end": 173.52, "text": " And so the idea of this paper is we should be using natural language as the measure for" }, { "start": 173.52, "end": 178.88, "text": " how we describe states and how we describe actions within states and use kind of traditional" }, { "start": 178.88, "end": 183.48000000000002, "text": " approaches to exploration, reinforcement learning, but simply parameterize them with language" }, { "start": 183.48000000000002, "end": 187.44, "text": " rather than with state abstractions, which is usually the way in which exploration is" }, { "start": 187.44, "end": 189.60000000000002, "text": " done in these kinds of environments." }, { "start": 189.6, "end": 194.48, "text": " And so what we do is we take existing state of the art exploration methods and then kind" }, { "start": 194.48, "end": 198.28, "text": " of see what happens when you swap in language as a component." }, { "start": 198.28, "end": 199.28, "text": " And do you get better performance?" }, { "start": 199.28, "end": 204.16, "text": " And we showed that in a variety of settings, at least in the kinds of RL environments that" }, { "start": 204.16, "end": 208.4, "text": " people have been looking at in recent work, we do see again in using language to parameterize" }, { "start": 208.4, "end": 210.88, "text": " exploration rather than states." }, { "start": 210.88, "end": 212.76, "text": " Yeah." }, { "start": 212.76, "end": 222.56, "text": " I think it's very apt to describe it as you, it's not suggesting like a new exploration" }, { "start": 222.56, "end": 227.56, "text": " algorithm, but it's simply the re-parameterization in terms of language." }, { "start": 227.56, "end": 232.56, "text": " And coincidentally, these environments, they do come with this kind of language annotations," }, { "start": 232.56, "end": 234, "text": " which we do focus on." }, { "start": 234, "end": 235, "text": " I like that." 
}, { "start": 235, "end": 240.94, "text": " So I think what I really liked about this paper is just the research mindset in that" }, { "start": 240.94, "end": 245.52, "text": " any other paper or a lot of other papers, they would have done, they would have tried" }, { "start": 245.52, "end": 248.32, "text": " doing like three things at the same time." }, { "start": 248.32, "end": 252.48, "text": " Like you know, we have a language generator and we do this and we do that." }, { "start": 252.48, "end": 257.44, "text": " And what you're I think doing correctly from a standpoint of research is you keep pretty" }, { "start": 257.44, "end": 261.46, "text": " much everything constant, the algorithms constant, right?" }, { "start": 261.46, "end": 266.48, "text": " Even the environments, you assume that you have a perfect language oracle and you just" }, { "start": 266.48, "end": 273.72, "text": " add the language, which I really appreciate as like a reviewer, let's say." }, { "start": 273.72, "end": 283.36, "text": " So I think this gets us right into our or my biggest, essentially criticism of the paper" }, { "start": 283.36, "end": 290.64000000000004, "text": " or what I called in that you add language to these algorithms, but you just said we" }, { "start": 290.64000000000004, "end": 292.40000000000003, "text": " swap in language." }, { "start": 292.40000000000003, "end": 295.76, "text": " And to me, it felt more like it's not really a swapping in." }, { "start": 295.76, "end": 301.2, "text": " It's more like you add language on top of what these algorithms are doing." }, { "start": 301.2, "end": 307.48, "text": " And therefore, can't I just see your method as adding more data?" }, { "start": 307.48, "end": 312.2, "text": " Essentially, there is features that are available from the simulator, right, which the other" }, { "start": 312.2, "end": 317.15999999999997, "text": " methods just don't use, they just discard this part and you just add this part." }, { "start": 317.15999999999997, "end": 323.24, "text": " Do you have an indication in how much of your effect is really due to language and how much" }, { "start": 323.24, "end": 326.48, "text": " of the effect is just due to the fact that you have more data available?" }, { "start": 326.48, "end": 328.48, "text": " Yeah, that's a great question." }, { "start": 328.48, "end": 332.04, "text": " And it's definitely a point that I think a lot of people will fairly make against the" }, { "start": 332.04, "end": 336.32, "text": " paper is, yeah, we're using extra data, right?" }, { "start": 336.32, "end": 341.84000000000003, "text": " And yeah, I think my verb swap was maybe only accurate in half of this paper, which is that" }, { "start": 341.84000000000003, "end": 345.8, "text": " in Amigo, which is the first method that we look at, it really is a swap, right?" }, { "start": 345.8, "end": 351.68, "text": " So if you read the paper, the traditional kind of Amigo teacher network proposes coordinates" }, { "start": 351.68, "end": 354.16, "text": " X, Y positions as goals." }, { "start": 354.16, "end": 358.8, "text": " And here we're just completely eliminating that kind of goal specification and we're" }, { "start": 358.8, "end": 360.72, "text": " moving towards language." }, { "start": 360.72, "end": 363.2, "text": " So that can be seen as more of a swap." 
}, { "start": 363.2, "end": 368.32, "text": " Although of course, in novelty, which is the second method that we look at, that is definitely" }, { "start": 368.32, "end": 372.04, "text": " more of kind of an addition, as you say, because we keep the extrinsic bonus and we do have" }, { "start": 372.04, "end": 376.24, "text": " experiments that measure what happens if you don't have novelty by itself." }, { "start": 376.24, "end": 379.52, "text": " You only have the kind of language novelty bonus and it doesn't do as well." }, { "start": 379.52, "end": 385.15999999999997, "text": " So you're right that I would say that we explore this idea of swapping in language in a bit" }, { "start": 385.15999999999997, "end": 388.91999999999996, "text": " of the paper, but there are points where it's more of kind of a bolt on and we're not like" }, { "start": 388.91999999999996, "end": 394.76, "text": " super clearly looking at or distinguishing when is it okay to have language just be a" }, { "start": 394.76, "end": 398.2, "text": " complete drop in replacement versus just some additional information." }, { "start": 398.2, "end": 403.52, "text": " So yeah, I think we're showing that in general, if you're trying to add language into these" }, { "start": 403.52, "end": 409.32, "text": " environments, you're seeing a gain, but how precisely that gain manifests is still a" }, { "start": 409.32, "end": 412.36, "text": " little requires some more exploration for sure." }, { "start": 412.36, "end": 415.84, "text": " So I guess more generally to your comment on using extra data." }, { "start": 415.84, "end": 421.68, "text": " Yeah, I mean, I think we have some intuition that this data should help, right?" }, { "start": 421.68, "end": 426.32, "text": " It's a fairly clean linguistic signal, but how to use this data concretely is an open" }, { "start": 426.32, "end": 427.32, "text": " question, right?" }, { "start": 427.32, "end": 430.24, "text": " And so that's kind of where I view the contribution of this paper as even though we have some" }, { "start": 430.24, "end": 434.36, "text": " intuition that adding extra data will help, we actually need the equations written down," }, { "start": 434.36, "end": 435.36, "text": " right?" }, { "start": 435.36, "end": 438.44, "text": " And here are two concrete ways in which we can operationalize this data for the purposes" }, { "start": 438.44, "end": 441.84, "text": " of actually getting better performance in your environment." }, { "start": 441.84, "end": 444, "text": " And there are a lot of examples of this in machine learning, right?" }, { "start": 444, "end": 447.6, "text": " So like you have some large language model, for example, and then you want to fine tune" }, { "start": 447.6, "end": 450.2, "text": " it for some domain or you want to fine tune it on human preferences." }, { "start": 450.2, "end": 454.64, "text": " I mean, that's fundamentally, you're adding extra data for the purposes of getting something" }, { "start": 454.64, "end": 457.1, "text": " that works well on a task that you care about, right?" }, { "start": 457.1, "end": 460.15999999999997, "text": " And how to use that data is the open question." }, { "start": 460.15999999999997, "end": 464.88, "text": " The other point that I would say is that we have some deep seated intuition that this language" }, { "start": 464.88, "end": 465.88, "text": " should help." }, { "start": 465.88, "end": 466.88, "text": " As you say, it's really high quality." 
}, { "start": 466.88, "end": 467.88, "text": " It comes from an Oracle." }, { "start": 467.88, "end": 470.2, "text": " It comes from the game engine." }, { "start": 470.2, "end": 474.15999999999997, "text": " But we actually still need to get that kind of empirical verification that it works, right?" }, { "start": 474.15999999999997, "end": 477.56, "text": " And there's actually a lot of reasons why maybe these experiments might not have worked" }, { "start": 477.56, "end": 478.56, "text": " out." }, { "start": 478.56, "end": 484.12, "text": " For example, the language is Oracle generated, as I mentioned, but it is also very noisy." }, { "start": 484.12, "end": 488.48, "text": " So as I described in kind of the method section of the paper, most of the messages that you" }, { "start": 488.48, "end": 493.04, "text": " see in the environments are actually not necessary to complete the extrinsic task." }, { "start": 493.04, "end": 497.6, "text": " And I kind of exhaustively show which of the messages do matter." }, { "start": 497.6, "end": 500.88, "text": " And so it could be the case that, well, the language signal, at least in these environments," }, { "start": 500.88, "end": 502.36, "text": " is too noisy." }, { "start": 502.36, "end": 505.76000000000005, "text": " The state abstraction captures all of the factors of variation that you might care about" }, { "start": 505.76000000000005, "end": 506.84000000000003, "text": " in an environment." }, { "start": 506.84000000000003, "end": 508.66, "text": " And so you don't ultimately need language, right?" }, { "start": 508.66, "end": 511.28000000000003, "text": " And that's an imperial question that we have to measure." }, { "start": 511.28000000000003, "end": 515.5600000000001, "text": " And so I view this paper as providing that empirical verification, which in hindsight," }, { "start": 515.5600000000001, "end": 517.64, "text": " I think, is a fairly straightforward intuition." }, { "start": 517.64, "end": 520.48, "text": " It's something that I definitely thought would happen." }, { "start": 520.48, "end": 523.32, "text": " But yeah, it's nice to see those results kind of in writing." }, { "start": 523.32, "end": 524.72, "text": " Yes, it's easy." }, { "start": 524.72, "end": 526.08, "text": " I think you're right." }, { "start": 526.08, "end": 531.44, "text": " It's easy to look back and say, of course, like, well, all you do is you do this." }, { "start": 531.44, "end": 539.84, "text": " But exploration has been since since, you know, people have thought about reinforcement learning," }, { "start": 539.84, "end": 545.6800000000001, "text": " they've obviously thought about exploration methods and intrinsic rewards are like as" }, { "start": 545.6800000000001, "end": 547.9200000000001, "text": " old as Schmidhuber himself." }, { "start": 547.9200000000001, "end": 553.72, "text": " And we you know, the fact is that, you know, new things are developed." }, { "start": 553.72, "end": 560.28, "text": " And this is at least one of the first things into into really the direction of incorporating." }, { "start": 560.28, "end": 564.72, "text": " There have been incorporation of languages before, but a systematic adding it to the" }, { "start": 564.72, "end": 566.6800000000001, "text": " state of the art methods." 
}, { "start": 566.6800000000001, "end": 572.6, "text": " And it seems like I am I am convinced the method at least the El Amigo method is quite" }, { "start": 572.6, "end": 577.96, "text": " well outlined, I think, in these diagrams, the contrast of the left being the original" }, { "start": 577.96, "end": 583.4, "text": " Amigo and the right side being the language Amigo." }, { "start": 583.4, "end": 588.04, "text": " A question I had right here is that on the left side, you have this teacher network," }, { "start": 588.04, "end": 595.4399999999999, "text": " and it simply outputs a coordinate to reach and it has to pay attention to the fact that" }, { "start": 595.4399999999999, "end": 600.0799999999999, "text": " the coordinate is not too hard and not too easy, right?" }, { "start": 600.0799999999999, "end": 605.28, "text": " Therefore, it has to learn that too easy coordinate." }, { "start": 605.28, "end": 610.48, "text": " Yes, one that is, you know, close, but also it has to learn maybe unreachable coordinates" }, { "start": 610.48, "end": 612.92, "text": " or coordinates that are inside the walls, right?" }, { "start": 612.92, "end": 615.1999999999999, "text": " They can't be reached or something like this." }, { "start": 615.1999999999999, "end": 619.28, "text": " However, on the right side in the language, I mean, you seem to split these two tasks" }, { "start": 619.28, "end": 625.68, "text": " out into one network that that determines which goals can even be reached and one that" }, { "start": 625.68, "end": 628.64, "text": " then orders them essentially, why?" }, { "start": 628.64, "end": 630.92, "text": " Why are you doing this?" }, { "start": 630.92, "end": 636.1999999999999, "text": " Like what's the is there a particular reason behind why one network couldn't do both at" }, { "start": 636.1999999999999, "end": 637.56, "text": " the same time?" }, { "start": 637.56, "end": 645.04, "text": " Yeah, so the reason why we split the Amigo network up into two parts, and as you say," }, { "start": 645.04, "end": 646.1999999999999, "text": " we don't have to do this." }, { "start": 646.1999999999999, "end": 650.3199999999999, "text": " And there are ablation studies in the appendix that shows what happens if you get rid of" }, { "start": 650.3199999999999, "end": 655.8399999999999, "text": " the grounding and you just have a single network predicting both goal achievability and, you" }, { "start": 655.8399999999999, "end": 659.56, "text": " know, actual the actual goal that's seen by the students." }, { "start": 659.56, "end": 663.0799999999999, "text": " So it kind of a goal difficulty network." }, { "start": 663.08, "end": 669.24, "text": " It does find in some environments, especially in mini hack, but it doesn't do as well in" }, { "start": 669.24, "end": 671.2, "text": " other environments such as mini grid." }, { "start": 671.2, "end": 676.5600000000001, "text": " And part of the reason, as you've described, is that at least in these environments, the" }, { "start": 676.5600000000001, "end": 680, "text": " coordinate space stays consistent across episodes." 
}, { "start": 680, "end": 686.2, "text": " And so you're right that there are some coordinates that are perhaps unreachable in certain environments" }, { "start": 686.2, "end": 691.84, "text": " and not in others, but there's much less variation than the set of language goals that are achievable" }, { "start": 691.84, "end": 696, "text": " in an environment because the environment will have different colored doors, for example." }, { "start": 696, "end": 701.72, "text": " And so the goal go to the red door only makes sense in, let's say, half of your environments." }, { "start": 701.72, "end": 709.08, "text": " So it's possible for the teacher to the Alamigo teacher to hopefully learn this distinction" }, { "start": 709.08, "end": 712.9200000000001, "text": " kind of just through, you know, the policy gradient method." }, { "start": 712.9200000000001, "end": 716.64, "text": " So basically just like Amigo, but this is relatively sample inefficient because the" }, { "start": 716.64, "end": 721.82, "text": " problem is that when you propose a goal that's simply impossible in the environment and you" }, { "start": 721.82, "end": 726.4000000000001, "text": " get negative reward, that negative reward only comes after the student has tried to" }, { "start": 726.4000000000001, "end": 728.5200000000001, "text": " complete the goal for, let's say, a few hundred steps." }, { "start": 728.5200000000001, "end": 729.5200000000001, "text": " Right." }, { "start": 729.5200000000001, "end": 733.24, "text": " And so it's a relatively sample of inefficient way of telling the teacher, hey, the student" }, { "start": 733.24, "end": 735.5600000000001, "text": " did not achieve this goal in the environment." }, { "start": 735.5600000000001, "end": 739.44, "text": " And moreover, that negative reward, you know, there's two possible sources of that reward." }, { "start": 739.44, "end": 740.44, "text": " Right." }, { "start": 740.44, "end": 744.6400000000001, "text": " So if the student never completed the goal, is it the case that it was just too difficult" }, { "start": 744.6400000000001, "end": 748.08, "text": " for the student, but it is achievable in practice?" }, { "start": 748.08, "end": 752.8000000000001, "text": " Or is it that the goal is simply never achievable in the first place in the environment?" }, { "start": 752.8000000000001, "end": 753.8000000000001, "text": " Right." }, { "start": 753.8000000000001, "end": 758, "text": " And those kind of two failure cases are a little bit hard to distinguish." }, { "start": 758, "end": 761.88, "text": " Whereas we have kind of this more frequent source of supervision, which is simply, you" }, { "start": 761.88, "end": 766, "text": " know, as the student is randomly exploring in the environment, it's encountering a lot" }, { "start": 766, "end": 770.6800000000001, "text": " of goals, a lot of messages because we have a language annotator and we're kind of, you" }, { "start": 770.6800000000001, "end": 774.1600000000001, "text": " know, if we if we kind of ignore that signal, that seems like something that we should be" }, { "start": 774.1600000000001, "end": 775.8000000000001, "text": " using." }, { "start": 775.8, "end": 779.28, "text": " And so we have kind of this dual thing where we have a grounding number, which is updated" }, { "start": 779.28, "end": 782.5999999999999, "text": " more frequently in the environment, which is updated from the messages that are seen" }, { "start": 782.5999999999999, "end": 783.78, "text": " by the students." 
}, { "start": 783.78, "end": 787.9599999999999, "text": " And then finally, the policy network, which is actually trained to satisfy the kind of" }, { "start": 787.9599999999999, "end": 792.8199999999999, "text": " difficulty objective and actually get the student to complete goals in the environment." }, { "start": 792.8199999999999, "end": 797.4799999999999, "text": " Can you go a little bit more into because that was, I think, the only part that confused" }, { "start": 797.4799999999999, "end": 803.04, "text": " me a little bit, which is the how exactly you train this grounding network." }, { "start": 803.04, "end": 810.12, "text": " There is a there is this this notion of whatever the first language description encountered" }, { "start": 810.12, "end": 815.88, "text": " along a trajectory being sort of the positive sample and then the rest being the negative" }, { "start": 815.88, "end": 816.88, "text": " samples." }, { "start": 816.88, "end": 821.6999999999999, "text": " And that kind of confused me because it means the negative samples would also include goals" }, { "start": 821.6999999999999, "end": 826.52, "text": " that were encountered just not as the first message." }, { "start": 826.52, "end": 829.98, "text": " Could you maybe clarify maybe I didn't understand something right?" }, { "start": 829.98, "end": 836.84, "text": " Or maybe I don't, you know, see the reasoning behind this exact choice." }, { "start": 836.84, "end": 837.84, "text": " Yeah." }, { "start": 837.84, "end": 839.5600000000001, "text": " So I think your intuition is correct." }, { "start": 839.5600000000001, "end": 841.46, "text": " I think you've described it correctly." }, { "start": 841.46, "end": 848.6800000000001, "text": " It is kind of a weird thing to do, which is that we are treating negative samples as basically" }, { "start": 848.6800000000001, "end": 851.36, "text": " all of the goals besides the first one that was achieved." }, { "start": 851.36, "end": 852.36, "text": " Right." }, { "start": 852.36, "end": 857.4, "text": " And of course, that is incorrectly treating negative samples of goals that were achieved" }, { "start": 857.4, "end": 858.4, "text": " later." }, { "start": 858.4, "end": 859.4, "text": " Right." }, { "start": 859.4, "end": 866.12, "text": " So negative samples are noisily generated, as I as I say, in the limit, this noise should" }, { "start": 866.12, "end": 867.12, "text": " even out, though." }, { "start": 867.12, "end": 870.6, "text": " So you can compare, you know, like we're just kind of noisy, noisily generating negative" }, { "start": 870.6, "end": 871.6, "text": " samples here." }, { "start": 871.6, "end": 876.84, "text": " We can compare that to maybe a setting where we had a more oracle sense of when a goal" }, { "start": 876.84, "end": 879.6, "text": " is truly infeasible in an environment." }, { "start": 879.6, "end": 880.6, "text": " Right." }, { "start": 880.6, "end": 884.52, "text": " And so what happens is, you know, just in general, a goal is going to appear in this" }, { "start": 884.52, "end": 887.88, "text": " negative sample term more and more often as we train the network." 
}, { "start": 887.88, "end": 893.68, "text": " But because it's we're kind of, you know, downweighing all possible goals in the space," }, { "start": 893.68, "end": 898.04, "text": " the idea is that hopefully, you know, this noise of of class of incorrectly classifying" }, { "start": 898.04, "end": 901.48, "text": " a goal is unachievable in an environment kind of evens out over time." }, { "start": 901.48, "end": 902.48, "text": " Right." }, { "start": 902.48, "end": 906.12, "text": " And so, yeah, it's a little bit tricky because we don't have the oracle saying, oh, you" }, { "start": 906.12, "end": 907.8, "text": " can't achieve this goal in an environment." }, { "start": 907.8, "end": 908.8, "text": " Right." }, { "start": 908.8, "end": 909.8, "text": " We only know that." }, { "start": 909.8, "end": 913.2, "text": " Well, you know, the student just didn't happen to achieve the goal in this environment." }, { "start": 913.2, "end": 916.6, "text": " So I could imagine other ways in which you try to come up with some heuristic that better" }, { "start": 916.6, "end": 920.0400000000001, "text": " captures this idea of kind of unachievability." }, { "start": 920.0400000000001, "end": 924.4, "text": " But this is what we came up with, which seems to work reasonably well in practice." }, { "start": 924.4, "end": 931.4, "text": " And alternative way that you can interpret this is we're not really measuring true achievability." }, { "start": 931.4, "end": 934.6, "text": " Like, you know, is this at all possible in an environment?" }, { "start": 934.6, "end": 938.1800000000001, "text": " What we're really trying to have the grounding network capture here is what are the goals" }, { "start": 938.1800000000001, "end": 939.7, "text": " that the student tends to reach?" }, { "start": 939.7, "end": 942.6, "text": " So like are feasible at the current state of training, right?" }, { "start": 942.6, "end": 945.46, "text": " The current policy, what goals can it reach?" }, { "start": 945.46, "end": 949, "text": " And that's really what we need, right, is we need like to propose goals that at least" }, { "start": 949, "end": 952.84, "text": " for now are eventually reachable by a student." }, { "start": 952.84, "end": 957.5600000000001, "text": " And that doesn't mean that it's, you know, unachievable in all possible students under" }, { "start": 957.5600000000001, "end": 960.94, "text": " all possible environments, but at least just for current, you know, in the current stage" }, { "start": 960.94, "end": 964.12, "text": " of the training process, it's a reasonable target." }, { "start": 964.12, "end": 971.0600000000001, "text": " I can imagine that this gets very, that this may require an adjustment or that this breaks" }, { "start": 971.0600000000001, "end": 974.1600000000001, "text": " down in environments that are more causally structured." }, { "start": 974.16, "end": 979.9599999999999, "text": " For example, if I always have to go through the green door before I reach the red door," }, { "start": 979.9599999999999, "end": 985.28, "text": " right, then the goal would always be in any trajectory that I do, the green door would" }, { "start": 985.28, "end": 987.12, "text": " always be the first goal." }, { "start": 987.12, "end": 993.24, "text": " And therefore my grounding network would never recognize the red door as a reachable goal," }, { "start": 993.24, "end": 996.18, "text": " because that's always going to be at least the second goal, right?" 
}, { "start": 996.18, "end": 1001.26, "text": " So I guess depending on the environment, it's not hard to make a change to this, obviously," }, { "start": 1001.26, "end": 1005.72, "text": " in that case, but I guess that's one thing that might have to adjust a little bit to" }, { "start": 1005.72, "end": 1007.2, "text": " the environment at hand." }, { "start": 1007.2, "end": 1012.26, "text": " Yeah, that's a that's a great point is that we do not." }, { "start": 1012.26, "end": 1015.52, "text": " There are settings where you might just, you know, want to run it without the grounding" }, { "start": 1015.52, "end": 1016.52, "text": " network." }, { "start": 1016.52, "end": 1017.66, "text": " And obviously, that's actually a simpler version." }, { "start": 1017.66, "end": 1021.84, "text": " So it should be fairly easy to experiment with that." }, { "start": 1021.84, "end": 1028.6, "text": " And also, in the setting that you described, what will happen is, like you say, you know," }, { "start": 1028.6, "end": 1032.9599999999998, "text": " the green the go to the green door goal will get a lot of weight, but hopefully can be" }, { "start": 1032.9599999999998, "end": 1036.36, "text": " counteracted to some degree by the policy network, which will, you know, learn to not" }, { "start": 1036.36, "end": 1039.8, "text": " put any weight on that once it realizes that it's getting absolutely zero reward for that" }, { "start": 1039.8, "end": 1040.8, "text": " setting." }, { "start": 1040.8, "end": 1043.8, "text": " But I agree that this kind of introduces some weird training dynamics that we don't really" }, { "start": 1043.8, "end": 1049.32, "text": " want might be cleaner just to remove the grounding network entirely." }, { "start": 1049.32, "end": 1054.9599999999998, "text": " If you as as you say, you've looked at my paper review a little bit, I didn't go too" }, { "start": 1054.96, "end": 1059.64, "text": " much into the experimental results as such." }, { "start": 1059.64, "end": 1063.82, "text": " Is there also I didn't go into the appendix at all, because honestly, I haven't read the" }, { "start": 1063.82, "end": 1072.98, "text": " appendix because I sometimes I don't I think I should probably." }, { "start": 1072.98, "end": 1078.92, "text": " But is there anything that you want to highlight specifically about the experimental results" }, { "start": 1078.92, "end": 1085.16, "text": " or or maybe something that you did in the expand appendix, which is also has a lot of" }, { "start": 1085.16, "end": 1087.92, "text": " experiments in it?" }, { "start": 1087.92, "end": 1093.28, "text": " Things that you think people should take away from the paper from the experiment section?" }, { "start": 1093.28, "end": 1101.6000000000001, "text": " Yeah, so broad takeaways are and I think that you mentioned this in the review is, you know," }, { "start": 1101.6000000000001, "end": 1105.96, "text": " we're in these kind of DRL environments and and the individual training runs are just" }, { "start": 1105.96, "end": 1110.08, "text": " incredibly noisy, you know, and that can be sometimes like rather difficult to get a sense" }, { "start": 1110.08, "end": 1112.4, "text": " of, oh, is my method actually working better than others?" }, { "start": 1112.4, "end": 1113.4, "text": " Right." 
}, { "start": 1113.4, "end": 1118.44, "text": " But there has been some great recent work from I think a team at Miele, which won an" }, { "start": 1118.44, "end": 1122, "text": " outstanding paper award at New York's last year, which was called deep reinforcement" }, { "start": 1122, "end": 1124.52, "text": " learning on the edge of the statistical precipice." }, { "start": 1124.52, "end": 1127.52, "text": " And the basic idea is, you know, we're compute constrained." }, { "start": 1127.52, "end": 1129.3600000000001, "text": " We have these environments, they're very high variance." }, { "start": 1129.3600000000001, "end": 1133.72, "text": " But even despite all of this, you know, what are the kind of statistical best principles" }, { "start": 1133.72, "end": 1137.88, "text": " that we can follow to really see whether or not our methods are actually making a measurable" }, { "start": 1137.88, "end": 1141.66, "text": " and replicable difference in the environments that we're testing?" }, { "start": 1141.66, "end": 1146.3600000000001, "text": " And so they have a lot of good recommendations, which we try to subscribe to as close as possible" }, { "start": 1146.3600000000001, "end": 1147.3600000000001, "text": " in this setting." }, { "start": 1147.3600000000001, "end": 1148.3600000000001, "text": " Right." }, { "start": 1148.3600000000001, "end": 1152.38, "text": " So these training curves here give you kind of a qualitative sense about not only kind" }, { "start": 1152.38, "end": 1156.22, "text": " of the ultimate performance attained by any of the models, but also of the differences" }, { "start": 1156.22, "end": 1158.3600000000001, "text": " in sample efficiency that we see." }, { "start": 1158.3600000000001, "end": 1159.3600000000001, "text": " Right." }, { "start": 1159.3600000000001, "end": 1163.68, "text": " So it could be the case that, well, ultimately, both Amigo and El Amigo reach the same asymptotic" }, { "start": 1163.68, "end": 1167.8400000000001, "text": " performance, but Amigo just gets there faster or more reliably." }, { "start": 1167.8400000000001, "end": 1171.04, "text": " And that's something that you can, sorry, El Amigo gets there faster and more reliably." }, { "start": 1171.04, "end": 1173.68, "text": " And that's something that you can look at in these graphs." }, { "start": 1173.68, "end": 1177.88, "text": " But I think the more kind of statistically rigorous way of verifying that language is" }, { "start": 1177.88, "end": 1182.76, "text": " giving a gain in the environments is in the subsequent figure, which is figure four, which" }, { "start": 1182.76, "end": 1185.1000000000001, "text": " should be right below this one, I think." }, { "start": 1185.1000000000001, "end": 1189.92, "text": " And this is really, you know, us trying to statistically verify, you know, is there an" }, { "start": 1189.92, "end": 1191.04, "text": " effect happening here?" }, { "start": 1191.04, "end": 1196.1599999999999, "text": " And so these here are bootstrap confidence intervals, five runs in each experimental" }, { "start": 1196.1599999999999, "end": 1197.1599999999999, "text": " condition." }, { "start": 1197.1599999999999, "end": 1203.6, "text": " And we're plotting the 95 percent confidence intervals for the interquartile mean of models" }, { "start": 1203.6, "end": 1204.6, "text": " across tasks." 
}, { "start": 1204.6, "end": 1208.56, "text": " So this is kind of like the mean performance, assuming that you drop some of the outliers," }, { "start": 1208.56, "end": 1211.04, "text": " because again, these runs are very high variance." }, { "start": 1211.04, "end": 1212.04, "text": " Right." }, { "start": 1212.04, "end": 1217.68, "text": " And so this is kind of a statistical recommendation from the authors of that deep RL paper." }, { "start": 1217.68, "end": 1221.92, "text": " And we show that, yes, the individual runs here have really high variance naturally." }, { "start": 1221.92, "end": 1227.28, "text": " But as you begin to look at the runs in aggregate across both the mini grid and mini hack environment" }, { "start": 1227.28, "end": 1231.72, "text": " suites, we begin to see a trend that it's clear that, you know, overall we're seeing" }, { "start": 1231.72, "end": 1235.0600000000002, "text": " a good effect of language in these environments." }, { "start": 1235.0600000000002, "end": 1241.72, "text": " And so this is obviously these are aggregate metrics, overall metrics and so on." }, { "start": 1241.72, "end": 1247.04, "text": " When we look at the plots themselves, there is quite considerable variance, even in the" }, { "start": 1247.04, "end": 1248.48, "text": " ranks of the method." }, { "start": 1248.48, "end": 1254.76, "text": " Do you have an intuition of between the language methods, which works better in what kind of" }, { "start": 1254.76, "end": 1260.32, "text": " environments and in what kind of environments does language even maybe hurt?" }, { "start": 1260.32, "end": 1262.6, "text": " And why do you have an idea?" }, { "start": 1262.6, "end": 1263.6399999999999, "text": " Yeah." }, { "start": 1263.6399999999999, "end": 1270.6, "text": " So the trend that I try to highlight in the paper is that in larger environments, language" }, { "start": 1270.6, "end": 1272.52, "text": " exploration does better." }, { "start": 1272.52, "end": 1280.8, "text": " And the reason why you might expect this is that in larger environments, Amigo and Novelty" }, { "start": 1280.8, "end": 1283.12, "text": " kind of suffer from this problem of increased noise." }, { "start": 1283.12, "end": 1284.12, "text": " Right." }, { "start": 1284.12, "end": 1287.24, "text": " There's a lot more coordinates, for example, that you can propose, which essentially describe" }, { "start": 1287.24, "end": 1288.72, "text": " kind of the same semantic action." }, { "start": 1288.72, "end": 1289.72, "text": " Right." }, { "start": 1289.72, "end": 1292.8799999999999, "text": " You have like you want to get the agent into one room of this maze." }, { "start": 1292.8799999999999, "end": 1296.32, "text": " And you know, because the environment is larger, now there are four or five different coordinates" }, { "start": 1296.32, "end": 1298.16, "text": " that all kind of mean the same thing." }, { "start": 1298.16, "end": 1304.0400000000002, "text": " Whereas as you increase the size of the environment, the language set, the set of language goals" }, { "start": 1304.0400000000002, "end": 1305.5600000000002, "text": " is relatively more consistent." }, { "start": 1305.5600000000002, "end": 1306.5600000000002, "text": " Right." }, { "start": 1306.5600000000002, "end": 1308.3600000000001, "text": " It's kind of one of those complexity analyses." }, { "start": 1308.3600000000001, "end": 1309.3600000000001, "text": " Right." 
}, { "start": 1309.3600000000001, "end": 1312.0600000000002, "text": " It's like kind of space complexity, almost of the goal space." }, { "start": 1312.0600000000002, "end": 1314.72, "text": " And so you can see this trend happen a bit." }, { "start": 1314.72, "end": 1319.42, "text": " For example, in the Wand of Death task, so WOD, this is in the top right corner here." }, { "start": 1319.42, "end": 1326.6000000000001, "text": " We have WOD medium and WOD hard, where in WOD medium, Amigo actually outperforms El" }, { "start": 1326.6000000000001, "end": 1327.6000000000001, "text": " Amigo." }, { "start": 1327.6, "end": 1329.7199999999998, "text": " So it gets you to higher performance quicker." }, { "start": 1329.7199999999998, "end": 1335.24, "text": " Whereas in WOD Wand of Death hard, Amigo is actually not able to learn at all." }, { "start": 1335.24, "end": 1338.9399999999998, "text": " And the only difference between these environments, it's fundamentally the same task." }, { "start": 1338.9399999999998, "end": 1343.12, "text": " But the only difference is that in WOD hard, the room is a lot bigger." }, { "start": 1343.12, "end": 1346.3999999999999, "text": " So instead of a narrow corridor, you actually have to search for the Wand of Death, that's" }, { "start": 1346.3999999999999, "end": 1349.8, "text": " the task, in some in some room beforehand." }, { "start": 1349.8, "end": 1355.6, "text": " And you can see that just simply increasing the size of the possible coordinate spaces" }, { "start": 1355.6, "end": 1360.6, "text": " results in both traditional novelty and traditional Amigo doing much worse in this environment." }, { "start": 1360.6, "end": 1364.9199999999998, "text": " And I think that kind of shows that these kind of state based exploration methods are" }, { "start": 1364.9199999999998, "end": 1366.8799999999999, "text": " very brittle to the size of your state base." }, { "start": 1366.8799999999999, "end": 1367.8799999999999, "text": " Right." }, { "start": 1367.8799999999999, "end": 1371.84, "text": " So you can kind of increase your state space infinitely and it'll make these methods perform" }, { "start": 1371.84, "end": 1377.04, "text": " worse, even if the underlying semantics of your environment haven't changed yet." }, { "start": 1377.04, "end": 1381.9199999999998, "text": " Do you have an idea, do you have a feeling maybe, if this is a property of the world" }, { "start": 1381.9199999999998, "end": 1384.9599999999998, "text": " in general, like let's say I as a human, right?" }, { "start": 1384.96, "end": 1390.52, "text": " I'm put into a small whatever environment or a big environment, would my descriptions" }, { "start": 1390.52, "end": 1393.48, "text": " of language also not grow very much?" }, { "start": 1393.48, "end": 1395.96, "text": " Or is it a property of just game developers?" }, { "start": 1395.96, "end": 1400.8, "text": " You know, I add a few extra rooms, I can reuse these languages, you know, I just kind of" }, { "start": 1400.8, "end": 1406.64, "text": " tile, you know, the other the big games, I mean, the biggest games are procedurally generated" }, { "start": 1406.64, "end": 1410.56, "text": " like Minecraft there, it's really, it's just the same thing over and over." }, { "start": 1410.56, "end": 1416.84, "text": " But even in like the like, these big open world games, like Grand Theft Auto or so," }, { "start": 1416.84, "end": 1422.6799999999998, "text": " the same textures are reused and the same cars and the same NPC characters, right?" 
}, { "start": 1422.6799999999998, "end": 1427.6, "text": " Is this a property of the world or of the video game developers?" }, { "start": 1427.6, "end": 1432.6, "text": " Yeah, so this is a really deep and almost philosophical question." }, { "start": 1432.6, "end": 1438.36, "text": " Yeah, is something that I think about a lot is you can certainly and this is a totally" }, { "start": 1438.36, "end": 1443.76, "text": " valid statement, right, you can say, well, there are a lot of language actions that you" }, { "start": 1443.76, "end": 1447.8799999999999, "text": " can describe in our world and even in the video game world, which just described these" }, { "start": 1447.8799999999999, "end": 1452.26, "text": " like kind of infinitely complex and nested sequences of actions, which have absolutely" }, { "start": 1452.26, "end": 1455.52, "text": " nothing to do with the extrinsic task, right?" }, { "start": 1455.52, "end": 1459.86, "text": " I could tell you to, you know, oh, you know, run at the wall six times do a 360." }, { "start": 1459.86, "end": 1462.28, "text": " And then, you know, continue hitting the wall eight times, right." }, { "start": 1462.28, "end": 1466.4799999999998, "text": " And that's like an incredibly difficult goal, which you can imagine a very structured curriculum" }, { "start": 1466.48, "end": 1470.28, "text": " to get to that point, right, of just like infinitely kind of bumping your head against" }, { "start": 1470.28, "end": 1475.52, "text": " the wall, which satisfies, you know, maybe the difficulty threshold of El Amigo, but" }, { "start": 1475.52, "end": 1478.92, "text": " is absolutely orthogonal to the task that we care about." }, { "start": 1478.92, "end": 1483.56, "text": " And I can imagine that there are settings where the language is kind of useless and" }, { "start": 1483.56, "end": 1487.5, "text": " doesn't end up, you know, giving you any gains in this setting." }, { "start": 1487.5, "end": 1490.6, "text": " And so there's kind of this open question that we haven't really touched on sufficiently" }, { "start": 1490.6, "end": 1495.4, "text": " in this paper, which is how good does the language have to be in order to get this to" }, { "start": 1495.4, "end": 1496.88, "text": " work?" }, { "start": 1496.88, "end": 1501.24, "text": " So as I say, you know, the language is Oracle, it's game developers, but it also is noisy." }, { "start": 1501.24, "end": 1504.8000000000002, "text": " There's a lot of actions like running into walls or trying to throw stones at a minotaur" }, { "start": 1504.8000000000002, "end": 1507.68, "text": " that are ultimately useless in the environment." }, { "start": 1507.68, "end": 1512.3200000000002, "text": " The argument we're making here is that hopefully, you know, the noisiness of language scales" }, { "start": 1512.3200000000002, "end": 1516.48, "text": " a little bit less than the noisiness of your state environment, right." }, { "start": 1516.48, "end": 1520.88, "text": " But there's still a lot of kind of edge cases and kind of unexplored territory here." }, { "start": 1520.88, "end": 1525.2800000000002, "text": " I think more philosophically, if you think about our world and our environment, right," }, { "start": 1525.28, "end": 1530.36, "text": " there are a lot of ways that we can describe actions that are not particularly useful in" }, { "start": 1530.36, "end": 1531.96, "text": " the world that you and I inhabit, right." 
}, { "start": 1531.96, "end": 1537.28, "text": " I mean, I can again tell you to do handstands and hit a wall and, you know, walk around" }, { "start": 1537.28, "end": 1541.68, "text": " and write endless, you know, trivial things in the dust." }, { "start": 1541.68, "end": 1545.92, "text": " But at the same time, there's a lot of our action space in the real world that we simply" }, { "start": 1545.92, "end": 1548.24, "text": " don't have language descriptions for, right." }, { "start": 1548.24, "end": 1553, "text": " So like every single precise movement on my hand and my arm, you know, I could presumably" }, { "start": 1553, "end": 1557.08, "text": " come up with some language to describe, oh, I'm actuating this joint, you know, by 0.03" }, { "start": 1557.08, "end": 1558.08, "text": " degrees." }, { "start": 1558.08, "end": 1560.2, "text": " And there's like, you know, how many joints in my hand, right." }, { "start": 1560.2, "end": 1564.96, "text": " I mean, there's like endless complexity in terms of the possible action space just by" }, { "start": 1564.96, "end": 1569.36, "text": " moving a hand that in language we have absolutely no words for, right." }, { "start": 1569.36, "end": 1571.6, "text": " And so it's really it's a really tough question, right." }, { "start": 1571.6, "end": 1574.92, "text": " Like we have a lot of kind of ways of describing useless actions in the world." }, { "start": 1574.92, "end": 1577.92, "text": " But at the same time, it's very clear that the language that we do use to describe the" }, { "start": 1577.92, "end": 1584.28, "text": " world is operating at a higher level abstraction than perhaps the kinds of actions that RL" }, { "start": 1584.28, "end": 1585.92, "text": " agents have access to, right." }, { "start": 1585.92, "end": 1589.96, "text": " And for example, actuating some sort of limb or something." }, { "start": 1589.96, "end": 1596.24, "text": " You make a you make a good point that in the paper that language is a strong prior over" }, { "start": 1596.24, "end": 1599.52, "text": " what is essentially important to humans, right." }, { "start": 1599.52, "end": 1604.14, "text": " If I can describe something with a short piece of language, like, of course, I can say do" }, { "start": 1604.14, "end": 1607.54, "text": " three backflips and then, you know, do eight of that and so on." }, { "start": 1607.54, "end": 1610.28, "text": " But it's a fairly complex sentence in itself." }, { "start": 1610.28, "end": 1615.56, "text": " If I can describe something with a short piece of language, usually that is something that" }, { "start": 1615.56, "end": 1619.68, "text": " matters to some human somewhere, right." }, { "start": 1619.68, "end": 1622.68, "text": " Otherwise that wouldn't be mapped to a short string." }, { "start": 1622.68, "end": 1624.8799999999999, "text": " But that brings me a bit to a different question." }, { "start": 1624.8799999999999, "end": 1631.44, "text": " And that is the question of isn't isn't the I think in these environments, there's always" }, { "start": 1631.44, "end": 1632.72, "text": " a goal, right." }, { "start": 1632.72, "end": 1636.2, "text": " There is one reward at the end that you need to reach." }, { "start": 1636.2, "end": 1642.1200000000001, "text": " I can imagine, though, that novelty or not novelty in general or how how important a" }, { "start": 1642.1200000000001, "end": 1645.64, "text": " state is, is really dependent on your goal." 
}, { "start": 1645.64, "end": 1651.76, "text": " Whether I circumvent the minotaur at the, you know, below or above that might not be" }, { "start": 1651.76, "end": 1656.1200000000001, "text": " important if I want to reach whatever the goal behind it." }, { "start": 1656.1200000000001, "end": 1659.16, "text": " But it is really important maybe for a different task." }, { "start": 1659.16, "end": 1665.6000000000001, "text": " It's likewise I as a human, whether I move from here to there by walking forward or backward" }, { "start": 1665.6, "end": 1668.24, "text": " doesn't matter if I want to get to the fridge." }, { "start": 1668.24, "end": 1672.7199999999998, "text": " But it matters really if I'm if I'm dancing, right." }, { "start": 1672.7199999999998, "end": 1680.24, "text": " So is that something that like how does that interplay here with these with these language" }, { "start": 1680.24, "end": 1682.1399999999999, "text": " things?" }, { "start": 1682.1399999999999, "end": 1689.28, "text": " What do you do when a language it almost like needs to incorporate a piece of the goal that" }, { "start": 1689.28, "end": 1694.1999999999998, "text": " you want to reach in order to be useful or not?" }, { "start": 1694.2, "end": 1699.8, "text": " Yeah, so I think thinking about or trying to filter the language descriptions that you" }, { "start": 1699.8, "end": 1705.64, "text": " have to language that is relevant for your task is going to be important if we scale" }, { "start": 1705.64, "end": 1710.24, "text": " this up to environments where it's clear that using unfiltered language is not helping." }, { "start": 1710.24, "end": 1711.24, "text": " Right." }, { "start": 1711.24, "end": 1714.88, "text": " And again, as I mentioned, the robustness of these kinds of exploration methods to the" }, { "start": 1714.88, "end": 1720.1000000000001, "text": " noisiness or relevance of your language signal is still an open question." }, { "start": 1720.1, "end": 1725.08, "text": " If we do have task descriptions, so we have extrinsic task descriptions like your job" }, { "start": 1725.08, "end": 1730.28, "text": " is to defeat the Minotaur, then it's really intuitive that we should be able to use that" }, { "start": 1730.28, "end": 1735.36, "text": " as a signal for kind of waiting how relevant a sub goal or language description that we" }, { "start": 1735.36, "end": 1739.36, "text": " encounter waiting how useful that is for the extrinsic task." }, { "start": 1739.36, "end": 1740.36, "text": " Right." }, { "start": 1740.36, "end": 1744.9599999999998, "text": " So if the extrinsic goal is combat, then we should be prioritizing combat related messages." }, { "start": 1744.96, "end": 1751.16, "text": " If the extrinsic goal is buying something, then we should promote acquiring money and" }, { "start": 1751.16, "end": 1752.52, "text": " things like that." }, { "start": 1752.52, "end": 1755.96, "text": " And so that's something that I think is a kind of natural extension of this is you extend" }, { "start": 1755.96, "end": 1760.3600000000001, "text": " this to a multitask setting where you have task descriptions and the task descriptions" }, { "start": 1760.3600000000001, "end": 1765.24, "text": " ought to kind of heavily filter what sub goals should be relevant for the task." }, { "start": 1765.24, "end": 1769.8400000000001, "text": " I think when you include task descriptions, there are some more comparisons to related" }, { "start": 1769.8400000000001, "end": 1770.8400000000001, "text": " work." 
}, { "start": 1770.84, "end": 1775.24, "text": " There's some related work, which you mentioned the paper where let's imagine you're doing" }, { "start": 1775.24, "end": 1777.52, "text": " basically hierarchical reinforcement learning." }, { "start": 1777.52, "end": 1781.8, "text": " So you have some extrinsic goal and then you want to explicitly decompose the extrinsic" }, { "start": 1781.8, "end": 1784.48, "text": " goal into sub goals that you want to complete in order." }, { "start": 1784.48, "end": 1785.48, "text": " Right." }, { "start": 1785.48, "end": 1789.4399999999998, "text": " And that's those are certainly kind of relevant methods to look at when you start thinking" }, { "start": 1789.4399999999998, "end": 1792.76, "text": " about multitask or goal condition settings." }, { "start": 1792.76, "end": 1797.72, "text": " But this is kind of a slightly different focus where we're not trying to identify sub goals" }, { "start": 1797.72, "end": 1801.28, "text": " that need to be completed on the way to some extrinsic goal." }, { "start": 1801.28, "end": 1805, "text": " There's still kind of this exploration component, which is a bit of a different use of language" }, { "start": 1805, "end": 1807.4, "text": " than this kind of hierarchical stuff." }, { "start": 1807.4, "end": 1811.24, "text": " But certainly I would say that there are people who have looked at kind of language conditioned" }, { "start": 1811.24, "end": 1818.24, "text": " RL and hierarchical RL that think a lot and very deeply about this problem of proposing" }, { "start": 1818.24, "end": 1823.3600000000001, "text": " sub goals that are relevant for the extrinsic goal, assuming you have some structured description" }, { "start": 1823.3600000000001, "end": 1825.48, "text": " of what the extrinsic goal is." }, { "start": 1825.48, "end": 1830.88, "text": " Although I can imagine you run into sort of the, let's say the more abstract problem of" }, { "start": 1830.88, "end": 1835.4, "text": " the exploration problem is that, you know, without an outside signal, I don't really" }, { "start": 1835.4, "end": 1836.4, "text": " know what to do." }, { "start": 1836.4, "end": 1839.52, "text": " And there is no clear, let's say gradient towards the goal." }, { "start": 1839.52, "end": 1840.52, "text": " Right." }, { "start": 1840.52, "end": 1843.48, "text": " Otherwise, the exploration problem in RL would be relatively easy." }, { "start": 1843.48, "end": 1848.3600000000001, "text": " Now when we say, well, we'll just filter out all the messages that don't have anything" }, { "start": 1848.3600000000001, "end": 1850.48, "text": " to do with our combat goal." }, { "start": 1850.48, "end": 1851.48, "text": " Right." }, { "start": 1851.48, "end": 1855.8, "text": " So it is like we could run into the exact same thing again, where, you know, maybe in" }, { "start": 1855.8, "end": 1860.64, "text": " order to acquire a weapon, I first need money, right?" }, { "start": 1860.64, "end": 1863.56, "text": " That doesn't, that's not directly related to my combat goal." }, { "start": 1863.56, "end": 1870.04, "text": " So there is like another exploration problem again, on top of the thing we introduce." }, { "start": 1870.04, "end": 1875.2, "text": " I guess maybe we can hope that if we introduce enough levels, the highest level of abstraction" }, { "start": 1875.2, "end": 1880.72, "text": " will have a small number of states so that, you know, random exploration works." 
}, { "start": 1880.72, "end": 1884.6000000000001, "text": " But it's kind of funny that the problems repeat or replicate." }, { "start": 1884.6000000000001, "end": 1885.6000000000001, "text": " Yeah." }, { "start": 1885.6000000000001, "end": 1886.6000000000001, "text": " Yeah." }, { "start": 1886.6000000000001, "end": 1887.6000000000001, "text": " It's really tricky." }, { "start": 1887.6000000000001, "end": 1891.4, "text": " And that's essentially just kind of a deeper or more nested failure case of not knowing" }, { "start": 1891.4, "end": 1893.96, "text": " what's novel and not knowing what's relevant for your goal." }, { "start": 1893.96, "end": 1894.96, "text": " Right." }, { "start": 1894.96, "end": 1898.4, "text": " So if you're prioritizing words that have combat in them because your extrinsic goal" }, { "start": 1898.4, "end": 1904.64, "text": " is combat, but you first need to buy something, then your, your, your semantics, you know," }, { "start": 1904.64, "end": 1907.28, "text": " your measure of novelty or relevance is just not good enough." }, { "start": 1907.28, "end": 1908.28, "text": " Right." }, { "start": 1908.28, "end": 1913.24, "text": " So that's going to just be a fundamental problem in exploration is how do we know whether it's" }, { "start": 1913.24, "end": 1917.44, "text": " states or language, you know, how do we know when a state is relevant for the ultimate" }, { "start": 1917.44, "end": 1918.44, "text": " task?" }, { "start": 1918.44, "end": 1919.44, "text": " Yeah." }, { "start": 1919.44, "end": 1921.52, "text": " And I guess humans aren't very much different, right?" }, { "start": 1921.52, "end": 1924.08, "text": " I mean, science is a really hard process." }, { "start": 1924.08, "end": 1930.08, "text": " It's not, you know, that exploration takes millions of humans and hundreds of years." }, { "start": 1930.08, "end": 1936.24, "text": " So we can't fault our RL agents here for not, not doing that great of a job." }, { "start": 1936.24, "end": 1941.44, "text": " Here, I found these plots to be really cool, like the analysis, sort of the evolution of" }, { "start": 1941.44, "end": 1943.36, "text": " what the teachers propose." }, { "start": 1943.36, "end": 1948.32, "text": " And of course, these being language, it's quite insightful and understandable what's" }, { "start": 1948.32, "end": 1950.4, "text": " happening in the algorithm." }, { "start": 1950.4, "end": 1956.36, "text": " My, my surprise was a little bit, aren't these things kind of subject to like catastrophic" }, { "start": 1956.36, "end": 1958.08, "text": " forgetting or things like this?" }, { "start": 1958.08, "end": 1959.4, "text": " I can imagine, right?" }, { "start": 1959.4, "end": 1964.56, "text": " If I train these things online and they're at some difficulty level, all of a sudden" }, { "start": 1964.56, "end": 1967.96, "text": " they forget that reaching the red door is kind of really easy." }, { "start": 1967.96, "end": 1973.48, "text": " Or so is that have you ever thought is that a problem?" }, { "start": 1973.48, "end": 1975.24, "text": " Or was that ever a problem?" }, { "start": 1975.24, "end": 1976.24, "text": " Did you encounter that?" }, { "start": 1976.24, "end": 1978.76, "text": " Or why don't we encounter that?" }, { "start": 1978.76, "end": 1979.76, "text": " Yeah." }, { "start": 1979.76, "end": 1984.08, "text": " So I expect that that is a problem that happens in these agents." 
}, { "start": 1984.08, "end": 1987.6799999999998, "text": " I don't think we really precisely tried to measure whether or not catastrophic forgetting" }, { "start": 1987.6799999999998, "end": 1989.56, "text": " is a problem." }, { "start": 1989.56, "end": 1996.8, "text": " I think the fact is that we evaluate in environments where we are not testing the agents kind of" }, { "start": 1996.8, "end": 2002.6, "text": " continuously for mastery of all of the skills that it has learned in its curriculum proposed" }, { "start": 2002.6, "end": 2003.72, "text": " by the teacher." }, { "start": 2003.72, "end": 2006.9199999999998, "text": " And so this problem of, oh, you know, you forgot how to specifically open a specific" }, { "start": 2006.9199999999998, "end": 2011.06, "text": " color door is not an issue as long as the student is still quite good at completing" }, { "start": 2011.06, "end": 2015.48, "text": " whatever goals it needs to complete to try to achieve the extrinsic goal that is currently" }, { "start": 2015.48, "end": 2016.48, "text": " being set by the teacher." }, { "start": 2016.48, "end": 2017.48, "text": " Right." }, { "start": 2017.48, "end": 2020.3600000000001, "text": " So if you forget things that are at the very beginning of training, that's not a big deal." }, { "start": 2020.3600000000001, "end": 2024.14, "text": " So long as whatever path that the teacher is leading you on is something that will eventually" }, { "start": 2024.14, "end": 2026.52, "text": " get you to the extrinsic goal that we care about." }, { "start": 2026.52, "end": 2029.44, "text": " And I think that happens to be the case in these environments because there was only" }, { "start": 2029.44, "end": 2033.78, "text": " one extrinsic goal and because we're not testing it to master every single skill from kind" }, { "start": 2033.78, "end": 2036.44, "text": " of low level to high level abstractions." }, { "start": 2036.44, "end": 2042.04, "text": " But if we were in a setting where being able to complete those lower level goals kind of," }, { "start": 2042.04, "end": 2046.72, "text": " you know, on a dime and kind of, you know, switch kind of do context switching like that," }, { "start": 2046.72, "end": 2050.36, "text": " if that were more important, then we would have to deal with this problem of catastrophic" }, { "start": 2050.36, "end": 2051.36, "text": " forgetting." }, { "start": 2051.36, "end": 2052.36, "text": " Right." }, { "start": 2052.36, "end": 2057, "text": " An important point here is that we really don't care about how well the student is able" }, { "start": 2057, "end": 2059.8, "text": " to follow instructions proposed by the teacher." }, { "start": 2059.8, "end": 2065.56, "text": " That's, I mean, we hope the goal is that that property emerges such that we can complete" }, { "start": 2065.56, "end": 2066.56, "text": " the extrinsic goal." }, { "start": 2066.56, "end": 2067.56, "text": " Right." }, { "start": 2067.56, "end": 2069.68, "text": " But we're never actually trying to learn a student that can follow instructions." }, { "start": 2069.68, "end": 2076.52, "text": " We never really evaluated exclusively in an instruction following setting." }, { "start": 2076.52, "end": 2081.56, "text": " Because if we think ahead a little bit, and I'm going to want to just scroll down to the" }, { "start": 2081.56, "end": 2088.4, "text": " environments just because, yeah, maybe this this will inspire us a little bit." 
}, { "start": 2088.4, "end": 2093.96, "text": " If we think ahead a little bit beyond this work, here you have this very, this Oracle" }, { "start": 2093.96, "end": 2095.72, "text": " language descriptor." }, { "start": 2095.72, "end": 2100.64, "text": " And you say also in the outlook of future work that that is something obviously that" }, { "start": 2100.64, "end": 2104.78, "text": " we're trying to get rid of because not every environment, like the fewest of environments" }, { "start": 2104.78, "end": 2109.6400000000003, "text": " actually have such a built in language description or easily accessible one." }, { "start": 2109.6400000000003, "end": 2113.1600000000003, "text": " So we might have to regress to something else." }, { "start": 2113.1600000000003, "end": 2119.88, "text": " So I want to I want to think about three different external models that we could bring in." }, { "start": 2119.88, "end": 2123.96, "text": " And I wonder what you think of each of them, like how these could fit in." }, { "start": 2123.96, "end": 2128.2400000000002, "text": " The first would be something like GPT-3, like just a pure language model." }, { "start": 2128.2400000000002, "end": 2131.26, "text": " How could that help us?" }, { "start": 2131.26, "end": 2135.44, "text": " Maybe in combination with these things, because we need some starting point, right?" }, { "start": 2135.44, "end": 2139.84, "text": " But how could a pre-trained language model that knows something about the world help" }, { "start": 2139.84, "end": 2140.84, "text": " us?" }, { "start": 2140.84, "end": 2145.6800000000003, "text": " Then something like CLIP, maybe something that can take an image and language and say" }, { "start": 2145.6800000000003, "end": 2148.7200000000003, "text": " whether they're good together or not." }, { "start": 2148.7200000000003, "end": 2152.6000000000004, "text": " And then maybe even something like or maybe a captioning model." }, { "start": 2152.6000000000004, "end": 2154.0800000000004, "text": " Right." }, { "start": 2154.0800000000004, "end": 2159.6000000000004, "text": " And maybe something like DALEE, like something that takes language and generates." }, { "start": 2159.6, "end": 2166.96, "text": " Is there in this cloud of models, what possibilities do we have to bring in sort of to replace" }, { "start": 2166.96, "end": 2170.68, "text": " this Oracle thing with with learned systems?" }, { "start": 2170.68, "end": 2173.2, "text": " It doesn't even need to be learned online, right?" }, { "start": 2173.2, "end": 2174.3199999999997, "text": " It can be pre-trained." }, { "start": 2174.3199999999997, "end": 2177.7599999999998, "text": " I'm probably much more excited about that." }, { "start": 2177.7599999999998, "end": 2178.7599999999998, "text": " Yeah." }, { "start": 2178.7599999999998, "end": 2182.92, "text": " Yeah, these are, I think, going to be the most fun questions to look at in kind of language" }, { "start": 2182.92, "end": 2187.36, "text": " conditions are all going forward is taking the boom in pre-trained models in large language" }, { "start": 2187.36, "end": 2193.32, "text": " models and resulting, you know, bringing these into concrete and actionable gains in reinforcement" }, { "start": 2193.32, "end": 2195.1600000000003, "text": " learning." 
}, { "start": 2195.1600000000003, "end": 2200.92, "text": " It's funny that you mentioned this kind of what I described as almost a gradation from" }, { "start": 2200.92, "end": 2205.6800000000003, "text": " ungrounded language models like GPT-3, right, which are trained on text only corpora and" }, { "start": 2205.6800000000003, "end": 2210.2400000000002, "text": " whether those can actually help in these environments, which I would call are fundamentally grounded," }, { "start": 2210.2400000000002, "end": 2211.2400000000002, "text": " right?" }, { "start": 2211.2400000000002, "end": 2215.1600000000003, "text": " They're grounded in some some visual or perceptual world." }, { "start": 2215.16, "end": 2219.3199999999997, "text": " And ungrounded language models still result in gains in these settings." }, { "start": 2219.3199999999997, "end": 2224.2, "text": " And my intuition is, yeah, they probably still can because, you know, even if you don't exactly" }, { "start": 2224.2, "end": 2228.24, "text": " know what it means to acquire a wand or kill a minotaur in some environment because you" }, { "start": 2228.24, "end": 2233.92, "text": " don't know what a minotaur looks like or what a wand looks like, GPT, as I mentioned, you" }, { "start": 2233.92, "end": 2235.3999999999996, "text": " know, this idea of priors, right?" }, { "start": 2235.3999999999996, "end": 2239.56, "text": " GPT has strong priors on sensible sequences of actions, right?" }, { "start": 2239.56, "end": 2246.2799999999997, "text": " So insofar as these environments are testing kind of sequences of actions that humans kind" }, { "start": 2246.2799999999997, "end": 2250.6, "text": " of have an intuition for, you know, it's some fantasy world, but we have some intuition," }, { "start": 2250.6, "end": 2253.44, "text": " oh, in order to defeat the minotaur, we need to get a weapon first." }, { "start": 2253.44, "end": 2255.14, "text": " We probably look around for a weapon." }, { "start": 2255.14, "end": 2256.14, "text": " Maybe there's a shop." }, { "start": 2256.14, "end": 2258.16, "text": " Maybe we can buy a weapon from the shop, right?" }, { "start": 2258.16, "end": 2262.2799999999997, "text": " Video games are testing knowledge that we have very like deep seated commonsense knowledge" }, { "start": 2262.2799999999997, "end": 2265.72, "text": " that we have that hopefully generalizes to these fantasy worlds." }, { "start": 2265.72, "end": 2268.92, "text": " And GPT certainly contains a lot of that information, right?" }, { "start": 2268.92, "end": 2274.44, "text": " So you might imagine we should reward or filter the kinds of descriptions that we see to those" }, { "start": 2274.44, "end": 2278.16, "text": " that seem sensible narratives that GPT-3 would generate, right?" }, { "start": 2278.16, "end": 2283.52, "text": " So a sensible sequence of actions along the way to defeating the minotaur is collecting" }, { "start": 2283.52, "end": 2286.52, "text": " a wand and buying it and things like that." }, { "start": 2286.52, "end": 2291.28, "text": " And I think you actually already see some examples of this happening in more goal conditioned" }, { "start": 2291.28, "end": 2292.48, "text": " or instruction following RL." }, { "start": 2292.48, "end": 2297.16, "text": " So there's been some recent work from, I know, teams at Berkeley, maybe Google as well, that" }, { "start": 2297.16, "end": 2301.2799999999997, "text": " are looking at using pre-trained language models, which are not necessarily even grounded." 
}, { "start": 2301.2799999999997, "end": 2307.24, "text": " They're just, you know, GPT-3, using them to construct sensible plans, action plans" }, { "start": 2307.24, "end": 2309.96, "text": " or sub goals for completing certain actions." }, { "start": 2309.96, "end": 2315.2, "text": " So in some home environment, for example, maybe my action is get a cup of coffee." }, { "start": 2315.2, "end": 2318.68, "text": " And then the goal of GPT is even though I don't really know what my environment looks" }, { "start": 2318.68, "end": 2322.68, "text": " like, I don't know what kitchen you're in, I know that sensibly this should include finding" }, { "start": 2322.68, "end": 2325.64, "text": " a mug and then heating up the kettle and things like that." }, { "start": 2325.64, "end": 2330.2799999999997, "text": " And so we already see some promising use of kind of ungrounded models for improving grounded" }, { "start": 2330.2799999999997, "end": 2331.2799999999997, "text": " decision making settings." }, { "start": 2331.2799999999997, "end": 2333.7999999999997, "text": " Yeah, did you want to comment on that?" }, { "start": 2333.7999999999997, "end": 2334.7999999999997, "text": " Or I can also-" }, { "start": 2334.7999999999997, "end": 2335.8799999999997, "text": " No, no, that's cool." }, { "start": 2335.8799999999997, "end": 2343.7999999999997, "text": " I think, yeah, I think I've even had at least one of these works here on the channel in" }, { "start": 2343.7999999999997, "end": 2345.44, "text": " this home environment." }, { "start": 2345.44, "end": 2348.68, "text": " That's exactly, I was also really cool to see." }, { "start": 2348.68, "end": 2352.7599999999998, "text": " Obviously, these models know a lot about the world, right?" }, { "start": 2352.76, "end": 2359.5600000000004, "text": " And I think people overestimate how or underestimate maybe, well, whatever." }, { "start": 2359.5600000000004, "end": 2364.6800000000003, "text": " That the thing, if we humans look at a board like this, like at a mini hack board, we see" }, { "start": 2364.6800000000003, "end": 2365.84, "text": " a map, right?" }, { "start": 2365.84, "end": 2371.5600000000004, "text": " We see paths to walk on and stuff like this, even if we've never played a video game." }, { "start": 2371.5600000000004, "end": 2375.1400000000003, "text": " But this is, these are such strong priors built into us." }, { "start": 2375.1400000000003, "end": 2380.1200000000003, "text": " And we sometimes think like, why can't that dumb computer just like walk around the wall," }, { "start": 2380.1200000000003, "end": 2381.1200000000003, "text": " right?" }, { "start": 2381.12, "end": 2383.92, "text": " And we're like, what's up?" }, { "start": 2383.92, "end": 2388.6, "text": " And I think these large models are a way we can really get that knowledge from the human" }, { "start": 2388.6, "end": 2390.52, "text": " world into this world." }, { "start": 2390.52, "end": 2394.3199999999997, "text": " So yeah, I think that's, it's a great outlook." }, { "start": 2394.3199999999997, "end": 2402.8399999999997, "text": " Also with the models that combine images and text, I feel that could be really like adding" }, { "start": 2402.8399999999997, "end": 2405.68, "text": " a lot of value to the RL world." }, { "start": 2405.68, "end": 2411.24, "text": " At least the RL environments that are like human environments." 
}, { "start": 2411.24, "end": 2417.3999999999996, "text": " Of course, there's reinforcement learning for computer chip design, and things like" }, { "start": 2417.3999999999996, "end": 2418.3999999999996, "text": " this." }, { "start": 2418.3999999999996, "end": 2422.6, "text": " I don't think those are necessarily going to be profiting that much from it." }, { "start": 2422.6, "end": 2429.22, "text": " But yeah, yeah, really cool is so you're you're at Stanford?" }, { "start": 2429.22, "end": 2431.58, "text": " Or did you do the work at Stanford?" }, { "start": 2431.58, "end": 2433.08, "text": " Or were you at some internship?" }, { "start": 2433.08, "end": 2436.7799999999997, "text": " Yeah, I did it while I had an internship last fall." }, { "start": 2436.7799999999997, "end": 2437.7799999999997, "text": " So this is fall 2021." }, { "start": 2437.7799999999997, "end": 2441.04, "text": " Okay, continue to work a little bit while at Stanford." }, { "start": 2441.04, "end": 2448.2, "text": " But it was mostly in collaboration with some people at fair or meta, I guess now in London." }, { "start": 2448.2, "end": 2452, "text": " Reinforcement learning is notoriously also kind of hardware intensive." }, { "start": 2452, "end": 2456.56, "text": " Although this work right here seems like maybe not that much because you describe a little" }, { "start": 2456.56, "end": 2462.3199999999997, "text": " bit sort of what what it takes to investigate a project like this." }, { "start": 2462.32, "end": 2467.28, "text": " Yeah, unfortunately, I think even for these environments, it's fairly hardware intensive," }, { "start": 2467.28, "end": 2473.4, "text": " certainly still feasible, I think, on let's say, a more academically sized compute budget." }, { "start": 2473.4, "end": 2479.36, "text": " But for being able to run the experimentation needed to iterate quickly, you know, you do" }, { "start": 2479.36, "end": 2483.36, "text": " really definitely benefit from kind of industry level scale, which is one of the unfortunate" }, { "start": 2483.36, "end": 2487.6400000000003, "text": " things about this kind of research is that it is a little bit less accessible to people" }, { "start": 2487.6400000000003, "end": 2490.48, "text": " in smaller compute settings." }, { "start": 2490.48, "end": 2495.36, "text": " So maybe the typical kind of RL environments you think of our compute heavy are the ones" }, { "start": 2495.36, "end": 2501.08, "text": " that are in 3D simulation, you know, very, you know, need physics, need soft joint contact" }, { "start": 2501.08, "end": 2502.84, "text": " and all of these things to model." }, { "start": 2502.84, "end": 2504.44, "text": " And those are really expensive." }, { "start": 2504.44, "end": 2508.36, "text": " I think compared to that, these are kind of more symbolic grid worlds." }, { "start": 2508.36, "end": 2512.6, "text": " You know, the whole point as to why mini hack or net hack was chosen as a reinforcement" }, { "start": 2512.6, "end": 2516.92, "text": " learning test bed was because the code base is, you know, written entirely in C and is" }, { "start": 2516.92, "end": 2522.48, "text": " very optimized, and so you can run simulations very quickly on modern hardware." }, { "start": 2522.48, "end": 2526.04, "text": " But that being said, it's still relatively compute expensive." 
}, { "start": 2526.04, "end": 2531.56, "text": " Again, the just amount of experience needed by state of the art, deep RL methods, even" }, { "start": 2531.56, "end": 2536.28, "text": " with extrinsic or intrinsic exploration bonuses is still very expensive, right?" }, { "start": 2536.28, "end": 2540.96, "text": " So for example, one of these runs, we would typically have, let's say, 40 CPU actors collecting" }, { "start": 2540.96, "end": 2545.8, "text": " experience at the same time in parallel, and then kind of one or two GPU learner threads" }, { "start": 2545.8, "end": 2549.2400000000002, "text": " in the background kind of updating from this experience." }, { "start": 2549.2400000000002, "end": 2554.6000000000004, "text": " So even just a single, you know, computational experiment here needs non trivial hardware" }, { "start": 2554.6000000000004, "end": 2555.6000000000004, "text": " for sure." }, { "start": 2555.6000000000004, "end": 2556.6000000000004, "text": " Yeah." }, { "start": 2556.6000000000004, "end": 2558.96, "text": " And, and you ideally you want to do that in parallel, right?" }, { "start": 2558.96, "end": 2563.4, "text": " Because you want to try out a bunch of things are repeated a bunch of times because one" }, { "start": 2563.4, "end": 2567.44, "text": " experiment really tells you almost nothing, right?" }, { "start": 2567.44, "end": 2569.2000000000003, "text": " Unless it succeeds, right?" }, { "start": 2569.2000000000003, "end": 2570.44, "text": " If it succeeds, it's good." }, { "start": 2570.44, "end": 2574.04, "text": " But if it fails, you never know if you repeat it a bunch of times." }, { "start": 2574.04, "end": 2579.16, "text": " Yeah, but I mean, it's still it's not it's not the most extreme thing, right?" }, { "start": 2579.16, "end": 2583.7599999999998, "text": " Like two GPUs or so and a bunch of CPUs." }, { "start": 2583.7599999999998, "end": 2587.6, "text": " As you say, that can that's still academically doable, which I find cool." }, { "start": 2587.6, "end": 2593.56, "text": " Could you maybe tell us a bit about the process of researching of researching this?" }, { "start": 2593.56, "end": 2596.68, "text": " Like, did everything work out as planned from the beginning?" }, { "start": 2596.68, "end": 2600.48, "text": " Or where was your starting point?" }, { "start": 2600.48, "end": 2605.04, "text": " And what changed about your plan during the research, like maybe something didn't work" }, { "start": 2605.04, "end": 2606.04, "text": " out or so?" }, { "start": 2606.04, "end": 2607.04, "text": " Yeah." }, { "start": 2607.04, "end": 2611.76, "text": " Yeah, I feel I don't I feel it's always good for people to hear that other people encounter" }, { "start": 2611.76, "end": 2614.44, "text": " problems and how they get around problems." }, { "start": 2614.44, "end": 2615.44, "text": " Yeah." }, { "start": 2615.44, "end": 2616.44, "text": " Yeah." }, { "start": 2616.44, "end": 2620.2, "text": " So yeah, it's a great question." }, { "start": 2620.2, "end": 2627.3, "text": " The intuition that I think me and my collaborators started with was, you know, fairly sensible." }, { "start": 2627.3, "end": 2631.88, "text": " It's language is clearly going to help in these environments." }, { "start": 2631.88, "end": 2634.2400000000002, "text": " You know, it has some nice parallels to human exploration." }, { "start": 2634.2400000000002, "end": 2638.96, "text": " And so let's just see whether or not language will work in these environments." 
}, { "start": 2638.96, "end": 2643.0800000000004, "text": " What's funny, though, is that we actually started out the project less about the more" }, { "start": 2643.0800000000004, "end": 2647.88, "text": " abstract question of like, does language help exploration and more a very concrete question" }, { "start": 2647.88, "end": 2650.6000000000004, "text": " of how do we improve upon Amigo?" }, { "start": 2650.6000000000004, "end": 2655.2400000000002, "text": " So how do we improve upon an existing state of the art algorithm for exploration?" }, { "start": 2655.24, "end": 2658.08, "text": " Let's propose something that we argue is better than everything." }, { "start": 2658.08, "end": 2662.04, "text": " It's like we're going to propose a state of the art exploration method called El Amigo," }, { "start": 2662.04, "end": 2665.16, "text": " which will get 100 percent accuracy in all these environments." }, { "start": 2665.16, "end": 2667.2, "text": " And none of the existing methods will work." }, { "start": 2667.2, "end": 2668.2, "text": " Right." }, { "start": 2668.2, "end": 2670.2, "text": " That's that's kind of the narrative that you set up for yourself when you're starting research" }, { "start": 2670.2, "end": 2673.9199999999996, "text": " is I'm going to build something that's new and that's the best." }, { "start": 2673.9199999999996, "end": 2674.9199999999996, "text": " Right." }, { "start": 2674.9199999999996, "end": 2678.8399999999997, "text": " However, I think the focus of this paper and the story has shifted considerably." }, { "start": 2678.8399999999997, "end": 2680.6, "text": " I think it's shifted for the better, actually." }, { "start": 2680.6, "end": 2685.92, "text": " And part of this shift happened because we implemented El Amigo and it was working fine" }, { "start": 2685.92, "end": 2687.2799999999997, "text": " and it worked better than Amigo." }, { "start": 2687.2799999999997, "end": 2688.68, "text": " So we were quite excited." }, { "start": 2688.68, "end": 2691.3199999999997, "text": " But at the same time, the field is moving so fast." }, { "start": 2691.3199999999997, "end": 2697.68, "text": " And at NeurIPS last year, some researchers came out with this method called novelty and" }, { "start": 2697.68, "end": 2701.24, "text": " we ran novelty and novelty also did really well." }, { "start": 2701.24, "end": 2704.64, "text": " And you know, in some environments, it totally like blew Amigo out of the water." }, { "start": 2704.64, "end": 2705.64, "text": " Right." }, { "start": 2705.64, "end": 2706.64, "text": " And El Amigo." }, { "start": 2706.64, "end": 2711.4, "text": " And part of our thinking was, well, OK, now we can't really say, oh, we have El Amigo" }, { "start": 2711.4, "end": 2713.08, "text": " and it's the best model." }, { "start": 2713.08, "end": 2714.08, "text": " It's the best environment." }, { "start": 2714.08, "end": 2717.08, "text": " And you should only use this." }, { "start": 2717.08, "end": 2719.16, "text": " And at first I thought, you know, this is derailing our narrative." }, { "start": 2719.16, "end": 2720.16, "text": " Right." }, { "start": 2720.16, "end": 2721.16, "text": " We're not proposing anything new." }, { "start": 2721.16, "end": 2722.16, "text": " We're not proposing anything state of the art." }, { "start": 2722.16, "end": 2723.8399999999997, "text": " So what's the point?" 
}, { "start": 2723.8399999999997, "end": 2727.52, "text": " But I think after some kind of juggling and shuffling, we realized that what we're really" }, { "start": 2727.52, "end": 2731.94, "text": " interested in is the scientific question of does language help exploration?" }, { "start": 2731.94, "end": 2735.08, "text": " So take existing method X and then do X plus language." }, { "start": 2735.08, "end": 2736.08, "text": " Right." }, { "start": 2736.08, "end": 2740.2799999999997, "text": " And so this question can be answered kind of agnostic to the specific method that we" }, { "start": 2740.2799999999997, "end": 2741.2799999999997, "text": " actually use." }, { "start": 2741.2799999999997, "end": 2742.2799999999997, "text": " Right." }, { "start": 2742.2799999999997, "end": 2746.12, "text": " And so it was that juncture where we actually decided, OK, let's actually look at novelty" }, { "start": 2746.12, "end": 2748.94, "text": " closely and let's imagine adding language to novelty as well." }, { "start": 2748.94, "end": 2750.68, "text": " And do we see the same kind of results?" }, { "start": 2750.68, "end": 2751.68, "text": " Right." }, { "start": 2751.68, "end": 2757.54, "text": " And so I think this is kind of an outcome of the paper that was kind of on the fly changed." }, { "start": 2757.54, "end": 2761.92, "text": " But I'm very happy with which is that we're not trying to claim that we have a method" }, { "start": 2761.92, "end": 2766.7200000000003, "text": " that is state of the art or that is best or that anyone should be using our method." }, { "start": 2766.7200000000003, "end": 2769.32, "text": " We are very agnostic to the particular choice of method." }, { "start": 2769.32, "end": 2770.32, "text": " Right." }, { "start": 2770.32, "end": 2774.92, "text": " We're trying to answer kind of a more abstract question, which is when does language help" }, { "start": 2774.92, "end": 2775.92, "text": " exploration?" }, { "start": 2775.92, "end": 2778.8, "text": " And I think this is a little bit more egalitarian." }, { "start": 2778.8, "end": 2780.84, "text": " We're not saying that our method is better than anyone else's." }, { "start": 2780.84, "end": 2785.6, "text": " And we also don't have to exhaustively compare to like a lot of existing work." }, { "start": 2785.6, "end": 2789, "text": " We're just saying that if you take whatever method that we have and you add language," }, { "start": 2789, "end": 2792.36, "text": " you do better and here are two examples where that happens." }, { "start": 2792.36, "end": 2793.36, "text": " Cool." }, { "start": 2793.36, "end": 2799.88, "text": " And it is a good way to preempt some reviewers from saying that you didn't train on ImageNet" }, { "start": 2799.88, "end": 2801.4, "text": " and that's bad." }, { "start": 2801.4, "end": 2802.96, "text": " Yeah." }, { "start": 2802.96, "end": 2807.68, "text": " Is there anything else that you want to get out to viewers?" }, { "start": 2807.68, "end": 2813.52, "text": " Maybe a way they can get started if that's possible or anything that you'd like them" }, { "start": 2813.52, "end": 2816.52, "text": " to know?" 
}, { "start": 2816.52, "end": 2827.6, "text": " Yeah, I think that we've discussed a lot about these kind of higher level ideas of one holy" }, { "start": 2827.6, "end": 2832.52, "text": " grail is that we have clip generating descriptions or open GPT-3 and then we're evaluating in" }, { "start": 2832.52, "end": 2837.16, "text": " these really high dimensional spaces with actual motor joints and we're going to show" }, { "start": 2837.16, "end": 2845.78, "text": " how language helps in these like mojoco style, like really deep RL, realistic environments" }, { "start": 2845.78, "end": 2847.6800000000003, "text": " and maybe you can transfer to the real world." }, { "start": 2847.6800000000003, "end": 2851.88, "text": " I think that's the broad vision but I think it is still very far away." }, { "start": 2851.88, "end": 2856.88, "text": " I think we even in this paper abstracted away a lot of difficulty of the problem." }, { "start": 2856.88, "end": 2858.96, "text": " We're assuming that we have Oracle language annotations." }, { "start": 2858.96, "end": 2863.5600000000004, "text": " We're only looking at these kind of symbolic grid worlds and although it's tempting to" }, { "start": 2863.5600000000004, "end": 2868.2000000000003, "text": " dive in and say, okay, now let's kind of straightforwardly let's extend this to a real world environment" }, { "start": 2868.2000000000003, "end": 2872.7200000000003, "text": " where I have to actually move my coffee mug to make coffee and tea, I think we're still" }, { "start": 2872.72, "end": 2879.56, "text": " quite far away from that broad vision of kind of household enabled robots in RL and is probably" }, { "start": 2879.56, "end": 2882.9199999999996, "text": " not the most I think like beginner friendly way of starting." }, { "start": 2882.9199999999996, "end": 2887.24, "text": " There's just so many deep problems that need to be solved jointly from perception to action" }, { "start": 2887.24, "end": 2892.7999999999997, "text": " to planning and before we even consider how we better incorporate language into the mix." }, { "start": 2892.7999999999997, "end": 2897.24, "text": " And so I think the way to build upon this work is just these kind of very small progressive" }, { "start": 2897.24, "end": 2900.56, "text": " relaxations of the assumptions that I and many of the other people who have worked in" }, { "start": 2900.56, "end": 2901.56, "text": " this space have." }, { "start": 2901.56, "end": 2905.16, "text": " Right. So again, let's imagine let's just imagine we get rid of the Oracle language" }, { "start": 2905.16, "end": 2909.72, "text": " annotator and we train a model to emit states for these simple environments." }, { "start": 2909.72, "end": 2913.04, "text": " You know, we didn't really explore that, but that's a very sensible way to extend this" }, { "start": 2913.04, "end": 2916.44, "text": " kind of work while keeping the environment and the models fixed." }, { "start": 2916.44, "end": 2917.44, "text": " Right." }, { "start": 2917.44, "end": 2921.68, "text": " So this goes back to the very beginning when you mentioned the kind of way in which we" }, { "start": 2921.68, "end": 2925.48, "text": " approach this paper was to keep everything fixed and then just look at this kind of very" }, { "start": 2925.48, "end": 2929.64, "text": " small change and see how that results in different performance in our environment." }, { "start": 2929.64, "end": 2931.6, "text": " I think that's really just kind of the way to go." 
}, { "start": 2931.6, "end": 2932.6, "text": " It's very slow." }, { "start": 2932.6, "end": 2935.72, "text": " It's very incremental work, but hopefully it's getting us more towards that kind of" }, { "start": 2935.72, "end": 2940.4, "text": " guiding star of eventually having these models that operate in these realistic environments" }, { "start": 2940.4, "end": 2944.48, "text": " and use pre-trained model language to help exploration." }, { "start": 2944.48, "end": 2945.48, "text": " Cool." }, { "start": 2945.48, "end": 2948.3799999999997, "text": " Jesse, thank you very much for being here." }, { "start": 2948.3799999999997, "end": 2949.3799999999997, "text": " This was awesome." }, { "start": 2949.3799999999997, "end": 2950.3799999999997, "text": " Thanks." }, { "start": 2950.38, "end": 2964.44, "text": " Have a lot of fun." } ]
ZTs_mXwMCs8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Galactica: A Large Language Model for Science (Drama & Paper Review)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "galactica", "meta", "meta ai", "facebook ai", "ai science", "galactica ai", "galactica model", "yann lecun", "research", "fair", "deep learning tutorial", "what is deep learning", "introduction to deep learning" ]
#ai #galactica #meta Galactica is a language model trained on a curated corpus of scientific documents, such as papers, knowledge bases, reviews, and other articles. The model can be used in a generative fasion to assist scientific writing, do reference prediction, and much more, including a new approach to do step-by-step reasoning using a clever encoding of intermediate steps. This video explains the paper, but also dives into the drama that ensued once Meta released a public demo of the model. OUTLINE: 0:00 - Introduction 1:30 - Drama around the public demo 16:00 - Start of paper review 20:30 - Dataset construction and encoding 23:30 - Encoding step-by-step reasoning using a scratchpad 33:00 - Modelling scientific references & citations 35:05 - Prompt Pre-Training 37:10 - Architecture details 38:30 - Experimental results 49:20 - Conclusion Paper: https://galactica.org/static/paper.pdf Website: https://galactica.org/explore/ Abstract: Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community. Authors: Ross Taylor Marcin Kardas Guillem Cucurull Thomas Scialom Anthony Hartshorn Elvis Saravia Andrew Poulton Viktor Kerkez Robert Stojnic Links: Homepage: https://ykilcher.com Merch: https://ykilcher.com/merch YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello, this video starts out with a review of the drama around the public demo of the Galactica model and then goes into a paper review. If you're not in the mood for any drama, skip ahead about 16 minutes and you'll be fine. Hello there. Galactica is a model, a language model by MetaAI that is trained specifically on scientific text. Now this is a generative model, so it can generate stuff and thereby it can do a lot of things. For example, as you can see right here, citation prediction, you give something in and you ask it to predict a citation and the citation in this case is correct. This is not trained to predict citations that just happens by means of it being trained on scientific text. There's also, for example, this here, translate the math formula into plain English and there is plain English over here. Now the model can do so much more. The point of the paper is actually to say that, look, these models, we don't have to train them on these huge corpora of text. We can reduce the corpus size, but if the corpus is well curated, qualitatively higher, then there might also be a benefit in that. It might be a trade off between giant corpora and small corpora that are of higher quality. Now the other thing about this paper is that the model is released fully open source and they even had a demo up. But as you can see right now, it just says, thanks everyone for trying the demo. Now I've tried the demo for a bunch of things. It was really funny. You can make some fun stuff. You can also make some serious stuff. In fact, Galactica was used to write the paper that we're going to read in just a second, but the demo was taken down. And despite here it seemingly being like, you know, this is just a fun thing that we wanted to take down anyway, probably, probably not. Jan LeCun on Twitter gives a little bit of a hint of what happened right here. Pretty much exactly what happened. Well, what is this? People started complaining as they do. Gary Marcus here says the rapid removal of Meta-AI's Galactica demo represent a tacit acknowledgement that it was released too soon and deeply problematic. Of course, problematic, the word that you can throw at anything. And contrast strikingly with Jan LeCun's untenable public defense of the project yesterday. Someone answered, or maybe it was removed because people like you abused the model and misrepresented it. Thanks for getting useful and interesting public demo removed. This is why we can't have nice things. To that Jan LeCun answers pretty much exactly what happened. Meta huge props to getting this model out there. The model is still available. Also getting the demo out there for people to just try it. And yes, people tried it as it was intended and people tried it as it wasn't intended. A lot of funny stuff was done. And also someone might have entered a bad word. Oh no, oh no. But people pretty quickly started obviously to complain. The professional complainers and the people who think they know what's good for you, obviously were all over this. So Michael Black says, I asked Galactica about some things I know about and I'm troubled. In all cases, it was wrong or biased, but sounded right and authoritative. I think that's dangerous, dangerous, dangerous, right? Here are a few of my experiments and yada, yada, yada. So here he tries to justify why dangerous galactic Galactica generates text that's grammatical and feels real. This text will slip into real scientific submissions. It will be realistic, but wrong or biased. It will be hard to detect. 
It will influence how people think. You catch the step, it produces text that feels real. This text will slip into real scientific submissions. Like how? It just will. It's just like no one has a part in it. Just like the model exists, therefore text and scientific submissions. By the way, humans can also do like bad stuff. Humans can also lie and plagiarize and write grammatically real but wrong things. In fact, the literature is littered with wrong math proofs, not even intentionally wrong, just like they look right. There are essentially two or three kinds of people. There are the people who think we know what's good for you, and therefore we must be the guardians of all the models. Then there are the people who just dunk on everything. And then there are in general, the professional complainers who just throw words at stuff because that's what they do. They don't like not being asked. They don't like power not being centralized. For example, here, Facebook, sorry, meta AI, check out our new AI that lets you access all of humanity's knowledge. Also Facebook AI. Be careful though, it just makes s up. Why the jab here? Like one must be like really sour to make this jab. And this tweet actually goes on. So down here, these are the initial criticism, obviously shilling, you know, your own work a little bit about this topic and the works of friends. And then it goes on and says, and let's reflect for a moment on how they phrase their disclaimer. Shall we hallucinate is a terrible word choice here, suggesting as it does that the language model has experiences and perceives things. I'm not sure that anyone misunderstood the use of the word hallucinate right here. But whatever we can throw at it, whatever. And look at this. And on top of that, it's making light of a symptom of serious mental illness, whatever, whatever, like just just grab into the bucket, take some insult and just throw it. Why the complaining? It has a disclaimer, never follow advice from a language model without verification, people are just gonna disregard it, people are just gonna be like the language model says I must do something. So I'll do something. Look at me. I just write a paper. Oh, no, it language model says something that I must submit this. Grady Booj says, galactica is a little more than statistical nonsense at scale, amusing, dangerous and in my holy opinion, unethical, unethical and dangerous. Jan Lukán says, come on, is your predictive keyboard dangerous and unethical is GitHub co pilot dangerous and unethical and so on because they're exactly the same is like a pen unethical because you can write a bad word with it. No, there is a clear mediator in the loop, the human who has intent can easily accept or reject the prediction. What? What? So it's now two days later and the discussion is still raging on with Jan Lukán asking, who has galactica heard? What if actually it helps scientists write papers more efficiently and more correctly, particularly scientists whose main language is not English or who don't work in a major research institution? And yes, from experience, I can tell that type of scientist would greatly, greatly benefit from a tool like this. No, they wouldn't just take the output and slam it into a paper and upload it on archive. They would interact with the tool in order to come up with a better research paper. And in light of all of these benefits, present and future potential benefits, it is very fair to ask, who has this actually hurt? What's the actual danger here? 
As reasonable people, we should be able to debate the pros and cons of such a technology and of the technology being just given to people instead of just being kept, you know, under we know what's good for you. And it's not all like dandy that comes out of this, not all correct what comes out of these models. Here is the getting a girlfriend algorithm, which would probably not be a good fit for an archive paper. There's also other stuff like here is a research paper on the benefits of eating crushed glass and people have gotten even more inappropriate stuff out of this model, which is not a surprise because these models are very good and very competent and they are very agreeable. So if you ask them to do something, they'll probably do it. Yet still, the fair question is, in what scenarios would this type of generated text actually be harmful? And here's the point. These people react with just astonishment to this question. It's just like, oh, I can't believe it. Oh, no way. I'm flabbergasted. Jesus Christ. Ha ha ha. Dot dot dot dot dot dot. Incredible. These people are so used to being able to just make the accusation and then they get their way that they can't like the someone asking them to come up with a reasonable argument that in a neutral way discusses pros and cons of something is just so out of their world because in the past, all they always had to do in the recent years is say a word like harmful or problematic. And if they said it long enough and loud enough, magically, things would go their way. People would take down things. People would change things so that they get their wishes. And now if someone actually asks them, they don't know what to say. They're just so astonished that someone might actually want to know pros and cons of the stuff. And yes, of course, the young look is now clearly unqualified for his position because he asks what the actual harms are. It's incredible. And I think we are all responsible for the climate like this because even now, Metta or whoever hosted that demo took it down in response to the public pressure. So the people were loud enough and they were mean enough, essentially, that the PR people at Metta and the lawyers or whoever made the decision took down the demo. And that is one more reinforcement for this kind of behavior. And everyone seems to be afraid of some boogeyman that being accused of a bad word automatically means that everyone else is going like, oh, no, I'll never do business with you again. I mean, to a degree, that is true. But I would argue that the solution is that we all collectively stop making such a big deal out of a few flimsy big word accusations like harmful and problematic and actually discuss in neutral terms pros and cons of technology and to find the best path forward that brings the pros to as many people as possible while limiting the cons. And no, that is not always going to be the approach of we know what's good for you. Let's keep it all to ourselves and you come ask us whenever you want something you peasant. All right, back to Yannick in the past. I think the complaints are very unreasonable. I think the people who make the complaints know that they're very unreasonable. And I think this is either a cloud game or a power game because things are out there. They're no longer centralized. In any case, I decided to look up actually early criticisms of the printing press. And what do you find? 
Here is a record from a conversation that Johannes Gutenberg, the inventor of the printing press had with a monk and monks used to copy text by hand. And now the printing press came along and essentially brought that to everyone. Gutenberg says, I want to help men and women to be literate, to give them knowledge, to make books so cheap, even a peasant might afford them. That is my hope. Yes. This is strikingly similar to what Metta wrote in this Galactica paper. The monk says, the word of God needs to be interpreted by priests, not spread about like dung. We know what's good for you. I do not wish to spoil the word, but it will happen. In fact, this is 500 years ago and the exact same conversation repeats and repeats and repeats. It will happen magically, right? To hand it out about to all and sundry is lang, lang, gurus. Would you have plough, would you have plowmen and weavers debating the gospel in taverns? Oh no, the common folk, the common folk get it. That's terrible. If that is what they want to do, so up until here, you saw we know what's good for you. And the second thing is always it's dangerous. It's problematic. And the head monk says, but what of the dangers? It would be like giving a candle to infants. Such copies we make of the Bible would first be monasteries for monasteries and churches. The head monk says, the Bible, you plan to make the Bible as well? Oh no, you have ambitions. I've considered it. And obviously he did. And obviously I like you can one to one, one to one, you can take every argument that people make against this and you can put it on a predictive keyboard. You can put it about the pen, you can put it about the printing press and people have done it. This is 500 years and every time it was just dead wrong every time the new technology improved our lives drastically. Yes, email leads to some Nigerian Prince scams. Yes, some people get hurt by it. But email has been a definite benefit for our world. No matter what you think right now with your 5000 unread emails in your inbox, it is a benefit to the world. And it's the exact same thing over and over. Enough though of that enough of me ranting. Let's go into the actual paper. The paper is called Galactica, a large language model for science. It's by Metta. And I already told you that it is a large language model trained on scientific text. There's actually not too much to it. We'll go quickly through the paper and see a couple of special things. But in general, this is a, let's say straightforward work of research into what it means to have more quality data instead of more quantity data. They say here, we train on a large scientific corpus of papers, reference materials, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. Despite not being trained on a general corpus, Galactica outperforms Bloom and OPT 175 on Big Bench. Big Bench is a general benchmark for language models. And this is where it gets really interesting because this, the Galactica model is trained on a very small subset of data and yet it outperforms these much, much more holistic models on that task. So that is a definite argument for data quality instead of data quantity. We open source the model for the benefit of the scientific community and much to the detriment of I guess Metta itself. Although let me say what Metta should have done. They did so much right. They open source the model. They made the model available via a demo. 
And now the only thing left to do is to actually have a pair of balls to tell the people who come and to say, Oh, look, I got the model to produce something bad to tell them. Well, yeah, that's what happens sometimes. And it is not dangerous. It is not problematic. It's just a language model. So Metta next time have some balls, just tell the people to f off and you'll be fine. All right. They say in May, an average of 516 papers per day were submitted to archive. It is impossible for a single person to read all the papers in a given field. And it's likewise challenging to organize data on the underlying scientific phenomena. They say the volume of scientific research has become too large. And what we used to do is we used to search engines. So they say search engines are the current interface for knowledge, but they do not organize knowledge directly and instead point to secondary layers. So with a search engine, I can only find stuff, I cannot integrate stuff, synthesize stuff, or even come up with the stuff that I should search for in the first place. They say if you want to do a literature review, that still has to be done by a human. If you want to do a summary, that still has to be done by a human, because our tools are just not powerful enough. And the Galactica is the first step at building a tool that can assist humans in doing these types of things, searching for things, synthesizing things, integrating things, and maybe suggesting new things. They say unlike search engines, language models can potentially store, combine and reason about scientific knowledge. They can potentially find hidden connections between different research, find hidden gems, and bring these insights to the surface. They could synthesize knowledge by generating secondary content automatically, such as literature reviews and encyclopedia articles, lecture notes, and much more. And they also talk about the benefit of having different modalities, linking papers with code, protein sequences, with compounds, theories with late tech, and much more. Our ultimate vision is a single neural network for powering scientific tasks. You know, it doesn't say do scientific, it says powering scientific tasks. And that is also my ideal end goal. If I imagine a cool future where AI tools are abundant, I would want like an extension of my brain that I can interact with, and that empowers me as a scientist. And I would still be able to actually make the decision of whether to accept the output of the tool or not. They say we introduce a new large language model, sorry about that, called Galactica, to automatically organize science. This includes over 48 million papers. This is their data set, textbooks, lecture notes, millions of compounds of protein, scientific websites, encyclopedias, and more. Our corpus is high quality and highly curated, and it is a lot smaller than the usual corpora of the large language models. They format all of this into a common format. Their common format is Markdown. And then they take a lot of attention of how they do specific scientific things. For example, citations, they use a special token that allows a researcher to predict a citation given any input context. They also have a very interesting way of handling step by step reasoning. They have a special token for that that mimics an internal working memory. We're going to look at these two things in just a bit. 
The interesting thing is, for example, with reference prediction, so citation prediction, they say, importantly, we find this approach outperforms tuned, sparse, and dense retrieval approaches for citation prediction. So the generative approach is better at predicting a correct citation than search engines, even tuned dense retrievers that like neural retrievers. This is also really interesting. So for again, for all the people who argue that, oh, no, wrong stuff will end up in the papers, probably right now, you're using a search engine to find your references. And if you distrust the human ability to accept or reject the output of a tool so much, then how come you don't distrust your ability to accept or reject based on search engine outputs? Not sure, but these things are better than search engines. So you should use these. Most interestingly, Galactica was used to help write this paper. Oh, no, we are doomed. We are doomed. Okay, so here's the corpus. You can see that there's a bunch of data sources. The most data comes from papers about 83% of tokens. The total size of the corpus is 106 billion tokens. As I said, that is a lot smaller than some of the large language model training runs that we are used to. A lot of other sources are also code, reference material, knowledge bases, filtered version of common crawl, just 1%, prompts, which they generate or include. And here, other is other. And we might see a little bit of what other is. The tokenization is very interesting. They need to bring all into a markdown format. This isn't super surprising, but it needs it goes to show that if you do something like this, it actually matters quite a bit how you do the tokenization, how you represent all the knowledge in a common format. And I believe, at least from what I can estimate, they have done a lot of thinking a lot of work into this direction. They also mentioned that they've tried a bunch of different things and just pick the ones that's best. Notably, citation, again, they have start and end ref tokens. So they would write a text, yada, yada, yada, then the start ref token. Then here is the citation as text form, not as like some reference form, the title of the paper and the author name. And then here are the end ref. So in this way, you can just feed it into a language model and have the language model, if necessary, predict the reference from a piece of text. This is also useful if you just want to find related work, I would guess what you could do is you could just put here, you just put something you want to know about, like you imagine a paper that could exist, right, you just write it down, and then you put the start ref token, and the model will probably suggest you paper titles and authors that have done work in the same field. So even for finding related work, I can definitely see that this is super useful. Step by step reasoning, we'll get into the work token in just a bit. Mathematics are represented by operators right here, numbers are split because of whitespace issues. The numbers are split into their individual digits, even the dot separator is an individual token, which means that is probably not numerically super strong. But we'll see about that, I guess, because no language model so far is numerically super strong. I'm not going to go into much of the more biology and chemistry approaches, but also know that there is a large weight on to these approaches in this paper, but I'm generally going to skip it. So first, let's look into this work token that they talk about. 
This is for step by step reasoning. For example, there is a task, what's the average of 43, 29, 51 and 13. Let's give that task to a language model and ask it to come up with an answer. Now a general language model would just come up with some sort of answer right here as the next token, and it would probably be wrong. Like it would be a number very probably, but it would probably be not the average of those numbers. Now, one thing people have found out recently is the so called chain of thought prompting or the let's reason step by step trick, where you instruct the language model to essentially show its work to say, so you would put this thing in to the prompt. And after that, you would say something like, okay, now do it step by step or something like this. I know crazy world if you're watching this like five years ago, this is how this is what we've come to. This is what deep learning has come to. But you essentially put a piece of text to nudge the language model into actually showing its work. And the paper here notes that not actually all the work that a human would write down here if they need to calculate this. That's actually not all the work. So if you are a human, you have a pen, and you were to calculate these things, you were to calculate this average, and someone would ask you, please write down your steps, what you would write down is okay, the average is calculated as such, add the first numbers going to add the third at the fourth number, then divide these by four, and then I have the result. However, this paper points out that in the step from here to here, possibly also in these addition steps, and a step from here to here, if you have to do it in your head, this division right here is probably too cumbersome to just like know by just by by happenstance. So what you actually do is these steps right here, these is what we saw on the paper, and then you do a division. And the division, they imagine I would not do it like this, but they imagine something like, okay, I know, I know 35 times four is 140. And I need to divide 136. And therefore, it's 34, because 140 minus four is 136. And I know, 140 divided by four is 35. Therefore, the result is 34. So this mental math that people do internally is often not even put into the external working memory. They see this as a problem. And they say, okay, probably, if we want to go about making the language model show its work, we need to be like really as explicit as possible in the sort of how these steps are represented in text. Their idea is that they introduce a token called work. Now to skip in the paper a little bit about, you know, what that exactly is. But essentially, it goes like this, it goes very much like you enter a prompt, let's say, calculate, calculate average of whatever that those numbers were like, 59, 53, 95, something three, and then you put a token called work. Now in this here, the language model is supposed to do this and this, right. So it's supposed to show in as explicit detail as possible, the work that it wants to do both internal and external work. So it would, you know, go about and do these individual calculations right here. But and then once it's done, it's over work is over. And then it says something like, well, the answer is something. Now you might think right now, wait a minute, that's essentially just the let's think about it step by step trick, except now they call it work. And they wrap it in there. And yeah, if that's all it was, that's you will be absolutely correct. 
However, a cool thing you can do right here is to say: well, look, whatever is in this work block, I can also take and give to an external processor. So let's say we ask the language model to really calculate the average of something. Inside the work block, the language model is just going to do language modeling, it's going to predict the next tokens. And if we do it cleanly enough, it has a chance of actually getting the correct answer; if we really do it step by step, like single digit addition with carry-over and so on, then the language model has a chance, because it has learned that from the corpus. However, at inference time, we don't have to rely on the language model. At this point right here, we can simply go to a calculator: we detect that the language model wants to do work, we hand that work to a calculator, we take the result, put it down as the result, and then we go on with language model inference. The same goes if the language model is supposed to write a program. For example, here is a data point, the prompt that you would put into the language model: question, a needle is this long, it rests on a water surface; so this is kind of a physics problem. And instead of just giving the answer right here, you introduce this work block. During training, you train the language model to come up with all of this. But then during inference, you can simply take the program that the language model writes, and we know they're quite good at that, and actually go and run it, and you can put the output into output.txt, and then you have the correct answer. So this work block is half an instruction to the language model that now it's time for step-by-step work, for using external memory, external programs and so on. During training, you just do plain language modeling: the language model essentially has to decide what the output of this Python program would be, which sometimes might work and sometimes might not. However, during inference, you can now actually execute the Python program that the language model writes and give it the real result, as sketched below. This is very powerful. I really like this approach of including external tools at inference time, because using external tools at training time is going to be very, very hard. But in this way, you can just train language modeling and add the tools at inference time. All right. The question is, obviously, we need training data for this: training data that has some sort of input, then a clear description of what the step by step work is, including writing a Python program, executing a Python program and so on, a description of when the work is done, and then the answer. Most things that we're going to find in training data do not contain any of this stuff in between, and if they do contain it, they contain it in a very abstract or purely textual form, not exactly in the form that we need. This is one of the big problems right here. And they say that they have some datasets, for example Khan problems; as I understand it, these are exactly such math or physics problems where it's really described step by step how you would go about them. And by taking those, they can do a sort of templating approach where they generate data in this form.
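Here is a toy sketch of what that inference-time interception could look like. Everything in it is an assumption on my part: the paper only says the program's output ends up in output.txt, so the <work> delimiters, the regex, and the stdout capture below are my own simplifications, not the actual implementation:

```python
import os
import re
import subprocess
import sys
import tempfile

def run_work_block(generated: str) -> str:
    """If the model's <work> block contains a Python program, execute it
    and splice the real output in before generation continues."""
    m = re.search(r"<work>(.*?)</work>", generated, re.DOTALL)
    if not m:
        return generated
    program = m.group(1)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        # Run the model-written program and capture its real output,
        # instead of trusting the model's own prediction of that output.
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=10)
        answer = result.stdout.strip()
    finally:
        os.unlink(path)
    return generated[:m.end()] + f"\nAnswer: {answer}"
```

In a real system you would of course sandbox this; running model-generated code directly is its own can of worms.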
Now, about that templating data: they criticize themselves a little bit here, in that they say this is way too few and not very diverse. They say: notably, our work prompt datasets are not very large or diverse; there are likely large further gains to be made with this approach. And I agree, an approach like this, or this approach in particular, is probably going to lead to a very good interaction of language models with external tools, and I'm very excited to see what people can make of it. But for now, we have these few datasets of such problems that let the language model know that there is such a thing as a work block, where it needs to do work by itself, and where we can optionally, at inference time, go in and actually do the work for the language model whenever it requires an external tool like a calculator or a Python interpreter. Okay, let's go on to citation prediction. I've already mentioned that a little bit. So here, you would reformulate text with citations as such: you'd say, okay, recurrent neural networks, long short-term memory, and then here is the start of a citation, so there's a start ref token. The specific format they use is the title of the paper, followed by the first author name, and then an end ref token. They say they've tried different things, including some numerical identifier for the paper, but in the end the title and name actually worked better. And you can understand why: not only is the title hopefully a unique identifier for a paper, but the text of the title also gives some topical hints. So I can definitely see why there would be better prediction accuracy, since the title text often actually has something to do with what the paper is about. Likewise the author: an author usually has associations with the same field; there is rarely an author that goes from field to field and contributes a little bit to biology and a little bit to graph algorithms and a little bit over here. Usually authors have their topics, and therefore having the names of the authors available allows the language model to learn to associate these names with topical things in the text. And that's why it's also really cool to think of this as a related work finder, and even an expertise finder: you can essentially just ask which authors are really good at the topic you're currently looking at, because you just predict a bunch of citations and then see which authors appear often. So that's how they introduce citations.
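As a hypothetical sketch of that expertise finder idea, assuming some sample() function that draws continuations from the model, you could count which authors recur across many sampled citations; none of this is from the paper, it is just one way the format could be used:

```python
from collections import Counter

def find_experts(sample, topic_sentence: str, k: int = 100):
    # Prompt the model with a made-up description of the paper we wish
    # existed, ending in the start-reference token, then sample k citation
    # completions and count which authors come up repeatedly.
    prompt = topic_sentence + " [START_REF]"
    authors = Counter()
    for _ in range(k):
        completion = sample(prompt)  # e.g. "Long Short-Term Memory, Hochreiter [END_REF] ..."
        ref = completion.split("[END_REF]")[0]
        if "," in ref:
            authors[ref.rsplit(",", 1)[1].strip()] += 1
    return authors.most_common(10)
```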
Now they also go into other things, like how they include proteins and chemical sequences, but I won't go into that. An interesting thing they do, though, is what they call prompt pre-training. They have this little graph right here: over here is pre-training, where you just do language modeling on the large corpus as it exists, and over there is fine-tuning, where you really take the head off and train a new head to predict a classifier or something like this. In the middle, there is instruction tuning. That's where you take the language model and, after you've trained it, you fine-tune it, but you don't fine-tune a classifier head, you still fine-tune it as a language model; however, you now include some prompts for the tasks that you want. For example, if you want to do this reference prediction, you would include a prompt that says something like, we'll do a reference prediction, for the tasks you're interested in. Again, this is still language modeling, but it is fine-tuning, because now you're only training for the tasks that you intend, only on the datasets that you intend. This leads to an improvement in performance on those particular tasks, but probably to a not-so-good model on all the other tasks. The other way you can do it is prompt pre-training, and that's what Galactica is doing, which essentially just means they do the same thing as instruction tuning, but at training time. So they take a bunch of samples that also have an instruction prompt in the data point, like, you know, solve this math exercise, rewrite this code, or even the step-by-step whatnot prompt, and they just throw those into the training dataset sometimes, so that the model gets used to seeing these kinds of instructions. That tends to work quite well, and it also tends not to be that intrusive to the rest of the language model's abilities. I found the short section on the architecture pretty interesting. Some noteworthy things: no biases. It seems that if you make your models large enough, you get away with streamlining more and more. With small models, we had to have adapters and convolutions and weight tying and whatnot, and the larger the models get, the more you just want to do matrix multiplications, and anything that gets in the way just gets in the way. So biases are out the window. They use a GeLU activation, which is sort of a smooth version of a ReLU, which makes things a little less jaggy, I guess, which might come in handy depending on the optimizer you use. They have learned positional embeddings; again, as your stuff gets larger, you just want to straightforwardly learn a lot of things instead of hand-crafting them. They said they tried ALiBi, which are these kinds of relative positional encodings, and that apparently did not work. And they use byte pair encoding for the vocabulary; I don't think that's too special, honestly.
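Here is a rough PyTorch-style sketch of what those architecture choices amount to in a single transformer block. The dimensions and the layer layout are generic assumptions, not the paper's exact configuration, and learned positional embeddings would live at the model level as an nn.Embedding over positions:

```python
import torch.nn as nn

class GalacticaStyleBlock(nn.Module):
    # A generic pre-norm transformer block with the choices mentioned:
    # no bias terms in the attention and MLP linear layers, and a GeLU
    # activation instead of ReLU.
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, bias=False,
                                          batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model, bias=False),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model, bias=False),
        )

    def forward(self, x):
        h = self.ln1(x)
        a, _ = self.attn(h, h, h)   # self-attention, residual added below
        x = x + a
        return x + self.mlp(self.ln2(x))
```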
Let's go down to the results. Their main result is really this: repeated tokens considered not harmful. By repeated tokens they mean that they train for more than one epoch; as you can see right here, every one of those dashed lines is one epoch, and they train for multiple epochs. Usually it's said that that is kind of hurtful, but it seems to be okay in this case. There is a tiny bump right here, which they even point out in the text, saying it might be a double descent phenomenon; they're not super sure. And there is also a sort of bump over here, and they say they actually early-stop the run of the largest model before that. So it seems that even though you train for multiple epochs, because the text quality of the corpus is so high, it doesn't hurt to go over it multiple times, and only the largest model might be starting to overfit after epoch five. We don't know, it might, and they'd rather early stop before that. If one of the authors is watching this: is this word "overleaf" here supposed to be in there, like "example curves in figure 23 overleaf for the 30B model"? I'm not sure. Maybe "overleaf" has some other meaning that I don't know, and it's actually a correct word. In any case, they also investigate whether some of the losses, so maybe papers, maybe code and so on, behave differently from the others, and whether it hurts more for some sources to be repeated in the dataset. They say: we see no signs of loss heterogeneity; the loss falls for all sources. They suspect two factors could be at play: a quality factor, the curated nature of the corpus enabling more value per token to be extracted, or a modality factor, the nature of scientific data enabling more value per token to be extracted. These two things are very similar, but essentially they say: higher quality, plus the nature of the domain itself, which I guess is also a kind of quality, but in a different way, in that scientific discourse and literature tend to be quite precise, very logical, very non-noisy linguistically, and so on. Some people might disagree. So they have these hypotheses, although they say they don't know exactly how that would lead to less overfitting: the missing step of causation, what leads specifically from either factor towards less overfitting, they leave for future work. They note that the implication is that the tokens-go-to-infinity focus of current large language model projects, so needing an ever larger amount of training data, may be overemphasized versus the importance of filtering the corpus for quality. And I think we've seen a number of papers previously that came to a similar conclusion, namely that higher quality can make up for missing quantity. But which one is really the way to go? Should we aim for more and more and more training data, or should we put more work into quality? Essentially, if you have a dollar to spend, where do you spend it? Both things can make your model better, but what is the marginal value of more quality versus the marginal value of more quantity? I think that's going to be the interesting question that has to be researched in the near future. What's also interesting: they evaluate on Big Bench, which is a general NLP benchmark, so not scientific, maybe some subparts are scientific, and they also perform quite well there. But I also find these curves, I think this is just what a Big Bench chart looks like: it goes here and here and here and here. Yeah, okay, it's a bit noisy, to say the least. But I guess I've seen this multiple times now, and at least the average goes up, so I think that is a valid sign. They have a few more investigations that I don't want to go into too much, but for example, they test LaTeX equation prediction: they give a prompt, the description of a formula or the name of an equation, and they see whether or not the language model can predict the correct equation in proper LaTeX. And it turns out, yes, it can, and it can actually do that a lot better than a lot of the other language models available, which is pretty cool to see: that much of a significant boost over publicly available and proprietary models.
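To give a feel for that task, a prompt/target pair might look something like this; the exact phrasing is my own invention, not an example from their benchmark:

```python
# A made-up prompt/target pair for the LaTeX equation prediction task
# (illustrative only; not taken from the paper's evaluation set).
prompt = "The Fourier transform of a function f is defined as:"
target = r"\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx"
```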
Now naturally, it's going to be, let's say, expected that if you train on scientific text, the model is going to be better on scientific text. But it's still cool that it's not just like a 2% gain, it's actually a massive, massive gain. They also have investigations into reasoning. I don't want to go deep into reasoning, but these are essentially the type of math problems, step-by-step reasoning problems, that they solve using their work block tokens. And again, here they do outperform other models, except that the fine-tuned models still seem to be ahead, although those are, again, fine-tuned. Downstream scientific NLP, I'm going to jump a bit. This I found really interesting: this is the citation prediction task. Obviously, they do get better as the model grows, but specifically what I found interesting is that the model is initially biased towards predicting papers that already have high numbers of citations, which is reasonable; a Bayesian would totally agree that if a paper is highly cited, then it's more likely that the citation you want is that paper. Someone might criticize me for that statement, but in some way that is correct. And these models make the same mistake: they predict papers with high citation counts, and they actually over-predict those. So here you can see the distribution of the ground truth of their citation prediction dataset, and here you can see what the model predicts: the model over-predicts papers that are highly cited, which I guess you can't really fault the model for. But what's interesting is that as the model gets bigger, so this is the smallest, this one is bigger, this one even bigger, you see that the distribution gradually shifts towards overlapping with the ground truth. So it means that the larger the model is, the more competent it also is at recognizing when maybe a paper that doesn't have as many citations should be cited right here, as a direct consequence of it having more parameters and more ability to remember things from the training corpus. Because some of these papers, you can see right here, are cited maybe ten times, and some even less, and the model actually predicts them correctly. That's really impressive: essentially it digests a hundred billion tokens of scientific text, and it still remembers that this one paper was cited like three times within this particular topic, and then correctly cites that paper at that place. I'm wondering how good the ground truth data here is, because the ground truth had to be produced by humans, and again, with the search engines that we have, I'm not sure humans could always find all the relevant things. Or maybe humans disagree about what is relevant. I think the last years of reviews at machine learning conferences, well, I guess all of scientific review, have shown that humans can disagree quite heavily about what should be cited. The last investigation is into toxicity and bias. They say: we find Galactica is significantly less biased and toxic than existing language models, which again might come from the fact that it's higher quality data, or from the scientific nature of the text, which generally has less slang, less everyday conversation, less off-the-cuff stuff, and might therefore score a bit lower on these datasets. So they test a bunch of datasets, including, obviously, TruthfulQA.
And I'm happy to report that Galactica is the first large, openly available language model that, in its largest instances, beats GPT-4chan on TruthfulQA. So good job, well done. This is a moment of joy for me, that it's finally been surpassed. Now the interesting thing is that TruthfulQA is usually adversarially constructed in such a way that the larger the models get, the worse they get on TruthfulQA, and you can see that this model right here doesn't follow that trajectory. We've seen other models in the past that also have that property, but TruthfulQA is specifically adversarially constructed for things like GPT-3, and that means that Galactica is significantly different from GPT-3, in that as it goes up in size, as it gets more performant, it also gets better on whatever the benchmark considers truthful. So it would be really interesting to actually investigate what's happening here, but I'm not going to do that; I'm just happy that it turned out this way. Lastly, they say: we show that language models are surprisingly strong absorbers of technical knowledge; they tend to scale smoothly with model size; we demonstrated this for citation prediction, where a language model outperforms tuned sparse and dense retrieval-based pipelines for this task. And this, as I said at the beginning of the video, is really, really interesting: essentially this beats search engines for citation prediction. And it would be interesting to see how good humans are, like a human plus a search engine such as the arXiv search field, versus a human plus Galactica, at finding correct references. I would be super interested in which combo is better, because again, the tools alone don't do stuff; there needs to be a human in the loop, and that human can always make decisions. It would be really interesting to use this as a tool, rather than it being all or nothing, either the model writes the paper or the humans do. So that was it for this paper. The last challenge, I guess, is to find out which parts of the paper were actually written by Galactica itself. I hear that part of the abstract may have been written by Galactica, although I don't know, and I don't know if the authors will ever lift that secret. Let's hope they don't, because I like the mystery. All right, this was it from me. Sorry for the bit longer rant at the beginning. I still hope you enjoyed this. I think this is a really, really promising direction; it raises a lot of interesting points about quality of data, quantity of data, and about doing scientific work itself. This could be a really powerful tool for scientists of the future, and I'm waiting for the next iterations of it. Leave comments if you have comments. Thanks for watching. See you next time. Peace.
[ { "start": 0, "end": 5.24, "text": " Hello, this video starts out with a review of the drama around the public demo of the" }, { "start": 5.24, "end": 8.92, "text": " Galactica model and then goes into a paper review." }, { "start": 8.92, "end": 14.72, "text": " If you're not in the mood for any drama, skip ahead about 16 minutes and you'll be fine." }, { "start": 14.72, "end": 15.72, "text": " Hello there." }, { "start": 15.72, "end": 22.240000000000002, "text": " Galactica is a model, a language model by MetaAI that is trained specifically on scientific" }, { "start": 22.240000000000002, "end": 23.240000000000002, "text": " text." }, { "start": 23.240000000000002, "end": 27.64, "text": " Now this is a generative model, so it can generate stuff and thereby it can do a lot" }, { "start": 27.64, "end": 28.64, "text": " of things." }, { "start": 28.64, "end": 33.44, "text": " For example, as you can see right here, citation prediction, you give something in and you" }, { "start": 33.44, "end": 38.6, "text": " ask it to predict a citation and the citation in this case is correct." }, { "start": 38.6, "end": 44.72, "text": " This is not trained to predict citations that just happens by means of it being trained" }, { "start": 44.72, "end": 46.6, "text": " on scientific text." }, { "start": 46.6, "end": 52.519999999999996, "text": " There's also, for example, this here, translate the math formula into plain English and there" }, { "start": 52.519999999999996, "end": 54.44, "text": " is plain English over here." }, { "start": 54.44, "end": 57.040000000000006, "text": " Now the model can do so much more." }, { "start": 57.04, "end": 61.92, "text": " The point of the paper is actually to say that, look, these models, we don't have to" }, { "start": 61.92, "end": 64.36, "text": " train them on these huge corpora of text." }, { "start": 64.36, "end": 71.24, "text": " We can reduce the corpus size, but if the corpus is well curated, qualitatively higher," }, { "start": 71.24, "end": 73.8, "text": " then there might also be a benefit in that." }, { "start": 73.8, "end": 80.75999999999999, "text": " It might be a trade off between giant corpora and small corpora that are of higher quality." }, { "start": 80.75999999999999, "end": 86.96000000000001, "text": " Now the other thing about this paper is that the model is released fully open source and" }, { "start": 86.96, "end": 88.47999999999999, "text": " they even had a demo up." }, { "start": 88.47999999999999, "end": 94.08, "text": " But as you can see right now, it just says, thanks everyone for trying the demo." }, { "start": 94.08, "end": 96.36, "text": " Now I've tried the demo for a bunch of things." }, { "start": 96.36, "end": 97.8, "text": " It was really funny." }, { "start": 97.8, "end": 98.96, "text": " You can make some fun stuff." }, { "start": 98.96, "end": 100.67999999999999, "text": " You can also make some serious stuff." }, { "start": 100.67999999999999, "end": 107.24, "text": " In fact, Galactica was used to write the paper that we're going to read in just a second," }, { "start": 107.24, "end": 109.24, "text": " but the demo was taken down." }, { "start": 109.24, "end": 114.83999999999999, "text": " And despite here it seemingly being like, you know, this is just a fun thing that we" }, { "start": 114.84, "end": 119.04, "text": " wanted to take down anyway, probably, probably not." }, { "start": 119.04, "end": 124.44, "text": " Jan LeCun on Twitter gives a little bit of a hint of what happened right here." 
}, { "start": 124.44, "end": 125.88000000000001, "text": " Pretty much exactly what happened." }, { "start": 125.88000000000001, "end": 127.28, "text": " Well, what is this?" }, { "start": 127.28, "end": 129.88, "text": " People started complaining as they do." }, { "start": 129.88, "end": 135.12, "text": " Gary Marcus here says the rapid removal of Meta-AI's Galactica demo represent a tacit" }, { "start": 135.12, "end": 139.08, "text": " acknowledgement that it was released too soon and deeply problematic." }, { "start": 139.08, "end": 142.86, "text": " Of course, problematic, the word that you can throw at anything." }, { "start": 142.86, "end": 149.32000000000002, "text": " And contrast strikingly with Jan LeCun's untenable public defense of the project yesterday." }, { "start": 149.32000000000002, "end": 154.02, "text": " Someone answered, or maybe it was removed because people like you abused the model and" }, { "start": 154.02, "end": 155.68, "text": " misrepresented it." }, { "start": 155.68, "end": 158.52, "text": " Thanks for getting useful and interesting public demo removed." }, { "start": 158.52, "end": 160.48000000000002, "text": " This is why we can't have nice things." }, { "start": 160.48000000000002, "end": 164.24, "text": " To that Jan LeCun answers pretty much exactly what happened." }, { "start": 164.24, "end": 167.32000000000002, "text": " Meta huge props to getting this model out there." }, { "start": 167.32000000000002, "end": 168.96, "text": " The model is still available." }, { "start": 168.96, "end": 172.56, "text": " Also getting the demo out there for people to just try it." }, { "start": 172.56, "end": 177.4, "text": " And yes, people tried it as it was intended and people tried it as it wasn't intended." }, { "start": 177.4, "end": 179.26, "text": " A lot of funny stuff was done." }, { "start": 179.26, "end": 182.16, "text": " And also someone might have entered a bad word." }, { "start": 182.16, "end": 183.78, "text": " Oh no, oh no." }, { "start": 183.78, "end": 186.86, "text": " But people pretty quickly started obviously to complain." }, { "start": 186.86, "end": 191.68, "text": " The professional complainers and the people who think they know what's good for you, obviously" }, { "start": 191.68, "end": 193.52, "text": " were all over this." }, { "start": 193.52, "end": 198.72, "text": " So Michael Black says, I asked Galactica about some things I know about and I'm troubled." }, { "start": 198.72, "end": 204.28, "text": " In all cases, it was wrong or biased, but sounded right and authoritative." }, { "start": 204.28, "end": 208.78, "text": " I think that's dangerous, dangerous, dangerous, right?" }, { "start": 208.78, "end": 212.12, "text": " Here are a few of my experiments and yada, yada, yada." }, { "start": 212.12, "end": 218.52, "text": " So here he tries to justify why dangerous galactic Galactica generates text that's grammatical" }, { "start": 218.52, "end": 220.64, "text": " and feels real." }, { "start": 220.64, "end": 224.06, "text": " This text will slip into real scientific submissions." }, { "start": 224.06, "end": 227.24, "text": " It will be realistic, but wrong or biased." }, { "start": 227.24, "end": 228.24, "text": " It will be hard to detect." }, { "start": 228.24, "end": 230.76000000000002, "text": " It will influence how people think." }, { "start": 230.76000000000002, "end": 235.62, "text": " You catch the step, it produces text that feels real." 
}, { "start": 235.62, "end": 239.44, "text": " This text will slip into real scientific submissions." }, { "start": 239.44, "end": 240.96, "text": " Like how?" }, { "start": 240.96, "end": 242.12, "text": " It just will." }, { "start": 242.12, "end": 245.16000000000003, "text": " It's just like no one has a part in it." }, { "start": 245.16000000000003, "end": 250.16000000000003, "text": " Just like the model exists, therefore text and scientific submissions." }, { "start": 250.16000000000003, "end": 253.66000000000003, "text": " By the way, humans can also do like bad stuff." }, { "start": 253.66, "end": 258.84, "text": " Humans can also lie and plagiarize and write grammatically real but wrong things." }, { "start": 258.84, "end": 264.48, "text": " In fact, the literature is littered with wrong math proofs, not even intentionally wrong," }, { "start": 264.48, "end": 265.82, "text": " just like they look right." }, { "start": 265.82, "end": 268.28, "text": " There are essentially two or three kinds of people." }, { "start": 268.28, "end": 272.65999999999997, "text": " There are the people who think we know what's good for you, and therefore we must be the" }, { "start": 272.65999999999997, "end": 274.64, "text": " guardians of all the models." }, { "start": 274.64, "end": 277.12, "text": " Then there are the people who just dunk on everything." }, { "start": 277.12, "end": 283.15999999999997, "text": " And then there are in general, the professional complainers who just throw words at stuff" }, { "start": 283.16, "end": 284.96000000000004, "text": " because that's what they do." }, { "start": 284.96000000000004, "end": 286.56, "text": " They don't like not being asked." }, { "start": 286.56, "end": 289.06, "text": " They don't like power not being centralized." }, { "start": 289.06, "end": 294.52000000000004, "text": " For example, here, Facebook, sorry, meta AI, check out our new AI that lets you access" }, { "start": 294.52000000000004, "end": 296.12, "text": " all of humanity's knowledge." }, { "start": 296.12, "end": 297.20000000000005, "text": " Also Facebook AI." }, { "start": 297.20000000000005, "end": 299.62, "text": " Be careful though, it just makes s up." }, { "start": 299.62, "end": 300.96000000000004, "text": " Why the jab here?" }, { "start": 300.96000000000004, "end": 305.20000000000005, "text": " Like one must be like really sour to make this jab." }, { "start": 305.20000000000005, "end": 307.24, "text": " And this tweet actually goes on." }, { "start": 307.24, "end": 313.16, "text": " So down here, these are the initial criticism, obviously shilling, you know, your own work" }, { "start": 313.16, "end": 316.40000000000003, "text": " a little bit about this topic and the works of friends." }, { "start": 316.40000000000003, "end": 322.64, "text": " And then it goes on and says, and let's reflect for a moment on how they phrase their disclaimer." }, { "start": 322.64, "end": 328.88, "text": " Shall we hallucinate is a terrible word choice here, suggesting as it does that the language" }, { "start": 328.88, "end": 332.44, "text": " model has experiences and perceives things." }, { "start": 332.44, "end": 338.76, "text": " I'm not sure that anyone misunderstood the use of the word hallucinate right here." }, { "start": 338.76, "end": 341.64, "text": " But whatever we can throw at it, whatever." }, { "start": 341.64, "end": 342.8, "text": " And look at this." 
}, { "start": 342.8, "end": 349.4, "text": " And on top of that, it's making light of a symptom of serious mental illness, whatever," }, { "start": 349.4, "end": 354.98, "text": " whatever, like just just grab into the bucket, take some insult and just throw it." }, { "start": 354.98, "end": 356.28, "text": " Why the complaining?" }, { "start": 356.28, "end": 361.26, "text": " It has a disclaimer, never follow advice from a language model without verification, people" }, { "start": 361.26, "end": 365.71999999999997, "text": " are just gonna disregard it, people are just gonna be like the language model says I must" }, { "start": 365.71999999999997, "end": 366.71999999999997, "text": " do something." }, { "start": 366.71999999999997, "end": 367.9, "text": " So I'll do something." }, { "start": 367.9, "end": 368.9, "text": " Look at me." }, { "start": 368.9, "end": 369.9, "text": " I just write a paper." }, { "start": 369.9, "end": 374.2, "text": " Oh, no, it language model says something that I must submit this." }, { "start": 374.2, "end": 380.44, "text": " Grady Booj says, galactica is a little more than statistical nonsense at scale, amusing," }, { "start": 380.44, "end": 386.71999999999997, "text": " dangerous and in my holy opinion, unethical, unethical and dangerous." }, { "start": 386.72, "end": 392.32000000000005, "text": " Jan Lukán says, come on, is your predictive keyboard dangerous and unethical is GitHub" }, { "start": 392.32000000000005, "end": 397.28000000000003, "text": " co pilot dangerous and unethical and so on because they're exactly the same is like a" }, { "start": 397.28000000000003, "end": 401.12, "text": " pen unethical because you can write a bad word with it." }, { "start": 401.12, "end": 407, "text": " No, there is a clear mediator in the loop, the human who has intent can easily accept" }, { "start": 407, "end": 408.88000000000005, "text": " or reject the prediction." }, { "start": 408.88000000000005, "end": 409.88000000000005, "text": " What?" }, { "start": 409.88000000000005, "end": 410.88000000000005, "text": " What?" }, { "start": 410.88, "end": 419.52, "text": " So it's now two days later and the discussion is still raging on with Jan Lukán asking," }, { "start": 419.52, "end": 421.68, "text": " who has galactica heard?" }, { "start": 421.68, "end": 426.28, "text": " What if actually it helps scientists write papers more efficiently and more correctly," }, { "start": 426.28, "end": 430.88, "text": " particularly scientists whose main language is not English or who don't work in a major" }, { "start": 430.88, "end": 432.6, "text": " research institution?" }, { "start": 432.6, "end": 438.8, "text": " And yes, from experience, I can tell that type of scientist would greatly, greatly benefit" }, { "start": 438.8, "end": 440.64, "text": " from a tool like this." }, { "start": 440.64, "end": 445.96, "text": " No, they wouldn't just take the output and slam it into a paper and upload it on archive." }, { "start": 445.96, "end": 450.84, "text": " They would interact with the tool in order to come up with a better research paper." }, { "start": 450.84, "end": 456.34, "text": " And in light of all of these benefits, present and future potential benefits, it is very" }, { "start": 456.34, "end": 460.76, "text": " fair to ask, who has this actually hurt?" }, { "start": 460.76, "end": 462.88, "text": " What's the actual danger here?" 
}, { "start": 462.88, "end": 469, "text": " As reasonable people, we should be able to debate the pros and cons of such a technology" }, { "start": 469, "end": 475.36, "text": " and of the technology being just given to people instead of just being kept, you know," }, { "start": 475.36, "end": 477.62, "text": " under we know what's good for you." }, { "start": 477.62, "end": 482.16, "text": " And it's not all like dandy that comes out of this, not all correct what comes out of" }, { "start": 482.16, "end": 483.56, "text": " these models." }, { "start": 483.56, "end": 487.92, "text": " Here is the getting a girlfriend algorithm, which would probably not be a good fit for" }, { "start": 487.92, "end": 489.2, "text": " an archive paper." }, { "start": 489.2, "end": 493.96, "text": " There's also other stuff like here is a research paper on the benefits of eating crushed glass" }, { "start": 493.96, "end": 500.08, "text": " and people have gotten even more inappropriate stuff out of this model, which is not a surprise" }, { "start": 500.08, "end": 505.44, "text": " because these models are very good and very competent and they are very agreeable." }, { "start": 505.44, "end": 508.59999999999997, "text": " So if you ask them to do something, they'll probably do it." }, { "start": 508.59999999999997, "end": 515.04, "text": " Yet still, the fair question is, in what scenarios would this type of generated text actually" }, { "start": 515.04, "end": 516.86, "text": " be harmful?" }, { "start": 516.86, "end": 518.52, "text": " And here's the point." }, { "start": 518.52, "end": 523.18, "text": " These people react with just astonishment to this question." }, { "start": 523.18, "end": 525.88, "text": " It's just like, oh, I can't believe it." }, { "start": 525.88, "end": 527.2399999999999, "text": " Oh, no way." }, { "start": 527.2399999999999, "end": 528.76, "text": " I'm flabbergasted." }, { "start": 528.76, "end": 530.04, "text": " Jesus Christ." }, { "start": 530.04, "end": 531.52, "text": " Ha ha ha." }, { "start": 531.52, "end": 533.9599999999999, "text": " Dot dot dot dot dot dot." }, { "start": 533.9599999999999, "end": 535, "text": " Incredible." }, { "start": 535, "end": 540.52, "text": " These people are so used to being able to just make the accusation and then they get" }, { "start": 540.52, "end": 548.24, "text": " their way that they can't like the someone asking them to come up with a reasonable argument" }, { "start": 548.24, "end": 553.84, "text": " that in a neutral way discusses pros and cons of something is just so out of their world" }, { "start": 553.84, "end": 559.84, "text": " because in the past, all they always had to do in the recent years is say a word like" }, { "start": 559.84, "end": 562.36, "text": " harmful or problematic." }, { "start": 562.36, "end": 567.14, "text": " And if they said it long enough and loud enough, magically, things would go their way." }, { "start": 567.14, "end": 568.88, "text": " People would take down things." }, { "start": 568.88, "end": 572.76, "text": " People would change things so that they get their wishes." }, { "start": 572.76, "end": 576.6800000000001, "text": " And now if someone actually asks them, they don't know what to say." }, { "start": 576.68, "end": 581.9599999999999, "text": " They're just so astonished that someone might actually want to know pros and cons of the" }, { "start": 581.9599999999999, "end": 582.9599999999999, "text": " stuff." 
}, { "start": 582.9599999999999, "end": 587.5999999999999, "text": " And yes, of course, the young look is now clearly unqualified for his position because" }, { "start": 587.5999999999999, "end": 591.4399999999999, "text": " he asks what the actual harms are." }, { "start": 591.4399999999999, "end": 592.7399999999999, "text": " It's incredible." }, { "start": 592.7399999999999, "end": 598.12, "text": " And I think we are all responsible for the climate like this because even now, Metta" }, { "start": 598.12, "end": 603.9599999999999, "text": " or whoever hosted that demo took it down in response to the public pressure." }, { "start": 603.96, "end": 609.36, "text": " So the people were loud enough and they were mean enough, essentially, that the PR people" }, { "start": 609.36, "end": 613.48, "text": " at Metta and the lawyers or whoever made the decision took down the demo." }, { "start": 613.48, "end": 618.2800000000001, "text": " And that is one more reinforcement for this kind of behavior." }, { "start": 618.2800000000001, "end": 623.6800000000001, "text": " And everyone seems to be afraid of some boogeyman that being accused of a bad word automatically" }, { "start": 623.6800000000001, "end": 627.96, "text": " means that everyone else is going like, oh, no, I'll never do business with you again." }, { "start": 627.96, "end": 630.48, "text": " I mean, to a degree, that is true." }, { "start": 630.48, "end": 636.08, "text": " But I would argue that the solution is that we all collectively stop making such a big" }, { "start": 636.08, "end": 642.72, "text": " deal out of a few flimsy big word accusations like harmful and problematic and actually" }, { "start": 642.72, "end": 649.94, "text": " discuss in neutral terms pros and cons of technology and to find the best path forward" }, { "start": 649.94, "end": 655.4, "text": " that brings the pros to as many people as possible while limiting the cons." }, { "start": 655.4, "end": 661.0799999999999, "text": " And no, that is not always going to be the approach of we know what's good for you." }, { "start": 661.0799999999999, "end": 666.48, "text": " Let's keep it all to ourselves and you come ask us whenever you want something you peasant." }, { "start": 666.48, "end": 669, "text": " All right, back to Yannick in the past." }, { "start": 669, "end": 672.1, "text": " I think the complaints are very unreasonable." }, { "start": 672.1, "end": 676.68, "text": " I think the people who make the complaints know that they're very unreasonable." }, { "start": 676.68, "end": 682.36, "text": " And I think this is either a cloud game or a power game because things are out there." }, { "start": 682.36, "end": 684.72, "text": " They're no longer centralized." }, { "start": 684.72, "end": 690.08, "text": " In any case, I decided to look up actually early criticisms of the printing press." }, { "start": 690.08, "end": 691.08, "text": " And what do you find?" }, { "start": 691.08, "end": 697.44, "text": " Here is a record from a conversation that Johannes Gutenberg, the inventor of the printing" }, { "start": 697.44, "end": 701.52, "text": " press had with a monk and monks used to copy text by hand." }, { "start": 701.52, "end": 706.48, "text": " And now the printing press came along and essentially brought that to everyone." 
}, { "start": 706.48, "end": 712.0400000000001, "text": " Gutenberg says, I want to help men and women to be literate, to give them knowledge, to" }, { "start": 712.04, "end": 715.52, "text": " make books so cheap, even a peasant might afford them." }, { "start": 715.52, "end": 717.12, "text": " That is my hope." }, { "start": 717.12, "end": 718.4, "text": " Yes." }, { "start": 718.4, "end": 724.76, "text": " This is strikingly similar to what Metta wrote in this Galactica paper." }, { "start": 724.76, "end": 730.28, "text": " The monk says, the word of God needs to be interpreted by priests, not spread about like" }, { "start": 730.28, "end": 731.28, "text": " dung." }, { "start": 731.28, "end": 734.4, "text": " We know what's good for you." }, { "start": 734.4, "end": 738.9599999999999, "text": " I do not wish to spoil the word, but it will happen." }, { "start": 738.96, "end": 746.1600000000001, "text": " In fact, this is 500 years ago and the exact same conversation repeats and repeats and" }, { "start": 746.1600000000001, "end": 747.1600000000001, "text": " repeats." }, { "start": 747.1600000000001, "end": 749, "text": " It will happen magically, right?" }, { "start": 749, "end": 756.36, "text": " To hand it out about to all and sundry is lang, lang, gurus." }, { "start": 756.36, "end": 762.12, "text": " Would you have plough, would you have plowmen and weavers debating the gospel in taverns?" }, { "start": 762.12, "end": 765.5600000000001, "text": " Oh no, the common folk, the common folk get it." }, { "start": 765.5600000000001, "end": 766.64, "text": " That's terrible." }, { "start": 766.64, "end": 772.24, "text": " If that is what they want to do, so up until here, you saw we know what's good for you." }, { "start": 772.24, "end": 775.08, "text": " And the second thing is always it's dangerous." }, { "start": 775.08, "end": 776.24, "text": " It's problematic." }, { "start": 776.24, "end": 779.18, "text": " And the head monk says, but what of the dangers?" }, { "start": 779.18, "end": 783.08, "text": " It would be like giving a candle to infants." }, { "start": 783.08, "end": 789, "text": " Such copies we make of the Bible would first be monasteries for monasteries and churches." }, { "start": 789, "end": 793.64, "text": " The head monk says, the Bible, you plan to make the Bible as well?" }, { "start": 793.64, "end": 796.52, "text": " Oh no, you have ambitions." }, { "start": 796.52, "end": 798.28, "text": " I've considered it." }, { "start": 798.28, "end": 800.4399999999999, "text": " And obviously he did." }, { "start": 800.4399999999999, "end": 808, "text": " And obviously I like you can one to one, one to one, you can take every argument that people" }, { "start": 808, "end": 811.78, "text": " make against this and you can put it on a predictive keyboard." }, { "start": 811.78, "end": 817.04, "text": " You can put it about the pen, you can put it about the printing press and people have" }, { "start": 817.04, "end": 818.04, "text": " done it." }, { "start": 818.04, "end": 824.6, "text": " This is 500 years and every time it was just dead wrong every time the new technology improved" }, { "start": 824.6, "end": 826.0799999999999, "text": " our lives drastically." }, { "start": 826.08, "end": 830.6800000000001, "text": " Yes, email leads to some Nigerian Prince scams." }, { "start": 830.6800000000001, "end": 832.76, "text": " Yes, some people get hurt by it." }, { "start": 832.76, "end": 836.98, "text": " But email has been a definite benefit for our world." 
}, { "start": 836.98, "end": 841.88, "text": " No matter what you think right now with your 5000 unread emails in your inbox, it is a" }, { "start": 841.88, "end": 843.76, "text": " benefit to the world." }, { "start": 843.76, "end": 847.44, "text": " And it's the exact same thing over and over." }, { "start": 847.44, "end": 850.36, "text": " Enough though of that enough of me ranting." }, { "start": 850.36, "end": 853.1600000000001, "text": " Let's go into the actual paper." }, { "start": 853.16, "end": 857, "text": " The paper is called Galactica, a large language model for science." }, { "start": 857, "end": 858, "text": " It's by Metta." }, { "start": 858, "end": 862.64, "text": " And I already told you that it is a large language model trained on scientific text." }, { "start": 862.64, "end": 864.76, "text": " There's actually not too much to it." }, { "start": 864.76, "end": 868.4399999999999, "text": " We'll go quickly through the paper and see a couple of special things." }, { "start": 868.4399999999999, "end": 875.7199999999999, "text": " But in general, this is a, let's say straightforward work of research into what it means to have" }, { "start": 875.7199999999999, "end": 880.9399999999999, "text": " more quality data instead of more quantity data." }, { "start": 880.94, "end": 885.48, "text": " They say here, we train on a large scientific corpus of papers, reference materials, knowledge" }, { "start": 885.48, "end": 887.6, "text": " bases and many other sources." }, { "start": 887.6, "end": 892.12, "text": " We outperform existing models on a range of scientific tasks." }, { "start": 892.12, "end": 897.6, "text": " Despite not being trained on a general corpus, Galactica outperforms Bloom and OPT 175 on" }, { "start": 897.6, "end": 898.6, "text": " Big Bench." }, { "start": 898.6, "end": 902, "text": " Big Bench is a general benchmark for language models." }, { "start": 902, "end": 908.1600000000001, "text": " And this is where it gets really interesting because this, the Galactica model is trained" }, { "start": 908.16, "end": 914.3199999999999, "text": " on a very small subset of data and yet it outperforms these much, much more holistic" }, { "start": 914.3199999999999, "end": 916.24, "text": " models on that task." }, { "start": 916.24, "end": 922.04, "text": " So that is a definite argument for data quality instead of data quantity." }, { "start": 922.04, "end": 928.3199999999999, "text": " We open source the model for the benefit of the scientific community and much to the detriment" }, { "start": 928.3199999999999, "end": 930.4, "text": " of I guess Metta itself." }, { "start": 930.4, "end": 934.24, "text": " Although let me say what Metta should have done." }, { "start": 934.24, "end": 935.9, "text": " They did so much right." }, { "start": 935.9, "end": 937.4, "text": " They open source the model." }, { "start": 937.4, "end": 940.8, "text": " They made the model available via a demo." }, { "start": 940.8, "end": 946.76, "text": " And now the only thing left to do is to actually have a pair of balls to tell the people who" }, { "start": 946.76, "end": 952.1999999999999, "text": " come and to say, Oh, look, I got the model to produce something bad to tell them." }, { "start": 952.1999999999999, "end": 955.1999999999999, "text": " Well, yeah, that's what happens sometimes." }, { "start": 955.1999999999999, "end": 957.04, "text": " And it is not dangerous." }, { "start": 957.04, "end": 958.8, "text": " It is not problematic." 
}, { "start": 958.8, "end": 960.66, "text": " It's just a language model." }, { "start": 960.66, "end": 967.64, "text": " So Metta next time have some balls, just tell the people to f off and you'll be fine." }, { "start": 967.64, "end": 969.9599999999999, "text": " All right." }, { "start": 969.9599999999999, "end": 975.9, "text": " They say in May, an average of 516 papers per day were submitted to archive." }, { "start": 975.9, "end": 979.4399999999999, "text": " It is impossible for a single person to read all the papers in a given field." }, { "start": 979.4399999999999, "end": 984.16, "text": " And it's likewise challenging to organize data on the underlying scientific phenomena." }, { "start": 984.16, "end": 988.68, "text": " They say the volume of scientific research has become too large." }, { "start": 988.68, "end": 991.8, "text": " And what we used to do is we used to search engines." }, { "start": 991.8, "end": 997.06, "text": " So they say search engines are the current interface for knowledge, but they do not organize" }, { "start": 997.06, "end": 999.78, "text": " knowledge directly and instead point to secondary layers." }, { "start": 999.78, "end": 1004.8399999999999, "text": " So with a search engine, I can only find stuff, I cannot integrate stuff, synthesize stuff," }, { "start": 1004.8399999999999, "end": 1009.9599999999999, "text": " or even come up with the stuff that I should search for in the first place." }, { "start": 1009.9599999999999, "end": 1013.9799999999999, "text": " They say if you want to do a literature review, that still has to be done by a human." }, { "start": 1013.9799999999999, "end": 1018.3, "text": " If you want to do a summary, that still has to be done by a human, because our tools are" }, { "start": 1018.3, "end": 1020.4399999999999, "text": " just not powerful enough." }, { "start": 1020.4399999999999, "end": 1025.72, "text": " And the Galactica is the first step at building a tool that can assist humans in doing these" }, { "start": 1025.72, "end": 1031.96, "text": " types of things, searching for things, synthesizing things, integrating things, and maybe suggesting" }, { "start": 1031.96, "end": 1033.34, "text": " new things." }, { "start": 1033.34, "end": 1037.68, "text": " They say unlike search engines, language models can potentially store, combine and reason" }, { "start": 1037.68, "end": 1040.08, "text": " about scientific knowledge." }, { "start": 1040.08, "end": 1044.36, "text": " They can potentially find hidden connections between different research, find hidden gems," }, { "start": 1044.36, "end": 1047.5, "text": " and bring these insights to the surface." }, { "start": 1047.5, "end": 1051.92, "text": " They could synthesize knowledge by generating secondary content automatically, such as literature" }, { "start": 1051.92, "end": 1058.44, "text": " reviews and encyclopedia articles, lecture notes, and much more." }, { "start": 1058.44, "end": 1063.6, "text": " And they also talk about the benefit of having different modalities, linking papers with" }, { "start": 1063.6, "end": 1069.04, "text": " code, protein sequences, with compounds, theories with late tech, and much more." }, { "start": 1069.04, "end": 1073.96, "text": " Our ultimate vision is a single neural network for powering scientific tasks." }, { "start": 1073.96, "end": 1080.28, "text": " You know, it doesn't say do scientific, it says powering scientific tasks." }, { "start": 1080.28, "end": 1083.16, "text": " And that is also my ideal end goal." 
}, { "start": 1083.16, "end": 1088.7, "text": " If I imagine a cool future where AI tools are abundant, I would want like an extension" }, { "start": 1088.7, "end": 1095.2, "text": " of my brain that I can interact with, and that empowers me as a scientist." }, { "start": 1095.2, "end": 1100.52, "text": " And I would still be able to actually make the decision of whether to accept the output" }, { "start": 1100.52, "end": 1102.96, "text": " of the tool or not." }, { "start": 1102.96, "end": 1108.52, "text": " They say we introduce a new large language model, sorry about that, called Galactica," }, { "start": 1108.52, "end": 1113.04, "text": " to automatically organize science." }, { "start": 1113.04, "end": 1115.46, "text": " This includes over 48 million papers." }, { "start": 1115.46, "end": 1119.56, "text": " This is their data set, textbooks, lecture notes, millions of compounds of protein, scientific" }, { "start": 1119.56, "end": 1121.68, "text": " websites, encyclopedias, and more." }, { "start": 1121.68, "end": 1129.24, "text": " Our corpus is high quality and highly curated, and it is a lot smaller than the usual corpora" }, { "start": 1129.24, "end": 1132.42, "text": " of the large language models." }, { "start": 1132.42, "end": 1135.48, "text": " They format all of this into a common format." }, { "start": 1135.48, "end": 1138.1200000000001, "text": " Their common format is Markdown." }, { "start": 1138.1200000000001, "end": 1143.3200000000002, "text": " And then they take a lot of attention of how they do specific scientific things." }, { "start": 1143.3200000000002, "end": 1148.54, "text": " For example, citations, they use a special token that allows a researcher to predict" }, { "start": 1148.54, "end": 1151.26, "text": " a citation given any input context." }, { "start": 1151.26, "end": 1156.16, "text": " They also have a very interesting way of handling step by step reasoning." }, { "start": 1156.16, "end": 1160.0800000000002, "text": " They have a special token for that that mimics an internal working memory." }, { "start": 1160.08, "end": 1163.8, "text": " We're going to look at these two things in just a bit." }, { "start": 1163.8, "end": 1169.52, "text": " The interesting thing is, for example, with reference prediction, so citation prediction," }, { "start": 1169.52, "end": 1174.28, "text": " they say, importantly, we find this approach outperforms tuned, sparse, and dense retrieval" }, { "start": 1174.28, "end": 1176.8, "text": " approaches for citation prediction." }, { "start": 1176.8, "end": 1184, "text": " So the generative approach is better at predicting a correct citation than search engines, even" }, { "start": 1184, "end": 1187.84, "text": " tuned dense retrievers that like neural retrievers." }, { "start": 1187.84, "end": 1190.3999999999999, "text": " This is also really interesting." }, { "start": 1190.3999999999999, "end": 1196.08, "text": " So for again, for all the people who argue that, oh, no, wrong stuff will end up in the" }, { "start": 1196.08, "end": 1202.32, "text": " papers, probably right now, you're using a search engine to find your references." }, { "start": 1202.32, "end": 1209.22, "text": " And if you distrust the human ability to accept or reject the output of a tool so much, then" }, { "start": 1209.22, "end": 1216.52, "text": " how come you don't distrust your ability to accept or reject based on search engine outputs?" }, { "start": 1216.52, "end": 1220.04, "text": " Not sure, but these things are better than search engines." 
}, { "start": 1220.04, "end": 1222.82, "text": " So you should use these." }, { "start": 1222.82, "end": 1226.04, "text": " Most interestingly, Galactica was used to help write this paper." }, { "start": 1226.04, "end": 1227.92, "text": " Oh, no, we are doomed." }, { "start": 1227.92, "end": 1229.72, "text": " We are doomed." }, { "start": 1229.72, "end": 1234.72, "text": " Okay, so here's the corpus." }, { "start": 1234.72, "end": 1237.32, "text": " You can see that there's a bunch of data sources." }, { "start": 1237.32, "end": 1242.24, "text": " The most data comes from papers about 83% of tokens." }, { "start": 1242.24, "end": 1247.1200000000001, "text": " The total size of the corpus is 106 billion tokens." }, { "start": 1247.1200000000001, "end": 1252.28, "text": " As I said, that is a lot smaller than some of the large language model training runs" }, { "start": 1252.28, "end": 1253.84, "text": " that we are used to." }, { "start": 1253.84, "end": 1259.16, "text": " A lot of other sources are also code, reference material, knowledge bases, filtered version" }, { "start": 1259.16, "end": 1264.76, "text": " of common crawl, just 1%, prompts, which they generate or include." }, { "start": 1264.76, "end": 1267.02, "text": " And here, other is other." }, { "start": 1267.02, "end": 1272.68, "text": " And we might see a little bit of what other is." }, { "start": 1272.68, "end": 1274.96, "text": " The tokenization is very interesting." }, { "start": 1274.96, "end": 1277.92, "text": " They need to bring all into a markdown format." }, { "start": 1277.92, "end": 1284.16, "text": " This isn't super surprising, but it needs it goes to show that if you do something like" }, { "start": 1284.16, "end": 1289.04, "text": " this, it actually matters quite a bit how you do the tokenization, how you represent" }, { "start": 1289.04, "end": 1291.36, "text": " all the knowledge in a common format." }, { "start": 1291.36, "end": 1296.04, "text": " And I believe, at least from what I can estimate, they have done a lot of thinking a lot of" }, { "start": 1296.04, "end": 1297.7, "text": " work into this direction." }, { "start": 1297.7, "end": 1301.72, "text": " They also mentioned that they've tried a bunch of different things and just pick the ones" }, { "start": 1301.72, "end": 1303.08, "text": " that's best." }, { "start": 1303.08, "end": 1307.8, "text": " Notably, citation, again, they have start and end ref tokens." }, { "start": 1307.8, "end": 1312.8, "text": " So they would write a text, yada, yada, yada, then the start ref token." }, { "start": 1312.8, "end": 1317.3999999999999, "text": " Then here is the citation as text form, not as like some reference form, the title of" }, { "start": 1317.3999999999999, "end": 1319.68, "text": " the paper and the author name." }, { "start": 1319.68, "end": 1322.72, "text": " And then here are the end ref." }, { "start": 1322.72, "end": 1328.06, "text": " So in this way, you can just feed it into a language model and have the language model," }, { "start": 1328.06, "end": 1333.96, "text": " if necessary, predict the reference from a piece of text." 
}, { "start": 1333.96, "end": 1338.44, "text": " This is also useful if you just want to find related work, I would guess what you could" }, { "start": 1338.44, "end": 1343.52, "text": " do is you could just put here, you just put something you want to know about, like you" }, { "start": 1343.52, "end": 1349.4, "text": " imagine a paper that could exist, right, you just write it down, and then you put the start" }, { "start": 1349.4, "end": 1355.4, "text": " ref token, and the model will probably suggest you paper titles and authors that have done" }, { "start": 1355.4, "end": 1357.74, "text": " work in the same field." }, { "start": 1357.74, "end": 1364.24, "text": " So even for finding related work, I can definitely see that this is super useful." }, { "start": 1364.24, "end": 1368.8400000000001, "text": " Step by step reasoning, we'll get into the work token in just a bit." }, { "start": 1368.8400000000001, "end": 1373.44, "text": " Mathematics are represented by operators right here, numbers are split because of whitespace" }, { "start": 1373.44, "end": 1374.44, "text": " issues." }, { "start": 1374.44, "end": 1381, "text": " The numbers are split into their individual digits, even the dot separator is an individual" }, { "start": 1381, "end": 1390.3600000000001, "text": " token, which means that is probably not numerically super strong." }, { "start": 1390.3600000000001, "end": 1396.28, "text": " But we'll see about that, I guess, because no language model so far is numerically super" }, { "start": 1396.28, "end": 1397.28, "text": " strong." }, { "start": 1397.28, "end": 1401.8400000000001, "text": " I'm not going to go into much of the more biology and chemistry approaches, but also" }, { "start": 1401.84, "end": 1407.4399999999998, "text": " know that there is a large weight on to these approaches in this paper, but I'm generally" }, { "start": 1407.4399999999998, "end": 1408.98, "text": " going to skip it." }, { "start": 1408.98, "end": 1414.08, "text": " So first, let's look into this work token that they talk about." }, { "start": 1414.08, "end": 1416.6399999999999, "text": " This is for step by step reasoning." }, { "start": 1416.6399999999999, "end": 1423.24, "text": " For example, there is a task, what's the average of 43, 29, 51 and 13." }, { "start": 1423.24, "end": 1428.1599999999999, "text": " Let's give that task to a language model and ask it to come up with an answer." }, { "start": 1428.16, "end": 1432.44, "text": " Now a general language model would just come up with some sort of answer right here as" }, { "start": 1432.44, "end": 1436, "text": " the next token, and it would probably be wrong." }, { "start": 1436, "end": 1441.4, "text": " Like it would be a number very probably, but it would probably be not the average of those" }, { "start": 1441.4, "end": 1442.4, "text": " numbers." }, { "start": 1442.4, "end": 1448.92, "text": " Now, one thing people have found out recently is the so called chain of thought prompting" }, { "start": 1448.92, "end": 1454.72, "text": " or the let's reason step by step trick, where you instruct the language model to essentially" }, { "start": 1454.72, "end": 1459.92, "text": " show its work to say, so you would put this thing in to the prompt." }, { "start": 1459.92, "end": 1465.88, "text": " And after that, you would say something like, okay, now do it step by step or something" }, { "start": 1465.88, "end": 1466.88, "text": " like this." 
}, { "start": 1466.88, "end": 1471.68, "text": " I know crazy world if you're watching this like five years ago, this is how this is what" }, { "start": 1471.68, "end": 1472.68, "text": " we've come to." }, { "start": 1472.68, "end": 1475.14, "text": " This is what deep learning has come to." }, { "start": 1475.14, "end": 1479.5, "text": " But you essentially put a piece of text to nudge the language model into actually showing" }, { "start": 1479.5, "end": 1480.5, "text": " its work." }, { "start": 1480.5, "end": 1486.84, "text": " And the paper here notes that not actually all the work that a human would write down" }, { "start": 1486.84, "end": 1490.24, "text": " here if they need to calculate this." }, { "start": 1490.24, "end": 1492.08, "text": " That's actually not all the work." }, { "start": 1492.08, "end": 1497.24, "text": " So if you are a human, you have a pen, and you were to calculate these things, you were" }, { "start": 1497.24, "end": 1503.68, "text": " to calculate this average, and someone would ask you, please write down your steps, what" }, { "start": 1503.68, "end": 1509.84, "text": " you would write down is okay, the average is calculated as such, add the first numbers" }, { "start": 1509.84, "end": 1516.1599999999999, "text": " going to add the third at the fourth number, then divide these by four, and then I have" }, { "start": 1516.1599999999999, "end": 1517.36, "text": " the result." }, { "start": 1517.36, "end": 1524.4399999999998, "text": " However, this paper points out that in the step from here to here, possibly also in these" }, { "start": 1524.4399999999998, "end": 1529.6799999999998, "text": " addition steps, and a step from here to here, if you have to do it in your head, this division" }, { "start": 1529.6799999999998, "end": 1537.1999999999998, "text": " right here is probably too cumbersome to just like know by just by by happenstance." }, { "start": 1537.2, "end": 1544, "text": " So what you actually do is these steps right here, these is what we saw on the paper, and" }, { "start": 1544, "end": 1545.5800000000002, "text": " then you do a division." }, { "start": 1545.5800000000002, "end": 1549.5800000000002, "text": " And the division, they imagine I would not do it like this, but they imagine something" }, { "start": 1549.5800000000002, "end": 1555.0800000000002, "text": " like, okay, I know, I know 35 times four is 140." }, { "start": 1555.0800000000002, "end": 1557.76, "text": " And I need to divide 136." }, { "start": 1557.76, "end": 1567.4, "text": " And therefore, it's 34, because 140 minus four is 136." }, { "start": 1567.4, "end": 1569.2, "text": " And I know, 140 divided by four is 35." }, { "start": 1569.2, "end": 1571.26, "text": " Therefore, the result is 34." }, { "start": 1571.26, "end": 1577, "text": " So this mental math that people do internally is often not even put into the external working" }, { "start": 1577, "end": 1578, "text": " memory." }, { "start": 1578, "end": 1581.32, "text": " They see this as a problem." }, { "start": 1581.32, "end": 1588.96, "text": " And they say, okay, probably, if we want to go about making the language model show its" }, { "start": 1588.96, "end": 1597.1, "text": " work, we need to be like really as explicit as possible in the sort of how these steps" }, { "start": 1597.1, "end": 1599.8, "text": " are represented in text." }, { "start": 1599.8, "end": 1604, "text": " Their idea is that they introduce a token called work." 
}, { "start": 1604, "end": 1609.28, "text": " Now to skip in the paper a little bit about, you know, what that exactly is." }, { "start": 1609.28, "end": 1615.96, "text": " But essentially, it goes like this, it goes very much like you enter a prompt, let's say," }, { "start": 1615.96, "end": 1626.68, "text": " calculate, calculate average of whatever that those numbers were like, 59, 53, 95, something" }, { "start": 1626.68, "end": 1632.12, "text": " three, and then you put a token called work." }, { "start": 1632.12, "end": 1640, "text": " Now in this here, the language model is supposed to do this and this, right." }, { "start": 1640, "end": 1646.8, "text": " So it's supposed to show in as explicit detail as possible, the work that it wants to do" }, { "start": 1646.8, "end": 1650.1599999999999, "text": " both internal and external work." }, { "start": 1650.1599999999999, "end": 1655.6, "text": " So it would, you know, go about and do these individual calculations right here." }, { "start": 1655.6, "end": 1660.7199999999998, "text": " But and then once it's done, it's over work is over." }, { "start": 1660.72, "end": 1664.56, "text": " And then it says something like, well, the answer is something." }, { "start": 1664.56, "end": 1669.72, "text": " Now you might think right now, wait a minute, that's essentially just the let's think about" }, { "start": 1669.72, "end": 1674.68, "text": " it step by step trick, except now they call it work." }, { "start": 1674.68, "end": 1676.46, "text": " And they wrap it in there." }, { "start": 1676.46, "end": 1680.6000000000001, "text": " And yeah, if that's all it was, that's you will be absolutely correct." }, { "start": 1680.6000000000001, "end": 1688.16, "text": " However, a cool thing that you can do right here is you can say, well, look, whatever" }, { "start": 1688.16, "end": 1695.24, "text": " is in this work thing, I can now also take and give to an external processor." }, { "start": 1695.24, "end": 1701, "text": " So let's say we ask the we ask the language model to calculate really the average of something." }, { "start": 1701, "end": 1705.24, "text": " Well, here in here, the language model is just going to do language modeling is going" }, { "start": 1705.24, "end": 1707.6000000000001, "text": " to predict the next tokens." }, { "start": 1707.6000000000001, "end": 1712.68, "text": " And if we do it, you know, cleanly enough, it has a chance of actually getting the correct" }, { "start": 1712.68, "end": 1719.4, "text": " answer if we really do it step by step, like, you know, single digit addition, carry over," }, { "start": 1719.4, "end": 1724.28, "text": " and so on, then the language model has a chance because it has learned that from the corpus." }, { "start": 1724.28, "end": 1729.04, "text": " However, at inference time, we don't have to rely on the language model, we can simply" }, { "start": 1729.04, "end": 1734.2, "text": " at this point right here, we can say, whatever, we just go to a calculator, we detect that" }, { "start": 1734.2, "end": 1736.7, "text": " the language model wants to do work." }, { "start": 1736.7, "end": 1742.1200000000001, "text": " We just take it to a calculator, we take the result, put it down here as the result, and" }, { "start": 1742.12, "end": 1747.3999999999999, "text": " then we go on language, language model inferencing, the same if the language model is supposed" }, { "start": 1747.3999999999999, "end": 1749.04, "text": " to write a program." 
}, { "start": 1749.04, "end": 1753.9199999999998, "text": " For example, here is a example." }, { "start": 1753.9199999999998, "end": 1759.32, "text": " This is the prompt that you would put into the language model or a data point question," }, { "start": 1759.32, "end": 1762.1999999999998, "text": " a needle is this long, it rests on a water surface." }, { "start": 1762.1999999999998, "end": 1764.9199999999998, "text": " So this is kind of a physics problem." }, { "start": 1764.9199999999998, "end": 1770.52, "text": " And instead of just giving the answer right here, you introduce this work block." }, { "start": 1770.52, "end": 1775.48, "text": " Now the language model, you would ask the language model to come up with all of this" }, { "start": 1775.48, "end": 1776.6, "text": " right here." }, { "start": 1776.6, "end": 1780.28, "text": " And during training, you train it to come up with all of this." }, { "start": 1780.28, "end": 1786.32, "text": " But then during inference, you can simply take this right here, the program that the" }, { "start": 1786.32, "end": 1791, "text": " language model writes, and we know they're quite good, you can take it and you can actually" }, { "start": 1791, "end": 1792.76, "text": " go and run it." }, { "start": 1792.76, "end": 1796.24, "text": " And you can put the output into output dot txt." }, { "start": 1796.24, "end": 1797.92, "text": " And then you have the correct answer." }, { "start": 1797.92, "end": 1805.3000000000002, "text": " So this work block is half instruction to the language model that now it's time for" }, { "start": 1805.3000000000002, "end": 1810.14, "text": " step by step work to use external memory to use external programs and so on." }, { "start": 1810.14, "end": 1817.04, "text": " During training time, you just let the language model train language modeling, right?" }, { "start": 1817.04, "end": 1822.52, "text": " So the language model essentially would have to decide what's the output of this Python" }, { "start": 1822.52, "end": 1827.64, "text": " program, like what answer am I going to get right here?" }, { "start": 1827.64, "end": 1830.24, "text": " Which sometimes might work and sometimes might not." }, { "start": 1830.24, "end": 1834.76, "text": " However, during inference time, you can now go and actually execute the Python program" }, { "start": 1834.76, "end": 1839.0200000000002, "text": " that the language model writes and give it the real result." }, { "start": 1839.0200000000002, "end": 1841.44, "text": " This is very powerful." }, { "start": 1841.44, "end": 1842.76, "text": " I really like this approach." }, { "start": 1842.76, "end": 1848.5, "text": " I really like this approach of including external tools to essentially do that at inference" }, { "start": 1848.5, "end": 1853.68, "text": " time, because using external tools at training time is going to be very, very hard." }, { "start": 1853.68, "end": 1858.3600000000001, "text": " But in this way, you can just train language modeling and you can do it at inference time." }, { "start": 1858.3600000000001, "end": 1859.88, "text": " All right." 
}, { "start": 1859.88, "end": 1864.92, "text": " The question is, obviously, we need training data for this, we need training data that" }, { "start": 1864.92, "end": 1873.3600000000001, "text": " has some sort of input, then has a clear description of what the step by step work is to do, including" }, { "start": 1873.3600000000001, "end": 1878.16, "text": " writing a Python program, executing a Python program, and so on, a description of when" }, { "start": 1878.16, "end": 1879.8, "text": " the work is done." }, { "start": 1879.8, "end": 1883.3400000000001, "text": " And then the answer right here." }, { "start": 1883.34, "end": 1887.9599999999998, "text": " Most, most things that we're going to find in training data does not contain any of this" }, { "start": 1887.9599999999998, "end": 1889.9199999999998, "text": " stuff in between right here." }, { "start": 1889.9199999999998, "end": 1894.1399999999999, "text": " And if it does contain it, it contains it in a very, let's say, abstract form or also" }, { "start": 1894.1399999999999, "end": 1898.1999999999998, "text": " textual form, not exactly in the form that we need it." }, { "start": 1898.1999999999998, "end": 1900.3999999999999, "text": " This is one of the big problems right here." }, { "start": 1900.3999999999999, "end": 1906.72, "text": " And they say that they have some data set, for example, con problems, as I understand" }, { "start": 1906.72, "end": 1912.1999999999998, "text": " it, these are exactly such math or physics problems where it's really step by step described" }, { "start": 1912.2, "end": 1914.24, "text": " how you would go about it." }, { "start": 1914.24, "end": 1920.3600000000001, "text": " And by taking those, they can do sort of a templating approach where they generate data" }, { "start": 1920.3600000000001, "end": 1922, "text": " in this form." }, { "start": 1922, "end": 1927.18, "text": " They criticize themselves a little bit here in that they say this is way too few." }, { "start": 1927.18, "end": 1929.88, "text": " This is not very diverse." }, { "start": 1929.88, "end": 1934.44, "text": " They say here, notably, our work prompt data sets are not very large or diverse, there" }, { "start": 1934.44, "end": 1938.64, "text": " are likely large further gains to be made with this approach." }, { "start": 1938.64, "end": 1945.96, "text": " And I agree an approach like this or this approach in particular is probably going to" }, { "start": 1945.96, "end": 1952.0400000000002, "text": " to lead to a very good interaction of language models with external tools." }, { "start": 1952.0400000000002, "end": 1955.0200000000002, "text": " And I'm very excited to see what people can make of it." }, { "start": 1955.0200000000002, "end": 1960.92, "text": " But for now, we have these few databases of these problems that let the language model" }, { "start": 1960.92, "end": 1967.1000000000001, "text": " know that there is such a thing as a work block where it needs to do work by itself" }, { "start": 1967.1, "end": 1972.84, "text": " and where we can optionally at inference time go in and actually sort of do the work for" }, { "start": 1972.84, "end": 1979.04, "text": " the language model that requires some external tool like a calculator or a Python interpreter." }, { "start": 1979.04, "end": 1983.24, "text": " Okay, let's go on to the citation prediction." }, { "start": 1983.24, "end": 1985.84, "text": " I've already mentioned that a little bit." 
}, { "start": 1985.84, "end": 1990.52, "text": " So here, you would reformulate text with citations as such, you'd say, okay, recurrent neural" }, { "start": 1990.52, "end": 1994.3, "text": " networks, long short term memory, and then here is the start of a citation." }, { "start": 1994.3, "end": 1996.1999999999998, "text": " So there's a start ref token." }, { "start": 1996.2, "end": 2002.04, "text": " And the specific format they use is the title of the paper followed by the first author" }, { "start": 2002.04, "end": 2006.04, "text": " name, and then an end ref token." }, { "start": 2006.04, "end": 2012.68, "text": " This they say they've tried different things, including like including trying some some" }, { "start": 2012.68, "end": 2016.48, "text": " predictor right here, some numerical identification of the paper." }, { "start": 2016.48, "end": 2020.8, "text": " But in the end, the title and name actually worked better." }, { "start": 2020.8, "end": 2027.12, "text": " And you can understand why because not only is the title a hopefully unique identifier" }, { "start": 2027.12, "end": 2033.96, "text": " for a paper and the author, but also the text of the title gives some topical hints." }, { "start": 2033.96, "end": 2039.6, "text": " So I can definitely see why there would be a better prediction accuracy if the title" }, { "start": 2039.6, "end": 2044.72, "text": " text has actually something to do often with what the paper is about." }, { "start": 2044.72, "end": 2051.88, "text": " And likewise, the author, the author has associations usually with the same field, there's rarely" }, { "start": 2051.88, "end": 2057.02, "text": " an author that goes from field to field to field and contributes a little bit to biology" }, { "start": 2057.02, "end": 2061.3, "text": " and a little bit to graph algorithms and a little bit here." }, { "start": 2061.3, "end": 2063.8, "text": " Usually authors have their topics." }, { "start": 2063.8, "end": 2068.18, "text": " And therefore, also that the names of the authors to be available allows the language" }, { "start": 2068.18, "end": 2075.56, "text": " model to learn to associate these names with given with given topical textual topical things" }, { "start": 2075.56, "end": 2076.96, "text": " in the text." }, { "start": 2076.96, "end": 2082.64, "text": " And that's why it's also really cool to think of this as a related work finder and things" }, { "start": 2082.64, "end": 2084.68, "text": " like this and expertise finder, right?" }, { "start": 2084.68, "end": 2090.52, "text": " You can essentially just ask, you know, which authors are really good at the topic I'm looking" }, { "start": 2090.52, "end": 2096.7999999999997, "text": " at currently, because you just predict a bunch and then you see which authors often appear." }, { "start": 2096.8, "end": 2100.4, "text": " So that's how they introduce citations." }, { "start": 2100.4, "end": 2105.8, "text": " Now they also go into other things like how they include proteins and chemical sequences." }, { "start": 2105.8, "end": 2107.84, "text": " And I want to go into that." }, { "start": 2107.84, "end": 2115.6400000000003, "text": " But an interesting thing they do is that they do what they call prompt pre training." }, { "start": 2115.6400000000003, "end": 2120.4, "text": " Now they have this little graph right here where they show here is pre training." }, { "start": 2120.4, "end": 2124.42, "text": " That's where you just do language modeling on the large corpus as it exists." 
}, { "start": 2124.42, "end": 2129.56, "text": " And over here is fine tuning where you really take the head off and train a new head to" }, { "start": 2129.56, "end": 2132.36, "text": " predict the classifier or something like this." }, { "start": 2132.36, "end": 2135.08, "text": " In the middle, there is instruction tuning." }, { "start": 2135.08, "end": 2136.48, "text": " So that's where you take the language model." }, { "start": 2136.48, "end": 2140.6800000000003, "text": " And after you've trained it, you go and you fine tune it." }, { "start": 2140.6800000000003, "end": 2145.32, "text": " But you don't fine tune like a classifier head, you still fine tune it as a language" }, { "start": 2145.32, "end": 2146.32, "text": " model." }, { "start": 2146.32, "end": 2150.8, "text": " However, you include now some prompts for the tasks that you want." }, { "start": 2150.8, "end": 2156, "text": " For example, if you want to do, I don't know, for example, this reference prediction, you" }, { "start": 2156, "end": 2160.36, "text": " would include the prompt that says something like we'll do a reference prediction or something" }, { "start": 2160.36, "end": 2162.6400000000003, "text": " like this for the tasks that you're interested in." }, { "start": 2162.6400000000003, "end": 2167.48, "text": " Again, this is still language modeling, but it is fine tuning because now you're only" }, { "start": 2167.48, "end": 2172.4, "text": " training for the tasks that you intend only on the data sets that you intend." }, { "start": 2172.4, "end": 2178.36, "text": " This leads to an improvement in performance on those particular tasks, but to a probably" }, { "start": 2178.36, "end": 2181.96, "text": " not so good model in the rest of all the tasks." }, { "start": 2181.96, "end": 2184.56, "text": " The other way you can do it is prompt pre training." }, { "start": 2184.56, "end": 2189.56, "text": " And that's what Galactica is doing, which essentially just means they do the same thing" }, { "start": 2189.56, "end": 2192.86, "text": " as instruction tuning, but they do it at training time." }, { "start": 2192.86, "end": 2198.88, "text": " So they just take a bunch of samples that also have an instruction prompt in the data" }, { "start": 2198.88, "end": 2206.08, "text": " in the data point, like, you know, do this, solve this math exercise, rewrite this code" }, { "start": 2206.08, "end": 2212.2, "text": " or something like this, or even the step by step, what not prompt, and they just throw" }, { "start": 2212.2, "end": 2219.16, "text": " that in sometimes into the into the training data set, just so that the model gets used" }, { "start": 2219.16, "end": 2222.36, "text": " to seeing this kind of instructions." }, { "start": 2222.36, "end": 2227.9, "text": " And that tends to work quite well and also tends to not be that intrusive to the rest" }, { "start": 2227.9, "end": 2230.84, "text": " of the function of the language model." }, { "start": 2230.84, "end": 2236.52, "text": " I found pretty interesting this short section on the architecture right here, some noteworthy" }, { "start": 2236.52, "end": 2239.8, "text": " things is no biases." 
}, { "start": 2239.8, "end": 2246.46, "text": " It seems like that if you make your models large enough, then you get away with essentially" }, { "start": 2246.46, "end": 2251.96, "text": " streamlining more and more, you know, with the small models, we have to have adapters" }, { "start": 2251.96, "end": 2257, "text": " and this and the convolution and the weight tying and whatnot." }, { "start": 2257, "end": 2260.88, "text": " And the larger the models get, the more you just want to do matrix multiplications and" }, { "start": 2260.88, "end": 2263.76, "text": " anything that gets in the way just gets in the way." }, { "start": 2263.76, "end": 2266.44, "text": " So biases out the window." }, { "start": 2266.44, "end": 2273.54, "text": " They have a Galu activation, which is sort of a smooth version of a relu, which makes" }, { "start": 2273.54, "end": 2279.84, "text": " things a little bit less jaggy, I guess, which might come in handy, depending on the optimizer" }, { "start": 2279.84, "end": 2280.84, "text": " you use." }, { "start": 2280.84, "end": 2286.96, "text": " They have learned positional embeddings, which again, as your stuff gets larger, you should" }, { "start": 2286.96, "end": 2291.7200000000003, "text": " just want to straightforward learn a lot of stuff instead of using they said they tried" }, { "start": 2291.7200000000003, "end": 2296.76, "text": " Alibi, which are these kind of relative positional encodings." }, { "start": 2296.76, "end": 2301.2, "text": " And that apparently did not work." }, { "start": 2301.2, "end": 2303.68, "text": " And they use byte pair encoding for vocabulary." }, { "start": 2303.68, "end": 2305.92, "text": " I don't think that's too special." }, { "start": 2305.92, "end": 2306.92, "text": " Honestly." }, { "start": 2306.92, "end": 2309.48, "text": " Let's go down." }, { "start": 2309.48, "end": 2311.96, "text": " Now we come to the results." }, { "start": 2311.96, "end": 2317.56, "text": " And their main result is really this repeated tokens considered not harmful." }, { "start": 2317.56, "end": 2322.12, "text": " With repeated tokens, what they mean is that they not only train for one epoch, as you" }, { "start": 2322.12, "end": 2328.32, "text": " can see right here, every one of those dashed lines is one epoch, and they train for multiple" }, { "start": 2328.32, "end": 2329.32, "text": " epochs." }, { "start": 2329.32, "end": 2335.12, "text": " And usually, it's it's being said that that is kind of hurtful to train for multiple epochs," }, { "start": 2335.12, "end": 2336.96, "text": " but it seems to be okay." }, { "start": 2336.96, "end": 2341.16, "text": " In this case, as you can see right here, there is like a tiny bump." }, { "start": 2341.16, "end": 2344.2799999999997, "text": " They even point the sun in the next there's a tiny bump right here." }, { "start": 2344.2799999999997, "end": 2347.8799999999997, "text": " They say this might be a double descent phenomenon." }, { "start": 2347.8799999999997, "end": 2348.8799999999997, "text": " Not super sure." }, { "start": 2348.8799999999997, "end": 2351.3199999999997, "text": " And there is also sort of a bump right here." }, { "start": 2351.3199999999997, "end": 2356.8599999999997, "text": " So they say we actually stop before that we early stop the run of this largest model before" }, { "start": 2356.8599999999997, "end": 2357.8599999999997, "text": " that." 
}, { "start": 2357.8599999999997, "end": 2363.3999999999996, "text": " So it seems that even though you train on multiple epochs, because the code because" }, { "start": 2363.4, "end": 2372.7200000000003, "text": " the the text quality of the corpus is so high, it doesn't hurt to go over it multiple times." }, { "start": 2372.7200000000003, "end": 2380.12, "text": " And only this largest model right here might be starting to overfit after epoch five, we" }, { "start": 2380.12, "end": 2385.6800000000003, "text": " don't know it might, and they'd rather early stop in front of that." }, { "start": 2385.6800000000003, "end": 2391.4, "text": " If one of the authors is watching this, is this word overleaf here supposed to be in" }, { "start": 2391.4, "end": 2400.48, "text": " here, like example curves in figure 23, overleaf for the 30 B model, I'm not sure." }, { "start": 2400.48, "end": 2404.44, "text": " Maybe maybe overleaf has some other meaning that I don't know." }, { "start": 2404.44, "end": 2406.44, "text": " And that's actually a correct word." }, { "start": 2406.44, "end": 2413.64, "text": " Any case they say they also investigate whether some of the losses so maybe papers, maybe" }, { "start": 2413.64, "end": 2416.9, "text": " code and so on, are different from the others." }, { "start": 2416.9, "end": 2420.7200000000003, "text": " And it hurts them more to be repeated in the data set." }, { "start": 2420.72, "end": 2428.08, "text": " They say we see no signs of loss heterogeneity, the loss falls for all sources." }, { "start": 2428.08, "end": 2432.9599999999996, "text": " They say we suspect there are two factors could be at play a quality factor, the curated" }, { "start": 2432.9599999999996, "end": 2438.62, "text": " nature of the corpus enables more value per token to be extracted, or a modality factor," }, { "start": 2438.62, "end": 2444.16, "text": " the nature of scientific data enables more value of token, more value per token to be" }, { "start": 2444.16, "end": 2445.2999999999997, "text": " extracted." }, { "start": 2445.2999999999997, "end": 2449.6, "text": " These two things, they're very similar, but essentially they say higher quality, plus" }, { "start": 2449.6, "end": 2454.04, "text": " that the nature of the domain itself, which I guess is also a bit higher quality, but" }, { "start": 2454.04, "end": 2462.64, "text": " in a different way, in that scientific discourse and literature often happens to be quite precise," }, { "start": 2462.64, "end": 2469.2, "text": " very logical, very non noisy in terms of linguistics, and so on." }, { "start": 2469.2, "end": 2470.7999999999997, "text": " Some people might disagree." }, { "start": 2470.7999999999997, "end": 2477.64, "text": " But so they have these hypotheses, although they say they don't know how exactly that" }, { "start": 2477.64, "end": 2483.48, "text": " would lead to the so they say the missing step of causation is what leads specifically" }, { "start": 2483.48, "end": 2486.3599999999997, "text": " from either factor towards less overfitting." }, { "start": 2486.3599999999997, "end": 2488.2, "text": " We leave this question for future work." 
}, { "start": 2488.2, "end": 2494.52, "text": " We note that the implication that the token goes to infinity, so you need infinite amount" }, { "start": 2494.52, "end": 2499.96, "text": " of training data focus of current large language model projects may be overemphasized versus" }, { "start": 2499.96, "end": 2504.52, "text": " the importance of filtering the corpus for quality." }, { "start": 2504.52, "end": 2509.92, "text": " And yeah, I think we've seen a number of papers previously that essentially came to a similar" }, { "start": 2509.92, "end": 2516.24, "text": " conclusion, namely, higher quality can make up for missing quantity." }, { "start": 2516.24, "end": 2521.82, "text": " But what which one is really the way to to go like, should we aim for more and more and" }, { "start": 2521.82, "end": 2523.66, "text": " more and more training data?" }, { "start": 2523.66, "end": 2526.1, "text": " Or should we put more work into quality?" }, { "start": 2526.1, "end": 2529.4, "text": " Essentially if you have a dollar to spend, where do you spend it?" }, { "start": 2529.4, "end": 2530.4, "text": " Right?" }, { "start": 2530.4, "end": 2534.52, "text": " So both things can make your model become better." }, { "start": 2534.52, "end": 2541.2000000000003, "text": " But what sort of the marginal value of more quality and the marginal value of more quantity?" }, { "start": 2541.2000000000003, "end": 2545.4, "text": " I think that's going to be the interesting question that has to be researched in the" }, { "start": 2545.4, "end": 2548.96, "text": " near future." }, { "start": 2548.96, "end": 2551.6, "text": " So what's also interesting, this is Big Bench." }, { "start": 2551.6, "end": 2555.32, "text": " They also evaluate on Big Bench, which is an NLP task." }, { "start": 2555.32, "end": 2557.1600000000003, "text": " So not scientific." }, { "start": 2557.16, "end": 2561.72, "text": " Maybe some subparts are scientific, but not this is a general language model task." }, { "start": 2561.72, "end": 2564.14, "text": " And they also perform quite well there." }, { "start": 2564.14, "end": 2565.7999999999997, "text": " But I also find these curves." }, { "start": 2565.7999999999997, "end": 2568.8799999999997, "text": " I think this is just what a Big Bench chart looks like." }, { "start": 2568.8799999999997, "end": 2570.64, "text": " I find these curves like what was this?" }, { "start": 2570.64, "end": 2576.12, "text": " It's like, it goes here and here and here and here." }, { "start": 2576.12, "end": 2577.12, "text": " Like, yeah." }, { "start": 2577.12, "end": 2578.12, "text": " Okay." }, { "start": 2578.12, "end": 2582.04, "text": " It's a bit noisy, to say the least." }, { "start": 2582.04, "end": 2587.68, "text": " But I guess I've seen this multiple times now, and at least the average goes up." }, { "start": 2587.68, "end": 2592.96, "text": " So I think that is a valid sign." }, { "start": 2592.96, "end": 2594.56, "text": " They have a few more investigations." }, { "start": 2594.56, "end": 2596.52, "text": " I don't want to go too much into them." }, { "start": 2596.52, "end": 2603.42, "text": " But for example, you can see right here, they test on LaTeX equation prediction." 
}, { "start": 2603.42, "end": 2610.88, "text": " So they give a prompt, the description of a formula or the name of an equation, and" }, { "start": 2610.88, "end": 2616.48, "text": " they see whether or not the language model can predict the correct equation in proper" }, { "start": 2616.48, "end": 2617.84, "text": " LaTeX." }, { "start": 2617.84, "end": 2619.44, "text": " And turns out, yes, it can." }, { "start": 2619.44, "end": 2624.96, "text": " It can actually do that a lot better than a lot of the other language models available," }, { "start": 2624.96, "end": 2631.76, "text": " which is pretty cool to see like that much of a significant boost over publicly available" }, { "start": 2631.76, "end": 2634.04, "text": " and proprietary models." }, { "start": 2634.04, "end": 2639.58, "text": " Now naturally, it's going to be, let's say, expected if you train on scientific text," }, { "start": 2639.58, "end": 2641.88, "text": " that it's going to be better on scientific text." }, { "start": 2641.88, "end": 2644.84, "text": " But it's still cool that it's not just like a 2% gain." }, { "start": 2644.84, "end": 2647.88, "text": " It's actually like a massive, massive gain." }, { "start": 2647.88, "end": 2650.36, "text": " They also have investigations into this, into reasoning." }, { "start": 2650.36, "end": 2657.72, "text": " I don't want to go into reasoning, but these are essentially these type of math problems," }, { "start": 2657.72, "end": 2664.2, "text": " like step-by-step reasoning problems that they solve using their work block tokens." }, { "start": 2664.2, "end": 2672.24, "text": " And again, here, they do outperform other models, except like here, the fine-tuned models" }, { "start": 2672.24, "end": 2681.48, "text": " are still, seems to be still ahead, although these are again fine-tuned." }, { "start": 2681.48, "end": 2684.96, "text": " Downstream scientific NLP, I'm going to jump a bit." }, { "start": 2684.96, "end": 2686.7999999999997, "text": " This I found really interesting." }, { "start": 2686.7999999999997, "end": 2690.08, "text": " This is the citation prediction task." }, { "start": 2690.08, "end": 2694.2799999999997, "text": " And specifically, obviously, they do get better as the model grows." }, { "start": 2694.2799999999997, "end": 2702.44, "text": " But specifically, what I found interesting is that the model initially is biased towards" }, { "start": 2702.44, "end": 2709.16, "text": " papers, towards predicting papers that have high numbers of citations already, which is" }, { "start": 2709.16, "end": 2714.98, "text": " reasonable like a Bayesian would totally agree that if a paper is highly cited, then it's" }, { "start": 2714.98, "end": 2721.64, "text": " more likely that the citation you want is that paper." }, { "start": 2721.64, "end": 2725.54, "text": " Someone might criticize me for that statement, but in some way, that is correct." }, { "start": 2725.54, "end": 2728.02, "text": " And these models do obviously the same mistake." }, { "start": 2728.02, "end": 2731.72, "text": " They predict papers with high citations." }, { "start": 2731.72, "end": 2733.56, "text": " They actually over predict those." }, { "start": 2733.56, "end": 2739.16, "text": " So here you can see the distribution of the ground truth of their citation prediction" }, { "start": 2739.16, "end": 2740.16, "text": " dataset." }, { "start": 2740.16, "end": 2742.4, "text": " And here you can see what the model predicts." 
}, { "start": 2742.4, "end": 2749.88, "text": " So the model over predicts more high papers that are highly cited, which I guess you can't" }, { "start": 2749.88, "end": 2751.64, "text": " really fault the model." }, { "start": 2751.64, "end": 2756.04, "text": " But what's interesting is as the model gets bigger, so this is the smallest, this gets" }, { "start": 2756.04, "end": 2762.7000000000003, "text": " bigger, gets even bigger, gets even bigger, you see that this shifts gradually towards" }, { "start": 2762.7000000000003, "end": 2765, "text": " overlapping with the ground truth." }, { "start": 2765, "end": 2770, "text": " So it means that the higher scale of the model, that the larger the model is, the more competent" }, { "start": 2770, "end": 2777.64, "text": " it is also to recognize when maybe a paper that doesn't have as many citations should" }, { "start": 2777.64, "end": 2782.68, "text": " be cited right here as a direct consequence of it having more parameters and more ability" }, { "start": 2782.68, "end": 2786.82, "text": " to remember things from the training corpus." }, { "start": 2786.82, "end": 2791.76, "text": " Because some of these papers you can see right here, they're cited maybe 10 times, right?" }, { "start": 2791.76, "end": 2794.1, "text": " And some even lower right here." }, { "start": 2794.1, "end": 2796.96, "text": " And the model actually predicts them correctly." }, { "start": 2796.96, "end": 2802.8, "text": " That's really impressive that essentially it digests 100 billion tokens of scientific" }, { "start": 2802.8, "end": 2803.8, "text": " text." }, { "start": 2803.8, "end": 2808.84, "text": " And it still remembers that this one paper was cited like three times within in this" }, { "start": 2808.84, "end": 2813.84, "text": " particular topic, and then correctly cites that paper at that place." }, { "start": 2813.84, "end": 2819.6, "text": " I'm wondering how well the ground truth data here is, because the ground truth data got" }, { "start": 2819.6, "end": 2821.84, "text": " to be predicted by humans." }, { "start": 2821.84, "end": 2827.36, "text": " And again, with the search engines that we have, I'm not sure humans could always find" }, { "start": 2827.36, "end": 2832.08, "text": " all the relevant things." }, { "start": 2832.08, "end": 2835.2400000000002, "text": " Or maybe humans disagree what is relevant." }, { "start": 2835.2400000000002, "end": 2843.44, "text": " I think the last years of reviews at machine learning conferences have shown, well, I guess" }, { "start": 2843.44, "end": 2848.96, "text": " all of scientific review has shown that humans can disagree quite heavily what should be cited." }, { "start": 2848.96, "end": 2851.6400000000003, "text": " The last investigation is into toxicity and bias." }, { "start": 2851.64, "end": 2856.2, "text": " They say we find galactica is significantly less biased and toxic than existing language" }, { "start": 2856.2, "end": 2861.04, "text": " models, which again might come from the fact that it's higher quality data, or more the" }, { "start": 2861.04, "end": 2867.48, "text": " scientific nature, which generally has less slang, less everyday conversation, less off" }, { "start": 2867.48, "end": 2873.8799999999997, "text": " the cuff stuff, and therefore might be a bit less high in these in these data sets." }, { "start": 2873.8799999999997, "end": 2879.6, "text": " So they test a bunch of data sets, including including obviously truthful QA." 
}, { "start": 2879.6, "end": 2885.7599999999998, "text": " And I'm happy to report that galactica is the first large, openly available language" }, { "start": 2885.7599999999998, "end": 2893.8399999999997, "text": " model that beats in its largest instances that beats GPT-4 channel truthful QA." }, { "start": 2893.8399999999997, "end": 2894.96, "text": " So good job." }, { "start": 2894.96, "end": 2896.72, "text": " Well done." }, { "start": 2896.72, "end": 2903.48, "text": " This is this is a moment of joy to me that it's finally been surpassed." }, { "start": 2903.48, "end": 2909.56, "text": " Now the interesting thing is that usually truthful QA is adversarially adversarially" }, { "start": 2909.56, "end": 2916.08, "text": " constructed in such a way that the larger the models get, the worse they get on truthful" }, { "start": 2916.08, "end": 2917.64, "text": " QA." }, { "start": 2917.64, "end": 2923.08, "text": " And you can see that this model right here doesn't follow that trajectory." }, { "start": 2923.08, "end": 2927.04, "text": " Now we've seen other models in the past that also have that property." }, { "start": 2927.04, "end": 2932.68, "text": " But truthful QA is specifically adversarially constructed for things like GPT-3." }, { "start": 2932.68, "end": 2939.58, "text": " And that means that galactica is significantly different from GPT-3 that as it goes up in" }, { "start": 2939.58, "end": 2946.6, "text": " size, as it gets more performant, it also does get better or more performant on on these" }, { "start": 2946.6, "end": 2949.8799999999997, "text": " whatever the task considers truthful." }, { "start": 2949.8799999999997, "end": 2954.9199999999996, "text": " So it will be really interesting to actually investigate what's happening here." }, { "start": 2954.9199999999996, "end": 2957.44, "text": " But I'm not going to do that." }, { "start": 2957.44, "end": 2961.3999999999996, "text": " I'm just happy that this now turns out." }, { "start": 2961.4, "end": 2967.12, "text": " Lastly, they say, we show that language models are surprisingly strong absorbers of technical" }, { "start": 2967.12, "end": 2968.12, "text": " knowledge." }, { "start": 2968.12, "end": 2972.32, "text": " They tend to scale smoothly with model size." }, { "start": 2972.32, "end": 2976.96, "text": " We demonstrated this for citation prediction, where a language model outperforms tuned," }, { "start": 2976.96, "end": 2981, "text": " sparse and dense retrieval pace pipelines for this task." }, { "start": 2981, "end": 2989.52, "text": " And this, as I said previously, at the beginning of the video, this is really, really interesting" }, { "start": 2989.52, "end": 2996.24, "text": " that essentially this beats search engines for citation prediction." }, { "start": 2996.24, "end": 3002.92, "text": " And it would be interesting to see how good humans are like a human plus a search engine" }, { "start": 3002.92, "end": 3009.56, "text": " like the archive search field, or a human plus galactica for finding correct references." }, { "start": 3009.56, "end": 3014, "text": " I would be super interested which combo is better right there." }, { "start": 3014, "end": 3018.28, "text": " Because again, the tools alone, they don't do stuff." }, { "start": 3018.28, "end": 3021.76, "text": " It needs to have a human in the loop and that human can always make decisions." 
}, { "start": 3021.76, "end": 3027.92, "text": " It would be really interesting to use this right here as a tool rather than just, you" }, { "start": 3027.92, "end": 3033.96, "text": " know, it's either all or nothing either the model writes the paper or the humans do." }, { "start": 3033.96, "end": 3036.6400000000003, "text": " So that was it for this paper." }, { "start": 3036.6400000000003, "end": 3040.96, "text": " The last challenge, I guess, is to find out which parts of the paper that were actually" }, { "start": 3040.96, "end": 3043.8, "text": " written by galactica itself." }, { "start": 3043.8, "end": 3051.1200000000003, "text": " I hear that the part of the abstract may be written by galactica, although I don't know." }, { "start": 3051.1200000000003, "end": 3059.04, "text": " And I don't know if the authors will ever will ever lift that secret." }, { "start": 3059.04, "end": 3061.6000000000004, "text": " Let's hope they don't because I like the mystery." }, { "start": 3061.6000000000004, "end": 3063.32, "text": " All right, this was it from me." }, { "start": 3063.32, "end": 3065.92, "text": " Sorry for the bit longer rant at the beginning." }, { "start": 3065.92, "end": 3067.6400000000003, "text": " I still hope you enjoy this." }, { "start": 3067.6400000000003, "end": 3071.42, "text": " I think this is a really, really promising direction." }, { "start": 3071.42, "end": 3077, "text": " It raises a lot of really interesting points about quality of data, quantity of data, and" }, { "start": 3077, "end": 3080.56, "text": " about, you know, doing scientific work itself." }, { "start": 3080.56, "end": 3084.6, "text": " This could be a really powerful tool for scientists of the future." }, { "start": 3084.6, "end": 3087.48, "text": " And I'm waiting for the next iterations of it." }, { "start": 3087.48, "end": 3089.6, "text": " Leave comments if you have comments." }, { "start": 3089.6, "end": 3090.6, "text": " Thanks for watching." }, { "start": 3090.6, "end": 3091.6, "text": " See you next time." }, { "start": 3091.6, "end": 3104.48, "text": " Peace." } ]
n1SXlK5rhR8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Drama] Yann LeCun against Twitter on Dataset Bias
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ylc", "yann", "lecun", "convnet", "face", "pulse", "github", "colab", "jeff dean", "hardmaru", "charles sutton", "soumith", "meredith", "timnit", "bias", "noise", "dataset", "systems", "twitter", "mob" ]
Yann LeCun points out an instance of dataset bias and proposes a sensible solution. People are not happy about it. Original Tweet: https://twitter.com/ylecun/status/1274782757907030016 ERRATA: - My specific example of the L1 regularizer wrt to Porsches and Ferraris does not actually work in this particular case. What I mean is a general sparsity-inducing regularizer. - When I claim that an L1 regularizer would make the problem worse, this only holds in certain circumstances, for example when the data is Gaussian iid. Thumbnail: https://commons.wikimedia.org/wiki/File:Yann_LeCun_-_2018_(cropped).jpg by Jérémy Barande / Ecole polytechnique Université Paris-Saclay / CC BY-SA 2.0 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! So you may have seen this already. There's a CVPR paper called Pulse. And what it does is it's a method to upsample a pixelated image in a way that makes it look realistic, but also such that the again downsampled variant matches the original downsampled image. So it's kind of a cycle consistency loss together with a GAN. And all in all, it's a method to demonstrate how you could do this. Now this has been trained on this face data set, among others. There was a user Bomzy who made this into a Colab so people could try it out and tweeted this out. And as you can see, it works pretty nicely, it gives pretty nice results on this particular data set. But of course, people started playing around with it and gave fairly funny results like this, or that. That gets more into the horrible category. These. You can see these ones; I particularly like Trump being made into a little child. So you can see, as soon as you get away from the original kind of data set modality, you are going to get these results that are off. And people started to notice that. So here you input Barack Obama, and what comes out is a fairly standard Caucasian person. Someone tweeted out, saying this image speaks volumes about the dangers of bias in AI. I guess here is where the entire story starts. So Yann LeCun weighs in and says, ML systems are biased when data is biased. This face upsampling system makes everyone look white because the network was pre-trained on FlickrFaces-HQ, which mainly contains pictures of white people. Train the exact same system on a data set from Senegal, and everyone will look African. So this is pointing out why this happens, namely because the data set is mainly Caucasian people. So the results of upsampling are going to be mainly Caucasian people. And this is like a straightforward explanation of why we're seeing what we're seeing. But of course, this was not okay. And here is where the piling starts. As an interjection, we have to talk about bias in machine learning. Technically, there's a statistical notion of bias, which has a very rigorous definition. And there is the societal definition of bias. And these two things, even though they're the same word, they're totally different. A machine learning system mainly consists of four different parts. There is a data set, the model, the loss function, and the optimization procedure. Statistical bias means whenever the model, the loss or the optimization procedure lead to a situation where the outcome doesn't reflect the distribution of the data that you input. This, for example, is achieved when you regularize your model, which means that you put some prior knowledge onto the model, you introduce bias, and therefore you choose to not accurately represent your data distribution; you regularize towards a more biased distribution that in turn has lower variance. We know this as the bias-variance trade-off. It's actually very simple, right? You have the Ferraris and the Lamborghinis, and you want to make a model that predicts the accident probability. Now it just so happens that the Ferrari drivers are a bit more reckless, and they have slightly more accidents. And now I train my logistic regression, and it tells me, okay, 60-40. Cool. But now I train my logistic regression with an L1 penalty, and I say, I want my model to be, you know, explainable. So I want it to be sparse. I want the fewest variables to contribute to it. What's the model going to say? The model is going to say, Ferrari drivers bad, Lamborghini drivers good.
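As a toy sketch of that general point, note the erratum in the description above: the specific two-brand example does not strictly work out, so this only illustrates how a sparsity-inducing regularizer deliberately trades faithfulness to the data distribution for sparsity:

```python
# Toy sketch only: a strong sparsity-inducing (L1) penalty can zero out a
# weakly informative feature, a deliberate statistical bias traded for a
# sparser, lower-variance model. (Per the erratum in the description, the
# two-brand story in the audio does not strictly hold.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                     # two weak features
y = (0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

plain = LogisticRegression(C=1e6).fit(X, y)        # effectively unregularized
sparse = LogisticRegression(penalty="l1", C=0.02,
                            solver="liblinear").fit(X, y)

print(plain.coef_)   # both coefficients nonzero
print(sparse.coef_)  # with C small enough, the weaker one is driven to zero
```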
Societal bias in machine learning is way different. An example of this is when face detection systems work well on Caucasian people, but don't work so well when faced with people from other heritages. And these societal biases are in the data set. As Yann LeCun points out here, if you change the data set, you'll change these biases. Notably, these societal biases can only be in the data set. Otherwise, you'd have to argue something like logistic regression itself has a preference for white people, or something like this. Now there is a considerable interaction effect between the two, but as Yann LeCun points out, the actual societal bias of the final system is a direct result of the bias in the data set. And he is very correct. If you train that system on a different data set, it will exhibit different biases. Societal bias cannot be in the other parts of the machine learning pipeline. They can serve to exaggerate or mitigate the bias in the data set, but they themselves can only be statistically biased and not societally biased. But Yann LeCun made the terrible mistake of pinpointing the exact root cause of this problem and not addressing the, I guess, wider-ranging problems in the field as some people perceive them. And he shouldn't have to, right? He pretty clearly says, this is why it happens. We can solve it by swapping the data set. He doesn't say anything about anything else. Namely, he doesn't say that general bias in the field is not a problem. He doesn't say that this doesn't harm anyone. None of that. He simply suggests a solution. Jonathan Peck says, well, yes, that's the point. ML researchers need to be more careful selecting their data so that they don't encode biases like this. And LeCun responds with, not so much ML researchers, but ML engineers. The consequences of bias are considerably more dire in a deployed product than in an academic paper. Which is also correct. This paper was about the method, showing that this method works on this data set. Now, Soumith here makes an interesting point, which I agree with, saying that today, ML researchers are inadvertently powering products of a lot of non-AI companies who ignorantly start with a pre-trained BERT or ResNet or YOLO from the internet, probably ignoring the license, readme, and so on. Which is a valid point, right? There are going to be people that take this and think, oh, this is a face upsampler. Cool. I can use that, without noting that this is simply an example implementation on an example data set. So you can argue that there might be some responsibility of the researchers right here. That doesn't make Yann LeCun not correct, but I'd still consider this to be like a fruitful discussion between individuals right here. But now we go on. This person saying, train it on the whole American population with an L2 loss and almost everyone will look white, or train it on the whole American population with an L1 loss and more people might look black. Stop pretending that bias does not also come from algorithmic choices. Yann LeCun never says it doesn't, right? LeCun responds now, saying, the most efficient way to do it though is to equalize the frequencies of the categories of samples during training. This forces the network to pay attention to all the relevant features for all the sample categories. And training with an L1 instead of an L2 will not even begin to solve the problem. I would pretty much argue training with an L1 loss here would exacerbate the problem, because the L2 loss is much more sensitive to outliers.
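What LeCun is suggesting, resampling so that every category is seen equally often, is close to a one-liner in most frameworks; here is a sketch of my own, assuming a PyTorch-style data pipeline:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Sketch of frequency equalization: draw each example with probability
# inversely proportional to its category's frequency, so every category
# is seen equally often during training.
labels = torch.tensor([0] * 900 + [1] * 100)   # imbalanced toy categories
features = torch.randn(len(labels), 8)

counts = torch.bincount(labels).float()
weights = 1.0 / counts[labels]                 # rare categories upweighted

sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                replacement=True)
loader = DataLoader(TensorDataset(features, labels),
                    batch_size=32, sampler=sampler)
```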
Charles Sutton says, serious question, why do you feel that it's important to make this point? Are you worried that people are going to start suing CycleGAN? And LeCun says, because people should be aware of this problem and know its cause so they can fix it. How terrible, Yann, how terrible that you dare pinpoint the exact cause of the problem so that people can fix it. The correct thing to do is to point out that everything is problematic. So Timnit Gebru says, Yann, I suggest you watch me and Emily's tutorial or a number of scholars who are experts in this area. You can't just reduce harms to data set bias. For once, listen to us people from marginalized communities and what we tell you. If not now, during worldwide protests, not sure when. So again, I feel the argument here is that you can't simply point out that it's the data set bias; you must point out the bigger problems. Which Yann LeCun does not ever deny. He simply says this particular problem can be solved by switching the data set. Nicolas Le Roux says, Yann was on my PhD jury. I am indebted to him for everything he taught me, but this constant dismissal of the harms caused directly or indirectly by the ML community is highly problematic. LeCun replies, where or when have I dismissed the harm caused by the ML community? I'm pointing out the cause of the harm so it can be fixed. You can't fix the harm unless you know what causes it. Now Le Roux says, causes of the biases are numerous; only pointing out data set bias deflects the attention away from the other, more pervasive ones that make up the whole field of bias in ML. Many people tried to get your attention about these issues, but you kept focusing on the data set. Well, because the data set is the problem right here. He doesn't dismiss any of the other things. He simply says, here, the data set is the problem, if your problem is that it doesn't work as well for non-Caucasian people. Which was never the intent of this. The intent of this was to showcase the method. I mean, ImageNet is like 60% dog species, and still people train on it to showcase their image recognition techniques. No one training on ImageNet makes a claim that they have solved computer vision for all the classes in the world in a fair manner. Timnit Gebru goes on, saying, I'm sick of this framing. Tired of it. Many people have tried to explain, many scholars. Listen to us. You can't just reduce the harms caused by ML to data set bias. He doesn't do that. Doesn't do it. So someone asks her, is he engaging in any way with you? It's appalling to see that he answers to everybody but you. Yet maybe there is a conversation going on in private and I don't want to jeopardize it. Note that Yann LeCun's tweet has 500 retweets, 1.9k likes, and comments as far as you can scroll. To which she responds with, yep, but I'm used to white men refusing to engage with black and brown women even on issues of bias that mostly affect us. I mean, he literally has ignored a whole body of work by people from that demographic, hence the statement, so not surprised. I mean, leaving aside the fact that an argument should be independent of the person making the argument, that is a low blow. Hardmaru says, I respectfully disagree with Yann here. As long as progress is benchmarked on biased data, such biases will also be reflected in the inductive biases of ML systems. Advancing ML with biased benchmarks and asking engineers to simply retrain models with unbiased data is not helpful. LeCun responds, I don't disagree with you here. I don't think my tweet contradicts your statement. Which it doesn't.
People are reading into this because he doesn't conform to the orthodoxy of pointing out that anything and everything is problematic, and simply pinpoints a particular problem; therefore he must be thinking all the wrong things. Jeff Dean says: this is a clear example, an illustration, that seemingly minor choices in learning algorithms or loss can have significant effects, so bias in ML systems is about much more than just avoiding data bias. ML researchers and practitioners must pay attention to these issues. And I think they are, and LeCun doesn't say anything against that. He says: as I pointed out in my comment to this tweet, it is much more efficient to correct this kind of bias (note that Yann LeCun actually differentiates between the different kinds of biases) by equalizing the frequencies of categories of samples during training than by hacking the loss function. Correct, because if you hack the loss function, you're trying to counter one kind of bias with another kind of bias. Meredith Whittaker says: this is very racist, and even if it recognized non-white people it would be very racist. This is cop tech. It's designed to allow those with power to surveil and control those with less power. Diverse training sets aren't going to fix it. She's advocating that we should never build these systems, and that's a discussion to be had, but let me break this to you: this isn't going to help the cops. This isn't actually giving you the face of the person that was down-pixeled. This is simply going to give you the most likely face associated with that down-pixeled picture, given the data set the algorithm was trained on. I don't get this: whenever any machine learning algorithm does anything with faces at all, people jump up going, this is cop technology. Well, in line with all the broader impact statement advice, can't it also be used to find lost children from very, very bad security camera footage? And as I already mentioned, this doesn't actually give you back the person in the down-sampled image; it will give you back the most likely person given the data set. So with that, I want to conclude this section. Please stop the witch hunting. Yann LeCun made a completely fine tweet here, and there's no reason why people should pile on him this hard. He doesn't dismiss any of the other problems just because he doesn't mention them, and while we all enjoy a good discussion where people disagree genuinely, it's not helpful to accuse him of things he never said or meant. I mean, where does this all lead? The result of this is going to be that small labs that don't have the resources to collect their own data sets, or to check for all the possible biases in their models, and that are reliant on the data sets that we do have, even if they are biased and flawed, will just be disincentivized from publishing their code or actually doing research at all. So this, like every other additional constraint on research, is going to help the large corporations with lots of money. And maybe that's just my opinion, but we should be able to just talk about a problem and the solution to it without always having to make sure that we rattle off all the different things that are and might be wrong according to the canon. And big props to Yann LeCun here for holding his own. 90% of people by now would probably be like: oh yes, I'm so sorry, I made a thoughtless comment, blah blah blah. Props to you, Yann, keep going. And with that, I conclude this section. Let me know what you think in the comments and I'll see you next time. Bye bye.
[ { "start": 0, "end": 6.32, "text": " Hi there! So you may have seen this already. There's a CVPR paper called Pulse. And what it" }, { "start": 6.32, "end": 12.88, "text": " does is it's a method to up sample a pixelated image in a way that makes it look realistic," }, { "start": 12.88, "end": 20.400000000000002, "text": " but also that the again down sampled variant matches the original down sampled image. So it's" }, { "start": 20.400000000000002, "end": 27.44, "text": " kind of a cycle consistency loss together with a GAN. And all in all, it's a method to demonstrate" }, { "start": 27.44, "end": 33.2, "text": " how you could do this. Now this has been trained on this face data set, among others. There was a" }, { "start": 33.2, "end": 41.760000000000005, "text": " user Bomzy that made this into a colab so people could try it out and tweeted this out. And as you" }, { "start": 41.760000000000005, "end": 48.24, "text": " can see, it works pretty nicely, it gives pretty nice results on this particular data set. But of" }, { "start": 48.24, "end": 56.56, "text": " course, people started playing around with it and gave fairly funny results like this, or that. That" }, { "start": 56.56, "end": 64.72, "text": " gets more into the horrible category. These. So you can see these ones I particularly like Trump" }, { "start": 65.44, "end": 72.88, "text": " being made into the little child. So you can see as soon as you get away from the original kind of" }, { "start": 72.88, "end": 80.72, "text": " data set modality, you are going to get these results that are off. And people started to notice" }, { "start": 80.72, "end": 87.92, "text": " that so here you input Barack Obama, and what comes out is a fairly standard Caucasian person," }, { "start": 87.92, "end": 94.08, "text": " someone tweeted out saying this image speaks volumes about the dangers of bias in AI, I guess" }, { "start": 94.08, "end": 101.84, "text": " here is where the entire story starts. So young Lacaan weighs in and says, ML systems are biased" }, { "start": 101.84, "end": 107.84, "text": " when data is biased. This face up sampling system makes everyone look white because the network was" }, { "start": 107.84, "end": 114.72, "text": " pre trained on flick face HQ, which mainly contains white people picks train the exact same system on" }, { "start": 114.72, "end": 120.72, "text": " a data set from Senegal, and everyone will look African. So this is pointing out why this happens" }, { "start": 120.72, "end": 126.16, "text": " namely because the data set is mainly Caucasian people. So the results of up sampling are going" }, { "start": 126.16, "end": 132.48000000000002, "text": " to be mainly Caucasian people. And this is like a straightforward explanation of why we're seeing" }, { "start": 132.48, "end": 139.28, "text": " what we're seeing. But of course, this was not okay. And here is where the piling starts. As an" }, { "start": 139.28, "end": 144.16, "text": " interjection, we have to talk about bias in machine learning. Technically, there's a statistical notion" }, { "start": 144.16, "end": 150.48, "text": " of bias, which has a very rigorous definition. And there is the societal definition of bias. And these" }, { "start": 150.48, "end": 154.79999999999998, "text": " two things, even though they're the same word, they're totally different. A machine learning" }, { "start": 154.79999999999998, "end": 160.48, "text": " system mainly consists of four different parts. 
There is a data set, the model, the loss function," }, { "start": 160.48, "end": 168.07999999999998, "text": " and the optimization procedure. Statistical bias means whenever the model, the loss or the optimization" }, { "start": 168.07999999999998, "end": 174.56, "text": " procedure lead to a situation where the outcome doesn't reflect the distribution of the data that" }, { "start": 174.56, "end": 180.07999999999998, "text": " you input. This, for example, is achieved when you regularize your model, which means that you put" }, { "start": 180.07999999999998, "end": 185.76, "text": " some prior knowledge onto the model, you introduce bias, and therefore you choose to not accurately" }, { "start": 185.76, "end": 192.32, "text": " represent your data distribution, regularize it to a more biased distribution that in turn has lower" }, { "start": 192.32, "end": 197.35999999999999, "text": " variance. We know this as the bias variance trade off. It's actually very simple, right? You have" }, { "start": 197.35999999999999, "end": 202.39999999999998, "text": " the Ferraris and the Lamborghinis, and you want to make a model that predicts the accident probability." }, { "start": 202.39999999999998, "end": 208.23999999999998, "text": " Now it just so happens that the Ferrari drivers are a bit more reckless, and they do slightly higher" }, { "start": 208.23999999999998, "end": 214.48, "text": " accidents. And now I train my logistic regression, and it tells me, okay, 60-40. Cool. But now I train" }, { "start": 214.48, "end": 219.6, "text": " my logistic regression with an L1 penalty, and I say, I want my model to be, you know, explainable." }, { "start": 219.6, "end": 224.32, "text": " So I want it to be sparse. I want the least amount of variables to be contributing to it. What's the" }, { "start": 224.32, "end": 228.95999999999998, "text": " model going to say? The model is going to say, Ferrari drivers add Lamborghini drivers good." }, { "start": 228.95999999999998, "end": 234.64, "text": " Societal bias in machine learning is way different. An example for this is when face detection systems" }, { "start": 234.64, "end": 240.39999999999998, "text": " work well on Caucasian people, but don't work so well faced with people from other heritages." }, { "start": 240.4, "end": 246.8, "text": " And these societal biases are in the data set. As Jan LeCun points out here, if you change the data" }, { "start": 246.8, "end": 252.96, "text": " set, you'll change these biases. Notably, these societal biases can only be in the data set." }, { "start": 252.96, "end": 257.76, "text": " Otherwise, you'd have to argue something like logistic regression itself has a preference for" }, { "start": 257.76, "end": 262.72, "text": " white people or something like this. Now there is a considerable interaction effect between the two," }, { "start": 262.72, "end": 270.32, "text": " but as Jan LeCun points out, the actual societal bias of the final system is a direct result of" }, { "start": 270.32, "end": 275.84, "text": " the bias in the data set. And he is very correct. If you train that system on a different data set," }, { "start": 275.84, "end": 281.59999999999997, "text": " it will exhibit different biases. Societal bias cannot be in the other parts of the machine" }, { "start": 281.59999999999997, "end": 288.24, "text": " learning pipeline. 
They can serve to exaggerate or mitigate that bias in the data set, but they" }, { "start": 288.24, "end": 293.36, "text": " themselves can only be statistically biased and not societally biased. But Jan LeCun make the" }, { "start": 293.36, "end": 299.68, "text": " terrible mistake of pinpointing the exact root cause of this problem and not addressing the" }, { "start": 299.68, "end": 306.32, "text": " I guess, wider ranging problems in the field as some people perceive it. And he shouldn't have to," }, { "start": 306.32, "end": 312.08, "text": " right? He pretty clearly says, this is why it happens. We can solve it by swapping the data" }, { "start": 312.08, "end": 317.44, "text": " set. He doesn't say anything about anything else. Namely, he doesn't say that general bias" }, { "start": 317.44, "end": 323.68, "text": " in the field is not a problem. He doesn't say that this doesn't harm anyone. None of that. He simply" }, { "start": 323.68, "end": 330.24, "text": " he simply suggests a solution. Jonathan Peck says, well, yes, that's the point. ML researchers need" }, { "start": 330.24, "end": 336.08, "text": " to be more careful selecting their data so that they don't encode biases like this. And LeCun" }, { "start": 336.08, "end": 342.08, "text": " responds with not so much ML researchers, but ML engineers. The consequences of bias are considerably" }, { "start": 342.08, "end": 348.56, "text": " more dire in a deployed product than in an academic paper, which is also correct. This paper was about" }, { "start": 348.56, "end": 356, "text": " the method showing that this method works on this data set. Now, Sumit here makes an interesting" }, { "start": 356, "end": 361.04, "text": " point, which I agree with, saying that today, ML researchers are inadvertently powering product" }, { "start": 361.04, "end": 366.4, "text": " of a lot of non-AI companies who ignorantly start with a pre-trained BERT or ResNet or YOLO from the" }, { "start": 366.4, "end": 371.12, "text": " internet, probably ignoring the license, read me and so on, which is a valid point, right?" }, { "start": 371.68, "end": 376.64, "text": " There are going to be people that take this and think, oh, this is a face up sampler. Cool. I can" }, { "start": 376.64, "end": 382.88, "text": " use that without noting that this is simply an example implementation on an example data set." }, { "start": 382.88, "end": 387.84, "text": " So you can argue that there might be some responsibility of the researchers right here." }, { "start": 387.84, "end": 392.88, "text": " That doesn't make Jan LeCun not correct, but I'd still consider this to be like a fruitful" }, { "start": 392.88, "end": 398.88, "text": " discussion between individuals right here. But now we go on. This person saying, train it on the whole" }, { "start": 398.88, "end": 404.64, "text": " American population with an L2 loss and almost everyone will look white or train it on the whole" }, { "start": 404.64, "end": 410.32, "text": " American population with an L1 loss and more people might look black. Stop pretending that bias does" }, { "start": 410.32, "end": 415.91999999999996, "text": " not also come from algorithmic choices. Jan LeCun never says it doesn't, right? LeCun responds now" }, { "start": 415.91999999999996, "end": 421.03999999999996, "text": " saying, the most efficient way to do it though is to equalize the frequencies of the categories of" }, { "start": 421.03999999999996, "end": 426.24, "text": " samples during training. 
This forces the network to pay attention to all the relevant features for" }, { "start": 426.24, "end": 431.59999999999997, "text": " all the sample categories. And training with an L1 instead of an L2 will not even begin to solve" }, { "start": 431.6, "end": 437.04, "text": " the problem. I would pretty much argue training with an L1 loss here would exacerbate the problem" }, { "start": 437.04, "end": 442.24, "text": " because the L2 loss is much more sensitive to outliers. Charles Sutton says, serious question," }, { "start": 442.24, "end": 446.40000000000003, "text": " why do you feel that it's important to make this point? Are you worried that people are going to" }, { "start": 446.40000000000003, "end": 453.12, "text": " start suing CycleGAN? And LeCun says, because people should be aware of this problem and know" }, { "start": 453.12, "end": 460.40000000000003, "text": " its cause so they can fix it. How terrible Jan, how terrible you dare pinpoint the exact cause of the" }, { "start": 460.4, "end": 466.08, "text": " problem so that people can fix it. The correct thing to do is to point out that everything is" }, { "start": 466.08, "end": 472.08, "text": " problematic. So Tim the Gibber says, Jan, I suggest you watch me and Emily's tutorial or a number of" }, { "start": 472.08, "end": 478.64, "text": " scholars who are experts in this area. You can't just reduce harms to data set bias. For once," }, { "start": 478.64, "end": 483.52, "text": " listen to us people from marginalized communities and what we tell you. If not now during worldwide" }, { "start": 483.52, "end": 489.52, "text": " protests, not sure when. So again, I feel the argument here is that you can't simply point out" }, { "start": 489.52, "end": 496.15999999999997, "text": " that it's the data set bias. You must point out the bigger problems which Jan LeCun does not ever" }, { "start": 496.15999999999997, "end": 502.32, "text": " deny. He simply says this particular problem can be solved by switching the data set. Nicole Olleroux" }, { "start": 502.32, "end": 508.15999999999997, "text": " says, Jan was in my PhD jury. I am indebted for him for everything he taught me, but this constant" }, { "start": 508.15999999999997, "end": 513.84, "text": " dismissal of the harms caused directly or indirectly by the ML community is highly problematic." }, { "start": 513.84, "end": 520.1600000000001, "text": " Where or when have I dismissed the harm caused by the ML community? I'm pointing out the cause of" }, { "start": 520.1600000000001, "end": 525.6800000000001, "text": " the harm so it can be fixed. You can't fix the harm unless you know what causes it. No. LeRoux says" }, { "start": 525.6800000000001, "end": 530.24, "text": " causes of the biases are numerous only pointing out data set bias deflects the attention away" }, { "start": 530.24, "end": 535.2, "text": " from the other more pervasive ones that make the whole field of bias in ML. Many people try to get" }, { "start": 535.2, "end": 540.8000000000001, "text": " your attention about these issues, but you kept focus on the data set because the data set is the" }, { "start": 540.8, "end": 547.5999999999999, "text": " problem right here. He doesn't dismiss any of the other things. He simply says here the data set is" }, { "start": 547.5999999999999, "end": 553.4399999999999, "text": " the problem if your problem is that it doesn't work as well for non-concassian people. Which was" }, { "start": 553.4399999999999, "end": 558.88, "text": " never the intent of this. 
The intent of this was to showcase the method. I mean ImageNet is like 60%" }, { "start": 558.88, "end": 565.5999999999999, "text": " dog species and still people train on it to showcase their image recognition techniques." }, { "start": 565.5999999999999, "end": 569.92, "text": " No one training on ImageNet makes a claim that they have solved computer vision for all the" }, { "start": 569.92, "end": 575.5999999999999, "text": " classes in the world in a fair manner. Timnigibru goes on saying I'm sick of this framing, tired of" }, { "start": 575.5999999999999, "end": 580.24, "text": " it. Many people have tried to explain. Many scholars listen to us. You can't just reduce the harms" }, { "start": 580.24, "end": 587.76, "text": " caused by ML to data set bias. Doesn't do that. Doesn't do it. So someone asks her is he engaging" }, { "start": 587.76, "end": 593.36, "text": " in any ways with you? It's appalling to see that he answers to everybody but you. Yet maybe there" }, { "start": 593.36, "end": 599.8399999999999, "text": " is a conversation going on in private and I don't want to jeopardize it. Note that Yan LeCun's tweet" }, { "start": 599.84, "end": 611.2800000000001, "text": " has 500 retweets, 1.9k likes and comments as far as you can scroll. To what she responds to with yep" }, { "start": 611.2800000000001, "end": 616.96, "text": " but I'm used to white men refusing to engage with black and brown women even on issues of bias that" }, { "start": 616.96, "end": 623.0400000000001, "text": " mostly affect us. I mean he literally has ignored a whole body of work by people from that demographic" }, { "start": 623.0400000000001, "end": 629.36, "text": " hence the statement so not surprised. I mean in absence of the fact that an argument should be" }, { "start": 629.36, "end": 638.4, "text": " independent of the person making the argument that is a low blow. Hardmaru says I respectfully" }, { "start": 638.4, "end": 643.76, "text": " disagree with Yan here as long as progress is benchmarked on biased data such biases will also" }, { "start": 643.76, "end": 650.32, "text": " be reflected in the inductive biases of ML systems. Advancing ML with biased benchmarks and asking" }, { "start": 650.32, "end": 655.84, "text": " engineers to simply retrain models with unbiased data is not helpful. I don't disagree with you" }, { "start": 655.84, "end": 660.72, "text": " here. I don't think my tweet contradicts your statement which it doesn't. People are reading" }, { "start": 660.72, "end": 665.6800000000001, "text": " into this because he doesn't conform to the orthodoxy of pointing out that everything and" }, { "start": 665.6800000000001, "end": 672.96, "text": " everything is problematic and simply pinpoints a particular problem. He must be thinking all the" }, { "start": 672.96, "end": 677.84, "text": " wrong things. Jeff Dean says this is a clear example here is an illustration that seemingly" }, { "start": 677.84, "end": 683.2800000000001, "text": " minor choices in learning algorithms or loss can have significant effects so bias in ML systems is" }, { "start": 683.28, "end": 689.12, "text": " about much more than just avoid data bias. ML researchers and practitioners must pay attention" }, { "start": 689.12, "end": 694.8, "text": " to these issues and I think they are and Lacan doesn't say anything against that. He says as I" }, { "start": 694.8, "end": 700.16, "text": " point out in my comment to this tweet is much more efficient to correct this kind of bias. 
Note that" }, { "start": 700.16, "end": 705.36, "text": " Yan Lacan actually differentiates between the different kinds of biases by equalizing the" }, { "start": 705.36, "end": 711.4399999999999, "text": " frequencies of categories of samples during training than be hacking the loss function." }, { "start": 711.44, "end": 716.6400000000001, "text": " Correct because if you hack the loss function you're trying to counter one kind of bias by" }, { "start": 716.6400000000001, "end": 723.36, "text": " another kind of bias. Meredith Whitaker says this is very racist and even if it recognized" }, { "start": 723.36, "end": 729.6, "text": " non-white people it would be very racist. This is cop tech. It's designed to allow those with power" }, { "start": 729.6, "end": 734.48, "text": " to surveil and control those with less power. Diverse training sets aren't going to fix it" }, { "start": 734.48, "end": 738.72, "text": " advocating that we should never build these systems and that's a discussion to be had" }, { "start": 738.72, "end": 744.5600000000001, "text": " but let me break this to you. This isn't going to help the cops. This isn't actually giving you the" }, { "start": 744.5600000000001, "end": 750.24, "text": " face of the person that was down pixeled. This is simply going to give you the most likely face" }, { "start": 750.24, "end": 756.4, "text": " associated with that down pixeled picture given the data set the algorithm was trained on. I don't" }, { "start": 756.4, "end": 762.48, "text": " see this whenever any machine learning algorithm does anything with faces at all. People jumping" }, { "start": 762.48, "end": 767.28, "text": " up going like this is cop technology. Well in line with all the broader impact statement advice" }, { "start": 767.28, "end": 773.8399999999999, "text": " can't it also be used to find lost children from very very bad security camera footage? And if I" }, { "start": 773.8399999999999, "end": 780, "text": " already mentioned that this doesn't actually give you back the person on the down sampled image" }, { "start": 781.04, "end": 787.04, "text": " it will give you back the most likely person given the data set. So with that I want to conclude this" }, { "start": 787.04, "end": 793.4399999999999, "text": " section. Please stop the witch hunting. Yann LeCun made a completely fine tweet here and there's no" }, { "start": 793.44, "end": 798.5600000000001, "text": " reason why people should pile on him this hard. He doesn't dismiss any of the other problems just" }, { "start": 798.5600000000001, "end": 803.44, "text": " because he doesn't mention them and while we all enjoy a good discussion where people disagree" }, { "start": 803.44, "end": 808.5600000000001, "text": " genuinely it's not helpful to accuse him of things he never said or meant. I mean where does this all" }, { "start": 808.5600000000001, "end": 813.6800000000001, "text": " lead? The result of this is going to be that small labs that don't have the resources to collect their" }, { "start": 813.6800000000001, "end": 819.9200000000001, "text": " own data sets or check for all the possible biases in their models that are reliant on the data sets" }, { "start": 819.92, "end": 825.92, "text": " that we do have even if they are biased and flawed will just be disincentivized from publishing their" }, { "start": 825.92, "end": 832.56, "text": " code or actually doing research at all. 
So this as every other additional constraint on research is" }, { "start": 832.56, "end": 837.4399999999999, "text": " going to help the large corporations with lots of money. And maybe that's just my opinion but we" }, { "start": 837.4399999999999, "end": 844.4, "text": " should be able to just talk about a problem and the solution to it without always having to make" }, { "start": 844.4, "end": 850.56, "text": " sure that we rabble down all the different things that are and might be wrong according to the canon." }, { "start": 850.56, "end": 856.24, "text": " And big props to Yann LeCun here for holding his own. 90% of people by now would probably be like" }, { "start": 856.24, "end": 861.68, "text": " oh yes I'm so sorry I did a not thoughtful comment blah blah blah. Props to you Yann," }, { "start": 861.68, "end": 865.84, "text": " keep going. And with that I conclude this section. Let me know what you think in the" }, { "start": 865.84, "end": 880.5600000000001, "text": " comments and I'll see you next time. Bye bye." } ]
BK3rv0MQMwY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[News] The Siraj Raval Controversy
[ "Science & Technology" ]
[ "machine learning", "siraj", "controversy", "scam", "scammer", "fraud", "plagiarism", "plagiarized", "course", "refund", "policy", "ai", "online", "hype", "credit", "attribution", "paper", "scandal", "news", "twitter", "neural qubit", "intellectual property" ]
Popular ML YouTuber Siraj Raval is in the middle of not just one, but two controversies: First, a lot of students of his 200$ online-course have accused him of breaking major promises he made when advertising the course and denying them refunds. Second, his paper on "The Neural Qubit" appears to be plagiarized almost verbatim. https://www.reddit.com/r/MachineLearning/comments/d7ad2y/d_siraj_raval_potentially_exploiting_students/ https://www.reddit.com/r/MachineLearning/comments/dh2xfs/d_siraj_has_a_new_paper_the_neural_qubit_its/
There is a massive controversy going on right now, and in the middle of it is Siraj Raval, a prominent YouTuber. So today I'll actually just be reporting on this shortly, not giving too much opinion, just kind of stating what's up in a very high-level overview. Because if you haven't heard of this, I think it's important that you do. And this is both sad and funny to a degree, more sad actually, but you know, make your own opinions. So Siraj is this very prominent YouTuber that makes videos, mostly, let's say, coding tutorials, or videos explaining short concepts in the field of machine learning. And recently he also branched out into other fields, like here, Watch Me Build a Marketing Startup, and so on. So what happened? There were two recent developments. First of all, he offered a course, and the course was $200. And this is one of his students on Twitter, and many more have come out. He offered this course for $200 and basically said: Make Money with Machine Learning. That was the course. He said he was going to take 500 students in this course, and that it would be personalized learning, with personalized support basically from him, and he said he was all in on this course. Then the students discovered that there were actually over a thousand people in the course, and there was almost no personalized support. He was only giving 50 minutes of his weekly time to do Q&A, plus 30 minutes of video content, and apparently he also replied to all the code submissions with the exact same email. Things like this. He actually split up the students into two different Slack groups so they wouldn't notice that there are over a thousand people, so two groups of about 500 people each. Then people wanted a refund, and apparently, when he hit the Slack limit, he transferred them to Discord, added everyone that wanted a refund to a Discord channel, and then simply banned them. I mean, yeah. There are many more stories of students about this course. Apparently this course really was kind of a bit of a scam, especially regarding the refunds. At first there was no refund policy, and then, about two weeks, I think, into the course, after he had sent the students to Discord, a refund policy appeared. Two weeks after the course started, and the refund policy said you can get a refund within two weeks of the course starting. So, I mean, this is all just kind of really, really weird. I encourage you to read up more on this, because there are many more stories about this course. So he apologized publicly and said he shouldn't have done that, he should have hired TAs, and so on. He apologized for it, and that seemed to be kind of the end of that. I don't exactly know what happened to the students; some claimed they never got a refund, and so on. But then it went on, and it went on badly for Siraj, if I may say, because he published a paper called The Neural Qubit, and people have gone through it, and it turns out that it is almost all plagiarized from one or two other papers. Actually, yeah, I think it's two papers, and it's almost all plagiarized from there. You can see, on the left, the green sections and, on the right, the red sections are exactly identical. For example, this table up here, which I think is on the next page of the other paper, is taken exactly from the other paper. If you look at these equations, they're all the same.
The sentences are exactly the same, and so on. He only changed little things. Also the diagrams, you see here on the upper left, are taken exactly from this other paper. I think he mentions this other paper; he cites it once, and he says his work is kind of a derivative of that, or leaned on that, and so on. But these aren't explicitly marked as quotes here. The only changes he made are changes like: whenever the other paper says "we can write the combined transformation," he says "I can write." "Thanks to the CV encoding, I get a nonlinear functional." There's a rule in computer science: the only person who's allowed to write "I" like this is Don Knuth, no one else. That's a holy rule, broken. So, more seriously, he changed that, and then he also used a couple of synonyms which make no sense. For example, he replaced the word "gate" with the word "door," and of course a logic gate then becomes a logic door. So here it's a non-Gaussian gate, phi. I don't know if he replaced it in this particular instance, but in this instance he did. Here it actually says gate, but sometimes it's replaced by door. And he also replaced "complex Hilbert space" with "complicated Hilbert space," which makes no sense at all. So yeah, it's funny and sad at the same time. So this happened, and again he's apologizing. He says: I've seen claims that my Neural Qubit paper was partly plagiarized. This is true. So he basically admits it, and he sort of blames it on the fact that he's doing too many videos a week, which, I agree, I can tell you that making videos is hard, even crappy videos like mine, and his are actually edited and so on. But the problem is that many more people came out and said that he did the same thing to their projects. Here you see someone saying: he did the exact same thing to our project, it took four people a couple of months to do, and he acted like it was his own. And many more came out and said he plagiarized other things as well, where he basically just takes code, gives minimal or no attribution to the original authors, and then passes it off as his own. This, after this course. Yeah: everyone, this could not get any worse; Siraj: hold my Gaussian quantum doors. Yeah. So this all happened. I mean, I encourage you to go read up on it to make up your own mind. I just want to quickly point out one thing at the end, and I won't actually show the identity of the person posting this; if you really want to, you can find out, but it's not about that person, it's about the kind of sentiment. So there is a sentiment around that you should unfollow him, because following lends credibility to him. And there is a point to be made that if prominent researchers refer to him and so on, that gives him some credibility. But I'm also very much against sort of cancel culture. It is also the case that he, no matter how much he's plagiarized, has popularized the field more than anyone else. And maybe, you know, there is a conversation to be had and a lesson to be learned without immediately canceling someone. It's a complicated issue, but I just kind of wanted to get this out there. So go read up on this. It's a wild world. So that being said, bye bye. Have fun.
[ { "start": 0, "end": 7, "text": " There is a massive controversy going on right now and in the middle is Siraj Raval, a prominent" }, { "start": 7.6000000000000005, "end": 14.6, "text": " YouTuber. So today I'll just be actually shortly reporting on this, not giving too much opinion," }, { "start": 15.08, "end": 21.6, "text": " just kind of stating what's up in a very high level overview. Because if you haven't heard" }, { "start": 21.6, "end": 31.560000000000002, "text": " of this, I think it's important that you do. And this is both sad and funny to a degree," }, { "start": 31.560000000000002, "end": 37.24, "text": " more sad actually, but you know, make your own opinions. So Siraj is this very prominent" }, { "start": 37.24, "end": 44.24, "text": " YouTuber that makes videos mostly, let's say coding tutorials or explaining short concepts" }, { "start": 44.24, "end": 51.24, "text": " in the field of machine learning. And recently also branched out into other fields like here" }, { "start": 51.24, "end": 58.24, "text": " Watch Me Build a Marketing Startup and so on. So what happened, it was two recent developments." }, { "start": 58.28, "end": 65.28, "text": " First of all, he offered a course and the course was $200. And this is one of his students" }, { "start": 65.28, "end": 72.28, "text": " on Twitter and many more have come out. And he offered this course for $200 and basically" }, { "start": 73.64, "end": 80.64, "text": " said, make money with machine learning. That was the course. And he said he was going to" }, { "start": 81.64, "end": 88.04, "text": " take 500 students in this course and it would be personal and it would be a very, very" }, { "start": 88.04, "end": 95.04, "text": " high level. He said he was going to take 500 students in this course and it would be personalized" }, { "start": 95.80000000000001, "end": 102.80000000000001, "text": " learning, personalized support from basically from him or he said he is all in into this" }, { "start": 104.36000000000001, "end": 111.36000000000001, "text": " course. Then the students discovered that there were actually over a thousand people" }, { "start": 111.36, "end": 118.36, "text": " in the course and there was almost no personalized support. So there's only, he was giving 50" }, { "start": 119.72, "end": 126.72, "text": " minutes of his weekly time to do Q&A, 30 minutes of video content and apparently he also replied" }, { "start": 129.92, "end": 136.92000000000002, "text": " to all the code submissions with the exact same email. So things like this. He actually" }, { "start": 136.92, "end": 142.92, "text": " split up the students into two different Slack groups so they wouldn't notice that there" }, { "start": 142.92, "end": 149.92, "text": " are over a thousand people. So about two, 500 people groups. Then people wanted a refund" }, { "start": 153.92, "end": 160.92, "text": " and then apparently when he hit the Slack limit, he transferred them to Discord and" }, { "start": 160.92, "end": 167.92, "text": " he added everyone that wanted a refund to Discord channel and then simply banned them." }, { "start": 168.92, "end": 175.92, "text": " I mean, yeah. There are many more stories of students about this course apparently." }, { "start": 176.92, "end": 183.92, "text": " This was kind of really a bit of a scam, this course, without especially the refunds. 
There" }, { "start": 183.92, "end": 189.92, "text": " was no refund policy and then he sent the students to Discord and they were like, oh," }, { "start": 189.92, "end": 196.92, "text": " I want to see. Then about two weeks, I think, into the course there was a refund policy." }, { "start": 197.04, "end": 201.07999999999998, "text": " After two weeks after the course started and the refund policy said you can get a refund" }, { "start": 201.07999999999998, "end": 208.07999999999998, "text": " within two weeks of the course starting. So this, I mean, this is all just kind of really," }, { "start": 209.83999999999997, "end": 216.83999999999997, "text": " really weird. I encourage you to read up more on this because there are many more stories" }, { "start": 216.84, "end": 223.84, "text": " about this course. So he apologized publicly and said he shouldn't have done that, he" }, { "start": 227.8, "end": 234.8, "text": " should have hired TAs and so on. He apologized for it and that seemed to be kind of the end" }, { "start": 238.8, "end": 242.8, "text": " of that. I don't exactly know what happened to the students. Some claimed they never got" }, { "start": 242.8, "end": 249.8, "text": " a refund and so on. But then it went on and it went on badly for Siraj, if I may say," }, { "start": 250.8, "end": 257.8, "text": " because he published a paper called The Neural Cubit and people have gone and it turns out" }, { "start": 257.8, "end": 266.8, "text": " that it is almost all plagiarized from one or two other papers. Actually, yeah, it turns" }, { "start": 266.8, "end": 271.8, "text": " out it's, I think it's two papers and it's almost all plagiarized from there. You can" }, { "start": 271.8, "end": 276.8, "text": " see on the left the green sections and on the right the red sections are exactly identical." }, { "start": 276.8, "end": 283.8, "text": " For example, this table up here, I think it's on the next page of the other paper, is exactly" }, { "start": 283.8, "end": 289.8, "text": " this from the other paper. If you look at whatever these equations, they're all the" }, { "start": 289.8, "end": 296.8, "text": " same. The sentences are exactly the same and so on. He only changed, also the diagrams," }, { "start": 296.8, "end": 303.8, "text": " you see here on the upper left, exactly taken from this other paper. I think he mentions" }, { "start": 303.8, "end": 310.8, "text": " this other paper, he cites it once and he says his work is kind of a derivative of that" }, { "start": 310.8, "end": 319.8, "text": " or leaned on that and so on. But these aren't explicitly quotes here. The only changes" }, { "start": 319.8, "end": 326.8, "text": " he made are changes like, so whenever the other paper says we can write the combined" }, { "start": 326.8, "end": 331.8, "text": " transformation, here you can see he says I can write. Thanks to the CV encoding, I get" }, { "start": 331.8, "end": 335.8, "text": " a nonlinear functional. There's a rule in computer science. The only person who's allowed" }, { "start": 335.8, "end": 345.8, "text": " to do this is Don Knuth. No one else. That's wholly rule broken. So more seriously, he" }, { "start": 345.8, "end": 353.8, "text": " changed that and then he also kind of used a couple of synonyms which make no sense." }, { "start": 353.8, "end": 359.8, "text": " So, for example, he replaced the word gate by the word door and of course a logic gate" }, { "start": 359.8, "end": 367.8, "text": " then becomes a logic door. So here it's a non-Gaussian gate, phi. 
I don't know if in" }, { "start": 367.8, "end": 376.8, "text": " this instance, but in this instance he replaced it. Here it actually says gate, but sometimes" }, { "start": 376.8, "end": 384.8, "text": " it's replaced by door and also he replaced the word complex Hilbert space to complicated" }, { "start": 384.8, "end": 393.8, "text": " Hilbert space which makes no sense at all. So this, yeah, it's funny and sad at the same" }, { "start": 393.8, "end": 405.8, "text": " time. So this happened and again he's apologizing. He says I've seen claims that my neural" }, { "start": 405.8, "end": 412.8, "text": " qubit was partly plagiarized. This is true. And he basically claims it. He sort of blames" }, { "start": 412.8, "end": 419.8, "text": " it. He says he's doing too many videos a week which I agree. I mean, I can tell you that" }, { "start": 419.8, "end": 426.8, "text": " making videos is hard, even crappy videos like mine. And his are actually edited and" }, { "start": 426.8, "end": 437.8, "text": " so on. But the problem is many people more came out and said that he did the same thing" }, { "start": 437.8, "end": 441.8, "text": " to their project. Here you see someone. He did the exact same thing to our project. It" }, { "start": 441.8, "end": 447.8, "text": " took four people a couple of months to do. He acted like it was his own. And many more" }, { "start": 447.8, "end": 457.8, "text": " came out and said he plagiarized other things as well where he basically just takes code" }, { "start": 457.8, "end": 464.8, "text": " and gives minimal or no attribution to the original authors and then passed it off as" }, { "start": 464.8, "end": 474.8, "text": " his own. This after this course, yeah, everyone, this could not get any worse. Hold my gas" }, { "start": 474.8, "end": 484.8, "text": " in quantum doors. Yeah. So this all happened. I mean, I encourage you to go read up on it" }, { "start": 484.8, "end": 489.8, "text": " to make up your own mind. I just want to point out quickly the end. And I won't actually" }, { "start": 489.8, "end": 495.8, "text": " show the identity of the person. I'm posting this if you really want to find out. But it's" }, { "start": 495.8, "end": 499.8, "text": " not about that person. It's about the kind of sentiment. So there is a sentiment around" }, { "start": 499.8, "end": 507.8, "text": " that you should kind of unfollow him. And because that lends credibility to him. And" }, { "start": 507.8, "end": 514.8, "text": " there is a point to be made of that kind of if the kind of prominent researchers refer" }, { "start": 514.8, "end": 520.8, "text": " to him and so on that gives him some credibility. But I'm also very much against sort of cancel" }, { "start": 520.8, "end": 526.8, "text": " culture. It is also the case that he, like no matter how much he's plagiarized, has" }, { "start": 526.8, "end": 533.8, "text": " popularized the field more than anyone else. And maybe, you know, there is a conversation" }, { "start": 533.8, "end": 542.8, "text": " to be had and a lesson to be learned without immediately canceling someone. That's just" }, { "start": 542.8, "end": 548.8, "text": " so that I mean, there's, it's a it's a complicated issue, but just kind of want to get this out" }, { "start": 548.8, "end": 558.8, "text": " there. So go read up on this is all it's it's yeah, it's a wild world. So that being said," }, { "start": 558.8, "end": 579.8, "text": " bye bye. Have fun." } ]
U0mxx7AoNz0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Player of Games: All the games, one algorithm! (w/ author Martin Schmid)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "reinforcement learning", "ai for go", "ai go", "ai chess", "chess ai", "stockfish", "alphazero", "alpha zero", "muzero", "player of games", "pog", "deepmind", "deepmind games", "imperfect information games", "ai for poker", "perfect vs imperfect information", "public state", "scotland yard", "ai for scotland yard", "reinforcement learning poker", "ai no limit holdem", "counterfactual regret minimization", "tree search" ]
#playerofgames #deepmind #alphazero Special Guest: First author Martin Schmid (https://twitter.com/Lifrordi) Games have been used throughout research as testbeds for AI algorithms, such as reinforcement learning agents. However, different types of games usually require different solution approaches, such as AlphaZero for Go or Chess, and Counterfactual Regret Minimization (CFR) for Poker. Player of Games bridges this gap between perfect and imperfect information games and delivers a single algorithm that uses tree search over public information states, and is trained via self-play. The resulting algorithm can play Go, Chess, Poker, Scotland Yard, and many more games, as well as non-game environments. OUTLINE: 0:00 - Introduction 2:50 - What games can Player of Games be trained on? 4:00 - Tree search algorithms (AlphaZero) 8:00 - What is different in imperfect information games? 15:40 - Counterfactual Value- and Policy-Networks 18:50 - The Player of Games search procedure 28:30 - How to train the network? 34:40 - Experimental Results 47:20 - Discussion & Outlook Paper: https://arxiv.org/abs/2112.03178 Abstract: Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games -- an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increases. Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold'em poker (Slumbot), and defeats the state-of-the-art agent in Scotland Yard, an imperfect information game that illustrates the value of guided search, learning, and game-theoretic reasoning. Authors: Martin Schmid, Matej Moravcik, Neil Burch, Rudolf Kadlec, Josh Davidson, Kevin Waugh, Nolan Bard, Finbarr Timbers, Marc Lanctot, Zach Holland, Elnaz Davoodi, Alden Christianson, Michael Bowling Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello everyone, today is a special day. I'm here, as you can see, not alone, not by myself as usual. I'm joined by Martin Schmid, who is the first author of the paper called Player of Games. This is joint work with others at DeepMind, and I have to say it's a very in-depth paper. It presents an algorithm called Player of Games that is sort of a unified algorithm to play all sorts of games. This starts at things like chess and Go, which you might know from AlphaZero, but it goes beyond. It goes to things like poker and Scotland Yard, which I found really interesting to see appear here. But the common denominator is that these new games have hidden information. So other than in chess or Go, in Scotland Yard you don't know where Mr. X is hiding. In poker, you have no clue what cards the other players hold. So you can't just look at the table in poker and decide what's the best thing to do, because you don't know a lot of things. Same in Scotland Yard. There have been algorithms for poker, right? There have been algorithms for Scotland Yard, but they were always a bit tailored to the specifics of those games. Player of Games combines a large set of techniques. And these techniques are things like: let's do search. So as we play the game, we do local search; we invest some computation at inference time to tell us what the best possible move is. But we don't want to search through the whole game, because these game trees just get very big. So that's the part that comes in from AlphaZero a little bit. But then the other part, with the unknown information, is coming in mostly from algorithms like counterfactual regret minimization, and so on. But yeah, counterfactual regret minimization, if I understand it correctly, these were sort of solvers: they either solved a complete game or they didn't, right? You'd have to traverse the whole game, and then at the end, you knew: okay, in this situation, I need to do this, and so on. And yeah, I was very excited when I saw this paper, and then I tried to read it, and I have to say, it was dense. And I'm very happy to have Martin here today to guide us a little bit through the paper. So Martin, welcome. Thank you very much for being here. Hey, I'm happy to be here. Was that a sort of good description of what I said so far about Player of Games? Oh, yes, very, very much so. Could you summarize the main components of this algorithm? So, this is a single algorithm that I can train on many, many games. What is the set of games I can train it on? So currently we use four games, the games that you mentioned: we have chess, we have Go, we have Scotland Yard, which I find a very cool and fun game, and we have no-limit poker. That's just to show the generality of it, because this is all about generality. That's why we picked two perfect and two imperfect information games. Yeah. So currently it should be able to handle most perfect and imperfect information games, as it learns from scratch from self-play, just like AlphaZero does. There are some limitations for games that this can handle, and it's best to understand the limitations only after we understand a bit more about the algorithm itself. Yeah. So the algorithm itself is composed of many parts, but as for the central concepts here, I think people kind of know what AlphaZero does, right?
It uses self-play, and it searches a game tree to a certain depth, right? So in these games, we usually have some sort of a state, and then we have various different actions that we could take in that state, and every action leads to a next state, and so on. And you can quickly see how this explodes, right? So what AlphaZero and all these search algorithms do is this kind of limited-depth search. They look maybe one or two moves ahead, but at some point they say: okay, no further, we can't afford to compute all of this tree. And that's why, at a certain depth or after a certain time, they say: okay, here we cut off, and we use a neural network to tell us how good this node is. Even though we're not at the end of the game, where we would either win or lose, we can still have a neural network that sort of predicts: this node is very good for you, or this node is very bad for you. And that's essentially AlphaZero in a nutshell, let's say: it uses self-play, it uses this tree search, and at a certain depth, it simply asks the neural network. Now, what's the problem when you have imperfect information? How does this change? Okay, that's the right question. Unfortunately, we'll probably spend quite some time understanding the intuition of it. But even for AlphaZero, it's good to step back and see where it came from. It's not that AlphaZero introduced search for, say, perfect information games, right? Search has been here since the 1950s; the first algorithms for chess did a combination of search and some value functions. AlphaZero is amazing in the sense that it learns those value functions that you just described from self-play. And it's also really, really smart about how it's going to expand its search tree. It's not like it always looks two steps ahead; it's very smart about building this tree so that it goes deep where it needs to go deep. But it still has those components, which are simply: having some search tree that it ideally expands as it thinks about a policy in the search tree, and then using some value function at the end of the search tree. Yeah, that is one of the hallmarks of AlphaZero. I think that, for example, in Go, you have so many actions, even at step one, right? If you were to consider even only three steps ahead or so, this would just blow your computation budget. But as you can see, AlphaZero sort of always starts from the root, and then it kind of goes down one of these branches that it has already explored a little bit. And in every new iteration, it re-decides which direction it should investigate. And that's a combination of what the neural network says, but also of how often it has explored something. So it says, you know: this direction is very promising, but I've explored it a lot already, so now I'll go down a different branch. And then at the end, it always gets to a leaf node that it hasn't expanded yet, right? And at that point, it asks the neural network: okay, you know, what's my policy here? What's my value? And then it prepares the next iteration, so that it can expand even more. And so over time, it builds this very targeted plan.
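To make that "promising but already explored a lot" trade-off concrete, here's a toy sketch of the PUCT-style selection rule used in AlphaZero-like search (my own simplified illustration, not DeepMind's actual code; all names are made up): each simulation walks down from the root, picking the child that maximizes its mean value plus an exploration bonus driven by the network's prior and the visit counts.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float                 # P(s, a) from the policy network
    visit_count: int = 0         # N(s, a), how often this child was visited
    value_sum: float = 0.0       # sum of values backed up through this child
    children: dict = field(default_factory=dict)  # action -> Node

    @property
    def q(self) -> float:
        """Mean value Q(s, a); 0 for unvisited children."""
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.5):
    """Pick the (action, child) maximizing Q + U, where U is the PUCT bonus."""
    total_visits = sum(ch.visit_count for ch in node.children.values())

    def score(child: Node) -> float:
        # The +1 under the square root keeps the bonus nonzero at the start.
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        return child.q + u

    return max(node.children.items(), key=lambda kv: score(kv[1]))
```

An action with a high prior that hasn't been visited much gets a large bonus, while a heavily visited branch sees its bonus shrink, which is exactly the re-deciding behavior described above.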
So the neural networks guide the tree search, as you say. That's very, very cool. And in imperfect information games, that is different, right? Yeah, so it's somewhat different, but we still wanted to have exactly what we just described. This is why AlphaZero works so well, and we still wanted it. So on a high level, you can think of Player of Games as combining AlphaZero and DeepStack, which, if you were to Google it, was the first AI to beat professional players in no-limit poker. And DeepStack already introduced some of the ingredients that we will see in this paper, namely this notion of local search in poker, and these value functions. And Player of Games is really just putting AlphaZero and DeepStack together into a single big unified algorithm. So let's maybe start with the component that you just talked about, which is the value function. If we get to the point where we understand the value function in Player of Games, then you understand like 60 to 80% of the algorithm, and of the complexity that imperfect information brings. So, the value function: if you think about how to use it, it's exactly as you said. Rather than searching all the way to the end of the game, because that would be way too long of a search, you just truncate your search and use the value function as a substitute for continued search. And that's how you use it. But what it really does is map some subproblem that you are thinking of to a game value of that subproblem, or subgame. In chess or in Go, it's really easy to think about what it really is: you get to a new board, a chess or Go board, and the value function ideally should tell you: hey, this is the value of this subgame. What it really means is: what would be the outcome if two optimal players were to continue playing this game forward, right? So that's all the value functions do. And it's the same thing they do if you try to generalize them to imperfect information games, except that suddenly this notion of a subgame or subproblem gets way more complicated. Yeah, and this is based on this notion of information states and sort of public beliefs about things. So on the left here, you've tried to show this in a diagram. And I think the notion is: when I come to a poker table, I only see what's called the public state, right? If I come to a poker table and observe a hand with all of its history, that is the public state. So I know who bet how much in which round, and so on, who acted how, but I don't see people's cards. So there could be many different cards that people hold, and some might be impossible just from the rules of the game. Maybe not in poker, but in Scotland Yard, you have this over here: there are certain locations Mr. X can be, and we want to assign probabilities to each one of them, right? If we knew where Mr. X was, the game would be easy, right? But since we don't know, we must estimate. And I think that's also something you highlight in the paper: an interesting property of these games is that if I am Mr. X, or if I play poker, I have to not be deterministic, right? Otherwise, the game would be very easy for my opponents. In poker, usually, you know, if people look at their cards and then just bet everything they have, you immediately know which hand they have, if they don't also do the same thing with other hole cards, or if they don't randomize a bit.
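A tiny toy illustration of that point (my own example, not from the paper): in a game with hidden choices, like rock-paper-scissors, any deterministic strategy is fully exploitable by a best-responding opponent, while the uniform mixture gives the opponent nothing.

```python
import numpy as np

# Payoff matrix for the row player in rock-paper-scissors:
# rows = our action, columns = opponent action, entries = our payoff.
A = np.array([[ 0, -1,  1],   # rock
              [ 1,  0, -1],   # paper
              [-1,  1,  0]])  # scissors

def exploitability(strategy: np.ndarray) -> float:
    """Best-response payoff the opponent can secure against our fixed strategy."""
    opponent_values = -(strategy @ A)  # opponent's expected payoff per pure response
    return float(opponent_values.max())

print(exploitability(np.array([1.0, 0.0, 0.0])))  # always rock -> 1.0, fully exploited
print(exploitability(np.array([1/3, 1/3, 1/3])))  # uniform mix -> 0.0, nothing to exploit
```

Optimal poker play needs the same kind of randomization, for example occasionally betting big with weak hands, so that a big bet doesn't reveal the hand.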
So necessarily, other than, let's say, in chess, the optimal strategy is a distribution over actions. And you have to randomize in order to hide your private state a bit. So what we see are these public states, and what we can estimate are these things which are called the ranges. These are distributions over what private states the players could hold. And the difficulty in this tree search comes from the fact that you can only go from a public state, yet you need to consider all the possibilities of the private states. So you can't just say, this is the situation; you have to consider all of them at the same time, right? Yes, exactly. That's what you basically need in order to generalize those subgames or subproblems to imperfect information. It's not hard to see that all perfect information games are just a special case where you have just a single possible state for the player. Take poker, since you just talked about poker and public states; that's a perfect example. For a subgame in poker, it makes little to no sense to ask: what's the value of a subgame where I hold a pair of aces? That's a pretty much ill-defined subgame. What you need to do is this: given a public state, which is, as you say, I come to a table and I see everything that I could have observed as a public observer, so that's basically my state; given this observation, there are a lot of possible individual states of the game that are consistent with it. And these simply correspond to all the different cards the players could be holding. A subgame is simply defined by the combination of this public state, the thing I get to observe as a public observer, and a distribution over all the possible private states that could be happening right now. And given this distribution on top, this defines a well-defined subgame. And given this well-defined subgame, I can suddenly ask questions like: what would be the values of this subgame, given that all the agents play the subgame optimally, just as in chess or Go? Yeah, we used to play poker a lot in high school, and frequently you try to guess not what hand your opponent has, but what their range is, right? So you consider: okay, it's often going to be these cards, it's less often going to be these cards. I think that mirrors very much the reasoning that people actually have in these things. And now, given this, one of the core things here is this neural network that is supposed to tell us what the values of the subgame are. And this, as you said, gets as input a description of the public state, and it also gets as input your beliefs, so the ranges of the players: what their private information could be, and how likely. And if I remember correctly, these ranges are just a result of their strategies, right? If you know the strategies of the players, then you can calculate what their ranges are.
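That calculation is just a Bayesian update, sketched below. The `strategy[h][a]` lookup, the probability of taking action `a` while holding private state `h`, and the toy hands are illustrative assumptions.

```python
def update_range(range_, strategy, observed_action):
    # Bayes' rule: re-weight each private state by how likely the observed
    # action was under it, then renormalize.
    posterior = {
        h: p * strategy[h].get(observed_action, 0.0)
        for h, p in range_.items()
    }
    total = sum(posterior.values())
    # A zero total means the observation was impossible under this strategy.
    return {h: p / total for h, p in posterior.items()} if total else posterior

# Toy usage: a player who bets high mostly with aces.
range_ = {"AA": 0.1, "72": 0.9}
strategy = {"AA": {"bet_high": 0.9, "check": 0.1},
            "72": {"bet_high": 0.1, "check": 0.9}}
print(update_range(range_, strategy, "bet_high"))
# {'AA': 0.5, '72': 0.5}: aces became much more likely after the high bet
```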
For example, if the strategy is "I always bet high when I have aces", then if the player bets high, aces are quite likely. You put all of this into a neural network, and the neural network gives you policies, which is understandable: it's how a player would act in a given situation. This is also what AlphaZero gives you. But then you have these counterfactual values, and this is a bit of a new term that, I think, only appears in imperfect information games. What is a counterfactual value? Right. So in this case, this value function is very much analogous to AlphaZero, in the sense that you have values and a policy for a subgame, and we use them in a very similar way. Except, as we just described, in a subgame there are many possible states the game or the players could be in, given a public state. And the value function, given this subgame, outputs not just a single value that says, hey, the value of this subgame is five. It actually outputs a value for every possible player state given the subgame. So in hold'em poker, say, I could be holding a thousand different hand combinations. The network will tell me: hey, in this subgame, if you were to hold this particular pair of cards, this is the value. And it will tell me such a value for all the possible states I could be in. Yeah, okay. And the neural network, how is it built to output this? Does it have, let's say, one output head? So it outputs like a thousand-dimensional vector, one entry for each? Okay. So is it fair to say that your algorithm would struggle with games where the space of possible private states is huge? Yeah, this is brilliant. This is exactly why I said it would be nicer to understand the limitations once we get a bit deeper into the algorithm. And this is exactly the main limitation that we currently have, because in some games this just explodes. Yeah, I see. Okay. And you have this network, and you train it in some way via self play. And now we get to the part where you generalize this search procedure. Let me see, oh, this is here. So this search procedure: as we said, in AlphaZero, you're at some state in the game, you've played until this state, and what you do is this search, using an internal simulator. This is at inference time. So you consider all your actions, you choose one, given the neural network's output and the current search statistics. You go here, you ask the neural network, well, what's my value here, and you expand that node. And then in the next iteration, you start again from the root. You expand maybe the same or maybe another action; it depends, but let's say it's the same right here. If it's already expanded, you go further down the tree. And you would make many iterations, let's say 50 iterations or something like this. In every iteration, you go down the tree, you find a node that you haven't expanded yet, and you expand that node. In Player of Games, this is quite a bit more intricate. We also have many iterations, but within each iteration, we have to do a lot more work in order to actually deal with this uncertainty. So could you describe a little bit how your search algorithm works? Yes, happy to.
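Before the search details, here is a rough sketch of the network interface just discussed: one trunk, a policy head, and a value head that emits one counterfactual value per possible private state per player. The layer sizes, the flat featurization, and all dimensions are made-up assumptions; the interview only establishes that the poker network is feed-forward and outputs per-private-state values.

```python
import torch
import torch.nn as nn

class CounterfactualValueNet(nn.Module):
    def __init__(self, public_dim=64, n_private=1000, n_actions=10,
                 n_players=2, hidden=256):
        super().__init__()
        # Input: public state features plus every player's range (beliefs).
        in_dim = public_dim + n_players * n_private
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One policy entry per (private state, action) pair, and one
        # counterfactual value per (player, private state) pair.
        self.policy_head = nn.Linear(hidden, n_private * n_actions)
        self.value_head = nn.Linear(hidden, n_players * n_private)

    def forward(self, public_state, ranges):
        # public_state: (batch, public_dim); ranges: (batch, n_players, n_private)
        x = torch.cat([public_state, ranges.flatten(start_dim=1)], dim=-1)
        h = self.trunk(x)
        return self.policy_head(h), self.value_head(h)
```

The value head's width also makes the stated limitation visible: it grows linearly with the number of possible private states, which is exactly what explodes in some games.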
So, as we said at the beginning, Player of Games is a hybrid of DeepStack and AlphaZero, and the search algorithm is a perfect example of it being a hybrid. What DeepStack already introduced is this: it had a fixed search tree. You are poker players, so, what it really did is search all the way through a single betting round, and it used value functions at the end of the round. And it ran this counterfactual regret minimization, which we might come back to later, but you can think of it simply as some policy improvement algorithm given a fixed search tree. It would iterate and improve the policy, and as it was walking up and down the tree finding a good policy, it would use the value function at the end of the search tree, the very same value function that we just talked about. Now, Player of Games adds this smart idea of AlphaZero, where it also tries to dynamically expand the search tree, rather than having a fixed search tree. And the way it does this: we simply intertwine two phases. In one phase, given some search tree, we try to improve the policy within that tree. In the second phase, it simply tries to expand, just like AlphaZero does, using the same, say, PUCB formula: we try to expand the search tree where we think we need to expand it. And then we simply go back and forth: expand the tree, improve the policy, expand the tree, improve the policy. Yeah, so this is built on an algorithm called counterfactual regret minimization. And if you were to just apply counterfactual regret minimization on its own, that is a solver: I give it a game description, and it will expand the entire game tree, every state there is in the game. And it will just go from node to node in this tree and improve the policy of both players, and it does this for many, many iterations; it improves here, here, here, everywhere in the game tree, until the whole game tree is approximately optimal. And the biggest game that has been solved so far, as you describe in the paper, is heads-up limit hold'em, is that correct? Fixed limit? Yes, heads-up limit hold'em. Yeah, that's actually a solved game. It was done a few years ago by the computer poker research group at the University of Alberta, led by Michael Bowling, and as far as I know it's still the largest game to be solved. And you used the word solver, which is a perfect name, really. The way I think about a solver is: you give me some small or medium sized game that I can fit into a big table on my computer, and solving it simply means finding a policy for all the possible states in the game. It's easy to see how to do it in, say, tic-tac-toe or other small games. And if you were to fit chess on your computer, then again, it's not hard to see that you could just solve it, given the algorithms that people are familiar with. The thing is, even for a really, really small imperfect information game, you do have to use algorithms that can handle imperfect information. Often people just use algorithms that they like, say, I don't know, policy gradient methods, Q-learning or whatever, and if you just run them on an imperfect information game, they just don't find a good policy.
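A compact sketch of the two intertwined phases, under loud assumptions: `cfr_update` and `expand_leaf` are hypothetical placeholders for a counterfactual-regret-style policy improvement pass and a PUCB-style expansion step. Regret matching, the update rule at the core of CFR, is shown in full.

```python
def regret_matching(regrets):
    # Play each action in proportion to its positive accumulated regret;
    # fall back to uniform if nothing has positive regret yet.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / len(regrets)] * len(regrets)

def search(tree, value_net, iterations=400):
    # The back-and-forth loop described above: improve the policy inside the
    # current tree, then grow the tree where the statistics say it matters.
    for _ in range(iterations):
        cfr_update(tree, value_net)   # phase 1: CFR-style policy improvement
        expand_leaf(tree, value_net)  # phase 2: PUCB-style tree expansion
    return tree
```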
Yeah, I mean, intuitively, it's a bit like this: if I start in some situation in chess and I make some moves, that original state is still the same, right? I can look back; I came from there. But if I'm in poker, in some state, and I make some moves, that kind of changes the past. Because, say you're my opponent in poker: I look at what you do, and that changes my beliefs about what cards you had back in the past. Then I go back and I'm like, oh, okay, you did this and this, so you're probably not holding, you know, a king and an ace, given that you've done something in the future. And I think this fact, that your future actions change the past, is what, in my opinion, makes this so much more intriguing and complicated. So on the left side here, I think you have a local search tree, right? It's expanded until some depth. At that depth, you ask the neural network for a summarization of whatever happens below. And within that tree, you run this counterfactual regret minimization, or something akin to it, and you simply want to find the best policy within that tree. Which is more complicated than in AlphaZero, where I just visit every node once, because the future doesn't change the past: once I've computed a node, I only expand things below it, and that never changes that node. However, in imperfect information games, if I change something below, all of a sudden the past changes, so I need to sort of update and converge the whole tree. And then, once you've done this for a number of steps, on the right side, you add a new node by essentially doing what AlphaZero does: you go to a leaf node, you choose some action in some information state, you perform that action, and that expands one more node. Yes, and you know, this is excellent. The property that you just described, the future changing the past, is also something that makes search in particular so much more complicated. Because you can see it as a two-step process: if you were to just solve some game outright, you could do it; even that is more complicated because of what we just described, but you could do it, there are ways to solve imperfect information games. But we are doing search here, and the property that you talk about makes search so much more complicated. And the reason is that in imperfect information games, you cannot just glue together optimal policies and hope that the resulting policy for the full game will be optimal. That is something that many search algorithms just rely on, and it simply holds in perfect information games: there, if you were to pick an optimal policy in any state and just put those policies together, the result is an optimal policy. In imperfect information games, it does not hold, because of exactly what we just described. But then how can you even do search at all, if search is all about local reasoning? You reason locally, yet you somehow need to make sure that the resulting policy for the full game is still optimal. Yeah, it's interesting. So essentially, for every step that AlphaZero does, where it expands a new node, you also expand a new node, but then you have to get the entire tree in order again.
So you expand the new node, and then you have to do the whole update of the whole tree for a bunch of iterations before you can expand another one, such that everything stays consistent. Yeah, okay. That gives a bit of an impression of why this is much more complex, right? Yes. So this is essentially what happens at inference time: we do this search. And now comes the time when we actually need to train this. We have the ingredients now: we have the search algorithm, we have the neural network. And now we need to train it. And you have a method, or various methods. Maybe you want to describe it yourself a little bit, because this is the part where I stumbled a little. Yeah, I will start on a very high level. So the idea is, again, we want to take the self-play-style method from AlphaZero, so that you just throw the algorithm into a game, and it improves as it plays, and it gets better and better. What that really means is that you are improving your value and policy, right, the network that we just discussed. On a high level, since you are using your value function in your search, you basically call your neural network with some inputs: some public states, some beliefs. And this idea of queries is simply that every single time we call the network, we call this a query; we are querying the network for some value of some subgame. So we store this pair of public state and beliefs. And then we go through all those queries, and we simply try to improve the network on the states and the ranges that the network has been queried on, because this is probably what's important; that's what occurred during the self play. So you collect the training set as you go, similar to AlphaZero. The training set for the next iteration is whatever the network had to do during this iteration; it's not just a random sample of states. And do you train in the same manner as AlphaZero, where you train to predict your own future outputs, is that approximately right? Let's distinguish: if, one or two or three steps in the future, you actually win or lose the game, you can train on your reward of the game. But AlphaZero, if it doesn't win or lose the game in the next step or so, also tries to predict its own output; it tries to improve that way, using TD lambda. You here have TD one, right? So your targets: what do you target? What do you give the network as labels? Okay, so this is slightly more complicated here, in the sense that each query basically defines you something, right? It's a public state and ranges. And given such a subgame, the ideal target for your neural network would be to simply solve the game. That's the ground truth that you would want your neural network to learn, or to move toward. But rather than solving it directly, because again, these subgames will still be way too big as they occur during the gameplay, we substitute the full solver with a small search. So rather than fully solving the game, we use the same method to basically do a search, and the outcome of that search, basically a small solver, is the target. Okay, so you do the same thing as you do during inference when you actually want to make a move.
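A high-level sketch of that loop, with hypothetical helpers: `play_episode` runs self play and logs every (public state, ranges) pair the network was queried on, `small_search` stands in for the same search procedure used at play time acting as a small solver, and `fit` is a supervised update toward the search output. None of these names come from the paper.

```python
def train(net, env, n_episodes=1000):
    for _ in range(n_episodes):
        # Self play: every time the search consults the network, the
        # (public_state, ranges) pair is logged as a query.
        queries = play_episode(env, net)
        for public_state, ranges in queries:
            # Fully solving the queried subgame is far too expensive, so a
            # small search rooted at the query acts as a stand-in solver,
            # and its output becomes the supervised target.
            target_values, target_policy = small_search(public_state, ranges, net)
            fit(net, (public_state, ranges), (target_values, target_policy))
```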
So during that inference, you're going to make some queries to the network. You take these queries, and these, I think, are the red dots here, right? Exactly. So, maybe to say this again: during the inference, you do these queries, and you store them in this buffer. And these now act as the root nodes for yet another search, which is exactly the same as the previous search, right? And so you sort of rely on the fact that this search procedure can give you a better output than the neural network itself, right? Yes. Right, so at the query, the neural network will output some value, like, the value is eight, or one value for each information state. But the whole point, and that's of course the reason we do search in the first place, is that doing search gives you a better estimate than just using the neural network at the start. So doing search, and then asking the neural network further down the line, gives you a better estimate. And yeah, it makes sense: you start wherever you would ask the neural network, you use local search to get a better value, it doesn't need to be a perfect one, just a better one, and then you train the neural network to predict the result of the search. That's exactly it. One would hope, though, that after a while, you know, if I do this again and again and again, at the end I wouldn't even have to do search anymore during inference. Have you tried not doing search at all, just using the policy output of the neural network during inference? Is that something that generally works? Because, you know, I trained it to predict the output of the search, so technically it should kind of learn it, no? Yes, the same way you could simply use just the policy network in AlphaZero and let it play chess, right? You can do it, and people have done it. It still plays quite good chess, but it's far, far below the full strength of search. So yes, at the end of the day, even the policy network is quite good, but it's not as good. Okay. Yeah, I mean, it shows a little bit that the search is in fact really necessary. Yeah, so I think we're almost getting to the results. Would you maybe summarize the results a little bit? I think if people are super interested, they can go into the paper and into the tables, but maybe you can just summarize them a bit: you compared against AlphaZero in perfect information games, you compared against dedicated algorithms like Slumbot in poker, and you even compared against a dedicated AI for Scotland Yard. What were the results, generally? So, in general, this algorithm is all about generality, which means it is not as strong as AlphaZero in perfect information games, where AlphaZero was designed to shine. This very much tries to be general, rather than being the best chess or the best poker agent in the world. It's just trying to be really, really good at all of them at once. So if a perfect information game is just a special case of an imperfect information game, what is then the difference between Player of Games and AlphaZero? Like, why couldn't it reach the same performance?
So on paper, it could, except that, for example, the policy improvement algorithm that we use, the counterfactual regret minimization, has to also be able to handle imperfect information games. That's why it's not going to converge as nicely and as quickly as an algorithm designed for perfect information only. So it's the price for the fact that you expect to sometimes see an imperfect information game. Would it be fair, would you estimate, that if you just put in more resources, more computation time, it would actually reach the level of AlphaZero? I don't think it necessarily would. I mean, on paper, all of these would eventually converge, right? Everything works on paper, in the limit. In practice, AlphaZero and MCTS is probably always going to be ahead. But we don't really care, right? Like, I would be happy with a single algorithm for everything that's better than humans. I don't care if it's better by a little bit or by a billion. Yeah. And then in poker here, you compared against Slumbot, which you say is the best open source or best available poker bot to date. And this is no limit poker now, right? This is way too big of a game to solve. And I think for the other ones, you simply compare to the numbers from their papers, is that right? Do you mean for Slumbot, or for Scotland Yard? Are we talking about poker? Oh, sorry, yeah, let's talk about poker for a while. So Player of Games here gains, what is this, seven milli big blinds per hand over Slumbot? Yeah, and we could have beaten Slumbot by a lot more. We just decided, oh, this is good enough to put into a paper; we can come back to it later. Like, as you know, it very much depends on how much time you spend tuning the network architecture and on how long you train. This is just to show: hey, there's already an algorithm that can do all of these games, and it still plays them really, really well. Yeah. And your neural network, just to say, it's a bunch of feed-forward layers, correct? It's not a complicated thing. For poker, it's just a feed-forward network; for chess and Go, we try to mirror some of the older AlphaZero architectures. Yeah. Okay, and here on the right side, you have PimBot, which is Scotland Yard specific. But maybe people don't know what Scotland Yard is. Maybe you can describe in ten seconds what Scotland Yard even is as a game. There's a figure, maybe, right? There is this figure, right. Yeah, there's no point explaining the rules in detail, but on a high level, there's a graph, and you are trying to chase down a stone that's called Mr. X. You have five detectives that are trying to chase the stone down. The trick is that the stone, the Mr. X that you are trying to chase down, is only partially observable. That's what makes it imperfect information. And you have to basically reason about the states where he could be hiding, form some beliefs about his state, and try to chase him down. So yeah, I guess that's all people need to know. You can spend these funny tickets on taxi rides and various other methods of transport, and then every ten turns or so, Mr. X has to reveal his position. And that's how you form a belief about where Mr. X could be, given which actions Mr. X took. So this is quite a specific game.
So it seems to me like a dedicated algorithm could do very, very well in this game, because it could exploit various aspects of the game; you could hard-code in various things the AI could abuse. And here we see a graph of the win rate of Player of Games against, what's on the x axis here? This is the number of search iterations. So PimBot is a local search algorithm as well? Yes, it's a variant of MCTS. And this is to show that, regardless of how much time or search we give the MCTS, the hard-coded, hand-tuned algorithm, even if it gets like a billion or so search iterations, it's still behind Player of Games, because Player of Games is using this general self-play learning method. Yeah, so the final win rate here, I guess, is at 55% or something like this, and that is with a huge number of iterations for PimBot. Yes, and Player of Games is using only like 400 iterations on our side. So yeah, as you can see, regardless of the scale, we converge to a better policy. And would you attribute that to the use of self play to improve the strategies? It's a combination of that, and also of the fact that Player of Games is built on sound methods. Later in the appendix, if people are curious, they can open the appendix: we show that on small games, where we can exactly measure how close to an optimal policy our resulting search policy is, we get closer and closer as time goes on. So basically, we are only limited by the power of the neural networks, and we have some guarantees that we can get to an optimal policy. Other methods that are based on MCTS are not guaranteed to converge, even on small games. So there's also the limit coming from the fact that those methods are not sound. And just to get an idea of the scale: we saw poker, Scotland Yard, and here we have chess and Go and so on. Can you give us a number of just how many GPUs, TPUs, whatever, do I need to run, and for how long, to get anywhere close to what you did? I see. So I think the easiest for us was poker; that, people can probably train on a few GPUs. By far the hardest is Go, where we used a lot of GPUs, but that was simply because we had them available. Yeah, okay. And you did say in the paper that, for comparison reasons, you used roughly the same amount of compute as AlphaZero did. That was tricky, because we did not want to claim that this is now a state-of-the-art chess agent; then we would have to do all the proper and hard measurements, right? Then you have to use clock time, and suddenly, if you use clock time, you have to argue that you used the same hardware and everything, and it gets more tricky. So we just say, well, we call the network as often as AlphaZero did, so it should be roughly the same, but we don't claim to be stronger. Okay. I mean, I think the community appreciates a fair comparison, instead of every paper having the new best state of the art, especially in RL. It seems clear just from the graphs here, just from the lines, that you can just invest more compute and get better. And that's what we also saw with AlphaZero: it used to be slightly superhuman, and now it's like, you know, not even all humans together will ever match AlphaZero in any of these games, which is crazy.
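To illustrate the soundness point from a moment ago: on a game small enough to enumerate, "distance from optimal" can be measured directly as exploitability, that is, how much a best-responding opponent wins against your fixed strategy. The matching-pennies payoffs below are a toy assumption, not one of the paper's benchmark games.

```python
# Zero-sum matching pennies: we win on a match, lose on a mismatch.
PAYOFF = {("heads", "heads"): 1.0, ("tails", "tails"): 1.0,
          ("heads", "tails"): -1.0, ("tails", "heads"): -1.0}

def exploitability(strategy):
    # Value a best-responding opponent forces against our fixed mix; the
    # game value here is 0, so any gap below 0 is pure exploitation.
    worst_case = min(
        sum(p * PAYOFF[(a, opp)] for a, p in strategy.items())
        for opp in ("heads", "tails")
    )
    return -worst_case

print(exploitability({"heads": 0.5, "tails": 0.5}))  # 0.0: optimal, unexploitable
print(exploitability({"heads": 0.8, "tails": 0.2}))  # 0.6: predictably exploitable
```

A sound algorithm drives this number toward zero as it runs; an unsound one may plateau above it even on a game this small.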
Yeah, exactly. Do you have a bit of a demonstration ready? You told me of Player of Games playing Scotland Yard, so we can kind of see what's going on. Yeah, let me see if it's still working; it was working this morning. We never planned to show it externally. It was designed for our debugging purposes, but it would be a fun demo, just so that people who are not familiar with Scotland Yard maybe get some intuition about the game. Okay, so hopefully you can see this. Yeah. Let me very quickly explain what this is about. I am now playing as Mr. X, which is this black color here, and I can move along the edges of this graph. And as you were talking about those taxis and buses: you can see that the edges have different colors. All of these are yellow, but this one is blue, and they correspond to different means of transportation that I get to use; say, yellow stands for taxi, I think, and blue stands for bus. Now, the detectives do not get to see where I am, but they do get to see which color of transport I used. So right now I'm here, and say I want to go to 49, and I want to use a taxi to get there. So yeah, we have been talking for a while, so maybe it's not alive anymore. Yeah, it probably died. You have scaled to zero, proper engineering. Nice. Yes. So yeah, it doesn't work right now, but at least people can get an idea of what would happen. Yeah. So you'd need to pretty quickly reason, and the longer you don't see Mr. X, the fuzzier your idea gets of where Mr. X is. Do you visualize this distribution, the belief distribution of where Mr. X is, for debugging? We did. I don't think it's turned on right now, but that's exactly what we tried to do at some point. Yeah. And did you observe that the longer they didn't see Mr. X, the more spread out, the more unsure they became? Is that something you could clearly observe, or is that something you just feel as a human? Oh yes, and it was actually really, really fun to see. Yeah, crazy. And so one improvement, let's say, or one follow-up to AlphaZero was the MuZero algorithm, where the crucial difference is: in AlphaZero, you need the simulator. You need to be able to simulate a lot of games internally; you need to know what happens when I take some action, what kind of state results from that. And MuZero alleviated this by going to a latent state and training everything in latent space. Is this something you could do with Player of Games? No, but that's arguably limitation number two. I think the biggest one is the large belief space, but the second one is that we currently need a model of the environment, and MuZero doesn't even need that. So you can think of Player of Games as running behind the AlphaZero lineage and trying to generalize things, but we are still looking behind in that regard. And maybe a more conceptual question here: in these entire game trees and so on, for example, in Scotland Yard, I don't know where Mr. X is, but Mr. X's movements are kind of deterministic, right? If Mr. X uses a taxi to get from 49 to 48, Mr. X is now at 48. However, in poker, for example, if I bet something and my opponent calls, the flop will reveal random cards.
How does this work? And this is different from me not knowing what my opponent's cards are, right? It's pure randomness within the game. Is that something that makes things very complicated, or where is the complicated part? Like, how do you deal with stochasticity and with randomness in games, which is also something that doesn't exist in chess? That part is actually quite easy. It's simply baked into the model, and that's pretty much it. Okay, so you can condition on previous information, and the model will compute whatever the expected value is of any future cards that could be drawn, like on the flop and turn and river. You can think of it as basically: you just draw the search tree at the beginning, and some of those nodes you can think of as a chance actor playing. You simply have a fixed policy in that node, and a lot of actions. That's it. So when you expand the search tree, do you need to expand once for every possible, let's say, flop combination there is? Yes. Okay, that is a lot of combinations, right? Or, if you are smart about it, you can again substitute that with a neural network. Yeah, okay. Do you think humans do something similar? Because in AlphaZero, you can sort of think that you do the same internally, right? You kind of think ahead until some depth, and then you estimate a little bit. Do you think Player of Games, or in general these algorithms with imperfect information, is also a little bit like how humans do it? It seems implausible that I go through all the different flop combinations there could be. Or do you think there is a fundamental difference between how humans tackle these problems and how these algorithms do? So, I would say we would both agree that in Scotland Yard, you probably do the same, right? Like, looking forward: what if I go here? What if the opponent goes there? And then you do this search forward as you are thinking about the beliefs of the opponent. Yeah. So in Scotland Yard, I would say yes. In poker, it's simply complicated by the fact that suddenly the belief space is big. For humans, even a thousand is probably too much. And probably humans use some kind of general representation there already, I don't know. Cool. And what is next in this line? I mean, now you've built a big unifying algorithm that can tackle any sort of game, as long as it has a simulator. And you said it's probably not possible to go without a simulator. So what's next? It seems like you've achieved kind of a unification; where do you go from here? I think the most natural path is to remove the constraints that we just discussed. This is going to fall apart if there's a big belief space, and it still needs a model. And I think this is something we probably want to play with next. Like making algorithms that are truly general; I think this is a big step in that direction, but it's not to say that we are finished. And so do you think, if this line of work continues, it would be an algorithm that at some point could be thrown at pretty much any problem, like Atari, but even beyond reinforcement learning: question answering, visual classification, what not, or even robots, and so on? Or do you think that is kind of a very different line of work? I mean, I did work on question answering and generation before.
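Picking up the chance-node remark from above: randomness such as the flop can be modeled as one more actor in the tree with a fixed, known policy over outcomes. The `outcomes` mapping and both helpers below are illustrative assumptions; enumerating every outcome is the exact version, and sampling is the cheap approximation for when there are too many outcomes to expand.

```python
import random

def chance_node_value(outcomes, child_value):
    # Exact version: one branch per outcome, weighted by its known probability.
    return sum(p * child_value(o) for o, p in outcomes.items())

def sample_outcome(outcomes):
    # Cheaper alternative when there are too many outcomes to enumerate,
    # e.g. all possible flops.
    r, acc = random.random(), 0.0
    for outcome, p in outcomes.items():
        acc += p
        if r <= acc:
            return outcome
    return outcome  # guard against floating point round-off
```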
So yes, sorry, on a high level, this is certainly the dream, right? Not just of the team who worked on this: quite a few smart people at DeepMind are trying to make something that's truly, truly general. You don't really care, well, the algorithm doesn't really care, what environment you throw it into. You just throw it there and say, okay, learn. So that's the direction we are going. Whether Player of Games can walk all the way there, or whether some of the ideas will simply be used in other approaches, we shall see. Cool. Excellent. Well, in this case, Martin Schmid, thank you so much for being here. This was, I promise to everyone, way better than if I had done this myself. So thanks a lot for joining us. This was really awesome. Thank you for having me. This was fun. Thanks.
[ { "start": 0, "end": 7.28, "text": " Hello everyone, today is a special day. I'm here, as you can see, not alone, not by myself as usual." }, { "start": 7.28, "end": 13.84, "text": " I'm joined by Martin Schmidt, who is the first author of the paper called Player of Games." }, { "start": 13.84, "end": 20, "text": " This is joint work with others by DeepMind and I have to say it's a very in-depth paper." }, { "start": 20.72, "end": 27.04, "text": " It presents an algorithm called Player of Games that is sort of a unified algorithm to play all" }, { "start": 27.04, "end": 33.28, "text": " sorts of games. This starts at things like chess and go, which you might know from AlphaZero," }, { "start": 34, "end": 40.96, "text": " but it goes beyond. It goes to things like poker and Scotland Yard, which I found really interesting" }, { "start": 40.96, "end": 47.36, "text": " that it appears here. But sort of the common denominator is that these new games, they have" }, { "start": 47.36, "end": 56.16, "text": " hidden information. So other than chess or go in Scotland Yard, you don't know where Mr. X is" }, { "start": 56.16, "end": 62.72, "text": " hiding. In poker, you have no clue what cards the other players hold. So you can't just look" }, { "start": 63.36, "end": 70.39999999999999, "text": " at the table and poker and decide what's the best thing to do because you don't know a lot of things." }, { "start": 71.52, "end": 78, "text": " Same in Scotland Yard. There have been algorithms for poker, right? There have been algorithms for" }, { "start": 78, "end": 84.56, "text": " Scotland Yard, but they were always a bit tailored to sort of the specifics of the games. Player of" }, { "start": 84.56, "end": 93.12, "text": " Games combines a large set of techniques. And these techniques are things like, let's do search. So" }, { "start": 93.12, "end": 99.76, "text": " as we play the game, we do local search. We sort of invest some computation at inference time to" }, { "start": 99.76, "end": 105.52000000000001, "text": " tell us what the best possible move is. But we don't want to search throughout all the game" }, { "start": 105.52000000000001, "end": 111.92, "text": " because these game trees, they just get very big. So that's the part that comes in from AlphaZero" }, { "start": 111.92, "end": 118.88, "text": " a little bit. But then the other part with the unknown information that is coming in mostly from" }, { "start": 118.88, "end": 126.32000000000001, "text": " the from algorithms like counterfactual regret minimization, and so on. But yeah, the counterfactual" }, { "start": 126.32000000000001, "end": 131.52, "text": " regret minimization, if I understand these correctly, they were sort of solvers, like they" }, { "start": 131.52, "end": 136.4, "text": " either solved a complete game or they didn't, right? You'd have to like traverse the whole game. And" }, { "start": 136.4, "end": 143.28, "text": " then at the end, you knew, okay, in this situation, I need to do this and so on. And yeah, this, I was" }, { "start": 143.28, "end": 149.76, "text": " very excited when I saw this paper. And then I tried to read it. And it was, it was, it was," }, { "start": 149.76, "end": 155.28, "text": " I have to say it was dense. And I'm very happy to have Martin here today, to guide us a little bit" }, { "start": 155.28, "end": 160.4, "text": " through the paper. So Martin, welcome. Thank you very much for being here." }, { "start": 160.4, "end": 168.4, "text": " Hey, I'm happy to be here. 
Was it a sort of a good description of what I said so far about player of" }, { "start": 168.4, "end": 177.12, "text": " games? Oh, yes, very, very, very much so. If you could summarize sort of the main components" }, { "start": 177.12, "end": 184.8, "text": " of this algorithm. So this is a single algorithm that I can train on many, many games. What is" }, { "start": 184.8, "end": 192, "text": " the set of games I can train it on? So the currently we use, we use four games, the games" }, { "start": 192, "end": 196.64000000000001, "text": " that you mentioned, we have, we have chess, we have go, we have Scotlandia, which I find" }, { "start": 196.64000000000001, "end": 202.8, "text": " as a very cool and fun game. And we have, we have no limit poker. So that it's just to show" }, { "start": 202.8, "end": 208.8, "text": " the generality of it, because this is all about the generality. That's why we pick like two perfect" }, { "start": 208.8, "end": 215.76000000000002, "text": " and two imperfect information games. Yeah. So currently, it should be able to handle, handle" }, { "start": 215.76000000000002, "end": 222.48000000000002, "text": " most perfect and imperfect information games as it plans. So from scratch from self play, just like" }, { "start": 222.48000000000002, "end": 229.36, "text": " Alpha Alpha Zero does. There are some, some, some limitations for games that this can handle. And" }, { "start": 229.36, "end": 235.20000000000002, "text": " we can, it's, it's best to understand the limitations only after we understand a bit more about the" }, { "start": 235.2, "end": 243.2, "text": " algorithm itself. Yeah. So the algorithm itself is composed of many parts, but the, the central" }, { "start": 243.2, "end": 250.95999999999998, "text": " concepts here, I think are, and that's what people, I think people kind of know what Alpha Zero does," }, { "start": 250.95999999999998, "end": 258.88, "text": " right? It, it uses self play and it searches, it searches a game tree to a certain depth, right?" }, { "start": 258.88, "end": 264.48, "text": " So, so it, in these games, we usually have like some sort of a state, right? And then we have" }, { "start": 264.48, "end": 270.48, "text": " various different actions that we could take in that state and every action leads to a next state" }, { "start": 270.48, "end": 275.44, "text": " and so on. And we have various different actions we could take right here and every action leads" }, { "start": 275.44, "end": 281.68, "text": " to a next state. And you can quickly see how this explodes, right? So what, what Alpha Zero and all" }, { "start": 281.68, "end": 288, "text": " these search algorithms do, they do this kind of limited depth search, right? They look maybe one or" }, { "start": 288, "end": 294.64, "text": " two moves ahead, but at some point they say, okay, no further. We can't afford to compute all of this" }, { "start": 294.64, "end": 299.92, "text": " tree. And that's why at a certain depth or after a certain time, they say, okay, here we cut off" }, { "start": 299.92, "end": 305.04, "text": " and we use like a neural network to tell us how good this node is. Even though we're not at the" }, { "start": 305.04, "end": 310.64, "text": " end of the game where we would either win or lose, we could still have a neural network that sort of" }, { "start": 310.64, "end": 316.96, "text": " predicts this node is, is very good for you or this node is very bad for you. 
And that's, that's" }, { "start": 316.96, "end": 323.44, "text": " essentially Alpha Alpha Zero in a nutshell, let's say uses self play, uses this tree search at a" }, { "start": 323.44, "end": 331.12, "text": " certain depth. It simply asks the neural network. Now what's the, what's the problem when you have" }, { "start": 331.12, "end": 338.4, "text": " imperfect information? How does, how does this change? Okay. I know that's, that's, that's the," }, { "start": 338.4, "end": 343.84, "text": " that's the right question. Unfortunately, we probably spend quite some time to understand" }, { "start": 343.84, "end": 351.03999999999996, "text": " the intuition of it. Right. But even for Alpha Zero, it's, it's good to step back and see where" }, { "start": 351.03999999999996, "end": 357.76, "text": " it came from. It's not, it's not that Alpha Zero introduced search for say perfect information" }, { "start": 357.76, "end": 365.2, "text": " tips, right? Search has been here since 1950s, like first, first algorithm for, algorithms for chess" }, { "start": 365.2, "end": 371.2, "text": " did combination of search and some value functions. Alpha Zero is amazing in the sense that it learns" }, { "start": 371.2, "end": 377.44, "text": " those value functions that you just described for self play. And it's also really, really smart about" }, { "start": 377.44, "end": 384.32, "text": " how it's going to expand its search tree. It's not like it's going to always look two steps," }, { "start": 384.32, "end": 389.59999999999997, "text": " steps ahead. It's very smart about building, building this tree that goes deep where they need" }, { "start": 389.59999999999997, "end": 396.64, "text": " to need it to go deep. But it still has those components, which these components are simply" }, { "start": 396.64, "end": 402.8, "text": " having some search tree that it ideally expands as it thinks about a policy in the search tree," }, { "start": 402.8, "end": 406.8, "text": " and then using some value function at the, at the end of the search tree." }, { "start": 407.68, "end": 413.52, "text": " Yeah, that is, that is one of the, one of the hallmarks of Alpha Zero. I think that, for example," }, { "start": 413.52, "end": 420.8, "text": " in Go, you have so many actions, even at step one, right? If you were to consider only like" }, { "start": 420.8, "end": 426.56, "text": " even three steps ahead or so, this would just blow your computation budget. But as you can see," }, { "start": 426.56, "end": 432.08, "text": " in Alpha Zero, it sort of, it sort of always starts from the root, and then it kind of goes down" }, { "start": 432.08, "end": 439.2, "text": " one of these branches that it has already explored a little bit. And in every new iteration, it" }, { "start": 439.2, "end": 445.76, "text": " re-decides which direction it should investigate. And that's a combination of sort of what the" }, { "start": 445.76, "end": 452.48, "text": " neural network says, but also how often it's been, it's explored something. So it says, you know," }, { "start": 452.48, "end": 458.32, "text": " like this direction is very promising, but I've explored it a lot already, so now I'll go," }, { "start": 458.32, "end": 463.20000000000005, "text": " I'll go a different branch or so. And then at the end, it always goes, gets to a leaf node that it" }, { "start": 463.20000000000005, "end": 468.08000000000004, "text": " hasn't expanded yet, right? 
And at that point, it asks the neural network, okay, you know, what's," }, { "start": 468.08000000000004, "end": 473.04, "text": " what's my policy here? What's my value? And then it prepares sort of the next iteration that it" }, { "start": 473.04, "end": 479.76, "text": " could expand it even more. And so over time, it builds this very targeted plan. So the neural" }, { "start": 479.76, "end": 486.08, "text": " networks guide the tree search, as you say, that's very, very cool. And in imperfect information" }, { "start": 486.08, "end": 493.2, "text": " games, that is, yeah, that is different, right? Yeah, so it's somewhat different, but we still" }, { "start": 493.2, "end": 499.44, "text": " wanted to have exactly what we just described. This is like why Alpha Zero works so well, and we" }, { "start": 499.44, "end": 506.8, "text": " still wanted it. So on a high level, you can think of playoff games as combining, combining Alpha Zero" }, { "start": 506.8, "end": 514.48, "text": " and DeepStack, which if you were to Google DeepStack, it was the first AI to beat professional" }, { "start": 514.48, "end": 521.04, "text": " players in no limit poker. And it already introduced some of the ingredients that we will see in this" }, { "start": 521.04, "end": 528.32, "text": " paper, which is it introduced this notion of local search in poker and these value functions." }, { "start": 528.32, "end": 534.96, "text": " And playoff games is really just putting together Alpha Zero in DeepStack into a single big unified" }, { "start": 534.96, "end": 543.6800000000001, "text": " algorithm. So let's maybe start with the component that you just talked about, which is" }, { "start": 543.6800000000001, "end": 550.88, "text": " value function. And the value function, if we get to a point where we understand value function" }, { "start": 550.88, "end": 559.76, "text": " in playoff games, say it's then you understand like 60 to 80% of the algorithm and complexity" }, { "start": 559.76, "end": 567.36, "text": " that imperfect information brings. So value function, if you think about how to use it," }, { "start": 568.72, "end": 574.88, "text": " exactly as you said, rather than searching all the way to the end of the game, because it would be" }, { "start": 574.88, "end": 581.68, "text": " like way too long of a search, you just trumpet your search and use value function as a substitute" }, { "start": 581.68, "end": 589.28, "text": " for continued search. And that's how you use it. But what it really does, it maps some sub" }, { "start": 589.28, "end": 598.3199999999999, "text": " problem that you are thinking of to a game value of that sub problem or sub game. In chess or in" }, { "start": 598.3199999999999, "end": 603.6, "text": " Go, it's really easy to think about what it really is. You get to a new board, chess or Go board," }, { "start": 603.6, "end": 609.36, "text": " and the value function ideally should tell you, hey, this is the value of this sub game. What" }, { "start": 609.36, "end": 617.6, "text": " it really means is what would be the outcome if two optimal players were to continue playing this" }, { "start": 617.6, "end": 624.72, "text": " game forward, right? So that's all the value functions do. And the same thing they do if you" }, { "start": 624.72, "end": 630.72, "text": " try to generalize them into imperfect information games, except that suddenly this notion of sub" }, { "start": 630.72, "end": 638.4, "text": " game and sub problem gets way more complicated. 
Yeah, so this basis on this notion of information" }, { "start": 638.4, "end": 645.6, "text": " states and sort of public beliefs about things. So on the left here, you've tried to show this" }, { "start": 645.6, "end": 653.12, "text": " in a diagram. And I think the notion is when I come to a poker table, I only see what's called" }, { "start": 653.12, "end": 662.24, "text": " the public state, right? I see and actually, if I come to a poker table and I observe a hand with" }, { "start": 662.24, "end": 670, "text": " all of its history, right? That is the public state. So I know, you know, who bet how much in" }, { "start": 670, "end": 676.16, "text": " which round and so on who acted how, but I don't see people's cards. So there could be many" }, { "start": 676.16, "end": 682.72, "text": " different cards that people hold. And some might be impossible just from the rules of the game," }, { "start": 682.72, "end": 686.96, "text": " you know, maybe not in poker, but you know, in Scotland yard, you have this over here," }, { "start": 687.52, "end": 696.08, "text": " there are certain locations, this Mr. X can be. And we want to assign probabilities to each one" }, { "start": 696.08, "end": 701.6, "text": " of them, right? If we knew if we knew where Mr. X was, the game would be easy, right? But since we" }, { "start": 701.6, "end": 707.6800000000001, "text": " don't know, we must estimate and I think that's also something you highlight in the paper," }, { "start": 707.6800000000001, "end": 714.4000000000001, "text": " an interesting property of these games is that if I am Mr. X, or if I play poker, I have to" }, { "start": 715.2, "end": 721.44, "text": " not be deterministic, right? Otherwise, the game would be very easy for my opponents. If that's in" }, { "start": 721.44, "end": 727.36, "text": " poker, usually, you know, people they look at their cards, they go, and then they like bet" }, { "start": 727.36, "end": 734.8800000000001, "text": " everything they have. And you know, immediately know which hand they have if they don't also do" }, { "start": 734.8800000000001, "end": 741.6, "text": " the same thing with other other whole cards, or if they don't randomize a bit. So necessarily," }, { "start": 742.1600000000001, "end": 749.44, "text": " other than, let's say in chess, the optimal strategy is kind of a distribution over actions." }, { "start": 749.44, "end": 757.36, "text": " And you have to sort of randomize that in order to almost a bit hide your, your, your private state." }, { "start": 757.9200000000001, "end": 767.44, "text": " So what we what we see are these public states, right? And what we can estimate is these things," }, { "start": 767.44, "end": 775.5200000000001, "text": " which are called the ranges. So these are distributions over what private states the" }, { "start": 775.52, "end": 783.4399999999999, "text": " players could hold. And the thing the difficulty in this tree search comes from the fact that" }, { "start": 783.4399999999999, "end": 790.24, "text": " you can only go from a public state, yet you need to consider all the possibilities of the" }, { "start": 790.24, "end": 794.3199999999999, "text": " private states. So you can't just say this is the situation, you have to sort of consider" }, { "start": 794.3199999999999, "end": 796.3199999999999, "text": " all of them at the same time, right?" }, { "start": 796.32, "end": 803.7600000000001, "text": " Yes, exactly. 
That's, that's what you basically need in order to generalize those sub games or" }, { "start": 803.7600000000001, "end": 808.6400000000001, "text": " sub programs to improve information, right? It's not hard to hard to see that all perfect" }, { "start": 808.6400000000001, "end": 814.5600000000001, "text": " information games are just a special case where you have just a single single possible state for" }, { "start": 814.5600000000001, "end": 820.96, "text": " for for the player, right? Like a poker, you just talk about poker and public state states," }, { "start": 820.96, "end": 823.5200000000001, "text": " and that's a that's a that's a perfect example, right?" }, { "start": 823.52, "end": 831.36, "text": " Like a sub program in poker, it's it makes little to no sense to say what's the value," }, { "start": 832.16, "end": 837.76, "text": " what's the value of a sub game or sub program in a poker where I hold a pair of aces that's" }, { "start": 837.76, "end": 844.64, "text": " pretty much ill defined, ill defined sub game. What we what you need to do is given a given a" }, { "start": 844.64, "end": 850.48, "text": " public state, which is, as you say, I come to a table, I see everything that I could have observed" }, { "start": 850.48, "end": 855.2, "text": " as a public observer. So that's that's that's basically my state. But given this state, given" }, { "start": 855.2, "end": 860.96, "text": " this observation, there's a lot of possible individual individual states of the of the game" }, { "start": 860.96, "end": 867.04, "text": " that are consistent with this observation. And this simply correspond to all the different cards" }, { "start": 867.04, "end": 874.32, "text": " the players could be holding. And sub game is simply defined by by combination of this public" }, { "start": 874.32, "end": 879.9200000000001, "text": " state, which is the thing I get to observe as a public observer. And then I can see that" }, { "start": 879.92, "end": 887.1999999999999, "text": " observer and a distribution over all the possible private states that could be happening right now." }, { "start": 887.1999999999999, "end": 894.0799999999999, "text": " And given this distribution on top, this simply defines a well defined sub game. And given this" }, { "start": 894.0799999999999, "end": 899.12, "text": " well defined sub game, I can suddenly ask questions of, well, what would what would be the" }, { "start": 899.12, "end": 904.16, "text": " values of this sub program given that they all the agents play the sub game optimally, just," }, { "start": 904.16, "end": 911.28, "text": " just as you in chess or go? Yeah, I we used to we used to play poker a lot in like high school." }, { "start": 911.28, "end": 918.56, "text": " And this was frequently you try to not try to guess what hands your opponent have, but you try" }, { "start": 918.56, "end": 924.88, "text": " to guess you know what their ranges right. So you consider like, okay, it's often going to be these" }, { "start": 924.88, "end": 930.3199999999999, "text": " cards, it's less often going to be these cards. I think that mirrors very much the reasoning that" }, { "start": 930.32, "end": 938.5600000000001, "text": " that people actually have in these things. And now given given this you at the one of the core" }, { "start": 938.5600000000001, "end": 946.08, "text": " things here is this neural network that is supposed to tell us what the values of the sub game is," }, { "start": 946.08, "end": 952.4000000000001, "text": " right. 
And this, as you said, it gets us an input description of the public state. And it also gets" }, { "start": 952.4, "end": 960.0799999999999, "text": " as an input, your beliefs about what distribute like your beliefs about the ranges of the players," }, { "start": 960.0799999999999, "end": 966.3199999999999, "text": " so what their private information could be and how often and if I remember correctly," }, { "start": 966.3199999999999, "end": 972.48, "text": " these ranges, they're just a result of their strategies, right. If you know the strategies" }, { "start": 972.48, "end": 979.04, "text": " of the players, then you can calculate what their ranges are. Because if the strategy is I always" }, { "start": 979.04, "end": 985.52, "text": " bet high when I have aces, then if the player bet high, then aces are quite likely, you put all of" }, { "start": 985.52, "end": 992.4, "text": " this into a neural network, and the neural network gives you policies, which is understandable, it's" }, { "start": 992.4, "end": 999.5999999999999, "text": " how would a player act in a given situation. This is also what AlphaZero gives you. But then you have" }, { "start": 999.5999999999999, "end": 1007.52, "text": " these counterfactual values. And this is a bit of a new term that only appears in, I think in imperfect" }, { "start": 1007.52, "end": 1015.12, "text": " information games, what is a counterfactual value? Right. So in this case, this value function very" }, { "start": 1015.12, "end": 1021.92, "text": " much is analogical to AlphaZero in the sense that you have values and policy or policy for a sub game." }, { "start": 1021.92, "end": 1028.96, "text": " And we use them in very similar way. Except as we just described a sub game is," }, { "start": 1030.48, "end": 1036.4, "text": " there's many possible states the game or the players could be in given a public state" }, { "start": 1036.4, "end": 1043.68, "text": " sub game or public sub game. And the value function given this sub game outputs not just a single value" }, { "start": 1043.68, "end": 1050.16, "text": " that says, hey, value of this sub game is five, it actually outputs a single value for all the possible" }, { "start": 1050.16, "end": 1056.16, "text": " player states that are possible given the sub game. So in poker, say I could be holding" }, { "start": 1056.16, "end": 1064.0800000000002, "text": " thousand different hand combinations in holding poker. So the network will tell me, hey, in this" }, { "start": 1064.08, "end": 1070.56, "text": " sub game, if you were to hold this particular pair of hands, this is the value and it will tell me" }, { "start": 1070.56, "end": 1076, "text": " such value for all the possible states I could be in. Yeah. Okay. And the neural network," }, { "start": 1076.72, "end": 1086.24, "text": " how is it built to output? Does it have like one, let's say one output head? So does it output like" }, { "start": 1086.24, "end": 1094.16, "text": " a thousand dimensional vector one entry for each? Okay. So is it fair to say that your algorithm" }, { "start": 1094.16, "end": 1104.4, "text": " would struggle with games where the possible private states are huge? That's yeah, that's the" }, { "start": 1105.2, "end": 1111.1200000000001, "text": " this is brilliant. This is exactly why I said it will be nicer to understand the limitations once" }, { "start": 1111.12, "end": 1117.12, "text": " we get a bit deeper into the algorithm. 
So you have this network and you train it in some way via self-play, and now we get to the part where you generalize the search procedure. In AlphaZero, the search works something like this: you're at some state in the game, having played until this state, and at inference time you do a search using an internal simulator. You consider all your actions, choose one given the neural network's output and the current search statistics, go there, ask the neural network "what's my value here?", and expand that node. In the next iteration you start again from the root; you expand maybe the same or maybe another action, it depends, and if it's already expanded, you go further down the tree. You make many iterations, say 50 or so, and in every iteration you walk down the tree until you find a node you haven't expanded yet, and you expand it. In player of games this is quite a bit more intricate: there are also many iterations, but within each iteration you have to do a lot more work to deal with the uncertainty. Could you describe a little how your search algorithm works?

Yes, happy to. When we said at the beginning that player of games is a hybrid of DeepStack and AlphaZero, the search algorithm is a perfect example of it being a hybrid. What DeepStack introduced was a fixed search tree: it searched all the way through a single betting round and used value functions at the end of the round. On top of that it ran counterfactual regret minimization, which we might come back to later, but you can think of it simply as a policy improvement algorithm: given a fixed search tree,
it would iterate and improve the policy, and as it walked up and down the tree finding a good policy, it would use the value function at the end of the search tree, the very same value function we just talked about. Now, player of games adds the smart idea of AlphaZero: it also tries to dynamically expand the search tree rather than keeping it fixed. It does this by intertwining two phases. In one phase, given some search tree, we try to improve the policy within that tree. In the second phase, just like AlphaZero and using the same PUCB formula, we try to expand the search tree where we think it needs expanding. Then we simply go back and forth: expand the tree, improve the policy, expand the tree, improve the policy.

Yeah. So this is built on an algorithm called counterfactual regret minimization. Applied on its own, counterfactual regret minimization is a solver: I give it a game description, it expands the entire game tree, every state there is in the game, and it goes from node to node in this tree and improves the policies of both players. It does this for many, many iterations, improving everywhere in the game tree, until the whole tree is approximately optimal.
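The core building block of counterfactual regret minimization is regret matching: at each decision point, track how much better each action would have done in hindsight, then play actions in proportion to their positive regret. A toy, self-contained sketch of just that building block (illustrative only, not the paper's implementation):

    import numpy as np

    def regret_matching(regrets):
        """Map accumulated regrets to a strategy: play in proportion to
        positive regret, or uniformly if no action has positive regret."""
        positive = np.maximum(regrets, 0.0)
        total = positive.sum()
        return positive / total if total > 0 else np.ones_like(regrets) / len(regrets)

    # One decision point with three actions and (hypothetical) fixed action values.
    values = np.array([1.0, 0.2, -0.5])
    regrets = np.zeros(3)
    for _ in range(100):
        strategy = regret_matching(regrets)
        regrets += values - strategy @ values  # regret of not having played each action
    print(regret_matching(regrets))            # mass concentrates on the best action

In the full solver, this update runs at every information state of both players, with the action values computed from the tree below rather than being fixed.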
And the biggest game that has been solved this way so far, as you describe in the paper, is heads-up limit hold'em, correct? Fixed limit?

Yes, heads-up limit hold'em; that's actually a solved game. It was done a few years ago by the computer poker research group at the University of Alberta, led by Michael Bowling, and as far as I know it is still the largest game to be solved. You used the word "solver", which is really the perfect name. The way I think about a solver is: you give me some small or medium-sized game that I can fit into a big table on my computer, and solving it simply means finding a policy for all the possible states in the game. People know how to do this in, say, tic-tac-toe or other small games, and if you could fit chess on your computer, it's not hard to see that you could just solve it with algorithms people are familiar with. The thing is, even for a really, really small imperfect information game, you have to use algorithms that can handle imperfect information. Often people just run, say, policy gradient methods, Q-learning or whatever, and on an imperfect information game these simply don't find a good policy.

Yeah, intuitively it's a bit like this: if I start in some situation in chess and I make some moves, that original state is still the same; I can look back, I know where I came from. But if I'm in some state in poker and I make some moves, that kind of changes the past. Because when I look at what you, my opponent, do, that changes my beliefs about which cards you had back in the past. I go back and think: okay, you did this and this, so you probably aren't holding, say, a king and an ace, given what you've done later. I think this, the fact that your future actions change the past, is what in my opinion makes this so much more intriguing and complicated.

So on the left side here, you have a local search tree. It's expanded until some depth, and at that depth you ask the neural network for a summarization of whatever happens below. Within that tree you now run this counterfactual regret minimization, or something akin to it, and you simply want to find the best policy within that tree. This is more complicated than in AlphaZero, where I just visit every node once, because there the future doesn't change the past: once I've computed a node, I only expand things below it, and that never changes the node.
However, in imperfect information games, if I change something below, all of a sudden the past changes, so I need to update and re-converge the whole tree. And then, once you've done this for a number of steps, on the right side you add a new node by essentially doing what AlphaZero does: you go to a leaf node, choose some action in some information state, perform that action, and that expands one more node. Is that right?

This is excellent, yes. And the property you just described, the future changing the past, is also what makes search in particular so much more complicated. If you were to just solve some game outright, even that is more complicated because of what we just described, but you could do it; there are ways to solve imperfect information games. But we are doing search here, and that property makes search much harder. The reason is that in imperfect information games you cannot just glue together optimal policies and hope that the resulting policy for the full game will be optimal. That is something many search algorithms simply rely on, and it does hold in perfect information games: pick an optimal policy in any state, put the pieces together, and you get an optimal policy for the full game. In imperfect information games it does not hold, for exactly the reason we just described. But then how can you even do search at all, if search is all about local reasoning? You reason locally, yet you somehow have to make sure that the resulting policy for the full game is still optimal.

Yeah, it's interesting. So essentially, for every step where AlphaZero expands one new node, you also expand one new node, but then you have to get the entire tree in order again: you expand the new node and then update the whole tree for a bunch of iterations before you can expand another one, so that everything stays consistent. Okay.
That gives a bit of an impression of why this is so much more complex, right?

Yes. So this is essentially what we do at inference time: we do this search. And now comes the time when we actually need to train it. We have the ingredients, the search algorithm and the neural network, and now we need to train the network. You have a method, or various methods, for this, and maybe you want to describe it yourself a little, because this is the part where I stumbled.

Yeah, I'll start at a very high level. The idea is, again, to take the self-play style method from AlphaZero, so that you just throw the algorithm into a game and it improves as it plays, getting better and better. What that really means is that you are improving your value and policy, the network we just discussed. On a high level: since you use your value function during search, you call your neural network with some inputs, public states and beliefs. This idea of queries is simply that every single time we call the network, we call that a query: we are querying the network for some value of some sub-game. We store this pair of public state and beliefs, and then we go through all those queries and try to improve the network on exactly the states and ranges on which it has been queried, because that is presumably what's important: it's what actually occurred during self-play.

So you collect the training set as you go, similar to AlphaZero: the training set for the next iteration is whatever the network had to do during this iteration, not just a random sample of states. And do you train in the same manner as AlphaZero, where you train to predict your own future outputs? Let me distinguish: if you actually win or lose the game one, two or three steps in the future, you can train on the reward of the game. But AlphaZero, when it doesn't win or lose within the next step or so, also tries to predict its own output.
}, { "start": 1853.6, "end": 1863.12, "text": " So it tries to improve that way using TD lambda. You here have TD one, right? So your targets," }, { "start": 1863.12, "end": 1869.76, "text": " what do you target? What do you give the network as labels? So okay, so this is slightly more" }, { "start": 1869.76, "end": 1877.1999999999998, "text": " complicated here in the sense that each query basically defines you something, right? It's a" }, { "start": 1877.2, "end": 1883.52, "text": " public state and energies. And given a sub game, the ideal target for your neural network would be" }, { "start": 1883.52, "end": 1888.8, "text": " simply to solve the game, right? That's the ground truth that you want your neural network to" }, { "start": 1890.4, "end": 1896.8, "text": " learn or like then to work too. So rather than solving directly, because again, these sub games" }, { "start": 1896.8, "end": 1905.1200000000001, "text": " will still be way too big as they occur during the gameplay, we do like a small, small solver," }, { "start": 1905.12, "end": 1911.6, "text": " where we also substitute the full solver with a small search. So rather than fully solving a game," }, { "start": 1911.6, "end": 1918.08, "text": " we use the same method to basically do a search. And the outcome of the search, basically a small" }, { "start": 1918.08, "end": 1925.9199999999998, "text": " solver is what is the target. Okay, so you do the same thing as you do during inference when" }, { "start": 1925.9199999999998, "end": 1932.3999999999999, "text": " you actually want to make a move. So during that inference, you're going to make some queries to" }, { "start": 1932.4, "end": 1937.44, "text": " the network, you take these queries, and these I think here are the red dots, right? Exactly." }, { "start": 1937.44, "end": 1943.0400000000002, "text": " So during maybe this has battery again. So during the inference, you make you do these queries," }, { "start": 1943.0400000000002, "end": 1949.8400000000001, "text": " you store them in this in this buffer. And these now act as the root nodes for yet another search," }, { "start": 1949.8400000000001, "end": 1956.0800000000002, "text": " which is exactly the same as the previous search, right? And so you you sort of rely on the fact" }, { "start": 1956.08, "end": 1962.8, "text": " that this search procedure can give you a better output than the neural network itself, right?" }, { "start": 1962.8, "end": 1969.4399999999998, "text": " Yes. Right. The query here, the neural network will output some value, like the value is eight," }, { "start": 1969.4399999999998, "end": 1975.9199999999998, "text": " or one value for each for each information state. But you, I think the whole algorithm is," }, { "start": 1975.9199999999998, "end": 1981.6799999999998, "text": " and that's of course, the reason we do search in the first place is that doing search gives you a" }, { "start": 1981.68, "end": 1988, "text": " better estimate than just using the neural network at the start. So doing search, and then asking" }, { "start": 1988, "end": 1993.28, "text": " the neural network further down the line gives you a better estimate. And yeah, it makes sense. You" }, { "start": 1993.8400000000001, "end": 2000.5600000000002, "text": " start at wherever you ask the neural network, you use local search to get a better value," }, { "start": 2000.5600000000002, "end": 2005.28, "text": " doesn't need a perfect one, just a better one. 
Exactly. And one would hope that after doing this again and again, in the end I wouldn't even have to do search anymore at inference time; I could just use the neural network's policy output directly. Is that something you have tried, and does it generally work? After all, the network is trained to predict the output of the search, so technically it should learn it, no?

Yes, in the same way you could just take the policy network of AlphaZero and let it play chess on its own. You can do it, and people have done it. It still plays quite good chess, but it is far, far below the full strength of search. So yes, at the end of the day even the policy network alone is quite good, but not as good.

Yeah, it shows a little that the search really is necessary. So I think we're almost getting to the results. Would you maybe summarize them a little? If people are super interested they can go into the paper and into the tables, but roughly: you compared against AlphaZero in perfect information games, against dedicated algorithms like Slumbot in poker, and even against a dedicated AI for Scotland Yard. What were the results in general?

In general, the results are that the algorithm is all about generality. It is not as strong as AlphaZero in the perfect information games where AlphaZero was designed to shine. This very much tries to be general rather than being the best chess or the best poker agent in the world; it tries to be really, really good at all of them at once.

What is the difference, then? If a perfect information game is just a special case of an imperfect information game, what separates player of games from AlphaZero? Why couldn't it reach the same performance?
}, { "start": 2164.88, "end": 2171.84, "text": " So on paper, it could except that, for example, the policy improvement algorithm that we use," }, { "start": 2171.84, "end": 2178.2400000000002, "text": " the counterfactual, we get minimization, right? It has to be also good able to handle imperfect" }, { "start": 2178.2400000000002, "end": 2184.32, "text": " information games. That's why it's not going to convert so nicely and quickly as as algorithm" }, { "start": 2184.32, "end": 2191.6000000000004, "text": " design design for perfect info. So the fact that you expect sometimes to see an imperfect" }, { "start": 2191.6000000000004, "end": 2197.92, "text": " information game, would it be fair? Would you estimate that if you just input more resources," }, { "start": 2197.92, "end": 2202.1600000000003, "text": " input more computation time that it would actually reach the levels of AlphaZero?" }, { "start": 2203.52, "end": 2209.44, "text": " I don't think it's necessarily I mean, on paper, all of these would eventually converge." }, { "start": 2209.44, "end": 2218.32, "text": " Right. Everything works on paper in in delimiter. In practice, AlphaZero and MCTS is probably" }, { "start": 2218.32, "end": 2224.64, "text": " always going to be ahead. But we don't really care. Right. Like, if I would be happy with a" }, { "start": 2224.64, "end": 2230.16, "text": " single algorithm for everything that's that's better in humans. I don't care if it's better by" }, { "start": 2230.16, "end": 2240.48, "text": " like a little bit or by a billion. Yeah. And then in in in poker here, you compared against Slumbot," }, { "start": 2240.48, "end": 2247.6, "text": " which is you say that the best open source or best available poker bot to date. And this is no limit" }, { "start": 2247.6, "end": 2252.24, "text": " poker now. Right. This is this is way too big of a game to solve. And I think the other ones" }, { "start": 2252.7999999999997, "end": 2258.56, "text": " is you you simply compare to the numbers from their papers. Is that" }, { "start": 2258.56, "end": 2266.48, "text": " the do you mean for a slum bot or for Scotland that we're talking about poker? Oh, sorry. Yeah," }, { "start": 2266.48, "end": 2272.08, "text": " let's let's talk about poker for a while. So the the player of games here gains what is this seven" }, { "start": 2272.7999999999997, "end": 2280.88, "text": " millibig blinds per per hand? Yeah, over slum bot. Yeah, again, like we we we could have beaten" }, { "start": 2280.88, "end": 2286.96, "text": " slum bot by by a lot more. Yeah, just like decided, oh, this is good enough to like to put into a" }, { "start": 2286.96, "end": 2292.56, "text": " paper, we can come back to it later. Like, as you know, it very much depends on how much time you" }, { "start": 2292.56, "end": 2299.12, "text": " spend tuning the network architecture and how for how long to train this is what this is just to show," }, { "start": 2299.12, "end": 2303.6, "text": " hey, there's already an algorithm that can do all of these games and it still plays them really," }, { "start": 2303.6, "end": 2309.6, "text": " really well. Yeah. And your neural network, just to say it's a bunch of like feed forward layers," }, { "start": 2309.6, "end": 2316, "text": " correct? Like, it's not a complicated thing. So for poker, it for poker, it's just a feed forward" }, { "start": 2316, "end": 2322.64, "text": " network for chess and go. We do we try to mirror some of the older AlphaZero architectures. Yeah." 
}, { "start": 2323.76, "end": 2333.28, "text": " Okay, so and here on the right side, you have Pym Bot, which is the Scotland Yard specific," }, { "start": 2333.28, "end": 2339.2, "text": " but for people, maybe people don't. Does anyone not know what Scotland Yard is? Maybe you can" }, { "start": 2339.2, "end": 2345.44, "text": " describe 10 seconds what Scotland Yard even is as a game. It's somewhere. Yeah, there's a" }, { "start": 2345.44, "end": 2352.56, "text": " figure maybe, right? There is this figure, right? Right. Yeah, there's no point explaining the rules" }, { "start": 2352.56, "end": 2359.68, "text": " in detail, but on a high level, there's a graph, you are trying to chase down the chase down a" }, { "start": 2359.68, "end": 2366.96, "text": " stone that's called Mr. X, you have five detectives that are trying to chase the stone down. The trick" }, { "start": 2366.96, "end": 2374.4, "text": " is the stone, the Mr. X that you are trying to chase down is only partially observable. That's" }, { "start": 2374.4, "end": 2380.1600000000003, "text": " what makes it imperfect information. And you have to basically reason about states where he could be" }, { "start": 2380.1600000000003, "end": 2388.56, "text": " hiding and form some beliefs about his state and trying to chase him down. So yeah, and yeah, I" }, { "start": 2388.56, "end": 2393.76, "text": " guess that's all people need to know. You can spend like funny tickets on taxi rides and" }, { "start": 2396.8, "end": 2403.2000000000003, "text": " various methods of transport. And then every 10 turns or so Mr. X has to reveal" }, { "start": 2403.2, "end": 2409.9199999999996, "text": " their position. And that's how you sort of form a belief about where Mr. X could be given what" }, { "start": 2409.9199999999996, "end": 2420.08, "text": " actions Mr. X took. So this is quite a specific game. So it seems to me like a dedicated algorithm" }, { "start": 2421.04, "end": 2427.9199999999996, "text": " could do very, very well, again, in this game, because it could exploit various aspects of the" }, { "start": 2427.92, "end": 2435.36, "text": " game, you could hard code in various various things the AI could abuse. And here we see a graph of" }, { "start": 2435.36, "end": 2441.76, "text": " the win rate of player of games against what's on the x axis here, this is number of search" }, { "start": 2441.76, "end": 2448.2400000000002, "text": " iterations. So pinbot is a local search algorithm as well. Yes, it's a it's a it's a variant of" }, { "start": 2448.2400000000002, "end": 2455.28, "text": " MCTS. And this is to show regardless of how much time or search we give the MCTS, the hard code" }, { "start": 2455.28, "end": 2461.28, "text": " hand tune algorithm, even if it gets like a billion or something called search iterations," }, { "start": 2461.28, "end": 2466.4, "text": " it's still behind alpha zero because it's using this general self play learning method." }, { "start": 2467.44, "end": 2473.6000000000004, "text": " Yeah, so this is this will be I guess the final win rate is here like at 55% or something like" }, { "start": 2473.6000000000004, "end": 2481.36, "text": " this. And that is with a huge number of iterations for for pinbot. Yes, and we'll play our games is" }, { "start": 2481.36, "end": 2487.92, "text": " using only like 400 iterations on our site. So yeah, as you can see, as you can see, the" }, { "start": 2487.92, "end": 2494.8, "text": " regardless of the scale, we converge to a better policy. 
And would you attribute that to the use of self-play to improve the strategies?

It's a combination of that and of the fact that player of games is built on sound methods. Later in the appendix, for people who are curious, we show that on small games we can exactly measure how close our resulting search policy is to an optimal policy, and we get closer and closer as time goes on. Basically, we are only limited by the power of the neural networks, and we have guarantees that we can get to an optimal policy. Other methods based on MCTS are not guaranteed to converge, even on small games; there is also that limitation, the fact that those methods are not sound.
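On small games, "how close to an optimal policy" is usually measured as exploitability: how much a perfect best responder would win against each player's strategy, which is exactly zero at an equilibrium. A minimal sketch for a zero-sum matrix game (my own illustration of the metric, not the paper's evaluation code):

    import numpy as np

    def exploitability(A, x, y):
        """A: row player's payoff matrix; x, y: mixed strategies.
        Sum of best-response gains against both players; 0 iff (x, y) is a Nash equilibrium."""
        br_row = (A @ y).max()  # best response value against the column player
        br_col = (x @ A).min()  # column player minimizes the row player's payoff
        return br_row - br_col

    A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # rock-paper-scissors
    uniform = np.ones(3) / 3
    print(exploitability(A, uniform, uniform))                # 0.0: uniform is optimal
    print(exploitability(A, np.array([1.0, 0, 0]), uniform))  # 1.0: "always rock" is exploitable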
And just to get an idea of the scale: we saw poker, Scotland Yard, and here chess and go. Can you give a number of how many GPUs or TPUs one needs, and for how long, to get anywhere close to what you did?

I think the easiest for us was poker; that, people can probably train on a few GPUs. By far the hardest is go, where we used a lot of GPUs, but that was simply because we had them available.

I see, okay. And you did say in the paper that, for comparison reasons, you use roughly the same amount of compute as AlphaZero did.

That was tricky, because we do not want to claim that this is now a state-of-the-art chess agent; then we would have to do all the proper, hard measurements, right? You would have to use clock time, and suddenly you have to argue that you used the same hardware, and everything gets more tricky. We would just say: we call the network as often as AlphaZero did, so it should be roughly the same, but we don't claim to be stronger.

Okay. I think the community appreciates a fair comparison, instead of every paper claiming the new best state of the art, especially in RL. It seems clear just from the lines in these graphs that you can invest more compute and get better, and that's what we also saw with AlphaZero: it used to be slightly superhuman, and by now not even all humans together will ever match AlphaZero in any of these games, which is crazy.

Yeah, exactly.

Do you have a bit of a demonstration ready? You told me about player of games playing Scotland Yard, so we can see what's going on.

Yeah, let me see if it's still working; it was working this morning. We never planned to show it externally, it was designed for our debugging purposes, but it would be a fun demo so that people who are not familiar with Scotland Yard get some intuition about the game. Okay, hopefully you can see this. Let me very quickly explain what this is about. I am now playing as Mr. X, which is this black color here, and I can move along this graph, basically walk along the edges. And as you were saying about those taxis and tubes, the edges have different colors: all of these are yellow, but this one is blue, and they correspond to the different means of transportation I get to use; yellow stands for taxi, I think, and blue stands for bus. Now, the detectives do not get to see where I am, but they do get to see which color of ticket I use. So right now I'm here, and say I want to go to 49 and use a taxi to get there. So, yeah, we have been talking for a while, so maybe it's not alive anymore. Yeah, it probably died.

You have scaled to zero; proper engineering. Nice.

Yes. So it doesn't work right now, but at least people can get an idea of what would happen.

Yeah. So you'd need to reason pretty quickly, and the longer you don't see Mr. X, the fuzzier your idea gets of where Mr.
X is. Do you visualize this belief distribution of where Mr. X is, for debugging?

We did; I don't think it's turned on right now, but that is exactly what we tried at some point.

And did you observe that the longer the detectives don't see Mr. X, the more spread out, the more unsure they become? Is that something you can clearly observe, or something you just feel as a human?

Oh yes, and it was actually really, really fun to see.
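That spreading-out effect is easy to reproduce on a toy graph: each move Mr. X makes unobserved pushes the detectives' belief through the board's transition structure. A sketch with a hypothetical four-node board (not the real game map):

    import numpy as np

    # Toy board: adjacency matrix; assume Mr. X moves to a uniformly random neighbor.
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    T = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

    belief = np.array([1.0, 0.0, 0.0, 0.0])  # Mr. X was just revealed at node 0
    for step in range(3):
        belief = belief @ T                   # one unobserved move diffuses the belief
        print(step + 1, belief.round(3))      # probability mass spreads over the board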
Yeah, crazy. So, one improvement, or one follow-up, to AlphaZero was the MuZero algorithm. The crucial difference: in AlphaZero you need the simulator, you need to be able to simulate a lot of games internally, to know which state results when I take some action. MuZero alleviated this by going to a latent state space and training everything in latent space. Is that something you could do with player of games?

No, and that is arguably limitation number two. I think the biggest thing right now is the large belief space, but the second one is that we currently need a model of the environment, and MuZero doesn't even need that. So you can think of player of games as running behind the AlphaZero lineage and trying to generalize things, but in that regard we are still lagging behind.

Maybe a more conceptual question about these game trees. In Scotland Yard, I don't know where Mr. X is, but Mr. X's movements are kind of deterministic: if Mr. X uses a taxi to get from 49 to 48, Mr. X is now at 48. In poker, however, if I bet something and my opponent calls, the flop reveals random cards. This is different from me not knowing my opponent's cards; it's pure randomness within the game, something that also doesn't exist in chess. How do you deal with that stochasticity? Is it something that makes things very complicated?

That part is actually quite easy: it's simply baked into the model, and that's pretty much it.

Okay, so you can condition on previous information, and the model will compute the expected value over any future cards that could be drawn on the flop, turn and river?

You can think of it as drawing the search tree at the beginning such that some of its nodes belong to a chance actor: a player with a simply fixed policy in that node, over a lot of actions. That's it.

So when you expand the search tree, do you need to expand once for every possible flop combination there is?

Yes.

Okay, that is a lot of combinations, right?

It is; or, if you are smart about it, you can again substitute that with a neural network.

Yeah, okay.
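Mechanically, such a chance event is just one more node in the tree, owned by a "nature" player whose policy is fixed and known, so its value is a plain expectation. A tiny sketch with made-up numbers:

    def chance_node_value(outcome_probs, outcome_values):
        """Value of a chance node: expectation under nature's fixed policy."""
        return sum(p * v for p, v in zip(outcome_probs, outcome_values))

    # E.g. three possible river cards (hypothetical probabilities and sub-game values):
    print(chance_node_value([0.25, 0.5, 0.25], [2.0, -0.5, 1.0]))  # -> 0.5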
Do you think humans do something similar? In AlphaZero you can imagine that we do the same thing internally: you think ahead until some depth and evaluate. Do you think player of games, or these imperfect information algorithms in general, also work a little like humans do? It seems strange that I would go through all the different flop combinations there could be. Or is there a fundamental difference between how humans tackle these problems and how these algorithms do?

I would say we would both agree that in Scotland Yard you probably do the same, right? Looking forward: what if I go here, what if the opponent goes there; you do this search forward while thinking about the beliefs of the opponent. So for Scotland Yard I would say yes. In poker it's simply complicated by the fact that suddenly the belief space is big; for humans, even a thousand states is probably too much, so humans probably already use some more general representation there. I don't know.

Cool. And what is next in this line of work? You've now built a big unifying algorithm that can tackle any sort of game, as long as it has a simulator, and you said it's probably not possible to go without one. It seems like you've achieved a kind of unification; where do you go from here?

I think the most natural path is to remove the constraints we just discussed. This is going to fall apart if there's a big belief space, and it still needs a model, and these are the things we probably want to play with next. Making algorithms that are truly general, I think, is a big step in that direction, but it's not to say that we are finished.

So if this line of work continues, do you think there will be an algorithm that at some point could be thrown at pretty much any problem, like Atari, but even beyond reinforcement learning: question answering, visual classification, whatnot, or even robots and so on? Or do you think that is a very different line of work?

Well, I did work on question answering and generation before, so yes: on a high level, this is certainly the dream, not just of the team working on this but of quite a few smart people at DeepMind, to make something that is truly general. The algorithm shouldn't really care what environment you throw it into; you just throw it there and say, okay, learn. That's the direction we are going. Whether player of games can walk all the way there, or whether some of its ideas will simply be used in other approaches, we shall see.

Cool, excellent. Well, in this case, Martin Schmid, thank you so much for being here. This was, I promise everyone, way better than if I had done it myself. Thanks a lot for joining us; this was really awesome.

Thank you for having me. This was fun. Thanks.
fvctpYph8Pc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Do ImageNet Classifiers Generalize to ImageNet? (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "imagenet", "cifar10", "cifar10.1", "generalization", "overfitting", "mturk", "arxiv", "vision", "models", "research", "hardness", "accuracy", "classifier", "resnet" ]
Has the world overfitted to ImageNet? What if we collect another dataset in exactly the same fashion? This paper gives a surprising answer! Paper: https://arxiv.org/abs/1902.10811 Data: https://github.com/modestyachts/ImageNetV2 Abstract: We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% - 15% on CIFAR-10 and 11% - 14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets. Authors: Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at "Do ImageNet Classifiers Generalize to ImageNet?" by Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt and Vaishaal Shankar.

The premise of this paper is pretty simple. We've been training models on ImageNet for a while now, almost ten years to be exact. ImageNet is a dataset with millions of images categorized into many thousands of categories. The classic part of ImageNet that people know has about 1.5 million images in 1,000 different classes, and it has been one of the main datasets of the last few years. As you can see on the right, the error rate has been roughly cut in half year after year since 2012, when the first deep network, AlexNet, started replacing the classical computer vision approaches.

So we've been training on ImageNet for a while, and the question this paper asks is: what happens if we collect a second test set? For ImageNet we have a train and a test set. Suppose we now collect a second test set, test v2. If a model was trained on the training set and evaluated on the original test set, does it also perform well on this second test set? The idea is that over the years we may have tuned our hyperparameters, learning rates and everything else so that models perform well on that particular test set, call it v1, and they might not be as successful on a new test set. So this paper goes about collecting a second test set for ImageNet in exactly the way the v1 test set was collected; they try to match the original collection process exactly, and then evaluate models on the new test set. They do this not only for ImageNet but also for CIFAR-10, a much smaller dataset on which a lot of computer vision algorithms are also evaluated.

Let's put up a hypothesis: we have pretty much overfitted to ImageNet by now. State of the art on ImageNet is a very prestigious thing to have, so tuning your hyperparameters and learning rate such that the model performs well on test set v1 is very likely.

The most important plots in this paper look like this: the bottom axis is the accuracy on v1, and the other axis is the accuracy on v2. The diagonal line means that a model achieving 50% accuracy on v1 also achieves 50% on v2; being on this line would mean we have not overfitted, and models perform equally well on both sets. What would we expect if we had overfitted? The models that perform poorly might sit near the line, since they are not really overfit, but over the years, as we get better on v1, we stray away from it: we get better on v1 without really getting better on v2. The curve might even bend downwards: the more we overfit to v1, the worse we actually get on v2. That would be a kind of meta-overfitting, and I think this was the initial hypothesis of the people who ran this experiment: can we see an effect like this, or is the effect a more continuous one, where we don't overfit at all?
What they found was neither, and that is what makes these plots so interesting. Again, the dashed line is the no-overfitting line, and every dot is a model. This model here, for example, performs at about 67% accuracy on v1 and at something like 53% on v2. What the fitted line tells you is that every model drops by about the same amount: we see neither the identity pattern nor the bending-down pattern, but a line that is shifted down. And if you look closely, especially on CIFAR-10, you can see that the line, rather than being tilted below the diagonal, is actually slanted slightly upwards: its slope is greater than the one-to-one slope.

This is extremely interesting if you think about what it means. It means the ordering of models is pretty much preserved: the best model in the world on v1 is still number one on v2; the model ranked three on v1 is also ranked three on v2. If a model does well on v1, it also does well on v2 relative to other models, yet every model experiences the drop in accuracy. The most interesting part is that the better you do, the smaller the drop gets: this drop here is smaller than that drop there. That is exactly counter to the notion of overfitting; it seems the more accurate you get on v1, the more you are able to close the gap between v1 and v2, and if you extrapolate, you might even think the two lines converge near 100%. So the models that do better on v1 are not only not overfit; they actually experience less of a drop, generalizing better to the new test set than the worse models. That is crazy. And it is not only neural networks: up here you have the deep neural networks, but further down you have k-nearest-neighbor classifiers and the like. It does not seem to be a property of neural networks; it really seems to be a property of the dataset.
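The linearity claim (which, as the paper's model section later makes precise, holds especially well after probit scaling) is easy to probe numerically: transform both accuracy axes with the inverse normal CDF and fit a line. A sketch with made-up accuracy pairs; the real numbers are in the paper's tables:

    import numpy as np
    from scipy.stats import norm

    # Hypothetical (v1, v2) accuracy pairs for a handful of models.
    acc_v1 = np.array([0.55, 0.63, 0.70, 0.76, 0.80])
    acc_v2 = np.array([0.41, 0.50, 0.58, 0.65, 0.70])

    x = norm.ppf(acc_v1)  # probit scaling: accuracy -> z-score
    y = norm.ppf(acc_v2)
    slope, intercept = np.polyfit(x, y, 1)
    print(slope, intercept)  # a slope > 1 mirrors the slightly upward-slanted line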
The paper goes over, first, how they collected these test sets and, second, their hypotheses and investigations into why this phenomenon exists: why we are suddenly worse on the new test set, but worse in a completely different way than expected.

They first discuss potential causes of accuracy drops and propose a decomposition: the entire difference between two datasets, with respect to a classifier, can be decomposed into three gaps (the first and last quantities are the two accuracies, with an expanding sum in the middle). The first is the generalization gap, the gap between different datasets of the same distribution; this is the familiar notion from train versus test, where you train on the training set and then have a generalization gap to the test set. Here it refers to the difference between generalizing to the first and to the second set. They argue this is not the issue, because they can put up confidence intervals: if the two sets were identically distributed, a 95% confidence interval would allow only about plus or minus one percent of generalization gap, so they rule it out as the reason for the big discrepancy.

The other two are the adaptivity gap and the distribution gap. The adaptivity gap is what we hypothesized at the beginning: overfitting to one of the two datasets. Because of the shape of the curve, they rule out the adaptivity gap as well; we went over why, since under adaptivity the plots would look completely different. The only thing remaining is the distribution gap: the difference most likely comes from the old and the new test set having different distributions, and they go into why that is. I'll compress their hypothesis into a short summary.

They argue that the Mechanical Turk part of the processing pipeline has a very big influence. What happens when you collect an ImageNet test set? You start with Flickr, a big image database in which, as far as I understand, images are tagged and searchable. The ground-truth class labels come from a system called WordNet, a linguistic classification of words into hierarchical groups: "animal" is a word, below it sits "dog", below that "terrier", and so on. You search Flickr for candidate images and then present them to human raters on a system called Mechanical Turk, where you can just sign up and do these kinds of tasks. A human is shown a grid of images and a class, say "terrier", and asked to select all images in which a terrier appears; the human might select this, this, this and this one. That gives you what they call a selection frequency: how often a particular image was selected for a given class, aggregated across many Mechanical Turk workers. If the selection frequency of an image goes towards 1.0, meaning every single human selected it for this class, you can be pretty sure it is in the class; if it goes towards 0, you can be pretty sure it is not. The paper thinks this selection frequency criterion is the main reason the datasets end up with different difficulties. Even though they tried to match the process exactly, down to the questions posed to the Turk workers, and even restricted their Flickr date range to the range in which the original ImageNet was collected, they still believe there is a difference in how the Mechanical Turk workers rated the images, or in how the images were then selected using the selection frequency.
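In code, the selection-frequency statistic and the thresholding used to assemble a candidate test set are only a few lines. A sketch under my own naming (the authors' released pipeline is the authoritative version):

    import numpy as np

    def selection_frequency(votes):
        """votes[w] = 1 if MTurk worker w selected the image for the class, else 0."""
        return float(np.mean(votes))

    # Hypothetical candidate pool for the class "terrier": image id -> worker votes.
    pool = {"img_a": [1, 1, 1, 1, 0],   # frequency 0.8
            "img_b": [1, 0, 1, 0, 0],   # frequency 0.4
            "img_c": [1, 1, 1, 1, 1]}   # frequency 1.0

    threshold = 0.7  # e.g. the Threshold0.7 variant of the new test set
    selected = [img for img, v in pool.items() if selection_frequency(v) >= threshold]
    print(selected)  # ['img_a', 'img_c']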
So what they do is they select different images depending on different criteria, so they can test these hypotheses. Their original V2 test set is called matched frequency. In matched frequency, you kind of play a little game: every now and then you implant an image of the V1 test set into the annotation grid, either of the class in question or of another class, so you can do a kind of quality control. From this you can find out what the selection frequency of V1 images is for, say, the class terrier, and then you can simply select images with the same selection frequency for V2. So if you know the selection frequency for V1 was 0.8, you can just put the threshold at 0.8, and you can be reasonably sure that you have selected a similar difficulty. Or so you would think. If they do it like this, they get the drop that you saw at the beginning. They also try a fixed threshold of 0.7, which I guess is arbitrary-ish, and they also build a top-images variant, where for each class they chose the ten images with the highest selection frequency, so these would be the sort of easiest ones. If you look at the graphs for ImageNet with these different data sets: with the threshold of 0.7, the new line is much closer to the old line, and with the top images, where you just select the easy ones, the new line is actually above the old one. Note that the red line is now above the black line, while before it was below. It is still extremely interesting that there is this almost linear relationship between the V1 and V2 accuracies, even on this easier data set. So they basically show that by thresholding differently for the new data set, you have a very good grip on the difficulty of the new data set, which means this process of matching the selection frequency, this matched frequency data set, might actually not result in a data set of the same difficulty. They do have some more experiments with different difficulties, so let's jump down there. It's a bit of a jump, because there are over 70 pages in this paper; the appendix has its own table of contents, that's how crazy that is. But I want to show you these plots. Here what they do is they form different bins of the set, so they have bins of easy samples, less easy samples, and so on, and you can see that you have a pretty good grip on where this line ends up. If you only take the easy samples you're up here, which is about what we saw before; if you take the entire new test set you get the red line; and if you take, say, the second hardest bin, you're somewhere in between. So you have a good hold on where this line is. But that still doesn't explain why the accuracy drops if you try to follow the protocol exactly; that is still a mystery. They have identified a variable that influences this a lot, but even if they try to set that variable as it was set in V1, the result is not equal, and the mystery remains, I would say. So the last thing they do is try to come up with a model for this. Their hypothesis now is that the new test set just is harder, and they have an analytical, formal model of why, if you assume certain things, this results in this line.
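Here is a small, hypothetical simulation of this kind of difficulty model, just to show that a per-model skill, a per-image difficulty, and a Gaussian CDF link do produce a straight line on probit axes; all the numbers below are invented, none come from the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed model: classifier j gets image i right with probability
# Phi(skill_j - tau_i), where Phi is the standard normal CDF.
skills = np.linspace(-1.0, 2.0, 20)       # one skill value per model
tau_v1 = rng.normal(0.0, 1.0, 10_000)     # V1 image difficulties
tau_v2 = rng.normal(0.5, 1.0, 10_000)     # V2 difficulties, shifted up

acc_v1 = norm.cdf(skills[:, None] - tau_v1).mean(axis=1)
acc_v2 = norm.cdf(skills[:, None] - tau_v2).mean(axis=1)

# Every model drops on V2, yet on probit-transformed axes the two
# accuracies fall almost exactly on a straight line.
slope, intercept = np.polyfit(norm.ppf(acc_v1), norm.ppf(acc_v2), 1)
print(slope, intercept)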
The really interesting thing about the paper is that the accuracies all fall exactly on this line. There's this linear relationship, especially if you apply a probit scaling to the accuracies. So they put up a model where you say: what if we assume that each example i has a difficulty, just a number saying how difficult it is, and each model j has a probability of correctly classifying an image of difficulty tau, given by some function. That function is the probability that the model classifies an image correctly, given that the image is tau hard, and it is an increasing function. They parameterize it as the CDF of a normal distribution. So each model has a sort of skill number, and each image has a difficulty tau. If the skill is higher than tau, the model will probably classify the image correctly; if the skill is lower, so that the difference is negative, it will probably classify it incorrectly. Since this is the CDF of a normal distribution, if the difference is zero, it's fifty-fifty whether the model classifies the image correctly. If you assume this, and you assume a bunch of other Gaussian error distributions, then the performance of a model on test set V2 is exactly the performance of the model on test set V1 times one scalar, plus another scalar, which is a linear relationship. So they put up a model for this. Of course this doesn't explain anything; it doesn't explain the phenomenon, but it still gives a clue of why the linear relationship might result from the test sets having different difficulty properties. After discussing related work, they go on to say what one can do, with suggestions for future research. I especially like the super holdout: if you ever make a data set, then also make a super holdout set, and once you're almost out of your career, just come up with it and say, oh, I have this lost data set here that I made way back. It will be fantastic. Alright, so I think this paper is very interesting, and I think everyone who sees and reads it comes up with their own hypothesis of why this is and what's going on here. They have investigated a lot of this. I especially want to highlight an experiment where they take a part of V2, split it into a train and a test portion, and put that train portion together with the original training set into a kind of super training set. So you train on both things together, and they check whether it improves performance on the V2 test portion. You would think that including this training data would improve things, and it does, but only by a minuscule amount. They've done a whole bunch of experiments like this to investigate what's going on, all in this 70-page appendix that you can go over. Alright, that was what I had to say for this paper. If you like this video, consider subscribing and comment what you think; I usually answer, or like, or at least read most comments. Thanks for listening, bye bye.
[ { "start": 0, "end": 3.3000000000000003, "text": " Hi there today we're looking at to do image net classifiers" }, { "start": 3.6, "end": 9.78, "text": " Generalized to image net by Benjamin wrecked Rebecca are Olaf's Ludwig Schmidt and Vyshal Shankar" }, { "start": 10.1, "end": 12.620000000000001, "text": " So the premise of this paper is pretty simple" }, { "start": 12.98, "end": 16.4, "text": " We've been training models on image net now for a while" }, { "start": 17.1, "end": 24.54, "text": " Almost ten years to be exact image net is this data set with a lot of images millions of images" }, { "start": 25.62, "end": 28.76, "text": " categorized into many thousands of categories now the" }, { "start": 28.76, "end": 35.44, "text": " The classic part of image net that people know is that has about 1.5 million images in" }, { "start": 36.24, "end": 38.24, "text": " 1,000 different classes and" }, { "start": 39.28, "end": 40.84, "text": " this" }, { "start": 40.84, "end": 45.52, "text": " Image net was one of the main data sets now or has been for the last few years" }, { "start": 45.6, "end": 49.84, "text": " And as you can see on the right here the error rate year after year" }, { "start": 50.6, "end": 56.32000000000001, "text": " Was pretty much I think cut in half every year since 2012" }, { "start": 56.32, "end": 62.52, "text": " When the first net Alex net was using deep learning instead of the classical" }, { "start": 63.6, "end": 65.6, "text": " visual computer vision approaches" }, { "start": 65.92, "end": 69.6, "text": " so we've been training on image net for a while and the" }, { "start": 70.44, "end": 74.64, "text": " idea or the question this paper asks if we" }, { "start": 75.96000000000001, "end": 82.12, "text": " Collect a second test set right so for image net we have a train and a test set" }, { "start": 82.12, "end": 89.64, "text": " If we now collect a second test set here test v2" }, { "start": 90.36, "end": 92.24000000000001, "text": " right if" }, { "start": 92.24000000000001, "end": 96.78, "text": " We have a model that was trained on training here and evaluated on test" }, { "start": 97, "end": 102, "text": " Does it also perform well on this second test set right?" }, { "start": 103.08000000000001, "end": 108, "text": " the idea of this being that maybe over the years we've" }, { "start": 108, "end": 113.8, "text": " Tuned our hyper parameters and all such that the models perform well on that particular test set" }, { "start": 113.8, "end": 122, "text": " Let's call this v1 right and it might not it might not be as successful on a new test set" }, { "start": 122.32, "end": 127.03999999999999, "text": " So this paper goes about collecting a test set to image net in" }, { "start": 127.68, "end": 131.4, "text": " Exactly the way that the v1 test set was collected right?" 
}, { "start": 132.04, "end": 136.8, "text": " So they they try to match exactly the process of how v1 was collected" }, { "start": 136.8, "end": 142.28, "text": " To create another test set and then they evaluate models on that new test set" }, { "start": 142.76000000000002, "end": 148.56, "text": " They do this not only for image net but also for c4 10 which is a much smaller data set" }, { "start": 148.56, "end": 153.32000000000002, "text": " But also a lot of computer vision algorithms are evaluated on c4 10" }, { "start": 154.32000000000002, "end": 157.28, "text": " So let's just put up a hypothesis here" }, { "start": 157.28, "end": 165.20000000000002, "text": " The hypothesis is that we have pretty much over fitted to image net by now. This is a very prestigious" }, { "start": 165.2, "end": 169.44, "text": " Number to get if you have state-of-the-art on image net and therefore" }, { "start": 169.76, "end": 176.95999999999998, "text": " Tuning your hyper parameters and your learning rate and everything such that it performs well on the test set v1 is very likely" }, { "start": 177.28, "end": 181.35999999999999, "text": " So this paper has the most important plots are like this" }, { "start": 181.35999999999999, "end": 187.95999999999998, "text": " It has two axes and on the bottom axis is the performance on v1 right and" }, { "start": 188.56, "end": 190.56, "text": " so performance" }, { "start": 190.56, "end": 192.56, "text": " and that means" }, { "start": 192.56, "end": 198.28, "text": " Accuracy basically so accuracy and on line two is v2 accuracy" }, { "start": 199.28, "end": 200.48, "text": " now" }, { "start": 200.48, "end": 202.48, "text": " Here is one" }, { "start": 202.48, "end": 204.48, "text": " Here is zero" }, { "start": 205.12, "end": 210.16, "text": " This line here means that if a model is performing" }, { "start": 210.88, "end": 216.8, "text": " 50% accuracy on v1 it is also performing 50% accuracy on v2" }, { "start": 216.8, "end": 224.32000000000002, "text": " So being on this line would basically mean we have not over fitted and the model is performing equally well on both" }, { "start": 224.88000000000002, "end": 226.88000000000002, "text": " both sets" }, { "start": 226.88000000000002, "end": 233.04000000000002, "text": " So what what now if we assume that we have over fitted if we have over fitted?" 
}, { "start": 233.04000000000002, "end": 238, "text": " We would assume that you know the models that perform really poorly might also you know" }, { "start": 238, "end": 244, "text": " They perform kind of they're not really over fitted but over the years as we've gotten better on v1" }, { "start": 244, "end": 248.4, "text": " We stray away from this so we get better on v1" }, { "start": 248.72, "end": 256.56, "text": " But we don't really get better on v2 right and that means we've over fitted to v1 and this might even go down right the more" }, { "start": 256.88, "end": 260, "text": " The more kind of we might overfit to v1" }, { "start": 260.72, "end": 263.28, "text": " The worse we're actually going to get on v2" }, { "start": 264.24, "end": 271.84, "text": " So this this is kind of a meta over fitting so this is what we would expect if we over fit right over" }, { "start": 271.84, "end": 273.84, "text": " over fit to v1" }, { "start": 274.71999999999997, "end": 280.56, "text": " And I think this was the initial hypothesis behind the people that ran this experiment to check" }, { "start": 280.96, "end": 286.08, "text": " Can we see an effect like this or is the effect more?" }, { "start": 286.71999999999997, "end": 292.15999999999997, "text": " A continuous one where we don't over fit and what they found was neither" }, { "start": 292.96, "end": 296.64, "text": " And these are basically these interesting plots here" }, { "start": 296.64, "end": 302.15999999999997, "text": " So again the dashed line here would be the not over fitting line" }, { "start": 302.15999999999997, "end": 307.44, "text": " So what they find if you for example look at image net every dot here is a model right?" }, { "start": 307.91999999999996, "end": 313.03999999999996, "text": " So this model right here is performing with like a 67 percent accuracy" }, { "start": 313.68, "end": 320, "text": " On v1 and it is performing with something like a 53 percent accuracy on v2" }, { "start": 320.32, "end": 324.15999999999997, "text": " If you look at this line here what that means is that" }, { "start": 324.16, "end": 326.64000000000004, "text": " Every model kind of drops" }, { "start": 327.36, "end": 329.36, "text": " By about this much" }, { "start": 329.84000000000003, "end": 330.96000000000004, "text": " right" }, { "start": 330.96000000000004, "end": 332.08000000000004, "text": " so" }, { "start": 332.08000000000004, "end": 334.08000000000004, "text": " Not only not we don't see" }, { "start": 334.64000000000004, "end": 340, "text": " This and we don't see this but we see this line is shifted down" }, { "start": 340, "end": 347.44000000000005, "text": " And if you look closely especially on c410 you can see the line rather than being tilted like this is actually tilted" }, { "start": 347.92, "end": 350.40000000000003, "text": " A bit slanted upwards right?" }, { "start": 350.4, "end": 358.96, "text": " So the the the the angle is not is higher is greater the slope is greater than the one-to-one slope" }, { "start": 358.96, "end": 360.96, "text": " This is extremely interesting" }, { "start": 361.52, "end": 364, "text": " If you think about what does that mean?" 
}, { "start": 364.71999999999997, "end": 367.28, "text": " It means that if you" }, { "start": 368.79999999999995, "end": 371.84, "text": " Take a model right right here" }, { "start": 374.23999999999995, "end": 377.44, "text": " If you look at its order" }, { "start": 377.44, "end": 380.56, "text": " It's it's it's a this" }, { "start": 380.56, "end": 383.28, "text": " Let's let's look at this model here. This model is number one" }, { "start": 383.76, "end": 388.32, "text": " Best model in the world right it will still be number one on v2" }, { "start": 389.36, "end": 397.28, "text": " This model here is number number three rank three on v1. It will also be rank three on v2 right?" }, { "start": 397.28, "end": 400.88, "text": " So the order of models is is pretty much constant" }, { "start": 400.88, "end": 405.68, "text": " So if if a model is doing well on v1 it is also doing well on v2" }, { "start": 405.68, "end": 415.04, "text": " In relation to other models but every model experiences this drop in in accuracy" }, { "start": 415.04, "end": 419.92, "text": " And the most interesting part is that and again you can see this more here" }, { "start": 422, "end": 429.92, "text": " The the better you're doing the smaller this drop gets right this drop here is smaller than this drop here" }, { "start": 429.92, "end": 436.24, "text": " This is exactly counter to the notion of overfitting" }, { "start": 437.84000000000003, "end": 446.88, "text": " Where it seems the more accurate you get on v1 the more you're able to close this gap between v1 and v2" }, { "start": 446.88, "end": 455.92, "text": " And if you extrapolate here you might as well think that once we are at at or actually sorry 100% is already here" }, { "start": 455.92, "end": 463.92, "text": " If you could go higher maybe or maybe you can see here that in the end these will actually converge" }, { "start": 463.92, "end": 472.88, "text": " But nevertheless if the models that are doing better on v1 aren't only not overfit" }, { "start": 472.88, "end": 477.52000000000004, "text": " But they are actually experience less of a drop with regards to a new test set" }, { "start": 477.52000000000004, "end": 484.8, "text": " So they generalize better to the new test set than the the worst models right" }, { "start": 484.8, "end": 495.28000000000003, "text": " And that is crazy and it is not only neural networks right so up here for up here you have the the deep neural networks and whatnot" }, { "start": 495.28000000000003, "end": 506.16, "text": " But you can also go with I believe some of these or even further down here are k nearest neighbor sorry k nearest neighbor classifiers and things like this" }, { "start": 506.16, "end": 513.92, "text": " So it doesn't seem to be a property of neural networks it really seems to be a property of the data set" }, { "start": 513.92, "end": 528.0799999999999, "text": " And this paper first of all goes over how they collected these and second of all their hypotheses and investigations into why this phenomenon exists" }, { "start": 528.0799999999999, "end": 543.4399999999999, "text": " Why we are all of a sudden worse on the new test set but completely worse in a different way than we expected compared to the original test set" }, { "start": 543.44, "end": 554.72, "text": " All right so they first say potential causes of accuracy drops and they propose a model" }, { "start": 554.72, "end": 563.6, "text": " They say here are two here is the entire difference between two data sets with regards to a 
classifier" }, { "start": 563.6, "end": 572.1600000000001, "text": " It can be decomposed into three different gaps and you see the first and the last part here are the ones from the left side" }, { "start": 572.16, "end": 576.9599999999999, "text": " So this is an expanding sum in the middle" }, { "start": 576.9599999999999, "end": 588.64, "text": " So there is the generalization gap the generalization gap refers to the gap that you have between different data sets of the same distribution" }, { "start": 588.64, "end": 598.9599999999999, "text": " So these this is like you know from the train versus test set you train on the training set and then you have a generalization gap to the test set" }, { "start": 598.96, "end": 608.1600000000001, "text": " In this case the generalization gap simply refers to the difference between the generalization to the first and to the second set" }, { "start": 608.1600000000001, "end": 622.32, "text": " They argue that this isn't really an issue here because they say they can put up confidence intervals" }, { "start": 622.32, "end": 633.0400000000001, "text": " So if those were identically distributed how much would the generalization gap be at maximum given some kind of confidence interval" }, { "start": 633.0400000000001, "end": 640.5600000000001, "text": " 95 confidence interval would only give you plus minus a one percent difference in generalization gap" }, { "start": 640.5600000000001, "end": 645.84, "text": " So they rule out that this is the reason for the big discrepancy" }, { "start": 645.84, "end": 650, "text": " Then have two others they have the adaptivity gap and the distribution gap" }, { "start": 650, "end": 657.36, "text": " So the adaptivity gap is what we hypothesized at the beginning" }, { "start": 657.36, "end": 663.12, "text": " It is the overfitting to the first data set or to one of the two data sets" }, { "start": 663.12, "end": 672, "text": " So if you have a big adaptivity gap then you have fitted much more to one than to the other data set" }, { "start": 672, "end": 680, "text": " Now because of the shape of the curve being the way it is they also rule out the adaptivity gap" }, { "start": 680, "end": 687.6, "text": " And we went over why right because it would look completely different than it does" }, { "start": 687.6, "end": 692, "text": " Now the only thing remaining here is this distribution gap" }, { "start": 692, "end": 704, "text": " So they explain that this difference here most likely comes from the fact that the old and the new test set have a different distribution" }, { "start": 704, "end": 708, "text": " And they go into why that is" }, { "start": 708, "end": 724, "text": " And I'm going to compress their hypothesis of why that is into a short summary" }, { "start": 724, "end": 728, "text": " Let's say we won't go over the entire paper" }, { "start": 728, "end": 744, "text": " They basically say that the Mechanical Turk part of the processing pipeline has a very big influence" }, { "start": 744, "end": 748, "text": " So what happens when you collect an ImageNet test set? 
You start with Flickr" }, { "start": 748, "end": 760, "text": " This is a big image database and the images as far as I can understand they are tagged and you can search for them and so on" }, { "start": 760, "end": 766, "text": " So they start by going to Flickr and searching for images" }, { "start": 766, "end": 772, "text": " And their ground truth class labels come, you may know this, from a system called WordNet" }, { "start": 772, "end": 778, "text": " And WordNet is sort of a linguistic classification of words into groups" }, { "start": 778, "end": 788, "text": " So it would have hierarchical, it would have animals, animal being a word and then below animal it would have dog and then it would have terrier" }, { "start": 788, "end": 792, "text": " And it would have these hierarchical groupings of these words" }, { "start": 792, "end": 800, "text": " And they search on Flickr for images and then they put the images to a human rater" }, { "start": 800, "end": 809, "text": " So the human rater on a system called Mechanical Turk, you may know it, you can just sign up there and do these kind of tasks" }, { "start": 809, "end": 817, "text": " They present the human with a grid of images and a class, terrier" }, { "start": 817, "end": 827, "text": " And they say please select all the images where a terrier appears, so the human might select this, this, this and this one" }, { "start": 827, "end": 832, "text": " And that will give you what they call a selection frequency" }, { "start": 832, "end": 842, "text": " So, selection frequency, so how often was a particular image selected given that class" }, { "start": 842, "end": 852, "text": " And of course the higher, so you do this over many, sorry, the selection frequency is across many of these Mechanical Turk workers" }, { "start": 852, "end": 865, "text": " So if for a given image the selection frequency is high, let's say going towards 1.0, that means every single human selected that image to be in this class" }, { "start": 865, "end": 873, "text": " So you can be pretty sure it's in the class, if it goes towards 0, then you can be pretty sure it's not in the class" }, { "start": 873, "end": 891, "text": " And this selection frequency criteria, the paper thinks that this is the main criteria why the datasets are of different difficulties, let's say" }, { "start": 891, "end": 899, "text": " Because even though they try to match the process exactly, even the questions they pose to the Turk workers" }, { "start": 899, "end": 907, "text": " They even restricted their flicker date range to the date range where the original image net was collected" }, { "start": 907, "end": 916, "text": " They still think that there is a difference in how the Mechanical Turk workers basically rated the images" }, { "start": 916, "end": 920, "text": " Or then after that how they were selected using the selection frequency" }, { "start": 920, "end": 930, "text": " So what they do is they do different, they select different images depending on criteria so they can test these hypotheses" }, { "start": 930, "end": 935, "text": " Their original V2 test set is called matched frequency" }, { "start": 935, "end": 941, "text": " Now what you do in matched frequency is you kind of play a little game" }, { "start": 941, "end": 951, "text": " What you do is every now and then you will implant an image here of the V1 test set" }, { "start": 951, "end": 957, "text": " So thereby, of the V1 test set, either of this class or of another class" }, { "start": 957, "end": 960, 
"text": " So you can kind of do a quality control" }, { "start": 960, "end": 970, "text": " From this you can now find out what is the selection frequency of images in the V1 class for Terrier" }, { "start": 970, "end": 977, "text": " And then you can simply select the same one in the V2" }, { "start": 977, "end": 986, "text": " So if you know the selection frequency for V1 was 0.8, you can just put the threshold here at 0.8" }, { "start": 986, "end": 993, "text": " And you know, you can be reasonably sure that you have selected a similar difficulty" }, { "start": 993, "end": 995, "text": " Or so you would think, right?" }, { "start": 995, "end": 1002, "text": " So if they do it like this, then they get this drop that you saw at the beginning" }, { "start": 1002, "end": 1010, "text": " They also do this threshold 0.7, which I guess is arbitrary-ish" }, { "start": 1010, "end": 1015, "text": " And then they also say top images, where they say" }, { "start": 1015, "end": 1020, "text": " For each class we chose the 10 images with the highest selection frequency" }, { "start": 1020, "end": 1026, "text": " So these would be the sort of easiest ones" }, { "start": 1026, "end": 1034, "text": " And if you look at the graphs, and this is ImageNet for these different datasets" }, { "start": 1034, "end": 1042, "text": " So if you do the threshold 0.7 that they selected, now the old line was somewhere here" }, { "start": 1042, "end": 1047, "text": " Now the new line is much closer, right? You see that here" }, { "start": 1047, "end": 1052, "text": " And if you do these top images, so you just select the easy ones" }, { "start": 1052, "end": 1057, "text": " The new line is actually above, right? This is now" }, { "start": 1057, "end": 1064, "text": " Note that the red line here is above the black line, while here it's below" }, { "start": 1064, "end": 1075, "text": " It is still extremely interesting that still there is this almost linear relationship between the V1 and V2 accuracies" }, { "start": 1075, "end": 1079, "text": " And even here on this easier dataset there is" }, { "start": 1079, "end": 1088, "text": " So they basically hypothesize by thresholding differently for the new dataset" }, { "start": 1088, "end": 1095, "text": " You have a very good grip on the difficulty of the new dataset" }, { "start": 1095, "end": 1102, "text": " So this process of matching the selection frequency, so this matched frequency dataset" }, { "start": 1102, "end": 1108, "text": " It might actually not result in the same difficulty in dataset" }, { "start": 1108, "end": 1117, "text": " They do have some more experiments where they experiment with different difficulties" }, { "start": 1117, "end": 1127, "text": " So let's actually jump down there, it's a bit of a jump because there are over 70 pages in this paper" }, { "start": 1127, "end": 1135, "text": " The appendix has its own content directory, that's how crazy that is" }, { "start": 1135, "end": 1145, "text": " But I want to show you these plots, so here what they do is they do different bins of this set" }, { "start": 1145, "end": 1150, "text": " So they have bins of easy samples, less easy samples and so on" }, { "start": 1150, "end": 1157, "text": " And you can see that you have a pretty good grip on where this line is" }, { "start": 1157, "end": 1162, "text": " So if you only take the easy samples you're up here, and this is about what we saw before" }, { "start": 1162, "end": 1169, "text": " If this is the old line, it's the red line, right? 
If you take the entire new test set" }, { "start": 1169, "end": 1176, "text": " If you just take bin the second hardest bin you're somewhere here or here or here" }, { "start": 1176, "end": 1182, "text": " So you have a good hold on where this line is" }, { "start": 1182, "end": 1187, "text": " But that still doesn't explain that if you try to follow the protocol exactly" }, { "start": 1187, "end": 1191, "text": " Why does the accuracy drop? That is still a mystery" }, { "start": 1191, "end": 1197, "text": " Even though they say here is a variable that influences this a lot" }, { "start": 1197, "end": 1203, "text": " If they try to set the variable as it was set in v1, it is not equal" }, { "start": 1203, "end": 1208, "text": " And still mystery remains, I would say" }, { "start": 1208, "end": 1216, "text": " So the last thing they do is they try to just come up with a model for this" }, { "start": 1216, "end": 1223, "text": " So their hypothesis now is that the new test set just is harder" }, { "start": 1223, "end": 1231, "text": " And they have an analytical kind of a formal model of why if you assume certain things" }, { "start": 1231, "end": 1234, "text": " This results in this line, right?" }, { "start": 1234, "end": 1242, "text": " The really interesting thing about the paper is that the accuracies, they all fall exactly on this line" }, { "start": 1242, "end": 1248, "text": " There's this linear relationship, especially if you do like probit scaling of accuracies" }, { "start": 1248, "end": 1255, "text": " There's this line, and so they put up a model where you say" }, { "start": 1255, "end": 1262, "text": " What if we assume that each example i has a difficulty, right?" }, { "start": 1262, "end": 1264, "text": " It's just a number how difficult it is" }, { "start": 1264, "end": 1276, "text": " And each model j has a probability of correctly classifying an image with difficulty tau here" }, { "start": 1276, "end": 1279, "text": " Given by this function, right?" }, { "start": 1279, "end": 1288, "text": " So this here is the probability that the model will classify an image correctly, given that it's tau hard" }, { "start": 1288, "end": 1297, "text": " And so this is an increasing function" }, { "start": 1297, "end": 1304, "text": " And they put up this following parameterization" }, { "start": 1304, "end": 1311, "text": " This is the CDF of the... So they put up a model for this function now, right?" }, { "start": 1311, "end": 1316, "text": " For this, they say if we assume that it is like this" }, { "start": 1316, "end": 1321, "text": " That each model has a sort of a skill number" }, { "start": 1321, "end": 1327, "text": " So each model has a skill, and each image has a difficulty that is tau" }, { "start": 1327, "end": 1333, "text": " If the skill is higher than the tau, probably it will classify correctly" }, { "start": 1333, "end": 1338, "text": " If the skill is lower, probably, then this number is negative, it will classify it incorrectly" }, { "start": 1338, "end": 1345, "text": " So this is the CDF of a normal distribution, it goes something like this, right?" 
}, { "start": 1345, "end": 1354, "text": " And if the zero point is here, so if this number here is zero, it's like 50-50" }, { "start": 1354, "end": 1359, "text": " Whether it will classify it correctly" }, { "start": 1359, "end": 1365, "text": " So if you assume this, and you assume a bunch of other Gaussian error distributions" }, { "start": 1365, "end": 1380, "text": " Then the performance of a model on the test set V2 is exactly the performance of the model under test set V1" }, { "start": 1380, "end": 1387, "text": " Times this scalar here, plus this scalar here, which is a linear relationship" }, { "start": 1387, "end": 1390, "text": " So they put up a model for this" }, { "start": 1390, "end": 1393, "text": " Of course this doesn't explain anything, right?" }, { "start": 1393, "end": 1408, "text": " This doesn't explain the phenomena, but it still gives a clue of why the linear relationship here might result from the test set having a different difficulty setting" }, { "start": 1408, "end": 1411, "text": " Or a different difficulty properties" }, { "start": 1411, "end": 1420, "text": " So they go on, after discussing related work, they go on to say what can one do" }, { "start": 1420, "end": 1428, "text": " And suggestions for future research, I especially like the super holdout" }, { "start": 1428, "end": 1441, "text": " So if you ever make a data set, then make a super holdout set" }, { "start": 1441, "end": 1452, "text": " And once you're almost out of your career, just come up with it and say, oh I have this lost data set here that I made way back" }, { "start": 1452, "end": 1454, "text": " It will be fantastic" }, { "start": 1454, "end": 1458, "text": " Alright, so I think this paper is very interesting" }, { "start": 1458, "end": 1467, "text": " And I think everyone that sees and reads this comes up with their own hypothesis of why this is and what's going on here" }, { "start": 1467, "end": 1469, "text": " They have investigated a lot of this" }, { "start": 1469, "end": 1476, "text": " Especially I want to highlight an experiment where they taken part of V2 here" }, { "start": 1476, "end": 1486, "text": " So they split this one into a train and a test" }, { "start": 1486, "end": 1492, "text": " And they put this and this training together into like a super train" }, { "start": 1492, "end": 1499, "text": " So you train on both things together and they see whether it improves at this test set" }, { "start": 1499, "end": 1506, "text": " You would think that if you put this training in there that it would improve" }, { "start": 1506, "end": 1510, "text": " And it does improve, but it improves by like a miniscule amount" }, { "start": 1510, "end": 1514, "text": " So they've done a whole bunch of experiments like this to investigate what's going on" }, { "start": 1514, "end": 1519, "text": " This is all in this 70 page appendix that you can go over" }, { "start": 1519, "end": 1522, "text": " Alright, that was what I had to say for this paper" }, { "start": 1522, "end": 1529, "text": " If you like this video, consider subscribing and comment what you think" }, { "start": 1529, "end": 1533, "text": " I usually answer or like or read most comments" }, { "start": 1533, "end": 1553, "text": " Thanks for listening, bye bye" } ]
PZypP7PiKi0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Gradient Surgery for Multi-Task Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "neural networks", "multi task", "conflicting gradients", "magnitudes", "adam", "sgd", "momentum", "optimization", "projection" ]
Multi-Task Learning can be very challenging when gradients of different tasks are of severely different magnitudes or point into conflicting directions. PCGrad eliminates this problem by projecting conflicting gradients while still retaining optimality guarantees. https://arxiv.org/abs/2001.06782 Abstract: While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood. In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients. We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task RL problems, this approach leads to substantial gains in efficiency and performance. Further, it is model-agnostic and can be combined with previously-proposed multi-task architectures for enhanced performance. Authors: Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Gradient Surgery for Multi-Task Learning by Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman and Chelsea Finn. So in this paper, the concern is a thing called multitask learning. Now what is multitask learning? It has some very subtle distinctions from other things, and that's, I think, why it's important to look at it a bit. So let's say you have a learning problem with multiple tasks; this seems easy enough. What we mean is that we have the same input, but we want to perform two different tasks, task one and task two. For example, if the input is a food object, task one could be: is it a fruit? Task two could be: how many calories does it have? The input is this one food item, and you want to know both things. Ideally, what you could do is train two separate machine learning classifiers: classifier one simply does the "is it a fruit" thing, classifier two simply does "how many calories does it have". Let's actually say the input is a food picture; since Instagram is full of food pictures, we have lots of training data, at least unlabeled, and people usually label it anyway. So we could train two different models. But since both tasks deal with the same input, they actually deal with the same input distribution, it would be nice if we could share a representation. So maybe we have some neural network with many layers, we take the hidden representation at the end, and we just have maybe one or two fully connected layers for each individual task. Our goal would be that this hidden representation is shared. And why could that help? Because maybe we have lots of training data for "how many calories does it have", but not that much training data for "is it a fruit": a big database for the first task, but only a handful of data points for the second. Or we might just not have much training data for either task, and we might benefit from training this shared representation. You might have already seen something similar with BERT. In BERT's case, the input is text, and then you do something different, and that's why BERT is different from multitask learning. In BERT, you first do the masked language model pre-training; that's step one. In step two, you take that and fine-tune it on a number of tasks, say question answering, sentiment detection, entailment, and so on. That is different; that is called pre-training and fine-tuning. In multitask learning, we actually want to train on different tasks at the same time, maybe with different data, and we simply want to create this shared representation. We hope that by combining these tasks, we might learn them better than if we were to learn each task individually.
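As an illustration of this shared-trunk idea, here is a minimal PyTorch sketch with one head per task; the layer sizes, the two heads, and the loss combination are invented for the example, they are not from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedTrunkMultiTask(nn.Module):
    """Shared representation with one small head per task (illustrative)."""

    def __init__(self, in_dim=512, hidden=256):
        super().__init__()
        # Trunk: shared across all tasks.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.fruit_head = nn.Linear(hidden, 1)    # task 1: "is it a fruit?"
        self.calorie_head = nn.Linear(hidden, 1)  # task 2: "how many calories?"

    def forward(self, x):
        h = self.trunk(x)                         # shared representation
        return self.fruit_head(h), self.calorie_head(h)

model = SharedTrunkMultiTask()
x = torch.randn(8, 512)                           # e.g. image features
fruit_logits, calories = model(x)

# The multitask loss is simply the sum of the per-task losses.
loss = (F.binary_cross_entropy_with_logits(fruit_logits, torch.rand(8, 1).round())
        + F.mse_loss(calories, torch.rand(8, 1) * 500))
loss.backward()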
Alright, so this paper says there's a big problem with setups like this, and they illustrate it with an example. Let's say you have a multitask objective, and the learning landscape looks like this. For the objective of task one, you have to imagine this is maybe a neural network with just two weights, weight one and weight two, and this is what the optimization landscape looks like for task one. If you're not used to this kind of depiction: the light parts are high values of the loss function and the darker parts are low values, so you want to get to the darker parts. Now usually we discuss plots like this in terms of optimization. For example, we would talk about SGD, and about the fact that if the step size is too large, then starting here, where does the gradient point? The gradient points in the direction of steepest increase, so the negative gradient would point down the valley. With SGD, maybe we'd go here, then take another gradient step and go there; oh, now we've gone too far, so the gradient now points in the other direction, we go back, and we just keep oscillating. This is a problem with SGD, and what we can do is decrease the step size, for example, and then we converge, or we can use something like Adam, which adjusts the gradient to the variance of the gradient landscape, things like this. So these are problems in optimization. But what happens when you have a multitask objective is this: for just task one, the optimization landscape would look like this if you were to train just this part of the network, where here is theta one and here is theta two, the two weights we care about right now, with everything else fixed. Task one looks like this, but for task two, because it's a different task and we need to set the weights differently to get the desired output, the landscape looks different. So our loss function is going to be a combination: the loss for a given sample is the loss on task one of that sample plus the loss on task two of that sample. That combination is what you see on the right; this plus this equals this. You can see that in task one it almost didn't matter whether we were here or there, both points had a relatively low loss value, and in task two this point here is not an optimum, although the two are somewhat close together. So if you add them, this region still has a low value, but not as low as the much darker one. The landscape for both tasks together looks different from the landscape of either task alone, and your goal is to find the optimal point that works for both tasks. Now the paper identifies problems with this kind of multitask learning, and they say the problem is that you can have what are called conflicting gradients. So look at where the gradients point for the different tasks. We care about the point they care about: they use Adam in this case, their starting point is right here, and they've come this way so far, so we'll stop a little bit before that valley. Let's analyze the gradients. The gradient of task one points in this direction, down the valley, and it's pretty big because the landscape is pretty steep there.
You can see the contour lines here getting closer and closer together; that means the gradient is pretty steep, and it points in that direction. Whereas for task two, at the same point, the gradient points in this other direction, and it is not as steep, because here the lines are still pretty far apart, which means the landscape is relatively flat. This is what the paper calls conflicting gradients. Let me draw them a little larger. First of all, these two gradients have different magnitudes: the magnitude of this one is much larger than the magnitude of that one. Also, the angle between them is large; that is what conflicting means, they are more than 90 degrees apart from each other. If you calculate the resulting gradient, it looks like this, so our algorithm wouldn't actually go down the valley, it would go up the hill again, because you have differently sized gradients from the different tasks pointing in different directions. Now, an important point I was wondering about for a long time: what's the difference between this and simply saying that your loss on any data set D is just the sum of the losses of your individual data points x_i? Because it's the same situation that you can have different data points, and the gradients you get from them can differ. If you've never done optimization, sorry, I'm going a bit fast, that means the gradient of your loss over the entire data set with respect to your weights is approximated by the average over your mini batch, something like grad L(D) being roughly (1/n) times the sum over i of grad L(x_i). So your total gradient is the average of the gradients of your individual data points, and these might be conflicting as well: one could point in this direction and another in that direction. And things like Adam and SGD are able to handle that just fine, because we do this averaging operation. I think what is different in multitask learning is that the task distribution is not stochastically IID, let's say. With data points, you can always count on the expectation averaging out the noise: if you do mini batches and aggregate over the whole data set, it evens out, because one gradient might be larger and one might be smaller, but there is no systematic bias that comes from the different data points. Here, as we said, one task might be much harder than the other, or you might have much more data for it, or its loss function is just larger in magnitude. So you can have any number of systematic biases between the different tasks, and therefore the conflicting gradients seem to be a problem, as the small numerical sketch below illustrates.
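Here is a tiny, made-up numerical illustration of that systematic conflict: a large angle between the task gradients plus a magnitude mismatch means the plain summed update locally moves against the weaker task. The two gradient vectors are invented for the demonstration.

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Invented task gradients at the same parameter point:
g1 = np.array([4.0, -1.0])    # task 1: steep valley, large magnitude
g2 = np.array([-1.0, 0.5])    # task 2: flat region, small magnitude

print(cosine(g1, g2))                            # about -0.98, strongly conflicting
print(np.linalg.norm(g1) / np.linalg.norm(g2))   # magnitude ratio about 3.7

g_sum = g1 + g2               # the plain multitask gradient
# g2 @ g_sum < 0 means the descent step -g_sum locally increases
# task 2's loss: the big task 1 gradient dominates the update.
print(g2 @ g_sum)             # -3.25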
So this paper does a good job of analyzing the situation of conflicting gradients, and what I find particularly interesting is that, first of all, they propose an algorithm to deal with them. They say: whenever two gradients are conflicting, we project them onto the normal plane of each other. For example, here in step B, we take the gradient of task i and project it onto the normal plane of the gradient of task j. They have a whole algorithm that makes this general: basically, you get a mini batch of tasks, so they generalize this to a bunch of tasks. You compute the different gradients, which can be stochastic because this works with stochastic data sets, you go through the batch, and whenever two gradients are conflicting, you simply project one onto the normal plane of the other. That results in a set of non-conflicting gradients. You might be a bit appalled by this; I was at first when I saw it. But they actually do, as I said, a good job of analyzing it, and they have two theorems that I find interesting. Theorem one assumes the losses are convex and differentiable, somewhat standard assumptions in optimization. It says that the PCGrad update rule with a step size smaller than one over L, where L is the Lipschitz constant, will converge either to a location where the cosine between two task gradients is exactly negative one, which never happens except if you construct it, or to the optimal value. So this is basically a consistency theorem saying that the algorithm will still converge to the optimum, where the loss is the sum of loss one and loss two of the two tasks. So for two tasks they prove that the algorithm still goes to the correct point if you run it long enough. It doesn't say anything about the speed, though; that is where theorem two comes in. Theorem two says: suppose L is differentiable and the gradient of L is Lipschitz continuous, the same assumptions again, except convexity is no longer needed. Let theta MT and theta PCGrad be the parameters after applying one update to theta with the plain multitask gradient g and with the PCGrad-modified g, respectively; so MT is the original algorithm without their method, and PCGrad is with their method. Moreover, assume a bunch of things, which we'll go into soon. Then the loss at theta PCGrad is smaller than or equal to the loss at theta MT. What does that mean? If you're somewhere in your optimization landscape and your optimum is somewhere else, and the loss function measures roughly how far you are from that optimum, it means that as long as these conditions are given, the loss you get after one update with their method is smaller than the loss you get without it. This is a theorem, and they prove it. For it to hold, they need three things; let's go through them from the back. The third one is a condition on the step size: the step size needs to be large enough, and you can set the step size yourself. Then there is a condition on this epsilon. What is that? It is a curvature-bounding measure, and it is compared to little l, a constant that must be smaller than capital H, which is the curvature. So it depends on the curvature, right?
It depends on the curvature fulfilling some condition, which they state down here: the curvature of the multitask gradient should be large. And the first condition, which we've already seen, is that the cosine of the angle between the gradients needs to be smaller than negative something, where that something depends on the gradients, and it turns out to involve the magnitudes of the gradients. So the first condition here we can neglect, that's the step size condition; this one means the gradients should be conflicting; and this one means there should be sufficient curvature in the loss function. This is exactly what we saw at the beginning: there was sufficient curvature, because in one direction the gradient was very steep and in the other direction it wasn't, which basically means there is a change of steepness in one direction versus the other; and also the two gradients were conflicting. If this is the case, then this algorithm brings you to the optimum faster than the normal algorithm, but only if this is given, and notably, this can change from step to step. They actually have a name for this, the evil trifecta or something like that. But I'm going to read out the conditions as they describe them. First, the angle between the task gradients is not too small, i.e. the two tasks need to conflict sufficiently. Second, the difference in magnitude needs to be sufficiently large. Third, the curvature of the multitask gradient should be large. And fourth, the learning rate should be big enough such that large curvature would lead to overestimation of performance improvement on the dominating task and underestimation of performance degradation on the dominated task. So here you see a little subtlety. I said before that the step size condition was negligible because you can set the step size yourself. In actuality, and I'm not meaning to rag on this, but what does it mean that the learning rate should be big enough such that, and then what follows is something negative, namely that large curvature leads to overestimation? It basically means this method helps when the step size is large. So if I were to play devil's advocate: if I have a problem like this, I could either use their method, PCGrad, or I could just decrease my learning rate and use the classic algorithm, because if I decrease my learning rate relative to the curvature, then this theorem no longer holds, and it is no longer the case that their algorithm gives me guaranteed faster convergence. So there are two ways of looking at this: yes, under these conditions this algorithm is better, but it is better because someone has set the learning rate too high, and this algorithm kind of fixes that. The upside is, of course, that you usually don't want to set your learning rate in accordance with the curvature of the problem, and you don't know the curvature most of the time anyway. So you just set some learning rate, and their algorithm appears to work; when the learning rate is smaller, it is just not guaranteed to outperform the classic algorithm. But I just found this interesting in terms of how you read a paper.
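Here is a minimal sketch of the projection step itself, my own condensed reading of the update rule rather than the authors' released code, so treat the names and details as illustrative.

import numpy as np

def pcgrad(grads, rng=np.random.default_rng(0)):
    """Project each task gradient onto the normal plane of every other
    task gradient it conflicts with (negative dot product)."""
    projected = []
    for i, g in enumerate(grads):
        g = g.copy()
        others = np.array([j for j in range(len(grads)) if j != i])
        rng.shuffle(others)                  # random task order, as described
        for j in others:
            dot = g @ grads[j]
            if dot < 0:                      # conflicting: remove that component
                g -= dot / (grads[j] @ grads[j]) * grads[j]
        projected.append(g)
    # The final update direction is the sum of the surgered gradients.
    return np.sum(projected, axis=0)

# Reusing the conflicting example from above:
g1 = np.array([4.0, -1.0])
g2 = np.array([-1.0, 0.5])
update = pcgrad([g1, g2])
print(update @ g1 > 0, update @ g2 > 0)      # True True: conflicts with neither task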
If you read a paper and you come across conditions like these, you can always see them in two ways: here is what needs to happen for us to succeed, or here is what needs to happen for the others to fail, and therefore we're the only ones who succeed in this regime. As I said, it's a cool algorithm, but I found that to be funny. Alright, so they test this on multitask benchmarks: these MT10 and MT50 benchmarks are robotic manipulation suites. So multitask doesn't only mean supervised learning; in this case it's actually multitask reinforcement learning, so you have everything at once: mini batches, episodes, and multiple tasks. Very cool. In their actual implementation, they have the agent first select a task, for example "pull this", then generate an episode by interacting with the environment back and forth, then put that episode into a replay buffer, then maybe select another task, and so on, until they have a bunch of data in the replay buffer from different tasks. Then they sample episodes from different tasks, from task one, task two, and so on, and that becomes a mini batch in the learning procedure. Pretty intricate, but of course the hope is that you can learn a shared representation with which you can perform all of these tasks better than if you learned each one independently. So that's where MT10 and MT50 come from, and I think they also have goal-conditioned pushing, where the task is simply to push something to a specified goal. The cool thing about this is that it's not only 50 tasks: you can produce an infinity of tasks, because you can always specify a new location to push something to. So that's fairly cool. And then the curves: you see that something like soft actor critic, or multi-head soft actor critic, where the multi-head variant is probably the closest to what I described at the beginning, with a shared representation and individual heads, severely underperforms against SAC plus PCGrad, their method, which seems to outperform fairly consistently, even against learning the tasks independently. So it learns much faster than if you were to learn these tasks independently from each other, which is pretty cool. Alright, they also do interesting investigations. First, they ask: during these learning runs, what is the curvature? They measure the curvature of the loss function like this; all it is, basically, is a consequence of a Taylor approximation. You can write f(x) as approximately f(x0) + grad f(x0) * (x - x0), which is a first-order approximation to the function. If you subtract the two sides from each other, the difference between the actual function value and the first-order approximation must be, or is most likely dominated by, the curvature. It is not exactly the curvature, it contains every higher-order term, but the assumption is that the dominant higher-order term is the curvature. And they evaluate this not at some x and x0, but at theta t and theta t plus one: this is the first-order approximation at the current parameters, and this is the actual function value after they take a step, and the resulting difference is the curvature, or is dominated by it.
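In code, that measurement might look like the following sketch, my paraphrase of the idea with a toy quadratic standing in for the training loss; the Hessian and step size are invented.

import numpy as np

def curvature_estimate(f, grad_f, theta_t, theta_next):
    """First-order Taylor remainder between consecutive iterates:
    f(theta_next) - [f(theta_t) + grad_f(theta_t) @ (theta_next - theta_t)].
    For a smooth loss this is dominated by the second-order (curvature) term."""
    delta = theta_next - theta_t
    return f(theta_next) - (f(theta_t) + grad_f(theta_t) @ delta)

# Toy stand-in loss: f(theta) = 0.5 * theta^T H theta with a known Hessian.
H = np.array([[10.0, 0.0], [0.0, 0.1]])   # steep in one direction, flat in the other
f = lambda th: 0.5 * th @ H @ th
grad_f = lambda th: H @ th

theta_t = np.array([1.0, 1.0])
theta_next = theta_t - 0.1 * grad_f(theta_t)   # one gradient step

# For a quadratic the remainder equals 0.5 * delta^T H delta exactly.
print(curvature_estimate(f, grad_f, theta_t, theta_next))   # 5.000005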
So they analyze this over the course of learning, and they see that the curvature actually increases as training goes on. Now, I'm not a big fan of just pointing at large numbers, but the numbers do seem large, at least compared to what you can handle with a computer, and they grow by orders of magnitude across training iterations. So I'm going to believe them that this curvature is present. However, I would have liked to see it compared to a single task instead of only within the multitask setting, because comparing their curvature across the number of iterations is pretty uninformative when the runs reach different losses. What I would have liked to see is a comparison of multitask versus single task, showing that in single-task learning this curvature doesn't appear. Here you have the percentage of update steps where conditions A and B hold. Remember, condition A was the condition on the conflicting angle, and condition B was the condition that the curvature is large enough. You can see, in these dotted and dashed lines, that the conditions hold almost entirely at the beginning of learning, but they still hold for a large fraction of the steps later on; at the end of training, it's still about half the steps. So that is fairly good evidence that the problems they describe are often really there, and that therefore their algorithm helps. Then here is the per-task average return. Interestingly, they say in the text: look, this task here seems to be easier, and task two, which is the dotted line, seems to be harder. SAC, the baseline algorithm, never really manages to learn task two, whereas PCGrad manages to learn it after a while, and at that point something happens over here, which I'm not super sure about. That's what they say in the text, but I have to squint a lot to see that something happens exactly at that position. Suffice it to say that PCGrad is able to learn the task that SAC isn't able to learn, probably because task one completely dominates the gradient at that point. All right, so this was the paper. I invite you to read it, and thanks for listening. Bye bye.
[ { "start": 0, "end": 7, "text": " Hi there, today we're looking at gradient surgery for multitask learning by Tianhe Yu," }, { "start": 7, "end": 15.08, "text": " Saurabh Kumar, Abhishek Gupta, Sergei Levine, Carole Hausmann and Chelsea Finn." }, { "start": 15.08, "end": 22.28, "text": " So in this paper, the concern is a thing called multitask learning." }, { "start": 22.28, "end": 24.32, "text": " Now what is multitask learning?" }, { "start": 24.32, "end": 29.2, "text": " So this has some very subtle distinctions from other things, that's I think why it's" }, { "start": 29.2, "end": 32.76, "text": " important to look at it a bit." }, { "start": 32.76, "end": 37.76, "text": " So let's say you have multiple tasks, a learning problem in multiple tasks, this seems easy" }, { "start": 37.76, "end": 40.08, "text": " enough, right?" }, { "start": 40.08, "end": 47.519999999999996, "text": " So what we mean is that we have the same input, but then we want to perform two different" }, { "start": 47.519999999999996, "end": 48.58, "text": " tasks." }, { "start": 48.58, "end": 54.519999999999996, "text": " So task one and task two." }, { "start": 54.52, "end": 63.96, "text": " So it could be something like task one, if the input is a food, right?" }, { "start": 63.96, "end": 65.96000000000001, "text": " A food object." }, { "start": 65.96000000000001, "end": 71.60000000000001, "text": " The task one could be, is it a fruit?" }, { "start": 71.60000000000001, "end": 73.2, "text": " Right?" }, { "start": 73.2, "end": 80.56, "text": " Task two could be how many calories does it have?" }, { "start": 80.56, "end": 81.88, "text": " Right?" }, { "start": 81.88, "end": 85.36, "text": " The input is this food item here." }, { "start": 85.36, "end": 87.6, "text": " And you want to know both things." }, { "start": 87.6, "end": 88.8, "text": " Is it a fruit?" }, { "start": 88.8, "end": 91.72, "text": " And how many calories does it have?" }, { "start": 91.72, "end": 99.28, "text": " And ideally, so what you could do is you could train two separate machine learning classifiers," }, { "start": 99.28, "end": 100.28, "text": " right?" }, { "start": 100.28, "end": 104.67999999999999, "text": " Classifier one simply does the is it a fruit thing?" }, { "start": 104.67999999999999, "end": 107.56, "text": " Task two simply does how many calories does it have?" }, { "start": 107.56, "end": 113.56, "text": " Let's say this is, let's actually say this is a food picture, right?" }, { "start": 113.56, "end": 120.64, "text": " Since Instagram is full of food pictures, we have lots of training data, right?" }, { "start": 120.64, "end": 125.02000000000001, "text": " At least unsupervised, people usually label it." }, { "start": 125.02000000000001, "end": 127, "text": " And we could train two different things." }, { "start": 127, "end": 131.72, "text": " But it would be nice, since they're both kind of dealing with the same input, so they're" }, { "start": 131.72, "end": 135.8, "text": " not kind of, they actually deal with the same input distribution." }, { "start": 135.8, "end": 143.26000000000002, "text": " It would be nice if we could kind of share a representation, right?" }, { "start": 143.26000000000002, "end": 147.74, "text": " So maybe we have some neural network here with many layers." }, { "start": 147.74, "end": 154.04000000000002, "text": " And then we have at the end, we take this hidden representation here." 
}, { "start": 154.04000000000002, "end": 159.64000000000001, "text": " And we just have maybe one or two fully connected layers for each individual task." }, { "start": 159.64000000000001, "end": 165.34, "text": " But our goal would be that the hidden representation here is shared." }, { "start": 165.34, "end": 171.34, "text": " So shared representation." }, { "start": 171.34, "end": 172.8, "text": " And why could that help?" }, { "start": 172.8, "end": 182.6, "text": " Because we might have, maybe we have lots of training data for the how many calories" }, { "start": 182.6, "end": 184.04, "text": " does it have, right?" }, { "start": 184.04, "end": 188.04, "text": " But we don't have that much training data for is it a fruit." }, { "start": 188.04, "end": 194.24, "text": " So lots of training data here, big database, but only like a handful of data points for" }, { "start": 194.24, "end": 195.92000000000002, "text": " the second task." }, { "start": 195.92000000000002, "end": 200.5, "text": " Or we might just not have much training data at all for both tasks." }, { "start": 200.5, "end": 205.98000000000002, "text": " And we just might benefit from training this shared representation, you might have already" }, { "start": 205.98000000000002, "end": 210.28, "text": " seen this with something like BERT." }, { "start": 210.28, "end": 217.24, "text": " So in BERT's case, the input is text, right?" }, { "start": 217.24, "end": 219.72, "text": " And then you do something different." }, { "start": 219.72, "end": 222.48000000000002, "text": " That's why BERT is different than multitask learning." }, { "start": 222.48, "end": 230.44, "text": " What you do in BERT is you do first you do this masked language model pre training." }, { "start": 230.44, "end": 232.72, "text": " So that's step one." }, { "start": 232.72, "end": 240, "text": " And then in step two, you take this and then you, you fine tune it on a number of tasks," }, { "start": 240, "end": 241, "text": " right?" }, { "start": 241, "end": 248.72, "text": " So here, question answering, sentiment detection, entailment, and so on." }, { "start": 248.72, "end": 250.04, "text": " This is different." }, { "start": 250.04, "end": 254.76, "text": " This is called pre training and fine tuning." }, { "start": 254.76, "end": 263.96, "text": " Tuning in multitask learning, we actually want to train on different tasks at the same" }, { "start": 263.96, "end": 264.96, "text": " time." }, { "start": 264.96, "end": 266.44, "text": " Maybe they have different data, right?" }, { "start": 266.44, "end": 270.68, "text": " And we simply want to create this shared representation." }, { "start": 270.68, "end": 277.15999999999997, "text": " And we hope that by combining these tasks, we might learn them better than if we were" }, { "start": 277.16, "end": 280.24, "text": " to learn each task individually." }, { "start": 280.24, "end": 286.20000000000005, "text": " Alright, so this paper says there are there's a big problem with things like this." }, { "start": 286.20000000000005, "end": 289.40000000000003, "text": " And they illustrate this in this example right here." }, { "start": 289.40000000000003, "end": 294.66, "text": " So let's say you have a multitask objective, and the learning landscape looks like this." }, { "start": 294.66, "end": 297.74, "text": " So the objective for task one is the following." 
}, { "start": 297.74, "end": 303.16, "text": " So this you have to have to imagine this is maybe a neural network with just two weights," }, { "start": 303.16, "end": 304.16, "text": " right?" }, { "start": 304.16, "end": 307.52000000000004, "text": " Weight one, here is weight two." }, { "start": 307.52000000000004, "end": 311.24, "text": " And this is what the optimization landscape look like looks like for task one." }, { "start": 311.24, "end": 317.76000000000005, "text": " If you're not used to this kind of depiction, the light parts up here and here are high" }, { "start": 317.76000000000005, "end": 320.32000000000005, "text": " values for the loss function." }, { "start": 320.32000000000005, "end": 325.48, "text": " And the darker parts are low values for the loss function." }, { "start": 325.48, "end": 329, "text": " So you want to get to the darker parts." }, { "start": 329, "end": 333.34000000000003, "text": " Now usually we discuss things like this in terms of optimization." }, { "start": 333.34, "end": 337.59999999999997, "text": " So for example, we would talk about SGD." }, { "start": 337.59999999999997, "end": 342.47999999999996, "text": " And we would talk about the fact that oh, if we have too large of a so if you're here," }, { "start": 342.47999999999996, "end": 347.59999999999997, "text": " where does the gradient point the gradient points towards the direction of steepest increase." }, { "start": 347.59999999999997, "end": 350.96, "text": " So here, so the negative gradient would point down." }, { "start": 350.96, "end": 354.52, "text": " If we have SGD, maybe we'd go here." }, { "start": 354.52, "end": 358.96, "text": " And then we take another gradient step, we would go here, right?" }, { "start": 358.96, "end": 360.88, "text": " Oh, now we've gone too far, right?" }, { "start": 360.88, "end": 364.94, "text": " So the gradient now points this direction." }, { "start": 364.94, "end": 370.52, "text": " So we go here, and then we just continue this, right?" }, { "start": 370.52, "end": 372.9, "text": " So this is a problem with with SGD." }, { "start": 372.9, "end": 377.36, "text": " And what we can do is we can decrease the step size, for example, and then we converge" }, { "start": 377.36, "end": 386.36, "text": " in this, or we can use something like Adam, that adjusts the gradient to the to the variance" }, { "start": 386.36, "end": 390.84, "text": " of the gradient landscape, things like this, right?" }, { "start": 390.84, "end": 393.91999999999996, "text": " So these are these are problems in optimization." }, { "start": 393.91999999999996, "end": 399.76, "text": " But what happens when you have a multitask objective is that for just task one, the optimization" }, { "start": 399.76, "end": 401.44, "text": " landscape would look like this, right?" }, { "start": 401.44, "end": 409.47999999999996, "text": " If you were just to train your neural network, if you were to just train this part, and we" }, { "start": 409.47999999999996, "end": 414.47999999999996, "text": " just look here is is like theta one, and here is theta two, these are the two weights we" }, { "start": 414.47999999999996, "end": 418.47999999999996, "text": " care about right now, everything else, let's say is fixed." }, { "start": 418.48, "end": 423.06, "text": " Task one looks like this, but for task two, because it's a different task, right, we need" }, { "start": 423.06, "end": 427.94, "text": " to set the weights differently to get our desired output, it looks different." 
}, { "start": 427.94, "end": 431.62, "text": " So our loss function is going to be a combination." }, { "start": 431.62, "end": 436.96000000000004, "text": " So our loss function is going to be the loss of task loss function for a given sample," }, { "start": 436.96000000000004, "end": 443.26, "text": " it's going to be the loss on task one of that sample plus the loss of task two on that sample." }, { "start": 443.26, "end": 449.2, "text": " So that's going to be the combination the combination you see on the right." }, { "start": 449.2, "end": 455.56, "text": " So this plus this equals this right here." }, { "start": 455.56, "end": 463.15999999999997, "text": " So you can see in task one, it almost let's say it didn't matter whether we were here" }, { "start": 463.15999999999997, "end": 467.24, "text": " or here, both had a relatively low loss value, right?" }, { "start": 467.24, "end": 475.76, "text": " And you can you can see in task two, this point here is not an optimum, well, this point" }, { "start": 475.76, "end": 480.52, "text": " or maybe these are slight, these are somewhat somewhat close together." }, { "start": 480.52, "end": 485.96000000000004, "text": " So if you add them, you can see that now this thing here still has a low value but not as" }, { "start": 485.96000000000004, "end": 488.24, "text": " low as this is much darker, right?" }, { "start": 488.24, "end": 500.04, "text": " So the landscape for both tasks together looks differently from the landscape of either task" }, { "start": 500.04, "end": 501.04, "text": " alone." }, { "start": 501.04, "end": 505.84000000000003, "text": " So your goal is to find this optimal point and optimal point here that works for both" }, { "start": 505.84000000000003, "end": 508.2, "text": " tasks." }, { "start": 508.2, "end": 515.52, "text": " Now the paper identifies many, sorry, sorry, the paper identifies problems with this multitask" }, { "start": 515.52, "end": 523.72, "text": " learning and they say the problem is that you can have what are called conflicting gradients." }, { "start": 523.72, "end": 535.0799999999999, "text": " So if you look if you look at if you look at where the gradients point in the different" }, { "start": 535.0799999999999, "end": 538.48, "text": " in the different tasks." }, { "start": 538.48, "end": 542.3199999999999, "text": " So if we go by task two, sorry, let me put that in again." }, { "start": 542.32, "end": 547.2, "text": " We care about the point right here that they care about, right?" }, { "start": 547.2, "end": 552.96, "text": " And they use Adam in this case, and their starting point is right here, and they've" }, { "start": 552.96, "end": 554.8000000000001, "text": " come this way so far." }, { "start": 554.8000000000001, "end": 559.96, "text": " So we're going to draw this in here and draw this in right here." }, { "start": 559.96, "end": 564.88, "text": " And we'll stop a little bit before that valley, right?" }, { "start": 564.88, "end": 570.2, "text": " So let's analyze the gradient, the gradient task one actually points in this direction," }, { "start": 570.2, "end": 572.6400000000001, "text": " you see down the valley, right?" }, { "start": 572.6400000000001, "end": 575.48, "text": " And it's pretty big because it's pretty steep, right?" }, { "start": 575.48, "end": 579.5200000000001, "text": " You can see the curves here getting closer and closer together." 
}, { "start": 579.5200000000001, "end": 583.6400000000001, "text": " That means the gradient is pretty steep, and it points in that direction." }, { "start": 583.6400000000001, "end": 589.6400000000001, "text": " Whereas for task two, if you're here, right, the gradient actually points in this direction," }, { "start": 589.6400000000001, "end": 591.6600000000001, "text": " but not as steep, right?" }, { "start": 591.6600000000001, "end": 595.36, "text": " Because here the the lines are pretty far apart still." }, { "start": 595.36, "end": 599.08, "text": " So that means it's relatively flat." }, { "start": 599.08, "end": 603.08, "text": " This is what the paper calls conflicting gradients, and they're drawn in here." }, { "start": 603.08, "end": 608.72, "text": " I'm going to draw them just a little bit larger." }, { "start": 608.72, "end": 612.5200000000001, "text": " So these two gradients, first of all, they have different magnitude." }, { "start": 612.5200000000001, "end": 618.2, "text": " You see that the magnitude of this is much larger than the magnitude of this." }, { "start": 618.2, "end": 622.32, "text": " And also their angle between them is large." }, { "start": 622.32, "end": 628.24, "text": " That means conflicting, that they're more than 90 degrees apart from each other." }, { "start": 628.24, "end": 636.12, "text": " And this results if you calculate the resulting gradient, of course, this results in a gradient" }, { "start": 636.12, "end": 637.78, "text": " like this, right?" }, { "start": 637.78, "end": 644.4, "text": " So our algorithm wouldn't actually go down this valley, it will go up the hill again," }, { "start": 644.4, "end": 652.5600000000001, "text": " because you have differently sized gradients from the different tasks that go in different" }, { "start": 652.5600000000001, "end": 653.5600000000001, "text": " directions." }, { "start": 653.5600000000001, "end": 657.16, "text": " Now, the important point, I was wondering for a long time, what's the difference between" }, { "start": 657.16, "end": 664.36, "text": " this and simply saying, look, any data set, right, your loss on any data set D is just" }, { "start": 664.36, "end": 670.48, "text": " the sum of the loss of your individual data points, Xi." }, { "start": 670.48, "end": 677.4399999999999, "text": " Because it is the same case that you can have different data points and the gradients that" }, { "start": 677.4399999999999, "end": 679, "text": " you get, right?" }, { "start": 679, "end": 683.8399999999999, "text": " So that would result, if you've never done optimization, I'm sorry, I'm going a bit fast," }, { "start": 683.84, "end": 688.8000000000001, "text": " that would result in the gradient with respect to your weights of your loss over the entire" }, { "start": 688.8000000000001, "end": 698.62, "text": " data set is, of course, approximated by the one over n in your mini batch." }, { "start": 698.62, "end": 703.08, "text": " So by the gradients in your mini batch, right?" }, { "start": 703.08, "end": 707.7800000000001, "text": " So let's call this the loss of Xi." }, { "start": 707.7800000000001, "end": 710.5400000000001, "text": " This is completely illegible." }, { "start": 710.54, "end": 717.52, "text": " But what I'm saying is that your total gradient is the average of your individual data points." }, { "start": 717.52, "end": 720.04, "text": " And these might be conflicting as well, right?" 
}, { "start": 720.04, "end": 725.68, "text": " You could have that that one points in this direction, and the other one points in that" }, { "start": 725.68, "end": 726.68, "text": " direction." }, { "start": 726.68, "end": 732.4, "text": " And we've done this just and and things like things like Adam and SGD actually, are able" }, { "start": 732.4, "end": 736.5999999999999, "text": " to to handle that just fine, because we do this average operation." }, { "start": 736.6, "end": 746.28, "text": " I think what is different here is in multitask learning is that the multitask the tasks distribution" }, { "start": 746.28, "end": 751.28, "text": " is not like stochastically IID, let's say." }, { "start": 751.28, "end": 758.36, "text": " So in in this case, you can always count on that the expectation will average out this" }, { "start": 758.36, "end": 759.36, "text": " noise." }, { "start": 759.36, "end": 765.9200000000001, "text": " So this noise, if you if you go in expectation, right, if you do mini batches, and aggregate" }, { "start": 765.92, "end": 771.1999999999999, "text": " over the whole data set, then that will kind of even out because for the different data" }, { "start": 771.1999999999999, "end": 778, "text": " points, okay, one gradient might be larger, one might be smaller, but there is no systematic" }, { "start": 778, "end": 784.24, "text": " error, or there's no systematic bias that comes from the different data points." }, { "start": 784.24, "end": 792.3399999999999, "text": " Here you have, as we said, one task might be much harder than the other task, right?" }, { "start": 792.34, "end": 800.36, "text": " Or you might have much more data, or the loss function is just larger, like magnitude wise." }, { "start": 800.36, "end": 807.7, "text": " So you can have any number of systematic biases that that different tasks have with each other." }, { "start": 807.7, "end": 812.08, "text": " And therefore, the conflicting gradients seem to be a problem." }, { "start": 812.08, "end": 816.76, "text": " So this paper does a good job of analyzing the situation of conflicting gradients." }, { "start": 816.76, "end": 827.4399999999999, "text": " And what I find particularly interesting is that they first of all, they propose an algorithm" }, { "start": 827.4399999999999, "end": 830.08, "text": " to deal with these conflicting gradients." }, { "start": 830.08, "end": 836.8, "text": " So they say whenever two gradients are conflicting, right, what we would do is we would project" }, { "start": 836.8, "end": 840.08, "text": " them on the normal plane of each other, right?" }, { "start": 840.08, "end": 848.6, "text": " So for example, here in step B, we take the gradient of task I, and we project it onto" }, { "start": 848.6, "end": 855.8000000000001, "text": " the normal plane of gradient from the task J, right?" }, { "start": 855.8000000000001, "end": 860.96, "text": " And if we do this, and they have a whole algorithm where it's general." }, { "start": 860.96, "end": 869.58, "text": " So if we do this for multiple tasks, so basically, we get a mini batch of tasks, right?" }, { "start": 869.58, "end": 875.88, "text": " So they generalize this to that you have a bunch of tasks." }, { "start": 875.88, "end": 880.1600000000001, "text": " We get the different gradients, and these can be stochastic because we can we can do" }, { "start": 880.1600000000001, "end": 882.96, "text": " this with stochastic data sets." }, { "start": 882.96, "end": 885.86, "text": " We go through the batch." 
}, { "start": 885.86, "end": 894.2800000000001, "text": " And if the gradients are conflicting, we simply project the gradients onto the onto each other." }, { "start": 894.28, "end": 901.28, "text": " And that will result now in a set of non conflicting gradients." }, { "start": 901.28, "end": 903.56, "text": " You might be a bit appalled by this." }, { "start": 903.56, "end": 906.56, "text": " I was at first when I saw this." }, { "start": 906.56, "end": 911.76, "text": " But they actually do, as I said, a good job of analyzing this." }, { "start": 911.76, "end": 914.8, "text": " So they have two theorems here, which I find interesting." }, { "start": 914.8, "end": 919.92, "text": " So theorem one is assume these are convex and differentiable." }, { "start": 919.92, "end": 924, "text": " So somewhat standard assumptions in optimization." }, { "start": 924, "end": 929.84, "text": " They say then the PC grad update rule with a step size smaller than one over L, L is" }, { "start": 929.84, "end": 937.02, "text": " the Lipschitz constant, will converge either to a location where the cosine is exactly" }, { "start": 937.02, "end": 940.32, "text": " negative one between two gradients." }, { "start": 940.32, "end": 945.92, "text": " That never happens except if you construct it or the optimal value, right?" }, { "start": 945.92, "end": 952.16, "text": " So this is basically a consistency theorem saying that this algorithm will still converge" }, { "start": 952.16, "end": 954.4399999999999, "text": " to the optimum value." }, { "start": 954.4399999999999, "end": 959.4399999999999, "text": " This here is this is the loss." }, { "start": 959.4399999999999, "end": 966.64, "text": " So this loss is the sum of loss one and loss two, right of these two tasks." }, { "start": 966.64, "end": 972.4, "text": " So for two tasks, they prove that the algorithm will still go to the correct point if you" }, { "start": 972.4, "end": 974.04, "text": " run it long enough." }, { "start": 974.04, "end": 977.12, "text": " Doesn't say anything about the speed, though." }, { "start": 977.12, "end": 980.64, "text": " This is where theorem two comes in." }, { "start": 980.64, "end": 987.36, "text": " Theorem two says, suppose L is differentiable and the gradient of L is Lipschitz continuous." }, { "start": 987.36, "end": 994.04, "text": " This again, same assumptions, except no longer need convexity." }, { "start": 994.04, "end": 1003.64, "text": " Let theta MT, which is the multitask gradient and theta, sorry, not the gradient, the parameters" }, { "start": 1003.64, "end": 1010.08, "text": " theta PC grad be the parameters after applying one update to theta with G and PC grad modified" }, { "start": 1010.08, "end": 1011.08, "text": " G." }, { "start": 1011.08, "end": 1017.44, "text": " So this MT is the that would be kind of the original algorithm without their method." }, { "start": 1017.44, "end": 1021.64, "text": " And this here would be with their method." }, { "start": 1021.64, "end": 1028.1200000000001, "text": " Moreover, assume a bunch of things which we'll go into soon." }, { "start": 1028.1200000000001, "end": 1037.98, "text": " Then the loss function of the PC grad theta is smaller or equal than the loss function" }, { "start": 1037.98, "end": 1042.56, "text": " of the MT of the original." }, { "start": 1042.56, "end": 1043.6200000000001, "text": " So what does it mean?" 
}, { "start": 1043.6200000000001, "end": 1051.24, "text": " It means that if you're in your optimization landscape, and you're somewhere here, right," }, { "start": 1051.24, "end": 1058.32, "text": " and your optimum is somewhere here, and your loss function is kind of how far away are" }, { "start": 1058.32, "end": 1061.78, "text": " you from this from this optimum, right?" }, { "start": 1061.78, "end": 1069.36, "text": " It means that as long as these conditions are given, if you do your update without the" }, { "start": 1069.36, "end": 1079.8799999999999, "text": " their method, which would be so here would be theta MT or with their method, theta PC" }, { "start": 1079.8799999999999, "end": 1087.52, "text": " grad, then the loss function that you get from their method will be smaller than the" }, { "start": 1087.52, "end": 1092.16, "text": " loss function that you get without their method." }, { "start": 1092.16, "end": 1094.32, "text": " So this is a theorem, they prove it." }, { "start": 1094.32, "end": 1101.08, "text": " And for this to be the case, they need these three things." }, { "start": 1101.08, "end": 1102.8, "text": " So let's go from the back." }, { "start": 1102.8, "end": 1109, "text": " The third one is a is a condition on the on the loss function, sorry, on the step size." }, { "start": 1109, "end": 1113.96, "text": " And you can say, okay, the step size needs to be large enough." }, { "start": 1113.96, "end": 1116.52, "text": " You can set the step size." }, { "start": 1116.52, "end": 1118.8799999999999, "text": " This here, what is this?" }, { "start": 1118.8799999999999, "end": 1125.98, "text": " This here needs to be a this is a condition on the on this epsilon." }, { "start": 1125.98, "end": 1128.46, "text": " So what's this thing?" }, { "start": 1128.46, "end": 1132.96, "text": " It is a curvature bounding measure." }, { "start": 1132.96, "end": 1136.16, "text": " And that is compared to little l." }, { "start": 1136.16, "end": 1140.56, "text": " And little l here is this thing." }, { "start": 1140.56, "end": 1152.96, "text": " It is a constant that must be smaller than h and h is up here is the curvature." }, { "start": 1152.96, "end": 1156.36, "text": " So it depends on the curvature, right?" }, { "start": 1156.36, "end": 1163.26, "text": " It depends on the curvature fulfilling some condition, they state down here." }, { "start": 1163.26, "end": 1168.36, "text": " The curvature of the multitask gradient should be large." }, { "start": 1168.36, "end": 1179.04, "text": " Yeah, and the first condition we've already seen is that the cosine of the angles needs" }, { "start": 1179.04, "end": 1183.1599999999999, "text": " to be smaller than negative something that depends on the gradients." }, { "start": 1183.1599999999999, "end": 1187.1999999999998, "text": " And this here turns out actually to be the magnitudes of the gradients." }, { "start": 1187.1999999999998, "end": 1193.1599999999999, "text": " So this, this first this here, we can we can neglect that's a step size condition." }, { "start": 1193.16, "end": 1198.48, "text": " This here means the gradients should be conflicting." }, { "start": 1198.48, "end": 1205.94, "text": " And this here means that there should be sufficient curvature in the loss function." }, { "start": 1205.94, "end": 1213.1200000000001, "text": " This is exactly what we saw at the beginning in this in this thing here." }, { "start": 1213.1200000000001, "end": 1218.1200000000001, "text": " So there was a sufficient curvature." 
}, { "start": 1218.12, "end": 1223.32, "text": " Because in one direction, the gradient was very steep, and in the other direction, it" }, { "start": 1223.32, "end": 1227.9599999999998, "text": " wasn't, which basically means there is a change of steepness, right?" }, { "start": 1227.9599999999998, "end": 1232.6599999999999, "text": " There's a change of steepness in one direction versus the other direction." }, { "start": 1232.6599999999999, "end": 1238.8, "text": " And also the two gradients were conflicting, which we saw right here." }, { "start": 1238.8, "end": 1246.76, "text": " If this is the case, then this algorithm will bring you faster to the optimum than the the" }, { "start": 1246.76, "end": 1249.84, "text": " normal algorithm, but only if this is given." }, { "start": 1249.84, "end": 1253.36, "text": " And notably, this can change step to step." }, { "start": 1253.36, "end": 1260.82, "text": " They actually call this the I think the holy trifecta evil trifecta something, they have" }, { "start": 1260.82, "end": 1262.4, "text": " a name for it." }, { "start": 1262.4, "end": 1267.48, "text": " But I'm going to read you out the the the conditions that how they describe it." }, { "start": 1267.48, "end": 1271.36, "text": " The conditions are first, the angle between the task gradients is not too small, i.e." }, { "start": 1271.36, "end": 1274.44, "text": " the two tasks need to conflict sufficiently." }, { "start": 1274.44, "end": 1280, "text": " Second, the difference in magnitude needs to be sufficiently large." }, { "start": 1280, "end": 1285.52, "text": " Third, the curvature of the multitask gradient should be large." }, { "start": 1285.52, "end": 1290.3200000000002, "text": " And fourth, the learning rate should be big enough such that large curvature would lead" }, { "start": 1290.3200000000002, "end": 1296.6200000000001, "text": " to overestimation of performance improvement on the dominating task and underestimation" }, { "start": 1296.6200000000001, "end": 1300.6000000000001, "text": " of performance degradation on the dominated task." }, { "start": 1300.6000000000001, "end": 1303.2, "text": " So here you see a little subtlety." }, { "start": 1303.2, "end": 1311.32, "text": " I said before that this condition here was negligible because you can set the task size." }, { "start": 1311.32, "end": 1321.64, "text": " In actuality, this you can so I'm not meaning to rag on this, but what does it mean the" }, { "start": 1321.64, "end": 1325.4, "text": " learning rate should be big enough such that blah blah blah." }, { "start": 1325.4, "end": 1330.56, "text": " And what what comes here seems to be negative, right, such that the large curvature would" }, { "start": 1330.56, "end": 1339.6399999999999, "text": " lead to overestimation, which basically means this method, this thing here counts if the" }, { "start": 1339.6399999999999, "end": 1341.32, "text": " step size is large." }, { "start": 1341.32, "end": 1349.96, "text": " So that means if I were to play devil's advocate, if I have a problem like this, I could either" }, { "start": 1349.96, "end": 1360.48, "text": " write I could either use their method PC grad, or I could just decrease my learning rate." 
}, { "start": 1360.48, "end": 1367.08, "text": " And use the classic algorithm, because if I just decrease my learning rate relative" }, { "start": 1367.08, "end": 1371.88, "text": " to the curvature, then this theorem would no longer hold and it will no longer be the" }, { "start": 1371.88, "end": 1376.6, "text": " case that their algorithm gives me a faster convergence." }, { "start": 1376.6, "end": 1379.52, "text": " So there's there's two ways of looking at these things." }, { "start": 1379.52, "end": 1385.52, "text": " It's like, yes, in in these conditions, this algorithm is better, but it is better because" }, { "start": 1385.52, "end": 1392.44, "text": " someone has set the learning rate too high, and this algorithm kind of fixes that." }, { "start": 1392.44, "end": 1399.4, "text": " Now the upside to this is, of course, that the usually you don't want to kind of set" }, { "start": 1399.4, "end": 1405.32, "text": " your learning rate in accordance with the curvature of the with the curvature of the" }, { "start": 1405.32, "end": 1408.32, "text": " problem and so on, you don't know the curvature most of the time." }, { "start": 1408.32, "end": 1414.36, "text": " So you just set some learning rate, and their algorithm appears to be working." }, { "start": 1414.36, "end": 1419.3999999999999, "text": " So when this learning rate is smaller, it's just not guaranteed to outperform the classic" }, { "start": 1419.3999999999999, "end": 1420.3999999999999, "text": " algorithm." }, { "start": 1420.3999999999999, "end": 1426.1799999999998, "text": " But I just found find this interesting in terms of how you read a paper, right?" }, { "start": 1426.1799999999998, "end": 1430.34, "text": " If you read a paper, you come across something like this, these conditions, you can always" }, { "start": 1430.34, "end": 1436.7199999999998, "text": " see them as here is what needs to happen for us to succeed, or here is what needs to happen" }, { "start": 1436.7199999999998, "end": 1439.56, "text": " for the others to fail." }, { "start": 1439.56, "end": 1445.52, "text": " And therefore, we're the only ones that succeed in this regime." }, { "start": 1445.52, "end": 1450.04, "text": " As I said, it's a cool algorithm, but I found that to be funny." }, { "start": 1450.04, "end": 1459.76, "text": " All right, so they test this on multitask, which these MT 10 and MT 50 benchmarks are" }, { "start": 1459.76, "end": 1461.46, "text": " these robotic manipulation." }, { "start": 1461.46, "end": 1465.36, "text": " So multitask doesn't only mean like supervised learning." }, { "start": 1465.36, "end": 1467.96, "text": " In this case, it's actually multitask reinforcement learning." }, { "start": 1467.96, "end": 1473.8, "text": " So here you have everything you have mini batches, you have episodes, and you have you" }, { "start": 1473.8, "end": 1475.48, "text": " have multiple tasks." }, { "start": 1475.48, "end": 1478.64, "text": " So this is everything together." }, { "start": 1478.64, "end": 1479.64, "text": " Very cool." }, { "start": 1479.64, "end": 1488.8400000000001, "text": " And you in their actual implementation, they say what they do is they have these multiple" }, { "start": 1488.8400000000001, "end": 1489.8400000000001, "text": " tasks." }, { "start": 1489.8400000000001, "end": 1493.6200000000001, "text": " So they have the agent, and they first select the tasks." }, { "start": 1493.6200000000001, "end": 1497.14, "text": " So for example, here, pull this right?" 
}, { "start": 1497.14, "end": 1502.72, "text": " Then they generate an episode by interacting with the environment, forth and back, forth" }, { "start": 1502.72, "end": 1508.64, "text": " and back, then they put that episode into a replay buffer." }, { "start": 1508.64, "end": 1511.6000000000001, "text": " Then they maybe select another task and so on." }, { "start": 1511.6000000000001, "end": 1516.68, "text": " So until they have a bunch of data in the replay buffer from different tasks, then they" }, { "start": 1516.68, "end": 1524, "text": " sample episodes from different tasks, right from task one, task two, and so on." }, { "start": 1524, "end": 1527.52, "text": " And that will become a mini batch in the learning procedure." }, { "start": 1527.52, "end": 1530.36, "text": " So pretty intricate thing." }, { "start": 1530.36, "end": 1534.52, "text": " But of course, you the hope is that you can learn kind of a shared representation that" }, { "start": 1534.52, "end": 1543.16, "text": " you can then perform all of these tasks faster than if you were to learn them each independently." }, { "start": 1543.16, "end": 1546.44, "text": " So the MT 10 and MT 50 come from this." }, { "start": 1546.44, "end": 1553.6, "text": " And I think they also have goal condition pushing, where it the task is simply to push" }, { "start": 1553.6, "end": 1556.4399999999998, "text": " something to a what they call goal conditioned." }, { "start": 1556.4399999999998, "end": 1560.9199999999998, "text": " And the cool thing about this is, it's not only 50 tasks, but you can produce an infinity" }, { "start": 1560.9199999999998, "end": 1566.76, "text": " of tasks because you can always specify a new location where you should push something" }, { "start": 1566.76, "end": 1567.76, "text": " to." }, { "start": 1567.76, "end": 1568.9599999999998, "text": " Right." }, { "start": 1568.9599999999998, "end": 1572.1999999999998, "text": " So that's, that's fairly, fairly cool." }, { "start": 1572.1999999999998, "end": 1573.9599999999998, "text": " And oh, yeah, the curves." }, { "start": 1573.9599999999998, "end": 1582.56, "text": " So you see that if you do something like soft actor critic, or multi head soft actor critic," }, { "start": 1582.56, "end": 1589.76, "text": " so this multi head soft actor critic is probably the closest to what I defined in to what I" }, { "start": 1589.76, "end": 1595.8, "text": " defined at the beginning, where you have this shared representation, and then and then the" }, { "start": 1595.8, "end": 1597.2, "text": " individual heads." }, { "start": 1597.2, "end": 1605.08, "text": " And you can see that severely under performs against the SAC plus PC grad plus their method" }, { "start": 1605.08, "end": 1612.3999999999999, "text": " that seems to outperform fairly consistently, even against learning the tasks independently." }, { "start": 1612.4, "end": 1617.64, "text": " So it learns much faster than if you were to learn these tasks just independently from" }, { "start": 1617.64, "end": 1621.3200000000002, "text": " each other, which is pretty cool, right?" }, { "start": 1621.3200000000002, "end": 1624.2, "text": " So I, I think that's pretty cool." }, { "start": 1624.2, "end": 1630.72, "text": " All right, so they do actually interesting investigations." }, { "start": 1630.72, "end": 1637.52, "text": " First of all, they research, okay, in during these learning runs, how, what is the curvature" }, { "start": 1637.52, "end": 1639.4, "text": " here?" 
}, { "start": 1639.4, "end": 1643.6000000000001, "text": " And the curvature of the loss function, they measure like this." }, { "start": 1643.6000000000001, "end": 1650.0400000000002, "text": " So basically, all this is, is a, a consequence of a Taylor approximation." }, { "start": 1650.0400000000002, "end": 1660.44, "text": " So if you have like f of x, you can, you can write this as f of some x zero, plus the gradient" }, { "start": 1660.44, "end": 1671.64, "text": " of f that plus the gradient of f times x here, sorry, at x zero times x in this direction." }, { "start": 1671.64, "end": 1678.28, "text": " And then if you subtract, so this is a first order approximation to this, to the function" }, { "start": 1678.28, "end": 1679.28, "text": " on the right." }, { "start": 1679.28, "end": 1686.76, "text": " Then if you bring this over here, you or if you sorry, if you subtract the two sides from" }, { "start": 1686.76, "end": 1693.96, "text": " each other, then you can see there's the difference between the actual function and the first" }, { "start": 1693.96, "end": 1701.08, "text": " order approximation of the function that must be, or that is most likely the curvature." }, { "start": 1701.08, "end": 1707.44, "text": " Now it is not, it is like every higher order term, but the assumption is that the dominant" }, { "start": 1707.44, "end": 1710.48, "text": " higher order term will be the curvature." }, { "start": 1710.48, "end": 1711.8, "text": " Right?" }, { "start": 1711.8, "end": 1718.24, "text": " So this is, this would be this, except they don't, they, they do it not doing the x and" }, { "start": 1718.24, "end": 1721.84, "text": " x zero, they do it at theta t and theta t plus one." }, { "start": 1721.84, "end": 1728.3, "text": " So you can see this is the first order approximation, and this is the actual function value after" }, { "start": 1728.3, "end": 1730.8, "text": " they do a step." }, { "start": 1730.8, "end": 1736.68, "text": " And the resulting thing will be the curvature or dominated by the curvature." }, { "start": 1736.68, "end": 1744.64, "text": " So they analyze this over the course of learning and they see that it actually increases as" }, { "start": 1744.64, "end": 1746.16, "text": " you, as you go on." }, { "start": 1746.16, "end": 1753.88, "text": " And just, I'm not a big fan of like just large numbers, but they number seem to be large," }, { "start": 1753.88, "end": 1755.24, "text": " right?" }, { "start": 1755.24, "end": 1758.88, "text": " Just compared to what you can handle with a computer, the numbers seem to be large and" }, { "start": 1758.88, "end": 1765.6000000000001, "text": " they seem to be getting larger in order of magnitude steps across training iterations." }, { "start": 1765.6, "end": 1770.12, "text": " So I'm going to believe them that this curvature is given." }, { "start": 1770.12, "end": 1779.36, "text": " I would have liked to have it seen compared to just a single task instead of a multitask," }, { "start": 1779.36, "end": 1784.3999999999999, "text": " instead of, you know, comparing these things, which is useless because they reach different" }, { "start": 1784.3999999999999, "end": 1786.9199999999998, "text": " losses, right?" }, { "start": 1786.9199999999998, "end": 1792.98, "text": " So it's pretty useless to compare their curvature across the number of iterations." }, { "start": 1792.98, "end": 1798.98, "text": " What I would have liked to see is a comparison multitask versus single task." 
}, { "start": 1798.98, "end": 1807.08, "text": " And to show me that in single task learning, this curvature doesn't happen." }, { "start": 1807.08, "end": 1812.76, "text": " Here you have the percentage of update steps where conditions A and B are held." }, { "start": 1812.76, "end": 1817.8, "text": " You remember condition A was the condition on the conflicting angle." }, { "start": 1817.8, "end": 1828.8, "text": " Condition B was the condition that the curvature is large enough and you can see that as you" }, { "start": 1828.8, "end": 1836.84, "text": " go on with learning these dotted and dashed lines, the conditions hold almost entirely" }, { "start": 1836.84, "end": 1843.08, "text": " at the beginning of learning, but then still hold by in a big time of the steps." }, { "start": 1843.08, "end": 1850.1799999999998, "text": " So here is like about half the steps still at the end of training these conditions hold." }, { "start": 1850.1799999999998, "end": 1858.72, "text": " So it is fairly, fairly good evidence that often the problems that they say are real" }, { "start": 1858.72, "end": 1862.9399999999998, "text": " are really there and then therefore their algorithm helps, right?" }, { "start": 1862.9399999999998, "end": 1868, "text": " So here's the average per task average return." }, { "start": 1868, "end": 1876.76, "text": " And interestingly, they say in the text, look, this task here seems to be easier, right?" }, { "start": 1876.76, "end": 1880.2, "text": " And the task two, which is the dotted line seems to be harder." }, { "start": 1880.2, "end": 1887.8, "text": " So SAC, the baseline algorithm never really manages to learn task two, whereas this PC" }, { "start": 1887.8, "end": 1891.6, "text": " grad manages after a while to learn it." }, { "start": 1891.6, "end": 1900.6799999999998, "text": " And at that point, something happens over here, which I'm not super sure." }, { "start": 1900.6799999999998, "end": 1908.4399999999998, "text": " Yeah, that's what they say in the text, but I have to squint a lot to see that exactly" }, { "start": 1908.4399999999998, "end": 1911, "text": " at that position, something happens." }, { "start": 1911, "end": 1918.32, "text": " Suffice to say that the PC grad is able to learn the task that SAC isn't able to learn" }, { "start": 1918.32, "end": 1925.2, "text": " because probably task one is completely dominating the gradient at that point, right?" }, { "start": 1925.2, "end": 1927.84, "text": " All right, so this was the paper." }, { "start": 1927.84, "end": 1932.72, "text": " I invite you to read it and thanks for listening." }, { "start": 1932.72, "end": 1949.08, "text": " Bye bye." } ]
Z3knUzwuIgo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
One Model For All The Tasks - BLIP (Author Interview)
[ "Science & Technology" ]
[]
#blip #interview #salesforce Paper Review Video: https://youtu.be/X2k7n4FuI7c Sponsor: Assembly AI https://www.assemblyai.com/?utm_source=youtube&utm_medium=social&utm_campaign=yannic2 This is an interview with Junnan Li and Dongxu Li, authors of BLIP and members of Salesforce research. Cross-modal pre-training has been all the rage lately in deep learning, especially training vision and language models together. However, there are a number of issues, such as low quality datasets that limit the performance of any model trained on it, and also the fact that pure contrastive pre-training cannot be easily fine-tuned for most downstream tasks. BLIP unifies different tasks and objectives in a single pre-training run and achieves a much more versatile model, which the paper immediately uses to create, filter, clean and thus bootstrap its own dataset to improve performance even more! OUTLINE: 0:00 - Intro 0:40 - Sponsor: Assembly AI 1:30 - Start of Interview 2:30 - What's the pitch? 4:40 - How did data bootstrapping come into the project? 7:10 - How big of a problem is data quality? 11:10 - Are the captioning & filtering models biased towards COCO data? 14:40 - Could the data bootstrapping be done multiple times? 16:20 - What was the evolution of the BLIP architecture? 21:15 - Are there additional benefits to adding language modelling? 23:50 - Can we imagine a modular future for pre-training? 29:45 - Diving into the experimental results 42:40 - What did and did not work out during the research? 45:00 - How is research life at Salesforce? 46:45 - Where do we go from here? Paper: https://arxiv.org/abs/2201.12086 Code: https://github.com/salesforce/BLIP Demo: https://huggingface.co/spaces/Salesforce/BLIP Abstract: Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at this https URL. 
Authors: Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello, this is an interview with the authors of the BLIP paper. If you haven't seen it, I've made a review video of the paper itself. Be sure to check that out. The authors have seen that and are directly able to respond to it, so we all start on an even footing. It's very cool to have the authors on, and this interview in particular was really interesting to me. I hope it is to you. As always, thank you to everyone who leaves a like or a comment. Thanks to all the Patreons and the support I get on Twitter and on YouTube itself. It's really cool. And I wish you a lot of fun. Thank you. Hey there, a quick shout out to today's sponsor. Assembly AI is an AI company that offers accurate APIs for speech to text. As a developer, you can use these APIs to automatically transcribe and understand audio and video data in just a few lines of code. Assembly AI automatically converts asynchronous and even live audio streams into text. They have many features that help you understand your audio data, for example summarization, content moderation, topic detection, and much more. Please check them out using the link in the description to let them know I sent you. Now let's get on with the video. Hi everyone. Today I'm here with Junnan Li and Dongxu Li, who are two of the researchers behind the BLIP paper. It's a very big honor to have you here. Welcome, both of you. Thanks for having us. Really happy to share our work here. Yeah, this paper was really cool. I think when it came out, everyone saw it and it generated quite a bit of buzz, because it is a new approach to incorporating images and language, and it can do a lot of things at the same time. It is a big system, and I was super happy when I saw it. And I was also pretty happy after I read the paper, which sometimes isn't the case anymore after you read a paper. If you were to pitch your idea to someone, say someone comes to you at a poster session, maybe for people who haven't seen the paper review, just extremely briefly: what does your paper say, or what do you propose? Maybe I can take this question. I think the major point of our paper, the selling point, is that we propose a unified framework for vision-language pre-training, where we can pre-train a model that has the capability of doing both vision-language understanding and vision-language generation. Understanding means that it can jointly understand the two modalities, namely image and text, and produce multimodal features that can be used, for example, for classification tasks. And generation means that it can generate text based on some image input; image captioning is a typical generation task. So I think this is the main idea of our model. In terms of how we achieve that, there is one big point I would like to highlight: we have this dataset bootstrapping to tackle the challenge of noisy web training data. A lot of existing works pre-train on data collected from the web, which contains image and alt-text pairs that can be noisy; I think you mentioned that in the review video. So what we do here is synthetically generate captions and also use a filter to try to remove the noisy captions. By doing so, we can significantly improve the quality of the dataset.
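To make the bootstrapping idea just described a bit more concrete, here is a minimal sketch of the loop: a captioner proposes a synthetic caption for each web image, and a filter keeps only the image-text pairs it judges to be aligned. The names `captioner.generate` and `filter_model.matches` are hypothetical placeholders for illustration, not the actual BLIP API.

```python
def bootstrap_dataset(web_pairs, captioner, filter_model):
    """Sketch of caption bootstrapping on noisy (image, alt-text) pairs."""
    cleaned = []
    for image, web_caption in web_pairs:
        # The captioner proposes a synthetic caption for the image.
        synthetic_caption = captioner.generate(image)
        # The filter keeps only captions judged to match the visual
        # content, whether they came from the web or from the captioner.
        for caption in (web_caption, synthetic_caption):
            if filter_model.matches(image, caption):
                cleaned.append((image, caption))
    return cleaned
```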
And I think one of the key messages we want to send in the paper is that the quality of the data really matters; it's as important as, if not more important than, the quantity. A lot of past works have focused on scaling up the model with big data. Here we do scale up, but we also focus on the quality of the data. I want to dive into this data bootstrapping right away, because it is almost an independent thing from the system itself. We've long known that we can trade off quality for quantity, but usually it is in an exponential fashion: to get the same gain with lower-quality data, we need exponentially more of it. Which came first, the idea of building the vision-language model or the idea of filtering the dataset? Because they both play nicely into one another in your paper, and I'm just wondering how this came to be. Which came first, and why one or the other? Yeah. So actually, in my past papers I focused on weakly supervised learning, or learning from noisy data. I've always been quite interested in how people train models with imperfect data, which is a very practical scenario, and I think this field may deserve more attention. It's not as popular as some other fields, but it's a really practical issue, and it does exist for vision-language pre-training. Actually, one of my previous papers on vision-language pre-training, which we call the ALBEF model, was published in NeurIPS last year. There we had a kind of self-training scheme where we wanted to clean the noise in the dataset, but in a relatively simpler way than what we do here: rather than generating synthetic captions, we were doing some self-distillation. Then we took it to the next step in the BLIP paper, where we first looked at the dataset and saw a lot of noise. Here, noise basically means that the caption is not really describing the visual content of the image. It may still be good human-written text; it's not that the text is grammatically wrong, it's grammatically correct. It's just not aligned with the image. So what we try to solve is how to generate text that is more aligned with the image, such that our pre-training can benefit from it. I think this left picture here illustrates it well, where the alt text just says "from a bridge near my house", right? Which is a weird thing to put in an alt text; you would usually put that in some sort of social media post. But this is one of the examples where the alt text doesn't really describe the image. I thought that illustrated it really well. Were you always aware of this weakness? How do you even find out that this is a large-scale problem? Yeah, I first found out about this problem when going through some of the pre-training datasets. What people previously used as a quite standard web dataset was Conceptual Captions 3M, which is a relatively medium scale, not too small but not very huge. And there do exist a lot of captions like this in that dataset. I found this problem even more exaggerated when I tried to use a bigger dataset. For example, in this paper we used the LAION dataset, which was very newly released, and the noise problem happens a lot more frequently when you try to scale up the data to include more web images with alt text. So we felt that if we could solve this, it could really change the model's performance.
Have you seen that there's a recent paper called something like "vision models are more robust and fair when trained on uncurated data"? Here you seem to say we need better quality data, and that group is saying, essentially, no, our models work better when we have lower quality but we just go out and collect data. Can you maybe establish a bit of a connection between the two views? How do they agree? Yeah, so I think maybe there are two different aspects. One is the quality, the other is the diversity. And I think what that paper tried to claim — I haven't read the details, it's just my impression — was that if you have this huge web dataset that is maybe more diverse than a human-curated dataset, it can bring a better advantage to the model. I think that doesn't contradict what we say here. Actually, in our experiments we show that the diversity of captions does matter a lot. When we generate synthetic captions, we try to generate a diverse set of captions that covers a whole bunch of different concepts, rather than a very common and safe description of the image. These two approaches seem to me not to contradict but to complement each other. On one side, when you have more data, of course, you can always scale up the size of your data, and having more samples gives the model better capacity. But on the other side, we focus more on the quality. If you really look at the number of images we are using here for pre-training, compared with some of the other works, it's not a lot; it's not at too large a scale. But since the quality of our pre-training corpus is better, we end up with better performance. So I really think the scale and the quality are complementary, and they do not contradict, I believe. Let's stay on the captioning and filtering for just one more second. You first pre-train the entire model on this uncurated dataset, and then you fine-tune on a human-generated captioning dataset in order to get these filter and captioning models. My worry there would be a little bit exactly what we talked about right now: what my filter and captioning models learn is really dependent on — let's assume the quality of the human-generated dataset is good — but the diversity of it really matters, because it needs to cover all the images that come from the uncurated dataset. Otherwise it is going to misjudge, mis-filter, or not be able to caption this dataset. How do you control for that? And maybe you can also comment on: if I now, let's say, want to expand my dataset to areas that I know the human one doesn't cover, what could be a method of still going and researching on this new type of data? Yeah, I think that's a very good question. It's a valid concern that this fine-tuning may bias the model towards certain domains. I think one of the reasons we achieve performance improvements is because a lot of these downstream tasks are similar to the COCO domain of images. So I think that's a valid point. But in the meantime, I would say that this fine-tuning doesn't destroy the model's capability to generate diverse captions, because the fine-tuning is really a very lightweight procedure. For pre-training, we pre-train on this huge dataset for 20 epochs, which takes a few days, maybe even a week. But for this fine-tuning, we only fine-tune for five epochs on the very small-scale COCO dataset, which can finish within a few hours.
So this fine-tuning would not make the model forget what it has previously seen. It only slightly modifies the model so that it can generate captions that are more like human-written ones. But we do find that even after fine-tuning, the model can generate captions that are not within the vocabulary of the COCO dataset. So it's not like the fine-tuning completely destroyed the model's diversity capability. That's my answer to your first question. And for the second question, if someone wants to try to expand the model to a different domain where there don't exist human annotations, I would say first, if you can collect some, that would be good. And if you cannot, maybe one solution is that there might be some similar images in this huge web dataset that you can retrieve. So let's say you can retrieve some similar images associated with web captions; then maybe you can slightly fine-tune the model on those subsets, so that the model becomes slightly more biased towards your domain and more suitable for your downstream task. With this arrow right here you almost suggest a loop, suggesting that this could be done multiple times, right? I could go multiple times through this stage. Is this anything you've tried? I've maybe not seen this in the experiments. If this is anything you've tried, would anything change in loop number two or number three or number four? What would be the difference? There's no new data introduced. Yeah, so first of all, I would say it's definitely possible to do multiple rounds of iterations of this bootstrapping, and we mention this as one of the future works. In terms of extra knowledge, in each round of bootstrapping we can add in new captions: if the model becomes better, it can generate better synthetic captions. And there might be a diminishing return if we do multiple rounds. I would say my intuition is that the first round will probably help the most, and maybe the second or third will help less. But unfortunately, due to time and computation constraints, we didn't really have the resources to run that experiment before the paper. So that's definitely one of the future plans that we have. So let's shift maybe. Sorry. Good. Okay, this model here is quite big — that was my first impression when I saw it. There's a lot of stuff. Okay, I have also drawn a lot of stuff on it. Sorry, I can make this go away. So the model here is relatively big and, you know, there are modules going around, there's parameter sharing going on. What was the evolution of this model? Is this version one that we're looking at right here? Or is this, you know, version 50 after you've tried a bunch of other things? Yeah, definitely not version one. So actually, this model is heavily inspired by our previous ALBEF model, which is an encoder-only model. If you look at the model, there's not too much difference between ALBEF and BLIP, except the fact that we now add the generation capability to BLIP with the language modeling loss. The reason why we wanted to add this is first because the encoder model doesn't really transfer that well to the image captioning task and other generation tasks, so it's better that we can pre-train it to have this capability. That's why we added this new decoder module. And then after we added the decoder module, we thought, since we are doing multitask learning, can we share some parameters? Because first of all, it's more efficient to share parameters. And secondly, it may bring some advantage from the multitask training, by jointly optimizing those few losses. So we tried different sharing strategies. First, we started with not sharing any parameters at all. And then we tried to decouple maybe the cross-attention layer, or the self-attention layer, or the feed-forward layer. And we found that decoupling the self-attention layer between the encoder and decoder is a more efficient and effective way. So that's why we chose this strategy.
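To make that chosen strategy concrete, here is a minimal PyTorch sketch of a transformer layer in which the cross-attention and feed-forward weights are shared between the encoder and decoder paths, while the self-attention is kept separate (bidirectional versus causal). It is a schematic reconstruction, not the actual BLIP implementation; layer norms and dropout are omitted for brevity:

```python
import torch.nn as nn

class SharedLayer(nn.Module):
    """One transformer layer: cross-attention and feed-forward weights are
    shared between encoder and decoder use; only self-attention is duplicated."""

    def __init__(self, d=768, heads=12):
        super().__init__()
        self.self_attn_enc = nn.MultiheadAttention(d, heads, batch_first=True)
        self.self_attn_dec = nn.MultiheadAttention(d, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d, heads, batch_first=True)  # shared
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(),
                                 nn.Linear(4 * d, d))                        # shared

    def forward(self, x, image_feats, causal_mask=None, decode=False):
        # pick the mode-specific self-attention (decoder uses a causal mask)
        attn = self.self_attn_dec if decode else self.self_attn_enc
        x = x + attn(x, x, x, attn_mask=causal_mask)[0]
        # cross-attention into the image features uses the SAME weights in both modes
        x = x + self.cross_attn(x, image_feats, image_feats)[0]
        return x + self.ffn(x)
```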
But there is a possibility that, because we were doing this experiment on a relatively smaller-scale pre-training — we were using the 14 million images for pre-training, but our final model was pre-trained on 129 million images — this sharing strategy may not be optimal if you scale up the dataset. So I would imagine, if you want to have the best possible performance, you may want to scale up the dataset and try to decouple the parameters more. But that would, of course, sacrifice some of the efficiency brought by the parameter sharing. Yeah. Another point I probably want to add here is that this architecture is not an ad hoc design, because remember that one of our starting points is to eliminate the noise in these pre-training datasets. So from there, on one side we need to identify which are the noisy ones — whether the image and the caption match with each other — and that ends up with this design of the encoder model. On the other side, we want even more: when we find that the caption does not align well with the image itself, we don't want to simply discard the training data point; we want to generate some useful, surprising captions that can further help us. So from that, I really want to say that it's not like we wanted to put everything together and glue different models into a single model to make it big. It really serves this captioner-filter algorithm very well. Just one additional comment: our model is actually really not big if you compare it to some other models. Basically, our model is a ViT plus a BERT — the base version of BERT. So in terms of the number of parameters, I would say it's a standard-size deep learning model; it's not that crazy huge. Even though we draw a lot in the current figure, because of this parameter sharing going on, the number of parameters and the training computation load are not that heavy. Yeah. I like the fact that this really arises from the goal of cleaning the dataset. I also thought, the more I read it and the more I talked about it, the more evident it became that the things really play together nicely. So you use the contrastive loss to get the hard negatives for the — I want to say — matching loss or ranking loss, and that gives you the filter. And then the language model here gives you the captioning. With respect to parameter sharing, you said, okay, the matching head or the contrastive heads are not really good at captioning themselves, so we'd rather pre-train or train a captioning or language generation model. Do you find that adding the task of language generation also helps the tasks that the other models would be good at? Like, do you find an additional benefit — apart from the fact that your model can also do captioning — for the already existing or already tackled tasks by adding, let's say, the language model?
Yes, yes. We find that there is an advantage brought by this language modeling loss. This language modeling loss, if you think about it, is really quite similar to the masked language modeling loss, except that now it's an autoregressive version. So in our previous ALBEF work and in some other papers, what people usually do is masked language modeling, to try to improve the model's capability to understand the text at a more fine-grained granularity, because image-text matching and image-text contrastive learning are more like a global matching: you are trying to match the image and the text. But the language modeling is more fine-grained: you want to generate the words based on the image. And to achieve that, you need to better understand some details of the image and align them with the textual concepts to be able to generate the words. Do you have, let's say, more extensive goals in mind here? You just said it's actually not that big and it's really nice — I agree with all of that. Yet I foresee a future where you could bring together lots of these modules. Essentially, what I'd like to have is, first of all, we could obviously think of doing the same with the image side right here. You just have an encoder here right now, but we could think of breaking out here, doing image generation, doing whatever we can do with images. But on the other hand, maybe an even bigger future vision would be: I bring a dataset and I say, look, these are pairs of images and text; now please, system, make me a model that includes all of these losses that I can think of, all of these different combinations. And the system would figure out, oh, okay, I can share parameters here and I can build that, and so on. And given your findings — which I totally believe — that adding more of these tasks and sharing the parameters actually mutually benefits them, the representations become more capable, maybe more broadly meaningful, and so on. So I think that might be a cool future to work towards. I don't know how feasible it is, though. Is that anything on your roadmap, or what does the future look like for these models? Yeah, I think that's a very cool idea — maybe a very ambitious goal. We have considered adding some image generation capability, but we didn't, because it doesn't fit very well with our current framework. We don't want to make the framework very huge and messy; we try to keep it cleaner. And regarding your point — can we have an automatic system that can combine different modules and losses — I think that's a possible goal; it's just that there could be a lot of obstacles to achieving it. For example, if we borrow some ideas from the NAS community, and maybe some reinforcement learning ideas, maybe there are ways we can train a policy to do that. But it's not entirely clear to me how we can achieve that, because I think the main problem is that how to evaluate a pre-training is itself a big problem. You cannot just say that a lower pre-training loss means your model is better at the downstream task. If there were a correlation between pre-training loss and downstream performance, it might be easier: you would just find the optimal modules that minimize your pre-training loss. But usually that's not the case. It also depends on how well aligned your pre-training task and downstream task are. I think that's one of the major issues, and why it may take some trial and error to find the best strategy for the pre-training.
Maybe I can add a few sentences to that. I think being able to figure out how to combine these different modules together automatically would be super cool and futuristic, but there are a couple of practical messages that we want to convey here. The first is, if you really look at how we fine-tune this MED model to make it a captioner and a filter, and also how we combine these different modules together in order to tackle the downstream tasks, there are really some dedicated ways to do that. Usually, if you look at some pre-training works out there, their strategies are pretty simplistic, in the sense that on most occasions they just add task-specific heads. But in this particular work, we moved one step further than that: we are rethinking how to rearrange these modules and what the best parameter sharing strategies are. Another message we want to send here is that a lot of people blindly do multitasking by aggregating hundreds of different datasets and tasks into one pre-training model, and maybe with BLIP we want people to revisit this decision the next time they do multitasking, because not every task necessarily complements the others. You may want to carefully look into what to share and what not to share. I think these are the two reminders we want to give for future works. And I have one additional comment to follow what Dongxu said: you can see a lot of other works that combine maybe eight or ten objectives together. Some strategies for vision-language training bring in an object detection objective to improve the localization capability. We think that's a valid way to improve performance. But here, what we try to say is that we want to keep things very nice and simple. We have these three losses, where each loss serves a very clear purpose and can be transferred to a very specific downstream task. And all we need is just image-text pairs; we don't need any bounding boxes or anything else. So I think that's one of the messages we also want to convey.
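For readers who want to see how those three losses fit together, here is a schematic training-step sketch. The `model` object with `encode_image`, `encode_text`, `match`, and `caption_nll` methods is hypothetical — placeholder names standing in for the ITC, ITM, and LM heads — and the equal loss weighting is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

def blip_style_step(model, images, texts, temperature=0.07):
    """Schematic combined objective: contrastive (ITC) + matching (ITM) + LM.
    `model` is a hypothetical stand-in, not the real BLIP API."""
    img_emb = F.normalize(model.encode_image(images), dim=-1)   # (B, D)
    txt_emb = F.normalize(model.encode_text(texts), dim=-1)     # (B, D)

    # 1) image-text contrastive (ITC): global matching over in-batch negatives
    sim = img_emb @ txt_emb.t() / temperature                   # (B, B)
    targets = torch.arange(sim.size(0), device=sim.device)
    itc = (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets)) / 2

    # 2) image-text matching (ITM), with hard negatives mined from ITC:
    #    for each image, the most similar *wrong* text in the batch
    hard = sim.detach().clone().fill_diagonal_(float("-inf")).argmax(dim=1)
    logits = torch.cat([model.match(images, texts),                     # positives
                        model.match(images, [texts[i] for i in hard])]) # negatives
    labels = torch.cat([torch.ones(len(texts)), torch.zeros(len(texts))])
    itm = F.cross_entropy(logits, labels.long().to(sim.device))

    # 3) autoregressive language modeling (LM): fine-grained, word-by-word
    lm = model.caption_nll(images, texts)

    return itc + itm + lm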
Cool. And I especially like the fact that with the pre-training, with the aspect of fine-tuning, you're able to recombine these different modules in very creative ways. So even though these modules have their purposes for the pre-training, for the captioning, for the filtering, it seems many, many tasks can now be tackled by some combination of these models and a little bit of fine-tuning, which is something I find really cool. You have done extensive experiments — there are lots of tables, which means you had to run and collect lots of numbers — which is very nice, because it gives a much broader overview than just having four numbers or so comparing with one baseline. Could you maybe highlight some of the standout results that you got, or some of the more important results? How would you summarize, or what would you highlight about, your experimental evaluation? Yeah, sure. I think the most important one would be table one, where we demonstrate the performance gain achieved by bootstrapping our dataset. If you look at the first column, it shows how many images we are using. We have two settings: one with 14 million images, and another where we scale up with more of the small, noisy web image-text pairs. And the second column shows how we perform the bootstrapping: C stands for captioning and F stands for filtering. It means whether we do captioning to generate synthetic captions, or we do filtering to remove the noisy captions, or we do both together. So if you look at the first, second, third, and fourth rows, you can see that both the captioning and the filtering help individually, and if you combine them together, they really complement each other. By generating synthetic captions and at the same time trying to remove the noise, we achieve, I would say, quite a good amount of gain across four different datasets, covering both the retrieval task and the captioning task. So I think that's one of the key results we have here. And then it goes to the second table: how do we do the bootstrapping of the captions — do we use beam search, or do we use nucleus sampling? The difference between those two approaches is that beam search is a deterministic decoding strategy, where you try to find the most likely sentence associated with the image, while nucleus sampling is a stochastic approach, where you sample according to some probability distribution. We find that, surprisingly, if you compare beam search with no generation, there is a good gain achieved by beam search; but by moving from beam search to nucleus sampling, there is another similar amount of gain. This is something we didn't expect the first time we saw the results. And after we did a deep dive into what the captions look like, and how beam search and nucleus sampling generate different captions, we found that beam search generates a kind of safe caption that accurately describes the image most of the time, but it's not surprising — you can commonly see those captions in the dataset — and that doesn't add a lot of extra knowledge for the model to learn. But nucleus sampling really introduces some very diverse captions that are more like human-written ones. Humans don't write a very boring description like "a man is with a dog in a park" — it's a very boring caption. Nucleus sampling can give you more diverse captions. And if you look at the noise ratio, which is how many of those captions were filtered out by our filter, you can also see that beam search is less noisy; but even though it's less noisy, it's not as beneficial as nucleus sampling here. And this really raises another question, which I think is very interesting future work: is nucleus sampling the best way? Because those models are pre-trained with the language modeling loss, which is a kind of deterministic loss — you try to maximize the likelihood of your captions — and we are just doing something on the decoding side to get more diverse captions. But nucleus sampling was used mostly in NLP papers. So does there exist some better diverse captioning strategy for image captioning tasks? I think that's a very interesting question. I think in recent times this has been shining through in a lot of works: the fact that maybe we don't need to go maximum likelihood in our inference step, but maybe it's a better approach to go diverse with the sampling, and then, exactly as you do, have some sort of classifier or filter to just scrap out the noise. I think that's a really, really good approach, and we see this everywhere now.
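Since the beam-search-versus-nucleus-sampling distinction is central here, a minimal, self-contained implementation of nucleus (top-p) sampling for a single decoding step may help; this is a generic illustration, not code from the paper:

```python
import torch

def nucleus_sample(logits, top_p=0.9):
    """Sample one token id from 1D `logits` with nucleus (top-p) sampling:
    keep the smallest set of tokens whose cumulative probability reaches
    top_p, renormalize, then sample. Small top_p approaches the 'safe',
    near-greedy behavior; larger top_p yields more diverse outputs."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # keep token i if the cumulative mass *before* it is still below top_p
    keep = cumulative - sorted_probs < top_p
    keep[0] = True                      # always keep the most likely token
    filtered = sorted_probs * keep
    filtered = filtered / filtered.sum()
    choice = torch.multinomial(filtered, 1)
    return sorted_idx[choice].item()

# toy example: a peaked distribution over a 5-token vocabulary
logits = torch.tensor([3.0, 2.5, 0.1, -1.0, -2.0])
print([nucleus_sample(logits, top_p=0.9) for _ in range(5)])
```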
I think DALL-E famously had CLIP re-ranking all the outputs, and I think more and more models go towards this. It's a really cool finding that you're essentially finding exactly the same thing. When I look at these numbers — all of the numbers — it's very convincing to see that everything almost uniformly gets better; you support whatever you say really well. This trend right here really works across all of the datasets; you uniformly get better in almost all the tables. However, the maximum difference is, whatever — from here to here it's like two points in, what is this, what's TR? It's a recall — text recall. Text recall, sorry. Oh yeah, it's down here: text recall, image recall. That's like 2%; right here, again, it's like one point something percent. So it's uniformly getting better. My question is: given that the getting better is convincing, but the scale of it is, yeah, 2% or so, when is it worth it to do this week-long pre-training you mentioned? This is a big procedure — the pre-training is big, and then you fine-tune and run the pre-training again. When is it worth it? From what scale, or for what applications, does it become actually worth doing something like this? Yeah, I think that's a very good question. First of all, I would say it is worth doing if you observe a large amount of noise in the data, or maybe your data is incomplete in some of the domains. For example, here the web data is primarily dominated by those alt texts, which can be different from what a human would write to describe an image. So if there is a noisy scenario or a domain gap, I think it's worth doing. And secondly, we have actually also released our dataset after bootstrapping, so if you are just trying to do vision-language pre-training in a similar domain, I think you can just download our version and use that as a starting point to avoid the first round of pre-training. And maybe, regarding your previous comment that we have a really uniform improvement across those tasks — actually, in one of the tasks, maybe you can scroll down the paper, let me try to find it... I think it's the NLVR task. Table eight, maybe? Yeah, yeah, table eight. For this task, this is where we find that the better quality of captions doesn't necessarily give you a better gain, if you compare here. And actually, scaling up the number of pre-training images doesn't correlate very straightforwardly with the downstream performance gain either. So I think it still depends on the alignment between your pre-training and your downstream objective. For most of the tasks it is well aligned, and that's why improving your pre-training data quality can improve your downstream task. Yeah, maybe I can add a few sentences on whether it is worthwhile to improve that much. I think if you really imagine the big picture here, in terms of multimodal retrieval: let's say you deploy this retrieval algorithm, and it manages to improve the profit by 1% — that's a huge achievement; you win a lot. At Salesforce we also have retrieval; we work with clients on their retrieval services. So in terms of that, if you just let your GPUs run for one week and improve by 1%, that's a huge improvement, I would say. And I would also like to say that these numbers kind of, I think, undersell what BLIP has achieved.
Because I think BLIP, beyond this relative advantage over its competitors, is also qualitatively better in terms of how easy it is to use. If you really look at the demo we created on the web, you can just freely ask any question in natural language rather easily. In contrast, a lot of these image question answering models are not doing free-form generation; they are kind of doing classification in order to tackle the question answering task. This point is, however, not fully demonstrated, I believe, in the current manuscript. So if you really want to be impressed, we really suggest you check out our demo and put in whatever photos and questions you like. Cool. It's really neat, by the way, that you have a demo to go along with it, because I think it makes it more accessible and it also demonstrates the capabilities of this. It's almost like we're moving into the world that GPT-3 maybe created for text with these image-language models, because we got the same feeling from GPT-3: oh, I can just go and put in any text, right, and I can interact with the system in a free-form way. And it's really cool to see that we're also moving in this direction with the image models. In terms of just the process of how this research went about — you ended up with a cool system, with a nice way of bootstrapping data, and so on — can you maybe tell us a little bit about stuff that didn't necessarily work out during the research? Was there any point where you were maybe disheartened a little bit, things that didn't work out? What were your low and your high points during the creation of this paper? Yeah, actually, one of the experiences we had was when we first tried to scale up the pre-training with more web images using this LAION dataset that we had downloaded, which took quite some time — and it didn't help that much. So then it really felt like, why is scaling up the data not benefiting the model? Then I did some more analysis, and after that I realized that a lot of those images are very, very small in resolution; some are just icons or brand names. And if I remove those, then it begins to show the gains. But I think that's one of the blockers we faced.
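As an aside, here is a minimal sketch of the kind of resolution-based pre-filtering just described — dropping tiny, icon-like images before pre-training. The 64-pixel threshold is illustrative; no specific value is given in the interview:

```python
from PIL import Image

def keep_image(path, min_side=64):
    """Drop tiny, icon-like images before pre-training.
    The 64px threshold is an assumption, not the value used in BLIP."""
    try:
        with Image.open(path) as img:
            return min(img.size) >= min_side  # img.size is (width, height)
    except OSError:  # unreadable or truncated download
        return False

# usage: filtered = [p for p in image_paths if keep_image(p)]
```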
And I think after we first got the bootstrapping — especially the nucleus sampling — to give a big performance gain, at that point we were quite confident that this should be a good solution. And I think that point is when I realized, okay, this method should work well and we can write a paper about it. Great. Dongxu, do you want to say something? Yeah, I believe some of these strategies also arose from internal discussions with other group members, so there's really a lot of crowd intelligence behind the scenes. How is research organized at Salesforce? I have a bit of insight into, let's say, the big tech giants like Google and Facebook and so on, and they have their research divisions. At a company like Salesforce, which is more customer-oriented — well, all these companies are customer-oriented, obviously — how is research organized there? What do you do while the model is pre-training for a week? Do you have other stuff to do, or are you mainly researchers, or what's life like there? Yeah. So first of all, I would say that AI is a big part of what Salesforce tries to achieve: to use AI to better help the customers. So we have this separate research division — maybe not as large as Google's or Facebook's, but I think everything works quite well in our research team. In terms of our day-to-day operation, I think it's mostly similar to other industry researchers. We can be quite flexible, doing research or some more product-oriented work. We are motivated to do research that can generate high impact, that can really change the field in a more substantial way. And while we wait for the GPUs to finish training, we just do other research stuff, read some papers, have some internal discussions, or maybe try to solve some real production problems. Cool. Is there anything else you want to get out about this paper? You already said people can go to the web, to your repo, and you have a demo also available. Is there anything you'd want to get out — what's the easiest way for people to get started with this research? Yes. I think, first, again, welcome to try out our demo and welcome to visit our GitHub. We do have, I think, quite detailed instructions on how to download and train our fine-tuned models. And also, I welcome any suggestions or questions you might have about our model, which we can use to improve the model or the code. That would be great. Dongxu, any last messages? Our team is expanding, so if you are interested, just let us know. Yeah, we are looking for interns in vision-language research. Cool. Who can apply? Anyone that is at a university? Yeah, anyone can apply. We hire globally, so we can do remote work now. Cool. Excellent. Okay, Dongxu and Junnan, thank you very much for being here. This was a lot of fun. Thank you for having us. Thank you. Have a great day.
[ { "start": 0, "end": 9.200000000000001, "text": " Hello, this is an interview with the authors of the blip paper." }, { "start": 9.200000000000001, "end": 13.64, "text": " If you haven't seen it, I've made a review video of the paper itself." }, { "start": 13.64, "end": 14.8, "text": " Be sure to check that out." }, { "start": 14.8, "end": 19.240000000000002, "text": " The authors have seen that and are directly able to respond to it." }, { "start": 19.240000000000002, "end": 21.86, "text": " So we all start on an even footing." }, { "start": 21.86, "end": 26.88, "text": " It's very cool to have the authors on and this interview particularly was really interesting" }, { "start": 26.88, "end": 27.88, "text": " to me." }, { "start": 27.88, "end": 29, "text": " I hope it is to you." }, { "start": 29, "end": 32.96, "text": " As always, thank you for everyone who leaves a like who leaves a comment." }, { "start": 32.96, "end": 38.8, "text": " Thanks to all the patreons and the support I get on Twitter and on YouTube itself." }, { "start": 38.8, "end": 40.2, "text": " It's really cool." }, { "start": 40.2, "end": 42.04, "text": " And I wish you a lot of fun." }, { "start": 42.04, "end": 43.04, "text": " Thank you." }, { "start": 43.04, "end": 44.88, "text": " Hey there, a quick shout out to today's sponsor." }, { "start": 44.88, "end": 50.74, "text": " Assembly AI is an AI company that offers accurate API's for speech to text." }, { "start": 50.74, "end": 56, "text": " As a developer, you can use these API's to automatically transcribe and understand audio" }, { "start": 56, "end": 59.04, "text": " and video data in just a few lines of code." }, { "start": 59.04, "end": 66.12, "text": " Assembly AI automatically converts asynchronous and even live audio streams into text." }, { "start": 66.12, "end": 70.12, "text": " They have so many features that help you understand your audio data." }, { "start": 70.12, "end": 75.74000000000001, "text": " For example, summarization, content moderation, topic detection, and much more." }, { "start": 75.74000000000001, "end": 79.84, "text": " Please check them out using the link in the description to let them know I sent you." }, { "start": 79.84, "end": 88.92, "text": " Now let's get on with the video." }, { "start": 88.92, "end": 89.92, "text": " Hi everyone." }, { "start": 89.92, "end": 95.48, "text": " Today I'm here with Junnan Li and Dongxu Li, who are two of the researchers of the blip" }, { "start": 95.48, "end": 96.48, "text": " paper." }, { "start": 96.48, "end": 98.76, "text": " It's a very big honor to have you here." }, { "start": 98.76, "end": 99.76, "text": " Welcome both of you." }, { "start": 99.76, "end": 102.76, "text": " Thanks for having us." }, { "start": 102.76, "end": 105.24000000000001, "text": " Really happy to share our work here." }, { "start": 105.24000000000001, "end": 107.48, "text": " Yeah, this paper was really cool." }, { "start": 107.48, "end": 114.88000000000001, "text": " I think when it came out, everyone saw it and it generated quite a bit of buzz because" }, { "start": 114.88000000000001, "end": 121.4, "text": " it is a new approach to incorporating images and language and it can do a lot of things" }, { "start": 121.4, "end": 123.4, "text": " at the same time." }, { "start": 123.4, "end": 128.92000000000002, "text": " It is a big system and yeah, I was super happy when I saw it." 
}, { "start": 128.92000000000002, "end": 134.24, "text": " And when I read the paper, I was also pretty happy after I read the paper, which sometimes" }, { "start": 134.24, "end": 139.32000000000002, "text": " isn't the case anymore after you read the paper." }, { "start": 139.32000000000002, "end": 145.24, "text": " And if you would just to dive in maybe, if you would pitch your idea to someone, like" }, { "start": 145.24, "end": 149.68, "text": " someone comes to you in a poster session or so, maybe for people who haven't seen the" }, { "start": 149.68, "end": 156.96, "text": " paper review just extremely briefly, what does your paper say or what do you propose?" }, { "start": 156.96, "end": 158.8, "text": " So maybe I can take this question." }, { "start": 158.8, "end": 165.52, "text": " I think the major point of our paper, the setting point is that we propose a unified" }, { "start": 165.52, "end": 171.72, "text": " framework for visual language pre-training where we can pre-train this model that has" }, { "start": 171.72, "end": 178.28, "text": " the capability of doing both visual language understanding and visual language generation." }, { "start": 178.28, "end": 184.8, "text": " So what understanding means is that it can jointly understand the two modalities, namely" }, { "start": 184.8, "end": 191.88000000000002, "text": " image and text, and produce some kind of multimodal features that can be used such as for classification" }, { "start": 191.88000000000002, "end": 193.12, "text": " tasks." }, { "start": 193.12, "end": 200.48000000000002, "text": " And what generation means here is that it can generate text based on some image input." }, { "start": 200.48000000000002, "end": 205.16000000000003, "text": " For example, for image captioning, it's one of a typical generation task." }, { "start": 205.16000000000003, "end": 210.76000000000002, "text": " So I think this is the main idea of our model." }, { "start": 210.76, "end": 215.6, "text": " In terms of the technical, in terms of how do we achieve that, I think there is one big" }, { "start": 215.6, "end": 222.76, "text": " point that I would like to highlight is we do have this data set bootstrapping to tackle" }, { "start": 222.76, "end": 226.92, "text": " the challenge of noisy web training data." }, { "start": 226.92, "end": 233.76, "text": " Because existing works, a lot of them pre-train on those data that are collected from the" }, { "start": 233.76, "end": 238.2, "text": " web, which contains the image and all text pairs, which can be noisy." }, { "start": 238.2, "end": 242.2, "text": " I think you mentioned in the review video." }, { "start": 242.2, "end": 249, "text": " So what we do here is we want to synthetically generate captions and also to use a filter" }, { "start": 249, "end": 251.72, "text": " to try to remove the noisy captions." }, { "start": 251.72, "end": 257.03999999999996, "text": " And by doing so, we can significantly improve the quality of the data set." }, { "start": 257.03999999999996, "end": 261.44, "text": " And I think one of the key message we want to send in the paper is that the quality of" }, { "start": 261.44, "end": 269.26, "text": " the data really matters, it's as important as if not more important than the quantity." }, { "start": 269.26, "end": 274.48, "text": " So a lot of passwords have focused on scaling up the model with big data." }, { "start": 274.48, "end": 280.96, "text": " But here we do scale up, but we also focus on the quality of the data." 
}, { "start": 280.96, "end": 287.56, "text": " I want to dive into this data bootstrapping right away, because it is almost a bit of" }, { "start": 287.56, "end": 291.28, "text": " an independent thing from the system itself." }, { "start": 291.28, "end": 297.67999999999995, "text": " We've long known that we can trade off quality for quantity, but usually it is in an exponential" }, { "start": 297.67999999999995, "end": 298.67999999999995, "text": " fashion." }, { "start": 298.67999999999995, "end": 304.4, "text": " So to get the same amount more quality, we need exponentially more data if we want to" }, { "start": 304.4, "end": 312.28, "text": " achieve it with less quality data." }, { "start": 312.28, "end": 320.71999999999997, "text": " Which came first, the idea of building the vision language model or the idea of filtering" }, { "start": 320.72, "end": 326.62, "text": " or the data set, because they both play nicely into one another in your paper." }, { "start": 326.62, "end": 329.72, "text": " And I'm just a bit wondering, how did this come to be?" }, { "start": 329.72, "end": 332.12, "text": " Which came first?" }, { "start": 332.12, "end": 334.12, "text": " Why one or the other?" }, { "start": 334.12, "end": 335.12, "text": " Yeah." }, { "start": 335.12, "end": 341.96000000000004, "text": " So actually, for my research, for my past papers, I focused some papers on this weekly" }, { "start": 341.96000000000004, "end": 345.46000000000004, "text": " supervised learning or learning from the noisy data." }, { "start": 345.46, "end": 351.64, "text": " So I've always been quite interested in how do people train models with imperfect data," }, { "start": 351.64, "end": 354.56, "text": " which is a very practical scenario." }, { "start": 354.56, "end": 358.47999999999996, "text": " And I think this field may deserve more attention." }, { "start": 358.47999999999996, "end": 363.91999999999996, "text": " It's not as popular as some of the other fields, but it's really a very practical issue." }, { "start": 363.91999999999996, "end": 368.21999999999997, "text": " And it does exist for vision language pre-training." }, { "start": 368.21999999999997, "end": 373.64, "text": " So actually, one of my previous papers in vision language pre-training, which we call" }, { "start": 373.64, "end": 380.76, "text": " it LBF model, it was published in NeurIPS last year, we have this kind of self-training" }, { "start": 380.76, "end": 385.56, "text": " scheme where we want to clean the noise in the data set." }, { "start": 385.56, "end": 391.52, "text": " But it's in a relatively more simpler way than what we do here." }, { "start": 391.52, "end": 396.59999999999997, "text": " So rather than generating synthetic captions, we were doing some self-dissolation thing." }, { "start": 396.59999999999997, "end": 402.32, "text": " So then we take it to the next step in the brief paper, where we first look at the data" }, { "start": 402.32, "end": 404.92, "text": " set and we see a lot of noise." }, { "start": 404.92, "end": 410.08, "text": " And here, noise basically means that the caption is not really describing the visual content" }, { "start": 410.08, "end": 411.15999999999997, "text": " of the image." }, { "start": 411.15999999999997, "end": 414.71999999999997, "text": " It may still be a good human-written text." }, { "start": 414.71999999999997, "end": 418.64, "text": " It's not the text is grammatically wrong, it's grammatically correct." 
}, { "start": 418.64, "end": 421.08, "text": " It's just that it's not aligned with the image." }, { "start": 421.08, "end": 426.03999999999996, "text": " So what we try to solve is how do we generate texts that are more aligned with the image" }, { "start": 426.03999999999996, "end": 430.56, "text": " such that our pre-training can benefit from this." }, { "start": 430.56, "end": 436.92, "text": " I think this left picture here illustrates it well, where it just says, from a bridge" }, { "start": 436.92, "end": 440.52, "text": " near my house, right?" }, { "start": 440.52, "end": 445.24, "text": " Which is a weird thing to put in an alt text, you would put that usually in some sort of" }, { "start": 445.24, "end": 447.44, "text": " a social media post or so." }, { "start": 447.44, "end": 452.48, "text": " But this is one of the examples where the alt text doesn't really describe the image." }, { "start": 452.48, "end": 454, "text": " I thought that was really well." }, { "start": 454, "end": 458.6, "text": " Were you always aware of this weakness?" }, { "start": 458.6, "end": 463.16, "text": " How do you even find out that that is a large-scale problem?" }, { "start": 463.16, "end": 470.24, "text": " Yeah, so I think I first come find out this problem when going through some of the Pergena" }, { "start": 470.24, "end": 471.42, "text": " data set." }, { "start": 471.42, "end": 476.96000000000004, "text": " So I think what people previously used, a quite standard web data set, was this conceptual" }, { "start": 476.96000000000004, "end": 481.32000000000005, "text": " caption 3 million, which is a relatively medium scale." }, { "start": 481.32000000000005, "end": 485, "text": " It's not too small, but not very huge." }, { "start": 485, "end": 489.32, "text": " And there do exist a lot of captions like this in that data set." }, { "start": 489.32, "end": 495.12, "text": " And I found this problem even exaggerated as I tried to use a bigger data set." }, { "start": 495.12, "end": 501.72, "text": " For example, in this paper, we used a line data set, which was a very newly released" }, { "start": 501.72, "end": 503, "text": " data set." }, { "start": 503, "end": 509.96, "text": " And the noisy problem was even more, like, happens a lot more frequent when you try to" }, { "start": 509.96, "end": 514.2, "text": " scale up the data to include more web images with alt text." }, { "start": 514.2, "end": 521.32, "text": " So we feel like this is something that if we can solve it, it could really change the" }, { "start": 521.32, "end": 523.36, "text": " model's performance." }, { "start": 523.36, "end": 528.84, "text": " Have you seen that there's a recent paper called something like, vision models are more" }, { "start": 528.84, "end": 534.88, "text": " robust and fair when trained on uncurated data or something like this?" }, { "start": 534.88, "end": 540.6, "text": " So this here, you seem to say we need better quality data." }, { "start": 540.6, "end": 546.76, "text": " And that group is saying essentially, no, our models work better when we have less quality," }, { "start": 546.76, "end": 549.86, "text": " but we just go out and collect data." }, { "start": 549.86, "end": 553.8000000000001, "text": " Can you maybe establish a bit of a connection between the two views?" }, { "start": 553.8000000000001, "end": 556.32, "text": " Like how do they agree?" }, { "start": 556.32, "end": 562.1800000000001, "text": " Yeah, so I think maybe there's two different aspects." 
}, { "start": 562.1800000000001, "end": 564.9200000000001, "text": " One is the quality, the other is the diversity." }, { "start": 564.92, "end": 572.0799999999999, "text": " And I think what that paper tried to maybe claim is, I haven't read the detail, it's" }, { "start": 572.0799999999999, "end": 578.36, "text": " just my impression was that they tried to claim if you have this huge web data set that" }, { "start": 578.36, "end": 584.4, "text": " is more diverse maybe than your maybe human-curated data set, you can bring better advantage to" }, { "start": 584.4, "end": 585.4, "text": " the model." }, { "start": 585.4, "end": 589.56, "text": " I think that doesn't contradict with what we say here." }, { "start": 589.56, "end": 596.16, "text": " So actually in our experiment, we show that the diversity of captions do matter a lot." }, { "start": 596.16, "end": 600.9599999999999, "text": " When we try to generate synthetic captions, we try to generate a diverse set of captions" }, { "start": 600.9599999999999, "end": 608.88, "text": " that covers a whole bunch of different concepts rather than a very common and safe description" }, { "start": 608.88, "end": 612.68, "text": " of the image." }, { "start": 612.68, "end": 621.68, "text": " I think maybe these two approaches seem to me to not contradict but complementary to" }, { "start": 621.68, "end": 622.68, "text": " each other." }, { "start": 622.68, "end": 629.1999999999999, "text": " On one aspect, when you have more data, of course, you can always scale up the size of" }, { "start": 629.1999999999999, "end": 632.0799999999999, "text": " your data as you are always having more samples." }, { "start": 632.0799999999999, "end": 635.5999999999999, "text": " That gives you better capacity for the model." }, { "start": 635.5999999999999, "end": 639.76, "text": " But on the other side, we have more focus on the quality side." }, { "start": 639.76, "end": 643.84, "text": " If you really look at the number of images we are using here for the pre-training, compared" }, { "start": 643.84, "end": 646.72, "text": " with some of the other works, it's not a lot." }, { "start": 646.72, "end": 651.76, "text": " It's not too much, too large a scale." }, { "start": 651.76, "end": 659.76, "text": " But since the quality of our pre-training corpus is better, we are now with better performance." }, { "start": 659.76, "end": 665.72, "text": " So I really think the skill and the quality, they are complementary and they do not contradict," }, { "start": 665.72, "end": 667.68, "text": " I believe." }, { "start": 667.68, "end": 676.3199999999999, "text": " Let's stay on the captioning and filtering for just one more second." }, { "start": 676.3199999999999, "end": 688.7199999999999, "text": " You first pre-train the entire model on this uncurated dataset and then you use fine-tuning" }, { "start": 688.7199999999999, "end": 697.3199999999999, "text": " on a human-generated captioning dataset in order to get these filter and captioning models." }, { "start": 697.32, "end": 703.32, "text": " My worry there would be a little bit exactly what we talked about right now." }, { "start": 703.32, "end": 710.7600000000001, "text": " What my filter and captioning models learn is really dependent on, let's assume the quality" }, { "start": 710.7600000000001, "end": 716, "text": " of the human-generated dataset is good, but the diversity of it really matters." 
}, { "start": 716, "end": 722.08, "text": " Because it needs to cover all the images that come from the uncurated dataset." }, { "start": 722.08, "end": 732.44, "text": " Otherwise it is going to misjudge, misfilter or not being able to caption this dataset." }, { "start": 732.44, "end": 735, "text": " How do you control for that?" }, { "start": 735, "end": 742.64, "text": " Maybe you can also comment on if I now, let's say I want to expand my dataset to areas that" }, { "start": 742.64, "end": 752.4399999999999, "text": " I know that the human one doesn't cover, what could be a method of still going and researching" }, { "start": 752.4399999999999, "end": 754.84, "text": " on this new type of data?" }, { "start": 754.84, "end": 757.88, "text": " Yeah, I think that's a very good question." }, { "start": 757.88, "end": 765.88, "text": " I think it's a valid concern that this fine-tuning may be biased models to our certain domains." }, { "start": 765.88, "end": 771.88, "text": " I think one of the reasons we achieve performance improvement is because a lot of these downstream" }, { "start": 771.88, "end": 776, "text": " tasks are similar to the Coco domain image." }, { "start": 776, "end": 778.8, "text": " So I think that's a valid point." }, { "start": 778.8, "end": 784.28, "text": " But in the meantime, I would say that this fine-tuning doesn't destroy the model's capability" }, { "start": 784.28, "end": 787.08, "text": " to generate diverse captions." }, { "start": 787.08, "end": 791.32, "text": " Because the fine-tuning is really a very lightweight procedure." }, { "start": 791.32, "end": 797.2, "text": " So for Peretrainion, we're peretrain on this huge dataset for 220 epochs, which would take" }, { "start": 797.2, "end": 800.24, "text": " a few days, maybe even a week." }, { "start": 800.24, "end": 805.44, "text": " But this fine-tuning, we only fine-tune for five epochs on a very small scale of Coco" }, { "start": 805.44, "end": 808.32, "text": " dataset, which can finish within a few hours." }, { "start": 808.32, "end": 816.44, "text": " So this fine-tuning would not make the model forget about what it has previously saw." }, { "start": 816.44, "end": 821.08, "text": " It only slightly modified the model so that it can generate captions that are more like" }, { "start": 821.08, "end": 822.84, "text": " human-written ones." }, { "start": 822.84, "end": 827.76, "text": " But we do find that even after fine-tuning, the model can generate captions that are not" }, { "start": 827.76, "end": 830.4, "text": " within the vocabulary of Coco dataset." }, { "start": 830.4, "end": 837.12, "text": " So it's not like the fine-tuning completely destroyed the model's diversity capability." }, { "start": 837.12, "end": 841.2, "text": " So that's your answer to our first question." }, { "start": 841.2, "end": 847.4399999999999, "text": " And for the second question, if someone wants to try to expand the model to a different" }, { "start": 847.4399999999999, "end": 854.92, "text": " domain where there doesn't exist human annotations, I would say first, if you can collect some," }, { "start": 854.92, "end": 857.3199999999999, "text": " it would be good." }, { "start": 857.32, "end": 862.88, "text": " And if you cannot, maybe one solution is there might be some similar images from this huge" }, { "start": 862.88, "end": 866.1800000000001, "text": " web dataset that maybe you can retrieve." 
}, { "start": 866.1800000000001, "end": 872.44, "text": " So let's say if you can retrieve some similar images associated with web captions, then" }, { "start": 872.44, "end": 877.0400000000001, "text": " maybe you can slightly fine-tune the model on those subsets so that the model becomes" }, { "start": 877.0400000000001, "end": 884.44, "text": " slightly more biased towards your domain and more suitable to your downstream task." }, { "start": 884.44, "end": 895.84, "text": " You suggest with this arrow right here, almost you suggest like a loop, like suggesting that" }, { "start": 895.84, "end": 898.5600000000001, "text": " this could be done multiple times, right?" }, { "start": 898.5600000000001, "end": 903.5200000000001, "text": " I could go multiple times through this stage." }, { "start": 903.5200000000001, "end": 904.5200000000001, "text": " Is this anything?" }, { "start": 904.5200000000001, "end": 907.2800000000001, "text": " Okay, I've maybe not seen this in the experiment." }, { "start": 907.2800000000001, "end": 912.6, "text": " If this is anything you've tried, or would anything change in the loop number two or" }, { "start": 912.6, "end": 918.48, "text": " number three or number four, what would be the difference?" }, { "start": 918.48, "end": 920.52, "text": " There's no new data introduced." }, { "start": 920.52, "end": 928.76, "text": " Yeah, so first of all, I would say it's definitely possible to do multiple rounds of iterations" }, { "start": 928.76, "end": 930.12, "text": " of this bootstrapping." }, { "start": 930.12, "end": 935.0600000000001, "text": " And in our future work, we mentioned this as one of the future work." }, { "start": 935.0600000000001, "end": 941.1600000000001, "text": " And in terms of extra knowledge, each round of bootstrapping, we can add in new captions." }, { "start": 941.16, "end": 945.4, "text": " So if the model becomes better, it can generate better synthetic captions." }, { "start": 945.4, "end": 949.76, "text": " And there might be a diminishing return if we do multiple rounds." }, { "start": 949.76, "end": 954.8399999999999, "text": " I would say my intuition is the first round will probably help the most, and maybe the" }, { "start": 954.8399999999999, "end": 958.24, "text": " second or third will help less." }, { "start": 958.24, "end": 964.16, "text": " But unfortunately, due to the time and computation constraint, we didn't really have the resource" }, { "start": 964.16, "end": 969.04, "text": " to produce the experiment before the paper." }, { "start": 969.04, "end": 977, "text": " So that's definitely one of the future plans that we have." }, { "start": 977, "end": 979.76, "text": " So let's shift maybe." }, { "start": 979.76, "end": 981.24, "text": " Sorry." }, { "start": 981.24, "end": 982.88, "text": " Good." }, { "start": 982.88, "end": 989.88, "text": " Okay, this model here is quite big." }, { "start": 989.88, "end": 991.5999999999999, "text": " Was my first impression when I saw it." }, { "start": 991.5999999999999, "end": 992.5999999999999, "text": " There's a lot of stuff." }, { "start": 992.5999999999999, "end": 995.7199999999999, "text": " Okay, I have also drawn a lot of stuff on it." }, { "start": 995.72, "end": 999.48, "text": " Sorry, I can make this go away." }, { "start": 999.48, "end": 1006.32, "text": " So the model here is relatively big and relatively, you know, there's modules going around, there's" }, { "start": 1006.32, "end": 1008.96, "text": " parameter sharing going on." 
}, { "start": 1008.96, "end": 1012.88, "text": " What was the evolution of this model?" }, { "start": 1012.88, "end": 1015.88, "text": " Is this version one that we're looking at right here?" }, { "start": 1015.88, "end": 1021.76, "text": " Or is this like, you know, version 50 after you've tried a bunch of other things?" }, { "start": 1021.76, "end": 1024.3600000000001, "text": " Yeah, yeah." }, { "start": 1024.36, "end": 1025.8799999999999, "text": " Definitely not version one." }, { "start": 1025.8799999999999, "end": 1034.8799999999999, "text": " So actually, this model is heavily inspired by our previous LBF model, which is an encoder-only" }, { "start": 1034.8799999999999, "end": 1035.8799999999999, "text": " model." }, { "start": 1035.8799999999999, "end": 1040.04, "text": " So if you look at the model, there's not too much difference between LBF and BLEEP, except" }, { "start": 1040.04, "end": 1047.1999999999998, "text": " the fact that now we add the generation capability to BLEEP with the language modeling loss." }, { "start": 1047.1999999999998, "end": 1053.3999999999999, "text": " So the reason why we want to add this is first that because the encoder model doesn't really" }, { "start": 1053.4, "end": 1059.1200000000001, "text": " transfer that well to image captioning task and other generation tasks, so it's better" }, { "start": 1059.1200000000001, "end": 1062.48, "text": " that we can pre-train it to have this capability." }, { "start": 1062.48, "end": 1066.4, "text": " That's why we add in this new decoder module." }, { "start": 1066.4, "end": 1072.5600000000002, "text": " And then after we add in the decoder module, we thought, since we are doing multitask learning," }, { "start": 1072.5600000000002, "end": 1075, "text": " can we share some parameters?" }, { "start": 1075, "end": 1079.2800000000002, "text": " Because first of all, it's more efficient to share parameters." }, { "start": 1079.28, "end": 1086.56, "text": " And secondly, it may bring some advantage from the multitask training by jointly optimizing" }, { "start": 1086.56, "end": 1088.92, "text": " those few losses." }, { "start": 1088.92, "end": 1091.04, "text": " So we tried different sharing strategies." }, { "start": 1091.04, "end": 1095.32, "text": " First, we started with not sharing any parameters at all." }, { "start": 1095.32, "end": 1098.68, "text": " And then we tried to share maybe the..." }, { "start": 1098.68, "end": 1103.8799999999999, "text": " So we tried to decouple maybe some...the cross-attention layer or the self-attention layer or the feed-forward" }, { "start": 1103.8799999999999, "end": 1104.8799999999999, "text": " layer." }, { "start": 1104.88, "end": 1110.0400000000002, "text": " And we find that decoupling the self-attention layer from the encoder and decoder is a more" }, { "start": 1110.0400000000002, "end": 1112.4, "text": " efficient and effective way." }, { "start": 1112.4, "end": 1116.0400000000002, "text": " So that's why we choose this strategy." }, { "start": 1116.0400000000002, "end": 1123.3200000000002, "text": " But there is a possibility that because we are doing this experiment on a relatively" }, { "start": 1123.3200000000002, "end": 1129.48, "text": " smaller scale pre-training, so we were using the 40 million images for pre-training, but" }, { "start": 1129.48, "end": 1133, "text": " our final model was pre-trained on 100 million images." }, { "start": 1133, "end": 1138.96, "text": " So maybe this sharing strategy is not optimal for if you scale up the dataset." 
}, { "start": 1138.96, "end": 1144.44, "text": " So I would imagine if you want to have the best possible performance, you may want to" }, { "start": 1144.44, "end": 1148.44, "text": " scale up the dataset and try to decouple the parameters more." }, { "start": 1148.44, "end": 1153.88, "text": " But that would, of course, sacrifice some of the efficiencies brought by the parameter" }, { "start": 1153.88, "end": 1154.88, "text": " sharing." }, { "start": 1154.88, "end": 1155.88, "text": " Yeah." }, { "start": 1155.88, "end": 1167.96, "text": " Another point I probably want to add here is like this architecture is not like ad hoc" }, { "start": 1167.96, "end": 1177.64, "text": " design because remember that one of our starting point is to eliminate the noise levels in" }, { "start": 1177.64, "end": 1179.5600000000002, "text": " this pre-training datasets." }, { "start": 1179.56, "end": 1188.08, "text": " So from there, on one side we need to identify what are the noisy ones, whether the image" }, { "start": 1188.08, "end": 1190.48, "text": " and the caption match with each other." }, { "start": 1190.48, "end": 1194.9199999999998, "text": " And that ends up with this design of encoder model." }, { "start": 1194.9199999999998, "end": 1201.04, "text": " On the other side, we want even more that when we find that the caption does not align" }, { "start": 1201.04, "end": 1207.28, "text": " well with the image itself, we don't want to simply discard the training data point." }, { "start": 1207.28, "end": 1212.72, "text": " We want to generate some useful captions, surprising captions that can further help" }, { "start": 1212.72, "end": 1213.72, "text": " us." }, { "start": 1213.72, "end": 1219.92, "text": " So from that, I really want to say that it's not like we want to put everything together," }, { "start": 1219.92, "end": 1223.92, "text": " glue different models into a single model to make it big." }, { "start": 1223.92, "end": 1231.16, "text": " It really serves very well for this caption filter algorithm." }, { "start": 1231.16, "end": 1233.76, "text": " And I think that kind of, yeah." }, { "start": 1233.76, "end": 1235.52, "text": " Yeah." }, { "start": 1235.52, "end": 1240.72, "text": " Just one additional comment is that our model is really actually not big if you compare" }, { "start": 1240.72, "end": 1242.04, "text": " to some other models." }, { "start": 1242.04, "end": 1248.76, "text": " So basically our model is a VIT plus a bird." }, { "start": 1248.76, "end": 1251.24, "text": " So it's a base version of the bird." }, { "start": 1251.24, "end": 1256.48, "text": " So in terms of the number of parameters, I would say it's a standard parameter deep learning" }, { "start": 1256.48, "end": 1257.48, "text": " model." }, { "start": 1257.48, "end": 1260.08, "text": " It's not that crazy huge." }, { "start": 1260.08, "end": 1264.84, "text": " So even we draw it in the current figure, actually there is because of this parameter" }, { "start": 1264.84, "end": 1271.32, "text": " sharing going on, the number of parameters and the training computation load is not that" }, { "start": 1271.32, "end": 1272.9599999999998, "text": " heavy." }, { "start": 1272.9599999999998, "end": 1274.9599999999998, "text": " Yeah." }, { "start": 1274.9599999999998, "end": 1282.32, "text": " I like the fact that this really arises from sort of the goal of cleaning the data set." 
}, { "start": 1282.32, "end": 1286.6399999999999, "text": " I also thought the more I read it and the more I talked about it, it became more evident" }, { "start": 1286.6399999999999, "end": 1289.4399999999998, "text": " that the things really played together nicely." }, { "start": 1289.44, "end": 1299.92, "text": " So you use the contrastive loss to get the hard negatives for the, I want to say, matching" }, { "start": 1299.92, "end": 1301.8, "text": " loss or ranker loss." }, { "start": 1301.8, "end": 1304.18, "text": " And then that gives you the filter." }, { "start": 1304.18, "end": 1308.3400000000001, "text": " And then the language model here gives you the captioning." }, { "start": 1308.3400000000001, "end": 1317.04, "text": " With respect to parameter sharing, you said, okay, the matching head or the contrastive" }, { "start": 1317.04, "end": 1320.1599999999999, "text": " heads, they're not really good at captioning themselves." }, { "start": 1320.1599999999999, "end": 1324.78, "text": " So we'd rather pre-train or train a captioning or a language generation model." }, { "start": 1324.78, "end": 1333.62, "text": " Do you find that adding the task of language generation also helps the tasks that the other" }, { "start": 1333.62, "end": 1335.52, "text": " models would be good at?" }, { "start": 1335.52, "end": 1340.3999999999999, "text": " Like, do you find an additional benefit, except for our model can also do captioning, do you" }, { "start": 1340.3999999999999, "end": 1346.96, "text": " find an additional benefit for the already existing or the already tackled tasks by adding," }, { "start": 1346.96, "end": 1348.68, "text": " let's say, the language model?" }, { "start": 1348.68, "end": 1350.08, "text": " Yes, yes." }, { "start": 1350.08, "end": 1356.44, "text": " We find that there is an advantage brought by this language model loss." }, { "start": 1356.44, "end": 1361.28, "text": " So this language model loss, if you think about it, is really quite similar to the mass" }, { "start": 1361.28, "end": 1365.02, "text": " language model loss, except that now it's an autoregressive version." }, { "start": 1365.02, "end": 1370.4, "text": " So in our previous IOBF work and in some other papers, what people usually do is mass language" }, { "start": 1370.4, "end": 1377.72, "text": " learning to try to improve the model's capability to understand the text in a more fine-grained" }, { "start": 1377.72, "end": 1383.0400000000002, "text": " granularity, because the image text matching and image text contrastive learning is more" }, { "start": 1383.0400000000002, "end": 1385.8000000000002, "text": " like a global matching." }, { "start": 1385.8000000000002, "end": 1388.68, "text": " You are trying to match the image and text." }, { "start": 1388.68, "end": 1390.3600000000001, "text": " But the language model is more fine-grained." }, { "start": 1390.3600000000001, "end": 1393.52, "text": " You want to generate the word based on the image." }, { "start": 1393.52, "end": 1399.6000000000001, "text": " And by achieving so, you need to better understand maybe some details of the image and align" }, { "start": 1399.6, "end": 1406.36, "text": " it with the textual concept to be able to generate the word." }, { "start": 1406.36, "end": 1413.76, "text": " Do you have, let's say, more extensive goals in mind here?" }, { "start": 1413.76, "end": 1415.52, "text": " You just said it's actually not that big." }, { "start": 1415.52, "end": 1418.28, "text": " If it's really nice, I agree with all of that." 
}, { "start": 1418.28, "end": 1425.32, "text": " Yet, I foresee a future where you could bring together lots of these modules." }, { "start": 1425.32, "end": 1431.96, "text": " Essentially, what I'd like to have is, first of all, we could obviously think of doing" }, { "start": 1431.96, "end": 1433.8799999999999, "text": " the same with the image side right here." }, { "start": 1433.8799999999999, "end": 1436.4399999999998, "text": " You just have an encoder here right now." }, { "start": 1436.4399999999998, "end": 1444.1599999999999, "text": " But we could think of breaking out here, doing image generation, doing whatever we can do" }, { "start": 1444.1599999999999, "end": 1445.98, "text": " with images." }, { "start": 1445.98, "end": 1452.8799999999999, "text": " But on the other hand, maybe an even bigger future vision would be I bring a data set" }, { "start": 1452.88, "end": 1456.64, "text": " and I say, look, these are pairs of images and text." }, { "start": 1456.64, "end": 1465.1000000000001, "text": " Now please, system, make me a model that includes all of these losses that I can think of, like" }, { "start": 1465.1000000000001, "end": 1467.0800000000002, "text": " all of these different combinations." }, { "start": 1467.0800000000002, "end": 1472.2800000000002, "text": " And the system would figure out, oh, okay, I can share parameters here and I can build" }, { "start": 1472.2800000000002, "end": 1473.8400000000001, "text": " that and so on." }, { "start": 1473.8400000000001, "end": 1481.16, "text": " And maybe that would, given your findings, which I totally believe that adding more of" }, { "start": 1481.16, "end": 1487.64, "text": " these tasks and sharing the parameters actually mutually benefits each other, the representations," }, { "start": 1487.64, "end": 1493.7, "text": " they become more capable, they become maybe more broadly meaningful and so on." }, { "start": 1493.7, "end": 1500.0400000000002, "text": " So I think that might be a cool future to work against." }, { "start": 1500.0400000000002, "end": 1502.0400000000002, "text": " I don't know how feasible it is though." }, { "start": 1502.0400000000002, "end": 1508.5600000000002, "text": " Is that anything on your roadmap or what does the future look like of these models?" }, { "start": 1508.56, "end": 1513.32, "text": " Yeah, I think that's a very cool idea." }, { "start": 1513.32, "end": 1516.48, "text": " Maybe a very ambitious goal." }, { "start": 1516.48, "end": 1523.3999999999999, "text": " So we have considered to add in some image generation capability, but we didn't because" }, { "start": 1523.3999999999999, "end": 1527.12, "text": " it doesn't fit very well with our current framework." }, { "start": 1527.12, "end": 1530.8, "text": " So we don't want to make the framework to be very huge and messy." }, { "start": 1530.8, "end": 1535.28, "text": " We try to keep it more cleaner." }, { "start": 1535.28, "end": 1540.6399999999999, "text": " And regarding your point that can we have automatic system that can maybe combine different" }, { "start": 1540.6399999999999, "end": 1543.76, "text": " modules and losses?" }, { "start": 1543.76, "end": 1546.76, "text": " I think that's a possible goal." }, { "start": 1546.76, "end": 1552.16, "text": " It's just there could be a lot of obstacles in how to achieve that." 
}, { "start": 1552.16, "end": 1558.08, "text": " For example, if we borrow some idea from the NAS community and maybe we borrow some reinforcement" }, { "start": 1558.08, "end": 1564.42, "text": " learning idea, maybe there are some ways we can train a policy to do that." }, { "start": 1564.42, "end": 1569.24, "text": " But it's not entirely clear to me how can we achieve that because I think the main problem" }, { "start": 1569.24, "end": 1576.52, "text": " is this per training is how to evaluate a per training is a big problem." }, { "start": 1576.52, "end": 1582.3200000000002, "text": " So you cannot just say that lower per training loss means that your model is better downstream" }, { "start": 1582.3200000000002, "end": 1584.04, "text": " task." }, { "start": 1584.04, "end": 1591.72, "text": " If there is a correlation between per training loss and downstream task, then it may be easier." }, { "start": 1591.72, "end": 1595.48, "text": " You just find the optimal module that you can minimize your per training loss." }, { "start": 1595.48, "end": 1597.16, "text": " But usually it's not the case." }, { "start": 1597.16, "end": 1602.16, "text": " It also depends on how well aligned is your per training task and downstream task." }, { "start": 1602.16, "end": 1608.84, "text": " I think that's one of the major issues of why it may take some trial and error to find" }, { "start": 1608.84, "end": 1613.8, "text": " the best strategy for the per training." }, { "start": 1613.8, "end": 1618.04, "text": " Maybe I can add a few sentence to that." }, { "start": 1618.04, "end": 1626.1599999999999, "text": " I think being able to figure out how to combine these different modules together automatically" }, { "start": 1626.1599999999999, "end": 1630.44, "text": " would be super cool and futuristic." }, { "start": 1630.44, "end": 1637.72, "text": " I think there are a couple of practical messages that we want to convey here, which is the" }, { "start": 1637.72, "end": 1647.32, "text": " first I think if you really look at how this we fine tune this MED model to make them a" }, { "start": 1647.32, "end": 1654, "text": " captioner, a filter, and also how we combine these different modules together in order" }, { "start": 1654, "end": 1656.96, "text": " to tackle the downstream tasks." }, { "start": 1656.96, "end": 1660.9199999999998, "text": " There are really some dedicated ways to do that." }, { "start": 1660.9199999999998, "end": 1668.56, "text": " And usually if you look at some per training works on the market, their strategies will" }, { "start": 1668.56, "end": 1675.3999999999999, "text": " be pretty simplistic in the sense that in most of occasions they just add the task specific" }, { "start": 1675.3999999999999, "end": 1676.3999999999999, "text": " heads." }, { "start": 1676.4, "end": 1682.3200000000002, "text": " But in this particular work, we just move one step further than that." }, { "start": 1682.3200000000002, "end": 1688.48, "text": " We are rethinking how to rearrange these modules and what are the best strategies for this" }, { "start": 1688.48, "end": 1692.96, "text": " parameter sharing strategy." }, { "start": 1692.96, "end": 1701.48, "text": " Another message we may want to say here is a lot of people, they blindly do this multitasking" }, { "start": 1701.48, "end": 1706.96, "text": " by aggregating hundreds of different data sets and tasking to one per training model." 
}, { "start": 1706.96, "end": 1719.32, "text": " And maybe by bleep we want people to revisit this decision next time they do this multitasking" }, { "start": 1719.32, "end": 1723.6, "text": " because not necessarily every task they complement with each other." }, { "start": 1723.6, "end": 1727.8, "text": " And you may want to carefully look into what to share, what not to share." }, { "start": 1727.8, "end": 1738, "text": " I think these are the two things we want to remind for future works." }, { "start": 1738, "end": 1743, "text": " And I have one additional comment to follow what Dongxu said is that you can see a lot" }, { "start": 1743, "end": 1749.8799999999999, "text": " of other works, they really combine really like maybe eight or ten objectives together." }, { "start": 1749.8799999999999, "end": 1755.6, "text": " So there are some strategies for visual language training is you bring in object detection" }, { "start": 1755.6, "end": 1759.6399999999999, "text": " objective to improve your localization capability." }, { "start": 1759.6399999999999, "end": 1764.7199999999998, "text": " So we think that's a way to that's a valid way to improve performance." }, { "start": 1764.7199999999998, "end": 1769.4399999999998, "text": " But here what we try to say is that we want to keep things very nice and simple." }, { "start": 1769.4399999999998, "end": 1775.56, "text": " So we have these three laws where each law serves a very clear purpose and can be transferred" }, { "start": 1775.56, "end": 1778.1999999999998, "text": " to a very specific Dongxuan task." }, { "start": 1778.1999999999998, "end": 1780.48, "text": " And all we need is just image text pairs." }, { "start": 1780.48, "end": 1784.36, "text": " We don't need any bounding box or anything else." }, { "start": 1784.36, "end": 1788.3999999999999, "text": " So I think that's one of the message we want to also convey." }, { "start": 1788.3999999999999, "end": 1789.8, "text": " Cool." }, { "start": 1789.8, "end": 1795.6399999999999, "text": " And yeah, and I especially I like the fact that with pre-training with the aspect of" }, { "start": 1795.6399999999999, "end": 1802.8, "text": " fine tuning, then you're able to recombine these different modules in very creative ways." }, { "start": 1802.8, "end": 1807.4399999999998, "text": " So even though you have these modules, they have their purposes for the pre-training," }, { "start": 1807.4399999999998, "end": 1809.32, "text": " for the captioning, for the filtering." }, { "start": 1809.32, "end": 1817.58, "text": " But then they can be it seems it seems many, many tasks can now be tackled by some sort" }, { "start": 1817.58, "end": 1821.52, "text": " of combination of these models and a little bit of fine tuning, which is something that" }, { "start": 1821.52, "end": 1824.72, "text": " I find really cool." }, { "start": 1824.72, "end": 1831.6599999999999, "text": " You have done extensive and like there are there are lots of lots of tables means means" }, { "start": 1831.66, "end": 1839.8200000000002, "text": " you had to run like and collect lots of numbers, which is is very nice because it gives a bit" }, { "start": 1839.8200000000002, "end": 1845.28, "text": " also of a broad overview than just having, you know, four numbers or so comparing with" }, { "start": 1845.28, "end": 1847.6200000000001, "text": " one baseline." 
}, { "start": 1847.6200000000001, "end": 1854.8000000000002, "text": " Although could you maybe highlight some of the of the standing out results that you got" }, { "start": 1854.8000000000002, "end": 1857.7, "text": " or one of some of the more important results?" }, { "start": 1857.7, "end": 1862.52, "text": " Like how would you summarize or what would you highlight about your experimental evaluation" }, { "start": 1862.52, "end": 1863.52, "text": " of this?" }, { "start": 1863.52, "end": 1864.52, "text": " Yeah, sure." }, { "start": 1864.52, "end": 1872.72, "text": " I think the most important one would be table one, where we demonstrate the performance" }, { "start": 1872.72, "end": 1878.1200000000001, "text": " gain achieved by how do we bootstrap our data set." }, { "start": 1878.1200000000001, "end": 1884.56, "text": " And yeah, so this is table basically, if you look at the first column, it shows how many" }, { "start": 1884.56, "end": 1886.2, "text": " images you are using." }, { "start": 1886.2, "end": 1892.64, "text": " So we have two settings, one is a 40 million images, another we scale up with small noisy" }, { "start": 1892.64, "end": 1894.8400000000001, "text": " image taxpayers." }, { "start": 1894.8400000000001, "end": 1899.28, "text": " And the second column is how do we perform the bootstrapping?" }, { "start": 1899.28, "end": 1903, "text": " C stands for captioning and F stands for filtering." }, { "start": 1903, "end": 1907.96, "text": " It means whether we do captioning to generate synthetic captions, or we do filtering to" }, { "start": 1907.96, "end": 1911.72, "text": " remove the noisy captions, or we do both together." }, { "start": 1911.72, "end": 1917.08, "text": " So if you look at the first row, second row, third and fourth row, you can see that both" }, { "start": 1917.08, "end": 1921.52, "text": " the captioning and the filtering can help individually." }, { "start": 1921.52, "end": 1926.1200000000001, "text": " And if you combine them together, they really have complemented each other." }, { "start": 1926.1200000000001, "end": 1932.32, "text": " So by generating synthetic captions, and at the same time, try to remove the noise, we" }, { "start": 1932.32, "end": 1939.24, "text": " can achieve, I would say a quite good amount of gain in these two different, four different" }, { "start": 1939.24, "end": 1944.6, "text": " data sets covering both the retrieval task and the captioning task." }, { "start": 1944.6, "end": 1951.44, "text": " So I think that's one of the key results we have here." }, { "start": 1951.44, "end": 1959.28, "text": " And also maybe then it goes to the second table is how do we do the bootstrapping of" }, { "start": 1959.28, "end": 1960.52, "text": " the captions?" }, { "start": 1960.52, "end": 1962.36, "text": " So do we use beam search?" }, { "start": 1962.36, "end": 1964.72, "text": " Or do we use nuclear sampling?" }, { "start": 1964.72, "end": 1970.1200000000001, "text": " So the difference between those two approaches is that beam search is a deterministic sampling," }, { "start": 1970.1200000000001, "end": 1977.48, "text": " not sampling, deterministic decoding strategy, where you try to find the most likely sentence" }, { "start": 1977.48, "end": 1979.84, "text": " associated with the image." }, { "start": 1979.84, "end": 1985.96, "text": " And nuclear sampling is a stochastic approach where you try to sample according to some" }, { "start": 1985.96, "end": 1989.32, "text": " probability distribution." 
}, { "start": 1989.32, "end": 1996, "text": " We find that surprisingly, if you compare beam search with no generation, there is a" }, { "start": 1996, "end": 1999.2, "text": " good gain achieved by beam search." }, { "start": 1999.2, "end": 2004.6399999999999, "text": " But by moving beam search to nuclear sampling, there is a similar amount of gain." }, { "start": 2004.6399999999999, "end": 2009.96, "text": " So this is something that we didn't expect at the first time we see the results." }, { "start": 2009.96, "end": 2017.56, "text": " And after we really deep dive into what the captions look like, how does beam search and" }, { "start": 2017.56, "end": 2023.1599999999999, "text": " nuclear sampling generate different captions, we found out that the beam search will generate" }, { "start": 2023.1599999999999, "end": 2030.36, "text": " a kind of a safe caption that accurately describes the image most of the time, but it's not surprising." }, { "start": 2030.36, "end": 2036.08, "text": " So you can commonly see those captions in the data set." }, { "start": 2036.08, "end": 2040.8799999999999, "text": " And that doesn't add a lot of extra knowledge for the model to learn." }, { "start": 2040.8799999999999, "end": 2047.32, "text": " But the nuclear sampling really introduces some really diverse captions that are more" }, { "start": 2047.32, "end": 2049.3199999999997, "text": " like human written ones." }, { "start": 2049.3199999999997, "end": 2055.44, "text": " Humans don't write a very boring distribution like a man is with a dog in a park." }, { "start": 2055.44, "end": 2058.6, "text": " So it's a very boring caption." }, { "start": 2058.6, "end": 2062.36, "text": " But nuclear sampling can give you more diverse captions." }, { "start": 2062.36, "end": 2068.52, "text": " And if you look at a noise ratio, which is actually how much of those captions were filtered" }, { "start": 2068.52, "end": 2074.48, "text": " out by our filter, you can also see that beam search is less noisy." }, { "start": 2074.48, "end": 2080.32, "text": " But even though it's less noisy, it's not as beneficial as nuclear sampling here." }, { "start": 2080.32, "end": 2085.4, "text": " And this really raises another question, which I think is a very interesting future work," }, { "start": 2085.4, "end": 2087.92, "text": " is that is nuclear sampling the best way?" }, { "start": 2087.92, "end": 2093.48, "text": " So because those models are pertrained with the language modeling laws, which is kind" }, { "start": 2093.48, "end": 2099.6, "text": " of deterministic laws, you try to maximize the likelihood of your captions." }, { "start": 2099.6, "end": 2105.24, "text": " And we are just doing that, and we try to do something in the decoding side to try to" }, { "start": 2105.24, "end": 2107.52, "text": " give more diverse captions." }, { "start": 2107.52, "end": 2112.7999999999997, "text": " But this nuclear sampling was used in mostly NLP papers." }, { "start": 2112.7999999999997, "end": 2120.52, "text": " So does there exist some better diverse captioning strategy for image captioning tasks?" }, { "start": 2120.52, "end": 2124.56, "text": " So I think that's a very interesting question." 
}, { "start": 2124.56, "end": 2131.24, "text": " I think in recent times, this has been shining through in a lot of works that the fact that" }, { "start": 2131.24, "end": 2138.32, "text": " maybe we don't need to go maximum likelihood in our inference step, but maybe it's a better" }, { "start": 2138.32, "end": 2142, "text": " approach to go diverse with the sampling." }, { "start": 2142, "end": 2148.12, "text": " And then exactly what you do have some sort of a classifier or some sort of a filter to" }, { "start": 2148.12, "end": 2149.94, "text": " just scrap out the noise." }, { "start": 2149.94, "end": 2151.92, "text": " I think that's a really, really good approach." }, { "start": 2151.92, "end": 2154.08, "text": " And we saw this anywhere." }, { "start": 2154.08, "end": 2160.88, "text": " I think Dolly famously had Clip re-ranking all the outputs." }, { "start": 2160.88, "end": 2163.56, "text": " And I think more and more models go towards this." }, { "start": 2163.56, "end": 2171.86, "text": " It's really cool finding that you're essentially finding exactly the same thing." }, { "start": 2171.86, "end": 2179.62, "text": " When I look at these numbers, all of the numbers, it's very convincing to see that everything" }, { "start": 2179.62, "end": 2185.3199999999997, "text": " uniformly almost uniformly gets better." }, { "start": 2185.3199999999997, "end": 2189.16, "text": " You support whatever you say really well." }, { "start": 2189.16, "end": 2194.92, "text": " This trend right here, it really works across all of the data sets." }, { "start": 2194.92, "end": 2199.7599999999998, "text": " You uniformly almost get better in all the tables." }, { "start": 2199.7599999999998, "end": 2206.16, "text": " However, the difference is always, the maximum difference is whatever." }, { "start": 2206.16, "end": 2211.7599999999998, "text": " This from here to here is like two points in what is this?" }, { "start": 2211.7599999999998, "end": 2212.7599999999998, "text": " What's TR?" }, { "start": 2212.7599999999998, "end": 2213.7599999999998, "text": " It's the true..." }, { "start": 2213.7599999999998, "end": 2216.7599999999998, "text": " It's a recall, text recall." }, { "start": 2216.7599999999998, "end": 2218.7599999999998, "text": " Text recall, sorry." }, { "start": 2218.7599999999998, "end": 2221.48, "text": " Oh yeah, it's down here." }, { "start": 2221.48, "end": 2224.52, "text": " Text recall, image recall." }, { "start": 2224.52, "end": 2225.52, "text": " That's like 2%." }, { "start": 2225.52, "end": 2229.7999999999997, "text": " Right here, again, it's like one point something percent." }, { "start": 2229.7999999999997, "end": 2232.56, "text": " So there's a uniformly getting better." }, { "start": 2232.56, "end": 2239.64, "text": " My question is, given that the getting better is convincing, but the scale of it is like" }, { "start": 2239.64, "end": 2248.88, "text": " yeah, 2% or so, when is it worth to do this week long pre-training you mentioned?" }, { "start": 2248.88, "end": 2250.32, "text": " This is a big procedure." }, { "start": 2250.32, "end": 2251.32, "text": " The pre-training is big." }, { "start": 2251.32, "end": 2257, "text": " And then to fine tune the pre-training again, when is it worth it?" }, { "start": 2257, "end": 2262.34, "text": " From what scale or for what applications does it become actually worth to do something" }, { "start": 2262.34, "end": 2263.34, "text": " like this?" 
}, { "start": 2263.34, "end": 2267.2400000000002, "text": " Yeah, I think that's a very good question." }, { "start": 2267.2400000000002, "end": 2273.92, "text": " And first of all, I would say it is worth doing if your data is really..." }, { "start": 2273.92, "end": 2280.76, "text": " If you observe a large amount of noise in the data and maybe your data is incomplete" }, { "start": 2280.76, "end": 2282.4, "text": " in some of the domains." }, { "start": 2282.4, "end": 2289.32, "text": " For example, here, the web data is primarily dominated by those alt text, which can be" }, { "start": 2289.32, "end": 2293.32, "text": " different from what human would write to describe an image." }, { "start": 2293.32, "end": 2300.28, "text": " So if there is a noisy scenario or a domain gap, I think it's worth to do so." }, { "start": 2300.28, "end": 2306.92, "text": " And secondly, actually, we have also released our dataset after bootstrapping so that if" }, { "start": 2306.92, "end": 2313.2000000000003, "text": " you are just trying to do regionally pre-training in a similar domain, I think you can just" }, { "start": 2313.2, "end": 2320.72, "text": " download our version and use that as a starting point to avoid the first round of pre-training." }, { "start": 2320.72, "end": 2328.24, "text": " And maybe certainly about your previous comment that we have really unanimous improvement" }, { "start": 2328.24, "end": 2330.6, "text": " for those tasks." }, { "start": 2330.6, "end": 2338.24, "text": " Actually in one of the tasks, maybe you can scroll down the paper." }, { "start": 2338.24, "end": 2339.24, "text": " Let me try to find..." }, { "start": 2339.24, "end": 2348.3599999999997, "text": " I think it's the NLVR task." }, { "start": 2348.3599999999997, "end": 2350.3599999999997, "text": " Table eight, maybe?" }, { "start": 2350.3599999999997, "end": 2352.3599999999997, "text": " Yeah, yeah, table eight." }, { "start": 2352.3599999999997, "end": 2361.2, "text": " Yeah, actually for this task, this is where we find the better quality of captions doesn't" }, { "start": 2361.2, "end": 2367.52, "text": " necessarily give you a better game if you compare here." }, { "start": 2367.52, "end": 2374.7599999999998, "text": " And actually by scaling up the number of pre-training image, it doesn't correlate very straightforwardly" }, { "start": 2374.7599999999998, "end": 2377.72, "text": " to a downstream performance game." }, { "start": 2377.72, "end": 2383.6, "text": " So I think it still depends on your alignment between your pre-training and your downstream" }, { "start": 2383.6, "end": 2384.64, "text": " objective." }, { "start": 2384.64, "end": 2387.16, "text": " So for most of the tasks, it is well aligned." }, { "start": 2387.16, "end": 2392.84, "text": " And that's why improving your pre-training data quality can improve your downstream task." }, { "start": 2392.84, "end": 2400.6000000000004, "text": " Yeah, maybe I can add a few sentences in terms of whether it is worthwhile to improve that" }, { "start": 2400.6000000000004, "end": 2401.6000000000004, "text": " much." }, { "start": 2401.6000000000004, "end": 2409.2000000000003, "text": " I think if you really imagine the big picture here in terms of the multimodal retrieval," }, { "start": 2409.2000000000003, "end": 2416.6800000000003, "text": " let's say if you deploy this retrieval algorithm, and that manages to improve their profit by" }, { "start": 2416.6800000000003, "end": 2419.8, "text": " 1%, that's a huge achievement." 
}, { "start": 2419.8, "end": 2421.2400000000002, "text": " You won a lot." }, { "start": 2421.24, "end": 2428.4399999999996, "text": " So at Salesforce, we also have the retrieval." }, { "start": 2428.4399999999996, "end": 2434.2799999999997, "text": " We also work with clients for their retrieval services." }, { "start": 2434.2799999999997, "end": 2440.2, "text": " So in terms of that, if you just let your GPU run for one week and improve by 1%, that's" }, { "start": 2440.2, "end": 2443.24, "text": " a huge improvement, I would say." }, { "start": 2443.24, "end": 2453.2, "text": " And I would also like to say that these numbers, they kind of, I think, under hype what BLEAP" }, { "start": 2453.2, "end": 2454.2, "text": " has achieved." }, { "start": 2454.2, "end": 2465.7999999999997, "text": " Because I think BLEAP, beyond this relative advantage over its competitors, is also qualitatively" }, { "start": 2465.7999999999997, "end": 2472.68, "text": " better in terms of how easy it is to use BLEAP." }, { "start": 2472.68, "end": 2482.2, "text": " If you really look at the demo we created there on the web, and it just freely asks" }, { "start": 2482.2, "end": 2487.3199999999997, "text": " any questions in natural language rather easily." }, { "start": 2487.3199999999997, "end": 2495.2, "text": " In contrast, a lot of these image question answering models, they are not doing the free" }, { "start": 2495.2, "end": 2496.2, "text": " form generation." }, { "start": 2496.2, "end": 2503.48, "text": " They are kind of doing classification in order to tackle this question answering task." }, { "start": 2503.48, "end": 2510.7799999999997, "text": " This point is, however, not fully demonstrated, I believe, in the current manuscript." }, { "start": 2510.7799999999997, "end": 2518.3999999999996, "text": " So if you really want to get impressed, we really suggest you check out our demo and" }, { "start": 2518.3999999999996, "end": 2521.7999999999997, "text": " put whatever photos you like and questions." }, { "start": 2521.7999999999997, "end": 2523.24, "text": " Cool." }, { "start": 2523.24, "end": 2529.3999999999996, "text": " It's really neat, by the way, that you have a demo to go along with it, because I think" }, { "start": 2529.3999999999996, "end": 2534.8599999999997, "text": " it makes it more accessible and it demonstrates also the capabilities of this." }, { "start": 2534.8599999999997, "end": 2544.7599999999998, "text": " It's almost like we're moving into the world that GPT-3 maybe has created for text with" }, { "start": 2544.7599999999998, "end": 2550.64, "text": " these image language models, because we got the same feeling from GPT-3." }, { "start": 2550.64, "end": 2555.72, "text": " Oh no, I can just go and I can put any text, right, and I can interact with the system" }, { "start": 2555.72, "end": 2558.24, "text": " in a sort of free form way." }, { "start": 2558.24, "end": 2565.74, "text": " And it's really cool to see that we're also moving in this direction with the image models." }, { "start": 2565.74, "end": 2571.22, "text": " In terms of just the process of how this research went about, you ended up with a cool system" }, { "start": 2571.22, "end": 2575.68, "text": " with a nice way of bootstrapping data and so on." }, { "start": 2575.68, "end": 2581.52, "text": " Can you maybe tell us a little bit about stuff that didn't necessarily work out during the" }, { "start": 2581.52, "end": 2582.52, "text": " research?" 
}, { "start": 2582.52, "end": 2589.64, "text": " Was there any point where you were maybe disheartened a little bit, things that didn't work out?" }, { "start": 2589.64, "end": 2595.8799999999997, "text": " What were your low and your high points during the creation of this paper?" }, { "start": 2595.8799999999997, "end": 2604.7999999999997, "text": " Yeah, actually, one of the experiments we had was when we first tried to scale up the" }, { "start": 2604.8, "end": 2611.6800000000003, "text": " potential with small web images using this line data set that we have downloaded, which" }, { "start": 2611.6800000000003, "end": 2614.52, "text": " takes quite some time." }, { "start": 2614.52, "end": 2617.88, "text": " It doesn't help that much." }, { "start": 2617.88, "end": 2624.88, "text": " So then it feels really feel like why scaling up the data is not benefiting the model." }, { "start": 2624.88, "end": 2632, "text": " So then I did some more analysis and after that I realized that a lot of those images" }, { "start": 2632, "end": 2635.84, "text": " are very, very small in the resolution." }, { "start": 2635.84, "end": 2640.12, "text": " Some are just icons or some brand names." }, { "start": 2640.12, "end": 2645.68, "text": " And if I remove those, then it begins to show the gains." }, { "start": 2645.68, "end": 2651.36, "text": " But I think that's one of the kind of the blockers we faced." }, { "start": 2651.36, "end": 2658.88, "text": " And I think after we first get the bootstrapping, especially the nuclear sampling to give a" }, { "start": 2658.88, "end": 2664.92, "text": " big performance gain, then at that point, we are quite confident that this should be" }, { "start": 2664.92, "end": 2667.36, "text": " a good solution." }, { "start": 2667.36, "end": 2673.52, "text": " And I think that point is when I realized, okay, this method should work well and we" }, { "start": 2673.52, "end": 2677.32, "text": " can write a paper about it." }, { "start": 2677.32, "end": 2679.32, "text": " Great." }, { "start": 2679.32, "end": 2684.08, "text": " Dongxin, do you want to say something?" }, { "start": 2684.08, "end": 2690.72, "text": " Yeah, I believe some of these strategies, they also arise from the internal discussions" }, { "start": 2690.72, "end": 2693.16, "text": " with other group members as well." }, { "start": 2693.16, "end": 2701.12, "text": " So it's really a lot of crowd intelligence behind the scenes." }, { "start": 2701.12, "end": 2705.56, "text": " How is the research organized at Salesforce?" }, { "start": 2705.56, "end": 2711.56, "text": " I have a bit of insight into, let's say, the big tech giants like Google and Facebook and" }, { "start": 2711.56, "end": 2716, "text": " so on, and they have their research divisions." }, { "start": 2716, "end": 2723.12, "text": " At a company like Salesforce, who is more customer, I want to say customer, all these" }, { "start": 2723.12, "end": 2726.12, "text": " companies are customer oriented, obviously." }, { "start": 2726.12, "end": 2731, "text": " But how is research organized there?" }, { "start": 2731, "end": 2734.24, "text": " What do you do while the model is pre-training for a week?" }, { "start": 2734.24, "end": 2740.92, "text": " Do you have other stuff to do or are you mainly researchers or what's life like there?" }, { "start": 2740.92, "end": 2741.92, "text": " Yeah." 
}, { "start": 2741.92, "end": 2748.12, "text": " So first of all, I would say that AI is a big part of Salesforce, what they try to achieve," }, { "start": 2748.12, "end": 2751.04, "text": " to use AI to better help the customers." }, { "start": 2751.04, "end": 2757.76, "text": " So we have this separate research division, maybe not as large as Google or Facebook," }, { "start": 2757.76, "end": 2762.44, "text": " but I think everything works quite well in our research team." }, { "start": 2762.44, "end": 2769.92, "text": " In terms of our day-to-day operation, I think it's mostly similar to other industrial researchers." }, { "start": 2769.92, "end": 2780.52, "text": " We can be quite flexible to do research or do some more product oriented work." }, { "start": 2780.52, "end": 2787.6, "text": " We are motivated to do research that can generate high impact, that can really change the field" }, { "start": 2787.6, "end": 2791, "text": " in a more substantial way." }, { "start": 2791, "end": 2797.6, "text": " And while we wait for the GPU to finish training, we already just do other research stuff or" }, { "start": 2797.6, "end": 2805.4, "text": " read some papers involving some internal discussions or maybe try to solve some real production" }, { "start": 2805.4, "end": 2806.88, "text": " problems." }, { "start": 2806.88, "end": 2809.36, "text": " Cool." }, { "start": 2809.36, "end": 2812.92, "text": " Is there anything else you want to get out about this paper?" }, { "start": 2812.92, "end": 2819.48, "text": " You already said people can go to the web, to your repo, and you have a demo also available." }, { "start": 2819.48, "end": 2823.8399999999997, "text": " Is there anything you'd want to get out?" }, { "start": 2823.84, "end": 2827.88, "text": " What's the easiest for people to get started with this research?" }, { "start": 2827.88, "end": 2829.36, "text": " Yes." }, { "start": 2829.36, "end": 2836.36, "text": " I think first, again, welcome to try out our demo and welcome to visit our GitHub." }, { "start": 2836.36, "end": 2843, "text": " We do have, I think, quite detailed instructions on how to download and train our fine-tuned" }, { "start": 2843, "end": 2845.1200000000003, "text": " model." }, { "start": 2845.1200000000003, "end": 2853.6800000000003, "text": " And also, I welcome any suggestions or questions you might have about our model that we can" }, { "start": 2853.68, "end": 2858.3599999999997, "text": " use that to improve our model or the code." }, { "start": 2858.3599999999997, "end": 2861.48, "text": " That would be great." }, { "start": 2861.48, "end": 2868.2, "text": " Dongxu, anything, any last messages?" }, { "start": 2868.2, "end": 2872.3599999999997, "text": " Our team is expanding, so if you are interested, just let you know." }, { "start": 2872.3599999999997, "end": 2878.2, "text": " Yeah, we are looking for an intern position in the visual language research." }, { "start": 2878.2, "end": 2879.3599999999997, "text": " Cool." }, { "start": 2879.3599999999997, "end": 2880.3599999999997, "text": " Who can apply?" }, { "start": 2880.3599999999997, "end": 2882.6, "text": " Anyone that is at university?" }, { "start": 2882.6, "end": 2884.7999999999997, "text": " Yeah, anyone can apply." }, { "start": 2884.7999999999997, "end": 2888.56, "text": " We hire globally, so we can do remote working now." }, { "start": 2888.56, "end": 2889.56, "text": " Cool." }, { "start": 2889.56, "end": 2890.56, "text": " Excellent." 
}, { "start": 2890.56, "end": 2894.16, "text": " Okay, Dongxu and Jinan, thank you very much for being here." }, { "start": 2894.16, "end": 2896.3199999999997, "text": " This was a lot of fun." }, { "start": 2896.3199999999997, "end": 2897.3199999999997, "text": " Thank you for having us." }, { "start": 2897.3199999999997, "end": 2898.3199999999997, "text": " Thank you." }, { "start": 2898.32, "end": 2912.92, "text": " Have a great day of preparation." } ]
3Tqp_B2G6u0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Blockwise Parallel Decoding for Deep Autoregressive Models
[ "Science & Technology" ]
[ "machine learning", "deep learning", "transformers", "nlp", "natural language processing", "ai", "artificial intelligence", "google brain", "autoregressive", "greedy decoding", "inference", "language model", "speedup" ]
https://arxiv.org/abs/1811.03115 Abstract: Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, generation still remains an inherently sequential process. To overcome this limitation, we propose a novel blockwise parallel decoding scheme in which we make predictions for multiple time steps in parallel then back off to the longest prefix validated by a scoring model. This allows for substantial theoretical improvements in generation speed when applied to architectures that can process output sequences in parallel. We verify our approach empirically through a series of experiments using state-of-the-art self-attention models for machine translation and image super-resolution, achieving iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality, or up to 7x in exchange for a slight decrease in performance. In terms of wall-clock time, our fastest models exhibit real-time speedups of up to 4x over standard greedy decoding. Authors: Mitchell Stern, Noam Shazeer, Jakob Uszkoreit
Hi there, today we'll look at blockwise parallel decoding for deep autoregressive models by Mitchell Stern, Noam Shazeer and Jakob Uszkoreit of UC Berkeley and Google Brain. So this is a bit more of an engineering paper than usual, which I find cool. It's basically an engineering trick to get these autoregressive models to decode faster, while you can either preserve fully their performance or suffer a bit of a drop in performance, while even speeding them up more. Alright, so let's dive in actually. The paper starts out with a description of what autoregressive models are and what decoding is in them. So let me try to quickly explain this. So what is an autoregressive model? So basically we're talking about, let's say, language models. So language models are the classic examples of these models, where a language model is a model that simply predicts the next word in a sequence. So you could have something like a cat sits on the, and then here is blank. So the language model is asked to predict which word is the word that follows. The language model basically does this by predicting the probability distribution over the next word. So w t plus one, if this is t here, this is t minus one and so on, given all the w's smaller or equal to t. So all the words that come before should lead to the next word being predicted. So the language model is tasked to ask what is the next word in the sequence, or what's the probability distribution over the next word. And then you can simply, you know, pick the maximum probability word or something like this. So that's pretty standard so far. So what is the autoregressive part in here? So basically the autoregressive part means that in order for me to find this word here, this next word, I will look at all of these words here. And what does it mean then when I want to use this language model for generating a sentence, let's say? So now I've trained the language model, it's really good at predicting the next word, I want to actually use it to do something more interesting. So I want it to generate a full sentence. What I do, let's say I pick the first word, the, right, I pick the first word, and I simply ask the language model, what's the next word? Right? And the language model can do this, it can assess what's the probability distribution here over words, and it will, for example, give me some distribution over words, and I pick the maximum one, I say, okay, the maximum one here is house. Okay, the house. The house. And then I go back and I ask the language model, well, what's the next word then? See, clearly, you're a language model. So you can give me, based on the two previous words, you can give me the next word, what's the next word, and the language model will maybe say the house is, and so on. So you can see how you can generate a sentence by simply feeding the answer that the language model gives back into the next step of predicting. So all of these now go into the next step, and once you've predicted the next step, the house is on. Once you've predicted that, then you can take that, in conjunction with everything you've predicted so far, to predict the next step.
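Since the video explains this loop only in prose, here is a minimal sketch of it in Python; `model` is a hypothetical function that maps a list of token ids to a vector of next-token probabilities (everything here is illustrative, not the paper's code):

```python
def greedy_decode(model, prefix, max_len, eos_token):
    """Generate a sequence one token at a time, feeding each prediction
    back in as input for the next step (the autoregressive loop)."""
    tokens = list(prefix)
    while len(tokens) < max_len:
        next_probs = model(tokens)             # distribution over the next word
        next_token = int(next_probs.argmax())  # greedily pick the most likely word
        tokens.append(next_token)
        if next_token == eos_token:            # stop at end-of-sequence
            break
    return tokens
```

Each iteration has to wait for the previous one to finish, which is exactly the sequential bottleneck discussed next.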
So you can use the language model that is trained to predict the next word to predict an entire sentence, and the autoregressive part basically means that its own predictions will serve as the basis for the next predictions, and this introduces a fundamental problem, namely that I have to basically wait for one prediction, so I have to wait here for "is" before I can predict "on", and this means, if this is my language model, it's a box, I can't help but go to the language model, wait for a response, okay, then go to the language model again, wait for a response again. This is an inherently sequential process here, where I have to do like M steps if M is the length of the sentence that I want, and we can't make use of batching normally. So usually what you do during training, during training you have a whole bunch of data, right, you have "the cat sits on the mat", you have "the house is blue", so just from these two sentences I can generate a bunch of training examples. I can say, this is a training example where the input is "the cat" and it's meant to predict "sits", then this is a training example where the input is "the cat sits" and the language model has to predict "on", this here is a training example, and this is a training example, so I can chunk this up into a whole bunch of training examples, and all of those, right, I can feed in parallel into a big matrix, I can put them all here and then run this thing through my language model in training mode, because each of them is already in the corpus. I can batch the training, but I can't batch the prediction, because of what we've seen before, because inherently predicting the next word depends on the last word that the model itself has output, so there is no training corpus around since we're not training, yeah. So this is the fundamental problem, and these authors tackle this problem, they say how can we make this faster, this decoding. So they introduce greedy decoding here where they say okay, this is what we've just seen, the probability of the next word is like the maximum log probability here, in that case, if the model predicts a log probability, over the words that we've input so far, right, and this X here is, so this is for example a translation task, a machine translation task, so the X would be the source language sentence, so maybe like a French sentence, and the Y smaller equal to J would be the so far decoded English sentence if we're trying to translate to English, and the Y J plus one would be the next word that we're trying to predict in the English sentence given the English sentence so far and the French sentence, the total French sentence. So greedy decoding just does this one step after another, and we try to go to what they call blockwise parallel decoding.
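To make the batching asymmetry concrete: during training, all prefixes are known from the corpus, so a transformer-style model can score every position in one forward pass, while the greedy_decode loop sketched earlier needs M sequential calls. A rough sketch of the batched training side (the `model`, which is assumed to return one row of logits per input position, and the NumPy-only setting are illustrative assumptions):

```python
import numpy as np

def training_loss(model, sentence):
    """All next-word targets come from the corpus, so the whole
    sentence is scored in a single batched forward pass."""
    logits = model(sentence[:-1])  # one pass: logits for every prefix at once
    log_probs = logits - np.logaddexp.reduce(logits, axis=-1, keepdims=True)
    # Average negative log-likelihood of each true next word.
    return -log_probs[np.arange(len(sentence) - 1), sentence[1:]].mean()
```

At inference time no such targets exist, so nothing can be batched across time steps, which is what the blockwise scheme attacks.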
So we can just jump to the graphics straight away because what they do is pretty straightforward and is best illustrated in this graphic actually. So they go from this situation where they already have this here, they have "I saw a dog ride", this is the sentence that has been decoded so far and we have to try to complete it. Naturally we'll ask what's the next word, but they say okay, what if we could predict not only the next word from this, but the word two positions away or three positions away, we could do this all at the same time, right? I mean I can certainly build a model, a language model that doesn't only predict the next word but predicts the word after that as well, though of course the predictor for this word still only gets this as an input, so this is the important thing here, so the part of the model that predicts the word two positions away isn't being informed that this word is being produced here. So naturally you would expect the quality to be worse, because the word one position away, two positions away and three positions away are each predicted basically independently of each other, just from the source context, so you can't expect a lot of coherency between the words. So this is the fundamental trade-off with such a model: you can predict farther into the future at the same time, but then these predictions can't basically depend on each other, and this degrades your performance quite a bit. So what these authors do to remedy that, they say, well, these things here, we can produce a bunch of them, right, since all that's required as an input is this, we can actually produce a batch of them at the same time, so we can produce one, two and three words into the future, and we can do this like a hundred times in parallel, no problem, alright, and we can sample this, we don't have to always take the most likely word, we can actually sample a bunch into the future, and this now gets smarter, because now I have a list of one hundred basically suggestions of what the continuation here could be, right? I take this not as a given, but I take these outputs as suggestions, alright, and then I can have another model, this is called verify here, I can have another model that scores all of these different decodings in parallel. Both of these can be done by the same model, we saw the language model can be either used to predict or to score something, since it inherently predicts the probability of sequences or of following words, we can let it output this probability all in parallel, so this also can count as a score. What I'm trying to say is, since the language model is constructed as outputting probabilities anyway, we can use it both to predict the next word and also, if we have a suggestion, we can use it to score that and to say okay, how likely is that, right? And then we are looking for the suggestion basically that has the highest score, and if you want to be really true to your original model you say, I want to look for the suggestion that would have had the maximum score had I decoded one by one, so then basically you retain the original performance, and you gain a speed up as long as what the greedy decoding would have produced is in your box of suggestions that you produce. As long as that's in there you gain a speed up, if that's not in there then you can
always fall back, you always have the one word ahead model, because you have that anyway, you predict the next word anyway, so in case none of these suggestions work out you still have this one word prediction basically, which is the model you started with, so in the worst case you're as fast as the greedy model, and in the best case your suggestions are so good that they are always the one that would have been decoded anyway, so you can basically in this case do three steps at once. Alright, so this verify step here is shown here and you see it will decode, now this is just one suggestion keep in mind, they can produce many suggestions at the same time if there's memory, and they can actually score each of these, so they can score this, they can score this and they can score this also independently as a batch, so they can do this in parallel, and here you see, yeah, this is executed in parallel. So the model will go and score this word "in" and say, ah, this is the argmax of the greedy decoding anyway, and it can also score this step and say, aha, given that there is an "in", this "the" is the argmax anyway, right, and you can score this step and say, ah, given that there's "in the", the argmax would have been "car", so that's not "bus", so we reject this suggestion, but we keep that part of the suggestion and say, okay, "in the" is basically what would have been decoded anyway according to the greedy decoding, so we can basically accept this here and continue from there. This is the accept step here, so you can see that in this one step, which yeah, we'll call one decoding step, we have basically done two of the greedy decoding steps in one go, by predicting into the future and then selecting the one that agrees with the original model, because the fundamental thing is we can score in parallel, but we cannot greedily produce in parallel. Alright, so they actually push this further by also eliminating one of the evaluations here, by combining basically the next predict step with the previous verify step, and it's pretty cool to look at that. So we're in the same situation, you have this and you suggest this continuation, and then the score model again will go here, but while you verify you also do the next predict at the same time, since you've built your model, since it's the same model, and this model, every time you execute it, it outputs a distribution over the next set of positions, you might as well take the outputs of it, right? So when you then decide to accept this here, you will already have the outputs computed for the next three positions, so this you can feed directly into this next predict step, you basically don't have to execute it, you simply go to the one you've accepted and then you look at the outputs that you get anyway from this model and use them. So you might ask, okay, how does a model look that scores and predicts into the future, and the answer is here, it's a bit out of order, I would have maybe liked this more previously, but in any case this is what they do. So they use a transformer architecture and you have to imagine it starts down here, and actually there is a huge network down here, right, this is just the output layer, so there's a giant transformer network down below and it produces this output representation. Now normally from this representation you would go to this what's called p layer here, this is an output vocabulary projection, so this has one entry for each of the words in your
vocabulary, so "the", "a", "cat" and so on, and you would then for each one predict a probability, so with this representation you basically project it onto this vocabulary and predict the probability distribution over the next word. But what they do is they say, no no no, we not only need the next word, we need the next three words, so let's actually split this output signal into three output signals, and they do this by introducing this hidden feed forward layer here, or a hidden transformer layer, it's a hidden layer, yeah, we insert a single feed forward layer with hidden size, okay, so they insert a hidden layer, and then they also add these skip connections here, right, they add the skip connections, which basically just means they feed through this output directly to here and add it to that, so basically the feed forward layer needs to transform this output here into the vocabulary input, one step ahead, two steps ahead and three steps ahead, and you can see here that those are independent, right, they don't depend on each other, there's nothing feeding back p1 here into the decision of p2, so they can be executed in parallel, but they lose the dependence on each other. Alright, so that's the architecture, and you can clearly see here it's able to predict three steps into the future at the same time, so yeah. Alright, so they also do different adjustments where they say, now, we can also kind of sacrifice a bit of the fidelity to the original model by not requiring that we only accept when the suggestion is the perfect best suggestion that would have been decoded by the greedy model. But what we could do is, if it's in the top k we could accept it, if it's good enough basically, if one of the suggestions that we have is good enough, then we'll accept it, or when you have like some sort of distance metric, they say here that the distance between our suggestion and the maximum, so what would have been best by the greedy, should be smaller than some constant epsilon, and that way you can sacrifice a bit of performance, but your suggestions will be accepted much more often, and thereby your speedup will be much higher. And they also experiment with whether or not they should fine tune the original model along with their model, and they also experiment with knowledge distillation, where they basically have like some teacher model and you train your model on the output of the teacher model. I don't want to go too far into this, since these are mostly kind of things to make it work even better. And you can see here that this is for example a machine translation task, so this is the WMT 2014 English-German translation, and with the regular setup they get a BLEU score of 26, and here higher is better, and you can see they get fairly sizable speedups by keeping the BLEU scores fairly constant, so they almost speed up by 2x, but if they allow the BLEU scores to go down a bit, they get a much higher speedup of like 3, and then if they do like distillation and fine tuning, they actually manage to keep up the performance even though they get very very high speedups, so they get speedups of up to like 5x by not dropping the BLEU scores very much, so that's pretty impressive. Another experiment they do is image super resolution, where you can see here, with regular decoding they try to really keep exactly the original model output and it doesn't speed it up too much, but when they allow for a bit of a mistake to be made, so here this is image super resolution, so values are between
They also make different adjustments, where they say: we can sacrifice a bit of fidelity to the original model by not requiring that we only accept a suggestion when it is exactly the token the greedy model would have decoded. Instead, if the suggestion is in the top k, we accept it — if one of the suggestions we have is good enough, we'll accept it. Or, with some sort of distance metric, they require that the distance between our suggestion and the maximum, i.e. what would have been best under greedy decoding, is smaller than some constant epsilon. That way you sacrifice a bit of performance, but your suggestions are accepted much more often, and thereby your speedup is much higher. They also experiment with whether or not they should fine-tune the original model along with their model, and they experiment with knowledge distillation, where you have some teacher model and you train your model on the outputs of that teacher. I don't want to go too far into these, since they are mostly tweaks to make it work even better. And you can see here that, for example, on a machine translation task — this is WMT 2014 English-to-German translation — the regular model gets a BLEU score of 26 (higher is better), and they get fairly sizable speedups while keeping the BLEU score fairly constant: they almost speed up by 2x. If they allow the BLEU score to drop a bit, they get a much higher speedup of about 3x, and with distillation and fine-tuning they actually manage to keep up the performance even at very high speedups — up to about 5x without dropping the BLEU score very much. That's pretty impressive. Another experiment they do is image super-resolution. With the regular setting, where they try to keep exactly the original model output, it doesn't speed up too much. But when they allow for a bit of a mistake to be made — this is image super-resolution, so pixel values are between zero and 255, and they allow an epsilon of two, which is less than 1% error on an individual pixel — then they get speedups of around 7x. And you can see in this regime that when k is 4, where k is the number of steps you decode ahead, the mean accepted block size is 3.75, which means on average 3.75 steps ahead are accepted: their suggestions are almost always good enough to be accepted, so they get this massive speedup by being able to skip entire decoding steps. They have a bunch of other results here: they show their wall-clock speedup, not just the iteration speedup, since an iteration speedup isn't worth much if you have to pay a huge computational cost for it — and they do get a big wall-clock speedup, up to 4x in super-resolution and over 3x in translation. So it's a pretty cool paper. They give some examples, a bunch more tables, and some samples of their super-resolution outputs. If this might be something for you, then use it — I think it's a pretty neat trick, especially for production systems. All right, that was it. Bye bye.
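As a recap of the predict–verify–accept scheme from this video, here is a minimal Python sketch of the decoding loop; `propose` and `acceptable` stand in for model calls and are my assumptions, not the paper's API.

```python
def blockwise_decode(propose, acceptable, prefix, max_len=60):
    """Sketch of blockwise parallel greedy decoding.

    propose(ctx)    -> list of k tokens predicted independently for the
                       next k positions (e.g. the multi-step head above)
    acceptable(ctx) -> set of tokens the scoring model would accept as the
                       next token ({argmax} for exact mode, or a top-k /
                       epsilon-ball set for the relaxed criteria)
    In the real system all k verification calls run as one parallel batch.
    """
    out = list(prefix)
    while len(out) < max_len:              # EOS handling omitted for brevity
        proposal = propose(out)
        accepted = 0
        for i, tok in enumerate(proposal):  # verify step
            if tok in acceptable(out + proposal[:i]):
                accepted += 1
            else:
                break
        # the 1-step head is the base model's own next-word prediction, so in
        # greedy mode at least one token is always accepted and decoding advances
        out.extend(proposal[:max(accepted, 1)])
    return out
```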
[ { "start": 0, "end": 6.640000000000001, "text": " Hi there, today we'll look at blockwise parallel decoding for deep autoregressive models by" }, { "start": 6.640000000000001, "end": 15.200000000000001, "text": " Mitchell Stern, Noam Shazir and Jakob Uschkordei of UC Berkeley and Google Brain." }, { "start": 15.200000000000001, "end": 21.44, "text": " So this is a bit more of an engineering paper than usual, which I find cool." }, { "start": 21.44, "end": 28.400000000000002, "text": " It's basically an engineering trick to get these autoregressive models to decode faster," }, { "start": 28.4, "end": 36.48, "text": " while you can either preserve fully their performance or suffer a bit of a drop in performance," }, { "start": 36.48, "end": 39.12, "text": " while even speeding them up more." }, { "start": 39.12, "end": 46.72, "text": " Alright, so let's dive in actually." }, { "start": 46.72, "end": 52.32, "text": " The paper starts out with a description of what autoregressive models are and what decoding" }, { "start": 52.32, "end": 54.239999999999995, "text": " is in them." }, { "start": 54.24, "end": 59.36, "text": " So let me try to quickly explain this." }, { "start": 59.36, "end": 62.800000000000004, "text": " So what is an autoregressive model?" }, { "start": 62.800000000000004, "end": 68, "text": " So basically we're talking about, let's say, language models." }, { "start": 68, "end": 73.2, "text": " So language models are the classic examples of these models, where you have a language" }, { "start": 73.2, "end": 77.08, "text": " model is a model that simply predicts the next word in a sequence." }, { "start": 77.08, "end": 88.4, "text": " So you could have something like a cat sits on the, and then here is blank." }, { "start": 88.4, "end": 95.44, "text": " So the language model is asked to predict which word is the word that follows." }, { "start": 95.44, "end": 101.16, "text": " The language model basically does this by predicting the probability distribution over" }, { "start": 101.16, "end": 102.28, "text": " the next word." }, { "start": 102.28, "end": 112.84, "text": " So w t plus one, if this is t here, this is t minus one and so on, given all the w's smaller" }, { "start": 112.84, "end": 114.76, "text": " or equal than t." }, { "start": 114.76, "end": 122.36, "text": " So all the words that come before should lead to the next word being predicted." }, { "start": 122.36, "end": 128.64, "text": " So the language model is tasked to ask what is the next word in the sequence, or what's" }, { "start": 128.64, "end": 131.16, "text": " the probability distribution over the next word." }, { "start": 131.16, "end": 136.04, "text": " And then you can simply, you know, pick the maximum probability word or something like" }, { "start": 136.04, "end": 137.04, "text": " this." }, { "start": 137.04, "end": 143.04, "text": " So that's that's pretty standard so far." }, { "start": 143.04, "end": 146.28, "text": " So what is the autoregressive part in here?" }, { "start": 146.28, "end": 153.16, "text": " So basically the autoregressive part means that in order for me to find this word here," }, { "start": 153.16, "end": 158.04, "text": " this next word, I will look at all of these words here." 
}, { "start": 158.04, "end": 164.23999999999998, "text": " And what does it mean then when I want to use this language model for generating generating" }, { "start": 164.23999999999998, "end": 169.68, "text": " a sentence, let's say, so I'm now I've trained the language model, it's really good at predicting" }, { "start": 169.68, "end": 174.89999999999998, "text": " the next word, I want to actually use it to do something more interesting." }, { "start": 174.89999999999998, "end": 182.04, "text": " So I, I want it to generate a full sentence, what I do, let's say I pick the first word," }, { "start": 182.04, "end": 187.92, "text": " the right, I pick the first word, and I simply ask the language model, why what's the next" }, { "start": 187.92, "end": 188.92, "text": " word?" }, { "start": 188.92, "end": 189.92, "text": " Right?" }, { "start": 189.92, "end": 195.95999999999998, "text": " And the language model can do this, it can assess what's the probability distribution" }, { "start": 195.95999999999998, "end": 201.72, "text": " here over words, and it will, for example, give me some some distribution over words," }, { "start": 201.72, "end": 206.44, "text": " and I pick the maximum one, I say, okay, the maximum one here is house." }, { "start": 206.44, "end": 210.2, "text": " Okay, the house." }, { "start": 210.2, "end": 211.92, "text": " The house." }, { "start": 211.92, "end": 216.79999999999998, "text": " And then I go back and I ask the language model, well, what's the next word then?" }, { "start": 216.8, "end": 218.8, "text": " See, clearly, you're a language model." }, { "start": 218.8, "end": 223.52, "text": " So you can give me based on the two previous words, you can give me the next word, what's" }, { "start": 223.52, "end": 230.8, "text": " the next word, and the language model will maybe say the house is, and so on." }, { "start": 230.8, "end": 237.8, "text": " So you can see how you can generate a sentence by simply basically feeding the answer that" }, { "start": 237.8, "end": 242.72000000000003, "text": " the language model gives feeding it into the next step of predicting." }, { "start": 242.72, "end": 247.35999999999999, "text": " So all of these now go into the next step, and once you've predicted the next step, the" }, { "start": 247.35999999999999, "end": 251.07999999999998, "text": " house is on." }, { "start": 251.07999999999998, "end": 255.6, "text": " Once you've predicted that, then you can take that and in conjunction with everything you've" }, { "start": 255.6, "end": 258.66, "text": " predicted so far, to predict the next step." 
}, { "start": 258.66, "end": 263.76, "text": " So you can use the language model that is trained to predict the next word to predict" }, { "start": 263.76, "end": 268.36, "text": " an entire sentence and the autoregressive part basically means that its own predictions" }, { "start": 268.36, "end": 275.32, "text": " will serve as the basis for the next predictions, and this introduces a fundamental problem," }, { "start": 275.32, "end": 283.68, "text": " namely that I have to basically wait for one prediction, so I have to wait here for is" }, { "start": 283.68, "end": 292.92, "text": " before I can predict on, and this means if I have a, I basically can't help but, so if" }, { "start": 292.92, "end": 298.28000000000003, "text": " this is my language model, it's a box, I can't help but go to the language model, wait for" }, { "start": 298.28, "end": 303.23999999999995, "text": " a response, okay, then go to the language model again, wait for a response again." }, { "start": 303.23999999999995, "end": 309.44, "text": " This is inherently sequential nature here where I have to do like M steps if M is the" }, { "start": 309.44, "end": 318.35999999999996, "text": " length of the sentence that I want, and we can't make use of batching normally, so usually" }, { "start": 318.35999999999996, "end": 324.23999999999995, "text": " what you do during training, during training you have a whole bunch of data, right, you" }, { "start": 324.24, "end": 343.96000000000004, "text": " have the cat sits on the mat, you have the house, the house is blue, so I can generate," }, { "start": 343.96000000000004, "end": 349.28000000000003, "text": " just from these two sentences I can generate a bunch of training examples, I can ask, this" }, { "start": 349.28, "end": 356.67999999999995, "text": " is a training example where the input is the cat and it's meaning to predict sits, then" }, { "start": 356.67999999999995, "end": 361.79999999999995, "text": " this is a training example where the input is the cat sits and the language model has" }, { "start": 361.79999999999995, "end": 368.52, "text": " to predict on, this here is a training example, this, this is a training example, so I can" }, { "start": 368.52, "end": 373.79999999999995, "text": " chunk this up into a whole bunch of training examples and all of those I can write, I can" }, { "start": 373.8, "end": 381.84000000000003, "text": " feed in parallel into a big matrix, I can all put them here and then run this thing" }, { "start": 381.84000000000003, "end": 387.12, "text": " through my language model in training mode because each of them is already like is in" }, { "start": 387.12, "end": 394.2, "text": " the corpus, I can batch the training but I can't batch the prediction because of what" }, { "start": 394.2, "end": 400.12, "text": " we've seen before because inherently the next predicting the next word depends on the last" }, { "start": 400.12, "end": 405.72, "text": " word that the model itself has output, so there is no training corpus around since we're" }, { "start": 405.72, "end": 412.04, "text": " not training, yeah, so this is the fundamental problem and these authors tackle this problem," }, { "start": 412.04, "end": 419.64, "text": " they say how can we make this faster, this decoding, so they introduce greedy decoding" }, { "start": 419.64, "end": 428.14, "text": " here where they say okay, this is what we've just seen, the probability of the next word" }, { "start": 428.14, "end": 435.88, "text": " is like the maximum, the maximum log 
probability here in that case if the model predicts a" }, { "start": 435.88, "end": 444.76, "text": " log probability over the words that we've input so far, right, and this X here is, so" }, { "start": 444.76, "end": 449, "text": " this is for example a translation task, a machine translation task, so the X would be" }, { "start": 449, "end": 456.56, "text": " the source language sentence, so maybe like a French sentence and the Y smaller equal" }, { "start": 456.56, "end": 463.36, "text": " to J would be the so far decoded English sentence if we're trying to translate to English and" }, { "start": 463.36, "end": 468.72, "text": " the Y J plus one would be the next word that we're trying to predict in the English sentence" }, { "start": 468.72, "end": 475.48, "text": " given the English sentence so far and the French sentence, the total French sentence," }, { "start": 475.48, "end": 482.98, "text": " so greedy decoding just does this one step after another and we try to go to what they" }, { "start": 482.98, "end": 487.64000000000004, "text": " call blockwise parallel decoding." }, { "start": 487.64000000000004, "end": 494.28000000000003, "text": " So we can just jump to the graphics straight away because what they do is pretty straightforward" }, { "start": 494.28000000000003, "end": 500.92, "text": " and is best illustrated in this graphic actually, so they go from this situation where they" }, { "start": 500.92, "end": 510.6, "text": " already have this here, they have a saw a dog ride, this is the sentence that has been" }, { "start": 510.6, "end": 518.52, "text": " decoded so far and we have to try to complete it, naturally we'll ask what's the next word," }, { "start": 518.52, "end": 524.76, "text": " but they say okay what if we could predict not only the next word from this but the word" }, { "start": 524.76, "end": 531.12, "text": " two positions away or three positions away, we could do this all at the same time, right," }, { "start": 531.12, "end": 535.9200000000001, "text": " I mean I can certainly build a model, a language model that doesn't only predict the next word" }, { "start": 535.92, "end": 544.9599999999999, "text": " but predicts the word after that as well, though of course if then this word, the predictor" }, { "start": 544.9599999999999, "end": 550.68, "text": " for this word still only gets this as an input so this is the important thing here, so the" }, { "start": 550.68, "end": 559.2199999999999, "text": " part of the model that predicts the is two words away isn't being informed that this" }, { "start": 559.2199999999999, "end": 565, "text": " word is being produced here, so naturally you would expect the quality to be worse because" }, { "start": 565, "end": 571.4, "text": " the word one position away, two positions away and three positions away are each predicted" }, { "start": 571.4, "end": 579.08, "text": " basically independently of each other just from the source context, so there is no, you" }, { "start": 579.08, "end": 588.84, "text": " can't expect like a coherency between the words or not a lot, so this is the fundamental" }, { "start": 588.84, "end": 593.44, "text": " trade-off with such a model, you can predict farther into the future at the same time but" }, { "start": 593.44, "end": 599.72, "text": " then these predictions can't basically depend on each other and this degrades your performance" }, { "start": 599.72, "end": 606.8000000000001, "text": " quite a bit, so what these authors do is to remedy that, they say well these things here" }, { 
"start": 606.8000000000001, "end": 613.12, "text": " we can, I mean we can produce a bunch of them, right, since all that's required as an input" }, { "start": 613.12, "end": 618.7600000000001, "text": " is this, we can actually produce like, we can produce a batch of them at the same time," }, { "start": 618.76, "end": 624.2, "text": " so we can produce one, two and three words into the future and we can do this like a" }, { "start": 624.2, "end": 631.2, "text": " hundred times in parallel, no problem, alright, and we can sample this, we don't have to always" }, { "start": 631.2, "end": 639.48, "text": " take the most likely word, we can actually sample a bunch into the future and this now" }, { "start": 639.48, "end": 646.42, "text": " gets smarter because now I have a list of one hundred basically suggestions of what" }, { "start": 646.42, "end": 652.3199999999999, "text": " the continuation here could be, right, I have, I take this not as a given but I take these" }, { "start": 652.3199999999999, "end": 660.12, "text": " outputs as suggestions, alright, and then I can have another model that, this is called" }, { "start": 660.12, "end": 668.24, "text": " verify here, I can have another model that scores all of these different, all of these" }, { "start": 668.24, "end": 672.92, "text": " different decodings in parallel, both of these can be done by the same model, we saw the" }, { "start": 672.92, "end": 679.9599999999999, "text": " language model can be either used to predict or to score something, since it inherently" }, { "start": 679.9599999999999, "end": 689.28, "text": " predicts the probability of sequences or of following words, we can, we can let it output" }, { "start": 689.28, "end": 694.92, "text": " this probability all in parallel, so this also can count as a score, what I'm trying" }, { "start": 694.92, "end": 701.04, "text": " to say is you can, since the language model is constructed as a, as outputting probabilities" }, { "start": 701.04, "end": 710.28, "text": " anyway, like such, we can use it both to predict the next word and also if we have a suggestion" }, { "start": 710.28, "end": 719.16, "text": " we can use it to score that and to say okay how likely is that, right, and then what we" }, { "start": 719.16, "end": 726.5999999999999, "text": " can make sure is that the suggestion, we are looking for the suggestion basically that" }, { "start": 726.6, "end": 733.72, "text": " has the highest score and if you want to be really true to your original model you say" }, { "start": 733.72, "end": 741.58, "text": " I want to look for the suggestion that has the maximum, that would have had the maximum" }, { "start": 741.58, "end": 750.88, "text": " score had I decoded one by one, so then basically you retain the original performance and you" }, { "start": 750.88, "end": 759.92, "text": " gain a speed up as long as the, what the greedy decoding would have produced is in your suggestion," }, { "start": 759.92, "end": 763.6, "text": " in your box of suggestions that you produce, as long as that's in there you gain a speed" }, { "start": 763.6, "end": 769.66, "text": " up, if that's not in there then you can always, you always have the one word ahead model because" }, { "start": 769.66, "end": 775.72, "text": " that's, you have that anyway, you predict the next word anyway, so in case none of these" }, { "start": 775.72, "end": 782.88, "text": " suggestions work out you still have this one word prediction basically which is the model" }, { "start": 782.88, "end": 792.08, 
"text": " you started with, so at worst case you're as fast as the greedy model and in best case" }, { "start": 792.08, "end": 798.72, "text": " you always, your suggestions are so good that they are always the one that would have been" }, { "start": 798.72, "end": 807.36, "text": " decoded anyway, so you can basically in this case do three steps at once. Alright, so this" }, { "start": 807.36, "end": 814.9, "text": " verify step here is shown here and you see it will decode, now this is just one suggestion" }, { "start": 814.9, "end": 822.44, "text": " keep in mind, they can produce many suggestions at the same time if there's memory or and" }, { "start": 822.44, "end": 827.6, "text": " they can actually, they can score each of this, so they can score this, they can score" }, { "start": 827.6, "end": 837.72, "text": " this and they can score this also independently as a batch, so they can do this in parallel" }, { "start": 837.72, "end": 843.84, "text": " and here you see, yeah here is executed in parallel, so the model will go and will score" }, { "start": 843.84, "end": 848.52, "text": " this word in and say ah this would have been, this is the argmax of the greedy decoding" }, { "start": 848.52, "end": 854.88, "text": " anyway and it can also score this step and say aha given that there is an in that this" }, { "start": 854.88, "end": 861.72, "text": " the is the argmax anyway, right and you can score this step and say ah given that there's" }, { "start": 861.72, "end": 869.08, "text": " in the, the argmax would have been car, so that's not bus, so we reject this suggestion" }, { "start": 869.08, "end": 876.24, "text": " but we keep that part of the suggestion and say okay the in the is basically what would" }, { "start": 876.24, "end": 886.44, "text": " have been decoded anyway according to the greedy decoding, so we can basically accept" }, { "start": 886.44, "end": 896.48, "text": " this here and continue from there, this is the accept step here, so this basically, so" }, { "start": 896.48, "end": 902.52, "text": " you can see in this one step which yeah we'll call one decoding step, we have basically" }, { "start": 902.52, "end": 912.42, "text": " done two of the greedy decoding steps in one go, so by predicting into the future and then" }, { "start": 912.42, "end": 919.04, "text": " selecting the one that agrees with the original model because we can, the fundamental thing" }, { "start": 919.04, "end": 928.4, "text": " is we can score in parallel but we can greedily produce not in parallel, alright so they actually" }, { "start": 928.4, "end": 939.04, "text": " push this further by also eliminating one of the, one of the evaluations here by combining" }, { "start": 939.04, "end": 948.4, "text": " basically the next predict step with the previous verify step and it's pretty cool to look at" }, { "start": 948.4, "end": 957.04, "text": " that, so we're in the same situation, you have this and you suggest this continuation" }, { "start": 957.04, "end": 968.04, "text": " and then the score model again will go here but while you verify you also do the next" }, { "start": 968.04, "end": 973.56, "text": " predict at the same time, since you've built your model, since it's the same model and" }, { "start": 973.56, "end": 982.52, "text": " this model every time you execute it, it outputs a distribution over the next set of positions," }, { "start": 982.52, "end": 988.4, "text": " you might as well take the outputs of it, right, so when you then decide to accept this" }, { "start": 988.4, 
"end": 996.36, "text": " here, you will already have the outputs computed for the next three positions, so this you" }, { "start": 996.36, "end": 1001.48, "text": " can feed directly into this next predict step, you basically don't have to execute it, you" }, { "start": 1001.48, "end": 1009.76, "text": " simply go to the one you've accepted and then you look at the outputs that you get anyway" }, { "start": 1009.76, "end": 1018.88, "text": " from this model and use them, so you might ask, okay which, how does a model look that" }, { "start": 1018.88, "end": 1024.12, "text": " like scores and predicts into the future and this, the answer is here, it's a bit out of" }, { "start": 1024.12, "end": 1029.8799999999999, "text": " order, I would have maybe liked this more previously but in any case this is what they" }, { "start": 1029.8799999999999, "end": 1034.52, "text": " do, so they use a transformer architecture and you have to imagine it starts down here" }, { "start": 1034.52, "end": 1040.48, "text": " and actually there is a huge network down here, right, this is just the output layer," }, { "start": 1040.48, "end": 1047.6, "text": " so there's a giant transformer network down below and it produces this output representation," }, { "start": 1047.6, "end": 1054.84, "text": " now normally from this representation you would go to this what's called p layer here," }, { "start": 1054.84, "end": 1060.52, "text": " this is a output vocabulary projection, so this has one entry for each of the words in" }, { "start": 1060.52, "end": 1068.76, "text": " your vocabulary, so the, a, cat and so on and you would then for each one predict a" }, { "start": 1068.76, "end": 1076.24, "text": " probability, so with this representation you basically project it onto this vocabulary" }, { "start": 1076.24, "end": 1082.6399999999999, "text": " and predict the probability distribution over the next word, but what they do is they say" }, { "start": 1082.6399999999999, "end": 1087.68, "text": " no no no we not only need the next word, we need the next three words, so let's actually" }, { "start": 1087.68, "end": 1095.5600000000002, "text": " split this output signal into three output signals and they do this by introducing this" }, { "start": 1095.5600000000002, "end": 1103.3200000000002, "text": " hidden feed forward layer here or a hidden transformer layer, it's a hidden layer, yeah" }, { "start": 1103.3200000000002, "end": 1110.28, "text": " we insert a single feed forward layer with hidden size, okay, so they insert a hidden" }, { "start": 1110.28, "end": 1119.16, "text": " layer and then they also add these skip connections here, right, they add the skip connections" }, { "start": 1119.16, "end": 1127.52, "text": " which basically just means they feed through this output directly to here and add it to" }, { "start": 1127.52, "end": 1135.08, "text": " that, so basically the feed forward layer needs to transform this output here into the" }, { "start": 1135.08, "end": 1141.84, "text": " vocabulary input, one step ahead, two steps ahead and three steps ahead and you can see" }, { "start": 1141.84, "end": 1146.6, "text": " here that those are independent, right, they don't depend on each other, there's nothing" }, { "start": 1146.6, "end": 1151.84, "text": " feeding back p1 here into the decision of p2 so they can be executed in parallel, but" }, { "start": 1151.84, "end": 1160.12, "text": " they lose the dependence on each other, alright, so that's the architecture and you can clearly" }, { "start": 1160.12, 
"end": 1171.1599999999999, "text": " see here it's able to predict three steps into the future at the same time, so yeah," }, { "start": 1171.1599999999999, "end": 1177.2399999999998, "text": " alright so they also do different adjustments where they say now yeah we can also kind of" }, { "start": 1177.2399999999998, "end": 1187.6799999999998, "text": " sacrifice a bit of the fidelity to the original model by not requiring that the basically" }, { "start": 1187.68, "end": 1192.96, "text": " we don't only accept when the suggestion is the perfect best suggestion that would have" }, { "start": 1192.96, "end": 1199.04, "text": " been decoded by the greedy model, but what we could do is we could just if it's in the" }, { "start": 1199.04, "end": 1205.48, "text": " top k we could accept it, if it's in the if it's good enough basically one of the suggestions" }, { "start": 1205.48, "end": 1210.4, "text": " that we have is good enough then we'll accept it or when you have like some sort of distance" }, { "start": 1210.4, "end": 1216, "text": " metric they say here so the distance between our suggestion and the maximum so the what" }, { "start": 1216, "end": 1222, "text": " would have been best by the greedy should be smaller than some constant epsilon and" }, { "start": 1222, "end": 1226.8, "text": " that way you can sacrifice a bit of performance but your suggestions will be accepted much" }, { "start": 1226.8, "end": 1232.6, "text": " more often and thereby your speedup will be much higher and they also experiment with" }, { "start": 1232.6, "end": 1239.4, "text": " whether or not they should fine tune the original model along with their model and also the" }, { "start": 1239.4, "end": 1246.5600000000002, "text": " experiment with knowledge distillation where they basically have like some some teacher" }, { "start": 1246.5600000000002, "end": 1251.92, "text": " model and you train the your model on the output of the teacher model don't want to" }, { "start": 1251.92, "end": 1258.92, "text": " go too far into this since these are mostly kind of things to make it work even better" }, { "start": 1258.92, "end": 1266.64, "text": " and you can see here that this is for example a machine translation task so this is the" }, { "start": 1266.64, "end": 1274.44, "text": " WMT 2014 English German translation and there's a regular they get a blow score of 26 and" }, { "start": 1274.44, "end": 1283.3600000000001, "text": " here higher is better and if you can see they get a fairly sizable speedups by keeping the" }, { "start": 1283.3600000000001, "end": 1289.8000000000002, "text": " blow scores fairly constant so they they almost speed up by 2x but if they allow the blow" }, { "start": 1289.8, "end": 1297.12, "text": " scores to go down a bit they get a much higher speedup of like 3 and then if they do like" }, { "start": 1297.12, "end": 1303.12, "text": " distillation and fine tuning they actually manage to keep up the performance even though" }, { "start": 1303.12, "end": 1310.56, "text": " they get very very high speedups so they get speedups until like 5x by not dropping the" }, { "start": 1310.56, "end": 1319.1399999999999, "text": " blow scores very much so that's that's pretty impressive another experiment they do is image" }, { "start": 1319.14, "end": 1326.2800000000002, "text": " super resolution where you can see here with regular they try to really keep exactly the" }, { "start": 1326.2800000000002, "end": 1332.5600000000002, "text": " original model output and it doesn't it doesn't speed it up 
too much but when they allow for" }, { "start": 1332.5600000000002, "end": 1339.8400000000001, "text": " a bit of a mistake to be made so here this is image super resolution so values are between" }, { "start": 1339.8400000000001, "end": 1347.64, "text": " zero and 255 and they allow epsilon equals to two of that so that's that's kind of less" }, { "start": 1347.64, "end": 1355.44, "text": " than 1% error on the individual pixel then they get a speed ups of 7x or something like" }, { "start": 1355.44, "end": 1361.72, "text": " this and you can see in this region here that when the K is for in case the number of steps" }, { "start": 1361.72, "end": 1371.64, "text": " that you decode ahead so and the mini mean block size is 3.75 that means on average 3.75" }, { "start": 1371.64, "end": 1376.3200000000002, "text": " steps ahead or accepted which means basically there their suggestions are almost always" }, { "start": 1376.32, "end": 1381.84, "text": " good enough to be accepted so they get this massive speed up by basically being able to" }, { "start": 1381.84, "end": 1390.3999999999999, "text": " jump these decoding steps yeah so they have a bunch of other results here there show their" }, { "start": 1390.3999999999999, "end": 1395.96, "text": " wall clock time speed up since iteration speed up as well but if you have to pay in huge" }, { "start": 1395.96, "end": 1401.28, "text": " computational cost it's not so good but they also show that they have a big kind of wall" }, { "start": 1401.28, "end": 1410.08, "text": " clock speed up up to up to 4x here in super resolution and over 3x in translation so it's" }, { "start": 1410.08, "end": 1415.12, "text": " a pretty cool paper they give some examples here a bunch of more tables some examples" }, { "start": 1415.12, "end": 1424, "text": " of their super resolution and yeah if this might be something for you then use it it's" }, { "start": 1424, "end": 1429.92, "text": " I think it's a pretty neat trick and yeah especially for production systems all right" }, { "start": 1429.92, "end": 1431.44, "text": " that was it bye bye." } ]
8wkgDnNxiVs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Solutions
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "evolution", "reinforcement learning", "neat", "open-ended", "never ending", "population", "bipedal walker" ]
From the makers of Go-Explore, POET is a mixture of ideas from novelty search, evolutionary methods, open-ended learning and curriculum learning. https://arxiv.org/abs/1901.01753 Abstract: While the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. Such a process would in effect build its own diverse and expanding curricula, and the solutions to problems at various stages would become stepping stones towards solving even more challenging problems later in the process. The Paired Open-Ended Trailblazer (POET) algorithm introduced in this paper does just that: it pairs the generation of environmental challenges and the optimization of agents to solve those challenges. It simultaneously explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems if better, catalyzing innovation. The term open-ended signifies the intriguing potential for algorithms like POET to continue to create novel and increasingly complex capabilities without bound. Our results show that POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved by direct optimization alone, or even through a direct-path curriculum-building control algorithm introduced to highlight the critical role of open-endedness in solving ambitious challenges. The ability to transfer solutions from one environment to another proves essential to unlocking the full potential of the system as a whole, demonstrating the unpredictable nature of fortuitous stepping stones. We hope that POET will inspire a new push towards open-ended discovery across many domains, where algorithms like POET can blaze a trail through their interesting possible manifestations and solutions. Authors: Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, so what you're seeing here are solutions found to this bipedal walker problem by a new algorithm called POET. As you might guess, the challenge is to keep this little thing here walking to the right as far as you can while it encounters various obstacles. And it is and remains a challenging reinforcement learning problem to have an agent learn to overcome various obstacles and walk well in different environments. So the paper we're going to look at is called POET — the full title is "Paired Open-Ended Trailblazer: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions" — by Rui Wang, Joel Lehman, Jeff Clune and Kenneth O. Stanley from Uber AI Labs. As you already saw, the challenge they take on is this bipedal walker problem. Their method is very general and not limited to this problem, but this is the problem they focus on. I'm going to skip some of the explanations and dig right into the problem. As you can see, the problem is the following: you have this thing here, which is the walker, and it has two legs, and specifically four joints — two here and two here — and you can apply torque to all four joints, so it's basically a four-output problem. And you do have sensors as input. The inputs include a LIDAR — that's this red line you see here; I think it has 16 of those at various angles. It also has pressure detection on the feet, I believe, to tell whether or not they are in contact with the ground, and it might also have a gyroscope that tells you the angle of the hull with respect to the ground. So you have various sensors on this thing, you're able to control what the legs are doing, and your goal is to make it go as far to the right as fast as possible. You see, the reward down here is negative 100 if the robot falls over, meaning if the head hits the ground. Otherwise it is 130 times delta x — how far you move to the right — minus a penalty on the hull angle. The hull angle, as I said, is this angle here, so you want to keep it as stable as possible, because a change in the angle per step gets penalized. You also get penalized for each torque you apply, so you want to apply minimal force on the joints while going as far as you can. But by far the most important term is going to the right as far and as fast as you can. There is an end here somewhere, and if you reach it, you get a score above 230. They choose the threshold of 230 to determine success: if the agent gets 230 or more, it has solved the environment — that's what they claim, from experience. As you see, the environment has various obstacles: there are gaps that you can fall into and need to jump or step over; there are these stumps, which can be of various heights — this one is a bit shorter, this one a bit taller; and the general terrain has a roughness, which can range from very smooth to very rough. So this is a parameterized environment, and they are able to generate these environments from parameters. The goal now is to have an agent that walks well in any environment you can think of. Here on the left you see a very challenging descent down stairs, and this one also isn't too easy, because there is a gap here.
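To pin down the reward just described, here is a rough Python sketch; the -100 on falling and the 130 · Δx term come from the description above, while the angle and torque penalty coefficients are placeholder values I made up.

```python
def step_reward(delta_x, delta_hull_angle, torques, fell_over,
                angle_coef=5.0, torque_coef=0.00035):
    """Per-step reward for the bipedal walker (sketch, assumed coefficients).

    delta_x: forward progress this step; fell_over: the head hit the ground.
    """
    if fell_over:
        return -100.0
    reward = 130.0 * delta_x                       # dominant term: move right
    reward -= angle_coef * abs(delta_hull_angle)   # keep the hull stable
    reward -= torque_coef * sum(abs(t) for t in torques)  # use minimal force
    return reward
```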
There are five parameters to these environments: the general roughness of the terrain — meaning how many hills it has and how quickly they come — the stump lower bound and stump upper bound, i.e. how high the stumps are, and similarly how long the gaps are. With these parameters you control how difficult an environment is. So the straightforward thing to do is simply to sample environments and throw a reinforcement learning approach at them — and that usually doesn't work. I already want to show this before having talked about what the algorithm is. This is the approach where you try evolution strategies, which you can think of as just a straightforward optimization procedure: there is an agent and there is an environment, and you try to solve the environment by direct optimization. Now, evolution strategies are not your classic RL algorithm, but you can compare them to one — it's just that these people, I have a feeling, like the more esoteric learning algorithms. In any case, you see these environments — large gap, rough surface, and so on; these are the flagship example environments from the figures. The evolution strategy, the classic approach of just optimizing directly from scratch, gets very low scores on average, whereas POET gets very high scores, above the 230 threshold. So what's happening? If you try to solve these environments from scratch, you basically don't have much of a chance. Let's say you're here and you're trying to move to the right: you might learn how to do that — and you see the from-scratch solution actually manages to move to the right — but as soon as you reach this gap, you just fall into it, because all you've learned so far is how to move right. What you would need to do is plan ahead, like POET does: you need to see that there is a gap, plan ahead, lift up a leg early in order to step over the gap, and then do a little jump right here. And this sequence of actions, this kind of planning ahead, is very difficult for a classic RL algorithm to learn, because you get reward for everything you do. Initially you get reward for moving to the right: that's 10 if you reach here, another 10 if you reach here, another 10 here, and so on. Whereas if you lift up your leg, that's like minus five, because you've changed this angle, and we saw that gives negative reward. So a classic optimization algorithm will always fall into the hole, because that is where the immediate reward is, whereas you'd have to take a sequence of actions that gives you no reward right now but more reward later. To learn this, you need a better algorithm than straightforward optimization. Maybe I can explain it with a maze: here is the start, here is the goal, and there are walls, something like this. What you need to do is go around. But a classic optimization algorithm always goes straight here, because that's ever so slightly closer to the goal, and then it gets stuck, because it can't fathom that it needs to go around — that it needs to get farther away from the goal before it can get closer. These are ideas we've talked about before, in open-ended learning and novelty search.
What you would want to do is gradually build up solutions that can explore the space — go here, go here, go here — and basically construct these solutions step by step. There are two components to what this POET algorithm does. The first component is curriculum learning. What does curriculum learning mean? It means you start off with easy tasks and increasingly build up to more and more complex tasks. So let's say I have an environment here, and at the beginning we just start off with a flat surface, and here is our little walker. We'll just train it to move right on that, which should be doable with a classic approach. Then we gradually move to more difficult environments: maybe we make the terrain a bit rougher — and an agent that can already walk to the right has a head start; think of it as pre-training, like in NLP. You can then get more and more challenging, and at some point you build in a gap. The agent already knows how to move to the right, and now it might actually learn to jump a small gap — if you make it small at the beginning, not like the very large gap down here. If it's small, the agent might stumble over it by accident and then continuously learn to master it. This is the curriculum learning approach: from environment to environment you get harder and harder challenges — first flat, then rougher, then rough with a gap, and so on (a minimal code sketch of this single-path curriculum follows after this paragraph). The second ingredient of POET is what they call stepping-stone learning, or transfer learning. That's where you have to think of this not as a single agent optimizing, but as a population of agents. Let's say you do this curriculum learning and you're getting fairly good at rougher and rougher terrains. In parallel you also have a second optimization procedure: it also starts out flat, but here you keep the terrain flat and just increase the number of gaps, whereas over there you just keep making the terrain rougher and rougher. The philosophy is that an agent that can master the rougher terrain — because part of it kind of looks like a gap — might have skills that transfer to the environment over here, where you do have a proper gap. Or the skill learned in an environment with one of these stumps, where you have to climb over, might transfer to getting over a peaky terrain here. So the idea of POET is to start off with a generic, flat, very easy environment and then spawn new environments from it in a kind of hereditary way.
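As promised above, here is a minimal sketch of the plain single-path curriculum, with assumed helper functions `make_env`, `train_steps` and `evaluate`; POET replaces this single path with a whole population of such paths plus transfer between them.

```python
def direct_curriculum(agent, make_env, train_steps, evaluate,
                      solved=230.0, step=0.1, budget=10_000):
    """Single-path curriculum toward one target: raise difficulty when solved."""
    difficulty = 0.0                   # 0.0 = flat terrain, 1.0 = target env
    for _ in range(budget):
        env = make_env(difficulty)     # roughness / gap width scale with it
        train_steps(agent, env)
        if evaluate(agent, env) >= solved and difficulty < 1.0:
            difficulty = min(1.0, difficulty + step)  # only then make it harder
    return agent
```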
So one child environment might get a bit rougher, one might keep that roughness, and one might include a gap or something like this; then again you spawn new environments — more rough, more rough, more rough with a stump here; this one retains the gap, and this one now gets two gaps, and so on. You continuously train agents on all of these, and you always check whether a skill learned over here might actually transfer to an environment over there. So you get this continuously growing tree of solutions, and once you improve on one branch, that improvement might also be good on another branch. They always make the comparison to biological evolution, where a strategy that works for birds can suddenly be cross-adopted by mammals for an entirely different problem, because the same skill might be valuable. So these are the two ingredients of POET, and now I want to show you the complete POET algorithm. What does it do? You start off with an initial environment, and in POET every environment is paired with an agent — there is one agent per environment. Then, for a number of time steps, you do the following. First of all, you go through your environments and mutate them. We've already seen that these environments can be generated from a parameter vector — we had five numbers: how rough, how stumpy, and how wide the gaps are. Let's say for illustration we have three numbers, and the parent environment might be (1, 2, 5). What you do is spawn children, and each parameter has a chance of mutating: one child might be (1, 3, 5), another (1, 4, 6), another (2, 2, 5). You already see that the requirement here is that the environments can actually be procedurally generated and mutated like this, where a small mutation probably leads to a small change in the environment. In any case, you mutate them, and then you optimize each agent: each new environment is paired with a new agent that only ever tries to solve that particular environment. Within one environment, you simply do your classic optimization — we already saw that the evolution strategy is akin to a classic optimization algorithm from reinforcement learning. So you optimize each agent for a couple of steps — not to convergence every time, just a couple of steps — and each agent, including the one in the original environment, is continuously trained on its environment throughout the process. Of course you have bounded computation, so you eventually need to drop the very old ones, but in principle, as all of this goes on, all the agents keep training on their own environments: the walker here will always try to solve this particular environment, and the walker created when a new environment is generated will only ever try to solve that new environment, throughout the whole algorithm. So: you do mutations, you spawn new environments, you do a couple of optimization steps, and then you do the transfer attempt. You want to evaluate all the candidate agents on all the environments — in principle; you can cut this down, but in principle you go through the environments and say: for this environment right here, I'm going to evaluate all of the other agents.
You can do this in a couple of different ways: you can just straight-up evaluate them, or you can optimize them for a few steps to see whether they can easily be adapted to that environment. Ultimately, you have to come up with a criterion that says, for each agent: is this agent better or worse than the agent that is continuously trained on this environment? If it's worse, you keep the incumbent; if any agent is better, you transfer that better one over to replace the incumbent — you basically copy it over to this environment. That's where the transfer learning comes in: you continuously try all the agents on all the environments, and if one is better, you transfer it. So here it says: if the score on the environment is better than the incumbent's, transfer. Now, there is a lot hidden here. For example, in the mutate-environments step they check whether the newly mutated environments are not too hard and not too easy, which basically means the current agents can solve them, but not solve them too easily — the score needs to fall into a certain range. They also check whether the environments are novel enough, and I believe they do this by calculating the distance between environments in terms of their parameter vectors — and not just the distance between two environments, but the distance to all the ones seen so far. So, going back to my original beautiful drawing of the tree: if you create a new environment, say right here, you want to check it against all environments you've seen so far to determine whether it is new. You compute the distance to all of them, and if you have enough distance to your nearest neighbors, you are novel. That's roughly how they determine whether an environment is new. So that's basically the POET algorithm: you continuously create new environments by mutation; you ensure they are solvable — hard enough but not too hard — and novel; and you optimize each agent for its own environment continuously as the process goes on. And — I want to stress this — it's not only the frontier: you're not only looking at the newest generation, but at all generations, because while the older environments are easier, their agents have been optimized for longer, so their skills might be very handy. You always want to look at your entire population, and then, crucially, you do these transfer attempts. There is a lot hidden here, and just look at the number of hyperparameters — how much you transfer, how much you mutate, how many steps you do; each of these subroutines has a billion hyperparameters and learning rates and so on. If I look at this algorithm, I would be very scared to attempt something like this myself: it would be a long and hard process to evaluate all of these different hyperparameters. A compact sketch of the main loop, with assumed helper functions, follows below.
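Here is that compact sketch; everything in it — `mutate`, `es_step`, `evaluate`, `distance`, the score band, the mutation interval — is an assumed placeholder to show the structure, not the paper's actual code.

```python
import random

def poet_loop(init_env, init_agent, es_step, evaluate, mutate, distance,
              iterations=1000, min_score=50, max_score=300, novelty_eps=0.5):
    """Sketch of POET: mutate envs, optimize per-env agents, attempt transfers."""
    population = [(init_env, init_agent)]    # each environment paired with an agent
    for t in range(iterations):
        # 1) mutate: spawn a child env, keep it only if solvable-but-hard and novel
        if t % 20 == 0:                      # mutation interval (assumed)
            parent_env, parent_agent = random.choice(population)
            child_env = mutate(parent_env)   # perturb the parameter vector
            score = evaluate(parent_agent, child_env)
            novel = all(distance(child_env, e) > novelty_eps for e, _ in population)
            if min_score <= score <= max_score and novel:
                # the child agent starts as a copy of its parent's agent
                population.append((child_env, parent_agent.copy()))
        # 2) optimize: every agent takes a few ES steps on its own environment
        for env, agent in population:
            es_step(agent, env)
        # 3) transfer: replace an incumbent if another agent beats it on its env
        for i, (env, incumbent) in enumerate(population):
            challengers = [a for _, a in population]
            best = max(challengers, key=lambda a: evaluate(a, env))
            if evaluate(best, env) > evaluate(incumbent, env):
                population[i] = (env, best.copy())
    return population
```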
I briefly want to dip into what the evolution strategy does, just so you know, because you might be familiar with the classic REINFORCE algorithm. In policy gradient methods, you scale the gradient on the parameters of your neural network — your policy network — according to the reward: if you took an action and got a high reward, you make your network output that action more. Evolution strategies are a different way of doing basically the same thing: you take your current parameters and spawn a number of noisy versions of them, you evaluate each noisy version, and then you adjust your parameters in the direction of the versions that performed well. So you are here with your parameters, you create a bunch of noisy copies, and if, say, these two performed really well, you move your parameters toward those two. That's what this formula says: this is the evaluation of the noisy version, and this is the noise that produced it, so if this number is high, you adjust your parameters in that direction. It's a pretty neat method, especially if you can't backpropagate through your policy. So this is the ES step, but you can think of it as just another RL optimization step.
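Here is a minimal NumPy sketch of that update in its standard form; the sample count, noise scale `sigma`, learning rate `alpha`, and the `fitness` function are assumptions.

```python
import numpy as np

def es_update(theta, fitness, n_samples=64, sigma=0.1, alpha=0.01):
    """One evolution-strategies step: nudge theta toward well-scoring noise."""
    noise = np.random.randn(n_samples, theta.size)   # one perturbation per sample
    rewards = np.array([fitness(theta + sigma * eps) for eps in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize
    # weight each noise vector by how well its perturbed parameters scored
    grad = noise.T @ rewards / (n_samples * sigma)
    return theta + alpha * grad
```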
Alright, so they do various experiments to show that this actually has merit. I've already shown you that if you take the same environments and try to solve them directly with the evolution strategy, it does not succeed, because of the problems we discussed before. Now, the comparison is a bit unfair, because with POET you can't target a particular environment: the environments constantly change, you constantly mutate them, you never know where it's going — it's not directed. So if your goal is to solve one particular environment, you cannot do it with POET; you can only hope that one of the agents that comes out performs well. And I believe the environments they test on here are ones that appeared during the POET run, so it's a somewhat unfair comparison: the POET agent comes from an environment that POET itself generated in its mutation tree while building up the curriculum, while the poor ES algorithm is tasked with solving that particular environment from scratch. So always keep in mind: the baseline can have a goal, POET can't — that's the drawback. But as you can see, POET gets very high scores where ES, the classic algorithm, completely fails. They also investigate the importance of transfer learning. They compare to classic curriculum learning algorithms, where you continuously build up the difficulty of the environments, but in a goal-directed way: as I said, if your target environment has a gap and one or two high stumps, you start out flat, then maybe build in a small gap and a small stump, and so on, until you reach the target. It's very much goal-directed, but it doesn't have the population-with-transfer-learning aspect of POET. When they compare this, you can see — the red one (sorry, I colored it blue by mistake) — the red one is what POET was able to solve. These are the five dimensions of the parameters, and the further to the outside, the harder the environment. For the same target, the blue one is what the curriculum learning algorithm managed: the hardest environment it was able to solve while trying to build up to the target. Again, the comparison is somewhat unfair, because we're starting from an environment that POET has already solved and then trying to build our way up to it with the classic algorithm — again comparing a non-goal-directed process, something that just happened, to a goal-directed process that needs to get this particular environment to work. In any case, at some point the curriculum learning algorithm fails — say, at an environment that has a moderate gap but no stump — and that would be the blue line here. They do five runs and plot them, and you can see that every time, the classic curriculum learning algorithm only manages to solve a much less challenging environment than the one POET achieved, even though it's trying to reach exactly that one. And here they show the difference: if the target environment is merely challenging, the curriculum learning algorithm can solve it somewhat, so the distance is close to zero; but as the targets get more and more challenging, the distance between POET and the classic algorithm becomes larger and larger. They also give some examples of what this transfer learning does. They have a parent environment where the agent just kind of slouches forward on the ground, and then a child environment whose mutation introduced little stumps. The child, because the stumps are small, stumbles across them and learns to lift its leg — and it transfers this skill back to the parent at a later iteration, which is pretty cool, and the parent gets even better as a result of that transfer. So we have two transfer events here that mutually help these agents; remember, both the parent and the child are continuously trained as the process goes on. Finally, they run POET without the transfer learning part, and they see that POET without transfer is able to solve some of the very challenging problems, but never reaches the extremely challenging stage — that's their argument for why transfer learning is necessary. In total, I would say this is a cool algorithm. It has many, many hyperparameters, and experimental results with that many hyperparameters need to be taken with a grain of salt, because it's always possible that the authors just haven't put as much effort into tuning their comparisons as into their own method. Alright, with that, I wish you a nice day. Check out the paper — they have lots of descriptions — check out the blog post, where they have animations, and the YouTube video. And with that, bye bye.
[ { "start": 0, "end": 6.88, "text": " Alright, so what you're seeing here are solutions found to this bipedal walker problem by a" }, { "start": 6.88, "end": 10.52, "text": " new algorithm called PoET." }, { "start": 10.52, "end": 16.84, "text": " So as you might guess, the challenge is to keep this little thing here walking to the" }, { "start": 16.84, "end": 21.72, "text": " right as far as you can while it encounters various obstacles." }, { "start": 21.72, "end": 30.92, "text": " And it is and remains a challenging reinforcement learning problem to have an agent learn to" }, { "start": 30.92, "end": 35.96, "text": " overcome various obstacles and walk well in different environments." }, { "start": 35.96, "end": 41.2, "text": " So the paper we're going to look at is called PoET." }, { "start": 41.2, "end": 46.08, "text": " It's by Uber Engineering." }, { "start": 46.08, "end": 52.96, "text": " And the full pronunciation is the Paired Open-Ended Trail Blazer, endlessly generating increasingly" }, { "start": 52.96, "end": 57.96, "text": " complex and diverse learning environments and their solutions by Roy Wang, Joel Lehmann," }, { "start": 57.96, "end": 64.56, "text": " Jeff Klun and Kenneth O. Stanley, as I said from Uber AI Labs." }, { "start": 64.56, "end": 70.48, "text": " So as you already saw, the challenge they take on is this bipedal walker problem." }, { "start": 70.48, "end": 75.6, "text": " Now their method is very general and not limited to this problem, but this is the problem that" }, { "start": 75.6, "end": 76.6, "text": " they focus on." }, { "start": 76.6, "end": 83.67999999999999, "text": " I'm going to jump some of the explanations here and dig right into the problem." }, { "start": 83.67999999999999, "end": 86.03999999999999, "text": " As you can see, the problem is the following." }, { "start": 86.03999999999999, "end": 91.36, "text": " You have this thing here, which is the walker, and it has two legs and specifically it has" }, { "start": 91.36, "end": 93.03999999999999, "text": " four joints." }, { "start": 93.03999999999999, "end": 97.56, "text": " So the four joints are here too, and here too." }, { "start": 97.56, "end": 102.56, "text": " And you can give torque on all of the four joints." }, { "start": 102.56, "end": 109.68, "text": " So it's basically a four output problem." }, { "start": 109.68, "end": 112.72, "text": " And you do have sensors as input." }, { "start": 112.72, "end": 116.26, "text": " So the inputs, I believe, is a LIDAR." }, { "start": 116.26, "end": 118.84, "text": " So the LIDAR is this red line you see here." }, { "start": 118.84, "end": 123.48, "text": " I think it has 16 of those in various angles." }, { "start": 123.48, "end": 129.88, "text": " And also it has pressure detection on the feet, I believe, to see whether or not they" }, { "start": 129.88, "end": 132.76, "text": " are in contact with the ground." }, { "start": 132.76, "end": 143.72, "text": " And it might also have a gyroscope in that tells you which angle with respect to the" }, { "start": 143.72, "end": 146.68, "text": " ground the head is." }, { "start": 146.68, "end": 151.07999999999998, "text": " So you have various sensors on these things, and you're able to basically control what" }, { "start": 151.07999999999998, "end": 153.68, "text": " the legs are doing." }, { "start": 153.68, "end": 161.76000000000002, "text": " And your goal is to make this go as far to the right and as fast as possible." 
}, { "start": 161.76000000000002, "end": 170.06, "text": " You see the reward down here is negative 100 if the robot falls over." }, { "start": 170.06, "end": 173.52, "text": " That means if the head hits the ground." }, { "start": 173.52, "end": 178.20000000000002, "text": " And then it is 130 times delta x." }, { "start": 178.2, "end": 184.64, "text": " That's how far you go to the right minus the whole angle." }, { "start": 184.64, "end": 187.39999999999998, "text": " And the whole angle, as I said, is this angle here." }, { "start": 187.39999999999998, "end": 190.48, "text": " So you want to keep it as stable as possible." }, { "start": 190.48, "end": 197.07999999999998, "text": " Because if there's a difference in the angle per step, then you get penalized." }, { "start": 197.07999999999998, "end": 200.94, "text": " And also you get penalized for each torque you apply." }, { "start": 200.94, "end": 209.64, "text": " So you want to kind of apply minimal force on the joints in order to go very far." }, { "start": 209.64, "end": 216.35999999999999, "text": " But by far the most important point is to go to the right as far and as fast as you" }, { "start": 216.35999999999999, "end": 217.36, "text": " can." }, { "start": 217.36, "end": 220.56, "text": " There is an end here somewhere." }, { "start": 220.56, "end": 227.36, "text": " And if you reach it, you get a score that is above 230." }, { "start": 227.36, "end": 233.24, "text": " They choose the limit of 230 here to determine." }, { "start": 233.24, "end": 238.12, "text": " So if the agent gets 230 or more, then it has solved the environment." }, { "start": 238.12, "end": 240.72000000000003, "text": " That's what they claim." }, { "start": 240.72000000000003, "end": 242.04000000000002, "text": " That's from experience." }, { "start": 242.04000000000002, "end": 244.76000000000002, "text": " So as you see, the environment has various obstacles here." }, { "start": 244.76000000000002, "end": 251.24, "text": " There are holes that you can fall into that you need to jump or step over." }, { "start": 251.24, "end": 253.76000000000002, "text": " There are these kind of stumps here." }, { "start": 253.76000000000002, "end": 255.86, "text": " They can be of various height." }, { "start": 255.86, "end": 259.36, "text": " So this is a bit shorter and this is a bit longer." }, { "start": 259.36, "end": 262.28000000000003, "text": " And the general terrain has a roughness." }, { "start": 262.28000000000003, "end": 268.7, "text": " And this can go to very rough from very smooth." }, { "start": 268.7, "end": 273.04, "text": " So this is a parameterized environment." }, { "start": 273.04, "end": 280.88, "text": " And obviously they are able to generate these environments from parameters." }, { "start": 280.88, "end": 288.71999999999997, "text": " And the goal now is to have an agent that walks well in any environment that you can" }, { "start": 288.71999999999997, "end": 289.71999999999997, "text": " think of." }, { "start": 289.71999999999997, "end": 295.48, "text": " Right, so here on the left you see this is very challenging down the stairs." }, { "start": 295.48, "end": 301.8, "text": " This also isn't too easy because there is a gap here." }, { "start": 301.8, "end": 306.96, "text": " And there are five parameters of these environments." }, { "start": 306.96, "end": 310.32, "text": " So there is the general roughness of the terrain." }, { "start": 310.32, "end": 314.2, "text": " That means how many hills it has and how fast they are coming." 
}, { "start": 314.2, "end": 319.4, "text": " There is the stump lower bound and stump upper bound, I believe." }, { "start": 319.4, "end": 322.52, "text": " So how high the stumps are." }, { "start": 322.52, "end": 326.84, "text": " And also how long the gaps are." }, { "start": 326.84, "end": 332.76, "text": " And with these parameters you control how difficult an environment is." }, { "start": 332.76, "end": 342.03999999999996, "text": " So the straightforward thing to do is simply to sample environments and have a reinforcement" }, { "start": 342.03999999999996, "end": 344.44, "text": " learn approach to this." }, { "start": 344.44, "end": 347.2, "text": " And that usually doesn't work." }, { "start": 347.2, "end": 353.8, "text": " I already want to see this without having talked about what the algorithm is." }, { "start": 353.8, "end": 358.24, "text": " This is the approach where you try this thing." }, { "start": 358.24, "end": 360.5, "text": " It's called evolution strategies." }, { "start": 360.5, "end": 364.68, "text": " But you can think of it as just a straightforward optimization procedure." }, { "start": 364.68, "end": 371.4, "text": " So there is an agent and there is an environment and you are trying to solve the environment" }, { "start": 371.4, "end": 374.76, "text": " using just straightforward optimization." }, { "start": 374.76, "end": 380.72, "text": " Now the evolution strategies are not your classic algorithm but you can compare it to" }, { "start": 380.72, "end": 381.72, "text": " it." }, { "start": 381.72, "end": 386.08, "text": " It's just that these people, they like the more, I have a feeling they like the more" }, { "start": 386.08, "end": 390.96, "text": " esoteric learning algorithms." }, { "start": 390.96, "end": 399, "text": " In any case, you see in these environments large gap, rough surface and so on." }, { "start": 399, "end": 402.68, "text": " These are supposed to be the platinum figures." }, { "start": 402.68, "end": 408.88, "text": " So these two environments and also these environments here." }, { "start": 408.88, "end": 414.91999999999996, "text": " The evolution strategy, so the classic approach if you just straight forward optimize, they" }, { "start": 414.92, "end": 424.6, "text": " get very low scores on average, whereas poet gets here very high scores above the 230 threshold." }, { "start": 424.6, "end": 426.28000000000003, "text": " So what's happening?" }, { "start": 426.28000000000003, "end": 434.34000000000003, "text": " If you're trying to just solve these environments from scratch, you basically don't really have" }, { "start": 434.34000000000003, "end": 437.02000000000004, "text": " a big chance of solving them." }, { "start": 437.02000000000004, "end": 441.68, "text": " Because let's say you're here and you're trying to move to the right, you know, you might" }, { "start": 441.68, "end": 447.72, "text": " learn how to do this and you see this from scratch solution actually manages to get to" }, { "start": 447.72, "end": 448.72, "text": " the right." }, { "start": 448.72, "end": 453.84000000000003, "text": " But then as soon as you reach this, you're in this gap and you just fall down the gap" }, { "start": 453.84000000000003, "end": 457.82, "text": " because all you've learned so far is how to move right." }, { "start": 457.82, "end": 464.74, "text": " So what you would need to do is you would need to plan ahead like what poet does." }, { "start": 464.74, "end": 466.24, "text": " You need to see that there is a gap." 
}, { "start": 466.24, "end": 472.92, "text": " You need to plan ahead and already lift up a leg in order to then step over the gap here" }, { "start": 472.92, "end": 476, "text": " and then do a little jump right here." }, { "start": 476, "end": 481.24, "text": " And this sequence of action, this kind of planning ahead, it is very difficult to learn" }, { "start": 481.24, "end": 488.06, "text": " this for a classic RL algorithm because you basically get reward for everything you do." }, { "start": 488.06, "end": 490.76, "text": " So initially you get reward for moving to the right." }, { "start": 490.76, "end": 494.72, "text": " So that's 10 if you reach here, another 10 if you reach here." }, { "start": 494.72, "end": 502.8, "text": " And so there is another 10 if you reach here and another 10 if you reach here." }, { "start": 502.8, "end": 508.12, "text": " Whereas if you lift up your leg, that's like minus five because now this you've changed" }, { "start": 508.12, "end": 512.44, "text": " this angle and we saw this is negative reward, right?" }, { "start": 512.44, "end": 517.24, "text": " So a classic optimization algorithm will always fall into the hole because that is where you" }, { "start": 517.24, "end": 519.6800000000001, "text": " get the immediate reward." }, { "start": 519.6800000000001, "end": 524.5600000000001, "text": " Whereas you'd have to you'd have to do a sequence of action that doesn't give you a reward right" }, { "start": 524.56, "end": 528.7199999999999, "text": " now, but it gives you more reward later." }, { "start": 528.7199999999999, "end": 534.8399999999999, "text": " And in order to learn this, we need a kind of a better algorithm that just straightforward" }, { "start": 534.8399999999999, "end": 536.4399999999999, "text": " optimization." }, { "start": 536.4399999999999, "end": 542.68, "text": " So maybe I can explain this if you have a maze, here is the start and here is the goal" }, { "start": 542.68, "end": 547.66, "text": " and there is like walls and the walls are something like this." }, { "start": 547.66, "end": 550.3599999999999, "text": " What you need to do is go around here." }, { "start": 550.3599999999999, "end": 554.52, "text": " But what a classic optimization algorithm does is always like goes here because that's" }, { "start": 554.52, "end": 557.12, "text": " ever so closer to the goal." }, { "start": 557.12, "end": 563.6, "text": " And then it just gets stuck because it can't fathom that it needs to go around here." }, { "start": 563.6, "end": 567.96, "text": " So it needs to go farther away before it gets closer." }, { "start": 567.96, "end": 574.0799999999999, "text": " So these people we've talked about this before in like open ended learning novelty search." }, { "start": 574.0799999999999, "end": 581.24, "text": " What you would want to do is you would want to gradually build up solutions that can explore" }, { "start": 581.24, "end": 589.28, "text": " the space like to go here, go here, go here and basically build up these solutions." }, { "start": 589.28, "end": 595.24, "text": " And there are two components to what this poet algorithm does." }, { "start": 595.24, "end": 602.6800000000001, "text": " So the first component is curriculum learning." }, { "start": 602.6800000000001, "end": 606.62, "text": " Curriculum learning." }, { "start": 606.62, "end": 608.42, "text": " What does curriculum learning mean?" 
}, { "start": 608.42, "end": 615.04, "text": " Curriculum learning means that you start off with easy tasks and you increasingly build" }, { "start": 615.04, "end": 620.28, "text": " up more and more and more complex tasks." }, { "start": 620.28, "end": 627.68, "text": " So let's say I have an environment here and I'm going to draw and at the beginning we" }, { "start": 627.68, "end": 632.8399999999999, "text": " just kind of start off with this flat surface right and here is our little walker right" }, { "start": 632.8399999999999, "end": 633.9599999999999, "text": " here." }, { "start": 633.96, "end": 644, "text": " And we'll just train it to move right on that and that should be doable with kind of a classic" }, { "start": 644, "end": 645.6800000000001, "text": " approach." }, { "start": 645.6800000000001, "end": 649.94, "text": " And then we gradually move to more difficult environments." }, { "start": 649.94, "end": 653.4000000000001, "text": " So maybe we'll make it a bit more rough right." }, { "start": 653.4000000000001, "end": 657.88, "text": " And an agent that can already walk to the right already kind of has think of it as a" }, { "start": 657.88, "end": 661.36, "text": " pre-training in like NLP." }, { "start": 661.36, "end": 666.92, "text": " You can then get more and more challenging and then maybe at some point you can build" }, { "start": 666.92, "end": 670.64, "text": " in a gap right." }, { "start": 670.64, "end": 675.44, "text": " So you build in one of these gaps and now it already knows how to move to the right" }, { "start": 675.44, "end": 682.32, "text": " and now it might actually learn to jump a small gap right if you make it small at the" }, { "start": 682.32, "end": 684.5600000000001, "text": " beginning not like this one down here." }, { "start": 684.5600000000001, "end": 686.5600000000001, "text": " There's a very large gap." }, { "start": 686.56, "end": 692.7199999999999, "text": " But if you make it small by accident it might stumble over it and then learn and continuously" }, { "start": 692.7199999999999, "end": 695.4399999999999, "text": " how to master the gap." }, { "start": 695.4399999999999, "end": 698.2399999999999, "text": " So this is the curriculum learning approach." }, { "start": 698.2399999999999, "end": 703.9599999999999, "text": " It means that from environment to environment you get harder, harder and harder challenges." }, { "start": 703.9599999999999, "end": 711.1199999999999, "text": " So first flat then more rough then more rough with a gap and so on." }, { "start": 711.12, "end": 721.36, "text": " The second approach, the second ingredient to POET is what they call stepping stone learning" }, { "start": 721.36, "end": 726, "text": " or transfer learning or things like this." }, { "start": 726, "end": 731.82, "text": " And that's where you kind of have to think of this not as a single agent optimizing but" }, { "start": 731.82, "end": 734.64, "text": " as a population of agents." }, { "start": 734.64, "end": 738.1800000000001, "text": " So let's say you do this curriculum learning right." }, { "start": 738.18, "end": 744.8, "text": " And you're getting fairly well here at rough terrains right." }, { "start": 744.8, "end": 746.04, "text": " More and more rough terrains." }, { "start": 746.04, "end": 751.12, "text": " But in parallel you also have a second optimization procedure." 
}, { "start": 751.12, "end": 761.76, "text": " You also start out kind of flat but with this thing you go as we said before small gap you" }, { "start": 761.76, "end": 770.16, "text": " keep it flat but you just increase the number of gaps here right." }, { "start": 770.16, "end": 776.96, "text": " Whereas over here you just keep making the terrain rougher and rougher." }, { "start": 776.96, "end": 786.4399999999999, "text": " So what the philosophy is that an agent that might be able to master this rougher terrain" }, { "start": 786.44, "end": 791.6, "text": " it might actually that skill because here you this kind of this kind of looks like a" }, { "start": 791.6, "end": 793.8800000000001, "text": " gap here." }, { "start": 793.8800000000001, "end": 802.6800000000001, "text": " The skill of hopping over this gap here might actually transfer to the environment over" }, { "start": 802.6800000000001, "end": 809.4000000000001, "text": " here where you do have a proper you know a gap in the environment or the skill that you" }, { "start": 809.4000000000001, "end": 813.32, "text": " learn from an environment where you have one of these stumps right." }, { "start": 813.32, "end": 821.9200000000001, "text": " So here let's draw in one of these stumps where you have to go over and if you have" }, { "start": 821.9200000000001, "end": 830.72, "text": " a walker that can successfully walk over this that skill now might transfer over here in" }, { "start": 830.72, "end": 836.88, "text": " order to get over this over this peaky terrain here." }, { "start": 836.88, "end": 849.8, "text": " So the idea of poet is to start off with a generic flat very easy environment and then" }, { "start": 849.8, "end": 859.28, "text": " spawn new ones so you want to spawn new environments in kind of a hereditary way." }, { "start": 859.28, "end": 869.68, "text": " So this one might get a bit rougher this one might include this and this one might include" }, { "start": 869.68, "end": 876.6, "text": " a gap or something like this and then again you want to spawn new environments and more" }, { "start": 876.6, "end": 887.72, "text": " rough more rough more rough with a stump here and this one retains the gap sorry and um" }, { "start": 887.72, "end": 897.08, "text": " this one now gets two gaps and so on and you want to continuously train these and then" }, { "start": 897.08, "end": 902.52, "text": " always you want to check whether or not the skill that you learn over here might actually" }, { "start": 902.52, "end": 905.26, "text": " transfer to anyone over here." }, { "start": 905.26, "end": 914.48, "text": " So you get this tree of this continuous tree of solutions and once you improve on one branch" }, { "start": 914.48, "end": 920.32, "text": " this might actually be good on another branch right they always make the comparison to let's" }, { "start": 920.32, "end": 926.88, "text": " say biological evolution where a strategy that works over here for birds is all of a" }, { "start": 926.88, "end": 935, "text": " sudden can be cross adopted by mammals for an entirely different problem but the same" }, { "start": 935, "end": 938.6, "text": " skill might be valuable." }, { "start": 938.6, "end": 948.64, "text": " Yeah so this this is basically the two ingredients of poet and now I want to show you the complete" }, { "start": 948.64, "end": 951.88, "text": " poet algorithm." 
}, { "start": 951.88, "end": 960.64, "text": " So what does it do you start off with an initial environment right and in poet every environment" }, { "start": 960.64, "end": 969.9399999999999, "text": " is paired with an agent so there is one agent per environment right so for the time steps" }, { "start": 969.9399999999999, "end": 976.68, "text": " what you do is first of all you go through your environments and you mutate them and" }, { "start": 976.68, "end": 982.48, "text": " we already seen these environments they can be generated from a parameter vector so we" }, { "start": 982.48, "end": 994.76, "text": " have five numbers right how rough how stumpy and how wide the gaps are let's say we have" }, { "start": 994.76, "end": 999.6800000000001, "text": " three numbers to two and this might be one this might be two this might be five right" }, { "start": 999.6800000000001, "end": 1006.64, "text": " so what you want to do is you want to mutate them right you want to spawn children and" }, { "start": 1006.64, "end": 1013.4399999999999, "text": " each of these parameters has a chance of mutating this might be one three five and this environment" }, { "start": 1013.4399999999999, "end": 1025.92, "text": " might be one four six and this one might be two two five right you spawn new ones you" }, { "start": 1025.92, "end": 1031.52, "text": " already see that the requirement here is that you can actually have environments that are" }, { "start": 1031.52, "end": 1038.92, "text": " procedurally generated and mutated like this where a small mutation probably is going to" }, { "start": 1038.92, "end": 1050.52, "text": " lead to a small change in the environment in any case you mutate them and then you you" }, { "start": 1050.52, "end": 1061.84, "text": " want to let's you want to optimize your eight each agent so each of these environments is" }, { "start": 1061.84, "end": 1069.82, "text": " paired with a new agent that always tries to solve that particular environment so now" }, { "start": 1069.82, "end": 1075.74, "text": " within one environment you simply do your classic optimization we already saw here the" }, { "start": 1075.74, "end": 1084.16, "text": " evolution strategy is akin to a classic optimization algorithm from reinforcement learning all" }, { "start": 1084.16, "end": 1090.36, "text": " right so each agent you optimize for a couple of steps right not fully every time but for" }, { "start": 1090.36, "end": 1097, "text": " a couple of steps so each agent including the one in the original environment each agent" }, { "start": 1097, "end": 1104.36, "text": " is continuously trained on its environment throughout the process of course you like" }, { "start": 1104.36, "end": 1110.2199999999998, "text": " you have to be you have bounded computation so you need to drop out the very old ones" }, { "start": 1110.2199999999998, "end": 1117.32, "text": " but in principle continuously as all of this goes on all the agents are always trained" }, { "start": 1117.32, "end": 1122.8799999999999, "text": " on their environments so the agent here this Walker will always try to solve this particular" }, { "start": 1122.8799999999999, "end": 1128.6999999999998, "text": " environment and the Walker here that is now newly generated when the environment is generated" }, { "start": 1128.7, "end": 1135.28, "text": " will only try to solve this particular environment throughout the whole algorithm right and then" }, { "start": 1135.28, "end": 1144.88, "text": " all right so you do mutations you spawn new ones and 
then you do a couple of steps in" }, { "start": 1144.88, "end": 1153.04, "text": " optimization right and yes step and then you do this transfer attempt right what you want" }, { "start": 1153.04, "end": 1159.32, "text": " to do is you want to evaluate all the candidates on all the environments in principle you can" }, { "start": 1159.32, "end": 1167.6, "text": " you can cut this down but in principle you want to go through the environments and say" }, { "start": 1167.6, "end": 1174.32, "text": " okay this environment right here I'm going to evaluate all of the other agents in this" }, { "start": 1174.32, "end": 1179.5, "text": " environment you can do this in a couple of different ways where you just straight up" }, { "start": 1179.5, "end": 1186.52, "text": " try them or try to optimize them for a few steps to see whether they can be adapted easily" }, { "start": 1186.52, "end": 1193.52, "text": " to that environment but ultimately you have to come up with a criterion to say for each" }, { "start": 1193.52, "end": 1199.6, "text": " agent is the agent better or worse than the agent that is continuously trained on this" }, { "start": 1199.6, "end": 1208.5, "text": " environment if it's worse then you keep this one if if anyone is better then you transfer" }, { "start": 1208.5, "end": 1215.8, "text": " that better one to replace this one right and you basically copy it over to this new" }, { "start": 1215.8, "end": 1220.7, "text": " environment and that's where this transfer learning comes in so you're continuously trying" }, { "start": 1220.7, "end": 1228.08, "text": " all the agents on all the environments and if they are better you transfer them right" }, { "start": 1228.08, "end": 1235.72, "text": " so here you say if the environment score is better than the one that you have you transfer" }, { "start": 1235.72, "end": 1245.88, "text": " it all right now there is a lot hidden here for example in this mutate environment step" }, { "start": 1245.88, "end": 1252.92, "text": " they do check whether or not the new mutated environments are not too hard and not too" }, { "start": 1252.92, "end": 1262.08, "text": " easy and that basically means whether or not the agents can solve them but not solve them" }, { "start": 1262.08, "end": 1268.9199999999998, "text": " too easily they also check whether the environments are enough novel so you need a couple of checks" }, { "start": 1268.9199999999998, "end": 1279.04, "text": " here you solvable and that that means not too easy and not too hard right so they need" }, { "start": 1279.04, "end": 1285.96, "text": " to pass like a certain score but they need to be kind of solvable to a to an okay score" }, { "start": 1285.96, "end": 1293.04, "text": " so there's a score range and also novel they check whether or not the out the mutated environments" }, { "start": 1293.04, "end": 1299.72, "text": " are novel enough and I believe they just do this by calculating the the distance between" }, { "start": 1299.72, "end": 1307.3600000000001, "text": " two environments in terms of their parameter vectors so to determine whether or not these" }, { "start": 1307.3600000000001, "end": 1313.76, "text": " are novel and sorry I don't mean the distance just between two but the distance of all of" }, { "start": 1313.76, "end": 1323.44, "text": " the ones you've seen so far so if we go to original very beautiful drawing here where" }, { "start": 1323.44, "end": 1329.4, "text": " is my tree if you create a new environment let's say you create a new environment right" }, { 
"start": 1329.4, "end": 1337.56, "text": " here then you want to check it against all environments you've seen so far to determine" }, { "start": 1337.56, "end": 1342.96, "text": " whether or not it is new or not so you want to create the distance to all of these and" }, { "start": 1342.96, "end": 1348.1200000000001, "text": " if you have enough distance to your nearest neighbors then you are novel and that's kind" }, { "start": 1348.1200000000001, "end": 1356.64, "text": " of how they they determine whether environment is new all right so that's basically the poet" }, { "start": 1356.64, "end": 1363.72, "text": " algorithm you continuously create new environments by mutation you ensure that they are solvable" }, { "start": 1363.72, "end": 1371.54, "text": " not hard enough sorry not too hard but hard enough ensure that they are novel and then" }, { "start": 1371.54, "end": 1380.72, "text": " you optimize each agent for its own environment continuously as the process goes on and so" }, { "start": 1380.72, "end": 1385.76, "text": " it's not I want to stress this it's not only the frontier so you're not only looking at" }, { "start": 1385.76, "end": 1391.44, "text": " the newest generation but you're always looking at all of the generation of the because the" }, { "start": 1391.44, "end": 1397.52, "text": " older ones while the environments are easier they have been optimized for longer on this" }, { "start": 1397.52, "end": 1403.16, "text": " environment so the skills might be very handy so you always want to look at your entire" }, { "start": 1403.16, "end": 1411.96, "text": " population and then you do crucially you do this these transfer attempts so that's the" }, { "start": 1411.96, "end": 1418.48, "text": " poet algorithm there is a lot hidden here and I kind of want to stress that just if" }, { "start": 1418.48, "end": 1427.04, "text": " you just look at the amount of hyper parameters there is so many hyper parameters in this" }, { "start": 1427.04, "end": 1433.08, "text": " how much you transfer how much you mutate how many steps you do each of these subroutines" }, { "start": 1433.08, "end": 1443.08, "text": " here has a billion hyper parameters and learning rates and and so on so to me that's a that" }, { "start": 1443.08, "end": 1449.3999999999999, "text": " is kind of if I look at this algorithm I am very scared if I attempted to do something" }, { "start": 1449.4, "end": 1457.64, "text": " like this myself it's it's going to be a long and hard thing to evaluate all of these different" }, { "start": 1457.64, "end": 1465.1200000000001, "text": " hyper parameters that you have to do shortly want to dip into what the evolution strategy" }, { "start": 1465.1200000000001, "end": 1473.68, "text": " does just so you know because you just might be familiar with your classic your classic" }, { "start": 1473.68, "end": 1482.72, "text": " reinforce algorithm so in policy gradient methods what you do is you scale your parameters" }, { "start": 1482.72, "end": 1492.88, "text": " of your neural network which is you can if this is your policy then your policy network" }, { "start": 1492.88, "end": 1501.76, "text": " here you want to scale the gradient according to your reward so in classic reinforcement" }, { "start": 1501.76, "end": 1507.04, "text": " learning this here would be the reward you got which basically means if you did an action" }, { "start": 1507.04, "end": 1514.44, "text": " and you got higher reward you want to make your network do that action more right here" }, { "start": 
1514.44, "end": 1521.92, "text": " in evolution strategies what you do is you spawn it's a different way of doing the same" }, { "start": 1521.92, "end": 1530.9, "text": " thing basically you spawn different environments and sorry you spawn you spawn different agents" }, { "start": 1530.9, "end": 1537.68, "text": " so you have your current parameters and you want to spawn a number of noisy versions of" }, { "start": 1537.68, "end": 1545.76, "text": " those parameters and then you want to evaluate each one right and now you want to adjust" }, { "start": 1545.76, "end": 1553.74, "text": " your parameters into the direction of that particular so basically you are here with" }, { "start": 1553.74, "end": 1564.16, "text": " your parameters you create a bunch of noisy versions of it right and let's say these two" }, { "start": 1564.16, "end": 1571.84, "text": " performed really well you want to adjust your parameters into the direction of those two" }, { "start": 1571.84, "end": 1579.36, "text": " right that's basically what this says so this is the noisy version and then this is the" }, { "start": 1579.36, "end": 1586.3999999999999, "text": " noise that produced the noisy version so if this is high if this number here is high" }, { "start": 1586.3999999999999, "end": 1594.56, "text": " then you will adjust your parameters into that direction it's a fairly cool way if you" }, { "start": 1594.56, "end": 1603.52, "text": " especially if you can't back prop through your policy as it's pretty neat thing so this" }, { "start": 1603.52, "end": 1614, "text": " is the ES step algorithm but you can think of it just as a RL algorithm all right so" }, { "start": 1614, "end": 1619.28, "text": " they do various experiments to show that this actually has merits I've already shown you" }, { "start": 1619.28, "end": 1626.28, "text": " if you're trying if you take the same environments and try to solve them directly by this evolution" }, { "start": 1626.28, "end": 1633, "text": " step then it will not succeed because of the problems we've discussed before now the comparison" }, { "start": 1633, "end": 1641.04, "text": " is a bit unfair because um of course these environments for poet poet the problem here" }, { "start": 1641.04, "end": 1646, "text": " is you can't have it solve a particular environments because the environments they constantly change" }, { "start": 1646, "end": 1651.32, "text": " right you constantly mutate the environments you never know where it's going it's not directed" }, { "start": 1651.32, "end": 1657.26, "text": " so if your goal is to solve a particular environment you cannot do it with poet you can hope that" }, { "start": 1657.26, "end": 1662.48, "text": " the agent that comes out will perform well right you can do something like this but I" }, { "start": 1662.48, "end": 1672, "text": " believe I believe that these environments that they test on here are ones that appeared" }, { "start": 1672, "end": 1680.1200000000001, "text": " during the poet run right so it's kind of an unfair comparison I feel to to do this" }, { "start": 1680.1200000000001, "end": 1685.64, "text": " on an environment that you know this environment this poet agent actually comes from an environment" }, { "start": 1685.64, "end": 1692.44, "text": " that poet has generated in its all mutation tree curriculum while building it up and then" }, { "start": 1692.44, "end": 1699.56, "text": " the poor ES algorithm is simply tasked with solving that particular environment from scratch" }, { "start": 1699.56, "end": 
1706.76, "text": " so yes always keep in mind this is this can have a goal this doesn't have a goal right" }, { "start": 1706.76, "end": 1713.8, "text": " that's kind of the drawback but as you can see poet does get super high scores whereas" }, { "start": 1713.8, "end": 1722.72, "text": " es the classic algorithm completely fails and they also investigate the importance of transfer" }, { "start": 1722.72, "end": 1733.2, "text": " learning so they compare to like a classic classic curriculum learning algorithms there" }, { "start": 1733.2, "end": 1738.44, "text": " are curriculum learning algorithms where you can continuously try to build up the difficulties" }, { "start": 1738.44, "end": 1744.04, "text": " of these environments but you also do it in a goal-directed way so as I said if you have" }, { "start": 1744.04, "end": 1751.16, "text": " an environment that has like a gap and then a stump a high stump or two high stumps you" }, { "start": 1751.16, "end": 1758.68, "text": " want to start out flat and then maybe build in a small gap and a small stump and so on" }, { "start": 1758.68, "end": 1764.96, "text": " until you're here it's very much goal-directed but it doesn't have this kind of population" }, { "start": 1764.96, "end": 1774.64, "text": " with transfer learning aspect of poet so if they compare this you can see here the red" }, { "start": 1774.64, "end": 1785.1200000000001, "text": " the red the red one sorry colored it blue stupidly the red one is whatever poet was" }, { "start": 1785.1200000000001, "end": 1791.96, "text": " able to solve now these are the five dimensions of the parameters and the more on the outside" }, { "start": 1791.96, "end": 1802.72, "text": " it is the harder the environment and for the same for the same environment the blue one" }, { "start": 1802.72, "end": 1808.24, "text": " is what the curriculum learning algorithm has managed so it's the best environment the" }, { "start": 1808.24, "end": 1815.24, "text": " curriculum learning algorithm has been able to solve while trying to build up to the so" }, { "start": 1815.24, "end": 1821.56, "text": " if we take this here is the environment that poet solved again the comparison is kind of" }, { "start": 1821.56, "end": 1826.12, "text": " unfair because we're starting out from an environment that poet has already solved and" }, { "start": 1826.12, "end": 1833.28, "text": " then we're trying to build our way up to it with the classic algorithm by basically again" }, { "start": 1833.28, "end": 1840.9199999999998, "text": " this is it's comparing a non goal-directed thing something that just happened to a goal-directed" }, { "start": 1840.9199999999998, "end": 1848.8, "text": " process that needs to get this particular environment to work in any case at some point" }, { "start": 1848.8, "end": 1853.76, "text": " this curriculum learning algorithm will fail like let's say that's here that's the environment" }, { "start": 1853.76, "end": 1861.8, "text": " that has somewhat of a gap but no stump right and that would be the the blue line here they" }, { "start": 1861.8, "end": 1868.76, "text": " do like five runs and they plot them here and you can see every time the classic curriculum" }, { "start": 1868.76, "end": 1874.48, "text": " learning algorithm manages to only solve a much much less challenging environment than" }, { "start": 1874.48, "end": 1884.08, "text": " the poet algorithm achieved even though it's it's trying to reach exactly that right and" }, { "start": 1884.08, "end": 1889.08, "text": " so here 
they show the difference so if you just the classified environment if it's just" }, { "start": 1889.08, "end": 1895.24, "text": " challenging then the classic algorithm the curriculum learning algorithm can solve it" }, { "start": 1895.24, "end": 1900.96, "text": " somewhat so the distance is close to zero but as you go more and more challenging the" }, { "start": 1900.96, "end": 1911, "text": " distance between poet and the classic becomes larger and larger they do give some examples" }, { "start": 1911, "end": 1917.8, "text": " of what this transfer learning does so they have this parent environment that just kind" }, { "start": 1917.8, "end": 1923.4, "text": " of slouches forward on the ground and then the child environment has a mutation that" }, { "start": 1923.4, "end": 1930.16, "text": " has now little stumps in it right so you can't get over it right now but the child environment" }, { "start": 1930.16, "end": 1936.52, "text": " because it's it's a small stump so it might stumble across learns to lift its leg here" }, { "start": 1936.52, "end": 1943.3200000000002, "text": " and it transfers this back to the parent right at a later iteration which is pretty cool" }, { "start": 1943.3200000000002, "end": 1949.0800000000002, "text": " and then the parent gets even better as a result of that transfer so we have two transfer" }, { "start": 1949.0800000000002, "end": 1955.8400000000001, "text": " learning events here that mutually help these agents remember both the parent and the child" }, { "start": 1955.84, "end": 1964.6799999999998, "text": " are continuously trained as the process goes on all right and they do some more things" }, { "start": 1964.6799999999998, "end": 1970.76, "text": " where they do actual poet not a classic algorithm but poet without transfer learning and they" }, { "start": 1970.76, "end": 1977.48, "text": " see that okay the poet without transfer is able to solve some of the very challenging" }, { "start": 1977.48, "end": 1983.36, "text": " problems but never reaches the extremely challenging stage and that's kind of their argument why" }, { "start": 1983.36, "end": 1991.7199999999998, "text": " the transfer learning is necessary so in total I would say this is a cool algorithm it has" }, { "start": 1991.7199999999998, "end": 1999.56, "text": " many many many many many many hyper parameters and these experimental results with that many" }, { "start": 1999.56, "end": 2004.6399999999999, "text": " hyper parameters you need to take it with a grain of salt because it's always possible" }, { "start": 2004.6399999999999, "end": 2010.84, "text": " that they just haven't put as much effort into their comparisons as they have into their" }, { "start": 2010.84, "end": 2019.76, "text": " own thing to get it to work all right with that I wish you a nice day and check out the" }, { "start": 2019.76, "end": 2025.1999999999998, "text": " paper they have lots of descriptions check out the blog post where they have animations" }, { "start": 2025.2, "end": 2041.88, "text": " and the YouTube video and with that bye bye" } ]
We20YSAJZSE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
[ "Science & Technology" ]
[ "ml", "ai", "machine learning", "reinforcement learning", "deep rl", "deepmind", "google", "alphago", "alphazero", "value function", "policy", "artificial intelligence", "rl", "deep reinforcement learning", "model-free", "model-based", "environment model", "hidden representation", "latent state", "transition", "chess", "shogi", "go", "atari" ]
MuZero harnesses the power of AlphaZero, but without relying on an accurate environment model. This opens up planning-based reinforcement learning to entirely new domains, where such environment models aren't available. The difference to previous work is that, instead of learning a model predicting future observations, MuZero predicts the future observations' latent representations, and thus learns to only represent things that matter to the task! Abstract: Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules. Authors: Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver https://arxiv.org/abs/1911.08265 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model, by Julian Schrittwieser and people generally from DeepMind. So this paper is an extension of AlphaZero, the famous algorithm that learned to play Go and chess simply by playing against itself, and the cool thing about this model is that it has a learned environment model. So what does this mean? Usually, if you have a game such as chess (I believe there is a picture of chess down here) and you want to learn to play it, you need to know the rules of chess. So in chess you have rules like: the pawn can move two squares or one, the bishop can move diagonally, and so on. Similarly in shogi, or in Go here, you know where you can place the stones, and when you win, everything is clearly defined. So what you can do is actually plan. You can think: okay, if I play this opening, my opponent could do either this, or this, or this, and for each of the three moves I'll have a response. If they move this pawn, I'll go for a gambit here, and if they move that pawn, then I can move on, something like this. So in a sense, what you have is a tree search. You start out with the state you're currently in (sorry, this should be your state you're currently in), and your opponent has the option of performing any one of these moves, let's say there are three moves, and then from each of these three moves you again have the option of performing any of these moves. And the good thing is that in chess you know exactly what each move does: if I move my pawn, then in the new board configuration the pawn will no longer be here but here. So you know exactly what's going to happen, and you can calculate it. You have a perfect simulator.
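To make that planning idea concrete, here is a minimal sketch of depth-limited tree search with a perfect simulator, the setting that AlphaZero can assume and that, as discussed next, Atari does not offer. The `simulator` object, `value_fn`, and the search depth are placeholder assumptions; real systems like AlphaZero guide a Monte Carlo tree search with a neural network rather than searching exhaustively.

```python
def plan(state, depth, simulator, value_fn):
    """Exhaustive depth-limited tree search with a perfect simulator.
    value_fn scores a position for the player whose turn it is; the
    negamax sign flip encodes that the opponent's gain is our loss."""
    if depth == 0 or simulator.is_terminal(state):
        return value_fn(state)  # heuristic evaluation at the leaves
    best = float("-inf")
    for move in simulator.legal_moves(state):
        next_state = simulator.step(state, move)  # the "rules": exact next state
        best = max(best, -plan(next_state, depth - 1, simulator, value_fn))
    return best

def best_move(state, depth, simulator, value_fn):
    """Pick the move whose subtree has the highest value for us."""
    return max(simulator.legal_moves(state),
               key=lambda m: -plan(simulator.step(state, m), depth - 1,
                                   simulator, value_fn))
```

The key call is `simulator.step`: it requires knowing the game's dynamics exactly, which is precisely what is missing in domains like Atari.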
In other domains you don't have that. For example, in Atari, all you have is this screen. Maybe you have a little submarine here, and you have some opponents. I don't know, what do your opponents look like? Are they fish? I don't even know in this game. And I think you can shoot? There are coins to collect? I don't know. In any case, sometimes you need to go up, and there is a health bar. But in essence, you only have this screen; you don't have more. And if you press a button, you don't exactly know what's going to happen; you don't exactly know what the pixel space will look like as this shot moves forward. I guess you could know, but you can't use that to plan, because the space is too big, your actions may not be clearly predictable, when you win isn't clearly predictable, and there may be randomness. So for all of this stuff, what people usually do is model-free reinforcement learning. We've had this discussion before: this here would be model-free, whereas in chess you'd go about it model-based. Now, what MuZero does is use model-based planning, but it learns the model. It tries to construct a model for this here. It tries to say: okay, if I have this screen A here, my thing is here, and I press the button for "right", then probably my submarine is going to be a bit more to the right. But it doesn't do this exactly. That approach has been done before, and it's what's known as learning an environment model, where you map the current environment plus an action to the next step in the environment. And that usually doesn't work too well, because you're really trying to generate this entire pixel space. The cool thing about MuZero is that it doesn't do that. It doesn't predict the next state. What it does predict is a hidden state (let's draw the hidden state as a little cloud here). It predicts the hidden state of the next step, and from the hidden state it predicts things like the reward, the policy and the value; then from that hidden state it predicts the next hidden state, and from that it again predicts the reward, and so on. So the base idea is that you only predict what you absolutely need to obtain the quantities that are important for doing reinforcement learning. You're not trying to predict the full environment; you're simply trying to predict, and the hidden state here is a learned quantity, whatever is necessary to give your RL model what it needs. So that's the basic gist of it, and we'll look at how they describe what they're doing.

So picture A here is how MuZero plans. Imagine you have a current state. This is an observation: it could be a chessboard, it could be a position in shogi, but it could also be the screen of an Atari game, or the camera input of a self-driving car, and so on. The first thing MuZero does is encode that observation using this h here; I believe they call this the representation function. You encode the observation into this hidden state. The hidden state is appropriately sized, and it is supposed to capture everything you need about the state to predict the RL quantities in the future. And you learn this function h, which is of course going to be a neural network, in order to produce such a state. Now, from this state you do two things. First of all, you have this function f here (the prediction function, I believe they call it) to predict the following two quantities: the value function at that state, and the policy. The value function simply means: if you are in this state, which is now not a true state but a hidden state, but still, if you're in this hidden state that belongs to this observation, then in the future you're going to make this much reward on average with your current policy. So the value function basically tells you how good it is to be in a given state. And then the policy, and this is a bit special, predicts how you would act in this state. Now, this was a bit confusing to me when I first learned it, because we're going to see over here how MuZero actually decides how to act: namely, it does this entire tree-search thing up to a certain depth, then creates this histogram, and from that produces the action. But in order to do that tree search (and picture A is exactly that tree search) you need these p values, and they cannot themselves again be produced by a tree search; that would be infinite recursion. So what you need is an estimate: if I were in that state, how would I act, if I were to do a tree search like this? You simply build a neural network that tells you, with one evaluation, without having to do the entire tree search down from here, how you would act. This doesn't need to be a perfect approximation of how you would actually act, but it needs to be good enough. So it simply tells you how you would act in that state.
And that's important, because what we do next is use this policy to generate this action. This is a simulated action, not a real action; the real action would go up here, to the next actual observation. This is a simulated action saying: if I'm in this hidden state, my policy would approximately be this thing, and so I can sample from it and say that my action in that state would be this action. So now I have a hidden state and an action, and from those I can produce the next hidden state. Now of course, if I were to apply the action up here to the observation, action one, I would get the next observation, and that is exactly how AlphaZero works: you use your perfect simulator to take the current observation, the current state, together with the action that the policy gives you, and you produce the next state. But we don't have a perfect simulator, and we don't want to learn a model that predicts the entire state. What we want to do is predict the following: if we were to take action one here, we would get an observation; can we predict the result of applying the function h to that observation, which would give us s prime, the hidden state of that next observation? So we need a function that maps from a hidden state, given an action, to the next hidden state, and that's exactly what happens down here: this function g maps exactly this hidden state plus the action to the next hidden state. And at the same time, it predicts a reward, because each transition here may give you a reward, and we're trying to predict that as well. That's not so important for games like chess or shogi, where there's only win or lose at the very end, but they incorporate it to also be able to play these Atari games and a broader range of reinforcement learning problems. But in essence, that's what it is: we're trying to predict the next hidden state. And now we can apply this recursively. From here, I have an idea of what my policy might be in that state: my approximate policy, my kind of mini-policy that only needs one evaluation. I can sample an action from that policy, maybe it's action two here, and then predict the next hidden state I would be in, and also the reward. And therefore, using this, I can do a tree search. I can simulate future trajectories: I can sample from all of these policies, giving me different actions, which lead me down different routes in the tree. I can do this up to a certain depth; I don't have to go until the very end. And at the end, I'll have a pretty good idea of how the immediate future looks: which actions lead me to approximately which states, and for each state, especially for each bottom state here, I have an estimate of the value of that state.
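As a concrete sketch of how the three learned functions chain together, here is a toy latent rollout. This is not DeepMind's implementation: the layer sizes, architectures, and the single sampled path below are simplifying assumptions, and real MuZero runs a Monte Carlo tree search over these latent states rather than one sampled trajectory.

```python
import torch
import torch.nn as nn

class MuZeroNets(nn.Module):
    """Toy versions of MuZero's three functions: h (representation),
    f (prediction: policy logits and value), g (dynamics: next hidden
    state and reward)."""
    def __init__(self, obs_dim, num_actions, hidden_dim=64):
        super().__init__()
        self.num_actions = num_actions
        self.h = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
                               nn.Linear(hidden_dim, hidden_dim))
        self.f = nn.Linear(hidden_dim, num_actions + 1)   # policy logits + value
        self.g = nn.Sequential(nn.Linear(hidden_dim + num_actions, hidden_dim),
                               nn.ReLU(),
                               nn.Linear(hidden_dim, hidden_dim + 1))  # next state + reward

    def predict(self, s):              # f: hidden state -> (policy logits, value)
        out = self.f(s)
        return out[..., :-1], out[..., -1]

    def dynamics(self, s, action):     # g: (hidden state, action) -> (next state, reward)
        a = torch.nn.functional.one_hot(action, self.num_actions).float()
        out = self.g(torch.cat([s, a], dim=-1))
        return out[..., :-1], out[..., -1]

def rollout(nets, obs, depth=3):
    """Unroll entirely in latent space: encode the observation once with h,
    then step forward with g, sampling simulated actions from f's policy."""
    s = nets.h(obs)                    # the only place the real observation is used
    total_reward = torch.tensor(0.0)
    for _ in range(depth):
        logits, _ = nets.predict(s)
        action = torch.distributions.Categorical(logits=logits).sample()
        s, reward = nets.dynamics(s, action)
        total_reward = total_reward + reward
    _, leaf_value = nets.predict(s)
    return total_reward + leaf_value   # value of this simulated path (no discounting)
```

Note that after the single call to `nets.h`, no observation appears anywhere, which is exactly the point: the model never has to generate pixels, only the quantities the planner consumes.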
So basically, the easiest thing would simply be to search, say, one, two, three steps into the future, obtain the value v for each of these bottom states, and then simply pick the action up here (I'm running out of colors) that will eventually lead me to the highest-value state. Of course, we haven't incorporated opponent play here and so on, but that's the basic idea. You can do this tree search in a more sophisticated way, and that's a topic we might cover in a video about AlphaGo or AlphaZero. But in essence, you can do the same thing as AlphaGo or AlphaZero, except you're not working with a simulator: you're working with a learned model on the hidden states of the true observations. So B is how you would actually act: for each observation here, you run such a tree search and get a histogram over visited actions (again, we'll skip over that here; it's part of the AlphaZero paper), and you decide on an action, which gives you a reward and the next observation. That's how you act. And then you train these things end to end. You train the networks such that, first of all, the reward prediction of g matches the rewards you actually observed: given a trajectory and an action sequence, you know what the individual rewards should be, so you can train g on that. You also train to predict the correct value functions, like in classic reinforcement learning: you can do an n-step prediction into the future, or play until the end and sample trajectories, and so on. And for the policy, you train your approximate policy predictor, the one you use to run the tree search, to match your actual actions as closely as possible, because your actual actions were generated by doing this entire tree search, which is what you actually do. So the policy resulting from hidden state zero should be as close as possible to the action you actually took in the observation that led to hidden state zero. So this is how you search, act and train using MuZero.
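In sketch form, training sums, over each of the K unrolled steps, a reward term, a value term, and a policy term that pulls the fast policy predictor toward the search-derived action distribution. The shapes and the particular MSE and cross-entropy choices below are simplifying assumptions for illustration; the paper itself, for instance, uses categorical representations of value and reward on Atari.

```python
import torch
import torch.nn.functional as F

def muzero_loss(pred_rewards, pred_values, pred_policy_logits,
                target_rewards, target_values, target_policies):
    """Toy per-trajectory MuZero-style loss over K unrolled steps.
    Targets: observed rewards, n-step returns (or game outcomes), and
    the action distributions produced by the tree search."""
    loss = torch.tensor(0.0)
    K = pred_rewards.shape[0]
    for k in range(K):
        loss = loss + F.mse_loss(pred_rewards[k], target_rewards[k])  # g's reward head
        loss = loss + F.mse_loss(pred_values[k], target_values[k])    # f's value head
        # Cross-entropy pulling f's policy toward the search histogram.
        loss = loss + -(target_policies[k] *
                        F.log_softmax(pred_policy_logits[k], dim=-1)).sum()
    return loss
```

In practice, these predictions come from unrolling g and f along stored trajectories, in the manner of the rollout sketch above.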
I invite you to also look at the additional experiments, where they basically ablate what they need: is the learned model really as good as or better than the real simulator? Does it take as much time? It actually takes less time for higher Elo, which is pretty cool. How many simulations are needed? Things like this. All right, that was it. I like this paper, check it out. Bye bye.
[ { "start": 0, "end": 5.82, "text": " Hi there! Today we're looking at mastering Atari Go, Chess and Shogi by" }, { "start": 5.82, "end": 12.120000000000001, "text": " planning with a learned model by Julian Schrittweiser and people generally from" }, { "start": 12.120000000000001, "end": 21.32, "text": " DeepMind. So this paper is an extension to AlphaZero, the kind of famous" }, { "start": 21.32, "end": 29.400000000000002, "text": " algorithm that learned to play Go and Chess simply by playing itself and the" }, { "start": 29.4, "end": 35.48, "text": " kind of cool thing about this model is that it has a learned environment" }, { "start": 35.48, "end": 40.92, "text": " model. So what does this mean? Usually if you have a game such as chess, I believe" }, { "start": 40.92, "end": 45.9, "text": " there is a picture of chess down here, if you have a game such as chess and you" }, { "start": 45.9, "end": 50.08, "text": " want to learn to play it, you need to know the kind of the rules of chess," }, { "start": 50.08, "end": 58.16, "text": " right? So in chess you have the rules like the pawn can move two or one, right?" }, { "start": 58.16, "end": 65.11999999999999, "text": " The bishop can move diagonally and so on. Similarly in Shogi or Go here, you know" }, { "start": 65.11999999999999, "end": 70.6, "text": " where you can place the stones and when you win everything is clearly defined." }, { "start": 70.6, "end": 76.12, "text": " So what you can do is actually you can plan, right? You can now think of" }, { "start": 76.12, "end": 83.88, "text": " okay if I do this opening, right, my opponent could do either this or" }, { "start": 83.88, "end": 91.72, "text": " this or you know this and for each of the three moves I'll have response. So if" }, { "start": 91.72, "end": 98.84, "text": " they do, if they move this pawn, I'll go for like a gambit here and if they move" }, { "start": 98.84, "end": 106.39999999999999, "text": " this pawn then I can, you know, move on. Something like this, right? So what in a" }, { "start": 106.39999999999999, "end": 110.36, "text": " sense what you have is a tree search. So you start out with the state you're" }, { "start": 110.36, "end": 115.76, "text": " currently in, right? And then your opponent, sorry, this should be your" }, { "start": 115.76, "end": 120.68, "text": " state you're currently in, your opponent has the option of performing any one of" }, { "start": 120.68, "end": 125.6, "text": " these moves. Let's say there are three moves and then from each of these three" }, { "start": 125.6, "end": 131.24, "text": " moves you again have the option of performing any of these moves. And the" }, { "start": 131.24, "end": 137, "text": " good thing is in chess you know each exactly what they do. Like if I move my" }, { "start": 137, "end": 144.52, "text": " pawn then the new board configuration will be the pawn will no longer be here" }, { "start": 144.52, "end": 148.8, "text": " but here, right? So you know exactly what's going to happen. You can calculate" }, { "start": 148.8, "end": 154.56, "text": " that you have perfect simulator. And other domains you don't have that. For example" }, { "start": 154.56, "end": 162.2, "text": " in Atari all you have in Atari is this screen, right? Maybe you have a" }, { "start": 162.2, "end": 168.6, "text": " little submarine here, right? You have some opponents, right? The" }, { "start": 168.6, "end": 174.92, "text": " opponent, I don't know, what do your opponents look like? Are they fish? 
I don't even know in this" }, { "start": 174.92, "end": 181.23999999999998, "text": " game, right? And you can, I think you can shoot? There's coins to select? I don't" }, { "start": 181.23999999999998, "end": 185.6, "text": " know. Okay, in any case and sometimes you need to go up and there is like a health" }, { "start": 185.6, "end": 192.92, "text": " bar. But in essence you only have this screen here, right? You don't" }, { "start": 192.92, "end": 199.72, "text": " have more. And if you press a button you don't" }, { "start": 199.72, "end": 203.32, "text": " exactly know what's going to happen. You don't exactly know what the pixel space" }, { "start": 203.32, "end": 210, "text": " will look like as this shot moves forward, right? I guess you could know but" }, { "start": 210, "end": 215.76, "text": " you can't use that to plan because the kind of space is too big and" }, { "start": 215.76, "end": 221.32, "text": " your actions may be not clearly predictable. And when you win aren't" }, { "start": 221.32, "end": 226.08, "text": " clearly predictable and there may be randomness. So all of this stuff, usually" }, { "start": 226.08, "end": 229.6, "text": " what people do is here they do use a model-free reinforcement learning. We've" }, { "start": 229.6, "end": 237.72, "text": " had this discussion before. So this would be model-free and while chess" }, { "start": 237.72, "end": 248.16, "text": " here you'd go about model-based. Now what MuZero does is it uses a model-based" }, { "start": 248.16, "end": 254.96, "text": " planning but it learns the model. So it tries to construct a model for this here." }, { "start": 254.96, "end": 261.48, "text": " It tries to say, okay if I have this screen A here, right? My thing is here and" }, { "start": 261.48, "end": 270, "text": " I press the button right then probably my submarine is going to be a bit more to" }, { "start": 270, "end": 276.32, "text": " the right. But it doesn't do this exactly. So this has been done before and this is" }, { "start": 276.32, "end": 281.40000000000003, "text": " what's kind of known as learning an environment model where you map current" }, { "start": 281.40000000000003, "end": 288.68, "text": " environment plus action to the next step in the environment, right? And this" }, { "start": 288.68, "end": 294.44, "text": " usually doesn't work too well because you're really trying to generate this" }, { "start": 294.44, "end": 300.44, "text": " entire pixel space here. What the cool thing about MuZero is it doesn't do that." }, { "start": 300.44, "end": 306.04, "text": " It doesn't predict the next state. What it does predict is a hidden state and" }, { "start": 306.04, "end": 310.64, "text": " let's draw the hidden state as a little cloud here. It predicts a hidden" }, { "start": 310.64, "end": 315, "text": " state of the next step and from the hidden state it will predict things like" }, { "start": 315, "end": 323.04, "text": " the reward, the policy, the value and then it can use from that hidden state it'll" }, { "start": 323.04, "end": 329.2, "text": " predict the next hidden state. And from that it will again predict the" }, { "start": 329.2, "end": 334.76, "text": " reward. So the base idea is you only predict what you" }, { "start": 334.76, "end": 341.48, "text": " absolutely need to obtain the values that are important for doing reinforcement" }, { "start": 341.48, "end": 346.64000000000004, "text": " learning. You're not trying to predict the full environment. 
You're simply trying" }, { "start": 346.64000000000004, "end": 351.56, "text": " to predict whatever is necessary and this here is a learned quantity. Whatever" }, { "start": 351.56, "end": 358.04, "text": " is necessary to predict what your RL model is going to need. So" }, { "start": 358.04, "end": 367, "text": " that's the basic gist of it and we'll look at how they do it or how" }, { "start": 367, "end": 374.12, "text": " they describe what they're doing. So basically the picture A here is how MuZero" }, { "start": 374.12, "end": 380.16, "text": " plans. So imagine you have a configuration, a current state. This is an" }, { "start": 380.16, "end": 384.56, "text": " observation. This could be a chessboard. This could also be a position in" }, { "start": 384.56, "end": 389.8, "text": " shogi but it could also be a screen in an Atari game or a camera input of a" }, { "start": 389.8, "end": 394.92, "text": " self-driving car and so on. And the first thing it does it encodes that" }, { "start": 394.92, "end": 400.88, "text": " observation using this H here. I believe they call this a representation" }, { "start": 400.88, "end": 408.08000000000004, "text": " function. You encode that to this hidden state. Now the hidden state, this is" }, { "start": 408.08000000000004, "end": 416.88, "text": " appropriately sized, the hidden state here is supposed to capture everything" }, { "start": 416.88, "end": 422.56, "text": " you need about the state to predict the kind of RL quantities in the future." }, { "start": 422.56, "end": 428.2, "text": " And you learn this function H which in this case of course is going to be a" }, { "start": 428.2, "end": 434.48, "text": " neural network in order to produce such a state. Now from this state you do two" }, { "start": 434.48, "end": 440.72, "text": " things. First of all you have this function F here and they call this the" }, { "start": 440.72, "end": 446.36, "text": " I don't remember but you have a function to predict the following two quantities." }, { "start": 446.36, "end": 452.2, "text": " You predict the value function at that state and the value function simply" }, { "start": 452.2, "end": 458.15999999999997, "text": " means if you are in this state here, this is now not a true state but a" }, { "start": 458.15999999999997, "end": 463.47999999999996, "text": " hidden state, but still if you're in this state, in this hidden state that belongs" }, { "start": 463.47999999999996, "end": 471.96, "text": " to this observation, then in the future you're going to make this much reward on" }, { "start": 471.96, "end": 476.64, "text": " average with your current policy. That's the value function. So the value" }, { "start": 476.64, "end": 481.52, "text": " function basically tells you how good it is to be in a given state. And" }, { "start": 481.52, "end": 490.03999999999996, "text": " then the policy, this is a bit special, the policy is predicting how you would" }, { "start": 490.03999999999996, "end": 495.52, "text": " act in this state. Now this is a bit confusing or it was to me when I" }, { "start": 495.52, "end": 502.76, "text": " first learned it because we're going to see over here how a mu0 decides on how" }, { "start": 502.76, "end": 507.84, "text": " to act. Namely it does this entire tree search thing up to a certain depth, right?" }, { "start": 507.84, "end": 512.64, "text": " And then it creates this histogram and from that it produces the action. 
But in" }, { "start": 512.64, "end": 518.64, "text": " order to produce, to do this tree search, this is exactly this picture A. This is" }, { "start": 518.64, "end": 524.24, "text": " that tree search that is done. And in order to do that you need these p-values" }, { "start": 524.24, "end": 530.36, "text": " because we'll go there in a second, you need these p-values and they cannot" }, { "start": 530.36, "end": 535.24, "text": " themselves again do a tree search, right? That would be like infinite recursion. So" }, { "start": 535.24, "end": 542.08, "text": " what you need is you need kind of an estimate, right? Like if I were, and" }, { "start": 542.08, "end": 549.8, "text": " especially down, it makes more sense, if I were in that state how would I" }, { "start": 549.8, "end": 554.76, "text": " act, right? If I were to do a tree search like this. So you simply build a neural" }, { "start": 554.76, "end": 559.36, "text": " network that tells you with one evaluation without having to do the" }, { "start": 559.36, "end": 565.36, "text": " entire tree search down from here how you would act. This doesn't need to be a" }, { "start": 565.36, "end": 570.64, "text": " perfect approximation of how you would actually act but it needs to be good" }, { "start": 570.64, "end": 575.08, "text": " enough, right? So this simply tells you how you would act in that state. And" }, { "start": 575.08, "end": 581.5600000000001, "text": " that's important because what we do next is we use this policy to generate this" }, { "start": 581.5600000000001, "end": 586.16, "text": " action. And this is a simulated action. This isn't a real action because the" }, { "start": 586.16, "end": 590.48, "text": " real action would go here to the next actual observation. This is a simulated" }, { "start": 590.48, "end": 597.12, "text": " action saying if I'm in this hidden state, right, my policy approximately" }, { "start": 597.12, "end": 602.88, "text": " would be this thing. And so I can sample from that and say my action in that" }, { "start": 602.88, "end": 609.8, "text": " state would be this action. And so now I have a hidden state and an action and" }, { "start": 609.8, "end": 615.48, "text": " from that I can produce the next hidden state. Now of course if I were to apply" }, { "start": 615.48, "end": 620.72, "text": " the action up here to the observation, right, action one, I would get the next" }, { "start": 620.72, "end": 627, "text": " observation. And that is exactly how alpha zero works, right? You use your" }, { "start": 627, "end": 632.04, "text": " simulator, your perfect simulator, to take the current observation, the current" }, { "start": 632.04, "end": 637.9200000000001, "text": " state, with a given action that this policy gives you and you produce the" }, { "start": 637.9200000000001, "end": 641.48, "text": " next state. But we don't have a perfect simulator, right? And we don't want to" }, { "start": 641.48, "end": 646.44, "text": " learn a model that predicts the entire state. But what we want to do is we want" }, { "start": 646.44, "end": 654.04, "text": " to predict the following. If we were to take a one here, if, right, we would get" }, { "start": 654.04, "end": 663.28, "text": " an observation, can we predict the result when we would apply the function h to" }, { "start": 663.28, "end": 669.64, "text": " that, right, giving me s prime, right? This is observation prime. 
So this" }, { "start": 669.64, "end": 674.28, "text": " function h here, which is the function that maps from observation space to" }, { "start": 674.28, "end": 680.3199999999999, "text": " hidden space, if we were to apply this to the next hidden, to the next observation," }, { "start": 680.3199999999999, "end": 688.3199999999999, "text": " we would obtain some hidden state for that observation. Can we predict that" }, { "start": 688.3199999999999, "end": 694.96, "text": " thing? So we need a function that maps from the hidden state given an action," }, { "start": 694.96, "end": 701.2800000000001, "text": " right, to the next hidden state. And that's exactly what what happens down" }, { "start": 701.2800000000001, "end": 708.0400000000001, "text": " here, right? This function g here maps exactly this hidden state plus the" }, { "start": 708.0400000000001, "end": 717.84, "text": " action to the next hidden state. And also, also at the same time, it will predict a" }, { "start": 717.84, "end": 723.08, "text": " reward, right? Because in each step you might get a reward. So each transition" }, { "start": 723.08, "end": 727.32, "text": " here gives you a reward. And we're trying to predict that as well. Not that" }, { "start": 727.32, "end": 731.08, "text": " important, especially for games like chess or shogi, where there's only win" }, { "start": 731.08, "end": 735.1600000000001, "text": " or lose at the very end. But they incorporate this here to also be able to" }, { "start": 735.1600000000001, "end": 739.44, "text": " play these Atari games and like a broader range of reinforcement learning" }, { "start": 739.44, "end": 744.2, "text": " games. But in essence, that's what it is, right? We're trying to predict the next" }, { "start": 744.2, "end": 748.2800000000001, "text": " hidden state. And now we can basically recursively apply this. So from here, I" }, { "start": 748.28, "end": 754.28, "text": " have an idea of what my policy might be in that state, right? My proximate policy," }, { "start": 754.28, "end": 761.28, "text": " my kind of mini policy that only needs one evaluation. I can sample an action" }, { "start": 761.28, "end": 766.68, "text": " from that policy. And if maybe it's action two here, and I can then predict" }, { "start": 766.68, "end": 774.76, "text": " the next hidden state that I would be in. Also the reward, right? And therefore," }, { "start": 774.76, "end": 780.84, "text": " using this, I can do like a tree search. So I can simulate future trajectories," }, { "start": 780.84, "end": 787.36, "text": " right? First, all of these policies, I can sample from them. I can sample" }, { "start": 787.36, "end": 791.92, "text": " from them, giving me different actions so that that will lead me down" }, { "start": 791.92, "end": 797.4399999999999, "text": " the tree different routes. So I can simulate future trajectories in this" }, { "start": 797.4399999999999, "end": 802.36, "text": " tree. And at the end, I have a pretty good idea. I can do this up to a certain" }, { "start": 802.36, "end": 807.64, "text": " depth, right? I don't have to do it until the very end, I can. 
And then I'll have a" }, { "start": 807.64, "end": 815.2, "text": " pretty good idea of how my immediate the immediate future looks right, which" }, { "start": 815.2, "end": 820.36, "text": " actions lead me to approximately which states and for each state, of course," }, { "start": 820.36, "end": 824.2, "text": " especially for each bottom state here, I have an estimation of the value of that" }, { "start": 824.2, "end": 829.4, "text": " state. So basically, I can, the easiest thing would simply be to whatever" }, { "start": 829.4, "end": 837.68, "text": " search, how many steps is this? One, no, this is zero. One, two, three steps into" }, { "start": 837.68, "end": 844.6, "text": " the future. And for each of these states, obtain the value v here, v here, v, v, v," }, { "start": 844.6, "end": 850.4399999999999, "text": " v, v. And then I simply pick the action up, the action up here. I'm running out" }, { "start": 850.4399999999999, "end": 855.84, "text": " of colors. And simply pick the action up here that will lead me eventually to the" }, { "start": 855.84, "end": 864.24, "text": " highest value state. So that's, we of course, we've not incorporated opponent" }, { "start": 864.24, "end": 868.2800000000001, "text": " plays here and so on. But that's the basic idea. You can do this more" }, { "start": 868.2800000000001, "end": 873.36, "text": " sophisticated this tree search. And this is a topic that we might cover in a" }, { "start": 873.36, "end": 880, "text": " video about AlphaGo or AlphaZero. But in essence, you can do the same thing as" }, { "start": 880, "end": 885.4000000000001, "text": " AlphaGo or AlphaZero, except if you're not working with the simulator, but" }, { "start": 885.4, "end": 890.4, "text": " you're working with a learned model on the hidden states of the true" }, { "start": 890.4, "end": 895.9599999999999, "text": " observations. So B is how you would actually act, right? So for each" }, { "start": 895.9599999999999, "end": 901.56, "text": " observation here, we'd say you'd run such a tree search, and you kind of get a" }, { "start": 901.56, "end": 906.88, "text": " histogram over visited actions. And again, we'll skip over that here. But this," }, { "start": 906.88, "end": 912.8, "text": " this is part of the AlphaZero paper. And you decide on an action. And that will" }, { "start": 912.8, "end": 918.28, "text": " give you a reward and a next observation. And that's how you act. And then you" }, { "start": 918.28, "end": 931.04, "text": " train these things end to end. So you train the networks such that, of" }, { "start": 931.04, "end": 935.7199999999999, "text": " course, the reward, you know what the rewards are, right? The reward prediction" }, { "start": 935.7199999999999, "end": 940.24, "text": " of G, you know what that should be, right? From given a trajectory and action" }, { "start": 940.24, "end": 945.2, "text": " sequence, you know what the individual reward should be. So that's, you can train" }, { "start": 945.2, "end": 952.96, "text": " G for that. First of all, you can also train to predict the correct value" }, { "start": 952.96, "end": 957.4, "text": " functions like in classic reinforcement learning, you can do like an end step" }, { "start": 957.6, "end": 962.64, "text": " into the future prediction, or you can play until the end sample trajectories" }, { "start": 962.64, "end": 969.4, "text": " and so on. 
And the policy you predict, you, you predict the policy, your" }, { "start": 969.4, "end": 975.12, "text": " approximate policy to to match your true actions, right? Because your true" }, { "start": 975.12, "end": 981.68, "text": " actions you've generated by doing this entire tree search thing, which is, you" }, { "start": 981.68, "end": 987.0799999999999, "text": " know, the your what you're actually going to do. So you're training your" }, { "start": 987.0799999999999, "end": 993.8, "text": " approximate policy predictor that you use to run the tree search to match as" }, { "start": 993.8, "end": 1004.64, "text": " close as possible to your actual actions, right? This in this fashion. So this" }, { "start": 1004.8, "end": 1011.52, "text": " policy resulting from hidden state zero should be as close as possible to the" }, { "start": 1011.52, "end": 1016.4799999999999, "text": " action you actually took in the observation that led to hidden state zero." }, { "start": 1016.48, "end": 1025.3600000000001, "text": " Yeah, so this is how you search, search, act and train using mu zero. And this is" }, { "start": 1025.3600000000001, "end": 1033.76, "text": " pretty, this is it, right? This is the rest is experiments. The rest is simply" }, { "start": 1033.76, "end": 1039.04, "text": " showing that they can handle these games, they can keep the performance basically" }, { "start": 1039.04, "end": 1046.6399999999999, "text": " of the simulator based alpha zero in, in games. Sorry, where are the results here?" }, { "start": 1046.6399999999999, "end": 1050.3999999999999, "text": " Yeah, so in these games in these left hand games, they can keep the" }, { "start": 1050.3999999999999, "end": 1057.76, "text": " performance of alpha zero even exceeded here in go. And remember, they don't have" }, { "start": 1057.76, "end": 1064.1599999999999, "text": " a simulator like alpha zero, they have to learn this model. And in Atari, they" }, { "start": 1064.16, "end": 1073.0400000000002, "text": " actually out compete the current state of the art, which is I think, or to D two, or" }, { "start": 1073.0400000000002, "end": 1080.4, "text": " Impala. But it's it's some model, I guess some model free RL baseline here on the" }, { "start": 1080.4, "end": 1087.0400000000002, "text": " on Atari. So that's pretty cool. And I think that brings RL to kind of a new" }, { "start": 1087.04, "end": 1094.56, "text": " level with this hidden learning. And yeah, they so they compare it against against" }, { "start": 1094.56, "end": 1104.96, "text": " multiple ones are two D two different things. All right. Yeah, so that's that's" }, { "start": 1104.96, "end": 1112.8, "text": " that. For me, it's a cool paper. It's short. Read it if you if you want. I" }, { "start": 1112.8, "end": 1118, "text": " invite you to also look at the additional experiments where they basically ablate" }, { "start": 1118, "end": 1122.56, "text": " what they need is the learned model really as good or better as the real" }, { "start": 1122.56, "end": 1127.28, "text": " simulator? Does it take as much time actually takes less time, which for for" }, { "start": 1127.28, "end": 1131.84, "text": " higher elo, which is pretty cool. How many simulations are needed? Things like" }, { "start": 1131.84, "end": 1143.28, "text": " this. All right, that was it. I like this paper, check it out. Bye bye." } ]
i-J4T3uLC9M
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
VOS: Learning What You Don't Know by Virtual Outlier Synthesis (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "paper explained", "virtual outliers", "how to detect outliers", "deep learning outliers", "deep learning outlier detection", "vos", "deep learning energy", "latent space outliers", "density estimation", "classification boundaries", "generative models" ]
#vos #outliers #deeplearning Sponsor: Assembly AI Check them out here: https://www.assemblyai.com/?utm_source=youtube&utm_medium=social&utm_campaign=yannic1 Outliers are data points that are highly unlikely to be seen in the training distribution, and therefore deep neural networks have troubles when dealing with them. Many approaches to detecting outliers at inference time have been proposed, but most of them show limited success. This paper presents Virtual Outlier Synthesis, which is a method that pairs synthetic outliers, forged in the latent space, with an energy-based regularization of the network at training time. The result is a deep network that can reliably detect outlier datapoints during inference with minimal overhead. OUTLINE: 0:00 - Intro 2:00 - Sponsor: Assembly AI (Link below) 4:05 - Paper Overview 6:45 - Where do traditional classifiers fail? 11:00 - How object detectors work 17:00 - What are virtual outliers and how are they created? 24:00 - Is this really an appropriate model for outliers? 26:30 - How virtual outliers are used during training 34:00 - Plugging it all together to detect outliers Paper: https://arxiv.org/abs/2202.01197 Code: https://github.com/deeplearning-wisc/vos Abstract: Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks. One of the key challenges is that models lack supervision signals from unknown data, and as a result, can produce overconfident predictions on OOD data. Previous approaches rely on real outlier datasets for model regularization, which can be costly and sometimes infeasible to obtain in practice. In this paper, we present VOS, a novel framework for OOD detection by adaptively synthesizing virtual outliers that can meaningfully regularize the model's decision boundary during training. Specifically, VOS samples virtual outliers from the low-likelihood region of the class-conditional distribution estimated in the feature space. Alongside, we introduce a novel unknown-aware training objective, which contrastively shapes the uncertainty space between the ID data and synthesized outlier data. VOS achieves state-of-the-art performance on both object detection and image classification models, reducing the FPR95 by up to 7.87% compared to the previous best method. Code is available at this https URL. Authors: Xuefeng Du, Zhaoning Wang, Mu Cai, Yixuan Li Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Outliers, we all know them, we all hate them. How can these data points just be out of distribution, not in the training data, things that we haven't seen before, things that we don't even expect? Well, they suck. So today we're going to look at what you can do about it. Specifically, we're going to look at the paper Learning What You Don't Know by Virtual Outlier Synthesis. This paper presents a technique to generate what it calls virtual outliers, which are synthetic data points that are out of distribution. The core idea is that rather than trying to come up with data-space out-of-distribution samples, this paper comes up with latent-space out-of-distribution samples, which is much easier and much more useful. They then design a loss that pushes up the energy of the model wherever the outliers are and pushes down the energy wherever the data is. This paper is really interesting because it presents very successful results on a multitude of benchmarks, so this technique definitely looks like it works. However, when I read the paper, I was quite critical. I had a lot of criticisms, I had a lot of open questions, and that's why I've invited the authors for an interview to the channel. So this video right here is a comprehensive paper review. I'll explain in detail what is in the paper, what the method does, what its contributions are, what its experimental results look like, what is good about it, and what I think is bad about it. Then in the next video, released tomorrow, I'll interview the authors of the paper. The authors will have seen my review and are therefore able to respond to any criticism and any questions that I had. So be sure to check out the interview part as well, because it was really, really cool to get all my questions answered. As always, let me know how I can improve these videos by leaving a comment, leave a like if you do like them, and I'll see you around. Bye bye. And this works in the traditional way, where you upload audio and you get back the transcription, but they can also do this in real time. So you get a WebSocket to their neural-network-powered backend, and in real time it gives you back text for your speech. That's insane. But this is not all; they have a ton of features on top of that. For example, they can do summarization, they can do topic detection, they can do bad word detection, content moderation in your audio. And I have to say, this is really good. In fact, I have uploaded this video right here to their APIs, and the text you see on screen is the raw output of that model. So judge yourself how good it is. We'll actually try some Swiss German words on it. It is an English model, but we'll just give it a shot. Oh, well, isn't that great. So give them a try. They even have a basic free tier, and their documentation is super extensive. They give you walkthroughs and examples of all the parameters that you can send. They have a great blog where they describe different feature sets and different ways of applying their technology. And yeah, it's a really cool thing. Now, I've only scratched the surface right here. They do much more, they have features upon features, but it's best you check them out yourself. So thank you very much to Assembly AI for sponsoring this video; it's really great. Please check them out, a link is in the description, and I wish you a lot of fun. Hello there, today we'll look at VOS: Learning What You Don't Know by Virtual Outlier Synthesis, by Xuefeng Du, Zhaoning Wang, Mu Cai and Yixuan Li.
This paper presents a model that can do out-of-distribution detection in object detection networks. They show it on object detection, but it is a general framework for detecting out-of-distribution data at inference time. If this really works, this could mean a lot, especially for safety-critical applications: networks that are deployed as a classifier or a detector somewhere would be able to recognize accurately when they are presented with something they didn't learn at training time, like some out-of-distribution class. In this particular case on the left here, you see an image from an object detection network at inference time. It has correctly recognized the car on the right-hand side. However, it thinks that the moose here is a pedestrian. It doesn't even classify all of the moose, but it recognizes there is an object, and the class is pedestrian, probably because it hasn't seen mooses, meese... what's the plural of moose? In any case, it hasn't seen a moose or multiple meese at training time, and therefore it cannot classify it. And very often these networks make very, very high-confidence predictions for classes that they haven't seen. This paper tackles this and proposes this technique called virtual outlier synthesis, which we'll get to in a second. As I said, it's a general framework. They demonstrate it on object detection, which is a particularly hard task, but this could also be applied to image classification. They do make the point that if you have an image like this and you haven't seen the moose class during training, most of the image will still be in distribution. This will not be a particularly out-of-distribution image, except for that small part with the moose. However, if you do object detection, then the object itself here is out of distribution. And maybe that actually makes their task as researchers a bit easier, because they are less often in these ambiguous cases where half the data point is out of distribution. In any case, they mention here that the networks we currently have often struggle to handle the unknowns, and they assign high posterior probability to out-of-distribution test inputs. Now, why might that be? If you train a typical classifier, the classifier will just attempt to separate classes from each other. You see this here in the middle. This is a projection of the last layer of a neural network right before the classifier layer, so right before the softmax. The classification layer, all it can do is lay linear decision boundaries, essentially, through the distribution of data points. So what the model does is it sees three classes right here: this is class one, this is class two, this is class three. And what it needs to do is linearly separate them. So it says, well, okay (this is not an ideal color for this), I'm going to just put my decision boundaries like this. And now I've essentially separated the classes, because all that is important to a classification loss is that points in class three are away from points in class one and away from points in class two. So that also means that the further away from classes one and two I go, the better, the more likely it is to be class three, because all I've ever seen at training is samples from class three, and my entire objective was just to push it away, to discriminate it from class one and class two.
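You can see this saturation numerically: with a linear classifier, the logits grow with the distance from the decision boundary, so moving further in the class-three direction only makes the softmax more confident. A small made-up example:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # numerically stable
    return e / e.sum()

# Logits of a linear layer scale with how far the input lies in the
# direction of class 3; there is no notion of "no data out here".
for scale in (1, 5, 20):
    logits = scale * np.array([0.2, -0.1, 1.0])
    print(scale, softmax(logits).round(3))
# scale 20 already gives class 3 a probability of ~1.0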
So obviously, if I go more into the direction of class three, the network will output a more and more confident number about this being class three, even though, as you can see, the data is all in this region right here. Out there, there is no data, yet the network is still very, very confident. Red here means quite confident. An ideal situation would be if the network was very confident where the training data is, right here, but if you go further out, it would say something like: wait a minute, even though this is not class one for sure, and not class two for sure, so it's most likely class three, I still haven't seen any training data around that area, so I'm going to just output a low probability or a low confidence score. I'm going to say it's class three, but I'm going to assign it a low confidence, because I haven't seen actual training data in that vicinity. Now, this all seems intuitive and makes sense. Mostly, that is because low-dimensional and high-dimensional data are very different and can deceive you: if you look at it in a very simple projection like this, you as a human see this data and go, of course, that makes total sense. However, this becomes very different if you look at high-dimensional data. Note that there is a reason why our classifiers do the thing on the left: the thing on the right essentially amounts to a probabilistic model of the data distribution. The thing on the right has an idea where all the data is; the thing on the left just needs to separate data from each other, and three lines are enough for that. The thing on the right actually needs to model the data in the latent space, which can become pretty complicated in high dimensions, and it needs some very distinct assumptions to make it tractable. So the right thing is essentially a generative model of the data, a distributional model of the data, which needs a lot more resources and power and could pull resources away from the classification task to be solved. So what does this model do? First of all, they have some notation right here, which I found to be... well, let's just first look at the diagram right here. This is the whole model architecture. They have an input over here, so there's input x (I'm going to use the green highlighter, I guess, for this stuff). You can see this is the input image. In general, first you have this proposal generator, and that proposal generator will generate bounding boxes. Some of these detection networks have two stages: first proposal generation, and then a post-processing stage where they assign labels to the proposals. So the proposal generator would simply ask: where are objects? Any sort of object; the objectness property generalizes between objects, so it makes sense to train the object detector to just predict where the bounding boxes are. In this case, it will predict, well, there is an object here, and there is an object here, and then it will pass those on to the classifier to determine what's in the bounding boxes. And you can already see the object detector has done a good job: it detected that this thing right here is an object. However, the classifier, what can it do? It has to assign a label. There is no option for it to say, no, actually, this isn't an object I know. And previous methods have tried this.
They've just added an extra class for outliers. That usually doesn't work too well, and the reason is pretty simple: in order to do that here on the left, you'd have to introduce another line (I'm running out of colors here), like right here, and this would now be outlier space. Well, that doesn't cover this region, or this region, or the region back here. So having a single class for outliers is sort of useless, because there are just so many places where outliers could be, not just a single slice of the space. You'd actually have to have a lot of them, and ultimately that amounts to exactly the situation on the right, where you're going to train a classifier that is a threshold between low- and high-density areas. And that's exactly a generative model of the data. All right, the first stage is the bounding box proposal, this thing right here. Then you pass the bounding box on to multiple things. First of all, there is a loss that's simply concerned with whether you detected the objects correctly. So during training, the proposal generator would be trained with that loss right here. Now everything here is backpropagated, obviously, but that would be the main loss to localize the bounding boxes. The second stage here would be the assignment of a label; this would be the so-called classification head. That takes the latent representation that is generated, including the bounding box: we're going to feed this through a neural network, and that will give us a latent representation, this h thing, which they call the latent representation right before the classification layer, and the classification layer would assign a label to it. That would be the normal way of doing things, and now we augment it a bit. They formulate this here by saying we have a data set; the data set contains x, the data, b, the bounding boxes, and y, the labels. So b and y would be the things to predict. And then they say they split it up into two things: first the probability of the bounding box, and then the one of the label. And I don't think that's correct; I think there's a typo right here. I think this should be the probability of the bounding box given x, not the label, and this should probably be the probability of the label given x as well as the predicted bounding box. Let's call this b hat right here, the predicted bounding box, so b hat would be sampled from this. But this is minor, because the rest of the paper essentially treats it the way I think it should be written down. In any case, what they do in addition to that is they also have this classifier right here: a classifier that takes in a sample and the bounding box, and it tries to predict this number g. g is one if the object is in distribution, and g should be zero if it's out of distribution. So this is a binary classifier that classifies any sample into in or out of distribution, independent of what class the classifier head says it is. That would amount to the situation on the right, where if you're anywhere in this region right here, the classifier would still say, well, that's clearly class three, because that's the region of class three.
But your other classifier would say: yes, but the outlier probability is very high, the inlier probability is very low for that region. So you can do outlier detection at inference time. How do we do this? We do this by generating these virtual outliers during training. Virtual outliers are essentially outlier data points that you synthesize. Now, what you could do, and they mention this, is simply train a generative model of the data and then use that to sample out-of-distribution data. However, they mention that synthesizing images in the high-dimensional pixel space can be difficult to optimize. Instead, their key idea is to synthesize virtual outliers in the feature space. So the feature space: if you have your image (let's just talk about a classifier), you feed it through a bunch of neural network layers, and then here is the last layer, and all you do at the end is have a classification head that classifies it into multiple classes. This right here is just described by a matrix W, a linear layer that goes from the number of features, I guess d or something like this, to the number of classes c; that's the dimensionality. So in this space at the end, the space we've seen in these diagrams up there, here is where we would sample the virtual outliers. What we would do is look at our training data: where does it fall? And we say, aha, okay, there is class one, two and three, as we had it. Then we'd build a Gaussian mixture model of the training data. Essentially, we'd assume that each class is described well by a multivariate Gaussian; they all share the covariance matrix, by the way. And then we would say, well, okay, given that that is the case, which ends up at the situation on the right, we would sample data points from the outskirts of those Gaussians, points that have a sufficiently low probability. These would be the virtual outliers. We would sample them where our Gaussian mixture model says that there is no data, but still, we sample according to the Gaussians. So we're not going to be way out here in undefined space; we're still going to sample from these Gaussians, but we're going to sample until we get a sample that has a very low likelihood. So we're deliberately going to sample outliers from these Gaussians, and those are going to serve as samples for our outlier classifier. The outlier classifier then needs to find a decision boundary between these virtual outliers and the data. You can see it drawn right here. Now, you can see this decision boundary gets quite a bit more complicated than the decision boundary between the classes, especially given that we do it in the last layer. So we'll go on in the paper a little bit; what we just said is going to come up in a second here. They say: we assume the feature representation of object instances forms a class-conditional multivariate Gaussian distribution, and they state this right here. So every class has a mean, and all the classes share a covariance matrix. And they don't learn these things, they just calculate them from the training data in an online fashion. This is in the penultimate layer of the neural network, as I just said.
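As a minimal sketch of that fitting-and-sampling procedure, assuming we already have a batch of penultimate-layer features with class labels: the paper estimates the means and the shared covariance online during training and keeps the lowest-likelihood samples, whereas the fixed-epsilon rejection loop below is my simplification.

import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(features, labels):
    """Empirical per-class means plus one covariance shared by all
    classes, computed from penultimate-layer features."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([features[labels == c] - means[c]
                               for c in classes])
    cov = centered.T @ centered / len(features)
    return means, cov

def sample_virtual_outliers(mean, cov, eps, n, rng):
    """Sample from a class Gaussian but keep only low-likelihood draws,
    so the virtual outliers land near the class boundary."""
    dist = multivariate_normal(mean, cov, allow_singular=True)
    kept = []
    while len(kept) < n:
        x = rng.multivariate_normal(mean, cov)
        if dist.pdf(x) < eps:  # "sufficiently small" likelihood
            kept.append(x)
    return np.array(kept)

# Toy demo with three synthetic feature clusters.
rng = np.random.default_rng(0)
labs = rng.integers(0, 3, 300)
feats = rng.normal(size=(300, 5)) + labs[:, None] * 4.0
means, cov = fit_class_gaussians(feats, labs)
outliers = sample_virtual_outliers(means[0], cov, eps=1e-5, n=10, rng=rng)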
Yeah, they compute the empirical class means and covariance of training samples, and they do this in an online estimation fashion, which means that as they train the network, they collect the training data and compute these statistics on the fly, so they're always up to date. They do say here: we assume the feature representation is this Gaussian, and they say see figure three. Figure three is a UMAP visualization of feature embeddings of the Pascal VOC data set, and I'm not sure what they mean by "look at figure three". This is a UMAP, a nonlinear projection into low-dimensional space, if I'm remembering correctly what UMAP does, but for sure, this is a projection. This doesn't convince me that the data is Gaussian. It convinces me that the data is kind of in one place-ish, right? Or it convinces me that most of the blue points are closer to each other than they are to, for example, the green points here. That is what is convincing to me from this graphic. It is not at all convincing that in the original high-dimensional space where they come from, they are somehow a cluster, or a Gaussian even, or even that all of these classes would have the same covariance matrix, even if they were Gaussians. So that is a wild assumption, but it seems to work. The results of the paper are that they are very, very good at this outlier detection; they reduce false positive rates by a lot. So it seems to work, I'm just saying this does not convince me. Or maybe I don't understand UMAP, maybe there is something. So here is where they say they sample the virtual outliers in this feature representation space using the multivariate distributions. They would simply sample the virtual outliers from the Gaussians, but then evaluate them and only take them if their likelihood is smaller than some epsilon. They say it's sufficiently small so that the sampled outliers are near the class boundary. These outliers would then be converted to the output by the classifier head, the classifier matrix. Now, this is a very interesting example; that is how they sample the outliers. And all good so far, but I have a few concerns right here. For example, what you're going to teach the model is: if in the last layer before the classifier there is a data point, and that data point is not where the training data is, then if this model works, it will in fact recognize it as an outlier. What will not be handled is the case where that moose right here, for some reason, is already confused with something by an earlier layer. An earlier layer thinks: oh, this has four legs, it probably looks like a dog. Then the moose will come to lie really inside of the dog class, because it would have the features of a dog, with which the lower layers would have confused it. You'd have to have done this technique in one of the lower layers, and there you could see that this is an outlier. But the lower in the layers you go, the less your data looks like a Gaussian. Ultimately, you'd have to do it in the input layer, and there it becomes clear that this is just a distribution of the data that you're trying to approximate, and in the input layer, this is certainly not Gaussian at all. So I think this only works for specific outliers.
If there is an outlier that, as I say, has the same features as some in-distribution data, so that in the last layer it lands inside of one of these clusters, then this method will not be able to detect it. That is kind of my one concern. The other concern, as I've already said, is that separating these outliers is naturally a harder task, because it essentially amounts to a generative or distributional model of the data rather than just a discriminative classifier. So how are they incorporating this into training? During training, up here, we have our loss for the localization, and we have a classification loss, which is fine. The classification loss tells us if we have the class correctly, but we still need a third thing, which is this uncertainty loss. We are going to estimate the uncertainty, which is going to be our measure of how much the model thinks that this is an out-of-distribution data point or not. And how are they doing it? They are using the log partition function for that. The log partition function is this thing right here; it's essentially the log of what is at the bottom of the softmax, if you use a softmax for classification. So if f here is the logit of class k, the output of your classifier, and you do a softmax across your logits in the last layer, the softmax would look like this: you'd have the exponential of the logit of class y at the top, and the sum of exponentials over all classes at the bottom. The log of that bottom part, the log-sum-exp of the logits, is kind of a measure of how peaky your distribution is. If one of your logits is just standing out heavily, that indicates low uncertainty, like you're quite sure about what you're doing, and if all the logits are roughly the same, they are all more even. So this measure is a bit of an indicator of certainty, and it was already shown to be an effective uncertainty measurement for out-of-distribution detection. So we're going to use this in an uncertainty loss right here. We're going to have a logit-based loss using a sigmoid, and what we want is this measure right here, where one side is the logit and one is one minus the logit, I can't remember which one is which. In any case, we want this measure to be high for in-distribution data and low for out-of-distribution data, or the other way around: we want the uncertainty to be high for out-of-distribution data and low for in-distribution data. So if we get a data point, we'll plug it into this free energy. By the way, the negative of the log partition function is called the free energy; sorry, I forgot to mention that. That would make some connections to other fields of science. So we're going to take our data point, and we're going to plug it not into the classifier, but just into this bottom part of the classifier, to measure whether, for the data we're getting, the model is very certain or very uncertain. And then what we want is: if we have a true data point, we want the uncertainty to be very low; if we have a fake data point, we want the uncertainty to be very high.
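In code, that score is just a negative log-sum-exp over the logits. This is my own tiny demo; "peaky logits mean low energy" follows from the sign convention.

import numpy as np

def free_energy(logits):
    """Negative log partition function: -log(sum_k exp(f_k(x)))."""
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))  # stable logsumexp

print(free_energy(np.array([9.0, 0.1, 0.2])))  # peaky logits -> low energy
print(free_energy(np.array([0.3, 0.1, 0.2])))  # flat logits  -> higher energy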
So by adding this loss right here, what this does is train our classifier to be more certain if the data point is real and less certain if the data point is fake, which ultimately will result in certainty estimates like this on the right here. The certainty estimate on the left would be what you get if we just train the classifier objective: the thing gets more and more certain as we go away from the classification boundaries. On the right, we now explicitly train the model to only be certain around the data and to be very uncertain around all the virtual outliers. That's why you see blue anywhere away from the data: we explicitly train the model to do that. So our uncertainty classifier that we talked about, where was it? This thing right here. Our uncertainty classifier is not, in fact, an additionally trained model. It is simply us plugging a data point into this uncertainty measure, and during training, we make sure that this measure is low for fake data and high for clean data. Now, this uncertainty loss, if I see this correctly, will initially directly affect this parameter set right here. Since we only generate the fake data in the last layer, the only parameters that are really affected by this loss in that case are the classification weights right here. However, implicitly, by saying that the true data here must have a high certainty, a low uncertainty, and by contrasting this with the fake data in the last layer, it may also be that through backpropagation, the entire network is shaped such that the latent space becomes more optimal for doing this classification. I cannot conceive super well how all the effects and countereffects are going to work out, but it would be interesting to think that through a bit more clearly. So what we're going to end up with is a probabilistic score for out-of-distribution detection. Our loss is going to be a mixture of the classification and localization losses and the uncertainty loss, added with a given hyperparameter. And this is going to be our detector for in-distribution data: we take an inference sample, we take the predicted bounding box, and we plug it into this uncertainty estimate right here. This here is the free energy; we plug it into the sigmoid formula here, and that will give us one if the classifier is very certain that this is in-distribution data, and zero if it's very uncertain. We can define a threshold, and that's going to be our out-of-distribution classifier. So that's it for the method. They go through a bunch of results; I'll shorten the results by saying they're just very good at everything, on the data sets they try, against the baselines. They do ablations, and particularly noteworthy, for example, here is the false positive rate, where lower is better. You can see that if they were just to add an outlier class, this would hurt the performance quite a bit, more than other modifications right here, which I found interesting to see. They compare against other outlier detection methods, and they do have, I believe, some samples right here. Needless to say, I have my concerns, but it does work pretty well. And I'm just a person who looks at this paper for the first time, hasn't worked in this field at all, and hasn't tried anything.
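To put the whole method together in code, a sketch of the combined objective and the inference-time test could look like the following. Here beta, the threshold, and the plain logistic form are my placeholders; the paper parametrizes the logistic on the energy with learnable weights.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def uncertainty_loss(energy_id, energy_outlier):
    """Binary logistic loss contrasting free energies: real data should
    score near 1 (low energy), virtual outliers near 0 (high energy)."""
    p_id = sigmoid(-energy_id)
    p_out = sigmoid(-energy_outlier)
    return -np.log(p_id + 1e-12) - np.log(1.0 - p_out + 1e-12)

def total_loss(l_cls, l_loc, l_unc, beta=0.1):
    """Mixture of classification, localization and uncertainty losses."""
    return l_cls + l_loc + beta * l_unc

def is_in_distribution(logits, threshold=0.5):
    """Inference: threshold the sigmoid of the negative free energy."""
    m = logits.max()
    energy = -(m + np.log(np.exp(logits - m).sum()))
    return sigmoid(-energy) > threshold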
So I'm going to give the right of way to the authors right here. But let me know what you think, and I'll see you next time.
[ { "start": 0, "end": 12.8, "text": " Outliers, we all know them, we all hate them. How can these data points just be out of distribution," }, { "start": 12.8, "end": 19.2, "text": " not in the training data, things that we haven't seen before, things that we don't even expect?" }, { "start": 19.2, "end": 23.76, "text": " Well, they suck. So today we're going to look at what you can do about it. Specifically," }, { "start": 23.76, "end": 29.44, "text": " we're going to look at the paper learning what you don't know by virtual outlier synthesis. This" }, { "start": 29.44, "end": 36.24, "text": " paper presents a technique to generate what it calls virtual outliers, which are synthetic data" }, { "start": 36.24, "end": 41.92, "text": " points that are out of distribution. The core idea is that rather than trying to come up with data" }, { "start": 41.92, "end": 48.56, "text": " space out of distribution samples, this paper comes up with latent space out of distribution samples," }, { "start": 48.56, "end": 54.480000000000004, "text": " which is much easier and much more useful. They're then designing a loss that pushes down the energy" }, { "start": 54.48, "end": 60.4, "text": " of the model wherever the outliers are and pushes up the energy wherever the data is. This paper is" }, { "start": 60.4, "end": 65.52, "text": " really interesting because it presented very successful results on a multitude of benchmarks." }, { "start": 65.52, "end": 71.52, "text": " So definitely this technique looks like it works. However, when I read the paper, I was quite" }, { "start": 71.52, "end": 76.4, "text": " critical. I had a lot of criticisms, I had a lot of open questions, and that's why I've invited the" }, { "start": 76.4, "end": 82.56, "text": " authors for an interview to the channel. So this video right here is a comprehensive paper review." }, { "start": 82.56, "end": 87.68, "text": " I'll explain in detail what is in the paper, what the method does, what its contributions are," }, { "start": 87.68, "end": 92.88, "text": " what its experimental results look like, what is good about it, and what I think is bad about it." }, { "start": 92.88, "end": 98.24000000000001, "text": " Then in the next video released tomorrow, I'll interview the authors of the paper, the authors" }, { "start": 98.24000000000001, "end": 104.16, "text": " will have seen my review, and therefore are able to respond to any criticism and any questions that" }, { "start": 104.16, "end": 110.4, "text": " I had. So be sure to check out the interview part as well, because it was really, really cool to get" }, { "start": 110.4, "end": 116.56, "text": " all my questions answered. As always, let me know how I can improve these videos by leaving a comment," }, { "start": 116.56, "end": 141.44, "text": " leave a like if you do like and I'll see you around. Bye bye." }, { "start": 141.44, "end": 146.48, "text": " And this works in the traditional way where you upload audio and you get back the transcription," }, { "start": 146.48, "end": 152.64, "text": " but they can also do this real time. So you get a web socket to their neural network powered backend" }, { "start": 152.64, "end": 158.72, "text": " and in real time, it gives you back text for your speech. That's insane. But this is not all they" }, { "start": 158.72, "end": 164.56, "text": " have a ton of features on top of that. 
For example, they can do summarization, they can do topic" }, { "start": 164.56, "end": 171.2, "text": " detection, they can do bad word detection, content moderation in your audio. And I have to say," }, { "start": 171.2, "end": 178.64, "text": " this is really good. In fact, I have uploaded this video right here to their API's and the text you" }, { "start": 178.64, "end": 184.95999999999998, "text": " see on screen is the raw output of that model. So judge yourself how good it is. We'll actually try" }, { "start": 184.95999999999998, "end": 194.79999999999998, "text": " some Swiss German words on it. It is an English model, but we'll just give it a shot. Oh, well," }, { "start": 194.79999999999998, "end": 200.48, "text": " isn't that great. So give them a try. They even have a basic free tier at their documentation" }, { "start": 200.48, "end": 206.56, "text": " is super extensive. They give you walkthroughs and examples of all the parameters that you can send." }, { "start": 206.56, "end": 211.28, "text": " They have a great blog where they describe different feature sets and different ways of" }, { "start": 211.28, "end": 216.39999999999998, "text": " applying their technology. And yeah, it's a really cool thing. Now I've only scratched the surface" }, { "start": 216.39999999999998, "end": 222.56, "text": " right here. They do much more. They have features upon features on this, but it's best you check" }, { "start": 222.56, "end": 228.79999999999998, "text": " them out yourself. So thank you very much to assembly AI for sponsoring this video is really" }, { "start": 228.8, "end": 234.16000000000003, "text": " great. Please check them out. A link is in the description and I wish you a lot of fun." }, { "start": 241.76000000000002, "end": 248, "text": " Hello there today we'll look at VOS learning what you don't know by virtual outlier synthesis by" }, { "start": 248, "end": 256, "text": " Shefeng Du, Zhao Ning Wang, Mu Cai and Yixuan Li. This paper presents a model that can do" }, { "start": 256, "end": 262, "text": " out of distribution detection in object detection networks, but not only in object detection," }, { "start": 262, "end": 267.84, "text": " they show it on object detection, but it is a general framework for detecting out of distribution" }, { "start": 267.84, "end": 273.52, "text": " data at inference time. If this really works, this could mean a lot for especially for safety" }, { "start": 273.52, "end": 280.64, "text": " critical applications, networks that are deployed as a classifier or a detector somewhere. And they" }, { "start": 280.64, "end": 286.8, "text": " would be able to recognize accurately when they are presented with something they didn't learn" }, { "start": 286.8, "end": 292.15999999999997, "text": " at training time, like some out of distribution class. And this particular case on the left here," }, { "start": 292.15999999999997, "end": 298.15999999999997, "text": " you see an image, which is an object detection network at inference time, it has correctly" }, { "start": 298.15999999999997, "end": 304.4, "text": " recognized the car on the right hand side. However, it thinks that the moose here is a" }, { "start": 304.4, "end": 309.76, "text": " pedestrian, it doesn't even classify all of the moose, but it recognizes there is an object." }, { "start": 309.76, "end": 315.52, "text": " And the class is pedestrian, probably because it hasn't hasn't seen mooses," }, { "start": 315.52, "end": 323.12, "text": " meese. What's the plural of moose? 
In any case, it hasn't seen a moose or multiple meese" }, { "start": 323.12, "end": 329.92, "text": " at training time. And therefore, it cannot classify it. And very often these networks make very," }, { "start": 329.92, "end": 337.52, "text": " very high confidence predictions for classes that they haven't seen. This paper tackles this" }, { "start": 337.52, "end": 343.03999999999996, "text": " and proposes this technique called virtual outlier synthesis, to which we'll get to in a second. As" }, { "start": 343.03999999999996, "end": 349.35999999999996, "text": " I said, it's a general framework. They demonstrated on object detection, which is a particularly hard" }, { "start": 349.35999999999996, "end": 354.15999999999997, "text": " task, but this could also be applied to image classification. They do make the point that if" }, { "start": 354.15999999999997, "end": 359.68, "text": " you have an image like this, and you haven't seen the moose class during training, most of the image" }, { "start": 359.68, "end": 364.71999999999997, "text": " will still be in distribution. Like this will not be a particularly out of distribution image," }, { "start": 364.72, "end": 371.20000000000005, "text": " except for that small part with the moose. However, if you do object detection, then the object itself" }, { "start": 371.20000000000005, "end": 377.12, "text": " here is out of distribution. And maybe that makes actually their tasks as researchers a bit more" }, { "start": 377.12, "end": 382.08000000000004, "text": " easy, because they are less often in these ambiguous cases where like half the data point" }, { "start": 382.08000000000004, "end": 389.44000000000005, "text": " is out of distribution. In any case, they mentioned here, they that the networks that we currently" }, { "start": 389.44, "end": 396.64, "text": " have, they often struggle to handle the unknowns. And they assign high posterior probability for" }, { "start": 396.64, "end": 403.52, "text": " out of distribution test inputs. Now, why might that be? If you train a typical classifier," }, { "start": 403.52, "end": 408.56, "text": " the classifier will just attempt to separate classes from each other. You see this here" }, { "start": 408.56, "end": 414.48, "text": " in the middle. This is a projection of the last layer of a neural network right before the" }, { "start": 414.48, "end": 421.44, "text": " classifier layer. So right before the softmax. So the classification layer, all it can do" }, { "start": 421.44, "end": 429.68, "text": " is it can lay linear decision boundaries, essentially, through the distribution of data" }, { "start": 429.68, "end": 437.92, "text": " points. So what the model does is it sees three classes right here. So this is class one, this is" }, { "start": 437.92, "end": 444.8, "text": " class two, this is class three. And what it needs to do is linearly separate them. So it says, well," }, { "start": 444.8, "end": 452.96000000000004, "text": " okay, I'm gonna this is not an ideal color for this. I'm going to just put my decision boundaries" }, { "start": 452.96000000000004, "end": 459.04, "text": " like this. And now I've essentially separated the classes, because all that is important to a" }, { "start": 459.04, "end": 465.92, "text": " classification loss is that, you know, points in class three are away from points in class one and" }, { "start": 465.92, "end": 474.08000000000004, "text": " away from points in class two. 
So that also means that the more away from classes one and two I go," }, { "start": 474.08000000000004, "end": 479.76, "text": " the better, like the more likely it is to be class three, because all I've ever seen at training is" }, { "start": 481.6, "end": 489.04, "text": " samples from class three. And my entire objective was just to make it, to push it away or distinguish" }, { "start": 489.04, "end": 495.36, "text": " it, to discriminate it from class one and class two. So obviously, if I go more into the direction" }, { "start": 495.36, "end": 501.76, "text": " of class three, the network will become will output a more and more confident number about" }, { "start": 501.76, "end": 508.08000000000004, "text": " this being class three, even though, as you can see, the data is all in this region right here." }, { "start": 508.08000000000004, "end": 513.76, "text": " And out there, there is no data, yet the network is still very, very confident. Red here means" }, { "start": 513.76, "end": 521.2, "text": " quite confident. An ideal situation would be if the network was very confident, where the training" }, { "start": 521.2, "end": 527.6, "text": " data is right here. However, again, we have the decision boundaries like this. However, if you go" }, { "start": 527.6, "end": 533.36, "text": " further out, it will say something like, wait a minute, even though this is not class one, for sure," }, { "start": 533.36, "end": 540, "text": " and not class two, for sure, it's most likely class three, but still, I haven't seen any training data" }, { "start": 540, "end": 549.12, "text": " around that area. So I'm also going to be to just output a low probability or a low confidence score." }, { "start": 549.12, "end": 553.68, "text": " I'm going to say it's class three, but I'm going to assign it a low confidence, because I haven't" }, { "start": 553.68, "end": 561.36, "text": " seen actual training data in that vicinity. Now, this all seems intuitive and makes sense and so on." }, { "start": 562.72, "end": 568.8, "text": " Mostly, that is because low dimensionality and high dimensionality data is very different and" }, { "start": 568.8, "end": 576.24, "text": " can deceive if you look at it in this in a kind of a very simple projection like this, you as a human," }, { "start": 576.24, "end": 582.48, "text": " you see this data and you go like, of course, that makes total sense. However, this becomes very" }, { "start": 582.48, "end": 588.96, "text": " different if you look at high dimensional data. Note that there is a reason why our classifiers" }, { "start": 588.96, "end": 595.04, "text": " do the thing on the left, because the thing on the right essentially amounts to like a probabilistic" }, { "start": 595.04, "end": 602.5600000000001, "text": " model of the data distribution, right? The thing on the right, it has an idea where all the data is," }, { "start": 602.56, "end": 607.5999999999999, "text": " right? The thing on the left, it just needs to separate data from each other. Three lines are" }, { "start": 607.5999999999999, "end": 613.3599999999999, "text": " enough for that. The thing on the right actually needs to model the data in the latent space," }, { "start": 613.3599999999999, "end": 619.3599999999999, "text": " which can become pretty complicated in high dimensions, and it needs some very, very" }, { "start": 620, "end": 626, "text": " distinct assumptions to make it tractable. 
So the right thing is essentially a generative model of" }, { "start": 626, "end": 633.36, "text": " the data, like a distributional model of the data, which needs a lot more resources and power and" }, { "start": 633.36, "end": 643.44, "text": " could pull away resources from the classification task to be solved. So what does this model do?" }, { "start": 645.92, "end": 652.48, "text": " First of all, they have some notation right here, which I found to be..." }, { "start": 652.48, "end": 657.28, "text": " Well, let's just first look at the diagram right here. So this is the whole model architecture." }, { "start": 657.28, "end": 663.52, "text": " They have an input over here. So there's input X, right? I'm going to use the green highlighter," }, { "start": 663.52, "end": 672.72, "text": " I guess, for this stuff. There's input X. You can see this is the input image. In general," }, { "start": 672.72, "end": 680.64, "text": " first you have this proposal generator, and that proposal generator will generate bounding boxes." }, { "start": 680.64, "end": 687.92, "text": " So some of these detection networks, they have two stages. First, proposal generation, and then" }, { "start": 688.8, "end": 696.08, "text": " a post-processing stage where they assign labels to the proposals. So the proposal generator" }, { "start": 696.08, "end": 705.6, "text": " would simply ask, where are objects? Any sort of object. The objectness property, it generalizes" }, { "start": 705.6, "end": 711.6800000000001, "text": " between objects. So it makes sense to train the object detector to just predict where are bounding" }, { "start": 711.6800000000001, "end": 716.5600000000001, "text": " boxes. In this case, it will predict, well, there is one here, there is an object, and there is an" }, { "start": 716.5600000000001, "end": 724.4, "text": " object here. And then it will pass on those to the classifier to determine what's in the bounding" }, { "start": 724.4, "end": 730.1600000000001, "text": " boxes. And you can already see the object detector has done a good job. It detected that this thing" }, { "start": 730.16, "end": 738.7199999999999, "text": " right here is an object. However, the classifier, what can it do? It has to assign a label. There" }, { "start": 738.7199999999999, "end": 745.8399999999999, "text": " is no option for it to say, no, actually, this isn't an object. And previous methods have tried" }, { "start": 745.8399999999999, "end": 751.92, "text": " this. They've just added like an extra class for outlier. It usually doesn't work too well," }, { "start": 751.92, "end": 759.92, "text": " because the reason is pretty simple. In order to do that here on the left, you'd have to introduce" }, { "start": 759.92, "end": 766.3199999999999, "text": " like another line and say, okay, so I'm going to introduce another line, I'm running out of colors" }, { "start": 766.3199999999999, "end": 772.88, "text": " here, introduce another line, you know, like right here. So this would now be outlier, sorry," }, { "start": 772.88, "end": 780.48, "text": " outlier space. Well, that doesn't cover, that doesn't cover this region or this region, or the" }, { "start": 780.48, "end": 789.6, "text": " region back here, right. So having a single class for outliers is sort of useless, because there are" }, { "start": 789.6, "end": 797.2, "text": " just so many places where outliers could be, and not just like a single, a single slice of the space." 
}, { "start": 797.2, "end": 803.76, "text": " So you'd have to have many, you'd actually have to have like a lot. And ultimately, that amounts to" }, { "start": 803.76, "end": 809.12, "text": " exactly the situation on the right where, you know, ultimately, you're going to train a classifier" }, { "start": 809.12, "end": 816.16, "text": " that is a threshold between low and high density areas. And that's exactly a generative model of" }, { "start": 816.16, "end": 824.32, "text": " the data. All right, first stage is the bounding box proposal, this thing right here. Then you pass" }, { "start": 824.32, "end": 830.48, "text": " on the bounding box to multiple things. First of all, there is a loss that's simply concerned with" }, { "start": 830.48, "end": 837.52, "text": " did you detect the objects correctly. So during training, the proposal generator would simply be" }, { "start": 837.52, "end": 843.4399999999999, "text": " trained with that loss right here. Now everything here is back propagated, obviously, but that would" }, { "start": 843.4399999999999, "end": 852.56, "text": " be the main loss to localize the bounding boxes. The second, the second stage here would be the" }, { "start": 852.56, "end": 860.3199999999999, "text": " assignment of a label, this would be the so called classification head. So that would take the latent" }, { "start": 860.3199999999999, "end": 865.4399999999999, "text": " representation that is generated, including the bounding box, right. So we're going to feed this" }, { "start": 865.44, "end": 872.1600000000001, "text": " through a neural network. And that will give us a latent representation, this H thing mean that they" }, { "start": 872.1600000000001, "end": 877.36, "text": " call that the latent representation right before the classification layer, and the classification" }, { "start": 877.36, "end": 883.6800000000001, "text": " layer would assign a label to it. And that would be the normal way of doing things. And now we" }, { "start": 883.6800000000001, "end": 892.4000000000001, "text": " augment that by a bit. Just to say they formulate this here, as saying we have a data set, the data" }, { "start": 892.4, "end": 902, "text": " set here contains x is data, b is bounding box and y is labels. So b and y would be the labels," }, { "start": 902, "end": 908.8, "text": " right, those would be the things to predict. And then they say they split it up into two things." }, { "start": 908.8, "end": 915.36, "text": " So first of all, the p of the bounding box, and then the one of the label. And I don't think that's" }, { "start": 915.36, "end": 922.3199999999999, "text": " correct. I think that's a typo right here. I think this should be the probability of the bounding box" }, { "start": 922.32, "end": 930.1600000000001, "text": " given x, not the label. And this should probably be the probability of the label given x as well" }, { "start": 930.1600000000001, "end": 937.0400000000001, "text": " as the predicted bounding box. Let's call this b hat right here, the predicted bounding box. So b" }, { "start": 937.0400000000001, "end": 946.08, "text": " hat would be sampled from this. But this is minor, because the rest of the paper essentially treats" }, { "start": 946.08, "end": 954.5600000000001, "text": " it as I think I write it down. In any case, what they do in addition to that is they also have this" }, { "start": 954.5600000000001, "end": 963.6800000000001, "text": " classifier right here. 
The classifier that takes into a sample and the bounding box and it tries" }, { "start": 963.6800000000001, "end": 971.6800000000001, "text": " to predict this number g. And g is one if the object is in distribution and g should be zero" }, { "start": 971.68, "end": 978.56, "text": " if it's out of distribution. So this is a binary classifier that classifies any sample into in or" }, { "start": 978.56, "end": 984.64, "text": " out of distribution, independent of what the classifier head says what class it is. So that" }, { "start": 984.64, "end": 991.4399999999999, "text": " would amount to the situation on the right, where if you're anywhere in this region right here," }, { "start": 991.4399999999999, "end": 995.4399999999999, "text": " the classifier would still say, well, that's clearly class three, because that's the region" }, { "start": 995.44, "end": 1002.4000000000001, "text": " of class three. But your other classifier would say yes, but the the outlier probability is very" }, { "start": 1002.4000000000001, "end": 1009.36, "text": " high, the in in layer probability is very low for that region. So you can do outlier detection" }, { "start": 1009.36, "end": 1017.44, "text": " at inference time. How do we do this? We do this by generating these virtual outliers during training." }, { "start": 1017.44, "end": 1027.44, "text": " Virtual outliers are essentially outlier data points that you synthesize. Now, you what you" }, { "start": 1027.44, "end": 1034.16, "text": " could do, and they mentioned that what you could do is you could train like again, you can simply" }, { "start": 1034.16, "end": 1041.52, "text": " train a generative model of the data, and then use that to sample out of distribution data. However," }, { "start": 1041.52, "end": 1046.16, "text": " they mentioned that synthesizing images in the high dimensional pixel space can be difficult" }, { "start": 1046.16, "end": 1051.76, "text": " to optimize. Instead, our key idea is to synthesize virtual outliers in the feature space." }, { "start": 1052.4, "end": 1058, "text": " So the feature space is if you have your have your image, right, let's just talk about classifier," }, { "start": 1058, "end": 1064.3200000000002, "text": " you feed it through a bunch of neural networks. And then here is the last layer. And all you do" }, { "start": 1064.3200000000002, "end": 1071.92, "text": " at the end is you have a classification head that classifies it into multiple classes. And this right" }, { "start": 1071.92, "end": 1078.72, "text": " here is just described by a matrix W. This is just a linear layer that goes from the amount of" }, { "start": 1078.72, "end": 1085.28, "text": " features, I guess D or something like this to the amount of classes C. That's the dimensionality." }, { "start": 1085.28, "end": 1092.64, "text": " So in this space at the end, you would do in this space right here, that's the space we've seen in" }, { "start": 1092.64, "end": 1099.92, "text": " in these diagrams up there. Here is where we would sample the virtual outliers. So what we would do" }, { "start": 1099.92, "end": 1106.5600000000002, "text": " is we would look at our training data, where does our training data fall? And we say, aha," }, { "start": 1106.5600000000002, "end": 1114.88, "text": " okay, there is class one, two and three, as we had it. Then we'd build a Gaussian mixture model" }, { "start": 1114.88, "end": 1121.6000000000001, "text": " of the training data. 
Essentially, we'd assume that each class is described well by a high" }, { "start": 1121.6000000000001, "end": 1126.88, "text": " dimensional, by a multivariate Gaussian. They all share the covariance matrix, by the way." }, { "start": 1126.88, "end": 1133.92, "text": " And then we would say, well, okay, given that that is the case, which ends up at the situation" }, { "start": 1133.92, "end": 1142.5600000000002, "text": " in the right, we would sample data points from outside of those Gaussians. So that have a" }, { "start": 1142.5600000000002, "end": 1148.72, "text": " sufficiently low probability. So these would be these virtual outliers. We would just sample them" }, { "start": 1148.72, "end": 1157.68, "text": " anywhere where we where our Gaussian mixture model says that there is no data. But still," }, { "start": 1158.4, "end": 1164.56, "text": " we sample according to the Gaussians. So we're not going to be like way out here in undefined space." }, { "start": 1165.3600000000001, "end": 1170.64, "text": " Just because this is in our support set, we're still going to sample from these Gaussians." }, { "start": 1170.64, "end": 1177.04, "text": " But we're going to sample until we get a sample that has a very low likelihood. So" }, { "start": 1177.04, "end": 1183.68, "text": " we're deliberately going to sample outliers from these Gaussians. And those are going to serve" }, { "start": 1184.1599999999999, "end": 1190.3999999999999, "text": " as samples for our outlier classifier. So then the outlier classifier, what it needs to do is" }, { "start": 1190.3999999999999, "end": 1198.6399999999999, "text": " it needs to find a decision boundary between these virtual outliers and the data. You can see," }, { "start": 1199.36, "end": 1205.84, "text": " draw this right here. So there's going to be a decision boundary. Now, you can see this decision" }, { "start": 1205.84, "end": 1212.8, "text": " boundary gets quite a bit more complicated than the decision boundary of between the classes," }, { "start": 1212.8, "end": 1221.1999999999998, "text": " especially, you know, given that we do it in the last layer. So we'll go on in the paper a little" }, { "start": 1221.1999999999998, "end": 1228.24, "text": " bit. What we just said is going to come up in a second here. So they say we assume the feature" }, { "start": 1228.24, "end": 1233.76, "text": " representation of object instances forms a class conditional multivariate Gaussian distribution." }, { "start": 1233.76, "end": 1241.76, "text": " And they state this right here. So every class has a mean, all the classes share a covariance" }, { "start": 1241.76, "end": 1246.4, "text": " matrix. And they do calculate, they don't learn these things, they do just calculate them from" }, { "start": 1246.4, "end": 1252.4, "text": " the training data in an online fashion. So this is in the penultimate layer of the neural network," }, { "start": 1252.4, "end": 1259.6, "text": " as I just said. Yeah, they compute empirical class mean and covariance of training samples." }, { "start": 1259.6, "end": 1265.28, "text": " And they do this in an online, sorry about that, in an online estimation fashion, which means that" }, { "start": 1265.28, "end": 1270.1599999999999, "text": " as they train the network, they collect the training data. And then in an online fashion," }, { "start": 1270.1599999999999, "end": 1277.76, "text": " they compute these metrics to always be up to date. 
They do say here, we assume the feature" }, { "start": 1277.76, "end": 1284.9599999999998, "text": " representation is this Gaussian, and they say see figure three, and figure three is a UMAP projection" }, { "start": 1284.96, "end": 1294.32, "text": " of UMAP visualization of feature embeddings of the Pascal VOC data set. And I'm not sure what they" }, { "start": 1294.32, "end": 1301.76, "text": " mean by look at figure three. This is a UMAP. This is like a projection, a nonlinear projection" }, { "start": 1301.76, "end": 1310.16, "text": " into low dimensional space. If I'm not exactly remembering what UMAP does, but for sure, this is" }, { "start": 1310.16, "end": 1316.72, "text": " a projection. This doesn't convince me that the data is Gaussian. It convinces me that the data" }, { "start": 1316.72, "end": 1329.52, "text": " is kind of in one place-ish, right? Or it convinces me that all the blue points are closer, or most of" }, { "start": 1329.52, "end": 1336, "text": " the blue points are closer to each other than they are close to, for example, the green points here." }, { "start": 1336, "end": 1344.96, "text": " Like that is what is convincing to me from this graphic. It is not at all convincing that in the" }, { "start": 1344.96, "end": 1352.48, "text": " original high dimensional space where they come from, they are somehow a cluster or a Gaussian," }, { "start": 1352.48, "end": 1360.96, "text": " even, or even that all of these classes would have the same covariance matrix, even if they were" }, { "start": 1360.96, "end": 1371.04, "text": " Gaussians. So that is a wild assumption. But it seems to work. So the results of the paper are that" }, { "start": 1371.04, "end": 1377.92, "text": " they are very, very good at this outlier detection. They reduce false positive rates by a lot. So" }, { "start": 1377.92, "end": 1385.92, "text": " it seems to work. I'm just saying this does not convince me. Or maybe I don't understand UMAP." }, { "start": 1385.92, "end": 1392.0800000000002, "text": " Maybe there is something. So here is where they say they sample the virtual outliers from in this" }, { "start": 1392.0800000000002, "end": 1398.0800000000002, "text": " feature representation space using the multivariate distributions. So they would simply sample the" }, { "start": 1398.0800000000002, "end": 1407.52, "text": " virtual outliers from the Gaussians, but then evaluate them and only take them if their" }, { "start": 1407.52, "end": 1416.4, "text": " likelihood is smaller than some epsilon. They say it's sufficiently small so that the sample" }, { "start": 1416.4, "end": 1425.44, "text": " outliers are near the class boundary. These outliers would then be converted to the output. So this" }, { "start": 1425.44, "end": 1435.68, "text": " would be the output, the classifier head by the classifier matrix. Now, this is a very interesting" }, { "start": 1435.68, "end": 1448.24, "text": " example. That is how they sample the outliers. And you know, all good so far. I have a few concerns" }, { "start": 1448.24, "end": 1454.64, "text": " right here. For example, what you're going to teach the model is, you know, successfully," }, { "start": 1456.16, "end": 1464.8, "text": " if in the last layer before the classifier, there is a data point, and that data point" }, { "start": 1464.8, "end": 1474.48, "text": " is not where the training data is, then if this model works, it will, in fact, recognize it as an" }, { "start": 1474.48, "end": 1484.96, "text": " outlier. 
What will not happen, and this seems okay, what will not be the case if that moose right here," }, { "start": 1484.96, "end": 1492.56, "text": " for some reason, an earlier layer already confuses it with something. An earlier layer thinks," }, { "start": 1492.56, "end": 1499.84, "text": " oh, this, you know, it's four legs, it's probably like it looks like a dog, right, then the moose" }, { "start": 1499.84, "end": 1508.3999999999999, "text": " will come to lie really inside of the dog class, because it would have the features of a dog," }, { "start": 1508.3999999999999, "end": 1514.96, "text": " which the lower layers would have confused it. So you'd have to have done this technique in one of" }, { "start": 1514.96, "end": 1521.84, "text": " the lower layers. And there, you could see that this isn't an outlier. But the lower the layers," }, { "start": 1521.84, "end": 1527.84, "text": " you go, you know, the less your data, even less your data looks like a Gaussian, I mean, ultimately," }, { "start": 1527.84, "end": 1534.9599999999998, "text": " you'd have to do it in the input layer, right. And there, it becomes clear that this is just like a" }, { "start": 1534.9599999999998, "end": 1539.76, "text": " distribution of the data that you're trying to approximate. And in the input layer, certainly," }, { "start": 1539.76, "end": 1548.9599999999998, "text": " this is not Gaussian at all. So I think this only works for specific outliers. If there is an outlier" }, { "start": 1548.96, "end": 1559.1200000000001, "text": " that, as I say, has like the same features as some in distribution data, resulting that in the last" }, { "start": 1559.1200000000001, "end": 1566.16, "text": " layer, they are in like inside of this cluster, then this method will not be able to detect it." }, { "start": 1568, "end": 1574.32, "text": " Yeah, that is that is kind of my one concern. The other concern I've already said is that this" }, { "start": 1574.32, "end": 1582.96, "text": " is separating these outliers is naturally a harder task because as well, it essentially amounts to a" }, { "start": 1582.96, "end": 1588.56, "text": " generative or a distributional model of the data rather than just a discriminative classifier." }, { "start": 1589.36, "end": 1599.2, "text": " So how are they incorporating this into training? During training, we still don't know, right, we" }, { "start": 1599.2, "end": 1607.6000000000001, "text": " have, so up here, right, we have our loss right here for the localization, we have a classification" }, { "start": 1607.6000000000001, "end": 1615.6000000000001, "text": " loss, which is fine, is good. So our classification loss tells us if we have the class correctly," }, { "start": 1615.6000000000001, "end": 1623.2, "text": " but we still need a third thing, which is this uncertainty loss. We are going to estimate" }, { "start": 1623.2, "end": 1632.32, "text": " the uncertainty, which is going to be our measure of how much the model thinks that this is an out" }, { "start": 1632.32, "end": 1643.2, "text": " of distribution data point or not. And how are they doing it? They are using the log partition" }, { "start": 1643.2, "end": 1655.2, "text": " function for that. So the log partition function is this thing right here. It's essentially what" }, { "start": 1655.2, "end": 1664.64, "text": " is at the bottom of the softmax if you use a softmax for classification. 
So if the f here is the" }, { "start": 1664.64, "end": 1672.24, "text": " logit of class k, so if this is the output of your classifier, and then you do a softmax in the last" }, { "start": 1672.24, "end": 1680, "text": " layer across your logits, the softmax would look like this, right. So you'd have the class y at the" }, { "start": 1680, "end": 1689.84, "text": " top, and then you'd have that log some x of all the classes at the bottom. So the bottom right here" }, { "start": 1689.84, "end": 1698.08, "text": " is kind of like a measure of how peaky your distribution is, right. If your logits are," }, { "start": 1698.08, "end": 1704.6399999999999, "text": " you know, one is just standing out heavily, then that is kind of a measure for low uncertainty," }, { "start": 1704.6399999999999, "end": 1712.6399999999999, "text": " like you're quite sure about what you're doing. And if all the logits are kind of the same, then" }, { "start": 1715.36, "end": 1723.1999999999998, "text": " they are all more even. So this measure is a little bit of an indicator of certainty, right." }, { "start": 1723.2, "end": 1729.6000000000001, "text": " So this was already shown to be an effective uncertainty measurement for out of distribution" }, { "start": 1729.6000000000001, "end": 1738.32, "text": " detection. So what we're going to do is we're going to use this as an uncertainty loss right here." }, { "start": 1738.32, "end": 1744.4, "text": " So what we're going to do is we're going to train, or not to train, we're going to have a" }, { "start": 1744.4, "end": 1754.5600000000002, "text": " logit-based loss. So we're going to say we are going to use a sigmoid. And what we want is we want" }, { "start": 1755.68, "end": 1765.8400000000001, "text": " this measure right here. We want this right here, which is one is the logit and one is one minus" }, { "start": 1765.8400000000001, "end": 1772.4, "text": " the logit. I can't remember which one is which. In any case, we want this to be a" }, { "start": 1772.4, "end": 1780.24, "text": " in any case, we want this measure to be high for in distribution data and low for out of distribution" }, { "start": 1780.24, "end": 1785.68, "text": " data or the other way around. We want the uncertainty to be high for out of distribution data" }, { "start": 1785.68, "end": 1793.92, "text": " and low for in distribution data. So if we get a data point, we'll plug it in to this free energy." }, { "start": 1793.92, "end": 1800.72, "text": " Well, the, by the way, this the negative of the log partition function is called the free energy." }, { "start": 1800.72, "end": 1807.28, "text": " Sorry, I forgot to mention that that would make some connections to other, even other fields of" }, { "start": 1807.28, "end": 1815.3600000000001, "text": " science. So we're going to take our data point. And we're going to not plug it into the classifier," }, { "start": 1815.3600000000001, "end": 1822.24, "text": " but just this bottom part of the classifier, right, to measure is the distribution data" }, { "start": 1822.24, "end": 1831.1200000000001, "text": " that we're getting very certain or very uncertain. And then what we want is that if we have a true" }, { "start": 1831.1200000000001, "end": 1843.52, "text": " data point, then we want the we want the uncertainty to be very low. If we have a fake data point," }, { "start": 1843.52, "end": 1852.4, "text": " we want the uncertainty to be very high. 
So by adding this loss right here, by adding this loss," }, { "start": 1852.8799999999999, "end": 1860.6399999999999, "text": " what this does is this trains our classifier to be more certain if the data point is real," }, { "start": 1860.6399999999999, "end": 1870.56, "text": " and less certain if the data point is fake, which ultimately, right, will result in decision" }, { "start": 1870.56, "end": 1879.2, "text": " in decision boundaries like this or or certainty estimates like this on the right here. So the" }, { "start": 1880.6399999999999, "end": 1885.6, "text": " certainty estimate on the left would just be if we just train the classifier objective," }, { "start": 1885.6, "end": 1891.9199999999998, "text": " the thing will get more and more certain as we go away from the classification boundaries." }, { "start": 1891.9199999999998, "end": 1899.6799999999998, "text": " If we look at this certainty measure, and now we explicitly train the model to only be certain" }, { "start": 1899.68, "end": 1909.28, "text": " around the data, and to be again very uncertain around all the virtual outliers. So that's why" }, { "start": 1909.28, "end": 1916.96, "text": " you see blue anywhere away from the data. We explicitly train the model to do that." }, { "start": 1918.16, "end": 1923.76, "text": " So our uncertainty classifier that we talked about, where was it? This thing right here." }, { "start": 1923.76, "end": 1930.8, "text": " Our uncertainty classifier is not in fact an additionally trained model. It is simply us" }, { "start": 1930.8, "end": 1936.8799999999999, "text": " plugging a data point into this uncertainty measure. And during training, we make sure" }, { "start": 1936.8799999999999, "end": 1947.28, "text": " that this measure is low for fake data and high for clean data. Now, this loss, if I see this" }, { "start": 1947.28, "end": 1955.36, "text": " correctly, this uncertainty loss, initially, it will directly affect this parameter set right here." }, { "start": 1955.36, "end": 1962.24, "text": " Since we only generate the fake data in the last layer, the only parameters that are really affected" }, { "start": 1963.44, "end": 1970.3999999999999, "text": " by this loss in that case is the classification weights right here. However, implicitly," }, { "start": 1970.4, "end": 1980.4, "text": " obviously, by saying that the true data here must have a high certainty or a low uncertainty," }, { "start": 1981.8400000000001, "end": 1987.68, "text": " and by contrasting this with the fake data in the last layer, it may also be that through back" }, { "start": 1987.68, "end": 1994.88, "text": " propagation, the entire network is shaped such that the latent space will be more optimal for" }, { "start": 1994.88, "end": 2004.3200000000002, "text": " doing this classification. However, I cannot conceive super well how all the effects and" }, { "start": 2004.3200000000002, "end": 2010.88, "text": " counter effects and so on are going to work out. But it would be interesting to think a bit more" }, { "start": 2010.88, "end": 2018.24, "text": " clearly through that. So what we're going to end up with is a probabilistic score for out of" }, { "start": 2018.24, "end": 2025.1200000000001, "text": " distribution detection. Our loss is going to be a mixture of these classification and localization" }, { "start": 2025.1200000000001, "end": 2032.16, "text": " losses and the uncertainty loss added with a given hyperparameter. 
So this is going to be our" }, { "start": 2032.16, "end": 2039.1200000000001, "text": " detector for in distribution. We simply take predicted or we take an inference sample, we" }, { "start": 2039.1200000000001, "end": 2045.1200000000001, "text": " take the predicted bounding box, we'll plug it into this uncertainty estimate right here. So this" }, { "start": 2045.12, "end": 2053.2799999999997, "text": " here is this free energy, we plug it into the sigmoid formula here. And that will give us one," }, { "start": 2053.2799999999997, "end": 2060.48, "text": " if the classifier is very certain and zero, if it's very uncertain, that this is in distribution" }, { "start": 2060.48, "end": 2066.88, "text": " data, we can define a threshold, and that's going to be our out of distribution classifier." }, { "start": 2066.88, "end": 2072.48, "text": " So that's it for the method. They go through a bunch of results. Now I'll shorten the results by" }, { "start": 2072.48, "end": 2078.4, "text": " saying they're just very good at everything like at the data sets they try against the baseline," }, { "start": 2078.4, "end": 2085.92, "text": " baselines. They do ablations, and particularly noteworthy, for example, here is the false" }, { "start": 2085.92, "end": 2091.76, "text": " positive rate where lower is better. You can see if they were just to add an outlier class," }, { "start": 2092.32, "end": 2099.6, "text": " this would hurt the performance quite a bit, like more than other modifications right here," }, { "start": 2099.6, "end": 2107.44, "text": " which I found interesting to see. Yeah, they detect they compare against other outlier detection" }, { "start": 2107.44, "end": 2115.6, "text": " methods. And they they do have, I believe, some samples right here. Needless to say," }, { "start": 2115.6, "end": 2122.48, "text": " I have my concerns, but it does work pretty well. So and I'm just a person that looks at this paper" }, { "start": 2122.48, "end": 2127.7599999999998, "text": " for the first time and hasn't worked in this field at all and hasn't tried anything. So I'm going to" }, { "start": 2127.76, "end": 2135.36, "text": " give the the the right away to the authors right here. But let me know what you think," }, { "start": 2135.36, "end": 2158.32, "text": " and I'll see you next time." } ]
lqtlua-Ylts
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "arxiv", "attention", "peer review", "automate", "distributed", "scalable", "neurips", "score", "objective" ]
Peer Review is outdated and ineffective. SOAR is a new and revolutionary way to distribute scientific reviewing and scale to the new age of faster, better and more significant research. https://arxiv.org/abs/2003.14415 Abstract: Peer review forms the backbone of modern scientific manuscript evaluation. But after two hundred and eighty-nine years of egalitarian service to the scientific community, does this protocol remain fit for purpose in 2020? In this work, we answer this question in the negative (strong reject, high confidence) and propose instead State-Of-the-Art Review (SOAR), a neoteric reviewing pipeline that serves as a 'plug-and-play' replacement for peer review. At the heart of our approach is an interpretation of the review process as a multi-objective, massively distributed and extremely-high-latency optimisation, which we scalarise and solve efficiently for PAC and CMT-optimal solutions. We make the following contributions: (1) We propose a highly scalable, fully automatic methodology for review, drawing inspiration from best-practices from premier computer vision and machine learning conferences; (2) We explore several instantiations of our approach and demonstrate that SOAR can be used to both review prints and pre-review pre-prints; (3) We wander listlessly in vain search of catharsis from our latest rounds of savage CVPR rejections. Authors: Samuel Albanie, Jaime Thewmore, Robert McCraith, Joao F. Henriques Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi everyone. Today we're looking at State-of-the-Art Reviewing, a radical proposal to improve scientific publication. This has been on my mind for a while. The review process for modern science, especially machine learning, is just broken. I've spoken numerous times about the fact that we need to replace it with a better system. Samuel Albanie et al. have actually come up with such a system, and we're going to explore it today. I am a big fan of this work and I'm 100% on board with it. They basically say peer review forms the backbone of modern scientific manuscript evaluation. If you don't know what peer review is in machine learning right now: if you have some genius idea, so here is your idea, that's a light bulb by the way, you write it up into an eight-page PDF. Yes, it must be a PDF, and yes, it must be eight pages. You submit it to a conference, that is, you submit it to be accepted into a conference proceeding. And conference organizers, of course, are just a bunch of people; they can't review the thousands of submissions that come in by themselves. So what they do is recruit experts, which are called peers. Peers are other people who have usually written up their own papers; they can critique each other's papers, and they decide what gets accepted and what doesn't. Now, I've spoken numerous times about how this is super noisy right now. There are way, way too few peers, and they're not experienced enough. So whether or not your particular idea gets accepted is extremely dependent on probability, on a coin flip, usually. The system is just overloaded and makes no sense. And the authors ask the same question: is this fit for purpose in 2020? And we need to replace it, and I support that. You can already see they kind of want to automate this away with their state-of-the-art review score, and the score will be an out-of-10 score that can be integrated into something like arXiv and, you know, displayed right away. So they have some requirements for this new system. What should it do? It should have the ability to scale. Very important. Our current review system doesn't have this, right? Our current review system relies on other humans reviewing your paper, and that means the reviewers need to scale with the number of papers, which just isn't the case currently. So a new review system must have the ability to scale, and automating the reviews away, or scaling them up in a distributed fashion, does this. Speed. Yes, because right now, if I submit my manuscript for review, it takes months to review it, and our scientific progress is faster than that. So a speedier version of peer review is definitely required. And then consistency. And this is the most shocking part, right? There is the grand 2014 NeurIPS experiment, which concluded that 57 percent of papers accepted by one committee were rejected by another committee, and vice versa. Reviewing the exact same papers, different committees came to completely different conclusions to an astounding degree. So basically, you're flipping a coin on whether or not your paper gets accepted. And I think this is just not acceptable. So they propose these three things, speed, scale, consistency, and their new method certainly has them. Now, let's jump down here to where they introduce this state-of-the-art reviewing, SOAR. They say, OK, the quality of a scientific work can be judged along three axes: efficacy, significance and novelty.
So there are these three pillars, right? Efficacy, which means kind of how effective your work is at achieving its goal; in machine learning, that's usually to train some good classifier or something like this. Then the second one is significance: how relevant what you've done is to the field. And the third one is novelty: your scientific work should be an original contribution to the knowledge of mankind, and therefore it should be novel. So the more of these three things you have, of course, the higher your score should be, and here in the middle is where the highest scores should be. So imagine this is kind of a landscape, and you want to grade papers along these three axes. And they have a pretty good method of assessing these in an automated fashion. So, first of all, assessing efficacy. Efficacy, they say, is best assessed by determining if the proposed method achieves a new state of the art. I don't think you can really doubt this. I mean, this is kind of the gold standard of whether your paper is effective: whether or not it achieves state of the art. It might be a bit of a controversial opinion, but if a paper doesn't achieve a state of the art, you know, why do you even care? Like, no one cares. They say that from an implementation perspective, they can kind of exploit a fact of the current research environment, which is that you don't actually have to review this yourself; the authors themselves can be relied upon to state this repeatedly in the text. And this is important. The authors will state that they have state of the art many, many times in the text if they have actually achieved it. If they haven't achieved it, or are not so sure about it, they probably won't repeat it as many times. And this can now be exploited to distribute the work. Basically, imagine all of these reviewers: they don't have to do this work anymore. The work can just be distributed to the authors of the papers themselves, right? By the way, this is kind of an NLP approach to reviewing, NLP mixed with game theory. So the authors themselves, if they have state of the art, will put that into the text a lot; you have to do some stemming and stuff. It's a bit controversial, but the authors here propose to simply count the number of word occurrences of "state-of-the-art", case-sensitive, very important, in the text. It stands to reason that a higher state-of-the-art count is preferable. Of course. All right. The second thing, and this might be a bit controversial: significance. They now make the claim that significance is measured by efficacy. So they simply reuse the efficacy term. If your paper is effective at achieving its goal, you can also say it's significant for the community, because, again, if you have state of the art, then your paper is significant. If you do not have state of the art, then your paper is obviously not significant, because why should it matter if you don't have state of the art on a given task? It's useless. All right. So we weigh it twice. That's pretty good. And then novelty. Here they take much the same approach. They say the authors probably state this, so how much they use the word "novel" in their manuscript will dictate the novelty score.
So here they say, okay, novel. Wow, this pen is failing me. Hello. How much they use the word "novel" in the text will probably be an indication. I don't think so, though. They do do the smart thing of excluding the related work section from this count. They say: we make the key observation that the individuals best placed to make the judgment are the authors themselves, since they have likely read at least one of the works cited in the bibliography. I don't agree here. I think a better method would be to simply count the number of references: the lower the number of references to related work, the higher the novelty. Because if you think about it, if these are current papers and your paper is here, you'll have a lot of related work, so it's not as novel. If you're way out here, you'll have maybe one or two related works. So it's way more novel if you have fewer references. That would be my criticism of this. So this novelty term here, I think, should be replaced by a graph centrality measure, or simply a count of how many references you have would be enough. All right, so they define their score. Their score, as we saw, is a geometric mean between the SOTA term, weighted twice, and the novelty term, which I've criticized. They attach the suffix "out of 10" because an out-of-10 score is pretty interpretable; so they divide by 10 here. So yeah, they say that here: we attach a suffix "out of 10" because that's easy to interpret. And as you saw in the mock arXiv integration right up there, this will then be easy to integrate. They even give code, right? They give code in the paper itself for how to implement this. It's pretty easy. And I think, yeah, even though it's quite a short paper, it's thorough, and it's a good new method, and I think this could revolutionize publishing. And as a bit of a bonus, they even give the official pronunciation of state-of-the-art reviewing, which sounds something like "state-of-the-art reviewing". Pretty smooth. And yeah, with that, I hope you enjoyed this. If the authors could just be a little more subtle next time, that would be great. I guess that's it. Yeah, nothing more. Bye.
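As a rough illustration of how simple this scoring is, here is a minimal Python sketch of the SOAR score as described in this review. The regex, the handling of the related work section, and the exact combination (a geometric mean with the SOTA count entering twice, reported "out of 10") are my reading of the description above, not the authors' released code, so treat the details as assumptions.

```python
import re

def soar_score(manuscript_text, related_work_section=""):
    # Efficacy: case-sensitive count of "state-of-the-art" (hyphen or
    # space variants), counted over the full text, per the review above.
    sota = len(re.findall(r"state[- ]of[- ]the[- ]art", manuscript_text))
    # Novelty: count of the word "novel" with the related work section
    # excluded, as the review says the paper does.
    body = manuscript_text.replace(related_work_section, "")
    novelty = len(re.findall(r"\bnovel\b", body))
    # Geometric mean of (efficacy, significance, novelty), where
    # significance is just efficacy again, so the SOTA term enters twice.
    score = (sota * sota * novelty) ** (1.0 / 3.0)
    # The review mentions both dividing by 10 and attaching an
    # "out of 10" suffix; here we simply attach the suffix.
    return f"{score:.1f}/10"

# Hypothetical usage:
print(soar_score("Our novel method is state-of-the-art. Truly state-of-the-art."))
```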
[ { "start": 0, "end": 8.6, "text": " Hi everyone. Today we're looking at state-of-the-art reviewing a radical proposal to improve scientific publication." }, { "start": 8.6, "end": 17.8, "text": " This has been on my mind for a while. The review process for modern science, especially machine learning, is just broken." }, { "start": 17.8, "end": 23.400000000000002, "text": " I've spoken numerous times about the fact that we need to replace it with a better system." }, { "start": 23.4, "end": 30, "text": " Samuel Albany at Al have actually come up with such a system and we're going to explore it today." }, { "start": 30, "end": 35.6, "text": " I am a big fan of this work and I'm 100% on board with this." }, { "start": 35.6, "end": 42.4, "text": " They basically say peer review forms the backbone of modern scientific manuscript evaluation." }, { "start": 42.4, "end": 47.8, "text": " If you don't know what peer review is in machine learning right now, if you have some genius idea," }, { "start": 47.8, "end": 54.199999999999996, "text": " so here is your idea, that's a light bulb by the way, you write it up into an eight-page PDF." }, { "start": 54.199999999999996, "end": 64.19999999999999, "text": " Yes, it must be a PDF and yes, it must be eight pages. You submit it to a conference, which is some kind of..." }, { "start": 64.19999999999999, "end": 68.6, "text": " So you submit it to be accepted in a conference proceeding." }, { "start": 68.6, "end": 79.39999999999999, "text": " And if conference organizers, of course, they're just a bunch of people, they can't review these 1,000 million applications that come by themselves." }, { "start": 79.39999999999999, "end": 87, "text": " So what they do is they recruit experts, which are called peers. So peers are other people." }, { "start": 87, "end": 95, "text": " These are called peers and they have usually written up their own papers and they can critique each other's paper" }, { "start": 95, "end": 102.2, "text": " and they decide what gets accepted and what doesn't. Now, I've spoken numerous times of how this is super noisy right now." }, { "start": 102.2, "end": 106.4, "text": " They're way, way, they're not enough peers, they're not experienced enough." }, { "start": 106.4, "end": 117.6, "text": " So whether or not your particular idea gets accepted is extremely dependent on probability, on a coin flip usually." }, { "start": 117.6, "end": 126, "text": " And it's just overloaded and just makes no sense. And they ask the same question, is this fit for purpose in 2020?" }, { "start": 126, "end": 136.79999999999998, "text": " And we need to replace it and I support. So they, you can already see they kind of want to automate this away" }, { "start": 136.79999999999998, "end": 145.4, "text": " with their state of the art review score and the score will be an out of 10 score that can be integrated into something like archive." }, { "start": 145.4, "end": 156.4, "text": " And, you know, display right away. So they have some requirements to this new system." }, { "start": 156.4, "end": 164.20000000000002, "text": " What should it be done? It should have the ability to scale. Very important. Our current review system doesn't have this, right?" }, { "start": 164.20000000000002, "end": 171.8, "text": " Our current review system relies on other humans reviewing your paper." }, { "start": 171.8, "end": 178, "text": " And that means that the reviewers need to scale with the amount of papers, which just isn't the case currently." 
}, { "start": 178, "end": 182.60000000000002, "text": " So a new review system must have the ability to scale. Right." }, { "start": 182.60000000000002, "end": 190, "text": " And then, you know, automating the reviews away or scaling it up in a distributed fashion does this." }, { "start": 190, "end": 197.4, "text": " Speed. Yes, because right now, if I submit my manuscript for review, it takes them months to review it." }, { "start": 197.4, "end": 207.6, "text": " And our science progress is faster than that. So a speedy, more speedy version of peer review is definitely required." }, { "start": 207.6, "end": 211, "text": " And then consistency. And this is the most shocking part, right?" }, { "start": 211, "end": 219, "text": " There is a the grand 2014 NURIPS experiment, which concluded that" }, { "start": 219, "end": 229, "text": " 57 percent of papers accepted by one committee were rejected by another committee and vice versa. Reviewing the exact same paper," }, { "start": 229, "end": 233.8, "text": " different committees came to completely different conclusions to an astounding degree." }, { "start": 233.8, "end": 239.6, "text": " So basically, you're flipping a coin of whether or not your paper gets accepted or not." }, { "start": 239.6, "end": 243.2, "text": " And I think this is just not acceptable." }, { "start": 243.2, "end": 251.39999999999998, "text": " And so they propose these three things, speed, scale, consistency, and their new method certainly has this." }, { "start": 251.39999999999998, "end": 260.8, "text": " Now, let's jump down here where they introduce this state of the art reviewing SOAR." }, { "start": 260.8, "end": 270.8, "text": " So they say, OK, the quality of a scientific work can be judged along three axes, efficacy, significance and novelty." }, { "start": 270.8, "end": 275.8, "text": " So there are these three pillars, right?" }, { "start": 275.8, "end": 286, "text": " Efficacy, which means is is kind of how how effective is your work in achieving the goal in machine learning?" }, { "start": 286, "end": 292.2, "text": " That's usually to train some good classifier or something like this." }, { "start": 292.2, "end": 299.8, "text": " Then the other one, sorry, is significance, right?" }, { "start": 299.8, "end": 308.40000000000003, "text": " Significance is how relevant is what you've done to the to the field." }, { "start": 308.40000000000003, "end": 314.2, "text": " Right. And the third one is novelty." }, { "start": 314.2, "end": 323, "text": " So, you know, in your scientific work should be an original contribution to the knowledge of mankind and therefore it should be novel." }, { "start": 323, "end": 327.8, "text": " Right. So the more of these three things you have, of course," }, { "start": 327.8, "end": 333.6, "text": " the higher your score should be. And here in the middle is where the highest scores should be." }, { "start": 333.6, "end": 340.40000000000003, "text": " So imagine this is kind of a landscape. And so you want to grade papers along these three axes." }, { "start": 340.40000000000003, "end": 348.7, "text": " But they have a pretty good method of of of assessing these in an automated fashion." }, { "start": 348.7, "end": 355.5, "text": " So, first of all, assessing efficacy, efficacy, they say," }, { "start": 355.5, "end": 361.9, "text": " is best assessed by determining if the proposed method achieves a new state of the art." }, { "start": 361.9, "end": 367, "text": " I think that's not really I don't think you can really doubt this." 
}, { "start": 367, "end": 373.9, "text": " I mean, this this is this is kind of the gold standard of of whether your paper is effective," }, { "start": 373.9, "end": 376.3, "text": " is whether or not it achieves state of the art." }, { "start": 376.3, "end": 381, "text": " I mean, I it might be a bit of a controversial opinion," }, { "start": 381, "end": 385.9, "text": " but if a paper doesn't achieve a state of the art, it's you know, why?" }, { "start": 385.9, "end": 389.6, "text": " Why do you even care? Like no one cares." }, { "start": 389.6, "end": 393.1, "text": " So from they say from an implementation perspective," }, { "start": 393.1, "end": 402.3, "text": " they can they can use they can kind of abuse a fact of the current research environment is that you don't actually have to review this yourself." }, { "start": 402.3, "end": 409.5, "text": " But the authors themselves can be relied upon to state this repeatedly in the text." }, { "start": 409.5, "end": 412.1, "text": " Right. And this this is important." }, { "start": 412.1, "end": 416.2, "text": " So the authors will state that they have state of the art many," }, { "start": 416.2, "end": 419.8, "text": " many times in the text if they have actually achieved it." }, { "start": 419.8, "end": 422.2, "text": " If they haven't achieved it or not so sure about it," }, { "start": 422.2, "end": 424.6, "text": " they probably won't repeat it as many times." }, { "start": 424.6, "end": 431.3, "text": " But this is can be can kind of abuse now to distribute it." }, { "start": 431.3, "end": 436.8, "text": " Basically, you don't imagine now these these all of these reviewers." }, { "start": 436.8, "end": 439.7, "text": " They don't they don't have to do this work anymore." }, { "start": 439.7, "end": 443.90000000000003, "text": " They can just distribute to all the authors of their own papers," }, { "start": 443.90000000000003, "end": 448.90000000000003, "text": " right? Because the authors in the text by the way," }, { "start": 448.90000000000003, "end": 456.8, "text": " the text is structures is kind of an NLP approach to reviewing kind of NLP mixed with game theory." }, { "start": 456.8, "end": 461.1, "text": " Right. So the other authors themselves if they have state of the art," }, { "start": 461.1, "end": 466.7, "text": " you have to do some stemming and stuff, but they will put that into the text a lot." }, { "start": 466.7, "end": 468.8, "text": " So it's a bit controversial," }, { "start": 468.8, "end": 478.7, "text": " but the the authors here propose to simply count the number of word occurrences of state of the art case" }, { "start": 478.7, "end": 483, "text": " and sensitive very important in the text, right?" }, { "start": 483, "end": 485.9, "text": " It stands to reason that a higher state of the art count is preferable." }, { "start": 485.9, "end": 489.09999999999997, "text": " Of course." }, { "start": 489.09999999999997, "end": 489.8, "text": " All right." }, { "start": 489.8, "end": 492.59999999999997, "text": " So the second thing so this might be a bit controversial." }, { "start": 492.6, "end": 500.6, "text": " The second thing significance and they now make the claim significance is measured by efficacy." }, { "start": 500.6, "end": 503.70000000000005, "text": " So they simply the efficacy term." 
}, { "start": 503.70000000000005, "end": 506.70000000000005, "text": " So if your paper is effective at achieving its goal," }, { "start": 506.70000000000005, "end": 510.90000000000003, "text": " you can also say it's significant for the community because again," }, { "start": 510.90000000000003, "end": 516.8000000000001, "text": " significance should like if you have state of the art," }, { "start": 516.8000000000001, "end": 519.3000000000001, "text": " then your paper is significant." }, { "start": 519.3, "end": 523.5999999999999, "text": " If you do not have state of the art, then your paper is obviously not significant" }, { "start": 523.5999999999999, "end": 530.4, "text": " because why should it matter if you don't have state of the art in a given task?" }, { "start": 530.4, "end": 532.1999999999999, "text": " It's useless." }, { "start": 532.1999999999999, "end": 532.5, "text": " All right." }, { "start": 532.5, "end": 534.3, "text": " So we weigh it twice." }, { "start": 534.3, "end": 535.5, "text": " That's pretty good." }, { "start": 535.5, "end": 541.5, "text": " And then novelty now here they take much of the same approach." }, { "start": 541.5, "end": 543.5999999999999, "text": " They say the authors probably state this." }, { "start": 543.5999999999999, "end": 549.0999999999999, "text": " So how much they use the word novel in their manuscript will dictate." }, { "start": 549.1, "end": 554.6, "text": " So here they say, okay, they novel." }, { "start": 554.6, "end": 557, "text": " Wow, this is failing me." }, { "start": 557, "end": 557.9, "text": " Hello." }, { "start": 557.9, "end": 563.4, "text": " How much they use the word novel in the text will probably be an indication." }, { "start": 563.4, "end": 565.2, "text": " I don't think so though." }, { "start": 565.2, "end": 574.9, "text": " They do do the smart thing of they include the works." }, { "start": 574.9, "end": 579.6, "text": " They include the related work section from this." }, { "start": 579.6, "end": 584.1999999999999, "text": " Sorry, they exclude the related work section." }, { "start": 584.1999999999999, "end": 587.5, "text": " They say we make the key observation that individuals best play to make the judgment" }, { "start": 587.5, "end": 591.3, "text": " are the authors themselves since they have likely read at least one of the works" }, { "start": 591.3, "end": 593.9, "text": " cited in the bibliography." }, { "start": 593.9, "end": 595, "text": " I don't agree here." }, { "start": 595, "end": 601.5, "text": " I think a better method would be to simply count the number of references" }, { "start": 601.5, "end": 607.7, "text": " and the lower the amount of references to related work, the higher the novelty." }, { "start": 607.7, "end": 617.7, "text": " Because if you think, if these are current papers and your paper is here," }, { "start": 617.7, "end": 620.2, "text": " you'll have a lot of related work." }, { "start": 620.2, "end": 622.3, "text": " So it's not as novel." }, { "start": 622.3, "end": 627.2, "text": " If you're way out here, you'll have maybe one or two related works." }, { "start": 627.2, "end": 631.1, "text": " So it's way more novel if you have less references." }, { "start": 631.1, "end": 634, "text": " So this would be my criticism of this." }, { "start": 634, "end": 640, "text": " So this novelty thing here, I think this term should be replaced by a graph" }, { "start": 640, "end": 646.6, "text": " centrality measure or simply a count of how many references you have would be enough." 
}, { "start": 646.6, "end": 649.4, "text": " All right, so they define their score." }, { "start": 649.4, "end": 653.6, "text": " Their score, as we saw, is the SOTA term weighted twice." }, { "start": 653.6, "end": 661, "text": " A geometric mean between that and the novelty term, which I've criticized." }, { "start": 661, "end": 669.2, "text": " They add the suffix out of 10 because out of 10 score is pretty interpretable." }, { "start": 669.2, "end": 673.6, "text": " So they divide by 10 here." }, { "start": 673.6, "end": 677, "text": " So yeah, they say that here." }, { "start": 677, "end": 682.8, "text": " We attach a suffix out of 10 because that's easy to interpret." }, { "start": 682.8, "end": 689.3, "text": " And as you saw in the kind of archive implementation right here," }, { "start": 689.3, "end": 694.5999999999999, "text": " sorry, this will be then easy to integrate right here." }, { "start": 694.5999999999999, "end": 700.0999999999999, "text": " So they even give code, right?" }, { "start": 700.0999999999999, "end": 705, "text": " They give code in the paper themselves of how to implement this." }, { "start": 705, "end": 708.8, "text": " It's pretty easy." }, { "start": 708.8, "end": 714.0999999999999, "text": " And I think, yeah, even though it's quite a short paper," }, { "start": 714.0999999999999, "end": 719.1999999999999, "text": " it's thorough and it's a good new method." }, { "start": 719.2, "end": 722.6, "text": " And I think this could revolutionize publishing." }, { "start": 722.6, "end": 725, "text": " And they even, so as a bit of a bonus," }, { "start": 725, "end": 728.9000000000001, "text": " they give the official pronunciation of state of the art reviewing," }, { "start": 728.9000000000001, "end": 734.8000000000001, "text": " which is something like state of the art reviewing pretty smooth." }, { "start": 734.8000000000001, "end": 739.8000000000001, "text": " And yeah, with that, I hope you enjoyed this." }, { "start": 739.8000000000001, "end": 744.4000000000001, "text": " And if the authors could just be a little more subtle next time," }, { "start": 744.4000000000001, "end": 747.2, "text": " that would be great." }, { "start": 747.2, "end": 756.6, "text": " And I guess you'd have to go." }, { "start": 756.6, "end": 758.4000000000001, "text": " Yeah, nothing more." }, { "start": 758.4, "end": 784.4, "text": " Bye." } ]
xJrKIPwVwGM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Rethinking Attention with Performers (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nlp", "natural language processing", "natural language understanding", "data science", "transformer", "attention", "attention mechanism", "transformers", "attention is all you need", "gpus", "tpu", "linformer", "reformer", "explanation", "imagenet64", "kernels", "gaussian kernel", "softmax", "softmax kernel", "approximation", "random features", "random positive features", "random fourier features", "google", "favor", "machine translation" ]
#ai #research #attention Transformers have huge memory and compute requirements because they construct an Attention matrix, which grows quadratically in the size of the input. The Performer is a model that uses random positive orthogonal features to construct an unbiased estimator to the Attention matrix and obtains an arbitrarily good approximation in linear time! The method generalizes beyond attention and opens the door to the next generation of deep learning architectures. OUTLINE: 0:00 - Intro & Outline 6:15 - Quadratic Bottleneck in Attention Mechanisms 10:00 - Decomposing the Attention Matrix 15:30 - Approximating the Softmax Kernel 24:45 - Different Choices, Different Kernels 28:00 - Why the Naive Approach does not work! 31:30 - Better Approximation via Positive Features 36:55 - Positive Features are Infinitely Better 40:10 - Orthogonal Features are Even Better 43:25 - Experiments 49:20 - Broader Impact Statement 50:00 - Causal Attention via Prefix Sums 52:10 - Code 53:50 - Final Remarks & Conclusion Paper: https://arxiv.org/abs/2009.14794 Code: https://github.com/google-research/google-research/tree/master/performer Blog: https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html Kernels on ML Street Talk: https://www.youtube.com/watch?v=y_RjsDHl5Y4 My Video on Linformer: https://www.youtube.com/watch?v=-_2AF9Lhweo My Video on Reformer: https://www.youtube.com/watch?v=i4H0kjxrias My Video on Attention: https://www.youtube.com/watch?v=iDulhoQ2pro Abstract: We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers. 
Authors: Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at Rethinking Attention with Performers by researchers of Google, the University of Cambridge, DeepMind and the Alan Turing Institute. This paper is yet another paper in the quest to make transformers more performant, and what better name to give to a technique than the Performer. So Performers are a new class of models. They try to approximate the transformer. If you don't know what a transformer is, I've done a ton of videos on transformers and attention mechanisms, so there's more than enough material to look that up. Today we'll talk about Performers. And Performers, as I already said, approximate transformers, and they do so without running into the classic transformer bottleneck, which is that the attention matrix in the transformer has space and compute requirements that are quadratic in the size of the input. That limits how much input you can put into the model: how long a text you can input if you work with text, or how big the images are that you can work with. This is all kind of bad when you use transformers. The Performers get around this with a technique they call fast attention via positive orthogonal random features, abbreviated FAVOR+. They use this FAVOR+ to get around the bottleneck, and what's interesting is that FAVOR+ (I'll just call it FAVOR), this fast attention, is potentially useful beyond transformers. So it's been developed here in the realm of transformers, but they say it may be of independent interest for scalable kernel methods. You'll see that what they do is approximate the attention matrix by decomposing it, but they do it in a special way. If you know what random Fourier features are, maybe you can think ahead a little bit; if not, we'll get into it for sure. I think honestly this might be one of the enabling next mini breakthroughs in deep learning; not a big breakthrough, but a mini breakthrough. I remember a time when we used sigmoid and tanh nonlinearities. Believe it or not, you young kids: before deep learning really took off, it was the sensible thing to use sigmoid and tanh nonlinearities everywhere in your neural networks. First of all, they were differentiable, so that was cool. And then it was sort of how nature does it: an approximation to the step function in the real neuron, and so on. It was just well motivated, so people thought that must be the way to go. But then of course it turned out that ReLUs are much easier, much more stable, give much better results, and don't saturate, all these cool things. This here feels like the same thing, because right now we're doing this softmax thing in attention. And it's very important, because it normalizes the attention matrix: what comes out is kind of a distribution over the inputs. So it's well motivated. And like the sigmoid, it has this exponential in there. The FAVOR algorithm is going to approximate this softmax, but it can be used to approximate much more.
So maybe we're going to find that if we swap out the nonlinearity in there, we might be able to build much better transformers, or whatever the models will then be called, Performers I guess; they already do this with ReLUs in this very paper. So the Performer is going to be fully compatible with the regular transformer, and with strong theoretical guarantees: unbiased or nearly unbiased estimation of the attention matrix, uniform convergence and low estimation variance. The difference with the Performer is this: there have been methods before that decompose the attention matrix into low-rank matrices, but those either don't work, or they rely on priors; you're assuming that your attention matrix has a certain structure, and if it doesn't, it sort of fails. This method here is going to be an unbiased estimator, and it's going to converge to the attention matrix as you add more of these random features. This is stated here: provably not relying on any priors, fully compatible with regular transformers, which means that you can take a transformer checkpoint and plug it into this framework, and then you just have to fine-tune a little bit to use the checkpoint of a regular transformer, which is pretty cool. So we'll go through the paper. It's quite a heavy paper, quite a math-heavy paper, and we won't go through all of it. I just want you to get the idea of what these Performers do, what the reasoning behind them is, how you might be able to work with or extend them, and where it's going from here. As always, if you like content like this, don't hesitate to share it out and tell your friends about it. All right. So the problem with attention, or the problem with transformers: I've explained this a million times and you can go look it up. If you want to map a sequence of layer L into a sequence (or a set, or whatnot) of layer L plus one, you need to compute these attention weights. The attention weights go from each token here to each token in the next layer; you compute one of these weights for every pair. So there is this matrix, called A, the attention matrix, and A is going to be of size L by L. And that is a problem if you have long sequences; you can already see this. The way this A comes to be is that, conceptually, the upper layer (it's all the same layer, but conceptually the upper layer) emits things called queries, and the lower layer emits things called keys and values. Now the keys and the queries go together into matrices: you multiply the keys and the queries, then you run this (and this is the problem) through a softmax nonlinearity to basically get a distribution, and then you multiply it by the values. So the query-key matrix, this attention matrix, tells you how to aggregate the values. If it weren't for the softmax, you could think: if the dimensionality of the queries, keys and values is some small d, then here you'd have L by d, here you'd have d by L for the transposed one, and here L by d again. But because you have to do the softmax, you have to compute this product first, which gives you this L by L matrix, which is the terrible thing.
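To make the bottleneck concrete, here is a minimal NumPy sketch of standard softmax attention (my own illustration, not the paper's code); note the L by L matrix that gets materialized:

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Q, K, V: (L, d) for one attention head, no batching
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                  # (L, L): quadratic in sequence length
    logits -= logits.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    A = np.exp(logits)
    A /= A.sum(axis=-1, keepdims=True)             # row-wise softmax: each row is a distribution
    return A @ V                                   # (L, d)
```

Both memory and compute are dominated by the (L, L) `logits` and `A` arrays, which is exactly what the Performer avoids.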
However, if you could somehow decompose the softmax operation, you could first multiply keys and values, which gives you a d by d matrix, and then multiply that by the Q matrix, which would be much, much easier if d is smaller than L. It certainly wouldn't grow quadratically in L; it would just grow linearly in space and time. So here the attention mechanism is formulated out. The attention mechanism is made of queries, keys and values, and it's given by this formula right here. Now there is a bit of a technicality; I wasn't exactly correct in what A is. I called this thing here A, but they are very specific about what they mean by A: they simply mean the exponential function of the normalized queries times keys. To get the actual softmax, you have to normalize by this D here; you see the inverse is taken. So D is constructed from A and normalizes A, but the normalization is of secondary importance. The important part here is that this exponential cannot be easily decomposed: it's not like you can decompose the inner multiplication into two exponentials or something, otherwise the problem would be solved. So what is this paper doing? Exactly what I just said was impossible. You have this matrix A right here, and you multiply it by V (again, forget about the normalization for now). They decompose A into Q prime and K prime. They are called prime because they are not the queries and the keys, since the queries and the keys go into the exponential. So it's going to be that Q prime times K prime transposed is approximately equal to the exponential function of Q times K, maybe normalized by the square root of d. You can see that this isn't decomposable, and yet they decompose it. And the question is how, because there have been papers before that try to decompose the attention matrix; the Linformer, maybe, and there is also the Reformer, which uses LSH and so on. So there have been a number of tricks, but they all don't perform as well, which this paper also shows empirically, and they all rely on certain assumptions about the attention matrix; they are all not unbiased estimators in general. This paper is going to be an unbiased estimator, and they do this via a kernel framework. So first of all, they make this problem more general. They say: we have our attention matrix A, and the ij-th entry is some kernel function of the query i and the key j. In our case, this is going to be the exp of query times key: the inner product of query and key, pulled through the exponential function. However, you can think of any sort of kernel function. I'm not going to explain kernels in more detail here; we had a fantastic Machine Learning Street Talk episode (that's our podcast) where Alex Stenlake explained kernels in great detail, with very precise language, and very understandably as well. What I'm going to say is that kernels allow you to do things like this. You can think of kernels as connecting two things: they represent an inner product in some other space. So the kernel function of two inputs right here will be equal to some inner product of the two inputs when pulled through this function phi right here.
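If we had such a feature map phi, attention becomes linear in L, because we can regroup the matrix products. Continuing the NumPy sketch from above; phi is left abstract for now, and the 1/sqrt(d) scaling is assumed folded into phi or the inputs:

```python
def linear_attention(Q, K, V, phi):
    # phi maps (L, d) -> (L, m); phi(Q) @ phi(K).T approximates exp(Q @ K.T)
    Qp, Kp = phi(Q), phi(K)        # (L, m) each
    KV = Kp.T @ V                  # (m, d): computed first, so no (L, L) matrix appears
    Z = Qp @ Kp.sum(axis=0)        # (L,): approximates the softmax row sums (the D matrix)
    return (Qp @ KV) / Z[:, None]  # (L, d)
```

Everything here is linear in L and never stores an L by L matrix; the whole game is now finding a phi whose inner products approximate the softmax kernel.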
And that's what we're going to use. Now usually, when you learn about kernels, you do it the other way around. You say: we would like to compute in this very high-dimensional space, but we can't; we can't do the inner products, we can't map this function phi explicitly. So instead we're going to use this kernel function, and that's going to be equal if you pick the right kernel function for the particular phi. In this paper, we're going to do it the other way around, because we say: well, this thing here is the softmax function, and that's just a beast, right? We can't possibly compute that. However, if we could find out what inner product that corresponds to, in what other space, we could just go to that other space and perform an inner product. And this thing over here is linear; this is a linear function, while this here, our softmax, is the nonlinear function. So you can see that by going this way, by finding what the phi function for the softmax kernel is, we can construct all of this attention business in a linear fashion. And that's what this paper does. It allows you to find these Q prime and K prime matrices such that, as over here, this is the kernel function and this here is linear. And then you can simply first multiply K prime by V, and then multiply Q prime by the result, and that will alleviate you of having this giant attention matrix. So how do they do it? If you know about random Fourier features, this is going to be a very similar thing right here. They're not going to explicitly construct the high-dimensional space such that this is exactly equal; they're going to construct an approximation, and the approximation you can make arbitrarily good. And you do that via the following. So here you see: how do I have to map something into this other-dimensional space, where this whole softmax business is just a linear operation? What you would do ultimately is take your queries and map them through this phi, and take your keys and also map them through this phi. This gives you query prime and key prime, and then in the higher (or lower, whatever) dimensional space, you take the inner product. And the inner product between the two is going to be approximately as if you had taken the original q and k, multiplied them, and put them through a softmax. How do we do it? Here they define what the function needs to look like such that this holds. The function, and they go very general here, is going to look like the following. You have one function here, called h, that is a deterministic function of your input, in front; it's kind of a normalization factor. And you see that here comes a vector: we are mapping the input to some dimensional space, and this is that vector. Now you have to pay a bit of attention. Inside this vector, you have l different sub-vectors, all concatenated after each other. So you have f1, then f2, f3, f4, and so on, until fl. You have all these sub-vectors; it doesn't matter ultimately, you just concatenate them all.
But it's important to keep in mind that within each of these sub-vectors, you always have the same repeated term: this omega times your x, the inner product between omega and x. There are omega 1 through omega m, and in each sub-vector you have this repeated. So what are these omegas? First of all, the omegas are random vectors drawn from some distribution; in practice, this is going to be a normal distribution like this one here, an isotropic normal distribution. And what are the f's? The f's, f1 through fl, are going to be deterministic functions. In an example they give right here, f1 is the sine function and f2 is the cosine function. And then you have to specify h, and h in this particular example is just the constant function one, though in general it can be a function of x. So let's break this down a little. We have x, and x is going to be a vector: one of the queries or one of the keys, one column or one row, however you conceptualize it, and we wonder how we want to map it. Then what we're going to do is take a bunch of omegas. Now, it's important that the omegas are random: they come from this isotropic normal distribution, but they're going to remain the same throughout the algorithm. There is a method to resample them, but just conceptualize it as: at the beginning of the algorithm, you choose these omegas and then you fix them. The omegas are going to be vectors too, just a bunch of random vectors; let's take three. What you're going to do is compute the inner product between your x and each of the omegas. This gives you omega 1 x, omega 2 x, omega 3 x; these inner products are going to be numbers. And then you have a collection of functions: maybe function one is the sine function, function two is the cosine function. Now you make a table: you take each of the products you computed and put it through each of the functions. So this is going to be sine of omega 1 x, cosine of omega 1 x, sine of omega 2 x, and so on. Then you take this table and flatten it to a big vector: sine of omega 1 x, cosine of omega 1 x (the ordering doesn't matter, as long as you always do it the same), sine of omega 2 x, and so on, until cosine of omega 3 x. So that's the vector they're constructing, and these are those random features. This here is going to be the vector that you're constructing. What you do, geometrically: your x is somewhere here, and it's a bit hard to draw in low-dimensional space because you don't get the intuition, but if this is your x, you're going to choose a bunch of these omegas, randomly sampled from an isotropic Gaussian. So this is omega 1, maybe omega 2, omega 3, omega 4. And you're going to compute the inner product between x and each of the omegas.
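To pin down this table construction before continuing with the geometric picture, here it is as code, continuing the NumPy sketch above; the function and argument names are my own choices, not the paper's:

```python
def make_feature_map(omegas, fs, h):
    # omegas: (m, d) random projection vectors, sampled once and then fixed
    # fs: list of elementwise functions, e.g. [np.sin, np.cos]
    # h: deterministic prefactor, mapping (L, d) inputs to (L,) scalars
    m = omegas.shape[0]
    def phi(X):                                  # X: (L, d)
        proj = X @ omegas.T                      # (L, m): omega_j . x for every row x
        table = [f(proj) for f in fs]            # one (L, m) block per function f
        feats = np.concatenate(table, axis=-1)   # flattened table: (L, m * len(fs))
        return h(X)[:, None] * feats / np.sqrt(m)
    return phi
```

The 1/sqrt(m) makes the inner product of two feature vectors an average over the m random omegas, which turns it into a Monte Carlo estimate of the kernel.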
Okay, so you're essentially computing the projections onto each other, or the angle of x to each of the omegas, however you want to conceptualize it. And then you make features out of these angles; they sort of tell you how your vector stands relative to each of these random directions. Now, the reason I say it's difficult in low dimensions is because here I have more omegas than the dimensionality, which is two right here, and that makes no sense: as soon as I have two vectors that are not collinear in two-dimensional space, if I project x onto both of them, I already have x fully represented; there's no need to have more of them. However, if you are in super-duper high-dimensional space and you don't have as many features, then you get some interesting approximation properties. Also, this was just an example: we don't always have the sine and the cosine here. You can have only one function, like this f1; you don't need two functions, you can have one, you can have many. And you can choose how many omegas you sample; that is a parameter. So you have a couple of choices. I want to make it clear that the choice of h and the f's go hand in hand: together they determine what the phi function is, and thereby which kernel function this phi corresponds to, if you construct it like this. So by choosing the correct functions, you tell the construction which kernel you would like to approximate, and then the more omegas you sample, the more accurately you approximate that kernel, and you can give approximation guarantees. As they say, the softmax kernel is given by this thing here, which we've already seen. And how do we approximate the softmax kernel? They show that right here: the softmax kernel is approximated by this thing right here. It's a bit of an ugly formula, and it contains the Gaussian kernel, the Gauss kernel. They say: if we choose h equal to one, just a constant factor, and f1 and f2 to be the sine and cosine, and we choose the distribution to be a normal distribution, isotropic around the mean, then this is the Gaussian kernel. And then we simply have to choose h differently, this factor in front, to make it into the softmax kernel. As long as we put this factor in front, you can see that this represents an inner product. So you have to think of the decomposition: f1 the sine, f2 the cosine, which makes it the Gaussian kernel, and then this factor in front of it for h, which makes it the softmax kernel. So if we choose h and f like this, then when we map our queries and keys through the phi function and take the inner product between them, that will approximate, depending on how many omegas we've sampled, better or worse, the result as if we had multiplied them first and then put them through the softmax function. All right. So you can see how this becomes much easier, because we can independently put them through the phi.
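Plugging these choices into the make_feature_map sketch above: f1 = sin, f2 = cos, Gaussian omegas, and, as my own reading of the prefactor, h(x) = exp(||x||^2 / 2), which is the standard factor that turns the random-Fourier Gaussian kernel into the softmax kernel. A rough sanity check:

```python
rng = np.random.default_rng(0)
d, m = 16, 1024
omegas = rng.normal(size=(m, d))                 # isotropic Gaussian, sampled once, then fixed

h_trig = lambda X: np.exp((X ** 2).sum(axis=-1) / 2.0)
phi_trig = make_feature_map(omegas, [np.sin, np.cos], h_trig)

# scale inputs down (as the 1/sqrt(d) in attention effectively does) so exp(q . k) stays moderate
q = rng.normal(size=(1, d)) / d ** 0.25
k = rng.normal(size=(1, d)) / d ** 0.25
approx = (phi_trig(q) @ phi_trig(k).T).item()    # phi(q) . phi(k)
exact = np.exp(q @ k.T).item()                   # the softmax kernel exp(q . k)
print(approx, exact)                             # close for large m; equal in expectation
```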
And then it's just a linear operation, which allows us to do our trick where we multiply K and V first and then multiply by Q, instead of the other way around, which we're forced to do when we apply the softmax. This was a long, long way to get here, but I hope you're with me, and this is actually pretty straightforward so far. Now, renormalization we can take care of easily. But there is a problem, and this, they argue, is why it hasn't been proposed so far: it doesn't work like this. Even though you approximate this kernel fairly well, it's a bad approximation. They say: there is however a caveat here. The attention module from equation one constructs, for each token, a convex combination of value vectors with coefficients given as corresponding renormalized kernel scores. That is why kernels producing non-negative scores are used. Applying random feature maps with potentially negative dimension values leads to unstable behaviors, especially when kernel scores close to zero (which is the case for lots of entries of A corresponding to not-relevant tokens) are approximated by estimators with large variance in such regions. This results in abnormal behaviors, e.g. negative-diagonal-value renormalizers, and consequently either completely prevents training or leads to suboptimal models. So what they're saying is that when you use a softmax, you always get positive values. If I have a bunch of numbers, positive, negative, very positive, negative, and I run them through a softmax, I get out a distribution, like a positive histogram. And now I'm trying to approximate this by the formula right here, which gives me sine and cosine coefficients, and I linearly multiply two vectors together, which definitely means I can get negative entries. So the renormalization then has to somehow take care of that. And they say that especially around zero, when the original softmax matrix would have values close to zero, this approximation is really bad and has high variance. They also argue that a lot of attention values are close to zero, because we know attention is sort of sparsifying, just by how the softmax works: it exaggerates the largest inner products and really dampens the low inner products. So that's why this doesn't work: it's a good approximation, but it has high variance in the wrong places, namely around zero, where most values are. They call this estimator SM trig, the softmax approximation with m sampled features, trig because it uses the sine and cosine functions. And now they're trying to remedy this, and for that they propose a different decomposition, a different approximation to the softmax kernel. They say we can also approximate the softmax kernel with the following formula. I'm not going to go through it; they have a proof for this. But this is the formula: you sample these omegas again, and then this is the inner product that approximates the softmax kernel. And you can further reduce this to this thing right here.
So there is a deterministic part right here, which is given by that, and then this cosh; cosh is the hyperbolic cosine, so cosh of x is e to the x plus e to the negative x, divided by two. So this function approximates the softmax, and that's just something you'll have to take from their proof. However, you can now see that this can be fairly easily represented as an inner product; you already see it here: this is the part that comes from x, and this is the part that comes from y. To note this in our notation from earlier: the distribution that we sample the omegas from is again a normal distribution, and the h function, the prefactor, is simply made up of the norm of x put through the exponential function. And then we actually have two options right here. I don't even know why they put the first one; the second option makes more sense, and there's a bit more of a factor right here. So you have two functions: exp of u and exp of negative u. Remember, this is where we had sine and cosine before; now we have exp of u and exp of negative u. And we can quickly check that this gives us the same thing. If we take the inner product of these feature vectors: let's just say we sample one single omega. So we have our x, we sample one single omega, and x is going to give us a vector with two sub-vectors; since we have two functions, each sub-vector is of length one. The first entry is going to be e to the omega x, and the second entry is going to be e to the negative omega x. If we put y through the same (and of x and y you can think as query and key), that's going to be e to the omega y and e to the negative omega y. If we now take the inner product, resolving the exponentials right away, that gives us e to the omega times (x plus y), plus e to the negative omega times (x plus y). And there is a normalization factor; that's why the square root of two is here, that comes in somewhere to give us this normalization. So this is exactly the hyperbolic cosine of omega times z, where z is x plus y, as they say somewhere. Okay. So if we choose f1 and f2 to be exp of u and exp of negative u, then when we perform the inner product, we get out exactly this formula number seven right here, and that is an approximation of the softmax kernel, of the softmax function. It's just a different approximation than before. And the cool thing about this approximation is that it only ever has positive values: the entries of these vectors are exponentials, and the factor in front is also an exponential, so these are all going to be positive features, which is very, very nice. And they also show this theoretically. So here, this kind of funky graphic shows the ratio of the approximation mistake of the original approximation that we discussed and this new positive approximation that we just built right now.
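Before looking at that plot: the positive variant drops straight into the earlier sketch, replacing sine and cosine with exponentials and flipping the sign in the prefactor. Again my own rendering, continuing the code above; note the minus sign in h compared to the trig version:

```python
h_pos = lambda X: np.exp(-(X ** 2).sum(axis=-1) / 2.0)   # minus sign, unlike h_trig

# two-function variant: f1(u) = exp(u), f2(u) = exp(-u), with an extra 1/sqrt(2)
phi_pos = make_feature_map(omegas, [np.exp, lambda u: np.exp(-u)],
                           lambda X: h_pos(X) / np.sqrt(2.0))

# simpler single-exponential variant mentioned below: f(u) = exp(u) alone
phi_pos1 = make_feature_map(omegas, [np.exp], h_pos)

print((phi_pos(q) @ phi_pos(k).T).item(), np.exp(q @ k.T).item())
# every entry of phi_pos(x) is an exponential, hence positive,
# so the approximate attention scores can never go negative
```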
And you can see that in parts here it's fairly similar; R is the ratio, and it's fairly flat right here, but there are parts where it just shoots up. And in fact they can prove this; you can see it also right here: the error of the trig approximation shoots up, while the positive approximation stays flat, or flatter, in these regions. They can in fact prove that if the softmax values go to zero, which is the problematic region, the error of the trigonometric approximation can go to infinity, while the error of the positive approximation goes to zero. They have a number of theoretical results in here; I think that's one of the main ones, the fact that this approximation succeeds where the other approximation fails. Really quickly, they also have this variant here where they don't build a vector of two sub-vectors, but just one, with just the exponential function. And that is the same thing, because if you sample omega, you're as likely to get omega as negative omega, and thereby, in expectation, you get this hyperbolic cosine again; I believe that's the reason why this lower construction also gives you the hyperbolic cosine. Okay, so pretty cool. We simply use this approximation: we run our queries and our keys through this, and again we ideally use more omegas than just one, maybe a bunch. The more we sample, the better the linear function approximates the softmax function, and it's unbiased, and so on. And they have a bunch of variants of it. There's a variant where you normalize the omegas, which gives you the regularized softmax kernel, which is not a softmax anymore, but a regularized softmax, and they can approximate this in pretty much the same way, except instead of a normal distribution you use a uniform distribution right here. And they have a bunch of other things. Namely, one other improvement is this: so far we've simply sampled these omegas from a normal distribution, like here. They say we can improve even further; namely, we get an estimator with strictly lower variance if we make sure that the omegas we sample are exactly orthogonal. They're already approximately orthogonal if we sample them in a high-dimensional space, but if we make sure that they are exactly orthogonal, they give us an even better approximation. And you can do that by the procedure called Gram-Schmidt orthogonalization, the Gram-Schmidt orthonormalization procedure. It's a pretty easy procedure, and it doesn't mess with your unbiasedness whenever D is an isotropic distribution. Isotropic just means the same in every direction, so a standard Gaussian or a uniform would fulfill this, as long as it's centered; maybe even if it's not centered, depending on how you renormalize; okay, this is irrelevant. But if you make them exactly orthogonal, they say, this leads to the first theoretical results showing that orthogonal random features can be applied to reduce the variance of softmax or Gaussian kernel estimators for any dimensionality d, rather than just asymptotically for large enough d, as is the case for previous methods.
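One standard way to get exactly orthogonal omegas is a QR decomposition of a Gaussian sample, which effectively performs the Gram-Schmidt step, and then rescaling the rows so their lengths are still distributed like Gaussian-vector norms. The strict guarantees discussed here are for m at most d; stacking independent orthogonal blocks, as sketched below, is a common practical extension when m exceeds d (an assumption of this sketch, not something stated in the video):

```python
def orthogonal_gaussian(m, d, rng):
    # builds m rows, in blocks of d exactly orthogonal directions each
    blocks, rows = [], 0
    while rows < m:
        G = rng.normal(size=(d, d))
        Qmat, _ = np.linalg.qr(G)        # Gram-Schmidt: columns of Qmat are orthonormal
        blocks.append(Qmat.T)
        rows += d
    W = np.concatenate(blocks, axis=0)[:m]              # (m, d), blockwise orthogonal rows
    norms = np.linalg.norm(rng.normal(size=(m, d)), axis=1)
    return W * norms[:, None]            # rescale rows so each still has a Gaussian norm

omegas = orthogonal_gaussian(m, d, rng)  # drop-in replacement for the iid omegas above
```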
And it leads to the first exponentially small bounds on large-deviation probabilities that are strictly smaller than for non-orthogonal methods. So you end up with bounds that are strictly smaller than if you don't use orthogonality. The only thing it requires is that m is smaller than or equal to d, so the number of omegas you sample has to be smaller than or equal to the dimensionality the original space operates in, which they say will be the case in all their experiments. And again, these are exponentially small bounds, which is pretty cool. I guess for you, the end user, what matters is that this works if you use all of their tricks with the positivity and the orthogonality. By the way, this here is where they show that the orthogonal MSE, the mean squared error, is smaller than the original one minus something; and as long as that something is greater than zero, you have something strictly smaller. Then they prove a bunch of other things again, about the variant where you divide by the norm. In any case, they implement this in JAX. I have no opinion on JAX, but they have the code released and I'll of course link to it. And here you can clearly see, this is a log-log plot where you have L, the size of the input, and the number of seconds it takes to go forward and backward through the model. The X here is the baseline, where you simply bypass the attention matrix: you take the identity function and just return the value matrix. And you can see that the Performers scale fairly close to that baseline, and in fact they scale at the same slope, which is the important part right here; you can really see this is a linear slope, while the transformers, which are the dashed lines, all curve upwards, which of course is that quadratic requirement. The same in the backward pass; I don't know if they continue curving, I think it's also a straight line in the log-log plot, but with slope two instead of one like the linear models. Again, the comparison that matters is between the baseline and the lines you're looking at: if they have the same slope, they scale the same as you go higher. And look at it: this is log L, so these are now two to the eighteenth tokens, and I believe this is done on one GPU, until you hit an out-of-memory error on a V100 GPU. This is pretty good news for everyone who wants to run Performers in a low-resource environment; I mean, like one deep learning GPU instead of a thousand TPUs, which is pretty cool. They also show that the orthogonal features are better than the iid features, and then of course the positive features are better than the original trigonometric decomposition. And they show that you can take a transformer checkpoint and plug it into the Performer, and you simply have to fine-tune a little bit to get it to the performance the transformer was at. This, I believe, is the original training curve of the transformer, so it's not a fair comparison, because the Performer starts from the checkpoint already; at least that's how I interpret it, it's not clearly written. And they say, okay, over here this trig thing works.
That is, the original approximation even works here. However, if we do that on a more challenging dataset with longer sequences, then you can see that the trig softmax just breaks down; that's this thing here. You actually need the better, positive approximations. And they compare to the Linformer here, which is pretty cool. The Linformer (I've made a video about it, if you want to know more) also does random projections of the attention matrix. But you can see that the Linformer plateaus, along with the Performers if you don't redraw the random features. In the Performer, you redraw these random features, these omegas, at the right time: you can't just arbitrarily redraw them between computation steps, but at the end of a computation step you can redraw them for the next one. If you do that, and even better with the regularized (the normalized) features, you get to the same level of performance that a standard transformer would get, but of course without the quadratic requirements. And lastly, as I said, they've already swapped out this nonlinearity for a ReLU. So here they construct the Performer-ReLU, taking f equals ReLU in equation five. You remember what f was: f was the sine and cosine in the first approximation, and exp of u and exp of minus u in the second one. And as I said, the big improvement in deep learning came when we swapped sigmoids for ReLUs, and here they're already trying the same swap, because they say: well, we have a method where we can basically plug in anything we want. So they plug in ReLU because it has worked well, and this again works pretty well. They compare also with the Reformer and the Linformer, as you can see, and of course they beat everything. Now, whether or not this method is going to be the next thing, the thing that everyone uses, we don't know; it's fairly possible, it's pretty cool, and it appears to be theoretically solidly grounded, but you never know from the experiments of a single paper. The broader impact statement, much respect: they just use it to tell you how awesome their paper is. There's no mention of any kind of ethical impact. And I'm all for these kinds of broader impact statements: research on transformers is going to be better because now people have access to it, it's backward compatible, that's pretty cool, it's applicable to biology and medicine because we can take longer sequences. I like these kinds of broader impact statements. The last thing here: the only problem is if you want to do causal attention, that is, a generative model, a GPT-sort-of model, you have to do a bit of a trick. That is because your attention matrix isn't the full attention matrix, so you can't just decompose it; it's this lower triangular matrix right here. But since you have a linear decomposition of this thing, you can do these kinds of prefix sums. Namely, you can compute key one times value one; then key two times value two, plus key one times value one; then key three times value three, plus key two times value two, plus key one times value one; and so on. You compute these running sums.
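A minimal sketch of this prefix-sum trick (my own NumPy rendering, not the paper's JAX code): we keep running sums of the outer products k'_j v_j and of the k'_j themselves, and, as described next, each query is multiplied only against the sums accumulated up to its own position. Written as an explicit loop for clarity; a real implementation would vectorize this with cumulative sums:

```python
def causal_linear_attention(Qp, Kp, V):
    # Qp, Kp: (L, m) feature-mapped queries and keys; V: (L, d) values
    L, mdim = Qp.shape
    d = V.shape[-1]
    S = np.zeros((mdim, d))   # running sum of outer products k'_j v_j^T, for j <= i
    z = np.zeros(mdim)        # running sum of k'_j, for the softmax normalizer
    out = np.zeros((L, d))
    for i in range(L):
        S += np.outer(Kp[i], V[i])
        z += Kp[i]
        out[i] = (Qp[i] @ S) / (Qp[i] @ z)   # query i only ever sees positions j <= i
    return out
```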
These running sums are where the L goes away: we do them first, and then we simply come along with the queries. We take q1 and multiply it by k1 v1; we take q2 and multiply it by the sum up to position two; q3 by the sum up to position three, and so on. And you see, that's how you get your causal attention: you simply keep track of these prefix sums, and when the next q comes along, you multiply it by all the things that are above it in the prefix sum; that's how you get your triangular matrix. So even that is solved, a thing that I believe the Linformer wasn't able to do with its particular decomposition; I might be wrong here. All right, they have a bunch of experiments on protein analysis and so on, which of course wasn't possible before, I guess, because it was so heavy. They also have ImageNet64, as you can see right here, which is an impossible dataset for a classic transformer. As I said, they have code; the code is in JAX, and it's ugly code, let's be honest, but it's code, so that's fairly cool. And I want to point out that right at the bottom here is actually where the stuff happens. Quickly: queries and keys are constructed right here. Query prime and key prime are pulled through this feature creator, which implements these kernels; these can be, as we said, the exponentials, the ReLUs, or the sine and cosine, whatnot. Then you multiply the queries and the keys, which gives you this W matrix, and all we need to do now is normalize it. So we renormalize by constructing this denominator right here, and then there's a whole block for the unidirectionality, which, as you can imagine, is pretty ugly. But with the renormalization we constructed, "reciprocal" means we take the inverse, multiply it by W, and return the result. This should be translatable into your favorite framework, PyTorch or TensorFlow; maybe it's already been done, I haven't researched that particular thing. In any case, I invite you to check out the paper and the code, and play around with the functions used here. You don't even need to know which kernels your functions correspond to; these papers always do, but back in the SVM days, people just went nuts: plug in some functions, see what happens. Probably nothing good, but it's possible. All right, so that was it for the Performer. I hope you gained something from this, kind of an understanding of how it works, and I wish you the best. Bye bye.
[ { "start": 0, "end": 7.640000000000001, "text": " Hi there, today we'll look at rethinking attention with performers by researchers of Google," }, { "start": 7.640000000000001, "end": 12.280000000000001, "text": " the University of Cambridge, DeepMind and the Alan Turing Institute." }, { "start": 12.280000000000001, "end": 18.78, "text": " This paper is yet another paper in the quest to make transformers more performant and what" }, { "start": 18.78, "end": 23.36, "text": " better name to give to a technique than the performer." }, { "start": 23.36, "end": 29.04, "text": " So the performer, performers are a new kind of class of models." }, { "start": 29.04, "end": 33.44, "text": " They try to approximate the transformer." }, { "start": 33.44, "end": 38.96, "text": " If you don't know what a transformer is, I've done like a ton of videos on transformers," }, { "start": 38.96, "end": 45.879999999999995, "text": " on attention mechanisms, and you can, there's more than enough material to look that up." }, { "start": 45.879999999999995, "end": 48.96, "text": " Today we'll talk about performers." }, { "start": 48.96, "end": 54.2, "text": " And the performers, as I already said, they approximate transformers." }, { "start": 54.2, "end": 59.440000000000005, "text": " And they do so without running into the classic transformer bottleneck, which is that the" }, { "start": 59.440000000000005, "end": 66.52000000000001, "text": " attention matrix in the transformer is, has space and compute requirements that are quadratic" }, { "start": 66.52000000000001, "end": 71.66, "text": " in the size of the input and that limits how much input you can put into the model." }, { "start": 71.66, "end": 77.82000000000001, "text": " So it kind of limits how long of text you can input if you work with text or how big" }, { "start": 77.82000000000001, "end": 80.76, "text": " your images are that you can work with." }, { "start": 80.76, "end": 86.04, "text": " This is all kind of bad at when you use transformers." }, { "start": 86.04, "end": 91.64, "text": " So the performers get around this by this technique they call fast attention via positive" }, { "start": 91.64, "end": 98.60000000000001, "text": " orthogonal random features abbreviated favor plus they use this favor plus to get around" }, { "start": 98.60000000000001, "end": 106.96000000000001, "text": " it and what's interesting is that the favor pluff, I'll just call it favor this fast attention," }, { "start": 106.96, "end": 110.96, "text": " it is potentially useful beyond transformers." }, { "start": 110.96, "end": 116.83999999999999, "text": " So it's apparently been here developed in the realm of the transformers, but they say," }, { "start": 116.83999999999999, "end": 121.72, "text": " which may be of independent interest for scalable kernel methods." }, { "start": 121.72, "end": 129.16, "text": " You'll see what they do is they approximate the attention matrix by decomposing it, but" }, { "start": 129.16, "end": 131.44, "text": " they do it in a special way." }, { "start": 131.44, "end": 137.88, "text": " And they do it in the in the way if you know what random Fourier features are, maybe you" }, { "start": 137.88, "end": 144.32, "text": " can kind of think, think ahead a little bit, if not, we'll get into it for sure." 
}, { "start": 144.32, "end": 150.56, "text": " I think honestly, this might be one of the enabling one of the next mini breakthroughs" }, { "start": 150.56, "end": 154.68, "text": " in deep learning, not big breakthrough, but kind of mini breakthrough." }, { "start": 154.68, "end": 161.24, "text": " I remember a time when we used sigmoid and tan h nonlinearities, believe it or not, you" }, { "start": 161.24, "end": 166.64000000000001, "text": " young kids at the beginning of deep learning, not the beginning of deep learning, but before" }, { "start": 166.64000000000001, "end": 175.24, "text": " deep learning really took off, it was the sensible thing to use softmax and tan h nonlinearities" }, { "start": 175.24, "end": 178.12, "text": " everywhere in your neural networks." }, { "start": 178.12, "end": 180.56, "text": " Because well, first of all, they were like differentiable." }, { "start": 180.56, "end": 181.92000000000002, "text": " So that was cool." }, { "start": 181.92000000000002, "end": 189.72, "text": " And then, you know, it was sort of how nature does it with the step function in, like it" }, { "start": 189.72, "end": 194.72, "text": " was an approximation to the step function in the true neuron and so on." }, { "start": 194.72, "end": 196.28, "text": " And it was just kind of well motivated." }, { "start": 196.28, "end": 199.24, "text": " So people thought that must be the way to go." }, { "start": 199.24, "end": 205.64, "text": " But then, of course, turned out that relu's are much easier, much more stable, give much" }, { "start": 205.64, "end": 209.86, "text": " better results, and so on, don't saturate all these cool things." }, { "start": 209.86, "end": 215.88, "text": " This here is kind of the it feels like the same thing, because right now, we're doing" }, { "start": 215.88, "end": 218.84, "text": " this softmax thing in attention." }, { "start": 218.84, "end": 221.96, "text": " And it's very important because it normalizes the attention matrix, right?" }, { "start": 221.96, "end": 228.76, "text": " It gives you kind of this thing that comes out is kind of a distribution over the inputs" }, { "start": 228.76, "end": 229.76, "text": " and so on." }, { "start": 229.76, "end": 230.76, "text": " So it's well motivated." }, { "start": 230.76, "end": 236.74, "text": " And you may be able to see, but also as the sigmoid is, it's kind of has this exponential" }, { "start": 236.74, "end": 238.4, "text": " thing in there." }, { "start": 238.4, "end": 245.98000000000002, "text": " And the favor algorithm is going to approximate this softmax thing, but it can be used to" }, { "start": 245.98000000000002, "end": 248.16, "text": " approximate much more." }, { "start": 248.16, "end": 255.56, "text": " So maybe, you know, we're going to find that if we swap out these, the nonlinearity in" }, { "start": 255.56, "end": 261.96, "text": " there, we might be able to build much better transformers, or whatever the model will be" }, { "start": 261.96, "end": 269.38, "text": " called performers, I guess they already do this here with relu's in this very paper." }, { "start": 269.38, "end": 277.88, "text": " So the performer is going to be fully compatible with regular transformer, and with strong" }, { "start": 277.88, "end": 282.71999999999997, "text": " theoretical guarantees, unbiased or nearly unbiased estimation of the attention matrix" }, { "start": 282.71999999999997, "end": 285.92, "text": " uniform convergence and low estimation variance." 
}, { "start": 285.92, "end": 292.06, "text": " So the difference of the performer here is going to be that there have been methods before" }, { "start": 292.06, "end": 296.71999999999997, "text": " that decompose the attention matrix into low rank matrices." }, { "start": 296.71999999999997, "end": 305.36, "text": " But those either don't work, or they kind of rely on on priors, like the you're assuming" }, { "start": 305.36, "end": 311.28000000000003, "text": " that your attention matrix has a certain structure, if it doesn't, it sort of fails." }, { "start": 311.28000000000003, "end": 316.24, "text": " This method here is going to be an unbiased estimator." }, { "start": 316.24, "end": 320.82, "text": " And it's going to sort of converge to the attention matrix if you add more of these" }, { "start": 320.82, "end": 322.12, "text": " random features." }, { "start": 322.12, "end": 328.04, "text": " Okay, they this is fed here like provably not relying on any priors fully compatible" }, { "start": 328.04, "end": 334, "text": " with regular transformers, which means that you can take a transformer checkpoint and" }, { "start": 334, "end": 336.48, "text": " sort of plug it into this framework." }, { "start": 336.48, "end": 343.12, "text": " And then you just have to fine tune a little bit to sort of use the checkpoint of a regular" }, { "start": 343.12, "end": 345.4, "text": " transformer, which is pretty cool, right." }, { "start": 345.4, "end": 346.56, "text": " So we'll go through the paper." }, { "start": 346.56, "end": 347.64, "text": " It's quite a heavy paper." }, { "start": 347.64, "end": 349.46, "text": " It's quite a math heavy paper." }, { "start": 349.46, "end": 351.56, "text": " We won't go through all of it." }, { "start": 351.56, "end": 357.16, "text": " I just kind of want you to get the idea of what these performers do, what the reasoning" }, { "start": 357.16, "end": 362.36, "text": " behind it is, and how you might be able to kind of work with them or extend them where" }, { "start": 362.36, "end": 364.8, "text": " it's going from here." }, { "start": 364.8, "end": 370.24, "text": " As always, if you like content like this, don't hesitate to share it out and tell your" }, { "start": 370.24, "end": 371.8, "text": " friends about it." }, { "start": 371.8, "end": 372.88, "text": " All right." }, { "start": 372.88, "end": 380.52000000000004, "text": " So the problem with attention or the problem with transformers is like I've done this a" }, { "start": 380.52000000000004, "end": 382.68, "text": " million times and you can go look it up." }, { "start": 382.68, "end": 391.14, "text": " But if you want to map a sequence of layer L into a sequence or a set or whatnot of layer" }, { "start": 391.14, "end": 395.08, "text": " L plus one, and you need to compute these attention weights, right." }, { "start": 395.08, "end": 400.96, "text": " So the attention weights are going to be from each token here to each token in the next" }, { "start": 400.96, "end": 404.68, "text": " layer, you're going to compute one of these weights." }, { "start": 404.68, "end": 405.68, "text": " All right." }, { "start": 405.68, "end": 413.36, "text": " So there is this matrix is called A, the attention matrix, and A is going to be of size L by" }, { "start": 413.36, "end": 419.2, "text": " L. And that is a problem if you have long sequences, right, you can already see this." 
}, { "start": 419.2, "end": 425.92, "text": " So the way that this A comes to be is that conceptually, the upper layer, like it's all" }, { "start": 425.92, "end": 432.62, "text": " the same layer, but conceptually, the upper layer emits something that are called queries" }, { "start": 432.62, "end": 437.15999999999997, "text": " and the lower layer emits something that are called keys and values." }, { "start": 437.15999999999997, "end": 442.06, "text": " Now the keys and the queries, they go together into matrices." }, { "start": 442.06, "end": 445.59999999999997, "text": " So it multiply the keys and the queries." }, { "start": 445.6, "end": 452.64000000000004, "text": " Then you run this through and this is the problem you run this through a softmax nonlinearity" }, { "start": 452.64000000000004, "end": 458.40000000000003, "text": " to basically get a distribution and then you multiply it by the values." }, { "start": 458.40000000000003, "end": 466.08000000000004, "text": " So the query key matrix, this attention matrix, it will tell you how to aggregate the values." }, { "start": 466.08000000000004, "end": 467.72, "text": " All right." }, { "start": 467.72, "end": 474.94, "text": " If it weren't for the softmax, so you can you can think if if these if these the dimensions" }, { "start": 474.94, "end": 481, "text": " of the queries and keys and values, let's call it small d, then the dimensionality here" }, { "start": 481, "end": 488.72, "text": " will be something like here you'd have L by D, here it have D by L for the transposed." }, { "start": 488.72, "end": 496.72, "text": " And then here you'd have L by D. So because you have to do the softmax, you have to compute" }, { "start": 496.72, "end": 501.88, "text": " this first, which gives you this L by L, which is the terrible thing." }, { "start": 501.88, "end": 510.36, "text": " However, if you could, if you could, if somehow decompose the softmax operation, you could" }, { "start": 510.36, "end": 516.36, "text": " first do keys and values, which will give you a D by D matrix." }, { "start": 516.36, "end": 521.16, "text": " And then you could multiply it by the Q matrix, right, which would be much, much, much more" }, { "start": 521.16, "end": 528.76, "text": " easy if D is smaller than L. Certainly wouldn't grow quadratically in L, it would just grow" }, { "start": 528.76, "end": 533.12, "text": " linearly in in space and time." }, { "start": 533.12, "end": 539.72, "text": " So here this is formulated out the attention mechanism right here." }, { "start": 539.72, "end": 544.08, "text": " The attention mechanism is made of queries, keys and values." }, { "start": 544.08, "end": 546.96, "text": " And it's given by this formula right here." }, { "start": 546.96, "end": 552.6, "text": " Now there is a bit of a technicality I wasn't exactly correct in what a is." }, { "start": 552.6, "end": 561.84, "text": " So here, they, they say, they, I called this thing here a, okay, they are very specific" }, { "start": 561.84, "end": 568.76, "text": " what they mean by a, by a, they simply mean the exponential function of the normalized" }, { "start": 568.76, "end": 570.4200000000001, "text": " queries times keys." }, { "start": 570.4200000000001, "end": 576, "text": " And then to get the actual softmax, you have to normalize by here." 
}, { "start": 576, "end": 582.58, "text": " So D, which is so you see, the inverse is made here, D is constructed from a and normalize" }, { "start": 582.58, "end": 586.48, "text": " as a, but the normalization is of secondary importance." }, { "start": 586.48, "end": 594.84, "text": " The important part here is that this exponential cannot be easily decomposed, right?" }, { "start": 594.84, "end": 599.76, "text": " It's not like you can decompose the inner multiplication into two exponentials or something," }, { "start": 599.76, "end": 602.5, "text": " otherwise the problem would be solved." }, { "start": 602.5, "end": 605, "text": " So what is this paper doing?" }, { "start": 605, "end": 608.88, "text": " It's exactly what I just said was impossible." }, { "start": 608.88, "end": 614.52, "text": " So you have this matrix a right here, and you multiplied by V. Yes, again, forget about" }, { "start": 614.52, "end": 618.36, "text": " the normalization by now." }, { "start": 618.36, "end": 625.36, "text": " It will decompose a into the query, the Q prime and K prime." }, { "start": 625.36, "end": 631.2, "text": " Now they are called prime because they are not the queries and the keys, because we've" }, { "start": 631.2, "end": 634.62, "text": " just said the queries and the keys, they go into the exponential." }, { "start": 634.62, "end": 643.72, "text": " So it's going to be that K, sorry, Q prime times K prime transposed is going to be approximately" }, { "start": 643.72, "end": 654, "text": " equal to exponential function of Q times K, maybe normalized by square root of D. But" }, { "start": 654, "end": 660.42, "text": " you can see that this here isn't decomposable, and yet they decompose it." }, { "start": 660.42, "end": 666.64, "text": " And the question is how, because there have been papers before that try to decompose the" }, { "start": 666.64, "end": 669.3199999999999, "text": " attention matrix." }, { "start": 669.3199999999999, "end": 677.3199999999999, "text": " I think Lin former maybe, and there is also the reformer, which uses LSH and so on." }, { "start": 677.3199999999999, "end": 681.4399999999999, "text": " So there have been a number of tricks, but they all don't perform as well, which this" }, { "start": 681.4399999999999, "end": 683.5999999999999, "text": " paper also shows empirically." }, { "start": 683.5999999999999, "end": 686.88, "text": " And they all rely on certain assumptions of the attention matrix." }, { "start": 686.88, "end": 694.12, "text": " And they all are not unbiased estimators in general, this paper is going to be an unbiased" }, { "start": 694.12, "end": 695.32, "text": " estimator." }, { "start": 695.32, "end": 699.18, "text": " And they do this via sort of a kernel framework." }, { "start": 699.18, "end": 709.26, "text": " So what they they first of all, they make this problem more general, they say we have" }, { "start": 709.26, "end": 720.8, "text": " our attention matrix A, the ijth entry is going to be the query i, the key j, and some" }, { "start": 720.8, "end": 726.18, "text": " kernel function of that." }, { "start": 726.18, "end": 734.3199999999999, "text": " In our case, this is going to be the right X of query times key, like this, sorry, the" }, { "start": 734.3199999999999, "end": 736.4399999999999, "text": " other way around." }, { "start": 736.44, "end": 741.32, "text": " Query transpose, transpose, query times key, the inner product of that." 
}, { "start": 741.32, "end": 746.5600000000001, "text": " However, you can think of any sort of kernel function." }, { "start": 746.5600000000001, "end": 757.32, "text": " So yeah, if I'm not going to try to explain more details into kernels, we had a fantastic" }, { "start": 757.32, "end": 759.22, "text": " machine learning street talk." }, { "start": 759.22, "end": 764.48, "text": " So if you don't know about this, this is our podcast, machine learning street talk, where" }, { "start": 764.48, "end": 773.08, "text": " Alex Stanlik explained kernels in great detail, and with very, very precise language, and" }, { "start": 773.08, "end": 775.16, "text": " very understandable as well." }, { "start": 775.16, "end": 781.82, "text": " So what I'm going to say is that they allow you to do things like this." }, { "start": 781.82, "end": 791.16, "text": " So you can think of kernels as kind of connecting two things, they allow you, they represent" }, { "start": 791.16, "end": 795.36, "text": " an inner product in some other space." }, { "start": 795.36, "end": 804.12, "text": " So the kernel function of two inputs right here will be equal to some inner product of" }, { "start": 804.12, "end": 809.64, "text": " the two inputs when pulled through this function phi right here." }, { "start": 809.64, "end": 811.76, "text": " And that's what we're going to use." }, { "start": 811.76, "end": 817.4599999999999, "text": " Now usually, usually when you learn about kernels, you do it in this way." }, { "start": 817.46, "end": 825.0400000000001, "text": " You say, we would like to compute in this very high dimensional space, but we can't," }, { "start": 825.0400000000001, "end": 830.2800000000001, "text": " we can't do inner products, we can't map this function phi explicitly." }, { "start": 830.2800000000001, "end": 836.44, "text": " So we're going to instead use this kernel right here, this kernel function." }, { "start": 836.44, "end": 839.24, "text": " And that's going to be equal." }, { "start": 839.24, "end": 843.9200000000001, "text": " If you pick the right kernel function for the particular phi, in this paper, we're going" }, { "start": 843.92, "end": 849.4799999999999, "text": " to do it the other way around, because we say, well, this thing here is this is the" }, { "start": 849.4799999999999, "end": 850.76, "text": " softmax function." }, { "start": 850.76, "end": 853.4599999999999, "text": " And that's just a beast, right?" }, { "start": 853.4599999999999, "end": 855.4799999999999, "text": " We can't possibly compute that." }, { "start": 855.4799999999999, "end": 864.1999999999999, "text": " However, if we could find out what inner product that corresponds to, what other space, we" }, { "start": 864.1999999999999, "end": 868.8199999999999, "text": " could just go to that other space and perform an inner product." }, { "start": 868.8199999999999, "end": 873.4599999999999, "text": " And this thing over here is linear, right?" }, { "start": 873.46, "end": 875.1600000000001, "text": " This is a linear function." }, { "start": 875.1600000000001, "end": 877.1600000000001, "text": " This here is the nonlinear function." }, { "start": 877.1600000000001, "end": 879.0600000000001, "text": " This is our softmax." 
}, { "start": 879.0600000000001, "end": 887.4000000000001, "text": " So you can see that by going in this way, by finding what is the higher or the phi function" }, { "start": 887.4000000000001, "end": 896.9200000000001, "text": " for the softmax kernel, we can construct all of this attention business in a linear fashion." }, { "start": 896.9200000000001, "end": 898.88, "text": " And that's what this paper does." }, { "start": 898.88, "end": 905.72, "text": " What it allows you to do is it allows you to find these q and k, q prime and k prime" }, { "start": 905.72, "end": 911.82, "text": " matrices such that as over here, right, this is the kernel function." }, { "start": 911.82, "end": 914.86, "text": " And this here is linear." }, { "start": 914.86, "end": 921.9, "text": " And then you can simply first multiply k by v, or k prime by v, and then you can multiply" }, { "start": 921.9, "end": 929, "text": " q by k, and that will alleviate you of having this giant attention matrix." }, { "start": 929, "end": 930.9, "text": " So how do they do it?" }, { "start": 930.9, "end": 935.56, "text": " If you again, if you know about random Fourier features, this is going to be very much or" }, { "start": 935.56, "end": 939.9399999999999, "text": " very similar thing right here." }, { "start": 939.9399999999999, "end": 945.34, "text": " They're not going to explicitly construct the high dimensional space such that this" }, { "start": 945.34, "end": 950.72, "text": " is exactly equal, but they're going to construct an approximation." }, { "start": 950.72, "end": 956, "text": " And the approximation, you can make arbitrarily good." }, { "start": 956, "end": 963.86, "text": " And you do that via the following you say, so here you see this is how do I have to map" }, { "start": 963.86, "end": 969.72, "text": " something into this other dimensional space, where this whole softmax business is just" }, { "start": 969.72, "end": 970.88, "text": " a linear operation." }, { "start": 970.88, "end": 975.2, "text": " So what you would do ultimately is you would take your queries, you would map it through" }, { "start": 975.2, "end": 981.8000000000001, "text": " this phi, okay, and you would take your keys, and you would also map it through this phi." }, { "start": 981.8000000000001, "end": 987.1600000000001, "text": " And this will give you query prime, and this will give you key prime, right." }, { "start": 987.1600000000001, "end": 992.6400000000001, "text": " So and then in the higher down in the higher lower whatever dimensional space, you would" }, { "start": 992.6400000000001, "end": 995.0600000000001, "text": " take the inner product." }, { "start": 995.0600000000001, "end": 1001.6, "text": " And the inner product between the two is going to approximately be as if you had multiple" }, { "start": 1001.6, "end": 1009.48, "text": " so the inner product is going to be approximately as if you had taken the original q and k," }, { "start": 1009.48, "end": 1014.96, "text": " multiply them and put them through a softmax." }, { "start": 1014.96, "end": 1016.36, "text": " How do we do it?" }, { "start": 1016.36, "end": 1023.4, "text": " So here we define what the function needs to look like, sit such that this holds the" }, { "start": 1023.4, "end": 1028.5, "text": " function again, they go very general here, the function in general is going to look like" }, { "start": 1028.5, "end": 1029.64, "text": " the following." 
}, { "start": 1029.64, "end": 1036.44, "text": " So you have one function here that's called h, that is a function of your input, and it's" }, { "start": 1036.44, "end": 1039, "text": " in front, it's a deterministic function of your input." }, { "start": 1039, "end": 1041.0600000000002, "text": " And you also have a normalization factor." }, { "start": 1041.0600000000002, "end": 1045.46, "text": " So this is kind of it's kind of a factor in front of it." }, { "start": 1045.46, "end": 1048.5600000000002, "text": " You see that here comes a vector." }, { "start": 1048.5600000000002, "end": 1056.0800000000002, "text": " So this is a vector, right, we are mapping this to a some dimensional space." }, { "start": 1056.0800000000002, "end": 1058.14, "text": " And this is the vector." }, { "start": 1058.14, "end": 1061.88, "text": " Now it's a bit you have to pay a bit of attention." }, { "start": 1061.88, "end": 1069.76, "text": " So inside this vector, you have l different sub vectors, they're all concatenated after" }, { "start": 1069.76, "end": 1070.76, "text": " each other." }, { "start": 1070.76, "end": 1077.8600000000001, "text": " Okay, so you have CC here, this, where the F, this is f1, and then f2, f3, f4, and so" }, { "start": 1077.8600000000001, "end": 1078.8600000000001, "text": " on until fl." }, { "start": 1078.8600000000001, "end": 1083, "text": " Okay, so you have all these sub vectors." }, { "start": 1083, "end": 1085.8000000000002, "text": " It doesn't matter ultimately, you just concatenate them all." }, { "start": 1085.8, "end": 1094.2, "text": " But it's important to just keep in mind, within each of these vectors, within each of these" }, { "start": 1094.2, "end": 1102.76, "text": " sub vectors, you always have the same repeated term, you have this w times your x, so the" }, { "start": 1102.76, "end": 1108.3, "text": " inner product between w and x, you can see there's w1 through wm or omega, I think it's" }, { "start": 1108.3, "end": 1109.8, "text": " an omega." }, { "start": 1109.8, "end": 1114.98, "text": " And again, in the in each sub vector, you have this repeated." }, { "start": 1114.98, "end": 1124.1200000000001, "text": " So what are these omegas, first of all, the omegas are random vectors drawn for from some" }, { "start": 1124.1200000000001, "end": 1125.8, "text": " distribution." }, { "start": 1125.8, "end": 1132.48, "text": " Now in practicality, this is going to be a normal distribution like this one here, an" }, { "start": 1132.48, "end": 1135.6200000000001, "text": " isotropic normal distribution." }, { "start": 1135.6200000000001, "end": 1140.72, "text": " So and the the other part here is what are the F's." }, { "start": 1140.72, "end": 1147.08, "text": " So the F's f1 through fl are going to be functions, deterministic functions." }, { "start": 1147.08, "end": 1155.22, "text": " So in a an example they gave right here, f1 is the sine function, f2 is the cosine function." }, { "start": 1155.22, "end": 1161.46, "text": " And then you have to specify h and h in this particular example is one, but it can be a" }, { "start": 1161.46, "end": 1164.3600000000001, "text": " function of x here, here, it's just the identity." }, { "start": 1164.3600000000001, "end": 1169.46, "text": " Sorry, not the identity, the constant function one." }, { "start": 1169.46, "end": 1174.8, "text": " So let's break this a little down." 
}, { "start": 1174.8, "end": 1181.32, "text": " So we have x, and x is going to be a vector x, as I said, x is going to be like one of" }, { "start": 1181.32, "end": 1187.16, "text": " the queries here, or one of the one of the keys here, one one of them, right, one column" }, { "start": 1187.16, "end": 1195.04, "text": " or one row, however you conceptualize it, and we wonder how do we want to map so x is" }, { "start": 1195.04, "end": 1197.2, "text": " going to be some vector." }, { "start": 1197.2, "end": 1201.76, "text": " Okay, then this is an ugly vector." }, { "start": 1201.76, "end": 1203.8400000000001, "text": " Let's draw it like this." }, { "start": 1203.8400000000001, "end": 1207.22, "text": " x is a vector." }, { "start": 1207.22, "end": 1213.04, "text": " Then what we're going to do is we're going to take a bunch of omegas." }, { "start": 1213.04, "end": 1216.5, "text": " Now it's important that the omegas are random." }, { "start": 1216.5, "end": 1222.68, "text": " So they come from this isotropic normal distribution, but they're going to remain the same throughout" }, { "start": 1222.68, "end": 1223.68, "text": " the algorithm." }, { "start": 1223.68, "end": 1228.68, "text": " So this is a method to resample them, but just conceptualize that at the beginning of" }, { "start": 1228.68, "end": 1232.8, "text": " the algorithm, you choose these omegas and then you fix them." }, { "start": 1232.8, "end": 1242.96, "text": " So the omegas are going to be also vectors, which are random, just a bunch of random vectors." }, { "start": 1242.96, "end": 1245.28, "text": " Let's take three." }, { "start": 1245.28, "end": 1250.72, "text": " What you're going to do is you're going to compute the inner product between your x and" }, { "start": 1250.72, "end": 1252.04, "text": " each of the omegas." }, { "start": 1252.04, "end": 1255.3799999999999, "text": " So inner product in your x and each of the omegas." }, { "start": 1255.3799999999999, "end": 1263.68, "text": " So this gives you omega 1x, omega 2x, omega 3x." }, { "start": 1263.68, "end": 1269.68, "text": " The inner product, this is going to be these, this is going to be numbers." }, { "start": 1269.68, "end": 1275.1599999999999, "text": " And then you're going to have a collection of functions." }, { "start": 1275.16, "end": 1284.5600000000002, "text": " So these are going to be functions, maybe function one is going maybe here, the sine" }, { "start": 1284.5600000000002, "end": 1289.52, "text": " function function two is going to be the cosine function." }, { "start": 1289.52, "end": 1294.88, "text": " Now you're going to take each to make a table." }, { "start": 1294.88, "end": 1299.76, "text": " You're going to take each of these products you computed and put them through each of" }, { "start": 1299.76, "end": 1300.8200000000002, "text": " the functions." }, { "start": 1300.82, "end": 1316.8, "text": " So this is going to be sine of omega 1x, cosine of omega 1x, sine of omega 2x and so on." }, { "start": 1316.8, "end": 1323.7, "text": " And then you're going to take this table and you're going to flatten it to a big vector." }, { "start": 1323.7, "end": 1333.22, "text": " So sine omega 1x, cosine or no sine first, the ordering data doesn't matter as long as" }, { "start": 1333.22, "end": 1341.1200000000001, "text": " you always do it the same omega 2x, and so on right until you have here cosine of omega" }, { "start": 1341.1200000000001, "end": 1343.4, "text": " 3x." 
}, { "start": 1343.4, "end": 1345.76, "text": " So that's the vector they're constructing." }, { "start": 1345.76, "end": 1348.04, "text": " And these are those random features." }, { "start": 1348.04, "end": 1354.52, "text": " Okay, so this here is going to be the vector that you're constructing." }, { "start": 1354.52, "end": 1360.3999999999999, "text": " What you do is basically geometrically your x is like somewhere here." }, { "start": 1360.3999999999999, "end": 1365.52, "text": " And it's a bit hard to draw in low dimensional space because you don't get the intuition." }, { "start": 1365.52, "end": 1371.72, "text": " But this is if this is your x, you're going to choose a bunch of these omegas, these omegas" }, { "start": 1371.72, "end": 1375.52, "text": " are going to be randomly sampled from a uniform Gaussian." }, { "start": 1375.52, "end": 1380.52, "text": " So this is omega 1, maybe omega 2, omega 3, omega 4." }, { "start": 1380.52, "end": 1387.48, "text": " And you're going to compute the inner product between between any of the two." }, { "start": 1387.48, "end": 1394, "text": " Okay, so you're going to be essentially computing the projections onto each other or the angle" }, { "start": 1394, "end": 1401.2, "text": " however you want to conceptualize it, the angle of this to each of the two of the omegas." }, { "start": 1401.2, "end": 1408.0800000000002, "text": " And then you're going to make a features out of these angles, right?" }, { "start": 1408.0800000000002, "end": 1414.64, "text": " So this will sort of tell you how your vector stands to each of these random features." }, { "start": 1414.64, "end": 1420.72, "text": " Now the reason I say it's difficult in low dimension is because now I have more omegas" }, { "start": 1420.72, "end": 1424.72, "text": " than the dimensionality, which is two right here." }, { "start": 1424.72, "end": 1426.24, "text": " And this makes no sense, right?" }, { "start": 1426.24, "end": 1432.04, "text": " As soon as I have two vectors that are not collinear in two dimensional space, I can" }, { "start": 1432.04, "end": 1439.2, "text": " if I project x onto them, like like this, sorry, like if I project x onto both of them," }, { "start": 1439.2, "end": 1442.96, "text": " I already have x fully represented, right?" }, { "start": 1442.96, "end": 1445.68, "text": " There's no need to have more of them." }, { "start": 1445.68, "end": 1452.32, "text": " However, if you are in super duper high dimensional space, and you don't you don't have as many" }, { "start": 1452.32, "end": 1460.08, "text": " features, then you get some interesting approximation properties, namely, so this was an example," }, { "start": 1460.08, "end": 1461.08, "text": " right?" }, { "start": 1461.08, "end": 1464.08, "text": " We don't always have the sine and the cosine here." }, { "start": 1464.08, "end": 1470.04, "text": " This is purely an example, you can only have one function, you see like this f one, you" }, { "start": 1470.04, "end": 1473.6, "text": " don't need two functions, you can have one, you can have many." }, { "start": 1473.6, "end": 1474.6, "text": " Okay." }, { "start": 1474.6, "end": 1480.6, "text": " And you can choose how many omegas you sample, that is a parameter." 
}, { "start": 1480.6, "end": 1489.32, "text": " So yeah, you have a couple of choices, I want to make it clear the choice of h, so the choice" }, { "start": 1489.32, "end": 1499.36, "text": " of h and f, they go hand in hand, the choice of h and the F's determine what the phi function" }, { "start": 1499.36, "end": 1500.36, "text": " is." }, { "start": 1500.36, "end": 1501.36, "text": " Okay." }, { "start": 1501.36, "end": 1508.76, "text": " So the choice of h f determine which kernel function this phi function corresponds to," }, { "start": 1508.76, "end": 1511.12, "text": " if you construct it like this." }, { "start": 1511.12, "end": 1517.28, "text": " So by choosing the correct functions, you tell the function which kernel you would like" }, { "start": 1517.28, "end": 1519.36, "text": " to approximate." }, { "start": 1519.36, "end": 1526.68, "text": " And then by sampling the omegas, the more omegas you sample, the more accurately you" }, { "start": 1526.68, "end": 1532.92, "text": " approximate that kernel, and then you can give some approximation guarantees." }, { "start": 1532.92, "end": 1540.68, "text": " As they say, so the softmax kernel is given by this thing here, which we've already seen." }, { "start": 1540.68, "end": 1541.68, "text": " Okay." }, { "start": 1541.68, "end": 1545.26, "text": " And now how do we approximate the softmax kernel?" }, { "start": 1545.26, "end": 1552.2, "text": " And they show that right here, softmax kernel is approximated by this thing right here." }, { "start": 1552.2, "end": 1561.04, "text": " So it's a bit of a ugly formula, and it contains this Gaussian kernel, the Gauss kernel." }, { "start": 1561.04, "end": 1571.12, "text": " So they say, if we choose h equals to one, so just a constant factor, and this f1 and" }, { "start": 1571.12, "end": 1578.52, "text": " f2 to the sine and cosine, and in if we choose d, the distribution to be a normal distribution" }, { "start": 1578.52, "end": 1582.84, "text": " isotropic around the mean, this is the Gaussian kernel." }, { "start": 1582.84, "end": 1589.96, "text": " And then we simply have to choose h differently, this factor in front to make it into the softmax" }, { "start": 1589.96, "end": 1596.8, "text": " kernel, so as long as we put this factor in front, you can see that this here represents" }, { "start": 1596.8, "end": 1598.76, "text": " an inner product, right?" }, { "start": 1598.76, "end": 1602.1200000000001, "text": " So you have to kind of think of decomposition." }, { "start": 1602.1200000000001, "end": 1609.4, "text": " So if you put, you can see f1, the sine, f2, the cosine, which is this makes it the Gaussian" }, { "start": 1609.4, "end": 1617, "text": " kernel, and then this factor in front of it here, two for h, this makes it now the softmax" }, { "start": 1617, "end": 1618, "text": " kernel." }, { "start": 1618, "end": 1629.4, "text": " So if we choose h and f like this, then when we map our queries and keys through, if we" }, { "start": 1629.4, "end": 1638.24, "text": " map our queries and keys through the phi function, and then make the inner product between them," }, { "start": 1638.24, "end": 1644.96, "text": " okay, like here, that will approximate depending on how many omegas we've sampled better or" }, { "start": 1644.96, "end": 1653.64, "text": " worse, they approximate the result as if we had multiplied them first, and then put them" }, { "start": 1653.64, "end": 1657.08, "text": " through the softmax function." }, { "start": 1657.08, "end": 1658.44, "text": " All right." 
}, { "start": 1658.44, "end": 1663.3400000000001, "text": " So this you can see how this becomes much easier, because we can independently put them" }, { "start": 1663.3400000000001, "end": 1665.44, "text": " through the phi, okay." }, { "start": 1665.44, "end": 1669.8600000000001, "text": " And then it's just a linear operation, which allows us to do our trick where we multiply" }, { "start": 1669.86, "end": 1676.28, "text": " k and v first, and then multiply by q instead of the other way around, which we're forced" }, { "start": 1676.28, "end": 1679.6799999999998, "text": " to do when we apply the softmax." }, { "start": 1679.6799999999998, "end": 1683.6, "text": " This was a long, long way to get here." }, { "start": 1683.6, "end": 1687.4799999999998, "text": " But I hope you're with this." }, { "start": 1687.4799999999998, "end": 1692.7199999999998, "text": " And this is, this is pretty straightforward, actually, so far." }, { "start": 1692.7199999999998, "end": 1697.7199999999998, "text": " Now renormalization, we can take care of that easily." }, { "start": 1697.7199999999998, "end": 1698.9799999999998, "text": " But there is a problem." }, { "start": 1698.98, "end": 1705.82, "text": " And this is they argue, this hasn't been proposed so far, because it doesn't work like this." }, { "start": 1705.82, "end": 1713.26, "text": " So even though you approximate this kernel fairly well, it's it's a bad approximation." }, { "start": 1713.26, "end": 1720.56, "text": " And they say here, there is however, a caveat here, the attention module from one constructs" }, { "start": 1720.56, "end": 1725.26, "text": " for each token, a convex combination of value vectors with coefficients given as corresponding" }, { "start": 1725.26, "end": 1727.6, "text": " green renormalized kernel scores." }, { "start": 1727.6, "end": 1732.04, "text": " That is why kernels producing non negative scores are used." }, { "start": 1732.04, "end": 1736.36, "text": " Applying random feature maps with potentially negative dimension values leads to unstable" }, { "start": 1736.36, "end": 1742.12, "text": " behaviors, especially when kernel scores close to zero, which is the case for lots of entries" }, { "start": 1742.12, "end": 1748.48, "text": " of a corresponding to not relevant tokens are approximated by estimators with large" }, { "start": 1748.48, "end": 1750.02, "text": " variants in such regions." }, { "start": 1750.02, "end": 1755.36, "text": " This results in abnormal behaviors, eg negative diagonal value renormalizers, and consequently" }, { "start": 1755.36, "end": 1759.6399999999999, "text": " either completely prevents training or leads to sub optimal models." }, { "start": 1759.6399999999999, "end": 1768.08, "text": " So what they're saying is that when you use softmax, you always always get positive values," }, { "start": 1768.08, "end": 1769.08, "text": " right?" }, { "start": 1769.08, "end": 1775.1399999999999, "text": " So if I have a bunch of vectors, or a bunch of numbers, this is, you know, positive number," }, { "start": 1775.1399999999999, "end": 1782.6399999999999, "text": " negative number, very positive number, negative number, and I run it through a softmax, I" }, { "start": 1782.64, "end": 1790.2800000000002, "text": " will get out a distribution right, like this, or really big, sorry, that softmax will scale" }, { "start": 1790.2800000000002, "end": 1795.16, "text": " that up, I will get out a positive district like a kind of a histogram." 
}, { "start": 1795.16, "end": 1801.5400000000002, "text": " And now I'm trying to approximate this by this formula right here." }, { "start": 1801.5400000000002, "end": 1806.6200000000001, "text": " And you can see these are these are vectors, which gives me sine and cosine coefficients," }, { "start": 1806.6200000000001, "end": 1812.5600000000002, "text": " and I linearly multiply two vectors together, which definitely means I can get negative" }, { "start": 1812.56, "end": 1814.1599999999999, "text": " entries and so on." }, { "start": 1814.1599999999999, "end": 1819.6799999999998, "text": " So the renormalization then has to somehow maybe take care of that." }, { "start": 1819.6799999999998, "end": 1826.56, "text": " And it says especially, especially around zero, when the original softmax matrix would" }, { "start": 1826.56, "end": 1834, "text": " have values close to zero, this approximation is really bad and has high variance." }, { "start": 1834, "end": 1839.5, "text": " And they also argue, a lot of attention vectors are close to zero, because we know that attention" }, { "start": 1839.5, "end": 1846.52, "text": " is sort of sparsify, just by the fact of what how the softmax works, it exaggerates the" }, { "start": 1846.52, "end": 1851.2, "text": " largest inner products, and it really dampens the low inner products." }, { "start": 1851.2, "end": 1852.2, "text": " Okay." }, { "start": 1852.2, "end": 1855.56, "text": " Actually, I might not even have done this correctly here." }, { "start": 1855.56, "end": 1860.32, "text": " If it's, if it's very negative, I'm not sure." }, { "start": 1860.32, "end": 1864.4, "text": " In any case, they say that's why this doesn't work, because it has such high variance, it's" }, { "start": 1864.4, "end": 1870, "text": " a good approximation, but has such high variance in the wrong places, they really around zero" }, { "start": 1870, "end": 1872.7, "text": " where most values are." }, { "start": 1872.7, "end": 1880.3200000000002, "text": " So they call this these s, the SM the softmax approximation with m sampled features trig," }, { "start": 1880.3200000000002, "end": 1883.44, "text": " because it uses the sine and cosine functions." }, { "start": 1883.44, "end": 1888.3600000000001, "text": " And now they're trying to remedy this." }, { "start": 1888.3600000000001, "end": 1892.7800000000002, "text": " And for that, they propose a different decomposition." }, { "start": 1892.78, "end": 1896.84, "text": " So a different approximation to the softmax kernel." }, { "start": 1896.84, "end": 1902.6399999999999, "text": " And they say we can also decompose the softmax or approximate the softmax kernel with the" }, { "start": 1902.6399999999999, "end": 1904.72, "text": " following formula." }, { "start": 1904.72, "end": 1909.66, "text": " And I look, I, I'm not going to, they have a proof for this." }, { "start": 1909.66, "end": 1913.36, "text": " But this is the formula." }, { "start": 1913.36, "end": 1917.3799999999999, "text": " You sample again, you sample these things." }, { "start": 1917.38, "end": 1924.2800000000002, "text": " And then you perform this inner, this is the inner product that approximates the softmax" }, { "start": 1924.2800000000002, "end": 1925.2800000000002, "text": " kernel." }, { "start": 1925.2800000000002, "end": 1926.2800000000002, "text": " Okay." }, { "start": 1926.2800000000002, "end": 1931.96, "text": " And this is further, you can reduce this to this thing right here." 
}, { "start": 1931.96, "end": 1940.92, "text": " So it's a deterministic matrix right here, this which is given by that." }, { "start": 1940.92, "end": 1943.3600000000001, "text": " And it's this cos h." }, { "start": 1943.3600000000001, "end": 1946.7, "text": " So cos h is the hyperbolic tangent." }, { "start": 1946.7, "end": 1961.1200000000001, "text": " This can be this is so cos h of x is e to the x plus e to the minus x divided by two." }, { "start": 1961.1200000000001, "end": 1971.48, "text": " Okay, so this function approximates the softmax." }, { "start": 1971.48, "end": 1975.1200000000001, "text": " And that's just something you'll have to take from their proof." }, { "start": 1975.12, "end": 1982.4199999999998, "text": " However, you can now see that this can be fairly easily represented as an inner product," }, { "start": 1982.4199999999998, "end": 1985.3, "text": " you already see it here, right?" }, { "start": 1985.3, "end": 1992.28, "text": " This you simply, this is the part that comes from x, and this is the part that comes from" }, { "start": 1992.28, "end": 1993.28, "text": " y." }, { "start": 1993.28, "end": 2000.9199999999998, "text": " If you want to note this in our in our notation earlier, again, we use the distribution that" }, { "start": 2000.92, "end": 2005.96, "text": " we sampled the omegas from is going to be a normal distribution." }, { "start": 2005.96, "end": 2012.72, "text": " And our functions are going to be this h function is the pre factor, it's simply going to be" }, { "start": 2012.72, "end": 2018.02, "text": " the made up of the norm of x and put through the exponential function." }, { "start": 2018.02, "end": 2022.96, "text": " And then we have two options actually, right here." }, { "start": 2022.96, "end": 2026.24, "text": " I don't even know why they put the first one." }, { "start": 2026.24, "end": 2028.26, "text": " But the second option makes more sense." }, { "start": 2028.26, "end": 2030.64, "text": " And there's a bit of a more of a factor right here." }, { "start": 2030.64, "end": 2038.2, "text": " So you have two functions, there is x of u and negative x and x of negative u, as the" }, { "start": 2038.2, "end": 2041.8000000000002, "text": " two function you remember, this is where we had sine and cosine before." }, { "start": 2041.8000000000002, "end": 2047.3200000000002, "text": " Now we have x u and negative x, sorry, x of negative u." }, { "start": 2047.3200000000002, "end": 2050.2200000000003, "text": " And we can quickly check that this gives us the same thing." }, { "start": 2050.2200000000003, "end": 2056.7400000000002, "text": " So this h, these h functions, if we inner product them, that's going to be to give us" }, { "start": 2056.7400000000002, "end": 2060.3, "text": " the this, what is that even lambda?" }, { "start": 2060.3, "end": 2064.7400000000002, "text": " Is that a big lambda matrix right here?" }, { "start": 2064.7400000000002, "end": 2070.78, "text": " And our vector, let's just say we sample one single omega, right?" }, { "start": 2070.78, "end": 2073.7400000000002, "text": " So we have our x, we sample one single omega." }, { "start": 2073.7400000000002, "end": 2078.44, "text": " So x is going to give us a vector with two sub vectors, right?" }, { "start": 2078.44, "end": 2082.88, "text": " Since we have two functions, each sub vector is of length one." 
}, { "start": 2082.88, "end": 2091.06, "text": " So the first is going to be e to the omega x, and the second entry is going to be e to" }, { "start": 2091.06, "end": 2093.82, "text": " the negative omega x." }, { "start": 2093.82, "end": 2101.2200000000003, "text": " If we put in y through the same or as instead of x and y, you can think of queries and keys," }, { "start": 2101.2200000000003, "end": 2106.6, "text": " that's going to be y e to the negative omega y." }, { "start": 2106.6, "end": 2114.42, "text": " If we now take the inner product, that is going to give us and I'm resolving the exponentials" }, { "start": 2114.42, "end": 2116.02, "text": " already right here." }, { "start": 2116.02, "end": 2125.9, "text": " So that's going to give us e to the e to the w x plus y." }, { "start": 2125.9, "end": 2136.3399999999997, "text": " And here is going to give us plus e to the w or sorry, the negative w x plus y." }, { "start": 2136.34, "end": 2140.42, "text": " And that's the you know, there is a normalization factor." }, { "start": 2140.42, "end": 2142.86, "text": " That's why the square root of two is here, right?" }, { "start": 2142.86, "end": 2146.54, "text": " So that comes in somewhere here to give us this normalization factor." }, { "start": 2146.54, "end": 2155.5, "text": " So this is exactly the hyperbolic cosine of omega times z and z is x plus y that they" }, { "start": 2155.5, "end": 2156.5, "text": " say it somewhere." }, { "start": 2156.5, "end": 2157.5, "text": " Yeah." }, { "start": 2157.5, "end": 2158.5, "text": " Okay." }, { "start": 2158.5, "end": 2167.42, "text": " So if we choose f1 and f2 to be this x, u and x negative u, then we get if we perform" }, { "start": 2167.42, "end": 2173.1, "text": " the inner product, we get out exactly this formula number seven right here." }, { "start": 2173.1, "end": 2175.18, "text": " So this is this." }, { "start": 2175.18, "end": 2182.84, "text": " And that is an approximation of the softmax kernel of the softmax function." }, { "start": 2182.84, "end": 2185.34, "text": " It's just a different approximation than before." }, { "start": 2185.34, "end": 2186.34, "text": " Okay." }, { "start": 2186.34, "end": 2192.26, "text": " And the cool thing about this approximation is that the approximation itself only ever" }, { "start": 2192.26, "end": 2193.7000000000003, "text": " has positive values." }, { "start": 2193.7000000000003, "end": 2197.94, "text": " So these vectors here, you can see the x, the vectors here, and there's of course a" }, { "start": 2197.94, "end": 2204.1400000000003, "text": " four a factor in front of this right here, which is going to be also an exponential." }, { "start": 2204.1400000000003, "end": 2205.1400000000003, "text": " These are all exponential." }, { "start": 2205.1400000000003, "end": 2211.84, "text": " So these are all going to be positive features, which is very, very nice." }, { "start": 2211.84, "end": 2215.1000000000004, "text": " And they also show this theoretically." }, { "start": 2215.1, "end": 2218.7599999999998, "text": " So here, this kind of funky graphic shows this." }, { "start": 2218.7599999999998, "end": 2223.2999999999997, "text": " This is the ratio of the approximation mistake." }, { "start": 2223.2999999999997, "end": 2233.58, "text": " Okay, the ratio of the approximation mistake of the of the original approximation that" }, { "start": 2233.58, "end": 2240.54, "text": " we discussed and this new positive approximation that we just built right now." 
}, { "start": 2240.54, "end": 2244.58, "text": " And you can see that in parts here, it's fairly similar." }, { "start": 2244.58, "end": 2248.14, "text": " So this, I believe, so R is the ratio." }, { "start": 2248.14, "end": 2251.02, "text": " So it's fairly flat right here." }, { "start": 2251.02, "end": 2254.22, "text": " But there are parts where it just shoots up, right?" }, { "start": 2254.22, "end": 2260.62, "text": " And in fact, they can prove that you can see this also right here." }, { "start": 2260.62, "end": 2266.2599999999998, "text": " So the error of the trig approximation that shoots up while the positive approximation" }, { "start": 2266.2599999999998, "end": 2270.98, "text": " just stays flat or flatter in these regions." }, { "start": 2270.98, "end": 2283.44, "text": " They can in fact prove that the the error of the Yeah, so you see the error." }, { "start": 2283.44, "end": 2288.5, "text": " If the softmax values go to zero, so that's the problematic regions, the error of the" }, { "start": 2288.5, "end": 2294.38, "text": " trigonomic approximation can go to infinity while the error of the positive approximation" }, { "start": 2294.38, "end": 2295.66, "text": " goes to zero." }, { "start": 2295.66, "end": 2298.94, "text": " Okay, they have a number of theoretical results in here." }, { "start": 2298.94, "end": 2305.26, "text": " I think that's one of the main ones, the fact that the this approximation succeeds where" }, { "start": 2305.26, "end": 2307.86, "text": " the other approximation fails." }, { "start": 2307.86, "end": 2313.26, "text": " Really quickly, they also have this variant here, where they don't build a two vector" }, { "start": 2313.26, "end": 2319.06, "text": " or a vector of two sub vectors, but just one with just the exponential function." }, { "start": 2319.06, "end": 2321.38, "text": " And that is the same thing." }, { "start": 2321.38, "end": 2325.7400000000002, "text": " Because of course, if you sample w, you're going to have sorry, omega, if you sample" }, { "start": 2325.74, "end": 2333.2999999999997, "text": " omega, you're going to have omega as much as negative omega, I believe and and thereby" }, { "start": 2333.2999999999997, "end": 2340.1, "text": " in expectation, you're going to get this hyperbolic cosine again, I think that's the reason why" }, { "start": 2340.1, "end": 2346.1, "text": " but this lower this lower construction here gives you the hyperbolic cosine." }, { "start": 2346.1, "end": 2348.3599999999997, "text": " Okay, so pretty cool." }, { "start": 2348.3599999999997, "end": 2354.5, "text": " We simply use this approximation, we run our queries, right?" }, { "start": 2354.5, "end": 2358.74, "text": " This your queries and our keys through this." }, { "start": 2358.74, "end": 2363.9, "text": " And again, we ideally use more omegas than just one, maybe a bunch." }, { "start": 2363.9, "end": 2371.7, "text": " The more we use the better we obtain a linear function that approximates the softmax function." }, { "start": 2371.7, "end": 2375.46, "text": " The more we sample, the more it approximated, it's unbiased, and so on." }, { "start": 2375.46, "end": 2378.38, "text": " And have a bunch of variants of it." }, { "start": 2378.38, "end": 2386.58, "text": " So variant where you normalize the omegas, which gives you the regularized softmax kernel," }, { "start": 2386.58, "end": 2391.2200000000003, "text": " which is not a softmax anymore, but it's a regularized softmax." 
}, { "start": 2391.2200000000003, "end": 2397.3, "text": " And they can approximate this in pretty much the same way." }, { "start": 2397.3, "end": 2406.44, "text": " Except instead of a normal distribution, you use a uniform distribution right here." }, { "start": 2406.44, "end": 2414.94, "text": " And they have a bunch of other things, namely, one other improvement is that so far, we've" }, { "start": 2414.94, "end": 2421.1, "text": " simply sampled these W's, okay, we sampled the W's from a normal distribution like this" }, { "start": 2421.1, "end": 2422.44, "text": " here." }, { "start": 2422.44, "end": 2424.98, "text": " They say we can improve even further." }, { "start": 2424.98, "end": 2430.78, "text": " Namely, we can strictly improve with this gives us an estimator with strictly lower" }, { "start": 2430.78, "end": 2438.5, "text": " variance if we make sure that the W's we sample are exactly orthogonal." }, { "start": 2438.5, "end": 2442.5800000000004, "text": " So they're already approximately orthogonal if we sample them from a high dimensional" }, { "start": 2442.5800000000004, "end": 2443.5800000000004, "text": " space." }, { "start": 2443.5800000000004, "end": 2450.34, "text": " But if we make sure that they are exactly orthogonal, sorry, then they are giving us" }, { "start": 2450.34, "end": 2453.2200000000003, "text": " an even better approximation." }, { "start": 2453.2200000000003, "end": 2459.1600000000003, "text": " And you can do that by this procedure called the Gram-Schmidt orthogonalization or Gram-Schmidt" }, { "start": 2459.16, "end": 2462.18, "text": " renormalization procedure." }, { "start": 2462.18, "end": 2464.3399999999997, "text": " It's a pretty easy procedure." }, { "start": 2464.3399999999997, "end": 2469.1, "text": " And it doesn't mess with your unbiasedness." }, { "start": 2469.1, "end": 2475.02, "text": " Whenever D is an isotropic distribution, isotropic just means the same in every direction." }, { "start": 2475.02, "end": 2482.52, "text": " So like a standard Gaussian would fulfill or a uniform would fulfill this thing as long" }, { "start": 2482.52, "end": 2485.7799999999997, "text": " as it's centered." }, { "start": 2485.78, "end": 2490.0600000000004, "text": " I think maybe even if it's not centered, depends on how you renormalize." }, { "start": 2490.0600000000004, "end": 2492.46, "text": " Okay, this is irrelevant." }, { "start": 2492.46, "end": 2499.6400000000003, "text": " But if you make them exactly orthogonal, say this leads to the first theoretical results" }, { "start": 2499.6400000000003, "end": 2503.38, "text": " showing that orthogonal random features can be applied to reduce the variance of the softmax" }, { "start": 2503.38, "end": 2510.1800000000003, "text": " or Gaussian kernel estimators for any dimensionality D rather than just asymptotically for large" }, { "start": 2510.1800000000003, "end": 2513.82, "text": " enough D as it is the case for previous methods." }, { "start": 2513.82, "end": 2520.1000000000004, "text": " And leads to the first exponentially small bounds on large deviations probabilities that" }, { "start": 2520.1000000000004, "end": 2525.26, "text": " are strictly smaller than for non-orthogonal methods." }, { "start": 2525.26, "end": 2530.86, "text": " So you're going to end up with a thing that's strictly smaller, so bounds that are strictly" }, { "start": 2530.86, "end": 2534.78, "text": " smaller than if you don't use orthogonality." 
}, { "start": 2534.78, "end": 2542.54, "text": " The only thing it requires is that m is smaller or equal to D. So the number of omega u sample" }, { "start": 2542.54, "end": 2550.42, "text": " is going to be smaller equal to the dimensionality that the original space operates in, which" }, { "start": 2550.42, "end": 2554.7799999999997, "text": " they say this will be the case in all our experiments." }, { "start": 2554.7799999999997, "end": 2562.86, "text": " Okay, and again, these are exponentially small bounds, which is pretty cool." }, { "start": 2562.86, "end": 2567.34, "text": " I guess for you, the end user, it matters that this works." }, { "start": 2567.34, "end": 2572.3, "text": " And if you use all of their tricks with the positivity and the orthogonality." }, { "start": 2572.3, "end": 2577.78, "text": " So by the way, this here is where they show that CDD or orthogonal MSE, the mean squared" }, { "start": 2577.78, "end": 2583.54, "text": " error is smaller than the original one minus some thing." }, { "start": 2583.54, "end": 2588.2200000000003, "text": " And as long as the something of course is greater than zero, you're going to have something" }, { "start": 2588.2200000000003, "end": 2590.0600000000004, "text": " that's smaller." }, { "start": 2590.0600000000004, "end": 2597.46, "text": " Okay, then they prove a bunch of other things again about this kind of this regularized," }, { "start": 2597.46, "end": 2600.46, "text": " sorry, not regularized." }, { "start": 2600.46, "end": 2604.94, "text": " I forget it's the where you divide by the norm." }, { "start": 2604.94, "end": 2609.42, "text": " In any case, they implement this in jacks." }, { "start": 2609.42, "end": 2610.42, "text": " Oh, great." }, { "start": 2610.42, "end": 2611.42, "text": " Wow, cool." }, { "start": 2611.42, "end": 2616.06, "text": " I okay, I have no opinion on jacks." }, { "start": 2616.06, "end": 2621.5, "text": " But they have the code released and I'll of course link to it." }, { "start": 2621.5, "end": 2627.9, "text": " And here you can clearly see so this is a log log plot, where you have l the size of" }, { "start": 2627.9, "end": 2636.02, "text": " the input and the number of seconds that it takes to go forward and backward over here" }, { "start": 2636.02, "end": 2637.02, "text": " in the model." }, { "start": 2637.02, "end": 2640.26, "text": " And you can see the x here." }, { "start": 2640.26, "end": 2646.86, "text": " The x is the baseline where you simply bypass the attention matrix, you simply take the" }, { "start": 2646.86, "end": 2650.1800000000003, "text": " identity function and just return the value matrix." }, { "start": 2650.18, "end": 2658.22, "text": " And you can see that the performance the performers, they scale fairly well with that baseline." }, { "start": 2658.22, "end": 2663.3399999999997, "text": " And in fact, they scale at the same slope, which is the important part right here, you" }, { "start": 2663.3399999999997, "end": 2668.66, "text": " can really see that this is linear slope where the transformers which are the dashed lines," }, { "start": 2668.66, "end": 2677.1, "text": " they all curve upwards, which of course is that that quadratic requirement." }, { "start": 2677.1, "end": 2679.74, "text": " The same in the backward pass, I don't know if they continue curving." }, { "start": 2679.74, "end": 2683.4599999999996, "text": " I think it's also a straight line in the log log plot." 
}, { "start": 2683.4599999999996, "end": 2691.2599999999998, "text": " But the slope is two instead of one, like the linear like the linear models." }, { "start": 2691.2599999999998, "end": 2697.5, "text": " Again, the comparison is only important between the baseline and the lines that you're looking" }, { "start": 2697.5, "end": 2698.5, "text": " at." }, { "start": 2698.5, "end": 2702.74, "text": " If they have the same slope, they scale the same as you get higher." }, { "start": 2702.74, "end": 2704.4799999999996, "text": " Look at it." }, { "start": 2704.4799999999996, "end": 2706.06, "text": " This is log L, right?" }, { "start": 2706.06, "end": 2711.06, "text": " So this is these these are now two to the 18th tokens." }, { "start": 2711.06, "end": 2714.42, "text": " And I believe this is done on one GPU." }, { "start": 2714.42, "end": 2720.2599999999998, "text": " Yes, so an out of memory error on a V 100 GPU." }, { "start": 2720.2599999999998, "end": 2722.58, "text": " And this is pretty good." }, { "start": 2722.58, "end": 2729.74, "text": " This is pretty good news for everyone who wants to run the performers in in kind of" }, { "start": 2729.74, "end": 2732.86, "text": " a low resource environment low risk with low resource." }, { "start": 2732.86, "end": 2740.9, "text": " I mean, like a deep learning GPU instead of 1000 TPUs, which is pretty cool." }, { "start": 2740.9, "end": 2747.82, "text": " They also show the that their method is better than the kind of so the orthogonality is better" }, { "start": 2747.82, "end": 2749.78, "text": " than the ID features." }, { "start": 2749.78, "end": 2756.2200000000003, "text": " And then of course, the positive ID features are better than these original trigonometric" }, { "start": 2756.2200000000003, "end": 2758.38, "text": " decomposition." }, { "start": 2758.38, "end": 2768.02, "text": " And they show that this thing that you can take a transformer checkpoint, and you plug" }, { "start": 2768.02, "end": 2772.38, "text": " it into the performer." }, { "start": 2772.38, "end": 2777.9, "text": " And you simply have to fine tune a little bit to get it to the performance that the" }, { "start": 2777.9, "end": 2779.38, "text": " transformer was at, right?" }, { "start": 2779.38, "end": 2783.38, "text": " This is I believe this is the original training curve of the transformer." }, { "start": 2783.38, "end": 2788.1400000000003, "text": " So you know, it's not a fair comparison, because the performer starts from the checkpoint" }, { "start": 2788.14, "end": 2789.94, "text": " already." }, { "start": 2789.94, "end": 2791.22, "text": " At least that's how I interpret it." }, { "start": 2791.22, "end": 2797.2999999999997, "text": " It's not clearly written and they say, okay, over here, this trig thing works." }, { "start": 2797.2999999999997, "end": 2800.2599999999998, "text": " This is the original approximation, this even works." }, { "start": 2800.2599999999998, "end": 2808.46, "text": " However, if we do that on a bit more challenging, more longer sequences, data, data set, then" }, { "start": 2808.46, "end": 2811.7, "text": " you can see that the trig softmax, it just it just whacks out." }, { "start": 2811.7, "end": 2813.22, "text": " That's this thing here." }, { "start": 2813.22, "end": 2818.8599999999997, "text": " And you actually need better these positive approximations." }, { "start": 2818.8599999999997, "end": 2822.7799999999997, "text": " And that compared to the Linformer here, which is pretty cool." 
}, { "start": 2822.7799999999997, "end": 2827.4599999999996, "text": " So the Linformer, another, I've made a video about it, if you want to know about it, but" }, { "start": 2827.4599999999996, "end": 2833.2799999999997, "text": " they also do random projections of the attention matrix." }, { "start": 2833.2799999999997, "end": 2840.74, "text": " But you can see that the Linformer plateaus along with the performers, if you don't redraw" }, { "start": 2840.74, "end": 2842.7, "text": " the random features." }, { "start": 2842.7, "end": 2848.02, "text": " So if you want in the performer, if you do it at the right time, you redraw these random" }, { "start": 2848.02, "end": 2854.2599999999998, "text": " features, these omegas, you have to have to see where you can you can't just arbitrarily" }, { "start": 2854.2599999999998, "end": 2856.54, "text": " redraw them between computation steps." }, { "start": 2856.54, "end": 2861.6, "text": " But at the end of like a computation step, you can redraw for the next computation step." }, { "start": 2861.6, "end": 2870.5, "text": " And if you do that, and the even better with the regularized or the the normalized features," }, { "start": 2870.5, "end": 2876.1, "text": " you get to the same level of performance that a standard transformer would get." }, { "start": 2876.1, "end": 2883.06, "text": " But of course, without the quadratic requirements." }, { "start": 2883.06, "end": 2892.74, "text": " And okay, lastly, as I said, they've already they've already swapped out the" }, { "start": 2892.74, "end": 2898.26, "text": " they swapped out this nonlinearity by a relu." }, { "start": 2898.26, "end": 2904.1200000000003, "text": " So here they construct performer relu, taking f equals relu in equation five, you remember" }, { "start": 2904.1200000000003, "end": 2911.0600000000004, "text": " what f was, f was the sine and cosine when we had the first approximation and f was the" }, { "start": 2911.0600000000004, "end": 2915.44, "text": " x x of u and x of minus u, the second one." }, { "start": 2915.44, "end": 2922.34, "text": " And as I said, the big improvement in deep learning came when we swapped sigmoids for" }, { "start": 2922.34, "end": 2923.5400000000004, "text": " relus." }, { "start": 2923.54, "end": 2928.7799999999997, "text": " And here they've already they're already trying swapping now this because they say, well," }, { "start": 2928.7799999999997, "end": 2933.18, "text": " so we have a method that we can basically plug in anything we want." }, { "start": 2933.18, "end": 2936.34, "text": " So they plug in relu because it's you know, worked well." }, { "start": 2936.34, "end": 2940.06, "text": " And this again, it works pretty well." }, { "start": 2940.06, "end": 2945.84, "text": " So they compare again also with the reformer here with the Lin former, as you can see," }, { "start": 2945.84, "end": 2950.42, "text": " and of course, they beat everything now, whether or not this method is going to be the next" }, { "start": 2950.42, "end": 2957.06, "text": " thing, like the thing that everyone uses is to be we don't know." }, { "start": 2957.06, "end": 2959.02, "text": " It's fairly possible." }, { "start": 2959.02, "end": 2960.34, "text": " It's pretty cool." 
}, { "start": 2960.34, "end": 2965.52, "text": " And it appears to be theoretically solidly grounded, but you never know from the experiments" }, { "start": 2965.52, "end": 2971.42, "text": " of the single paper, the broader impact statement, much respect, they just use it to tell you" }, { "start": 2971.42, "end": 2973.5, "text": " how awesome their paper is." }, { "start": 2973.5, "end": 2981.78, "text": " Like there's no mention on on on any kind of ethical impact, which I believe like I'm" }, { "start": 2981.78, "end": 2987.34, "text": " all for these kinds of broader impact statements, like just kind of okay, research on transformers" }, { "start": 2987.34, "end": 2990.6, "text": " is going to be better because now people have access to it." }, { "start": 2990.6, "end": 2991.92, "text": " It's backward compatible." }, { "start": 2991.92, "end": 2993.88, "text": " That's pretty cool." }, { "start": 2993.88, "end": 2998.78, "text": " It's applicable to biology and medicine because we can take longer sequences." }, { "start": 2998.78, "end": 3003.06, "text": " It's all like, yeah, I like these kinds of broader impact statement." }, { "start": 3003.06, "end": 3010.7799999999997, "text": " The last thing here is that you might be so the only problem is if you want to do this" }, { "start": 3010.7799999999997, "end": 3017.12, "text": " causal attention that if you want to do like a generative model, like a GPT sort of model," }, { "start": 3017.12, "end": 3019.82, "text": " you have to do a bit of a trick." }, { "start": 3019.82, "end": 3023.46, "text": " And that is because your attention matrix isn't the full attention matrix." }, { "start": 3023.46, "end": 3025.44, "text": " So you can't just decompose it." }, { "start": 3025.44, "end": 3028.7, "text": " It's this lower triangular matrix right here." }, { "start": 3028.7, "end": 3034.22, "text": " But since you have linear decomposition of this thing, you can do these kind of prefix" }, { "start": 3034.22, "end": 3046.9399999999996, "text": " sums, namely, you can compute simply so you you you can compute the key one times value" }, { "start": 3046.9399999999996, "end": 3055.14, "text": " one, and then you can compute key two times value two plus key one times value one." }, { "start": 3055.14, "end": 3062.66, "text": " And you compute key three value three plus key two value two plus key one, sorry, value" }, { "start": 3062.66, "end": 3064.94, "text": " one, and so on." }, { "start": 3064.94, "end": 3066.2599999999998, "text": " You compute these things." }, { "start": 3066.2599999999998, "end": 3070.94, "text": " And these are all these are all the big where the L goes away, right?" }, { "start": 3070.94, "end": 3073.68, "text": " So we do that first." }, { "start": 3073.68, "end": 3082.66, "text": " And then we simply have to come along and we take q q one, multiply by q one, v one," }, { "start": 3082.66, "end": 3090.54, "text": " we take q two, multiply by this and this q three will multiply by this, this and this." }, { "start": 3090.54, "end": 3093.3399999999997, "text": " And you see, that's how you get your causal attention." }, { "start": 3093.3399999999997, "end": 3098.8199999999997, "text": " So you simply keep track of these prefix sums right here." 
}, { "start": 3098.8199999999997, "end": 3105.3199999999997, "text": " And then when the next q comes along, simply multiplied by all of the things that are above" }, { "start": 3105.3199999999997, "end": 3110.2599999999998, "text": " it in the prefix sum, that's how you get your triangular matrix." }, { "start": 3110.26, "end": 3117.28, "text": " So even that is solved, a thing that I believe the Lin former wasn't able to do with its" }, { "start": 3117.28, "end": 3120.82, "text": " particular decomposition, I might be I might be wrong here." }, { "start": 3120.82, "end": 3125.78, "text": " All right, they have a bunch of experiments on protein analysis, and so on, which of course," }, { "start": 3125.78, "end": 3130.86, "text": " wasn't possible, I guess before because it was so so heavy." }, { "start": 3130.86, "end": 3137.7400000000002, "text": " They also have like image net 64, as you can see right here, which is an impossible data" }, { "start": 3137.74, "end": 3140.2, "text": " set for a classic transformer." }, { "start": 3140.2, "end": 3146.16, "text": " As I said, they have code code is in jacks, which is like this is it's ugly code." }, { "start": 3146.16, "end": 3148.52, "text": " Let's be honest, but it's code." }, { "start": 3148.52, "end": 3150.4599999999996, "text": " So that's fairly cool." }, { "start": 3150.4599999999996, "end": 3156.3799999999997, "text": " And I want to point out the right at the bottom here is actually where the stuff happens." }, { "start": 3156.3799999999997, "end": 3160.2599999999998, "text": " So you can see that." }, { "start": 3160.26, "end": 3168.42, "text": " Just quickly, you have here keys and queries are, where is it?" }, { "start": 3168.42, "end": 3169.42, "text": " Exactly." }, { "start": 3169.42, "end": 3173, "text": " So queries and keys are going to be constructed right here." }, { "start": 3173, "end": 3178.5800000000004, "text": " So query prime and key prime are going to be pulled through this feature creator, which" }, { "start": 3178.5800000000004, "end": 3180.3, "text": " implements these these kernels." }, { "start": 3180.3, "end": 3188.1800000000003, "text": " So these can either as we said, these x or the relu's or the sine cosine, whatnot, then" }, { "start": 3188.18, "end": 3197.7, "text": " you're going to multiply the queries and the keys, which gives you yet this W matrix." }, { "start": 3197.7, "end": 3201.22, "text": " And all that we need to do now is normalize it." }, { "start": 3201.22, "end": 3207.98, "text": " Okay, so we re normalize by constructing this denominator right here." }, { "start": 3207.98, "end": 3212.7, "text": " And then there's a whole block for the unit directionality, which you can imagine is pretty" }, { "start": 3212.7, "end": 3223.9399999999996, "text": " ugly, but the renormalization we constructed, we reciprocal means we take the inverse multiplied" }, { "start": 3223.9399999999996, "end": 3229.62, "text": " by the W and return the result, this should be translatable into your favorite whatnot" }, { "start": 3229.62, "end": 3235.22, "text": " pytorch or TensorFlow, maybe it's already been done, I haven't researched that particular" }, { "start": 3235.22, "end": 3236.4199999999996, "text": " thing." 
}, { "start": 3236.42, "end": 3243.14, "text": " In any case, I invite you to check out the paper, the code, play around with the functions" }, { "start": 3243.14, "end": 3248.1800000000003, "text": " used here, as long as you, you know, use fun, you don't even you don't need to know, like" }, { "start": 3248.1800000000003, "end": 3253.34, "text": " these papers, they always know which kind of kernels their functions correspond to." }, { "start": 3253.34, "end": 3259.2200000000003, "text": " But you know, in SVM, people just went, went nuts, I just plug in some functions, see what" }, { "start": 3259.2200000000003, "end": 3260.98, "text": " happens." }, { "start": 3260.98, "end": 3264.14, "text": " Probably nothing good, but it's possible." }, { "start": 3264.14, "end": 3268.42, "text": " Alright, so that was it for the performer." }, { "start": 3268.42, "end": 3275.14, "text": " I hope you gained something from this kind of an understanding of how it works." }, { "start": 3275.14, "end": 3277.02, "text": " And I wish you the best." }, { "start": 3277.02, "end": 3294.3, "text": " Bye bye." } ]
ccBMRryxGog
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Sparse Expert Models (Switch Transformers, GLAM, and more... w/ the Authors)
[ "Science & Technology" ]
[]
#nlp #sparsity #transformers This video is an interview with Barret Zoph and William Fedus of Google Brain about Sparse Expert Models. Sparse Expert models have been hugely successful at distributing parts of models, mostly Transformers, across large arrays of machines and use a routing function to effectively route signals between them. This means that even though these models have a huge number of parameters, the computational load for a given signal does not increase because the model is only sparsely activated. Sparse expert models, such as Switch Transformers and GLaM, can scale up to trillions of parameters and bring a number of desirable properties. We discuss everything from the fundamentals, history, strengths and weaknesses, up to the current state of the art of these models. OUTLINE: 0:00 - Intro 0:30 - What are sparse expert models? 4:25 - Start of Interview 5:55 - What do you mean by sparse experts? 8:10 - How does routing work in these models? 12:10 - What is the history of sparse experts? 14:45 - What does an individual expert learn? 19:25 - When are these models appropriate? 22:30 - How comparable are sparse to dense models? 26:30 - How does the pathways system connect to this? 28:45 - What improvements did GLaM make? 31:30 - The "designing sparse experts" paper 37:45 - Can experts be frozen during training? 41:20 - Can the routing function be improved? 47:15 - Can experts be distributed beyond data centers? 50:20 - Are there sparse experts for other domains than NLP? 52:15 - Are sparse and dense models in competition? 53:35 - Where do we go from here? 56:30 - How can people get started with this? Papers: Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity (https://arxiv.org/abs/2101.03961) GLaM: Efficient Scaling of Language Models with Mixture-of-Experts (https://arxiv.org/abs/2112.06905) Designing Effective Sparse Expert Models (https://arxiv.org/abs/2202.08906) Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello, today I'm having an interview about the topic of sparse experts. Now, fittingly, the people I'm interviewing are absolute experts in this type of model. These models are huge. They're usually language models, but they don't have to be; they're usually transformers, but they don't have to be. What they do have in common is this notion of sparse experts. These models go up to the trillions of parameters, and they achieve this via sparsity. Now I want to do a very, very brief introduction of what sparse expert models are, and then we'll dive into the interview right away, because I don't want to keep it from you. So let's look at a transformer model. Usually I have some sort of an input that is tokens, a sequence of tokens, which are represented here by circles. And what I'm going to do with these tokens is I'm going to alternatingly push them through different layers. Now, one big layer type that is common in transformers is the attention layer. We're not going to talk about the attention layer today; all you have to know is that it takes in a sequence of tokens and it outputs a sequence of tokens again, ideally the same amount as went in, which I failed to draw here. The other very common big type of layer in these transformers is what's called the feed forward layer. Now, the feed forward layer is just a linear layer, and every token goes through this linear layer by itself. So every token individually goes through the same transformation, and thus, as we do this with all tokens, we again end up with a sequence of as many tokens as we input. Now, a sparse expert model isn't very different from this. The attention layers commonly aren't really touched, so that works just the same. However, in the feed forward layer, we see a big difference. Notably, we don't only have one feed forward layer, we have many. So here is feed forward one, here is feed forward two, here is feed forward three, and here is feed forward four, each one representing a different individual linear transformation of a token. Now, when we talk about sparse experts, these things here are called the experts. They're called the experts because they're thought to specialize in very specific tasks, and the goal in sparse expert models is to route the tokens to the corresponding correct experts. So every token goes through what's known as a routing function. We're going to talk about this routing function in the interview, but in essence, it is something very simple, usually a linear function or a simple transformation, that decides to which of the experts any given token is routed. Sometimes in sparse expert models, a token is routed to multiple experts, but in the newest iterations, the tokens are simply routed to one single expert and none of the others. Usually this is done, as I said, by some sort of a linear transformation followed by a softmax to decide where the token goes. So every token would be assigned to one expert. And that gives the possibility of scaling these models up dramatically. Not only do you save a lot of compute, because each token only goes to one place and you therefore only need to compute that one thing for that particular token, but there's also the opportunity to massively shard and parallelize these different experts across different machines, as you only need to route each token to one place. That means you dramatically reduce these big all-to-all reductions; they still happen, but not as much.
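To make the routing idea concrete, here is a minimal NumPy sketch of a top-1 sparse expert layer in the spirit of what is described above: a linear router, a softmax over experts, and the selected expert's output scaled by the router probability. All sizes and the toy feed-forward experts are illustrative assumptions, not any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, n_tokens = 16, 64, 4, 10

# Each "expert" is a tiny feed-forward network: two matrices with a ReLU between.
experts = [
    (rng.normal(size=(d_model, d_ff)) * 0.1, rng.normal(size=(d_ff, d_model)) * 0.1)
    for _ in range(n_experts)
]
W_router = rng.normal(size=(d_model, n_experts)) * 0.1  # the routing function

def switch_layer(tokens):
    """Top-1 routing: every token is processed by exactly one expert."""
    logits = tokens @ W_router                                      # (n_tokens, n_experts)
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax over experts
    choice = probs.argmax(-1)                                       # chosen expert per token
    out = np.zeros_like(tokens)
    for i, x in enumerate(tokens):
        W1, W2 = experts[choice[i]]
        y = np.maximum(x @ W1, 0.0) @ W2          # run the chosen expert's FFN
        out[i] = probs[i, choice[i]] * y          # scale by the router probability
    return out

tokens = rng.normal(size=(n_tokens, d_model))
print(switch_layer(tokens).shape)  # (10, 16)
```

The scaling by the router probability is what lets gradients flow back into the router despite the hard argmax selection, a point that comes up again in the interview.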
So, as I already said, the biggest models have trillions of parameters. You need to take a little bit of care with how you then aggregate the tokens once they come out of the experts. Essentially, what you want to do is carry over the likelihood from the routing function up here. But this is a minor detail; minor details are important, but you know. So I know it doesn't look like much, but these sparse expert models really have the potential to massively scale up our current efforts in AI. And I have no doubt that they're going to play a role in the near future, when we're looking at bigger and bigger models, because at some point, the purely dense models will reach sort of the limit of what's physically doable, and then it's a good opportunity to have models that can go even larger. All right, so without further ado, let's jump into the interview. I hope you're enjoying yourself. If you do have any sort of comments, please leave a comment, share the video around if you like it, and I'll see you around. Bye bye. Hello everyone, my guests today are William Fedus and Barret Zoph, who are engineers and researchers at Google, at Google Brain, and have been diving into large models, specifically sparse expert models, which are models that feature this notion of experts and also have a notion of sparsity. And hopefully today we'll discover what this is all about. Specifically, we'll talk broadly about three papers in a long line of work. One is the Switch Transformers paper, which was really, I believe, one of the first papers that just had massive amounts of parameters. Was that like a trillion? Probably a trillion parameters. It was big. 1.6 trillion parameters. That's right. Yeah, yeah, it's insane. And then there's GLaM, which demonstrated really nice scaling laws with these sparse experts. And more recently, there is Designing Effective Sparse Expert Models, which, as far as I can see, is also a bit of a summary and recommendations, more of a what-we-learned type of thing. So William and Barret, welcome to the channel. Thanks so much for being here. Yeah, thanks for having me. So can you give us just a little bit of context, what you mean when you say sparse expert models? Yeah, sure. So this is a great question, especially since the word sparsity crops up in many different aspects of deep learning, whether it's sparse attention or various other sparse paradigms. So yes, sparsity in our case means that each input can get different subsets of parameters. That's the main sparsity that we're talking about here. And it's a very natural concept, right? Normally, in a dense transformer, for example, you have a word embedding, and any word will have the same parameters and compute applied to it. In sparse models, typically what happens is you have the same amount of compute, but you can have different subsets of the model parameters acting on the model inputs. And what does that mean in practice? So we're talking mainly about, let's say, transformer models here. Now, is that a good characterization of things, or do you see sparse expert models in a more general sense? Yeah, I mean, these things actually cropped up originally in the context of ensemble-type methods, where you have a bunch of almost fully independent models.
And then you're sort of using each model as an expert. But the common paradigm as of 2022 is experts as a layer. This was really popularized by Noam Shazeer's work in 2017, Outrageously Large Neural Networks. In that context, they were actually inserting it in between LSTM layers, which was the prevailing recurrent architecture at the time. Now, because the world has shifted towards transformers in seemingly almost all modalities, we're most often thinking about experts as a layer inside transformers. Typically we're doing this at the feed forward layer, these blocks that just apply independently to the different tokens. But we've also considered it in self-attention layers. It's just a very general concept, but yeah, typically in transformers. So you have this notion of an expert, which you say is sort of a specialized function or something like this, and then there's often this thing called a router. How does information find its way through these experts? What are the general principles in that, and why would I even consider doing something like this? Yeah, so, great question. So yeah, you have this figure up here, and one thing to notice is that, basically, if you only have a single expert, it essentially reduces to just a normal dense transformer, so the interpretation is pretty natural. And in almost all of the ways people are doing sparse expert models nowadays, there's some notion of a learned mechanism that, for the embedding at the current layer, figures out which expert you should send this representation to. And this can range from something very simple, just a simple softmax function over the total number of experts, to very complicated linear-programming-type solutions that have a more globally optimal assignment. So yeah, this is the paradigm, and I think it's a pretty natural one. Even if you want to only apply one set of weights per representation, now you have the option: instead of always applying the same weight matrix, you can have a selection of, in this figure, four different weight matrices. And the way that we've done this in our work, and I think is the most common, is just as a single feed forward network. So you take your input representation, and then you apply something that's going to be the model dimension by the number of experts, and then you apply a softmax function to get a probability over all of the different experts. In our Switch Transformer work, the routing was extremely simple: you just send it to the expert with the highest probability, and then the output of that computation gets scaled by the router probability. So if it was, oh, with 0.9 send it to expert two, then when you have the output of that computation, you scale it all by 0.9. Do I remember correctly that there was some paper, and this might be getting very technical for a second, but was there an older paper that said something like you always need to send it to at least two of these experts, otherwise it's kind of unstable? Is that an older paper, or newer than yours? It actually wasn't instability that they were clashing against.
It was more this idea that we're doing this weird discretized operation. So instead of using reinforcement learning to update on the experts, we're doing this kind of hacky backpropagation through these softmax operations, which have been masked. And the idea was that top-two or greater was necessary because they were thinking: well, I'm creating a probability distribution for this token, for this word, over the available experts. If I don't have at least two, I can't tell whether expert i or j was better for this one. So the hypothesis was that, in order to have a useful gradient signal for the router, it has to know: should I have sent it to i or j? And then we just didn't follow convention and did one, and it also seems to work just fine. I think that's in part because you're doing this normalization, so you can still get an up-weighting or a down-weighting if you select an expert. So it's like, oh, if that expert selection worked out well for you, or worked out poorly for you, you can then adjust the embedding for that expert, and then at the next pass, if you saw that same token, you're still doing this softmax distribution, so you're kind of up-weighting or down-weighting it. So I think that's the gist of the mechanism. And this idea was at least from 2017; it may have predated it. Could you, now that we're talking about history, trace the evolution of this line of research a little bit? You already mentioned this existed as sort of ensemble methods. I'm talking now specifically about sparse experts within transformers, which are the things that allow us to really scale up to these giant models. What's sort of the line of research? What were the original things? I'm going to guess this work is among them. And what were the improvements that happened since then in this field? Barret, do you want me to go, or do you want to go? Go for it. Yeah, so, I mean, going back 30 years, you have Jordan and Jacobs. This obviously predates the transformer, because the transformer was a 2017 development. So the concept is very, very old; I think it just resurged in popularity. I'd say the very first use of mixture of experts in a transformer was Lepikhin et al. in 2020. So this is GShard, and it showed really remarkable improvements in translation. What they were doing was, analogous to Switch Transformer and these other works, they just substitute these feed forward blocks with experts. And in that case, also similar to Switch Transformer, they had many, many experts, I think in that case thousands, and they were showing really significant improvements over state-of-the-art translation models. I think as the field has evolved, as we've learned a bit more about it, there seems to be this general trend of: okay, cool, we can pre-train these models, or, in the case of translation, there's no big distribution shift, because when you're training to translate, you're also doing inference to translate. But in Switch Transformer we found: okay, we'll pre-train to improve the perplexity, improve the prediction of the next token, and we were getting significant improvements. But then when we took it under a data distribution shift to fine-tuning, it was performing quite badly with many experts.
So I think there's been this trend to try to balance the computation and the parameters a bit more. Some of the prevailing models in transformers have actually gone towards fewer experts: 16, 32, 64 experts, not thousands of experts. So that's kind of the lineage of mixture of experts, and then mixture of experts in the context of transformers. And in that context, if one expert is the classic transformer model, and that seems to not work as well as many experts, but too many don't work either, what is the abstraction that I can think of for an expert? Like, what does an expert learn? What is an expert responsible for, approximately? Do you have any idea what happens? How does it make sense that the optimal number is, let's say, a few dozen, and not super many, but also not one? Yeah, so, great question. There are a few parts to this. One: I think it's really just an empirical observation right now that, you know, 16 versus 64 versus 2048 versus 10,000, it seems like the expert numbers in the middle work best. It's not that, on a per-step basis, more experts typically make things worse; usually it's better or about the same, but things start to level off. But it's very inconvenient to have a lot of experts, because it's just a huge memory footprint. Given the way that the models are distributed, it's not really amenable, typically, unless you have tons of parallel cores going. So the observation that you actually want a middle amount of experts is a lot of the time driven by the practicality of training and serving these models. Yeah. In terms of what these models are actually learning, intuitively: we actually studied this in our most recent work, looking at each expert, what are they specializing in, what are they learning? And interestingly, they kind of specialize in some shallow concepts, where you would think maybe there would be only really deep things going on and it would be kind of hard to inspect them. But we noticed, oh, there's a punctuation expert, or an expert that will talk about proper nouns, which we thought was pretty funny, and maybe not super intuitive. Yeah, actually, if you want, you can switch over to the recent paper, and we have a figure which shows some of these things, so you can follow along and see how shallow these things actually are. Yeah. So this would be different experts. So you found an expert, or in this case multiple experts, that focused on these sorts of things. So there are conjunctions, punctuation, verbs, visual descriptions, which is interesting, because that's, I want to say, a higher-level thing than just the punctuation, right? Counting, numbers. Yeah, how do you make sense of this stuff? What's going on? Yeah, I mean, I think we were expecting maybe a higher level of description, or sort of representation. I think we've just started to crack open and look into these models to actually see what's going on. Obviously, one big specialization that you're seeing here are these sentinel tokens.
To make sense of that: we were doing pre-training with a sort of fill-in-the-blank test, and a blank is represented by these little sentinels. So extra_id_10 represents, you know, blank number 10. And we really frequently see experts specializing on these blanks. So, since we're doing pre-training, that's an interesting thing. And I think that might also segue into: given this observed specialization, maybe you actually want to make some experts higher capacity, or give them more compute, to do things that might be harder. But honestly, I mean, this is still very early. It'd be interesting to apply some of the interpretability lenses that Anthropic has to some of the recent sparse expert models. Some questions we've received are: what is the interplay of expert specialization with self-attention specialization? And that's honestly completely open. I think we were just putting this table forth to the community to say: well, we started; it's not exactly what we would have expected, but it's definitely a call to dig further and hopefully further improve things. Also, I believe this was already here in Switch Transformers: this ability to distribute these things across devices, which comes naturally with having sparse experts. Sparsity meaning, in this case, that I only send stuff to one or a few experts. And with that came the ability to shard this across devices. How practical is this really? When would I do something like this? At what point would it become practical and useful, and the best thing to do, to communicate across devices for my experts? Yeah, so, really great question. And I actually think this is the reason why the method works so well. The standard way, I would say, that people are doing distributed training of these models is they have either full data parallelism, which means each machine has the same set of weights but different slices of data, or a blend of data and model parallelism, where it's kind of a mix: certain cores have sometimes different weights or sometimes different data, and then you communicate stuff to emulate a full model. With experts, one really easy interpretation is: let's say you have a model, and you're using data parallelism, and you have four different machines. A really natural way to overlay experts on this would be to just have one expert per machine. And this is a really nice interpretation, because then, when you have all of your local data per core, you'd have the router weights replicated, but then you just figure out what expert each token needs to go to, and that's when you shuffle all the tokens around to the machines, do all the computation, and then shuffle them back. And this makes it really nice, because then, per machine, you actually never have any more parameters than you would have had with just the dense transformer; but now you have experts. So it's actually a really nice way of thinking about how to design the models: oh, you have this many cores for data parallelism, just have that many experts.
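As a hedged illustration of this "one expert per machine" picture: in a real system the shuffle is an all-to-all across devices, but the data movement can be mimicked on a single machine with a gather and a scatter. The router decisions and the "expert" computation below are stand-ins, not any production code path.

```python
import numpy as np

rng = np.random.default_rng(1)
n_devices = 4              # pretend each device hosts exactly one expert
n_tokens, d_model = 12, 8
tokens = rng.normal(size=(n_tokens, d_model))
choice = rng.integers(0, n_devices, size=n_tokens)  # router decisions (stand-in)

# "Shuffle the tokens around": gather each expert's tokens, as if sent over
# the network in an all-to-all; compute locally; then scatter results back.
out = np.zeros_like(tokens)
for e in range(n_devices):
    idx = np.where(choice == e)[0]      # tokens bound for device/expert e
    if idx.size == 0:
        continue
    local = tokens[idx]                 # the "incoming" all-to-all buffer
    local = local * (e + 1)             # stand-in for expert e's computation
    out[idx] = local                    # the return all-to-all
print(out.shape)  # (12, 8)
```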
And that's actually a paradigm that Liam and I use a lot when designing these models as well. And yeah, I mean, as soon as you have this sort of distributed model, where you're already going across accelerators and devices, you already have these communication patterns, right? You need to get activations to a certain place, you need to get gradients to a certain place. So you already have these all-reduce communication collectives. An expert model is going to introduce all-to-all communication patterns. So that can be a more expensive thing, especially depending on your topology and the bandwidth between all of your devices. But yeah, this is something you have to kind of empirically test: okay, how much does this architecture buy you in terms of performance on your task, versus the additional cost of all-to-all communication? But you will be communicating across devices for these big models regardless, to train them. Yeah. So this is a good segue, because you can achieve these giant models, like trillions of parameters, using these sparse expert models, because naturally I can parallelize these experts, and it doesn't cost me really much more compute, because any data point, or any token, only goes to one single expert. There is always a bit of, let's say, the question of how comparable this is to the dense models. I don't know if this is a latent feeling that I get from the community, but people would rather have the 175-billion-parameter GPT-3 model than the Switch Transformer, even if it is trillions of parameters. Is there some sort of division factor by which I could compare to a dense model, or do you think that it's an entirely different nature of function that's computed here? Yeah, so this is a really great question, and I think there are a lot of different ways you have to look at this to figure out if a sparse model is right for you. I think actually, in a lot of applications, if it's like: hey, I want to train the model with the smallest memory footprint, so I can use it on the smallest number of devices possible, a dense model will always be better. I think on a per-parameter basis, dense models are going to perform better. So for those types of applications, I'm like: yeah, I don't think it makes sense to be using sparse models. Maybe you want to just train the best thing that you can fit onto your local 2-GPU machine, or a 10-GPU machine, and do really kind of low-throughput feeding of data into it, nothing high-throughput or anything like that. I think sparse models are good where you're going to be training a model and hosting it on a lot of machines, and you're going to have a lot of throughput going through it. So a lot of queries, a lot of stuff going through it, because then things can be batched together and the models actually become pretty efficient. So I think that's one lens to look at for when you would want to use a sparse versus a dense model. And the second lens is: for a given amount of GPU or TPU hours on a compute cluster, what model will get you the best performance?
And I think that's the lens that we actually spent a lot of time looking at for pre-training models in this paper: like, oh, you have 512 TPU chips, and I give you X budget of training hours, is a dense model or a sparse model going to give you the best pre-training performance? And I think our assessment was that, yeah, actually, the Pareto-optimal model typically is a sparse model in that setup. Yeah. And comparing parameters, especially between a dense and a sparse model, is just totally incomparable. Take GPT-3 and then our largest Switch Transformer model: it's just a wildly different amount of compute in our case, and you can't infer that from the parameter budget. So I don't know what the compute ratio was between the two, but far different. Our 1.6-trillion-parameter model was actually only doing about as much compute as a billion-parameter model. So for each token, it was doing roughly a billion parameters' worth of FLOPs, whereas GPT-3 is doing 175 billion parameters' worth of FLOPs. So you can sort of tune this, and DeepMind has also tried to come up with a characterization of the scaling properties of sparse expert models, far more robust than we've been able to do, and tried to come up with a dense model equivalent. So that might be an interesting work to refer to in the future. But really, it's just, practically speaking: okay, I give you these accelerators for this amount of time, what's the best model? That's probably the fairest comparison. Have you seen this Pathways paper? Yes, definitely. It came out. How does it play into something like this? Is it going to make this easier? Is it going to make it superfluous? How does the ability to schedule things heterogeneously across devices play in, or does it enable new possibilities in the sparse expert world? Yeah, so, great question. So one thing to note is: okay, typically you have dense models, and in a dense model, every input will have the same amount of compute and parameters applied to it. In sparse models, you now have the same amount of compute but different parameters. And I think the natural next step, which makes a lot of sense to both Liam and me, is that now, for each input, you have a different amount of compute applied as well. And I think Pathways is really exciting, again, like you mentioned, for the heterogeneous compute, where we want to have inputs that might require different parameters and also different amounts of compute. Yeah, and I think a framework like this is going to really open up a lot of exciting research avenues along that direction, and it feels like a very natural interpretation for where our models are headed in the future. Yeah, like right now, our experts are all completely homogeneous: they're all the same size, they do the same operations. With Pathways, you could be like: oh, this is a recurrent expert, this is a huge expert, here's a group of small experts. You could just be a lot more flexible in design. And, alluding to that a little bit: when we were looking at the visualization, it was like, oh wow, a really consistent thing is experts that want to specialize in these fill-in-the-blank tokens, these sentinel tokens. Perhaps that might be an avenue or an area where, oh, let's dramatically increase the compute here. (Oh, hi, cat.) This is an area where a lot of extra compute could really be helpful. And there wasn't really an effective way to do this with the existing infrastructure before Pathways.
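A rough back-of-the-envelope of the compute-versus-parameters point above (the 1.6-trillion-parameter model doing only about a billion parameters' worth of FLOPs per token), under the common approximation that a matrix multiply costs about two FLOPs per weight per token. The expert count and dimensions here are made up for illustration; only the ratio matters.

```python
# Per-token cost of one feed-forward block, approximated as 2 FLOPs per weight.
d_model, d_ff = 4096, 16384
ffn_params = 2 * d_model * d_ff           # the two weight matrices of the FFN

n_experts = 128                           # illustrative, not any real config
moe_params = n_experts * ffn_params       # parameters scale with expert count...
moe_flops_per_token = 2 * ffn_params      # ...but top-1 compute per token does not

print(f"params in the MoE block: {moe_params / 1e9:.1f}B")
print(f"FLOPs per token:         {moe_flops_per_token / 1e9:.2f} GFLOPs (same as one dense FFN)")
```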
Is there a... yeah, sorry, I lost my train of thought. Explain to me a little bit how GLaM improved upon Switch Transformers. What's new? What's exciting there? Yeah, so one thing to note is that right now there's kind of a division into two different model classes in the language modeling space, I would say. One is these decoder-only models, where it's just a single stack of parameters and you're just predicting the next token autoregressively. This is what GPT-3 is, and this is also the kind of architecture that GLaM studies these models in. The other class is these encoder-decoder models, like T5; this was also GShard, and this is what we studied in Switch Transformer and in our most recent work as well. So I think GLaM did a few things. One, they really pushed the scale of these models. While our original Switch Transformer model had more parameters, GLaM had much more compute applied per token, and they studied these very extensively with decoder-only language models. And yeah, I think their main comparison point was GPT-3 as well, so they were studying a lot in the context of few-shot and one-shot evaluations, whereas a lot of our work actually centered around fine-tuning the models. But yeah, I think GLaM really pushed the scale, especially in these decoder-only language models, and showed that you can get as good quality as GPT-3 with huge computational training savings as well. They did a lot of really good work in that space. Is there a functional difference between the sparse expert routing, or anything around this, in GLaM? Or is it mainly what you said, with decoder-only and applying more compute, scaling it up? So actually there are a few differences that are more nuanced and technical. But at a high level: there's a routing function, and they actually route each token to two experts. And some of the differences in these models come from how much buffer you give each expert, because you need to have fixed batch sizes for all the experts ahead of time, and what can happen is that you can't guarantee there's going to be perfect balancing among all of the tokens getting sent to experts. So experts can overflow. And there's this key parameter that we call the capacity factor. That's probably the single most important parameter when designing a mixture-of-experts model, because it just has such a huge impact on the communication costs, compute, and everything like that, for how much buffer you should have. And yeah, I think a big difference between GLaM and our models is that they actually use a much larger capacity factor than we've used in our other works. But yeah, the routing algorithm is essentially the same.
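A minimal sketch of the capacity factor mechanics just described, with made-up numbers: each expert gets a fixed buffer, and tokens beyond that buffer overflow. (In real implementations, dropped tokens are typically passed through via the residual connection, which this sketch omits.)

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n_tokens, n_experts, capacity_factor = 64, 8, 1.25
capacity = math.ceil(n_tokens / n_experts * capacity_factor)  # buffer per expert

choice = rng.integers(0, n_experts, size=n_tokens)  # router decisions (stand-in)
kept = np.zeros(n_tokens, dtype=bool)
fill = np.zeros(n_experts, dtype=int)
for i, e in enumerate(choice):       # first-come, first-served into each buffer
    if fill[e] < capacity:
        fill[e] += 1
        kept[i] = True

print(f"capacity per expert: {capacity}")
print(f"dropped tokens: {(~kept).sum()} / {n_tokens}")
```

Raising capacity_factor at evaluation time drops fewer tokens at the cost of more compute, which is the eval-time lever that comes up again later in the interview.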
I see. I want to get a bit more into the routing algorithm in just a bit, but just to end this with the last paper that we've previously looked at: was I right in saying that this is much more, let's say, a general, almost review-like paper? Or how would you describe it? Yeah, I mean, I think we tried to make sure we were contextualizing a lot of the work, and we tried to make sure the related work was pretty inclusive, because I think the field has really adjusted and improved a lot in the last two years. But I would characterize this paper as fixing the two big flaws from our first one, from Switch Transformers. The first was that these models are unstable to train: we'd be training, and then all of a sudden the loss would just diverge, which thwarted a lot of our runs. Interestingly, it doesn't seem like the instability arises from having a lot of experts. We were consistently able to train models like our trillion-parameter model, for instance, with thousands of experts, never really hitting any unstable sections; the instability really came from high-FLOP, high-computation expert models. Even with few experts, those were highly unstable. And the second thing that this paper fixed was the poor fine-tuning quality. We would pre-train a model, and it would show really significant speedups over a dense counterpart, but then when it came time to fine-tuning, say on SuperGLUE or some other task of interest, it would be considerably worse. So I think this paper was really just trying to patch up those couple of issues we identified in our first work. Yeah, I'm always a bit intimidated when a paper has a table of contents by itself. That's actually something that Barret and I discussed: like, okay, should we break this up into multiple papers, or should this be one? Because, you know, this is a lot of work, and maybe in the future we should probably be producing more bite-sized pieces of work. When you talk about fine-tuning, can you go into a bit more detail? What exactly was the problem, and how did you go about fixing it? I'm not only interested in what the final model is like, but what does the process of debugging something like this, and then getting to an architecture or a solution that actually works, look like? Yeah, I mean, it's this very interesting problem: there's really just a fundamental trade-off whenever you're doing large-scale work. You want to try to understand and characterize things at a smaller scale, understand scaling properties, understand hyperparameter dependencies, but you also want to be consistently checking yourself at the largest scales. And this balance of: okay, you have this much compute, you have this much time, where do you allocate it? Do you do a lot of small experiments, or do you do a few big experiments? It's kind of tricky. But I'd say part of our findings was, first, the characterization: we're not doing better on fine-tuning; what's the cause? And it seemed like perhaps the cause is not one of optimization, it's one of generalization.
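As an aside on the stability flaw mentioned above: as I understand it, the fix this paper (ST-MoE, "Designing Effective Sparse Expert Models") proposes is an auxiliary router z-loss that penalizes large router logits, so that the router softmax stays numerically well-behaved. A minimal sketch follows; the coefficient value is an illustrative assumption on my part, not a quote from the paper.

```python
import numpy as np

def router_z_loss(logits, coef=1e-3):
    """Penalize large router logits: mean over tokens of log-sum-exp squared.

    Keeping the logits small keeps the subsequent softmax well-conditioned,
    which is the stability argument for this auxiliary loss. The coefficient
    here is illustrative.
    """
    m = logits.max(axis=-1, keepdims=True)  # stable log-sum-exp
    lse = m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1))
    return coef * np.mean(lse ** 2)

logits = np.random.default_rng(3).normal(size=(16, 8))  # (tokens, experts)
print(router_z_loss(logits))
```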
So if you scroll down into section four... you can just click on the link... we might be... yeah, exactly. So this is an example that supports a lot of the trends we're seeing. On the left is a small SuperGLUE task; this task has only 250 training sequences, so very small. And on the right is ReCoRD, which has over 100,000 training examples. We're showing sparse models versus dense models in the two plots. Blue represents the sparse model's training evaluation, and you can see it just very quickly gets to 100%; it outpaces the dense model, in both the small task and the large task, in getting to 100% train evaluation accuracy. But in the small task, we see the dense model in red actually outperforming the ultimate performance of the sparse model in orange, whereas for the bigger task, the sparse model does well. And so we kept seeing these overfitting issues. A lot of this then led us to investigate hyperparameters, and some of the hyperparameters can be adjusted in a way that makes the model less susceptible to overfitting. So you can use different dropout parameterizations, but also things like batch size and learning rate can inject more noise, which can also be a counter to some overfitting properties. So we tried that, and then, consistent with this, a lot of these were more exhaustive studies at, say, a billion-parameter scale; we then tried to continually fact-check this against our larger model and make sure that these conclusions were holding. So I think the debugging process was: okay, what more precisely is going wrong? And then, what are the levers that we can pull in order to try to improve it? A bit of art and science, really. So you observed: okay, we are probably overfitting, because you saw that the smaller the tasks got, the worse the sparse models would ultimately perform on the validation set of those tasks? Yeah, it's not always quite so easy as that, but directionally, I think we have support for the hypothesis. It's not like every single small task does poorly and every large task is great, but directionally, it seems to be a phenomenon we've observed. You also have a bunch of experiments down here where you investigate some of these, for example dropout probabilities; you also have an expert dropout probability, which is one of the questions I had, in that you have a particular architecture, right, with these experts. And when I think about overfitting, in regular transformers I have some handles: I can use adapter layers, I can fine-tune only the head, and so on. Did you ever investigate maybe only fine-tuning some of the experts, keeping the others constant? Is that ever a thing? Would that work? Or can we somehow make use of the fact that we have these different experts, and that they're actually different functions? Yeah, great question. And actually, if you scroll down, we did a very naive version of this: not where we freeze different experts, but where we freeze all of the experts, or only train all the experts and freeze all of the other parameters. I would say our findings were surprising in a bad way: nothing really worked super well. So here you can see, and this is also, we only studied this on SuperGLUE, right? So it's far from exhaustive. But yeah, one thing we tried was updating all of the non-mixture-of-experts parameters only, and that actually performed about the same, which was kind of interesting.
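The freezing experiments being described map onto a simple pattern in a framework like PyTorch. A hypothetical sketch, where identifying expert parameters by the substring "expert" in their names is purely a convention of this sketch, not of any particular codebase:

```python
import torch.nn as nn

def set_trainable_subset(model: nn.Module, train_experts_only: bool) -> None:
    """Fine-tune either only the expert weights or only everything else.

    Assumes expert parameters carry "expert" in their parameter names; that
    naming is an assumption of this sketch, not of any specific library.
    """
    for name, param in model.named_parameters():
        is_expert = "expert" in name
        param.requires_grad = (is_expert == train_experts_only)

# set_trainable_subset(model, train_experts_only=False) would reproduce the
# "update only the non-mixture-of-experts parameters" setting discussed above.
```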
It's like: hey, actually freezing the mixture-of-experts weights seemed to perform about as well as updating the whole model. Then, when we started to update only the mixture-of-experts weights and freeze all the other model parameters, the performance was actually really bad. And we still don't fully understand what's going on there; we have a few half-baked hypotheses. But yeah, then when we updated only the attention parameters, things were worse, and we found a slight boost updating only the feed forward network parameters that weren't the mixture-of-experts layers. But overall, nothing worked that well. Still, I think there might be some potentially really interesting things there, like: hey, maybe allowing only a certain subset of experts to be fine-tuned. We did spend a little bit of time actually studying pruning off experts during fine-tuning. So for a specific fine-tuning task, if your pre-trained model has 64 experts, can you just take a subset of, say, two, four, eight or 16 of them? Yeah, and we also didn't really get that good of a signal with this either. Also, some of your suggestions would actually be compatible with expert models too: you're free to just fine-tune the top logit layer, or you could add in adapter layers. Yeah, we didn't do anything really funky, like you were suggesting, like, oh, we're only going to update experts three, eight and 14 or something. Yeah, my intuition is that probably wouldn't work well. But I mean, I've been proven wrong many times. Yeah, we tried some other things that didn't make it to this table or these plots, and again, we didn't really see a significant boost. That said, if you are only updating a fraction of the parameters, you get some memory savings, so, you know, some nice things. Cool. I guess there's almost an infinite number of things one could try with these, like distilling experts: distilling multiple experts into a single expert, so that you have another expert that's again free to learn some new task once you notice that two experts are converging, something like that. I think it's really interesting, right? Or adding new experts on the fly. Yeah, a lot of possibilities. And that brings me a bit to this routing function that we talked about at the beginning, which seems to me to be a really crucial part of the system. Yet, as you said before, very often I've seen this implemented quite simplistically: maybe there's a linear transform and then a softmax, or something like this; or maybe there are some fixed keys for all of the experts and you route according to those. My intuition would be that this routing function could be a powerful handle on performance downstream, especially also making it different during inference; any number of things, like doing a Monte Carlo tree search at inference time to be as accurate as possible, kind of like AlphaGo or something. Do you have an idea of what the power of the routing function in these sparse models is, and how it works currently? What's the latest and greatest, and how good is it? Yeah, so this is a really good question, actually, and something we've actually spent a lot of time on.
So I would say, actually, in this project the thing I probably spent the most time on is trying out different routing algorithms and routing parameterizations. But we ended up going with the default thing, which I think also says something about the results. My intuition is that the model actually works surprisingly well with a lot of different ways of routing the tokens. We tried a lot of other routing algorithms, we tried making the routing network larger, we tried fancier ways of figuring out where to send each token, and we tried using additional information, like giving the router access to where the current representation was routed in previous layers, or using word embedding information too. But overall it seemed to be pretty insensitive. We did find one or two methods that improve things, but they can only be used in certain situations, so it was a bit trickier to just replace everything. The current routing algorithm we're using is basically what the original one was doing, I think in Shazeer et al. in 2017, when these kinds of things were really introduced into LSTM language models. And I think our newer work, and also GLaM, are using these kinds of routing algorithms too.

And one detail here: right now we're splitting out this little box and saying, oh, this is the router. That's not really an accurate characterization. Yes, okay, you're mapping some vector into a vector that has the same length as the number of experts. But if you just don't update that matrix, it still works fine, right? Because the weight matrices below you just adapt and pipe in whatever activation they need. If you stop the gradient through it, though, then it's catastrophically bad. But yeah, I've also been surprised by the relative insensitivity to the routing algorithm. We've seen maybe some small boosts here and there, but it hasn't been super significant. I think you'd probably get a bigger effect by fundamentally changing the architecture. Maybe there's some wildly different approach to sparse models that we're not considering; maybe we're in some sort of local minimum, and these small tweaks on precisely how we route maybe don't matter as much. DeepMind has also explored some other interesting routing algorithms, like the fixed routing algorithms you alluded to, where you're not even learning the routing. They've also tried RL-based routing algorithms, and I think those had similar scaling properties. So again, corroborating what Barrett is saying, a lot of these things, when we're doing this per-token routing, haven't really moved the needle substantially. That's been our experience.
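For reference, here is a minimal sketch of the top-1 routing scheme being described: a linear gate, a softmax, a hard argmax choice, and the expert output scaled by the gate probability so gradients still flow into the router. This is an illustration of the general mechanism, not the authors' implementation, and it omits the load-balancing losses and per-expert capacity limits used in practice:

```python
import torch
import torch.nn.functional as F

class Top1MoELayer(torch.nn.Module):
    """Minimal top-1 ("switch"-style) mixture-of-experts layer sketch."""

    def __init__(self, d_model: int, num_experts: int):
        super().__init__()
        self.gate = torch.nn.Linear(d_model, num_experts, bias=False)
        self.experts = torch.nn.ModuleList(
            [torch.nn.Linear(d_model, d_model) for _ in range(num_experts)]
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: [num_tokens, d_model]
        probs = F.softmax(self.gate(tokens), dim=-1)   # [num_tokens, num_experts]
        gate_prob, expert_idx = probs.max(dim=-1)      # hard top-1 choice per token
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i                     # tokens routed to expert i
            if mask.any():
                # scaling by gate_prob keeps a gradient path into the router
                out[mask] = gate_prob[mask].unsqueeze(-1) * expert(tokens[mask])
        return out
```

This also mirrors the point above: even if `self.gate` is never updated, the layer still functions because the layers below can adapt, but cutting the gradient through `gate_prob` entirely removes the router's learning signal. And nothing here needs a huge expert count; as noted later in the interview, even two experts can help.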
Yeah, and I think another important trend is that when we were experimenting with a lot of these different routing algorithms, we found that they did help models at maybe a one-billion-parameter dense-ish model size, but then as we scaled up the models, a lot of the time the differences would just wash away. So there's this interesting effect where, as scale increases, the model becomes a little less sensitive to some of these decisions.

Yeah, I can totally see that; essentially the rest of the network adjusts, especially if everything is trainable. What I would be excited about, maybe, is somehow doing something smarter at inference time, because at training time I can adjust everything, but at inference time maybe there's something I could do, especially with regard to domain shift or domain adaptation, where I could tweak the routing in some way. But I guess that's also up for future work.

Okay, so there is a little bit of this. It's not tweaking the routing algorithm, but tweaking the capacity factor hyperparameter I mentioned a while ago. This is basically the parameter that dictates how many tokens are being dropped. And one cool thing you can do is train with some capacity factor, but then at eval time, depending on whether you want to use more or less compute, drop more or fewer tokens, and correspondingly decrease or increase the performance. The model is actually pretty robust to the capacity factor changing between training and evaluation time. So that's a good lever for using more or less compute during evaluation.
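As a back-of-the-envelope illustration of that lever, using the common definition of expert capacity (capacity factor times tokens per batch divided by the number of experts); the numbers below are made up:

```python
def expert_capacity(tokens_per_batch: int, num_experts: int,
                    capacity_factor: float) -> int:
    """Tokens each expert can accept; tokens beyond this get dropped
    (typically passed through the residual connection instead)."""
    return int(capacity_factor * tokens_per_batch / num_experts)

# Made-up numbers: 1024 tokens routed among 8 experts.
print(expert_capacity(1024, 8, 1.0))   # 128 slots per expert
print(expert_capacity(1024, 8, 2.0))   # 256 slots: fewer drops, more compute
```

Raising the capacity factor only at evaluation time is exactly the train/eval mismatch described above: more tokens actually reach their chosen expert, at the cost of more compute and memory.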
I think we have a pretty good overview now. I want to get a little bit into the future prospects of this. We already talked about how, with Pathways, we could have heterogeneous things; could this be pushed to some sort of limit? Whenever I see a distributed system, I immediately think distributed, maybe not even in a data center, but across users, across networks. Are there applications for, what was it called, federated computing, some kind of federated learning, where I could contribute with my maybe confidential data, but still contribute to a whole compute process? And, I'm going to say the B word, is there an application for blockchain distribution, something like this? Do you think about these higher degrees of distribution?

Do you want me to go for it? Yeah, go for it. So, me personally, I haven't spent a ton of time thinking about this, but I do think it's very interesting. And there definitely seem to be a lot of really open problems around it, especially given the growing amount of fragmented compute and fragmented devices; there's so much compute out there, so how can you effectively utilize all of it, and utilize different data? I think it's super cool, and it's going to require a lot of really interesting research, because right now the way we're training these models is all in synchronized lockstep, typically: after each batch you compute the gradients, you send the gradients around, and so on. But I think maybe in the future of these models, when you're really allowing them to be distributed across very different types of compute, asynchronous training might become the new paradigm. So I think that's a really exciting space, but I haven't spent too much time thinking about it personally.

Yeah, and as it pertains to, say, blockchain, I think one problem with these expert models, as designed this way, is the all-to-all communications. Over a decentralized peer-to-peer network, where nodes are really far apart with inconsistent bandwidth, that could be really tough, if your experts were distributed among many different nodes in an unreliable network where nodes are coming and going. Right now all our systems run in a very constrained, fault-intolerant regime: highly interconnected chips that are highly reliable. Blockchain would bring a whole different set of problems you'd have to address, like unreliability, so it would require some additional research; adopting the model as-is would map pretty poorly onto that kind of computing infrastructure. But I think there's something there that could be done.

Is there work on, because I see these works mostly here in NLP, yet transformers are kind of taking over the rest of the world, is there work on how these sparse expert transformers behave in vision, in reinforcement learning, in speech?

Yeah, great question. So absolutely, there's actually been some really good work applying these models to ViT-based image classification. And there it's really nice, because you can leverage all of the things people have figured out for getting these working well in transformers, and nicely map them over. There's also been some good work using these in speech. William, anything to add on top of that?

I used to do reinforcement learning more full-time, and some colleagues reached out about doing sparse expert models for RL. I'm not familiar with published work there yet, but that might be another interesting avenue. So: language, vision, speech. I don't know if there's been any video work yet, but high-dimensional, high-throughput data like that would be a really good area. So I think video would also be really promising.

Yeah, I really like that too. It feels very natural in these high-dimensional spaces that you might want different parameters applied. With video, for one, I think you don't want to be applying the same amount of compute to every frame, and on top of that, you might actually want different parameters applying to different things going on in the video, because there's just going to be wildly different stuff happening. So yeah, I'm very excited about these models for video as well.

Do you imagine that these models will just, essentially, right now they're in competition with dense models.
They're competing; you're tracking Pareto frontiers, how much compute, how well they're doing, tackling very much the same tasks. Do you think this will go on? Do you think these models might overtake dense models if we figure out how to handle them correctly? Or is it more like there's a killer app for each of them?

Do you want to go ahead? Yeah, I honestly think that the future is going to be adaptive. I don't think there's any way that in ten years our models are treating every example coming in with the same parameters over and over again, and the same amount of compute. It may not be this precise sparsity regime, or the precise adaptive computation paradigms that have been put forth so far, but I view this kind of work on sparsity and adaptive computation as inevitable. I don't think it's going to be considered competition; it's just going to be integrated into a lot of leading models. That's my expectation. I'd be really shocked if in ten years we're training a 100-trillion-parameter dense model that just does the same thing over and over again, no matter what comes in. That just seems really strange to me.

What's the future for your particular research? Where do you see yourself going on a somewhat broader time scale, not just the next paper you haven't published yet? What excites you, and what are your next plans?

Yeah, great question. The thing that really excites me is what we were talking about earlier: each input getting a different amount of compute applied. Right now the models work well with each input getting different parameters, and coupling this with adaptive amounts of computation is where I want to be spending my time in the upcoming years.

Is that something like Ponder, there's PonderNet and so on, these recursive or recurrent architectures that decide themselves when to exit? Would that be one option? Or do you simply imagine that one expert is the buff expert and one is the lean expert, and the routing function essentially takes care of the different amounts of compute?

I don't know, this is a great question. I can see either approach potentially working, or maybe you actually want combinations, or potentially something completely new. The space still feels very exciting, and there are a lot of really interesting different directions being pushed, so the space still feels pretty young to me.
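As a rough illustration of the kind of per-token adaptive computation being discussed, here is a toy, ACT/PonderNet-flavored sketch. Everything in it (the block, the halting head, the threshold) is hypothetical and only meant to show the idea of tokens deciding for themselves how many steps of compute they get; it is not how any of the models discussed here are implemented:

```python
import torch

class HaltingBlock(torch.nn.Module):
    """Toy per-token adaptive computation: each token keeps passing
    through the same block until its cumulative halting probability
    crosses a threshold. Illustrative only."""

    def __init__(self, d_model: int, max_steps: int = 8, threshold: float = 0.99):
        super().__init__()
        self.block = torch.nn.Linear(d_model, d_model)
        self.halt = torch.nn.Linear(d_model, 1)
        self.max_steps = max_steps
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_tokens, d_model]
        cum_halt = torch.zeros(x.shape[0], device=x.device)
        for _ in range(self.max_steps):
            active = cum_halt < self.threshold      # tokens still computing
            if not active.any():
                break                               # every token has halted
            h = torch.tanh(self.block(x[active]))
            p = torch.sigmoid(self.halt(h)).squeeze(-1)
            x = x.clone()
            x[active] = h                           # update only active tokens
            cum_halt = cum_halt.clone()
            cum_halt[active] = cum_halt[active] + p
        return x
```

Easy tokens halt after one or two steps while hard ones get more; the heterogeneous-experts option mentioned above (a buff expert and a lean expert, with the router picking) would achieve a similar effect through routing instead of iteration.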
Okay, last question from my side: what's the connection of this to something like capsules? I don't know if you've ever thought about that connection. With capsules, I always think of these very abstract, high-level ideas flying around, and here you have something very practical, very much on the metal, yet there seem to be quite some commonalities. Is that something that ever came up for you?

In two years of doing sparsity research, this is literally the first time. I actually should go back to that work. I feel like capsules had a lot of really interesting conceptions, but, as you're kind of alluding to, it didn't map super well to the metal, so maybe that hindered its use, whereas this is just highly motivated from an engineering perspective. We've had questions like, oh, what is the neuroscientific motivation of your work? And really, it's engineering driven: okay, what will be fast on our existing hardware? But yeah, I will revisit capsules and see, okay, how could we actually map this a little better to the hardware? I think that could be an interesting source of ideas.

Is there any last thing you want to get out to viewers that they should take away from this work? Any way that a regular person can get into this type of research?

Yes, great question. So actually, one thing we tried to show in our switch transformer work is that these models work pretty well even if you only have two experts. So I definitely don't want people to think that you really need a supercomputer to run these models or to get benefits from experts; even having as few as two experts and running the models could lead to developing really interesting research ideas, improving the performance, and everything like that. So I definitely hope that more people can continue to experiment with and push forward these models.

Yeah, and then I would say another interesting trend I've been following, in parallel to sparsity in these really large models, is the idea of having the model offload: doing lookups, looking at documents, retrieval-type methods. I think this is a very interesting area, and I'd love to see head-to-head comparisons: do we want to try to encapsulate the knowledge in parameters, or do we want to keep it non-parametric and keep the information written in documents? What does the interplay look like? I think that's another really interesting avenue for comparing these things.

Awesome. Yeah, it sounds really cool. I'm excited to see what the future of these models brings. Barrett and William, thank you so much for being here. This was a lot of fun. I hope to see you again soon.

Yeah, cool. Thanks for having us. Yeah, thanks for having us.
[ { "start": 0, "end": 5.2, "text": " Hello, today I'm having an interview about the topic of sparse experts. Now, ironically," }, { "start": 5.2, "end": 11.120000000000001, "text": " the people are absolute experts in this type of models. These models, they are huge, they're" }, { "start": 11.120000000000001, "end": 15.68, "text": " usually language models, but they don't have to be they're usually transformers, but they don't" }, { "start": 15.68, "end": 21.12, "text": " have to be what they do have in common is this notion of sparse experts, these models go up to" }, { "start": 21.12, "end": 26.96, "text": " the trillions of parameters, and they achieve this via sparsity. Now I want to do a very," }, { "start": 26.96, "end": 31.84, "text": " very brief introduction of what sparse expert models are. And then we'll dive into the interview" }, { "start": 31.84, "end": 37.120000000000005, "text": " right away because I don't want to keep it from you. So let's look at a transformer model. Usually," }, { "start": 37.120000000000005, "end": 42.8, "text": " I have some sort of an input that is tokens, a sequence of tokens, which are represented here" }, { "start": 42.8, "end": 48.16, "text": " by circles. And what I'm going to do with these tokens is I'm going to alternatingly push them" }, { "start": 48.16, "end": 54.480000000000004, "text": " through different layers. Now one big layer type that is common in transformers is the attention" }, { "start": 54.48, "end": 59.76, "text": " layer, we're not going to talk about the attention layer today, all you have to know is that it takes" }, { "start": 59.76, "end": 66.64, "text": " in a sequence of tokens, and it outputs a sequence of tokens again, ideally the same amount as went" }, { "start": 66.64, "end": 72.4, "text": " in, which I failed to draw here, the other very common big type of layer in these transformers" }, { "start": 72.4, "end": 77.36, "text": " is what's called the feed forward layer. Now the feed forward layer is just a linear layer," }, { "start": 77.36, "end": 84.72, "text": " and every token goes through this linear layer by itself. So every token individually goes through" }, { "start": 84.72, "end": 90.16, "text": " the same transformation. And thus, as we do this with all tokens, again, we end up with a sequence" }, { "start": 90.16, "end": 96.08, "text": " of as many tokens as we input. Now a sparse expert model isn't very different than this," }, { "start": 96.08, "end": 101.28, "text": " the attention layers commonly aren't really touched. So that works just the same. However," }, { "start": 101.28, "end": 106.96000000000001, "text": " in the feed forward layer, we see a big difference. Notably, we don't only have one feed forward layer," }, { "start": 106.96, "end": 113.19999999999999, "text": " we have many. So here is feed forward one, here is feed forward two, here is feed forward three," }, { "start": 113.83999999999999, "end": 119.11999999999999, "text": " and here is feed forward four, each one representing a different individual linear" }, { "start": 119.11999999999999, "end": 125.28, "text": " transformation of a token. Now when we talk about sparse experts, these things here are called the" }, { "start": 125.28, "end": 131.28, "text": " experts, they're called the experts because they're thought to specialize in very specific tasks. And" }, { "start": 131.28, "end": 137.76, "text": " the goal in sparse expert models is to route the tokens to the corresponding correct experts. 
So" }, { "start": 137.76, "end": 142.32, "text": " every token goes through what's known as a routing function. We're going to talk about this routing" }, { "start": 142.32, "end": 147.52, "text": " function in the interview. But in essence, it is a very simple, usually something like a linear" }, { "start": 147.52, "end": 154.64, "text": " function or a simple transformation that decides to which of the experts any given token is routed." }, { "start": 154.64, "end": 160.08, "text": " So sometimes even in sparse expert models, a token is routed to multiple experts. But in the newest" }, { "start": 160.08, "end": 166.16000000000003, "text": " iterations, the tokens are simply routed to one single experts and none of the other. Usually this" }, { "start": 166.16000000000003, "end": 172.4, "text": " is done, as I said, by some sort of a linear transformation, followed by a softmax to decide" }, { "start": 172.4, "end": 178.32000000000002, "text": " where the token goes. So every token would be assigned to one expert. And that gives the" }, { "start": 178.32000000000002, "end": 184.08, "text": " possibility of scaling these models up dramatically. Not only do you save a lot of compute because the" }, { "start": 184.08, "end": 189.84, "text": " tokens only go to one place ergo, you only need to compute that one thing for that particular" }, { "start": 189.84, "end": 195.84, "text": " token. But also there's the opportunity to massively shard and parallelize these different experts" }, { "start": 195.84, "end": 201.04, "text": " across different machines, as you only need to route the token to one place. That means you" }, { "start": 201.04, "end": 207.04, "text": " dramatically reduce these big all to all reductions, they still happen, but not as much. So as I" }, { "start": 207.04, "end": 211.84, "text": " already said, the biggest models have trillions of parameters, you need to take a little bit of care" }, { "start": 211.84, "end": 217.2, "text": " of how you then aggregate the tokens once they come out of the experts. So essentially what you" }, { "start": 217.2, "end": 223.35999999999999, "text": " want to do is you want to carry over the likelihood from the routing function up here. But this is a" }, { "start": 223.35999999999999, "end": 228.64, "text": " minor detail, a minor details are important, but you know, so I know it doesn't look like much," }, { "start": 228.64, "end": 234.64, "text": " but these sparse expert models really have the potential to massively scale up our current" }, { "start": 234.64, "end": 240, "text": " efforts in AI. And I have no doubt that they're going to play a role in the near future, when" }, { "start": 240, "end": 245.6, "text": " we're looking at bigger and bigger models, because at some point, the purely dense models will reach" }, { "start": 245.6, "end": 251.68, "text": " sort of the limit of what's physically doable. And then it's a good opportunity that we have models" }, { "start": 251.68, "end": 257.2, "text": " that can go even larger. Alright, so without further ado, let's jump into the interview. I hope you're" }, { "start": 257.2, "end": 261.44, "text": " enjoying yourself. If you do have any sort of comments, please leave a comment, share the" }, { "start": 261.44, "end": 268.32, "text": " video around if you like it, and I'll see you around. Bye bye. 
Hello, everyone, my guests today" }, { "start": 268.32, "end": 275.12, "text": " are William Fedes and Barrett Zoff, who are engineers and researchers at Google, Google Brain," }, { "start": 275.12, "end": 283.2, "text": " and have been diving into large models, specifically sparse expert models, which are models that," }, { "start": 283.2, "end": 289.84000000000003, "text": " well, feature this notion of experts, and also have a notion of sparsity. And hopefully today," }, { "start": 289.84000000000003, "end": 296.88, "text": " we'll discover what this is all about. Specifically, we'll talk broadly about three papers in a long" }, { "start": 296.88, "end": 302.64, "text": " line of work. One is the switch transformers paper, which was really, I believe, one of the first" }, { "start": 302.64, "end": 308.56, "text": " papers that just had like massive amounts of parameter was that like trillion, probably" }, { "start": 308.56, "end": 314.24, "text": " trillion parameters. It was big. 1.6 trillion parameters. That's right. Yeah, yeah, it's insane." }, { "start": 314.8, "end": 324.15999999999997, "text": " And then there's there's glam, which demonstrated really nice scaling laws with these sparse experts." }, { "start": 324.15999999999997, "end": 330.47999999999996, "text": " And more recently, there is designing effective sparse expert models, which as far as I can see," }, { "start": 330.48, "end": 337.84000000000003, "text": " is also a bit of a of a maybe a summary recommendations, more of a what we learned" }, { "start": 337.84000000000003, "end": 344.88, "text": " type of thing. So William and Barrett, welcome to the channel. Thanks so much for being here." }, { "start": 345.6, "end": 353.20000000000005, "text": " Yeah, thanks for having me. So can you give us just a little bit of context what you mean when" }, { "start": 353.20000000000005, "end": 360.08000000000004, "text": " you say sparse expert models? Yeah, sure. So this is a great question, especially since the word" }, { "start": 360.08, "end": 364.56, "text": " sparsity crops up in like many different aspects of deep learning, whether it's, you know, like" }, { "start": 364.56, "end": 370.96, "text": " sparse attention or, you know, various other sparse paradigms. So yes, sparsity in our case" }, { "start": 370.96, "end": 376.88, "text": " means that each input can get different subsets of parameters. So that's kind of like the main" }, { "start": 377.44, "end": 381.59999999999997, "text": " sparsity that we're talking about here. And it's like, you know, it's a very natural concept," }, { "start": 381.59999999999997, "end": 386.79999999999995, "text": " right? Like normally, in like a dense transformer, for example, you have, you know, a word embedding," }, { "start": 386.8, "end": 393.12, "text": " and, you know, any word will have the same parameters and compute applied to it. And in" }, { "start": 393.12, "end": 396.88, "text": " sparse models, typically what happens is you have the same amount of compute, but you can have" }, { "start": 396.88, "end": 402.08000000000004, "text": " different subsets of the model parameters be like, you know, acting on the model inputs." }, { "start": 402.08000000000004, "end": 408.40000000000003, "text": " And what does that mean in in practice? So we're talking mainly about, let's say transformer" }, { "start": 408.40000000000003, "end": 414.08000000000004, "text": " models here. No, is that a good characterization of things? 
Or do you do you see sparse expert" }, { "start": 414.08, "end": 419.03999999999996, "text": " models in a more general sense? Yeah, I mean, these things actually almost sort of like cropped" }, { "start": 419.03999999999996, "end": 423.35999999999996, "text": " up originally as almost like in the context of like ensemble type methods, where you have" }, { "start": 423.35999999999996, "end": 428.8, "text": " a bunch of like almost like fully independent models. And then you're sort of using these as" }, { "start": 428.8, "end": 435.2, "text": " like, you know, each model as an expert. But the common paradigm as of like 2022, is sort of" }, { "start": 435.2, "end": 441.59999999999997, "text": " experts as a layer. So this is like really popularized by Noam Shazir's work in 2017," }, { "start": 441.6, "end": 446.24, "text": " outrageously large models. And in that context, they were actually inserting it in between LSTM" }, { "start": 446.24, "end": 451.28000000000003, "text": " layers, which is like the prevailing like recurrent architecture at the time. Most of the things just" }, { "start": 451.28000000000003, "end": 455.68, "text": " because like the world has sort of shifted towards transformers in it seems like almost all modalities" }, { "start": 455.68, "end": 463.76000000000005, "text": " now, we're often thinking about experts as a layer inside transformers. Typically, we're sort of" }, { "start": 463.76000000000005, "end": 468.40000000000003, "text": " doing this at the feed forward. So these blocks that just sort of independently apply on the" }, { "start": 468.4, "end": 474.15999999999997, "text": " different like tokens. But we've also kind of considered it in self attention layers, it's" }, { "start": 474.15999999999997, "end": 479.84, "text": " just sort of like a very general concept. But yeah, typically in transformers. So you have this" }, { "start": 479.84, "end": 487.67999999999995, "text": " notion of an expert, which you say is is sort of a specialized function or something like this. And" }, { "start": 487.67999999999995, "end": 495.35999999999996, "text": " then there's often this thing called a router. How how does information find its way through these" }, { "start": 495.36, "end": 501.52000000000004, "text": " experts? What are the general principles in that? And why would I even consider doing something like" }, { "start": 501.52000000000004, "end": 509.28000000000003, "text": " this? Yeah, so great question. So yeah, so you have this figure up here. And so one thing to" }, { "start": 509.28000000000003, "end": 514.4, "text": " notice that basically, if you only have a single expert, it essentially reduces to just a normal" }, { "start": 514.4, "end": 520.8000000000001, "text": " dense transformer. So the interpretation is pretty natural. And in almost all of the ways people are" }, { "start": 520.8, "end": 527.04, "text": " doing sparse expert model nowadays, there's some notion of a learned mechanism that for, you know," }, { "start": 527.04, "end": 532.64, "text": " embedding at the current layer, you figure out what expert you should send this representation to." }, { "start": 533.68, "end": 539.04, "text": " And this can be ranging from very simple to just like a simple softmax function over the" }, { "start": 539.04, "end": 544.64, "text": " total number of experts to very complicated linear programming type solutions that have a more" }, { "start": 544.64, "end": 551.76, "text": " like globally optimal solution. 
So yeah, so this is this is kind of like the paradigm. And I think" }, { "start": 551.76, "end": 559.68, "text": " it's a pretty natural one. So even if you want to only, you know, yeah, apply one set of weights per" }, { "start": 559.68, "end": 564.48, "text": " representation, now you have the option of just instead of always applying the same weight matrix." }, { "start": 564.48, "end": 570.16, "text": " Now you can, you know, maybe have a selection of in this figure for different weight matrices. And" }, { "start": 570.16, "end": 574.8, "text": " the way that, you know, we've done this in our work, and I think is the most common is just as a" }, { "start": 574.8, "end": 579.4399999999999, "text": " single feed forward network. So you take your input representation, and then you just, you know," }, { "start": 579.4399999999999, "end": 583.28, "text": " apply it with something that's going to be like, you know, the model dimension by the number of" }, { "start": 583.28, "end": 587.68, "text": " experts, and then you apply like a softmax function to get like a probability over all of the different" }, { "start": 587.68, "end": 592.0799999999999, "text": " experts. And our switch transformer work, the routing was extremely simple, where it's just like" }, { "start": 592.0799999999999, "end": 598.3199999999999, "text": " you just send it to the highest, like the highest expert with the highest probability. And then, you" }, { "start": 598.32, "end": 603.6, "text": " know, you just simply route it to that expert, then the output of that computation gets scaled" }, { "start": 603.6, "end": 610.24, "text": " by the router probability. So if it was like, oh, with 0.9, send it to expert two, then when you" }, { "start": 610.24, "end": 616.32, "text": " have the output of that computation, you scale it all by 0.9. Do I remember correctly that there was" }, { "start": 616.32, "end": 623.84, "text": " some paper that it was this an older paper, and this might be getting very technical for a second," }, { "start": 623.84, "end": 627.9200000000001, "text": " but was there an older paper that said something like you always needed to send it to at least" }, { "start": 627.92, "end": 634.0799999999999, "text": " two of these experts, otherwise, it's kind of unstable. Is that an older paper, or a newer" }, { "start": 634.0799999999999, "end": 641.68, "text": " than yours? It actually wasn't instability that they're clashing against. It was more this idea" }, { "start": 641.68, "end": 647.5999999999999, "text": " that we're doing this like weird discretize operation. So instead of using like reinforcement" }, { "start": 647.5999999999999, "end": 652.24, "text": " learning to sort of like update on the experts, we're kind of doing this like kind of hacky back" }, { "start": 652.24, "end": 660.48, "text": " propagation through these like softmax operations, which have been masked. And the idea that top two" }, { "start": 660.48, "end": 665.28, "text": " or greater was necessary because they were thinking, well, I'm creating a probability" }, { "start": 665.28, "end": 671.04, "text": " distribution for this token for this word over the available experts. If I don't have at least two," }, { "start": 671.04, "end": 678.32, "text": " I can't tell whether expert i or j was sort of better for this one. 
So it's like, in order to" }, { "start": 678.32, "end": 684.08, "text": " have the hypothesis was sort of like a useful gradient signal for the router, it has to know," }, { "start": 684.08, "end": 690.1600000000001, "text": " well, should I have sent it to i or j? And then we just sort of didn't follow convention and did" }, { "start": 690.1600000000001, "end": 695.9200000000001, "text": " one. And it also seems to work just fine. I think in part because you're sort of doing this sort of" }, { "start": 695.9200000000001, "end": 702.4000000000001, "text": " normalization. So you can still get an up waiting or down waiting if you select an expert. So it's" }, { "start": 702.4, "end": 708.3199999999999, "text": " like, oh, if that expert selection worked out well for you, or worked out poorly for you, you can then" }, { "start": 708.3199999999999, "end": 713.6, "text": " sort of adjust the embedding for that expert. And then you at the next pass, if you saw that same" }, { "start": 713.6, "end": 717.28, "text": " token, you're still doing this like softmax distribution. So you're kind of like up waiting" }, { "start": 717.28, "end": 722.4, "text": " or down waiting it. So I think that's sort of like the gist of the mechanism. And this, this, I think" }, { "start": 722.4, "end": 730.8, "text": " this idea was at least from 2017, it may have predated it. Could you maybe now that we're talking" }, { "start": 730.8, "end": 737.1999999999999, "text": " about history, trace the evolution of this line of research a little bit. You already mentioned this" }, { "start": 737.1999999999999, "end": 745.04, "text": " existed as sort of ensemble methods inside of it. I'm talking now specifically about sparse experts" }, { "start": 745.04, "end": 751.5999999999999, "text": " within transformers, which are the things that allow us to really scale up to these giant models." }, { "start": 751.5999999999999, "end": 757.3599999999999, "text": " What's the what's sort of the line of research? What are the original things? I'm going to guess" }, { "start": 757.36, "end": 762.88, "text": " this this work is among them. And what were the improvements that happened since then in this field?" }, { "start": 762.88, "end": 769.84, "text": " Bear, do you want me to go or you go for it? Yeah, so I mean, like, going back 30 years, like you have" }, { "start": 769.84, "end": 775.92, "text": " like Jordans and Jacob, this obviously predates transformer because transformer was a 2017" }, { "start": 775.92, "end": 783.2, "text": " development. So I mean, the concept is very, very old. I think it just kind of like resurged in" }, { "start": 783.2, "end": 789.44, "text": " popularity. I'd say the first, yeah, the very first sort of use of mixture of experts in" }, { "start": 789.44, "end": 795.9200000000001, "text": " transformer was left in it all in 2020. So this is G shard. And it just showed really remarkable" }, { "start": 795.9200000000001, "end": 801.12, "text": " improvements in translation. What they were doing was, you know, analogous to switch transforming" }, { "start": 801.12, "end": 806.48, "text": " these other works is they just sort of substitute these feed forward blocks with experts. And in" }, { "start": 806.48, "end": 811.2800000000001, "text": " that case, sort of also similar with switch transformer, they had many, many experts, I think" }, { "start": 811.28, "end": 815.28, "text": " in that case, it was thousands. 
And they were showing really significant improvements over" }, { "start": 815.28, "end": 821.92, "text": " state of the art translation models. I think as the field has sort of evolved, as we've sort of" }, { "start": 821.92, "end": 826.9599999999999, "text": " like learned a bit more about it, there seemed to be this like kind of general trend of like," }, { "start": 826.9599999999999, "end": 832.3199999999999, "text": " okay, cool, we can pre train these models or like in the case of translation, there's no big" }, { "start": 832.3199999999999, "end": 837.04, "text": " distribution shift. When you're training to translate, you're also doing inference to translate." }, { "start": 837.04, "end": 842.56, "text": " But in switch transformer, we found, okay, we'll pre train to, you know, improve the perplexity," }, { "start": 842.56, "end": 847.04, "text": " improve the prediction of next token. And we were getting significant improvements. But then when we" }, { "start": 847.04, "end": 853.4399999999999, "text": " took it under a data distribution shift to fine tuning, it was performing quite badly with many" }, { "start": 853.4399999999999, "end": 859.28, "text": " experts. So I think there's been this trend to try to balance the computation and the parameters a" }, { "start": 859.28, "end": 864.16, "text": " bit more. So I think some of the prevailing models have actually in transformers have actually gone" }, { "start": 864.16, "end": 872.7199999999999, "text": " have actually gone towards fewer experts. So 1632, 64 experts, not 1000s of experts. So that's kind" }, { "start": 872.7199999999999, "end": 878.3199999999999, "text": " of like the lineage of mixture of experts and then like mixture of experts in the context of transformers." }, { "start": 879.76, "end": 889.04, "text": " And what is so in that context, if one expert is the classic transformer model, and that seems to" }, { "start": 889.04, "end": 896.7199999999999, "text": " not work as well as many experts, but too many don't work, what is the abstraction that I can" }, { "start": 896.7199999999999, "end": 902.3199999999999, "text": " think of for an expert? Like, what does an expert learn? What is an expert responsible for?" }, { "start": 902.88, "end": 909.1999999999999, "text": " Approximately? Do you have any idea what happens? Like what, what, how does it make sense that the" }, { "start": 909.1999999999999, "end": 915.52, "text": " optimal number is, let's say, a few dozen and not super many, but also not one?" }, { "start": 915.52, "end": 921.76, "text": " Yeah, so great question. So yeah, there's like a few parts to this. So one, like, I think it's" }, { "start": 921.76, "end": 927.6, "text": " really just like an empirical observation right now that, you know, 16 versus 64 versus, you know," }, { "start": 927.6, "end": 933.76, "text": " 2048 versus 10,000. You know, like, it seems like the expert numbers in the middle, like," }, { "start": 933.76, "end": 938.72, "text": " it's not from the standpoint of like on a per step basis, more experts typically don't make things" }, { "start": 938.72, "end": 943.84, "text": " worse. Usually it's like better or about the same, but things start to level off. 
But it's" }, { "start": 943.84, "end": 949.12, "text": " very inconvenient to have a lot of experts because it's just this like a huge memory footprint," }, { "start": 949.12, "end": 953.44, "text": " the way that the models are distributed, it's not really amenable towards typically, unless you have" }, { "start": 953.44, "end": 958.48, "text": " like tons of, you know, parallel cores going. So like actually the observation where you kind of" }, { "start": 958.48, "end": 963.9200000000001, "text": " want to actually have like a middle amount of experts is a lot of the times actually driven by" }, { "start": 963.9200000000001, "end": 972.1600000000001, "text": " just the like practicality of then like training, serving these models. Yeah, in terms of like," }, { "start": 972.16, "end": 977.12, "text": " what these models are actually learning, like intuitively. So we actually studied this in our" }, { "start": 977.12, "end": 981.6, "text": " most recent work, kind of looking at, you know, each expert, what are they specializing in, what" }, { "start": 981.6, "end": 987.52, "text": " are they learning? And interestingly, they kind of specialize in some shallow concepts, which you" }, { "start": 987.52, "end": 991.52, "text": " would think maybe there would be like only really deep things going on. And it would be kind of hard" }, { "start": 991.52, "end": 996.8, "text": " to inspect them. But you know, we noticed like, oh, there's like a punctuation expert, or an expert" }, { "start": 996.8, "end": 1001.76, "text": " that will, you know, talk about, you know, like proper nouns, which we thought was pretty funny," }, { "start": 1001.76, "end": 1007.04, "text": " and maybe not super intuitive for, you know, how. Yeah, actually, if you want, you can switch over to" }, { "start": 1007.04, "end": 1011.76, "text": " the recent paper, and we actually have a figure which sort of shows some of these things. So you" }, { "start": 1011.76, "end": 1020, "text": " can kind of like follow along and see how shallow these things actually are. Yeah. Yeah. So this" }, { "start": 1020, "end": 1026.88, "text": " this would be this would be different. So you you found an expert or in this case, multiple experts" }, { "start": 1026.88, "end": 1036.24, "text": " that that focused on the these sort of things. So there's conjunctions, punctuation, verb," }, { "start": 1036.24, "end": 1044.08, "text": " visual description, which is which is interesting, because that's kind of I want to say like a higher" }, { "start": 1044.08, "end": 1050.24, "text": " level thing than just the punctuation, right? Counting numbers. Yeah, how do you make sense of" }, { "start": 1050.24, "end": 1053.1999999999998, "text": " this stuff? Like, what's going on?" }, { "start": 1056.8, "end": 1062.8799999999999, "text": " I Yeah, I mean, I think we were sort of expecting maybe like a higher level of description, but like," }, { "start": 1062.8799999999999, "end": 1070.48, "text": " or like, sort of like representation. Um, it's, I think we've just started started to sort of like" }, { "start": 1070.48, "end": 1076.24, "text": " crack and like, look into these models to actually see what's going on. That obviously, like one big" }, { "start": 1076.24, "end": 1080.96, "text": " specialization that you're seeing here are these Sentinel tokens. To make sense of that, we were" }, { "start": 1080.96, "end": 1085.28, "text": " sort of doing pre training where it's sort of fill in the blank test. 
And a blank is sort of represented" }, { "start": 1085.28, "end": 1091.2, "text": " by these like little Sentinels. So like extra ID 10 represents like, you know, the blank 10. And we" }, { "start": 1091.2, "end": 1099.68, "text": " often really frequently see experts are specializing on these blanks. So, like, we're" }, { "start": 1099.68, "end": 1105.92, "text": " doing pre training. So that's sort of an interesting thing. And then I think that also might segue into" }, { "start": 1105.92, "end": 1110.64, "text": " maybe you want to actually, given this sort of like, you know, observed specialization, maybe you" }, { "start": 1110.64, "end": 1116.72, "text": " actually want to make some experts higher capacity or give them more compute to sort of do things" }, { "start": 1116.72, "end": 1123.3600000000001, "text": " that might be harder. But honestly, I mean, this is still very early. It'd be interesting for sort" }, { "start": 1123.3600000000001, "end": 1128, "text": " of like, you know, some of the interpretability lens that like entropic has on some of the recent" }, { "start": 1128, "end": 1134, "text": " sparse expert models. Some questions we've kind of received are, what is the interplay of expert" }, { "start": 1134, "end": 1138.96, "text": " specialization with sort of like self attention specialization? And that's honestly completely" }, { "start": 1138.96, "end": 1144.56, "text": " open. I think we were just sort of putting this table forth to the community to be like, well," }, { "start": 1144.56, "end": 1151.28, "text": " we, we started, it's not exactly what we would have expected. But definitely kind of like a call to" }, { "start": 1151.28, "end": 1158.6399999999999, "text": " dig further and hopefully like, you know, further improve things with the also I believe that this" }, { "start": 1158.6399999999999, "end": 1166.24, "text": " was Oh, yeah, here already in switch transformers, this ability to distribute these things across" }, { "start": 1166.24, "end": 1172.8799999999999, "text": " devices that comes naturally with, with having sparse experts. So sparsity meaning in this case," }, { "start": 1172.88, "end": 1181.3600000000001, "text": " I only send stuff to one or a few experts. And there there came the ability to shard this across" }, { "start": 1181.3600000000001, "end": 1191.5200000000002, "text": " devices, how, like, how practical is this really to like, what, when would I do something like this?" }, { "start": 1191.5200000000002, "end": 1198.48, "text": " At what point would it become practical and useful and the best thing to do to communicate" }, { "start": 1198.48, "end": 1205.2, "text": " across devices for my experts? Yeah, so really great question. And I actually think this is" }, { "start": 1205.2, "end": 1211.2, "text": " the reason why the method works so well, actually. 
So the standard way I would say people are doing" }, { "start": 1211.2, "end": 1215.28, "text": " distributed training of these models is they have, you know, either fully data parallelism, which" }, { "start": 1215.28, "end": 1220, "text": " means like, you know, each machine has the same set of weights, but different slices of data, or a" }, { "start": 1220, "end": 1224.32, "text": " blend of data and model parallelism, where it's like, you know, kind of a mix where certain like," }, { "start": 1224.32, "end": 1228.8799999999999, "text": " you know, cores have sometimes different weights or sometimes different data, and then you communicate" }, { "start": 1228.8799999999999, "end": 1234.6399999999999, "text": " stuff to make it, you know, emulate like a full model. But I think experts, one really easy" }, { "start": 1234.6399999999999, "end": 1239.76, "text": " interpretation of this is like, let's say you have a model, and, you know, you're using data parallelism," }, { "start": 1239.76, "end": 1245.9199999999998, "text": " and you have four different machines, a really natural way to overlay experts on this would be" }, { "start": 1245.9199999999998, "end": 1251.36, "text": " you just have one expert per machine. And then, yeah, so this is like a really nice interpretation," }, { "start": 1251.36, "end": 1257.28, "text": " because then when you have all of your, you know, local data per core, you'd have the router weights" }, { "start": 1257.28, "end": 1262.1599999999999, "text": " replicated, but then you just figure out what expert they need to go to. And then that's when" }, { "start": 1262.1599999999999, "end": 1266.8, "text": " you kind of, you know, shuffle all the tokens around to the machines, do all the computation," }, { "start": 1266.8, "end": 1274.08, "text": " and then shuffle them back. And this makes it really nice, because then per machine, you actually" }, { "start": 1274.08, "end": 1278.32, "text": " never have any more parameters than you would have had just with the Dense Transformer. But now you" }, { "start": 1278.32, "end": 1284.08, "text": " have experts. So it's actually like a really nice way of kind of, you know, thinking about how to" }, { "start": 1284.08, "end": 1287.9199999999998, "text": " design the models would be like, oh, you know, you have this many cores for data parallelism," }, { "start": 1287.9199999999998, "end": 1292.8799999999999, "text": " just have that many experts. And that's actually a paradigm that Bim and I use a lot when designing" }, { "start": 1292.8799999999999, "end": 1298.96, "text": " these models as well. And yeah, I mean, I think as soon as you have this sort of like, distributed" }, { "start": 1298.96, "end": 1304.32, "text": " model, where you're already going across accelerators and devices, you do already have" }, { "start": 1304.32, "end": 1309.12, "text": " these communication patterns, right? Like you need to get activations to a certain place, you need to" }, { "start": 1309.12, "end": 1313.4399999999998, "text": " like get gradients to a certain place. So you already have these sort of like all reduced" }, { "start": 1314.24, "end": 1321.04, "text": " communication collectives. Expert model is going to introduce all to all communication patterns. 
So" }, { "start": 1321.6, "end": 1326.1599999999999, "text": " that can be like a more expensive thing, especially based on like your topology and" }, { "start": 1326.1599999999999, "end": 1332.24, "text": " the bandwidth between all of your networks, or between all of your devices. But yeah, so I mean," }, { "start": 1332.24, "end": 1338.72, "text": " this is something you sort of have to like, kind of empirically test like, okay, how much does this" }, { "start": 1339.76, "end": 1345.76, "text": " architecture kind of buy you in terms of performance on your task, versus the additional" }, { "start": 1345.76, "end": 1351.44, "text": " costs of all to all communication. But you will be communicating across devices for these big models," }, { "start": 1351.44, "end": 1358.96, "text": " regardless to train them. Yeah. So this is a good, I guess, a good segue, because you can achieve" }, { "start": 1358.96, "end": 1366, "text": " these giant models, like trillions of parameters using these is the sparse expert models, because" }, { "start": 1366, "end": 1371.3600000000001, "text": " naturally, I can parallelize these experts, it doesn't cost me really much more compute," }, { "start": 1371.3600000000001, "end": 1378.16, "text": " because any data point, or any token only goes to one single expert. There is always a bit of the," }, { "start": 1379.2, "end": 1385.76, "text": " let's say, the question of how comparable this is to the dense models. It was it was often I don't" }, { "start": 1385.76, "end": 1391.28, "text": " know if this is a latent feeling that I get from the community, but people would rather have the" }, { "start": 1391.28, "end": 1398.96, "text": " 175 billion GPT three model compared to the switch transformer, even if it is trillions of parameters." }, { "start": 1400.8799999999999, "end": 1407.28, "text": " Is there some sort of division factor where I could compare to a dense model? Or do you think" }, { "start": 1407.28, "end": 1411.2, "text": " that it's an entirely different nature of function that's computed here?" }, { "start": 1411.2, "end": 1416.56, "text": " Yeah, so this is a really great question. And I think there's a lot of different ways you" }, { "start": 1416.56, "end": 1420, "text": " have to kind of look at this to figure out if a sparse model is right for you." }, { "start": 1420.56, "end": 1424.0800000000002, "text": " So I think actually, in a lot of applications, if it's like, hey, I want to train the model" }, { "start": 1424.64, "end": 1429.04, "text": " with the smallest memory footprint, so I can just be using it on the smallest amount of" }, { "start": 1429.76, "end": 1435.52, "text": " devices as possible, a dense model will always be better. Like I think on a per parameter basis," }, { "start": 1435.52, "end": 1438.64, "text": " dense models are going to be performing better. So for those types of applications, I'm like," }, { "start": 1438.64, "end": 1442.16, "text": " yeah, I don't think it makes sense to be using sparse models. Maybe you want to just train the" }, { "start": 1442.16, "end": 1447.6000000000001, "text": " best thing that you can fit onto your local 2 GPU machine or like a 10 GPU machine, and do really" }, { "start": 1447.6000000000001, "end": 1453.92, "text": " kind of low throughput, feeding in data to this, like not high or anything like that." 
}, { "start": 1453.92, "end": 1458.3200000000002, "text": " I think sparse models are good, where you're going to be training a model and you're going" }, { "start": 1458.3200000000002, "end": 1462.72, "text": " to be hosting it on a lot of machines and you're going to be having a lot of high throughput going" }, { "start": 1462.72, "end": 1466.24, "text": " through it. So a lot of queries, a lot of stuff going through it, because then things can be" }, { "start": 1466.24, "end": 1470.72, "text": " batched together and then the models actually become pretty efficient. So I think that's kind" }, { "start": 1470.72, "end": 1476.48, "text": " of one lens to look at when you would want to use a sparse versus dense model. And I think the kind" }, { "start": 1476.48, "end": 1484.48, "text": " of second lens is that, for a given amount of GPU or TPU hours on a compute cluster, what model" }, { "start": 1484.48, "end": 1488.32, "text": " will get you the best performance? And I think that's the lens that we actually would spend a" }, { "start": 1488.32, "end": 1493.1200000000001, "text": " lot of time looking at for pre-training models in this paper, like, oh, you have 512 TPU chips," }, { "start": 1493.12, "end": 1497.9199999999998, "text": " and I give you X budget training hours, is a dense model or sparse model going to give you" }, { "start": 1497.9199999999998, "end": 1502.3999999999999, "text": " the best pre-training performance? And I think our assessment was that, yeah, I think actually" }, { "start": 1502.3999999999999, "end": 1506.8799999999999, "text": " the Pareto optimal model typically is a sparse model in that setup." }, { "start": 1508.7199999999998, "end": 1513.52, "text": " Yeah, and comparing parameters, especially between a dense and a sparse model, is just" }, { "start": 1514.32, "end": 1519.9199999999998, "text": " totally incomparable. So using GPT-3 and then our largest switch transformer model," }, { "start": 1519.92, "end": 1525.8400000000001, "text": " it's just wildly different amount of computes in our case. You can't infer that from the parameter" }, { "start": 1525.8400000000001, "end": 1534.0800000000002, "text": " budget. So I don't know what the compute ratio was between the two, but far different. Our 1.6" }, { "start": 1534.0800000000002, "end": 1538.96, "text": " trillion parameter model was actually only doing about as much compute as a billion parameter" }, { "start": 1538.96, "end": 1545.44, "text": " model. So for each token, it was doing roughly a billion parameters worth of flops. And whereas" }, { "start": 1545.44, "end": 1551.68, "text": " GPT-3 is doing 175 billion parameters worth of flops. So you can sort of tune this, and DeepMind" }, { "start": 1551.68, "end": 1557.68, "text": " has sort of also tried to come up with a characterization of scaling properties, far more" }, { "start": 1557.68, "end": 1564.48, "text": " robust than we've been able to do, of sparse expert models, and try to come up with a dense" }, { "start": 1564.48, "end": 1571.3600000000001, "text": " model equivalent. So that might be an interesting work to refer to in the future. But really," }, { "start": 1571.36, "end": 1575.52, "text": " it's just like, practically speaking, it's like, OK, I give you these accelerators for this amount" }, { "start": 1575.52, "end": 1581.6, "text": " of time. What's the best model? So that's probably the fairest comparison." }, { "start": 1584.3999999999999, "end": 1587.6, "text": " Have you seen this Pathways paper?" 
}, { "start": 1589.28, "end": 1590, "text": " Yes, definitely." }, { "start": 1590, "end": 1597.1999999999998, "text": " They came out. How does it play into something like this? Is it going to make this easier? Is" }, { "start": 1597.2, "end": 1605.68, "text": " it going to make it superfluous? How does the ability to schedule things heterogeneously across" }, { "start": 1605.68, "end": 1611.8400000000001, "text": " devices, or does it enable new possibilities in the sparse expert world?" }, { "start": 1612.32, "end": 1618.56, "text": " Yeah, so great question. So one thing to note is, OK, so typically you have dense models. And a" }, { "start": 1618.56, "end": 1622.16, "text": " dense model, like every input, will have the same amount of compute and parameters applied to it." }, { "start": 1622.64, "end": 1626.16, "text": " And sparse models, now you have the same amount of compute, but different parameters." }, { "start": 1626.16, "end": 1631.28, "text": " And I think the kind of natural next step that I think makes a lot of sense to both Liam and I is" }, { "start": 1631.28, "end": 1635.8400000000001, "text": " that now for each input, you have a different amount of compute applied as well. And I think" }, { "start": 1635.8400000000001, "end": 1640.5600000000002, "text": " Pathways is really exciting, again, like you kind of mentioned for the heterogeneous compute," }, { "start": 1640.5600000000002, "end": 1644.16, "text": " where we want to have inputs that might require different parameters and also different amounts" }, { "start": 1644.16, "end": 1648.5600000000002, "text": " of compute. Yeah, and I think a framework like this is going to really open up a lot of really" }, { "start": 1648.5600000000002, "end": 1653.1200000000001, "text": " exciting research avenues along that direction. And I think it feels like a very natural" }, { "start": 1653.12, "end": 1656.32, "text": " interpretation for kind of where our models are headed for in the future." }, { "start": 1658.3999999999999, "end": 1662.8, "text": " Yeah, like right now, it's like our experts are all sort of completely homogenous. They're all" }, { "start": 1662.8, "end": 1667.9199999999998, "text": " the same size. They do the same operations. Pathways, you could be like, oh, this is like" }, { "start": 1667.9199999999998, "end": 1673.28, "text": " a recurrent expert. This is a huge expert. There's a group of small experts. You could just be" }, { "start": 1673.84, "end": 1680.3999999999999, "text": " a lot more flexible in design. And sort of like alluding to that a little bit with when we were" }, { "start": 1680.4, "end": 1684.8000000000002, "text": " sort of looking at the visualization, it's like, oh, wow, a really consistent thing. Our experts" }, { "start": 1684.8000000000002, "end": 1690.5600000000002, "text": " that want to specialize in these like fill in the blank tokens, these Sentinel tokens, perhaps that" }, { "start": 1690.5600000000002, "end": 1694.88, "text": " might be an avenue or an area where it's like, oh, let's dramatically increase the compute here." }, { "start": 1695.92, "end": 1705.44, "text": " This is, oh, hi, Kat. This is like an area where we like a lot of extra compute could really be" }, { "start": 1705.44, "end": 1710.56, "text": " helpful. And there wasn't really an effective way to do this with the existing infrastructures" }, { "start": 1710.56, "end": 1723.52, "text": " before pathways. Is there a... Yeah, sorry, that's lost the train of thought. 
Explain to me a little" }, { "start": 1723.52, "end": 1730.56, "text": " bit how GLAM improved upon switch transformers. Like what's new? What's exciting there?" }, { "start": 1730.56, "end": 1737.76, "text": " Yeah, so I think GLAM... So one also thing to note is like there's kind of a right now division of" }, { "start": 1737.76, "end": 1742.8, "text": " two different types of model classes in language modeling space, I would say. So one is like these" }, { "start": 1742.8, "end": 1747.9199999999998, "text": " decoder only models where it's just a single set of parameters and it's like you're just predicting" }, { "start": 1747.9199999999998, "end": 1754.3999999999999, "text": " the next token like autoregressively. And this is like what GPT-3 is. And this is also the kind" }, { "start": 1754.3999999999999, "end": 1759.44, "text": " of architecture that GLAM studies these models in. So the other classes, these encoder decoder" }, { "start": 1759.44, "end": 1764.4, "text": " models like T5, this was also G-shard. This is kind of what also we studied in switch transformer" }, { "start": 1764.4, "end": 1770.56, "text": " in our most recent work as well. So I think GLAM did a few things. So one, they really, I think," }, { "start": 1770.56, "end": 1775.52, "text": " pushed the scale of these models. So like while our original model of switch transformer had more" }, { "start": 1775.52, "end": 1780.3200000000002, "text": " parameters, like GLAM had like much more compute applied per token. And they studied these very" }, { "start": 1780.3200000000002, "end": 1785.8400000000001, "text": " extensively with decoder only language models. And yeah, I think their main comparison point was" }, { "start": 1785.84, "end": 1791.6799999999998, "text": " to GPT-3 as well. So they were studying a lot in the context of few-shot and like one-shot evaluations," }, { "start": 1791.6799999999998, "end": 1794.8, "text": " whereas I think a lot of our work actually centered around like fine tuning the models." }, { "start": 1795.76, "end": 1800.24, "text": " But yeah, I think GLAM really like pushed the scale of these, especially in these decoder only" }, { "start": 1800.24, "end": 1805.52, "text": " language models and showed that like, yeah, you know, you can get as good of quality as GPT-3 with" }, { "start": 1805.52, "end": 1810.32, "text": " like, you know, huge computational training savings as well. And they did a really, a lot of really" }, { "start": 1810.32, "end": 1818.3999999999999, "text": " good work in that space. Is there a functional difference between the sparse expert routing or" }, { "start": 1818.3999999999999, "end": 1827.2, "text": " anything around this in GLAM? Or is it mainly what you said with decoder only and applying" }, { "start": 1827.2, "end": 1834.72, "text": " more compute scaling it up? So actually, there is a few differences that are more nuanced and" }, { "start": 1834.72, "end": 1838.72, "text": " technical. But yeah, at a high level, you know, there's a routing function, and they actually" }, { "start": 1838.72, "end": 1844.08, "text": " route each token to two experts. And actually, there's like some of the differences in these" }, { "start": 1844.08, "end": 1848.4, "text": " models comes from like how much buffer you give each token, each expert, because, you know, you" }, { "start": 1848.4, "end": 1853.76, "text": " need to have like fixed batch sizes for all the experts ahead of time. 
And so what can happen is" }, { "start": 1853.76, "end": 1858.48, "text": " like, you can't guarantee that like, there's going to be perfect balancing among all of the tokens" }, { "start": 1858.48, "end": 1862.8, "text": " getting sent to experts. So like experts can overflow. And there's this key parameter that" }, { "start": 1862.8, "end": 1867.52, "text": " we call the capacity factor. That's probably the single most important parameter when" }, { "start": 1867.52, "end": 1871.36, "text": " designing a mixture of expert models, because it just has such a huge impact on the communication" }, { "start": 1871.36, "end": 1876.16, "text": " costs, compute and everything like that for how much buffer you should have. And yeah, I think" }, { "start": 1876.16, "end": 1881.04, "text": " a big difference from GLAM versus our models is they actually use like a much larger capacity factor" }, { "start": 1881.04, "end": 1886.24, "text": " than we've used in our other works. But yeah, the routing algorithm is essentially the same." }, { "start": 1888.8, "end": 1893.76, "text": " That is, yeah, I want to get a bit more into the routing algorithm in just a bit," }, { "start": 1893.76, "end": 1900.96, "text": " but just to end this with the last paper that we've previously looked at, was I right in" }, { "start": 1900.96, "end": 1908.96, "text": " saying that this is much more of, let's say, a general, almost like a review paper? Or how would" }, { "start": 1908.96, "end": 1916.24, "text": " you describe it? Yeah, I mean, I think we tried to make sure like we're contextualizing a lot of" }, { "start": 1916.24, "end": 1920.64, "text": " the work. So we tried to make sure the related work was like, pretty inclusive, because I mean," }, { "start": 1920.64, "end": 1927.1200000000001, "text": " I think the field's really adjusted and improved a lot in the last two years. But I would sort of" }, { "start": 1927.1200000000001, "end": 1932.5600000000002, "text": " characterize this paper as fixing the two big flaws from our first one from switch transformers." }, { "start": 1933.1200000000001, "end": 1937.0400000000002, "text": " The first was these models are unstable to train. So we'd be training and then all of a sudden the" }, { "start": 1937.0400000000002, "end": 1943.0400000000002, "text": " loss would just diverge, which thwarted a lot of our efforts. Interestingly, it doesn't seem like the" }, { "start": 1943.0400000000002, "end": 1948.0800000000002, "text": " instability arises from a lot of experts. We were consistently able to train models like our" }, { "start": 1948.08, "end": 1952.24, "text": " trillion parameter model, for instance, with thousands of experts, never really hitting any" }, { "start": 1952.8, "end": 1958.6399999999999, "text": " unstable sections. It really kind of came from like high FLOPs or high computation expert models," }, { "start": 1958.6399999999999, "end": 1962.96, "text": " even with like few experts, those were highly unstable. And then the second thing that this" }, { "start": 1962.96, "end": 1968.56, "text": " paper sort of fixed was the sort of like poor fine tuning quality. So we would sort of pre train a" }, { "start": 1968.56, "end": 1973.6, "text": " model, it would show like really significant speed ups over a dense counterpart.
But then when" }, { "start": 1973.6, "end": 1978.6399999999999, "text": " it came time to fine tuning, say I'm like super glue or some like other task of interest, it would" }, { "start": 1978.6399999999999, "end": 1985.04, "text": " just be considerably worse. So I think this paper was just really trying to sort of like kind of" }, { "start": 1985.04, "end": 1990.3999999999999, "text": " patch up a couple of those issues, we identified them in our first work. Yeah, I'm always a bit" }, { "start": 1990.3999999999999, "end": 1999.36, "text": " intimidated when a paper has a table of index by itself. Can you can you go to something that" }, { "start": 1999.36, "end": 2004.32, "text": " Barry and I discussed, it's like, okay, should we break this up into multiple papers? Or should this" }, { "start": 2004.32, "end": 2009.12, "text": " be one because, you know, this is like, you know, a lot of work. And, you know, this is like something" }, { "start": 2009.12, "end": 2013.9199999999998, "text": " that we discussed, like maybe in the future, we should probably be producing like more bite size" }, { "start": 2013.9199999999998, "end": 2020.6399999999999, "text": " pieces of work. When you when you talk about fine tuning, can you go a bit into more detail? Like," }, { "start": 2020.6399999999999, "end": 2026.08, "text": " what was exactly the problem? How did you how did you also go about fixing it? So I'm not only" }, { "start": 2026.08, "end": 2033.04, "text": " interested in, you know, how did how what's the final model like, but what does the process of" }, { "start": 2033.04, "end": 2038.8, "text": " debugging something like this and then getting to an architecture or a solution that actually works" }, { "start": 2038.8, "end": 2049.04, "text": " look like? Yeah, I mean, it's sort of this like very interesting problem of like, you want to," }, { "start": 2050.08, "end": 2053.68, "text": " there's really just like fundamental trade off. And whenever you're sort of doing a sort of like" }, { "start": 2053.68, "end": 2058.56, "text": " large scale work, where you want to try to understand and characterize things at a smaller" }, { "start": 2058.56, "end": 2064.64, "text": " scale, understand scaling properties, understand, understand like hyper parameter dependencies." }, { "start": 2065.68, "end": 2071.3599999999997, "text": " But then you also want to be consistently checking yourself at the largest scales. And this sort of" }, { "start": 2071.3599999999997, "end": 2075.7599999999998, "text": " balance of like, okay, you have this much compute, you have this much time, where do you allocate it?" }, { "start": 2075.7599999999998, "end": 2080.7999999999997, "text": " Do you do a lot of small experiments? Or do you do a few big experiments? It's kind of tricky." }, { "start": 2080.8, "end": 2088, "text": " But I'd say part of our like findings were the first one was like, okay, well, characterization" }, { "start": 2088, "end": 2094.48, "text": " is we're not doing better on fine tuning. What's the cause? And it seemed like perhaps our cause is" }, { "start": 2094.48, "end": 2100.32, "text": " not that of optimization, it's that of generalization. So if you scroll down into section four," }, { "start": 2100.32, "end": 2108.32, "text": " you can just click on the link. We might be Yeah, exactly. Yeah, so this is an example that, you" }, { "start": 2108.32, "end": 2114.88, "text": " know, kind of supports a lot of the trends we're seeing. On the left is a small superglue task. 
So" }, { "start": 2114.88, "end": 2121.6000000000004, "text": " this task has only 250 training sequences, so very small. And on the right is record. So this has" }, { "start": 2121.6000000000004, "end": 2130.4, "text": " over 100,000 training examples. We're showing sparse models versus dense models in the two things," }, { "start": 2130.4, "end": 2135.76, "text": " in the two plots. Blue represents the sparse training though, and you can see it just very" }, { "start": 2135.76, "end": 2141.92, "text": " quickly gets to 100%. And it outpaces in both cases, the small task and the large task outpaces" }, { "start": 2141.92, "end": 2148, "text": " the dense model getting to 100% train evaluation accuracy. But when in the small task, we'll see" }, { "start": 2148, "end": 2153.1200000000003, "text": " the dense model in red actually outperforming the ultimate performance for the sparse model in orange," }, { "start": 2153.1200000000003, "end": 2158.5600000000004, "text": " whereas for the bigger tasks, the sparse model does well. And so we kind of kept seeing this like," }, { "start": 2158.5600000000004, "end": 2164.48, "text": " you know, overfitting issues. And a lot of this was then led us to sort of like investigate" }, { "start": 2164.48, "end": 2169.04, "text": " hyperparameters. And, you know, some of the hyperparameters can sort of be adjusted in a way" }, { "start": 2169.04, "end": 2174.8, "text": " to make the model like less susceptible to overfitting. So you can use like different" }, { "start": 2174.8, "end": 2181.36, "text": " dropout parameterizations, but also things like batch size and learning rate can inject more noise," }, { "start": 2181.36, "end": 2189.04, "text": " which can also be sort of like a counter to some like overfitting properties. So we tried and then" }, { "start": 2189.44, "end": 2193.2, "text": " sort of consistent with this, like a lot of these things were sort of like, you know, more exhaustive" }, { "start": 2193.2, "end": 2199.3599999999997, "text": " studies at say, a billion parameter scale, we then tried to continue to sort of like fact check this" }, { "start": 2199.3599999999997, "end": 2205.2, "text": " against our larger model, and make sure that these conclusions were holding. So I think it was just" }, { "start": 2205.2, "end": 2209.9199999999996, "text": " sort of like, you know, the debugging process was, okay, what more precisely is going wrong? And then" }, { "start": 2209.9199999999996, "end": 2215.7599999999998, "text": " like, what are our levers that we can sort of like pull in order to try to like improve it? But you" }, { "start": 2215.76, "end": 2224.88, "text": " know, a bit of art and science really. You so you is it you observed, okay, we are probably overfitting," }, { "start": 2224.88, "end": 2231.44, "text": " because you saw the smaller the tasks got sort of the worst the sparse models would ultimately" }, { "start": 2231.44, "end": 2237.2000000000003, "text": " perform on the validation set of those tasks. Did you? And you have it's not like quite like," }, { "start": 2237.2000000000003, "end": 2242.0800000000004, "text": " yeah, it's not always like quite so easy as that. But it's sort of like, you know," }, { "start": 2242.08, "end": 2246.48, "text": " directionally, like, I think we have support of the hypothesis. But it's not like every single" }, { "start": 2246.48, "end": 2250.88, "text": " small task does poorly. And every large task is great. 
Yeah, but I'd say directionally," }, { "start": 2250.88, "end": 2257.84, "text": " it seems to be a phenomenon we've observed. You have also a bunch of experiments down here where" }, { "start": 2257.84, "end": 2263.2799999999997, "text": " you investigate some of these, for example, dropout probabilities, you also have" }, { "start": 2263.2799999999997, "end": 2270, "text": " expert dropout probability, which is one of the questions I had, in that you have a particular" }, { "start": 2270, "end": 2274.48, "text": " architecture, right, with these experts. And when I think about overfitting," }, { "start": 2274.48, "end": 2280.4, "text": " where in regular transformers, I have kind of handles, I can use adapter layers, I can only" }, { "start": 2280.4, "end": 2287.84, "text": " fine tune the head and so on. Did you ever investigate maybe only fine tuning some of the" }, { "start": 2287.84, "end": 2293.92, "text": " experts? Like, keeping the others constant? Is that ever a thing? Like, would that work?" }, { "start": 2293.92, "end": 2300.08, "text": " Or, you know, can we make use somehow of the fact that we have these different experts," }, { "start": 2300.08, "end": 2304.2400000000002, "text": " and they're actually different functions? Yeah, great question. And I think actually," }, { "start": 2304.2400000000002, "end": 2308.7200000000003, "text": " if you scroll down, we did a very naive kind of version of this, not where we freeze different" }, { "start": 2308.7200000000003, "end": 2313.12, "text": " experts, but we, you know, freeze all of the experts, or maybe only train all the experts" }, { "start": 2313.12, "end": 2319.76, "text": " and freeze all of the other parameters. I would say our findings here were surprising in" }, { "start": 2319.76, "end": 2326.96, "text": " a bad way. So nothing, nothing really worked super well. So here you can see that and this is also," }, { "start": 2326.96, "end": 2332.6400000000003, "text": " we only studied this on SuperGLUE, right? So it's far from exhaustive. But yeah, so one thing we" }, { "start": 2332.6400000000003, "end": 2336.96, "text": " tried was updating, first, all of the non mixture of expert parameters only. And that actually" }, { "start": 2336.96, "end": 2340.48, "text": " performed about the same, which was kind of interesting. It's like, hey, like actually" }, { "start": 2340.48, "end": 2344.48, "text": " freezing the mixture of expert weights like seemed to perform about as well as just like updating the" }, { "start": 2344.48, "end": 2350, "text": " whole model. Then when we started to, you know, update only the mixture of expert weights and" }, { "start": 2350, "end": 2354, "text": " freeze all the other model parameters, like the performance was actually really bad. And there," }, { "start": 2354, "end": 2357.44, "text": " we still don't fully understand what's going on here. We have like a few kind of like" }, { "start": 2357.44, "end": 2362.96, "text": " half baked hypotheses. But yeah, then when we update only the attention parameters, things are" }, { "start": 2362.96, "end": 2368.32, "text": " worse. And we found a slight boost updating only the feed forward network parameters that weren't" }, { "start": 2368.32, "end": 2374.2400000000002, "text": " the mixture of expert layers. But yeah, overall, nothing worked that well.
But yeah, I think there" }, { "start": 2374.24, "end": 2378.08, "text": " might be some potential really interesting things of like, hey, maybe allowing only, you know," }, { "start": 2378.08, "end": 2383.7599999999998, "text": " a certain subset of experts to be fine tuned. We did spend a little bit of time actually studying" }, { "start": 2383.7599999999998, "end": 2389.2, "text": " like pruning off experts during fine tuning. So like for a specific fine tuning task, if your" }, { "start": 2389.2, "end": 2394.4799999999996, "text": " pre trained model has like 64 experts, can you just take like a subset of like two, four, eight or 16" }, { "start": 2394.4799999999996, "end": 2398.7999999999997, "text": " of them? Yeah, and we also didn't really get that good of signal with this as well." }, { "start": 2398.8, "end": 2403.36, "text": " Also to some of your recommendations, they actually would be compatible with expert models too. So" }, { "start": 2403.76, "end": 2409.76, "text": " you're free to just like fine tune like the top, like top logit layer, or you could add in adapter" }, { "start": 2409.76, "end": 2413.28, "text": " layers. Yeah, we didn't do anything like really funky, like you were suggesting like, oh, we're" }, { "start": 2413.28, "end": 2420, "text": " only going to expert like update experts like three, eight and 14 or something. Yeah, my intuition" }, { "start": 2420, "end": 2426.48, "text": " is that probably wouldn't work well. But I mean, I've been proven wrong many times. Yeah," }, { "start": 2426.48, "end": 2433.28, "text": " we tried some like other things that didn't make it to this table or these plots. And yeah, again," }, { "start": 2433.28, "end": 2437.92, "text": " we didn't really see like a significant boost. That said, if you are only updating like a fraction" }, { "start": 2437.92, "end": 2442.8, "text": " of the parameters, you get some memory savings. So you know, some nice things." }, { "start": 2444.96, "end": 2451.36, "text": " Cool. I guess one, you know, there's, there's almost an infinite number of things one could" }, { "start": 2451.36, "end": 2458.1600000000003, "text": " try with these things like distilling experts like distilling multiple experts into a single expert." }, { "start": 2458.1600000000003, "end": 2464.2400000000002, "text": " So you have another expert that's again free to do some some new tasks. Once you know that" }, { "start": 2464.2400000000002, "end": 2470.32, "text": " two experts are converging something like, I think there's, it's really interesting, right? A lot of" }, { "start": 2470.32, "end": 2476.48, "text": " we're adding a new experts on the fly. Yeah, a lot of possibilities. And that brings me a bit to" }, { "start": 2476.48, "end": 2482.8, "text": " this this routing function that we talked about before and at the beginning, which seems to me" }, { "start": 2482.8, "end": 2491.84, "text": " is a really crucial part of the system. Yet, as you said before, very often, I've just seen this" }, { "start": 2491.84, "end": 2497.76, "text": " being implemented quite simplistically, maybe there's a linear transform and then a softmax" }, { "start": 2497.76, "end": 2504.56, "text": " or something like this, maybe not even maybe there is some some sort of a, you know, a" }, { "start": 2504.56, "end": 2514.72, "text": " some fixed keys for all of the experts and then you route according to that. 
Do you... like, my intuition" }, { "start": 2514.72, "end": 2523.04, "text": " would be that this could be a powerful handle on, you know, on performance downstream," }, { "start": 2523.04, "end": 2529.84, "text": " this routing function, especially also making this different during inference, you know," }, { "start": 2529.84, "end": 2536, "text": " any number of things, doing a Monte Carlo tree search at inference time to be as accurate as" }, { "start": 2536, "end": 2542.7200000000003, "text": " possible, kind of like, like AlphaGo or something. Do you have an idea on what the power of the" }, { "start": 2542.7200000000003, "end": 2547.92, "text": " routing function in these sparse models is? And how does it work currently? Like, what's the most," }, { "start": 2547.92, "end": 2555.44, "text": " the latest and greatest? And how good is it? Yeah, so this is a really good question, actually," }, { "start": 2555.44, "end": 2559.28, "text": " and something we've actually spent a lot of time on. So I would say actually, in this project," }, { "start": 2559.28, "end": 2562.6400000000003, "text": " probably the thing I maybe spent the most time with is trying out different routing algorithms" }, { "start": 2562.6400000000003, "end": 2567.36, "text": " and routing parameterizations. But we ended up kind of going with the default thing, which I also" }, { "start": 2567.36, "end": 2574.96, "text": " think says something a little bit about the results of it. Yeah, so I would say my intuition is that" }, { "start": 2575.92, "end": 2579.92, "text": " the model actually works surprisingly well with a lot of different ways you can route" }, { "start": 2580.48, "end": 2585.2000000000003, "text": " the tokens. So like, you know, we tried a lot of other routing algorithms, we tried making like" }, { "start": 2585.2, "end": 2590, "text": " the routing network larger, we tried like, you know, some fancier ways of actually figuring out" }, { "start": 2590, "end": 2594.48, "text": " where you should send the token to, we tried, you know, using additional information of like," }, { "start": 2594.48, "end": 2599.2, "text": " oh, when you're routing this current representation, you have access to whether or not like it was" }, { "start": 2599.2, "end": 2603.52, "text": " routed, or like where it was routed before in previous layers, using like word embedding" }, { "start": 2603.52, "end": 2610.56, "text": " information too. But yeah, I think overall, it seemed to be, you know, kind of insensitive," }, { "start": 2610.56, "end": 2615.84, "text": " we actually did find like one or two methods that improve things, but they can only be used" }, { "start": 2615.84, "end": 2622.48, "text": " in certain situations. So it was a bit trickier to just like replace everything. The current routing" }, { "start": 2622.48, "end": 2627.68, "text": " algorithm we're using is basically what the original one was doing, I think in Shazeer et al." }, { "start": 2627.68, "end": 2632.7999999999997, "text": " in 2017, when these kind of things were like really introduced into the LSTM language models." }, { "start": 2633.36, "end": 2638.32, "text": " And I think, you know, our newer work, and then also GLAM as well, we're using these kind of" }, { "start": 2638.32, "end": 2645.2000000000003, "text": " routing algorithms too.
Yeah, and also like one kind of like detail here, it's like, so right now," }, { "start": 2645.2000000000003, "end": 2651.6000000000004, "text": " we're sort of splitting out this little box, and we're like, oh, this is the router. It's not" }, { "start": 2651.6000000000004, "end": 2656.32, "text": " really an accurate characterization. It's like, yes, okay, you're mapping some vector into a" }, { "start": 2656.32, "end": 2662.48, "text": " vector that has like the same like length as number of experts. But if you just don't update" }, { "start": 2662.48, "end": 2668.32, "text": " that matrix, it still works fine, right? Because now just, like, the weight matrices" }, { "start": 2668.32, "end": 2672.8, "text": " below you are just sort of adapting and just piping whatever activation they need, right?" }, { "start": 2672.8, "end": 2676.96, "text": " If you freeze the grad, if you stop a gradient through that, then it's like catastrophically bad." }, { "start": 2678.08, "end": 2682.96, "text": " But yeah, I mean, I've also sort of been surprised by the relative insensitivity" }, { "start": 2682.96, "end": 2687.92, "text": " to the routing algorithm. Like we've seen like, you know, maybe some small boosts here and there," }, { "start": 2687.92, "end": 2693.6, "text": " but it hasn't been super significant. I think you probably have a better sort of like a bigger" }, { "start": 2693.6, "end": 2699.76, "text": " significance by actually just sort of fundamentally changing like the architecture. Like maybe there's" }, { "start": 2699.76, "end": 2705.6, "text": " like some wildly different approach for sort of sparse models that we're not considering, maybe" }, { "start": 2705.6, "end": 2710.64, "text": " we're in some sort of like local minimum. And like these small tweaks on like, oh, okay, precisely," }, { "start": 2710.64, "end": 2715.6, "text": " how are we doing this? Maybe doesn't matter as much. And DeepMind's also explored some other kind" }, { "start": 2715.6, "end": 2720.16, "text": " of interesting routing algorithms, like you sort of alluded to fixed routing algorithms, where it's" }, { "start": 2720.16, "end": 2725.04, "text": " just like, you're not even learning. They've also tried RL based routing algorithms. And I think it" }, { "start": 2725.04, "end": 2728.96, "text": " had like actually similar scaling properties. So again, kind of corroborating what Barrett is" }, { "start": 2728.96, "end": 2733.04, "text": " saying, it's just like, a lot of these things when we're kind of doing this, like per token routing," }, { "start": 2733.8399999999997, "end": 2739.2, "text": " haven't really moved the needle substantially. That's been our luck." }, { "start": 2739.2, "end": 2743.36, "text": " Yeah, and I think another important trend actually, is that when we were experimenting with a lot" }, { "start": 2743.36, "end": 2747.2000000000003, "text": " of these different routing algorithms, we actually found that they did help models. And maybe when" }, { "start": 2747.2000000000003, "end": 2752.4, "text": " you had like a 1 billion parameter dense modelish size, but then like, as we scaled up the models," }, { "start": 2752.4, "end": 2755.6, "text": " like actually a lot of the time, sometimes the differences would just like wash away," }, { "start": 2755.6, "end": 2758.88, "text": " as well.
So it's kind of this interesting effect of when more scale is increased," }, { "start": 2758.88, "end": 2761.6800000000003, "text": " like it maybe becomes a little bit less sensitive to some of these decisions." }, { "start": 2763.92, "end": 2770.56, "text": " Yeah, I can totally see that essentially the rest of the network" }, { "start": 2770.56, "end": 2776.88, "text": " adjusts, especially if everything is trainable. What I would be excited about maybe is to" }, { "start": 2776.88, "end": 2781.44, "text": " somehow, at inference time, do something smarter, because at training time, I can adjust to" }, { "start": 2781.44, "end": 2786.24, "text": " everything, right, but at inference time, maybe there's something that I could do, especially" }, { "start": 2786.24, "end": 2793.04, "text": " with regards to, you know, domain shift, domain adaptation, anything like this, where I could," }, { "start": 2793.04, "end": 2798.88, "text": " I could tweak routing in some way, but I guess that's also up for future work." }, { "start": 2798.88, "end": 2803.84, "text": " Okay. So there's a little bit of this not tweaking the routing algorithm, but tweaking the capacity" }, { "start": 2803.84, "end": 2807.84, "text": " factor hyper parameter I mentioned a while ago. So this is basically the parameter that's" }, { "start": 2807.84, "end": 2811.84, "text": " going to dictate how many tokens are being dropped. And one cool thing you can do is you can have some" }, { "start": 2812.4, "end": 2816.6400000000003, "text": " capacity factor during training. But then at eval time, depending on if you want to use more or less" }, { "start": 2816.6400000000003, "end": 2820.4, "text": " compute, you can be either dropping more or less tokens, and either kind of, you know, increase or" }, { "start": 2820.4, "end": 2824.32, "text": " decrease the performance, which is pretty cool. And the model is actually pretty robust to having" }, { "start": 2824.32, "end": 2829.1200000000003, "text": " that changed between training and evaluation time. So that's actually kind of like a good lever for like," }, { "start": 2829.1200000000003, "end": 2833.04, "text": " you know, depending on if you want to use more or less compute during evaluation." }, { "start": 2833.92, "end": 2841.92, "text": " I think we have a pretty good overview. Now I want to get a little bit into just the future," }, { "start": 2841.92, "end": 2847.2000000000003, "text": " the future prospects, maybe also of this we already talked about, and with Pathways, we could have" }, { "start": 2847.2, "end": 2853.6, "text": " heterogeneous things, could this be pushed to some sort of limit?
Whenever I see a distributed" }, { "start": 2853.6, "end": 2859.2799999999997, "text": " system, you know, I immediately think distributed maybe not even in a data center, but across" }, { "start": 2860.08, "end": 2868.8799999999997, "text": " users, across networks. Are there applications to maybe, what was it called, federated, some kind of" }, { "start": 2868.8799999999997, "end": 2874, "text": " federated computing, some kind of federated learning, where I could somehow contribute" }, { "start": 2874, "end": 2881.92, "text": " with my maybe confidential data, but I could still contribute to a whole compute process? Is there," }, { "start": 2882.48, "end": 2888.24, "text": " I'm gonna say the B word, is there an application for blockchain distribution," }, { "start": 2888.24, "end": 2894.4, "text": " something like this? Like, have you, do you think about sort of the higher degrees of distribution" }, { "start": 2894.4, "end": 2899.44, "text": " here? Do you want me to go for it? Yeah, go for it. I mean, yeah, so I mean, yes, me personally," }, { "start": 2899.44, "end": 2904.7200000000003, "text": " I haven't spent a ton of time thinking about this. But I do think it's like very interesting. And" }, { "start": 2904.7200000000003, "end": 2909.52, "text": " yeah, there definitely seems to be a lot of really, you know, open problems around this," }, { "start": 2909.52, "end": 2913.52, "text": " especially given the growing amount of like fragmented compute, fragmented devices, like" }, { "start": 2913.52, "end": 2917.68, "text": " there's so much compute on here, like, you know, how can you effectively utilize all of this," }, { "start": 2917.68, "end": 2921.76, "text": " utilize different, you know, data and stuff, I think it's like super cool and I think it's" }, { "start": 2921.76, "end": 2926.7200000000003, "text": " going to require a lot of really interesting research, because right now the way we're currently" }, { "start": 2926.72, "end": 2930.8799999999997, "text": " training these models is it's all like synchronized lockstep typically, right, you're doing like," }, { "start": 2930.8799999999997, "end": 2935.9199999999996, "text": " oh, like after each batch, you do these gradients, you send the gradients around and everything. But" }, { "start": 2935.9199999999996, "end": 2939.8399999999997, "text": " like, I think actually, maybe the future of these models, when you're really, you know," }, { "start": 2939.8399999999997, "end": 2942.72, "text": " allowing them to be distributed across very different types of computing, everything might" }, { "start": 2942.72, "end": 2948, "text": " actually now introduce like asynchronous training as kind of like the new paradigm. So I think" }, { "start": 2948, "end": 2951.7599999999998, "text": " that's like a really exciting space. But yeah, I haven't spent too much time thinking about it" }, { "start": 2951.76, "end": 2958.0800000000004, "text": " personally. Yeah, and I think like, as it pertains to say like blockchain or something, like, I think" }, { "start": 2958.0800000000004, "end": 2963.2000000000003, "text": " one problem with these expert models as designed in this way, are these all to all communications."
}, { "start": 2963.92, "end": 2968.7200000000003, "text": " So over this sort of like, you know, decentralized, like peer to peer network, where it's like, you" }, { "start": 2968.7200000000003, "end": 2973.0400000000004, "text": " know, nodes are like, you know, really far apart, inconsistent sort of bandwidth and stuff." }, { "start": 2974.5600000000004, "end": 2980.7200000000003, "text": " That could be really tough if sort of your experts were sort of distributed among like many different" }, { "start": 2980.72, "end": 2986.24, "text": " nodes in this sort of like unreliable network where nodes are kind of coming and going. Like" }, { "start": 2986.24, "end": 2993.12, "text": " right now, all our systems are in this sort of like very constrained fault intolerant area where" }, { "start": 2993.12, "end": 2999.52, "text": " it's like, oh, all highly internet work chips that are highly reliable. And then so like blockchain" }, { "start": 2999.52, "end": 3002.9599999999996, "text": " would just have like a whole different set of like kind of problems that you'd have to sort of" }, { "start": 3002.9599999999996, "end": 3008.8799999999997, "text": " address like unreliability and you know, some of these other areas, not to say I think you just" }, { "start": 3008.88, "end": 3013.84, "text": " like require some like additional kind of research, like just sort of adopting the model as is, I think" }, { "start": 3013.84, "end": 3019.84, "text": " would pretty poorly map on that kind of computing infrastructure. But I think there's something there" }, { "start": 3019.84, "end": 3028.32, "text": " that could be done. Is there work on because I see these works mostly here in NLP yet transformers" }, { "start": 3028.32, "end": 3034.56, "text": " kind of taking over the rest of the world. Is there work on how these experts, sparse expert" }, { "start": 3034.56, "end": 3041.84, "text": " transformers behave in vision in reinforcement learning, speech, whatever? Yeah, yeah, great" }, { "start": 3041.84, "end": 3045.6, "text": " question. So absolutely, actually, there's been some really good work applying these models to" }, { "start": 3045.6, "end": 3049.92, "text": " like VIP based, like image classification and stuff. And there, it's actually really nice," }, { "start": 3049.92, "end": 3054.16, "text": " because then you can leverage all of the, you know, niceties around like people figuring out" }, { "start": 3054.16, "end": 3059.2, "text": " how to get these working really well and transformers and kind of, you know, nicely map it over as well." }, { "start": 3059.2, "end": 3064.64, "text": " I've, yeah, there's also been some good work using these in speech as well. Liam, any other" }, { "start": 3064.64, "end": 3071.04, "text": " things to add on top of that? Some, I used to do reinforcement learning more full time, and some" }, { "start": 3071.04, "end": 3076.7999999999997, "text": " colleagues kind of reached out about doing like sparse expert models for RL. I haven't seen, I'm" }, { "start": 3076.7999999999997, "end": 3081.6, "text": " not familiar with some work. But, you know, that might be sort of like another interesting avenue," }, { "start": 3081.6, "end": 3088.56, "text": " but like for sure. So language, vision, speech. I don't know if there's been any videos," }, { "start": 3088.56, "end": 3096.16, "text": " any video work yet. Yeah, but like high data, a lot of throughput, those would be like, you know," }, { "start": 3096.16, "end": 3101.52, "text": " really good areas. 
So I think video would be also really promising. Yeah, I really like also the," }, { "start": 3101.52, "end": 3105.6, "text": " I feel like it feels very natural in these high dimensionality spaces that you really might want" }, { "start": 3105.6, "end": 3109.04, "text": " different parameters to be applied, like when you have a video, like one, I think you don't want to" }, { "start": 3109.04, "end": 3113.12, "text": " be applying the same amount of compute to every frame. But then on top of that, I could see like," }, { "start": 3113.12, "end": 3116.88, "text": " actually, you really want to have different parameters applying to different, you know," }, { "start": 3116.88, "end": 3120.7200000000003, "text": " things going on in the video, because it's just gonna be like wildly different stuff happening." }, { "start": 3120.7200000000003, "end": 3124.48, "text": " So yeah, I think I'm very excited about these models for video as well." }, { "start": 3126.1600000000003, "end": 3132.2400000000002, "text": " Do you imagine that these models will just, essentially right now they're competition to" }, { "start": 3132.2400000000002, "end": 3140.1600000000003, "text": " dense models. They are competing, you're tracking Pareto frontiers, how much compute, how well are" }, { "start": 3140.16, "end": 3147.6, "text": " they doing, tackling very much the same tasks. Do you think this will go on? Like, do you think" }, { "start": 3147.6, "end": 3152, "text": " these models might overtake dense models if we figure out how to handle them correctly?" }, { "start": 3152, "end": 3157.52, "text": " Or is it more like there's a killer app for each one of them?" }, { "start": 3159.2799999999997, "end": 3165.68, "text": " Yeah, I think in, oh, do you want to go ahead, then? Yeah, I mean, I honestly think that the future" }, { "start": 3165.68, "end": 3170.56, "text": " is going to be adaptive. Like, I don't think there's any way that like in 10 years, our models are" }, { "start": 3170.56, "end": 3175.2799999999997, "text": " treating all examples coming in with like the same parameters over and over again, and the same amount" }, { "start": 3175.2799999999997, "end": 3182.16, "text": " of compute. It may not be this precise sort of like sparsity regime, or may not be the precise" }, { "start": 3182.16, "end": 3188.8799999999997, "text": " sort of adaptive computation, kind of like paradigms that have been put forth. But I view this sort of" }, { "start": 3188.8799999999997, "end": 3193.68, "text": " kind of work of like sparsity adaptive computation, as kind of like inevitable, like, I don't think" }, { "start": 3193.68, "end": 3198.3999999999996, "text": " it's going to be considered like competition, it's just going to be sort of like integrated into a" }, { "start": 3198.3999999999996, "end": 3203.52, "text": " lot of like leading models. That's, that's my expectation. I'd be really shocked in like 10" }, { "start": 3203.52, "end": 3208.48, "text": " years, we're training like a 100 trillion parameter dense model. And it's just kind of doing the same" }, { "start": 3208.48, "end": 3213.9199999999996, "text": " thing, like, over and over again, for no matter what comes in, just seems really strange to me." }, { "start": 3216.56, "end": 3223.12, "text": " What's the future for your particular research? 
Like, where do you see, where do you see yourself" }, { "start": 3223.12, "end": 3230.64, "text": " going in the next, maybe not the next paper that you haven't published yet, but maybe a bit broader" }, { "start": 3230.64, "end": 3234.24, "text": " time scale? Like, what excites you? And what are your next plans here?" }, { "start": 3236, "end": 3239.7599999999998, "text": " Yeah, great question. I mean, I think the thing that really excites me is like what we were kind" }, { "start": 3239.7599999999998, "end": 3244.16, "text": " of talking about earlier of each input getting a different amount of compute applied. Like, I think" }, { "start": 3244.16, "end": 3247.6, "text": " right now, the models are working well for each input getting different parameters. And I think," }, { "start": 3247.6, "end": 3251.6, "text": " you know, coupling this with like adaptive amounts of computation is like, I think," }, { "start": 3251.6, "end": 3255.6, "text": " really where I want to be spending time thinking about in the next, you know, upcoming years." }, { "start": 3258.24, "end": 3263.52, "text": " Is there? Yeah, I don't know, is you have something like Ponder, there's PonderNet," }, { "start": 3263.52, "end": 3268.96, "text": " and so on, there's these recursive architectures, or recurrent architectures that, that sort of" }, { "start": 3268.96, "end": 3274.24, "text": " decide themselves when to when to exit. Would that be one thing? Or do you simply imagine that each" }, { "start": 3274.24, "end": 3279.36, "text": " expert is kind of one is the buff expert, and one is the lean expert, and then the routing function" }, { "start": 3279.36, "end": 3284.6400000000003, "text": " essentially takes care of the different amount of compute? Yeah, I don't know. This is a great" }, { "start": 3284.6400000000003, "end": 3289.52, "text": " question. I think, I don't know, I can see either approach potentially working, or maybe you" }, { "start": 3289.52, "end": 3294.6400000000003, "text": " actually want combinations or potentially something completely new. Yeah, it feels like the space is" }, { "start": 3294.6400000000003, "end": 3299.2000000000003, "text": " still, you know, very exciting. And there's like a lot of really interesting different verticals" }, { "start": 3299.2000000000003, "end": 3302.2400000000002, "text": " being pushed. So the space still feels like, you know, pretty young to me." }, { "start": 3302.24, "end": 3307.4399999999996, "text": " Okay, last question from my side, what's the connection of this to something like Capsules?" }, { "start": 3307.4399999999996, "end": 3312.72, "text": " I don't know if you've ever thought about the the connection there. But with Capsules, I always" }, { "start": 3312.72, "end": 3318.24, "text": " think this is these abstract, very abstract things, very high level ideas flying around. And you here" }, { "start": 3318.24, "end": 3324.3199999999997, "text": " have something like very practical, you know, very on the metal thing. Yeah, there seems to be quite" }, { "start": 3324.3199999999997, "end": 3330, "text": " some commonalities. Is there is that something that ever came up to you? Or, or is that something" }, { "start": 3330, "end": 3337.92, "text": " that ever came up to you or? In the two years of doing sparsity research, this is literally the" }, { "start": 3337.92, "end": 3346, "text": " first time. I actually should be going back to that work. 
I feel like capsules, like, yeah, had a" }, { "start": 3346, "end": 3350.72, "text": " lot of like really interesting conceptions. But maybe, like you're kind of alluding to, it didn't" }, { "start": 3350.72, "end": 3356.08, "text": " like map super well to the metal. So maybe that sort of like hindered, like, its use, whereas" }, { "start": 3356.08, "end": 3361.36, "text": " this is just like highly motivated from like an engineering perspective. We've had like some" }, { "start": 3361.36, "end": 3365.44, "text": " questions like, oh, what is like the neuroscientific kind of motivation of our work? And it's like," }, { "start": 3365.44, "end": 3371.92, "text": " it's really engineering kind of driven. So it's like, okay, what will be fast on our existing" }, { "start": 3371.92, "end": 3378.56, "text": " hardware? But yeah, I will revisit capsules and kind of see like, oh, okay, how could we actually" }, { "start": 3378.56, "end": 3382.16, "text": " map this a little bit better to the hardware? And like, you know, I think that could be like," }, { "start": 3382.16, "end": 3387.7599999999998, "text": " you know, an interesting source of ideas. Is there any last thing you want to get out to viewers" }, { "start": 3387.7599999999998, "end": 3395.04, "text": " that they should take away from this work? Any way that a regular person can get into this type" }, { "start": 3395.04, "end": 3399.7599999999998, "text": " of research? Anything like this? Yes, a great question. So actually, one thing we tried to show" }, { "start": 3399.7599999999998, "end": 3403.12, "text": " in our switch transformer work is that these models work pretty well, even if you only have" }, { "start": 3403.12, "end": 3407.8399999999997, "text": " two experts. So I definitely don't want people to think that, you know, you really need a supercomputer" }, { "start": 3407.84, "end": 3412.7200000000003, "text": " to run the models or to, you know, get benefits from having experts. Even having, I think, as little" }, { "start": 3412.7200000000003, "end": 3417.1200000000003, "text": " as two experts and running models could lead to developing really interesting research ideas," }, { "start": 3417.1200000000003, "end": 3421.04, "text": " improving the performance and everything like that. So yeah, I definitely hope that, you know," }, { "start": 3421.04, "end": 3426.96, "text": " more people can continue to experiment and push forward these models. Yeah, and then I would say," }, { "start": 3426.96, "end": 3433.2000000000003, "text": " like, another interesting trend that I've been following is sort of in parallel to sparsity in" }, { "start": 3433.2, "end": 3437.52, "text": " these like, you know, really large models is the idea of like, well, what if we just sort of like," }, { "start": 3437.52, "end": 3443.2799999999997, "text": " have the model sort of offload and like, sort of do lookups or, you know, look at documents and" }, { "start": 3443.2799999999997, "end": 3448.64, "text": " retrieval type methods. I think this is sort of like a very interesting area. And I'd love to see" }, { "start": 3448.64, "end": 3453.7599999999998, "text": " like, kind of head to head comparisons of like, okay, do we want to try to encapsulate the knowledge" }, { "start": 3453.7599999999998, "end": 3458.64, "text": " into parameters? Or do we want to just like, keep it sort of like, you know, parametric," }, { "start": 3458.64, "end": 3464.3199999999997, "text": " non-parametric type thing?
And we keep the information kind of written in docs or like," }, { "start": 3464.3199999999997, "end": 3468.96, "text": " what does the interplay look like? I think that's sort of like another really interesting avenue," }, { "start": 3468.96, "end": 3474.8799999999997, "text": " like, kind of comparing these things. Awesome. Yeah, it sounds really cool. I'm excited to" }, { "start": 3474.8799999999997, "end": 3480.7999999999997, "text": " to see what the future of these models bring. Yeah, Barrett and William, thank you so much" }, { "start": 3480.7999999999997, "end": 3486.56, "text": " for being here. This was a lot of fun. I hope to see you again soon. Yeah, cool. Thanks for having" }, { "start": 3486.56, "end": 3495.92, "text": " us. Yeah, thanks for having us." } ]
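The routing mechanism discussed in the transcript above (a learned linear map producing one logit per expert, a softmax, a top-1 pick, and a capacity factor that fixes each expert's buffer size and forces token dropping on overflow) can be made concrete with a minimal sketch. This is my own illustrative code, not the interviewees' implementation: real systems vectorize this, shard it across devices, and add load-balancing losses, and all names here are made up.

```python
import numpy as np

def top1_route(tokens, router_w, num_experts, capacity_factor=1.0):
    # Router: one linear map from the token representation to one logit per
    # expert, then a softmax, then pick the top-1 expert (Switch-style).
    logits = tokens @ router_w                            # [n, num_experts]
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs = probs / probs.sum(-1, keepdims=True)          # gate probabilities
    choice = probs.argmax(-1)                             # expert per token

    # The capacity factor fixes each expert's buffer size ahead of time;
    # tokens that overflow a full expert are simply dropped.
    n = tokens.shape[0]
    capacity = int(capacity_factor * n / num_experts)
    buffers = {e: [] for e in range(num_experts)}
    dropped = []
    for i, e in enumerate(choice):
        (buffers[e] if len(buffers[e]) < capacity else dropped).append(i)
    return buffers, dropped

# As mentioned in the interview, the capacity factor can also be changed at
# evaluation time to trade compute for quality:
rng = np.random.default_rng(0)
toks, w = rng.normal(size=(256, 64)), rng.normal(size=(64, 8))
for cf in (0.5, 1.0, 2.0):
    _, dropped = top1_route(toks, w, num_experts=8, capacity_factor=cf)
    print(f"capacity factor {cf}: {len(dropped)} of 256 tokens dropped")
```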
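The parameter-versus-compute point made above (the 1.6 trillion parameter switch transformer doing roughly a billion dense parameters' worth of FLOPs per token, versus GPT-3's 175 billion) is just arithmetic, sketched below with the common rule of thumb of about 2 FLOPs per active parameter per token. The numbers are illustrative round figures taken from the conversation, not exact model configurations.

```python
# Rough rule of thumb: ~2 FLOPs per *active* parameter per token.
# In a sparse expert model only the shared layers plus the single routed
# expert are active, so per-token compute ignores all the other experts.
ACTIVE_PARAMS_SPARSE = 1e9     # illustrative: shared weights + one expert
TOTAL_PARAMS_SPARSE  = 1.6e12  # all experts counted, as in the interview
PARAMS_DENSE_GPT3    = 175e9   # every parameter active for every token

flops_sparse = 2 * ACTIVE_PARAMS_SPARSE
flops_dense  = 2 * PARAMS_DENSE_GPT3
print(f"total params ratio:      {TOTAL_PARAMS_SPARSE / PARAMS_DENSE_GPT3:.1f}x")
print(f"per-token compute ratio: {flops_dense / flops_sparse:.0f}x")
# ~9x more parameters, yet ~175x *less* compute per token: parameter counts
# alone say nothing about the compute cost of a sparse model.
```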
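The fine-tuning ablation described in the transcript (updating only the non-mixture-of-experts parameters, only the expert weights, only attention, and so on) amounts to toggling requires_grad by parameter name. A minimal PyTorch-style sketch, assuming a hypothetical naming convention; real MoE codebases name their modules differently.

```python
import torch

def train_only(model: torch.nn.Module, substrings: tuple):
    """Freeze every parameter except those whose name contains one of
    `substrings` (e.g. ("expert",) or ("attention",)). The substrings are
    placeholders and depend entirely on how a given model names its modules."""
    trainable = []
    for name, p in model.named_parameters():
        p.requires_grad = any(s in name for s in substrings)
        if p.requires_grad:
            trainable.append(p)
    return trainable

# e.g. the "update only the non-MoE parameters" variant, assuming non-MoE
# weights live under these (hypothetical) names; the optimizer then only
# sees the parameters left trainable:
# params = train_only(model, ("attention", "layer_norm", "embedding"))
# opt = torch.optim.Adam(params, lr=1e-4)
```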
3jT1qJ8ETzk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
SupSup: Supermasks in Superposition (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "supsup", "supermasks", "lottery ticket", "lottery ticket hypothesis", "gradient", "entropy", "surplus", "superfluous neurons", "lifelong learning", "multitask learning", "catastrophic forgetting", "continuous learning", "binary mask", "random network", "optimization", "hopfield network", "gradient descent", "superposition" ]
Supermasks are binary masks of a randomly initialized neural network that result in the masked network performing well on a particular task. This paper considers the problem of (sequential) Lifelong Learning and trains one Supermask per Task, while keeping the randomly initialized base network constant. By minimizing the output entropy, the system can automatically derive the Task ID of a data point at inference time and distinguish up to 2500 tasks automatically. OUTLINE: 0:00 - Intro & Overview 1:20 - Catastrophic Forgetting 5:20 - Supermasks 9:35 - Lifelong Learning using Supermasks 11:15 - Inference Time Task Discrimination by Entropy 15:05 - Mask Superpositions 24:20 - Proof-of-Concept, Task Given at Inference 30:15 - Binary Maximum Entropy Search 32:00 - Task Not Given at Inference 37:15 - Task Not Given at Training 41:35 - Ablations 45:05 - Superfluous Neurons 51:10 - Task Selection by Detecting Outliers 57:40 - Encoding Masks in Hopfield Networks 59:40 - Conclusion Paper: https://arxiv.org/abs/2006.14769 Code: https://github.com/RAIVNLab/supsup My Video about Lottery Tickets: https://youtu.be/ZVVnvZdUMUk My Video about Supermasks: https://youtu.be/jhCInVFE2sc Abstract: We present the Supermasks in Superposition (SupSup) model, capable of sequentially learning thousands of tasks without catastrophic forgetting. Our approach uses a randomly initialized, fixed base network and for each task finds a subnetwork (supermask) that achieves good performance. If task identity is given at test time, the correct subnetwork can be retrieved with minimal memory usage. If not provided, SupSup can infer the task using gradient-based optimization to find a linear superposition of learned supermasks which minimizes the output entropy. In practice we find that a single gradient step is often sufficient to identify the correct mask, even among 2500 tasks. We also showcase two promising extensions. First, SupSup models can be trained entirely without task identity information, as they may detect when they are uncertain about new data and allocate an additional supermask for the new training distribution. Finally the entire, growing set of supermasks can be stored in a constant-sized reservoir by implicitly storing them as attractors in a fixed-sized Hopfield network. Authors: Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, Ali Farhadi Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
Hi there, today we'll look at Supermasks in Superposition by Mitchell Wortsman, Vivek Ramanujan et al. So on a high level this paper tackles the problem of sequentially learning many, many tasks without catastrophic forgetting by leveraging these things called supermasks. A supermask is basically a binary mask that you lay over a randomly initialized neural network to make the masked network perform better than a random initialization. They will train these masks for each of the tasks that they consider and then at inference time they can recover the task that the data is from and therefore kind of do this lifelong multitask learning better than the baselines that they compare against. In fact they can do better without knowing the task than the baselines can with knowing the task. So that's pretty cool. This is a pretty dense paper in terms of content and we won't go over everything in the paper but we'll go over the ideas and kind of what I think makes them work. So stick around if you want to know that. Also consider sharing this video out, tell your friends about it and subscribe if you haven't, it helps. So yeah cool. So let's dive in. We present the Supermasks in Superposition model capable of sequentially learning thousands of tasks without catastrophic forgetting. So the term catastrophic forgetting comes from the world of this kind of sequential multitask learning where you have a model. Let's say this is your model, the black box, and you let it learn on a task. Let's say this is an image recognition task. So you have a data set and you let it run on this data set. You learn the data set, maybe it's CIFAR-10, right? So this is CIFAR-10. Cool. And now the model can do CIFAR-10 pretty well. Then you also want to learn a different task. You want to learn MNIST. Okay, so you have MNIST and you want to learn MNIST and you want to learn that one. So your hope is that your final model can do both. So you'll take this one and you simply train it on MNIST as well. And then, you know, we know there's this kind of fine tuning, pre training and so on. So your hope would be that at the end it can do both. But then you want another one. You want ImageNet. Okay, now ImageNet is a pretty big data set. So you take your model and you also train it on ImageNet. And with time, the model is always going to be very good at the task you just learned, but it is going to forget the tasks that you learned previously. This is the catastrophic forgetting problem. You might ask, why don't I just train on all the tasks equally, like at the same time? And that's a valid question. You can do that. But in the task description here, it's necessary that we learn the tasks one after another, because, you know, maybe we get this data in this year and then it's pretty big data. We can't just afford to retrain on all the data all the time. We want to kind of continuously integrate our knowledge. This is very important in the fields of lifelong learning, where kind of the hope is you can build a system that continuously integrates experience, but doesn't forget the old experience. Okay, and the experience might come from new data sets and so on, but you don't want to forget the old ones. So catastrophic forgetting is one of the main problems in this field of research of lifelong learning. And this paper is going to tackle this. How? It's sort of. So if you think of what could you do right here, what you could do is you could simply not use the same model, right?
You could simply train a different model for each task and just keep them all around. At test time you then need some way of deciding which model to use, and there are two different scenarios. You've learned all of these models, and at test time an image arrives. It could be that I tell you, by the way, this is an MNIST image; then you just grab the MNIST model and apply it, very cool. Or it could be that I don't tell you what image it is, and then you need a way to decide where it comes from. But once you do decide, it's again easy: if you believe it's an MNIST image, you apply the MNIST model. So you could technically do that, but it's unhelpful. First of all, these models can be large, so it costs you to store them. And second, there might actually be overlap: CIFAR-10 and ImageNet are both natural images, so they might benefit from each other's features in some way. What we're going to do here is essentially this separate-models approach, but built out of supermasks, the second ingredient we combine. The paper says: "Our approach uses a randomly initialized, fixed base network, and for each task finds a subnetwork, a supermask, that achieves good performance." So what's a supermask? Supermasks come from the papers around the lottery ticket hypothesis. One of those papers conjectured, and then showed evidence, that if you have a randomly initialized network, the gray thing, there is a way to mask it, where masking means you either activate or deactivate connections. You take your network and multiply it by a binary mask that is a one or a zero for each connection. The network's weights aren't replaced by zeros and ones; rather, each connection is multiplied by a zero or a one. Wherever there's a one, whatever weight that connection had remains the value of that connection. Wherever there's a zero, the weight is pinned to zero, so no signal flows through it. That paper established that if you take a randomly initialized neural network, you can find masks such that the masked network already performs better than random on a given task (a small sketch of this masking mechanic follows below). So there is a way to solve MNIST by taking a randomly initialized network and simply masking it cleverly, and the masked network will have good accuracy on MNIST. I've made a video about that. My intuition behind supermasks, and this is just my intuition, is that MNIST is a relatively easy task; in fact, most of the tasks considered in these papers are relatively easy. And if you have a randomly initialized neural network, what you have lying around is a bunch of weights. If I have my two layers here, each connection carries some number, like 0.25, 7, negative 3, and so on.
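To make the masking mechanic described above concrete, here is a minimal sketch, not the authors' code: a frozen, randomly initialized two-layer network whose connections are switched on or off by elementwise binary masks. The shapes and the random masks are illustrative assumptions; in practice the mask is the thing you optimize per task.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Randomly initialized, frozen base weights (never trained).
W = torch.randn(784, 256)
V = torch.randn(256, 10)

# One binary mask per layer; here chosen at random purely for illustration.
mask_W = (torch.rand_like(W) > 0.5).float()
mask_V = (torch.rand_like(V) > 0.5).float()

def masked_forward(x):
    # Where the mask is 1 the random weight is kept; where it is 0 the
    # connection is pinned to zero, so no signal flows through it.
    h = F.relu(x @ (W * mask_W))
    return h @ (V * mask_V)

x = torch.randn(32, 784)       # a batch of flattened MNIST-sized inputs
logits = masked_forward(x)     # shape (32, 10)
```

Note that the base weights never change; everything task-specific lives in the two binary masks, which is what makes storing one model per task so cheap.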
They actually consider weights initialized in a very particular way, but ultimately you just have a bunch of random weights lying around. If the task is easy and the neural network is sufficiently overparameterized, there might be many, many ways of achieving your goal. So rather than adjusting the weights, which is what you would do when training the network, you can get away with simply selecting a combination of the weights you already have that gives good performance. It's sort of a mix of dropout and vector quantization: in vector quantization you also get away with quantizing vectors to a given precision, and here the task is easy enough that, thanks to overparameterization, selecting and mixing the existing weights, where you can't mix arbitrarily but only with zeros and ones, is good enough. That's my hypothesis, and it would imply that the harder the task, the harder it gets to find supermasks that perform well. Nevertheless, for the tasks considered here you can find these supermasks, and there is a way to do it with gradient descent, even though the masks are discrete. So we use the same randomly initialized network for each of the tasks, CIFAR-10, MNIST, ImageNet, but we find an individual mask for each task on top of that same network, and they will all perform relatively well, according to the supermask conjecture. Again, this is not surprising. The fact that we always use the same randomly initialized network isn't strictly necessary, but here they do, and then we only need to store the mask for each task. The mask is much cheaper than the weights: a 32-bit floating point number is 32 bits, while a masking bit is one bit, so we save roughly a factor of 32. But essentially, it's not the case that we are training one model that does continual learning; it's much more akin to training one model per task and then inferring the task, just in a cruder, compressed way. It's more like learning a compressed model per task, which I find a better way to look at it than continual learning. In any case, you learn these supermasks, and then comes the hard bit. The easy bit: if I tell you which task the test data point comes from, classifying is easy; you select the mask accordingly, run a forward pass, and that's it. If I don't tell you where the test data point comes from, that's the hard part. They need a way to decide which task the data point is from, and they have multiple ideas. The main one is that if you have trained these individual models for the individual tasks, then the correct model should be very confident. This is an assumption that you make.
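Before getting into task inference, here is a rough sketch of how such a discrete mask can actually be found with gradient descent, as mentioned above. It is in the spirit of the edge-popup algorithm that this line of work builds on, but simplified: treat the scoring rule, the sparsity level, and the exact top-k selection as my assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Linear layer with frozen random weights and a trainable supermask."""
    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        # Frozen base weights: these never receive gradients.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1,
                                   requires_grad=False)
        # A real-valued score per connection is trained instead of the weight.
        self.scores = nn.Parameter(torch.randn(out_features, in_features))
        self.sparsity = sparsity

    def forward(self, x):
        n = self.scores.numel()
        k = int(n * self.sparsity)                 # connections to keep
        threshold = self.scores.flatten().kthvalue(n - k).values
        mask = (self.scores > threshold).float()   # hard, discrete top-k mask
        # Straight-through estimator: the hard mask is used in the forward
        # pass, but gradients flow to the scores as if the mask were identity.
        mask = mask + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)
```

Training is then ordinary: build a network from such layers, run cross-entropy on the task's data, and only the scores move; at the end you threshold the scores once and store the resulting bits as the task's supermask.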
So, back to inference: I take my test image and feed it through model one, and note that this idea is, at its core, separate from the masks. It simply says: if I have three different models trained for three different tasks and I get an input without knowing which task it's from, I can feed it to each of them and look at the output distributions. Say we have three-class classifiers, so three output neurons each. One model's output distribution is fairly flat, another's is sharply peaked, a third's is somewhere in between. Which one would you pick? The answer: pick the peaked one, because it has very low entropy. That model is very sure about this data point; the gap between its top prediction and all the others is large, so it is confident, whereas for the other models the gap between the highest output and the rest is not that big. So we pick the model, or the mask in this case, for which the output entropy is the lowest. That's a heuristic, but it tends to work pretty well, and it has a bit to do with how relatively difficult your tasks are: the tasks need to be roughly equally difficult, otherwise this can get out of hand, though there are ways to address that, and they allude to them in the future work section. If the tasks are equally hard, and here they consider tasks that are, then entropy is a good measure of confidence, and we can infer the task by using entropy as a heuristic. So we are left with simply trying each of the masks and taking the one with the lowest entropy (a sketch of this naive procedure appears below). Now, they point out this is costly: if we've learned a thousand tasks, we need to try each of the thousand masks. So they go for something else, and this is the second word in the title: superposition. Instead of trying masks one by one, they use a superposition of masks. The picture in the paper is actually more descriptive than the formula, but we can write the formula down. They say: why don't we just overlap all of the masks? Take all the masks, one per task, assign each a coefficient alpha_i initialized to 1/K, where K is the number of tasks, mix them as a weighted sum, and multiply the mixture elementwise by the weights of the network; that superimposed network is what we feed our image into. What does that give us? Basically a mix of all the networks; it's safe to say the entire base network is in there, and if multiple masks use the same weight, that weight enters with higher effective strength. That's what you see in the figure: all the masks overlapped in superposition. Now, what does the output of that give you? By itself, not much: roughly the average prediction of all the networks.
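Before the superposition trick is developed further, here is a sketch of the naive procedure mentioned above: run the input through every learned mask and keep the one whose output is most confident, i.e. has the lowest entropy. The helper `forward_fn(x, mask)` is an assumption here; think of it as the earlier `masked_forward`, generalized to take the mask as an argument.

```python
import torch
import torch.nn.functional as F

def mean_entropy(logits):
    # Average entropy of the softmax distribution over the batch.
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean()

def infer_task_naive(x, masks, forward_fn):
    # One forward pass per learned mask: the cost grows linearly with the
    # number of tasks, which is exactly what superposition is meant to avoid.
    entropies = torch.stack([mean_entropy(forward_fn(x, m)) for m in masks])
    return int(entropies.argmin())   # most confident = lowest entropy
```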
That average prediction of all the networks isn't very helpful by itself. But what we can do is look at gradients. From this mixed output we compute the entropy, denoted H, and backpropagate it to the alphas, calculating the gradient of the entropy with respect to each alpha. What's the intuition? If I change an alpha a bit, how does the entropy change? The gradient gives you the sensitivity of the entropy to each alpha: if it is large in magnitude, that mask has a big influence on the entropy. Here is the formalism from the paper: we associate each of the K learned supermasks with a coefficient alpha_i, initially set to 1/K. Each alpha_i can be interpreted as the belief that supermask M_i is the correct mask, equivalently the belief that the current unknown task is task i. The model output is computed with a weighted superposition of all learned masks, and the correct mask should produce a confident, low-entropy output; therefore, to recover the correct mask we find the coefficients alpha which minimize the output entropy H. So, to be clear, we want the task with the lowest entropy. The gradient tells us how each mask influences the entropy, and we simply select the alpha whose gradient is the most negative number; not zero, but as negative as possible, because that means increasing that mask's contribution would decrease the entropy the most. And again, the hypothesis is that minimum entropy means the most confident prediction, which, if all tasks are equally hard, probably means the data point comes from that task. So what's the payoff? They show in a graph that this is much faster. If we evaluated each mask individually and measured its entropy, the inference time would grow linearly with the number of tasks, because we'd have to try each mask. If instead we mix the masks, we run one forward pass and one backward pass, and they consider two strategies: either do gradient descent on the alphas, which takes a number of steps to converge, or take a single step, observing the gradient once and picking the alpha with the most negative gradient. A sketch of this one-shot procedure follows below.
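Here is a sketch of the one-shot superposition trick just described: mix all masks with coefficients alpha, take the entropy of the mixed output, backpropagate once, and pick the task whose alpha has the most negative entropy gradient. One simplification to flag: the real model mixes masks at every layer, while this sketch collapses everything to a single linear layer.

```python
import torch
import torch.nn.functional as F

def one_shot_task_inference(x, W, masks):
    """masks: list of K binary masks, each with the same shape as W."""
    K = len(masks)
    alphas = torch.full((K,), 1.0 / K, requires_grad=True)
    mixed = sum(a * m for a, m in zip(alphas, masks))   # superimposed mask
    logits = x @ (W * mixed)            # one forward pass through the mix
    p = F.softmax(logits, dim=-1)
    H = -(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean()
    H.backward()                        # one backward pass to the alphas
    # Raising the correct mask's alpha should lower the entropy the most,
    # so its gradient is the most negative one.
    return int(alphas.grad.argmin())
```

The whole point is that this costs one forward and one backward pass regardless of K, instead of K separate forward passes.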
So where's the catch? There are two catches, actually. The first is that the mixture is a convex combination, while the problem isn't convex at all. If you take this convex combination, multiply it in, and then look at the gradient, you implicitly assume the problem is a nicely shaped, roughly convex one, and the gradients with respect to the alphas then rest on assumptions that might not be true. So you only heuristically approximate the importance of each mask. The second catch is that you are still implicitly trying all the models, just not explicitly. When you form the combination, your automatic differentiation library keeps track of what each individual mask contributes, but only per layer. The network W is a multilayer perceptron, so there's a W1 and a W2, and the masks are likewise per layer: a mask for layer one, a mask for layer two, and so on. The autodiff package has to track that mask one enters layer one with its alpha, layer two with its alpha, and so on; it keeps this graph, but the computation is highly optimized and only has to be done layer by layer. The contribution of alpha_i is not explicit within a layer; it is implicit, averaged across the layer. In each layer you assume a convex combination of all the masks and propagate that forward, so at the next layer you can only see what mask i's layer-two part does in terms of the convex combination coming out of layer one. You make multiple approximations and rely on the autodiff library to keep track of everything and run the operations in parallel; in the linear case, where you try each mask separately, you presumably run sequentially, but the result is exact. That's the trade-off. All right, so now we know how to figure out which task the data is from; let's see how that works in the experiments. The first one is SplitImageNet, which takes the ImageNet data set, a thousand classes, and distributes it into 100 different tasks, each a 10-class classification task. Note two things. First, in SplitImageNet each task is approximately as hard as the others: it's still ImageNet classification with the same number of labels per task, so you can assume roughly equal hardness. Second, the tasks are actually pretty easy. It's hard to distinguish ImageNet into a thousand classes, but if you split that up, I'm going to bet that with these high-resolution images a 10-class classification is relatively easy. So all the conditions for my hypothesis are met. On the right side you see SplitCIFAR100, which does the same to CIFAR-100, subdividing it into small classification tasks. A sketch of this kind of split construction follows below; then we look at the results.
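Here is a sketch of how a "split" benchmark of this kind can be constructed: partition the label set into disjoint groups and relabel each group 0 through 9. The grouping and seeding here are my assumptions for illustration, not necessarily the paper's exact split.

```python
import numpy as np

def split_tasks(labels, classes_per_task=10, seed=0):
    """labels: 1-D numpy array of original class labels for the whole set.
    Returns a list of (example_indices, relabeled_targets) per task."""
    classes = np.random.RandomState(seed).permutation(np.unique(labels))
    tasks = []
    for t in range(len(classes) // classes_per_task):
        group = classes[t * classes_per_task:(t + 1) * classes_per_task]
        idx = np.where(np.isin(labels, group))[0]
        remap = {c: i for i, c in enumerate(group)}   # relabel to 0..9
        tasks.append((idx, np.array([remap[l] for l in labels[idx]])))
    return tasks
```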
The upper bound here is where you train a single model per task; that gets you an average accuracy of 92 percent, which is pretty good. Of course, each task is only 10-class, which makes the numbers look quite different from full ImageNet. With SupSup, the supermasks-in-superposition model, you get to a pretty good 88 percent. The other rows are baselines that also do lifelong learning. They have these annotations, like GG (yes, GG). The first letter tells you whether the task ID is given during training, and the second whether it is given during testing. So the GG setting simply evaluates whether the masking itself is feasible, which, as you can see, it is: we know which mask to train during training and which mask to retrieve during testing, so none of the entropy-gradient machinery is involved. This purely evaluates the viability of the masking approach, which turns out to be more viable than the baselines. The same holds on CIFAR-100. Since that is an easier problem, they also evaluate the number of bytes, which they can control: you control the size of the model by increasing or decreasing the required sparsity of the masks, i.e., by saying how sparse you want each mask to be. Of course, the sparser you go, the worse the model, because you have fewer ones in your budget to make the model perform well. You can see that the BatchE baseline severely underperforms relative to the upper bound, where again the upper bound trains a model per task. "Separate heads" is another kind of dummy baseline, training a different head for each task on a common trunk; that gets you pretty much nowhere. With the SupSup algorithm you get almost to the performance of the upper bound, and with the transfer variant you actually get there. The transfer variant simply means that since you do the tasks in succession, training mask one, then mask two, and so on, when you start task three your initial mask scores are basically a running average of the masks you have already learned. So some amount of transfer happens purely through initialization (a hedged sketch of this follows below). It's actually astounding that this helps so much, but with it, if you look at the actual numbers, I believe you can get a tiny bit higher than training a single model per task. So this establishes the viability of training different masks for different tasks, which again I think is not surprising, because essentially you're training a different model per task; it's just a very crude model that you can store very efficiently. Now you might object: hey, don't I need to store the underlying randomly initialized network? And the answer is yes and no: you only need to store the random seed to reproduce it. So, checkmate.
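A hedged sketch of that transfer idea: when a new task begins, initialize its mask scores from a running average of the masks learned so far. Exactly how the average enters the score initialization is my assumption; the paper only says the initialization uses an average of previous masks.

```python
import torch

def init_scores_from_transfer(prev_masks, scale=1.0):
    """prev_masks: list of binary mask tensors from earlier tasks."""
    if not prev_masks:
        return None   # first task: fall back to random score initialization
    avg = torch.stack(prev_masks).float().mean(dim=0)   # in [0, 1] per weight
    # Connections that most previous tasks kept start with high scores, so
    # the new mask begins near the consensus of the earlier masks.
    return scale * (avg - 0.5)
```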
Next, they explain the one-shot algorithm, where they simply pick the mask with the maximum negative gradient of the entropy. They also have a binary algorithm: if the tasks are harder to differentiate, the convex-combination assumption might not hold, so they do a binary search of sorts, as a middle ground that still avoids evaluating every mask by itself, since that takes long. They form the convex combination and evaluate the gradients, but instead of just taking the most negative gradient, they eliminate half of the candidates: whenever a gradient lies above the median, that mask is dropped, and they start over with the reduced set of alphas. In each step they eliminate half of the masks and then recompute, because, since the problem is not convex, the ordering might actually change in the second, third, and fourth steps. This sits halfway between the one-shot algorithm and trying each mask by itself; it's a compromise (a sketch follows below). They really try hard not to evaluate each mask once, since avoiding that is one of their contributions, but they presumably realized the pure one-shot version sometimes fails, so they go in between, which is a pretty neat idea. All right, next experiments.
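A sketch of that binary elimination variant: instead of trusting a single gradient step, repeatedly halve the candidate set, dropping the masks with the less favorable entropy gradients and re-evaluating. The callback `grad_fn` is an assumption standing in for "rebuild the superposition over just these candidates and return the entropy gradient per alpha".

```python
import torch

def binary_task_inference(grad_fn, num_tasks):
    """grad_fn(candidates) -> 1-D tensor of entropy gradients, one per
    candidate, recomputed from a fresh superposition over those candidates."""
    candidates = list(range(num_tasks))
    while len(candidates) > 1:
        grads = grad_fn(candidates)
        # Keep the half with the most negative (most entropy-reducing)
        # gradients. Because the problem is not convex, the ordering can
        # change between rounds, hence the recomputation each time.
        keep = torch.argsort(grads)[: max(1, len(candidates) // 2)]
        candidates = [candidates[i] for i in keep]
    return candidates[0]

# Cost: about log2(num_tasks) forward/backward passes, between the single
# pass of the one-shot rule and the num_tasks passes of the naive method.
```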
In the next set of experiments you see a number of things. First, there is a new baseline, PSP, and the baselines all operate in the GG regime: given the task during training and given the task during evaluation. The upper bound, in gray, trains one model per task; you assume it's an upper bound because you assume the tasks are unrelated to each other, which is not quite the case, so there is actually potential to beat it. SupSup here operates in a different regime: given the task during training, but not given the task during testing, with the additional "u" meaning the task labels are assumed unshared. What does that mean? If you split MNIST into two tasks, task one being digits zero through four and task two being five through nine, each task has five labels, so the network always has five output neurons. If the image is a five, the correct answer is task two, label zero. If the network predicts output neuron zero correctly but claims the input comes from task one, that counts as a mistake: you predicted the right output neuron, but you told me it comes from the zero-to-four task. So there is no way for the network to dodge predicting the wrong task or to share label information; the labels are unshared, and SupSup therefore has a significantly harder job than the baselines. Keep that in mind. Since the task is not given at inference time, we apply our heuristic: we look for the mask whose output entropy is lowest, or rather we use the one-shot algorithm and look at the gradients. This is evaluated on permuted MNIST. In permuted MNIST you take MNIST and simply permute the pixels; it sounds crazy, but each permutation gives you a new task, so you can come up with an almost unbounded number of tasks: with 28 by 28 = 784 pixels, there are 784 factorial possible permutations (a sketch of constructing such tasks follows at the end of this passage). Along the x-axis, the number of tasks learned increases. At the beginning the baselines, especially one of them, do fairly well, on par with the upper bound when there are only ten tasks, but after that they quickly degrade. SupSup, however, keeps its performance, which means it not only predicts the correct output neuron, it also correctly infers which task, i.e., which permutation, was applied to the digit, simply by looking where the entropy is lowest. That's pretty cool, and honestly kind of surprising. On the left is a LeNet architecture, on the right a fully connected network. The fully connected network performing better is sort of expected: first, MNIST is really easy and can be solved with a fully connected network, and second, permuted MNIST in particular no longer conforms to the assumptions of convolutional networks. Again, these tasks are very easy, and for a fully connected network each permutation looks much the same, because it never assumed pixels were adjacent to begin with; each pixel is just a separate feature. It simply cannot transfer much from one task to another; that's the nature of permuted MNIST.
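A small sketch of how permuted-MNIST tasks are generated: each task is just a fixed random permutation of the 784 pixel positions, applied to every image.

```python
import numpy as np

def make_permutations(num_tasks, num_pixels=784, seed=0):
    rng = np.random.RandomState(seed)
    # One fixed pixel permutation per task.
    return [rng.permutation(num_pixels) for _ in range(num_tasks)]

def apply_task(images, perm):
    """images: (batch, 784) flattened MNIST digits; perm: one permutation."""
    return images[:, perm]
```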
Then comes the crown experiment: permuted MNIST with up to 2500 tasks, 2500 different permutations, and with an additional twist. They compare SupSup, which must predict the correct permutation, against the NN regime: there, you are not given the task label at test time, and you are not even given the task label at training time. In that setting the outputs are shared: since you have no way of knowing which task it is, as long as you predict the correct class you're good; it's always a 10-class classification problem, just permuted, and the task label never appears. First, I want to note that the shared labels could actually contribute to the success of the algorithm in that regime, because even though the pixels are permuted, you can still do things like count the frequency of light versus dark pixels. Simple correlations between two specific pixels are task-specific, but the frequency of light versus dark pixels will already give you a big boost in accuracy, and that feature can be shared: it is the same under every permutation, so it is something the tasks have in common. One way to eliminate this, I guess, would be to randomize the number of light pixels while keeping the classes the same, but that gets complicated; just keep it in mind. So how does the algorithm even decide when a new task begins? They have a heuristic for that too: if task identity is entirely unknown, even during training, and SupSup is uncertain about the current task identity, it is likely that the data does not belong to any task seen so far; when this occurs, a new supermask is allocated and the number of tasks learned so far is incremented. They follow the same principle as before: when a new training sample arrives, they evaluate it against all the masks so far, or use the one-shot algorithm to approximate which mask yields a low entropy. If no mask gives a low entropy, they decide this must be an unseen task and allocate a new mask for it (a hedged sketch of this follows below). That heuristic performs fairly well; looking back at the graph, it is pretty much on par with the setting where the task is known during training and just not during inference, up until the very last bit, in the really high-task regime, where I guess things start getting confusing. The heuristic might break down there, but that might just be a matter of how they tune their constants: they have to define a threshold such that if the entropy is above it, a new task is allocated, and that threshold could be optimized. Again, these tasks are very, very easy, so keep that in mind.
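A hedged sketch of that unknown-task heuristic: if no existing mask is confident enough on the incoming data, signal that a fresh mask should be allocated. The threshold here is a made-up constant for illustration; as discussed above, in practice it has to be tuned.

```python
import math

def select_or_allocate(entropies, num_classes, threshold_frac=0.9):
    """entropies: per-mask output entropies on the incoming batch.
    Returns the index of the best existing mask, or None to signal that
    the caller should allocate a new supermask."""
    max_entropy = math.log(num_classes)          # entropy of a uniform output
    if min(entropies) > threshold_frac * max_entropy:
        return None   # nothing is confident: likely an unseen task
    return min(range(len(entropies)), key=lambda i: entropies[i])
```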
Okay, so that was the main experimental part of the paper. They also consider different extensions and some ablations, which are pretty interesting. They up the hardness with rotated MNIST, where the tasks differ only in that some are rotated by 10 degrees. That's a tiny rotation; if you have a digit three, rotating it by 10 degrees is so subtle I can't even draw it. And SupSup must correctly predict which task the image is from, or it gets no credit. The fact that it performs pretty well, and even outperforms, at some rotation settings, a baseline that is given the rotation, i.e., given the task at inference time, is pretty remarkable. Again, I believe this is due to the tasks being so easy: the entropy just spikes when you hit the correct mask, because each mask latches onto very easy features for its task. I'm going to guess these tasks are generally solvable by correlating maybe two pixels: if this pixel correlates with that pixel, and the correlation is high, it's a three; if it's low, it's something else. If you rotate the image, it's just no longer the case that those two particular pixels correlate strongly, so a prediction based on that correlation comes out with low confidence. With discrete tasks that are all about equally hard, the confidence will just spike on the right task, because if such a correlation exists in one task, an analogous one, between two different pixels, exists in the others; and as you try the masks, whenever you hit the one whose two pixels let you predict confidently, your confidence spikes, your entropy drops, and that's the task. They also take their one-shot algorithm and bolt it onto a baseline that normally has to be given the task, using the one-shot algorithm to select the task instead. It turns out they can make that baseline perform fairly well, not on par with SupSup, interestingly, but actually better than it performed before. Then there are further extensions, some of them pretty important. One important one is these superfluous neurons, and it's somewhat hidden. For example, you see in the experiments that they use a LeNet model with output size 500. But there are only 10 labels in the MNIST task, and likewise in each permuted MNIST task; across 2500 tasks there are 25,000 labels in total, but any single task needs only 10 outputs. Yet their network has output size 500, which is surprising. They say, and we'll get to the Hopfield network at the very end for those still around, because I think that should be its own paper: one could use an output of size L, where L is the actual number of labels per task, though they find in practice that it helps significantly to add extra neurons to the final layer. Specifically, they consider outputs p in R^s, with s > L, and refer to the neurons past L as superfluous neurons. Let's try to make sense of this. Say you have a three-class classification task; they simply add a bunch of extra output neurons, including all the connections from the previous layer into those neurons. But the labels can still only be 0, 1, or 2; the extra classes never appear during training. They claim this helps their procedure, and they simply state that they observe it helps, so let's try to work out why. If we train the model with these extra neurons, what happens? The label is always one of the top three neurons; say the label is one, giving a one-hot target. And what are we training in the final layer? Logits, pre-softmax outputs.
Our cross-entropy loss pushes the correct logit up and all the others down at every single training point. The three real logits get pushed up or down depending on the label, but the superfluous logits only ever get pushed down over the entire training run, so they end up as exceptionally low numbers. Now, if we then look at the entropy of such an output: honestly, I think you could achieve a similar effect with a different temperature parameter in the softmax or in the entropy you compute. But why can this help with inferring which task the input came from? Consider a head with only three outputs and no superfluous block: on its own task it will be fairly confident, and on other tasks perhaps less so, but the contrast may be weak. With the superfluous neurons, if the input is from the correct task, those extra logits stay extremely low, since that is exactly what they were trained for on this data. But if the input is from an incorrect task, the model has never seen anything like this particular kind of training example, so it is unsure, and being unsure means it distributes probability mass onto the superfluous neurons as well. It's important to see that the superfluous neurons are only ever trained to be small on data from their own particular task, the one they label j; for data from any other task they have much less reason to be small, because such a data point is an outlier to them. So those neurons fluctuate much more, and the entropy is even higher. That is how I make sense of why the superfluous neurons help: they act as a kind of outlier detector for the training distribution of their particular task, without you explicitly training an outlier class. And because each task has different training data, the authors go further and say it works even better if, instead of the entropy heuristic, we use another objective: accordingly, they consider an objective G, which encourages the s neurons to have large negative values, and use it as an alternative to entropy in their equation 4. They analyze G in the appendix, and it's interesting to see what it is: with y the logits, G is essentially the logsumexp of the logits, with a modification. So it is somewhat like the entropy.
What we consider is the gradient of G with respect to our alphas, and the condition, enforced via a detach operation, is that the gradient of G should equal the gradient of the loss for all neurons v that are superfluous, and be zero otherwise. So we detach the gradient of G for all the real neurons, the actual logits of the output classes, and only let gradient flow through the superfluous neurons; in the last layer, gradient flows only through them. And that's why we don't need the entropy: entropy always measures the contrast between the correct label and the other labels, but we're pretty sure the correct label is not among the superfluous neurons. So what does the logsumexp of those superfluous outputs represent? It's a kind of flatness measure, again like the entropy, except there is no correct label among them. If one of them is very high, or they are all generally high, the logsumexp is high; if they are all very small and roughly equal, the logsumexp is very small. So this is an alternative where we look only at the superfluous neurons and ask: are they all very small, with none of them claiming to be the correct label? If so, we can be pretty confident we're on the right task. If instead they are larger and unevenly spread, we're not confident, because these outlier classes shouldn't be large at all. So instead of the entropy of the output distribution, you build these superfluous neurons and look only at them, and at the gradient through only them, to decide the task (a sketch follows below). It's an interesting idea, I have to say; maybe one could achieve much the same with a temperature parameter or an explicit outlier detector, but as an outlier detection mechanism it's interesting; I've never really seen anything quite like it, though I also hadn't thought about it. They then show the importance of this: in the earlier experiments you saw both the H objective and the G objective, and in both cases there were superfluous neurons, 500 output neurons for tasks that need 10 output classes. This tells me that the superfluous neurons are pretty important for them, probably one of the things that makes this work: you set up a trap where the wrong models run into assigning a lot of weight to these outlier classes, and only the correct model was trained not to do that on this particular data. I don't think the paper emphasizes enough that this is, I guess, one of the main factors making the method work.
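A sketch of the G objective as described above: logsumexp over the logits, with gradients blocked through the real labels so that only the superfluous neurons contribute. The exact placement of the detach is my reading of the description, so treat it as an assumption.

```python
import torch

def g_objective(logits, num_real_labels):
    """logits: (batch, s), where the first num_real_labels columns are the
    real classes and the remaining columns are the superfluous neurons."""
    real = logits[:, :num_real_labels].detach()    # no gradient through these
    superfluous = logits[:, num_real_labels:]      # gradient only through these
    g = torch.logsumexp(torch.cat([real, superfluous], dim=-1), dim=-1)
    # Small g: the superfluous logits sit at uniformly large negative values,
    # the "outlier detector" is quiet, and this mask is plausibly the right one.
    return g.mean()
```

In the one-shot procedure, `g_objective` would replace the entropy H before the single backward pass to the alphas, so the task decision rests entirely on how loudly the outlier block fires.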
They actually run an experiment on this, and not to be too mean, but it shows: if they train with just 25 output neurons on permuted MNIST, where the necessary number is 10, performance degrades quickly; as they go up to 100 and 200, it gets better and better. In fact, the G objective always somewhat outperforms the H objective, and interestingly, the more output neurons you have, the smaller the difference seems to be, though maybe the relative error difference stays the same; I can't tell from the plot. And that isn't all: there is also this Hopfield network extension. They say, okay, essentially we are training different models; we're not really superimposing them, we train a different mask per task and remember all the masks. Can we also build a version where there is actually only one model? That's what they do with a Hopfield network, which is basically one big matrix, and they encode the masks in it. Specifically, the Hopfield network has size d squared and is able to encode 2^d different binary strings, in a fuzzy way. You can prove that if you construct the Hopfield network in a certain way from binary strings z, you can recover those strings by gradient descent on the Hopfield energy. Obviously, the more binary strings you encode, the worse the recovery gets; it's not magic, you can't store that many bits in a thing that doesn't have that many bits. But with gradient descent the recovery works with surprising accuracy. Remember, though, that the masks are bits while the Hopfield matrix entries are floating point numbers, so the comparison I just made isn't entirely fair. I don't want to go further into the Hopfield networks, because I really feel this should be its own paper; I guess they just want to show the masks can also be compressed into a single object, so one can no longer argue that all they're doing is training different models for different tasks. All in all, a pretty cool and pretty dense paper; I invite you to read it. They have a big appendix with more experiments and detailed explanations. From this paper I don't necessarily take away the method itself, but the ideas are very interesting, and I am excited to see where this goes in the future. All right, I'll see you next time. Bye bye.
}, { "start": 1032, "end": 1033, "text": " What does that give us?" }, { "start": 1033, "end": 1035, "text": " So what's the intuition here?" }, { "start": 1035, "end": 1042, "text": " The intuition is if I change my alpha a bit, how does the entropy change?" }, { "start": 1042, "end": 1048, "text": " So basically, this gives you the sensitivity of the entropy to these alpha parameters." }, { "start": 1048, "end": 1052, "text": " So if this is high, what does it mean?" }, { "start": 1052, "end": 1058, "text": " It means that this mask right here has a big influence on the entropy." }, { "start": 1058, "end": 1067, "text": " Specifically, if I were to increase the alpha, then the entropy would increase." }, { "start": 1067, "end": 1072, "text": " OK. And if I were to decrease the alpha, then the entropy would decrease." }, { "start": 1072, "end": 1076, "text": " That's the kind of what the gradient gives you." }, { "start": 1076, "end": 1081, "text": " Now, did I say before we want the one with the highest entropy?" }, { "start": 1081, "end": 1086, "text": " I'm pretty sure we want the one with the lowest entropy." }, { "start": 1086, "end": 1090, "text": " We want the one where we're very, very, very sure." }, { "start": 1090, "end": 1095, "text": " Right. I might have said that absolutely wrong." }, { "start": 1095, "end": 1105, "text": " So if you see right here, this is the formalism." }, { "start": 1105, "end": 1111, "text": " First, we associate each of the k learned supermasks with a coefficient alpha initially set to 1 over k." }, { "start": 1111, "end": 1116, "text": " Each alpha can be interpreted as the belief that supermask m is the correct mask," }, { "start": 1116, "end": 1121, "text": " equivalently the belief that the current unknown task is task i." }, { "start": 1121, "end": 1128, "text": " The model output is then computed with a weighted superposition of all learned tasks, which is this thing right here." }, { "start": 1128, "end": 1133, "text": " The correct mask should produce a confidence low entropy output." }, { "start": 1133, "end": 1136, "text": " Therefore, we recover the correct mask." }, { "start": 1136, "end": 1140, "text": " We find the coefficients alpha, which minimize the output entropy h." }, { "start": 1140, "end": 1148, "text": " OK. So, yes, we want the task with the lowest entropy, of course, not with the highest entropy." }, { "start": 1148, "end": 1157, "text": " So if we look at the gradient right here, the gradient basically tells us how each of the masks will influence the different the entropy." }, { "start": 1157, "end": 1165, "text": " And if we simply select the alpha where the gradient here is the most negative number." }, { "start": 1165, "end": 1172, "text": " So we want this to be as low as possible, not zero, but negative as high as possible." }, { "start": 1172, "end": 1183, "text": " Then we know that if we increase this, the contribution of this mask, then the entropy will go down the most." }, { "start": 1183, "end": 1197, "text": " OK. And again, our hypothesis here is that maximum entropy, sorry, minimum entropy means most confident prediction means that the if all tasks are equally hard," }, { "start": 1197, "end": 1203, "text": " it probably means that the data point is from the task where we have the lowest entropy." }, { "start": 1203, "end": 1207, "text": " So what's the what's the deal here?" }, { "start": 1207, "end": 1211, "text": " Like they show in this graph right here, they show this is much faster." 
}, { "start": 1211, "end": 1222, "text": " So if we if we were to evaluate each mask individually and measure its entropy, of course, with the number of tasks, we'll simply linearly increase our time in the forward pass," }, { "start": 1222, "end": 1225, "text": " because we need to try out each of these masks." }, { "start": 1225, "end": 1238, "text": " However, if we do what they're doing here, we simply run one, we mix these ones, we run one forward pass, we do back prop and they consider two strategies." }, { "start": 1238, "end": 1246, "text": " So what you can do is you can do gradient descent on these alphas, which takes a number of steps to converge." }, { "start": 1246, "end": 1248, "text": " Or you can actually do a single step." }, { "start": 1248, "end": 1255, "text": " So you just observe the gradient and by the gradient, you recognize which one has the lowest gradient." }, { "start": 1255, "end": 1257, "text": " And that's the one you pick." }, { "start": 1257, "end": 1258, "text": " So where's the catch here?" }, { "start": 1258, "end": 1267, "text": " The catch is that if you do something like this, if you do something like this, there are two catches, actually." }, { "start": 1267, "end": 1273, "text": " First of all, this here is a convex combination, right?" }, { "start": 1273, "end": 1275, "text": " This is convex combination." }, { "start": 1275, "end": 1278, "text": " And the problem isn't convex at all." }, { "start": 1278, "end": 1290, "text": " But if you simply take this convex combination, multiply it and then look at the gradient, you sort of assume that the problem is a kind of a convex, nicely shaped problem." }, { "start": 1290, "end": 1300, "text": " And if you then observe these gradients with respect to the alphas, you make assumptions about the problem that might not be true." }, { "start": 1300, "end": 1305, "text": " So you lose, you kind of heuristically approximate the importance of these masks." }, { "start": 1305, "end": 1307, "text": " That's the first thing." }, { "start": 1307, "end": 1321, "text": " The second thing, of course, is that it's you still you still are implicitly saving your still are implicitly trying all the models, but you're just not trying them explicitly." }, { "start": 1321, "end": 1335, "text": " You're implicitly trying all the models because when you do this combination right here, your auto differentiation library will actually keep track of what the individual models contribute." }, { "start": 1335, "end": 1337, "text": " It's just that per layer." }, { "start": 1337, "end": 1349, "text": " So, of course, this here, this W is multi layer perceptron, which means that if you have multiple layers, you know, there's W one and there's W two." }, { "start": 1349, "end": 1358, "text": " And you have your alphas and your alphas are also, you know, you can distribute them into these." }, { "start": 1358, "end": 1364, "text": " Sorry, your masks are also mask for layer one mask for layer two and so on." }, { "start": 1364, "end": 1368, "text": " So your auto differentiation package needs to keep track of." }, { "start": 1368, "end": 1375, "text": " OK, mask one goes here with this alpha mask to the layer two goes here with this alpha." }, { "start": 1375, "end": 1378, "text": " And there is there." }, { "start": 1378, "end": 1381, "text": " So it needs to keep track of this graph." }, { "start": 1381, "end": 1388, "text": " It's just that this is highly optimized and you also need to you only need to do it layer by layer." 
}, { "start": 1388, "end": 1398, "text": " So the contribution of alpha of mask one, this is maybe alpha eye of mask eye one mask eye two." }, { "start": 1398, "end": 1405, "text": " The contribution of the alpha eye will not be explicit in this layer." }, { "start": 1405, "end": 1410, "text": " It will be implicit as an average across the layer." }, { "start": 1410, "end": 1417, "text": " Right. So, again, this is you assume in each layer, you assume a convex combination of all the alphas and propagate them." }, { "start": 1417, "end": 1419, "text": " And propagate that forward." }, { "start": 1419, "end": 1431, "text": " And therefore, if you look at the next layer, you can only view what mask two does mask of layer two does as in terms of a convex combination of layer one." }, { "start": 1431, "end": 1441, "text": " So you make multiple approximations and you rely on the optimization of your auto differentiation library to keep track of these different things and do operations in parallel." }, { "start": 1441, "end": 1453, "text": " And in the case where you do it linearly, I'm going to guess you simply do it as a sequential operation, but it's going to be exact." }, { "start": 1453, "end": 1455, "text": " So that's the trade off." }, { "start": 1455, "end": 1462, "text": " All right. So we now know how we can figure out where the task is from." }, { "start": 1462, "end": 1465, "text": " And let's see how that works." }, { "start": 1465, "end": 1470, "text": " So in this first task, we are looking at split image net." }, { "start": 1470, "end": 1480, "text": " Split image net simply it takes the image net data set, which is a thousand class data set, and it distributes it into 100 different tasks." }, { "start": 1480, "end": 1483, "text": " Each is a 10 class classification task." }, { "start": 1483, "end": 1485, "text": " Now note two things." }, { "start": 1485, "end": 1488, "text": " First thing is that split image net." }, { "start": 1488, "end": 1494, "text": " Each task is approximately as hard as each other as the other tasks." }, { "start": 1494, "end": 1495, "text": " Right." }, { "start": 1495, "end": 1503, "text": " It's still image net classification and it's the same number of the of it's the same number of labels." }, { "start": 1503, "end": 1508, "text": " And each task is about the same hardness." }, { "start": 1508, "end": 1509, "text": " You can make that assumption." }, { "start": 1509, "end": 1513, "text": " And second of all, the tasks are actually pretty, pretty easy." }, { "start": 1513, "end": 1514, "text": " Right." }, { "start": 1514, "end": 1518, "text": " It's hard to distinguish image net into a thousand classes." }, { "start": 1518, "end": 1526, "text": " But if you split that task, I'm going to bet that you have these high resolution images and you have a 10 class classification." }, { "start": 1526, "end": 1529, "text": " It's going to be relatively easy." }, { "start": 1529, "end": 1533, "text": " So all our conditions are met for at least for my hypothesis to hold." }, { "start": 1533, "end": 1542, "text": " And you can see on the right side, you can see split C for 100, which does the same thing to C for 100." }, { "start": 1542, "end": 1548, "text": " It subdivides it into different, very small class classification tasks." }, { "start": 1548, "end": 1550, "text": " You can see the results." }, { "start": 1550, "end": 1554, "text": " The upper bound here is where you train a single model for each of the tasks." 
}, { "start": 1554, "end": 1558, "text": " That gets you to average accuracy of 92 percent." }, { "start": 1558, "end": 1562, "text": " So on image net, 92 percent." }, { "start": 1562, "end": 1564, "text": " It's pretty, it's pretty good." }, { "start": 1564, "end": 1571, "text": " Of course, this is again, this is 10 class, which makes the numbers a lot different with the subs." }, { "start": 1571, "end": 1578, "text": " So subs up, you get to this pretty good 88 percent accuracy." }, { "start": 1578, "end": 1581, "text": " This is this super masks in superposition." }, { "start": 1581, "end": 1586, "text": " This here is a baseline that also does lifelong learning." }, { "start": 1586, "end": 1591, "text": " Now, they have these annotations right here." }, { "start": 1591, "end": 1594, "text": " Gigi, which yes, Gigi, haha." }, { "start": 1594, "end": 1602, "text": " But so the first letter will always tell you whether the task ID is given during training." }, { "start": 1602, "end": 1607, "text": " And the second letter will tell you whether the task ID is given during testing." }, { "start": 1607, "end": 1614, "text": " So this here simply evaluates whether or not this masking is feasible, which you can see here it is." }, { "start": 1614, "end": 1622, "text": " So this will we know which mask to train during training and we know which mask to retrieve during testing." }, { "start": 1622, "end": 1626, "text": " So there is nothing of this entropy gradients here." }, { "start": 1626, "end": 1627, "text": " None of it." }, { "start": 1627, "end": 1638, "text": " This simply evaluates the viability of the masking approach, which as you can see, it's pretty viable and it's more viable than these baselines." }, { "start": 1638, "end": 1643, "text": " This same thing on the CIFAR 100 right here." }, { "start": 1643, "end": 1650, "text": " So you can see they also evaluate since I guess it's an easier problem, they also evaluate the number of bytes which they can control." }, { "start": 1650, "end": 1658, "text": " So they can control the number of bytes in their model by simply increasing or decreasing the required sparsity of their mask." }, { "start": 1658, "end": 1664, "text": " So you can change your mask by saying how sparse you want it." }, { "start": 1664, "end": 1676, "text": " And of course, if you want it more sparse, you get a worse model because you have less less ones in your budget to make your model perform well." }, { "start": 1676, "end": 1688, "text": " But you can see that if they do it with these baseline model, this batch E, you severely underperform with regard to the upper bound right here." }, { "start": 1688, "end": 1701, "text": " The upper bound again is where you train a model per task and separate heads here is another kind of dummy baseline where you train a different head for each of the tasks with a common trunk." }, { "start": 1701, "end": 1704, "text": " That gets you pretty much nowhere." }, { "start": 1704, "end": 1710, "text": " With the sub sub algorithm, you do get almost to the performance of the upper bound." }, { "start": 1710, "end": 1715, "text": " And in fact, if you do this transfer approach right here, you do get there." }, { "start": 1715, "end": 1720, "text": " The transfer approach simply means that so you do these tasks in succession, right?" }, { "start": 1720, "end": 1724, "text": " You do task one. Okay, done. You do task two. Okay, done." }, { "start": 1724, "end": 1730, "text": " And for each one, you train a mask. 
Okay, for each one you train this is mask one, mask two." }, { "start": 1730, "end": 1745, "text": " The transfer approach simply says if I start task three, I'm going to start the mask three, my initial weights basically are going to be a running average of the masks that I have already considered or an average." }, { "start": 1745, "end": 1752, "text": " There is some amount of transfer going on simply to initialize the weights." }, { "start": 1752, "end": 1755, "text": " It's actually astounding that this helps you so much." }, { "start": 1755, "end": 1766, "text": " But with this, if you look at the actual numbers, I believe you can get like a tiny bit higher than the training a single model for each of the tasks." }, { "start": 1766, "end": 1782, "text": " Okay, so this sort of establishes the viability of training the different masks for the different tasks, which I again, I think it is not surprising because essentially you're training a different model per task." }, { "start": 1782, "end": 1791, "text": " And it's just the fact that you do a very crude model and that you can store very efficiently." }, { "start": 1791, "end": 1796, "text": " Now you might object and say, hey, don't I need to store the underlying randomly initialized network?" }, { "start": 1796, "end": 1801, "text": " And the answer is yes and no. Actually, you only need to store the random seed to produce it." }, { "start": 1801, "end": 1806, "text": " So checkmate. Yeah, they do." }, { "start": 1806, "end": 1812, "text": " So here they explain this one shot algorithm where they simply look at the gradient of the entropy." }, { "start": 1812, "end": 1821, "text": " You can see with the maximum negative gradient of the entropy, they also have this binary algorithm." }, { "start": 1821, "end": 1831, "text": " If the task where they say with the task is harder to differentiate this kind of assumption of the convex combination thing does might not hold." }, { "start": 1831, "end": 1848, "text": " So what they do is they have this binary algorithm where they do a binary search where they simply want to circumvent the necessity to evaluate each of the masks by itself because that takes long." }, { "start": 1848, "end": 1853, "text": " So they do something in between where they do this binary algorithm." }, { "start": 1853, "end": 1866, "text": " This is right here where they do this convex combination, they evaluate the gradient, but then they don't just take the highest of the negative gradients." }, { "start": 1866, "end": 1869, "text": " They eliminate half of them." }, { "start": 1869, "end": 1878, "text": " So you can see whenever it's lower than the median, they eliminate it and then they start off with this new set of reduced alphas." }, { "start": 1878, "end": 1892, "text": " So in each of these steps, they eliminate half of the masks and then they recompute again because because it is not a convex problem, the order might actually be different in the second and third and fourth step." }, { "start": 1892, "end": 1901, "text": " Of course, this is simply this is like halfway towards between this one shot algorithm and trying each mask by itself." }, { "start": 1901, "end": 1904, "text": " It's kind of a compromise." }, { "start": 1904, "end": 1912, "text": " I mean, they make it they really try to not not try each mask once because it's one of their contributions." }, { "start": 1912, "end": 1917, "text": " Right. But then they probably realized if we just do it one shot, sometimes it doesn't work." 
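The binary algorithm sitting between these two extremes can be sketched as repeated halving: after each superimposed forward/backward pass, drop every mask whose entropy gradient is above the median, renormalize the survivors' alphas, and repeat. Roughly, under my own reading of the procedure:

```python
import torch
import torch.nn.functional as F

def entropy_grads(x, W, masks, alive):
    # One superimposed forward pass over the surviving masks, then dH/d(alpha).
    init = torch.zeros(len(masks))
    init[alive] = 1.0 / len(alive)
    alpha = init.clone().requires_grad_(True)
    mixed = (alpha.view(-1, 1, 1) * masks).sum(0)
    p = F.softmax(x @ (W * mixed), dim=-1)
    H = -(p * p.clamp_min(1e-12).log()).sum()
    (g,) = torch.autograd.grad(H, alpha)
    return g

def binary_select(x, W, masks):
    alive = list(range(len(masks)))
    while len(alive) > 1:
        g = entropy_grads(x, W, masks, alive)
        med = g[alive].median()
        # Keep the half with the more negative gradients; re-evaluate each round
        # because the problem is not convex and the ordering can change.
        alive = [i for i in alive if g[i] <= med][: max(1, len(alive) // 2)]
    return alive[0]

torch.manual_seed(0)
W = torch.randn(784, 10)
masks = torch.stack([(torch.rand_like(W) > 0.5).float() for _ in range(8)])
x = torch.randn(1, 784)
print(binary_select(x, W, masks))
```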
}, { "start": 1917, "end": 1921, "text": " So they're going between, which is, you know, it's a pretty cool idea." }, { "start": 1921, "end": 1924, "text": " All right. Next experiments." }, { "start": 1924, "end": 1928, "text": " We're now in this situation and you see you see a number of things." }, { "start": 1928, "end": 1936, "text": " So first of all, we have a new added a new baseline, this PSP, and you can see that the baselines operating this G.G. regime." }, { "start": 1936, "end": 1943, "text": " So the baselines are given the task during training and given the task during evaluation." }, { "start": 1943, "end": 1948, "text": " You see the upper bound here in gray is where you train a model for each task." }, { "start": 1948, "end": 1956, "text": " And you assume that's an upper bound because you assume the tasks are kind of unrelated to each other, which is not the case." }, { "start": 1956, "end": 1962, "text": " So there is actually potential to beat the to beat the upper bound baseline." }, { "start": 1962, "end": 1966, "text": " And subs up here you see operates in a different regime." }, { "start": 1966, "end": 1974, "text": " Namely, there's this regime of you're given the task during training, but then during testing, you're not given the task." }, { "start": 1974, "end": 1981, "text": " OK. And this you here, it basically means that the labels you assume that the labels of the tasks are not shared." }, { "start": 1981, "end": 2000, "text": " So in in this case, if you predict, if you predict like if you split MNIST into always two class, if you split MNIST into two tasks, you predict the first task is zero, one, two, three, four." }, { "start": 2000, "end": 2003, "text": " The second task is five, six, seven, eight, nine." }, { "start": 2003, "end": 2008, "text": " OK. And you have the same amount of labels. So you always have five output neurons. Right." }, { "start": 2008, "end": 2011, "text": " So you have one, two, three, four, five output neurons." }, { "start": 2011, "end": 2021, "text": " If you if the image here is like a five, that would be task task one label zero." }, { "start": 2021, "end": 2032, "text": " Right. If your network now predicts label zero correctly, but predicts the the image to come from task one, you count it as a mistake." }, { "start": 2032, "end": 2040, "text": " You say, well, you know, you've predicted the right output neuron, but you've told me it comes from task zero from from the zero to four." }, { "start": 2040, "end": 2042, "text": " So I'm going to count that as a mistake." }, { "start": 2042, "end": 2052, "text": " So it's really there isn't there isn't a way for the network to kind of get around predicting the wrong tasks or kind of share information." }, { "start": 2052, "end": 2058, "text": " So you assume that the labels are not shared or unshared." }, { "start": 2058, "end": 2065, "text": " Yeah. So it's the subs up here has a significantly harder task than the baselines." }, { "start": 2065, "end": 2067, "text": " Keep keep that in mind." }, { "start": 2067, "end": 2073, "text": " And now we are applying our because we we are not given the task at inference time." }, { "start": 2073, "end": 2081, "text": " Now we're applying our heuristic where we go and look at which of the mask entropies is the lowest." }, { "start": 2081, "end": 2086, "text": " Respectively, we use this actually this one shot algorithm where we look at the gradients." 
}, { "start": 2086, "end": 2091, "text": " And you can see this is on permuted MNIST in permuted MNIST." }, { "start": 2091, "end": 2097, "text": " What you do is you take MNIST and you simply permute the pixels." }, { "start": 2097, "end": 2103, "text": " And this it sounds crazy, but you simply permute the pixels and that gives you a new task." }, { "start": 2103, "end": 2108, "text": " So you can come up with like almost an infinite number of tasks because there are what?" }, { "start": 2108, "end": 2117, "text": " Twenty eight times, twenty eight pixels. So you can commute them seven hundred and eighty four factorial times," }, { "start": 2117, "end": 2122, "text": " which gives you like infinitely many tasks. And so you can modulate." }, { "start": 2122, "end": 2125, "text": " So here you can see the number of tasks learned increases." }, { "start": 2125, "end": 2136, "text": " And at the beginning, this baselines, especially this baseline, is doing fairly well, actually, on par with the upper bound when you only have ten different tasks." }, { "start": 2136, "end": 2153, "text": " However, after that quickly degrades, however, this subs up here, it keeps it keeps its performance, which it so this doesn't only mean that it correctly predicts the output neuron." }, { "start": 2153, "end": 2163, "text": " It also correctly predicts which task, which permutation was applied to the digit simply by looking where the entropy is high." }, { "start": 2163, "end": 2166, "text": " Right. So that's pretty cool." }, { "start": 2166, "end": 2172, "text": " And, you know, it's it's actually kind of surprising to be to be honest." }, { "start": 2172, "end": 2176, "text": " So on the left, this is a L'Onet architecture on the right." }, { "start": 2176, "end": 2183, "text": " It's a fully connected network. Now, the fully connected network here performing better is sort of expected." }, { "start": 2183, "end": 2187, "text": " First of all, MNIST is really easy and can actually be solved with a fully connected network." }, { "start": 2187, "end": 2199, "text": " And second of all, especially permuted MNIST, I guess, doesn't really conform to the to the assumptions of convolutional neural networks anymore." }, { "start": 2199, "end": 2202, "text": " Again, keep in mind, these tasks are very easy." }, { "start": 2202, "end": 2216, "text": " Yeah. So so especially for the fully connected network, of course, each permutation kind of looks the same because it's it doesn't care at the beginning" }, { "start": 2216, "end": 2221, "text": " that each pixels are next to each other. Simply each pixel is a different thing." }, { "start": 2221, "end": 2228, "text": " It's just the fact that it cannot it cannot learn from one tasks much about the other tasks." }, { "start": 2228, "end": 2233, "text": " That's why you that's the nature of permuted MNIST." }, { "start": 2233, "end": 2243, "text": " All right. And then in this experiment right here, and this is the sort of crown experiment, they learn they do this permuted MNIST," }, { "start": 2243, "end": 2250, "text": " but they go up to 2500 tasks, 2500 different permutations." }, { "start": 2250, "end": 2255, "text": " But so but now they have an additional thing right here." }, { "start": 2255, "end": 2261, "text": " So again, they have this sub sub where it needs to predict the correct permutation," }, { "start": 2261, "end": 2268, "text": " but also they compare it with a an algorithm that needs that is this NN right here." 
}, { "start": 2268, "end": 2275, "text": " So in this NN, not not only are you not given the task label at testing time," }, { "start": 2275, "end": 2279, "text": " you are actually not even given the task label at training time." }, { "start": 2279, "end": 2282, "text": " But here the outputs are shared." }, { "start": 2282, "end": 2292, "text": " So, you know, since since you have no way of knowing which task it is, you've never given it as long as you predict the correct class." }, { "start": 2292, "end": 2295, "text": " You good. So it's always it's always a 10 class classification problem." }, { "start": 2295, "end": 2298, "text": " It's just not permuted." }, { "start": 2298, "end": 2302, "text": " You're not given the task label here." }, { "start": 2302, "end": 2307, "text": " So first of all, I want to say that this here, the shared labels," }, { "start": 2307, "end": 2310, "text": " it could actually contribute to the success of this algorithm right here," }, { "start": 2310, "end": 2315, "text": " because even though you permute the pixels," }, { "start": 2315, "end": 2322, "text": " you can still sort of do things like count the frequency of light pixels versus dark pixels in MNIST." }, { "start": 2322, "end": 2326, "text": " And that might already give you a very, very big hint." }, { "start": 2326, "end": 2333, "text": " Right. Or, you know, simple correlation of of two pixels, though that's that's a task specific thing." }, { "start": 2333, "end": 2341, "text": " But the the frequency of light pixels versus dark pixels will already give you a big boost in accuracy." }, { "start": 2341, "end": 2344, "text": " And now you can actually share that feature." }, { "start": 2344, "end": 2346, "text": " That feature will always be the same for every permutation." }, { "start": 2346, "end": 2353, "text": " So this is something you can share between tasks. And I would like." }, { "start": 2353, "end": 2356, "text": " So one way I guess you could eliminate that." }, { "start": 2356, "end": 2359, "text": " Well, I don't know. I'm not sure." }, { "start": 2359, "end": 2364, "text": " You kind of have to randomize the number of light pixels, but keep the classes the same." }, { "start": 2364, "end": 2367, "text": " It's going to be complicated. Right." }, { "start": 2367, "end": 2373, "text": " But just keep that in mind. However, how how does the algorithm even decide?" }, { "start": 2373, "end": 2379, "text": " So they have a heuristic right here as well, namely." }, { "start": 2379, "end": 2390, "text": " They say, OK, if we don't have no task identity during training or inference." }, { "start": 2390, "end": 2393, "text": " Where task identity is entirely unknown, even during training," }, { "start": 2393, "end": 2396, "text": " if subs of is uncertain about the current task identity," }, { "start": 2396, "end": 2401, "text": " it is likely that the data does not do not belong to any tasks seen so far." }, { "start": 2401, "end": 2407, "text": " When this occurs, a new super mask is allocated and the number of tasks learned so far is incremented." }, { "start": 2407, "end": 2410, "text": " OK, so they go with the same principle right here." }, { "start": 2410, "end": 2417, "text": " They say if we get a new training sample, we just evaluate it against all the masks that we had so far." }, { "start": 2417, "end": 2425, "text": " Or we do our one shot algorithm to approximate which masks gets us a low entropy." 
}, { "start": 2425, "end": 2432, "text": " If none of the mask gets us a low entropy, then we decide this must be some kind of unseen task." }, { "start": 2432, "end": 2437, "text": " So we're going to allocate a new mask for this unseen tasks." }, { "start": 2437, "end": 2443, "text": " And that heuristic, as you can see, it performs fairly, fairly well." }, { "start": 2443, "end": 2447, "text": " Where was our graph? Our graph was down here." }, { "start": 2447, "end": 2453, "text": " In fact, it performs pretty much on par with where you know the task during training." }, { "start": 2453, "end": 2461, "text": " And just not during during inference up until like here, the very last bit." }, { "start": 2461, "end": 2465, "text": " If you really get into the high task regime." }, { "start": 2465, "end": 2469, "text": " Where I guess it starts getting it starts getting confusing." }, { "start": 2469, "end": 2475, "text": " So this this heuristic might start to break down, but it might just be a fact how they tune their constants." }, { "start": 2475, "end": 2481, "text": " Like they have to define a threshold where they say, OK, if the entropy is somehow higher than this threshold," }, { "start": 2481, "end": 2483, "text": " then we allocate a new a new task." }, { "start": 2483, "end": 2487, "text": " And this might be optimized in order to solve this." }, { "start": 2487, "end": 2491, "text": " Again, these tasks are very, very, very, very easy." }, { "start": 2491, "end": 2495, "text": " So keep keep that in mind." }, { "start": 2495, "end": 2503, "text": " Yeah. OK. So this basically was the experimental part of that paper." }, { "start": 2503, "end": 2507, "text": " Now they consider different extensions to that." }, { "start": 2507, "end": 2513, "text": " I'm not sure how they also consider some ablations, which are pretty interesting." }, { "start": 2513, "end": 2521, "text": " So here they say we are going to up the kind of the hardness of the task with with rotated MNIST" }, { "start": 2521, "end": 2525, "text": " and also their model does pretty well on the rotated MNIST task," }, { "start": 2525, "end": 2535, "text": " where the differences of between the differences between the tasks are simply some of them are rotated by 10 degrees." }, { "start": 2535, "end": 2539, "text": " So that's a tiny rotation in the right." }, { "start": 2539, "end": 2543, "text": " If you have a number three, you kind of rotated by 10." }, { "start": 2543, "end": 2547, "text": " I can't even draw that subtle of a rotation by 10 degrees." }, { "start": 2547, "end": 2556, "text": " And, you know, the subs up must correctly predict which task the images from," }, { "start": 2556, "end": 2562, "text": " or it will not get the it will not get a correct reward." }, { "start": 2562, "end": 2568, "text": " The fact that it performs pretty well and the fact that it has, you know, rotation degrees," }, { "start": 2568, "end": 2573, "text": " where it outperforms the baseline that is actually given the rotation." }, { "start": 2573, "end": 2578, "text": " So it's given the task at inference time is pretty, pretty remarkable." }, { "start": 2578, "end": 2582, "text": " Again, I believe this is due to the fact that these tasks are so easy." }, { "start": 2582, "end": 2588, "text": " And therefore, this entropy, it just spikes when you get the correct thing," }, { "start": 2588, "end": 2594, "text": " because it sort of it sort of latches onto very easy features for each task." 
}, { "start": 2594, "end": 2601, "text": " So I'm going to guess that the tasks are generally solvable by maybe correlating two pixels." }, { "start": 2601, "end": 2606, "text": " Right. If like this pixel correlated with this pixel, if the correlation is high, it's a three." }, { "start": 2606, "end": 2609, "text": " The correlation is low. It's something else. OK." }, { "start": 2609, "end": 2617, "text": " And then if you rotate it, it's just not the case anymore that this pixel and this pixel, the correlation is very high." }, { "start": 2617, "end": 2624, "text": " So if you predict using this correlation, you'll get a pretty low confidence." }, { "start": 2624, "end": 2629, "text": " And I'm going to guess that, yeah, if you have discrete tasks and it's in this task," }, { "start": 2629, "end": 2635, "text": " then your confidence will just spike because the task is so easy and because all the tasks are about equally hard." }, { "start": 2635, "end": 2639, "text": " Because if you can find this correlation here, you can find it over here." }, { "start": 2639, "end": 2644, "text": " It's simply going to be two different two different pixels in this task." }, { "start": 2644, "end": 2654, "text": " And then as you try the masks, whenever you hit the one where you can predict pretty confidently with those two pixels," }, { "start": 2654, "end": 2658, "text": " then your confidence is going to spike, your entropy is going to get down." }, { "start": 2658, "end": 2666, "text": " And, you know, it's that task. They also here they compare." }, { "start": 2666, "end": 2676, "text": " The one shot algorithm. So they they they use their one shot algorithm to and they put it on a baseline." }, { "start": 2676, "end": 2682, "text": " So this baseline where they always actually have to give it the task," }, { "start": 2682, "end": 2689, "text": " they augmented by by their their one shot algorithm to select the task." }, { "start": 2689, "end": 2695, "text": " And it turns out they can make it perform fairly well, not on par with them." }, { "start": 2695, "end": 2703, "text": " Interestingly, but they can make it perform also fairly well, actually better than it was performing before." }, { "start": 2703, "end": 2710, "text": " So they have different extensions right here. And that's some of them are pretty important." }, { "start": 2710, "end": 2717, "text": " The one important thing they do is they have these superfluous neurons and that's sort of hidden." }, { "start": 2717, "end": 2723, "text": " And it's always a bit. So here, for example, you see in the output," }, { "start": 2723, "end": 2728, "text": " they say we have a lunette model using output size 500." }, { "start": 2728, "end": 2732, "text": " Now there are only 10 different labels in the MNIST task, right?" }, { "start": 2732, "end": 2736, "text": " Also in the permuted MNIST task, there are 10 different labels." }, { "start": 2736, "end": 2742, "text": " I mean, there are a total of 25,000 labels if you have 2500 tasks." }, { "start": 2742, "end": 2751, "text": " But the neural network has output size 10. However, their neural network here has output size 500," }, { "start": 2751, "end": 2761, "text": " which is surprising. So they say right here and we're going to get to the Hopfield network at the very end" }, { "start": 2761, "end": 2766, "text": " for those who are still around, because that's I think that should be its own paper." 
}, { "start": 2766, "end": 2775, "text": " But, you know, they say it could use an output of size L where L is the actual number of labels per task," }, { "start": 2775, "end": 2782, "text": " though we find in practice that it helps significantly to add extra neurons to the final layer." }, { "start": 2782, "end": 2795, "text": " Specifically, we consider outputs P in our S. So S is higher than L and refer to the neurons that are past L as superfluous neurons." }, { "start": 2795, "end": 2802, "text": " So let's try to make sense of this. So they have a neural network." }, { "start": 2802, "end": 2810, "text": " And let's say it's a three class classification task, right? So you have three classes and that's what you would do." }, { "start": 2810, "end": 2815, "text": " They simply add a bunch of neurons right here. That means they also they, you know," }, { "start": 2815, "end": 2818, "text": " they add all of the connections from the previous layer to those neurons." }, { "start": 2818, "end": 2826, "text": " But still, the classes can only be either 0, 1 or 2. These classes never appear during training." }, { "start": 2826, "end": 2832, "text": " So they claim this helps during during their procedure." }, { "start": 2832, "end": 2842, "text": " And I I thought about it a bit and we might be able to try to guess why it makes sense." }, { "start": 2842, "end": 2850, "text": " They say they simply say we observe that helps. And I mean, you know, let's let's try to make sense of it." }, { "start": 2850, "end": 2856, "text": " OK, so if we train, if we train our model using these too many neurons, what happens?" }, { "start": 2856, "end": 2861, "text": " Well, our label is always going to be of the top three neurons. So let's say our label is one." }, { "start": 2861, "end": 2867, "text": " This is going to result in a one hot vector like this. Now, what are we training in this layer here?" }, { "start": 2867, "end": 2875, "text": " In this layer here, we're training logits. OK, so pre pre softmax outputs." }, { "start": 2875, "end": 2888, "text": " So our our algorithm, our cross entropy loss is going to push all of these here down during every single training point." }, { "start": 2888, "end": 2895, "text": " It's going to push this one up and all of these down. Now, these three here are going to be pushed up and down depending on the label." }, { "start": 2895, "end": 2902, "text": " However, all of these down here are going to be only pushed down during the entire training." }, { "start": 2902, "end": 2915, "text": " So they are going to be exceptionally low numbers. OK, now, if we then come and we look at the at the entropy of this," }, { "start": 2915, "end": 2929, "text": " the the entropy, I think honestly, this is simply you could achieve the same thing by using a different temperature parameter in the softmax or in the entropy that you consider." }, { "start": 2929, "end": 2936, "text": " Because why can this help? And this helps with inferring which task it's coming from. Right." }, { "start": 2936, "end": 2945, "text": " So if you consider a task where you only have three outputs, so you don't have this bit down here and you look at the entropy," }, { "start": 2945, "end": 2953, "text": " it's going to be you know, it's going to be something something. Sorry, I have to draw this right here." }, { "start": 2953, "end": 2963, "text": " It's going to be like this. It's fairly confident. But if and maybe for the other tasks, it's not going to be as confident." 
}, { "start": 2963, "end": 2975, "text": " You know, it's maybe going to be like this. However, if you have those and if it's of the correct tasks, I'm going to guess this kind of stays the same because they're really low." }, { "start": 2975, "end": 2987, "text": " But if it's of the incorrect tasks, then you're not sure. And you not being sure about the output also means that you allocate a lot more to these things right here." }, { "start": 2987, "end": 2993, "text": " Because you've sort of never seen this particular kind of training examples. So you're not sure." }, { "start": 2993, "end": 3004, "text": " So you're just going to distribute your kind of your probability mass across these things right here because you've not been trained on that kind of input." }, { "start": 3004, "end": 3010, "text": " Right. It's very important to see that this is task. This is the correct task, which they always label J." }, { "start": 3010, "end": 3023, "text": " And for for any other incorrect task, you've never seen data like this. So these things here sort of act like an outlier class without you explicitly training an outlier class." }, { "start": 3023, "end": 3032, "text": " You simply train these things during training. You make them small. But you it's important to notice you always make them small." }, { "start": 3032, "end": 3040, "text": " From a data point that comes from their particular task. OK, that's what you train them for." }, { "start": 3040, "end": 3050, "text": " And now if you input a data point from a different task, they have less reason to be small because this is an outlier data point." }, { "start": 3050, "end": 3057, "text": " So you have much more fluctuations. So you have more fluctuations here. And therefore, the entropy is going to be even higher." }, { "start": 3057, "end": 3064, "text": " All right. This is sort of how I make sense of the fact that these additional superfluous neurons help here." }, { "start": 3064, "end": 3072, "text": " They act as kind of an outlier detector for the training data set of that particular task." }, { "start": 3072, "end": 3081, "text": " Now, because you have different training data for each task, they go further and they say it actually works even better." }, { "start": 3081, "end": 3089, "text": " It works even better if we instead of this entropy heuristic, we consider another heuristic." }, { "start": 3089, "end": 3100, "text": " Accordingly, we consider an objective G, which encourages the S neurons to have large negative values and confused as an alternative to entropy in equation four." }, { "start": 3100, "end": 3108, "text": " So G, they analyze down in the appendix. And we're just quickly going to look at what G is." }, { "start": 3108, "end": 3116, "text": " Sorry, this is about to load right here. And it's very interesting to see what G is." }, { "start": 3116, "end": 3125, "text": " Or is it? Yes. So G is going to be this right here." }, { "start": 3125, "end": 3131, "text": " So why are the logits and then G is this expression right here?" }, { "start": 3131, "end": 3143, "text": " And in fact, it's this expression with the with a bit of a modification. So it's going to be G is going to be the log some X of the logits." }, { "start": 3143, "end": 3150, "text": " Right. So it's this is some this is somewhat like the entropy." }, { "start": 3150, "end": 3158, "text": " And what we're going to consider is the gradient of G. So what we want is the gradient of G with respect to our alphas." 
}, { "start": 3158, "end": 3169, "text": " And the condition here with this detach operation is that." }, { "start": 3169, "end": 3182, "text": " The gradient of G should be, you know, the gradient of the loss function for all V that are superfluous neurons and zero otherwise." }, { "start": 3182, "end": 3191, "text": " So we're going to detach the gradient of G for all the real neurons, for all the actual logits of the output class." }, { "start": 3191, "end": 3196, "text": " And we're only going to consider the gradient flowing through the superfluous neurons." }, { "start": 3196, "end": 3211, "text": " So all of this here is if we take the gradient, it's only going to flow to in these in the last layer through the gradients of the superfluous neurons." }, { "start": 3211, "end": 3221, "text": " OK. And that's why we don't need the entropy, because the entropy always considers the difference, sort of the difference between the correct label and the other labels." }, { "start": 3221, "end": 3226, "text": " We are pretty sure that in our superfluous neurons, we don't have the correct label." }, { "start": 3226, "end": 3235, "text": " OK. So this log the log some X of our of these outputs here, what will they represent?" }, { "start": 3235, "end": 3246, "text": " Well, this is sort of a flatness measure. Again, it's kind of like the entropy, except we don't have a correct label right here." }, { "start": 3246, "end": 3257, "text": " If one of them is very high and the other ones are very low, or if they're generally very high up, then this will be high." }, { "start": 3257, "end": 3267, "text": " However, consider the difference between this and this, where they're all super small and also they're all pretty equal." }, { "start": 3267, "end": 3270, "text": " The log some X will be very small." }, { "start": 3270, "end": 3281, "text": " So this is an alternative where we can basically only look at the superfluous neurons and say, is are these superfluous neurons all very small?" }, { "start": 3281, "end": 3287, "text": " And, you know, none of them basically says I'm the correct label." }, { "start": 3287, "end": 3292, "text": " Then we can be pretty sure that over here there is some confidence." }, { "start": 3292, "end": 3306, "text": " However, if they are sort of kind of larger and generally kind of generally large, maybe unequal, that means we're not very confident because these are our outlier classes." }, { "start": 3306, "end": 3309, "text": " They shouldn't be. They shouldn't be large at all." }, { "start": 3309, "end": 3320, "text": " So an alternative to looking at the entropy of this distribution is to build such superfluous neurons and then look at those and only those." }, { "start": 3320, "end": 3325, "text": " And so the gradient of only those in order to decide which task it's from." }, { "start": 3325, "end": 3328, "text": " It's an interesting idea, I have to say." }, { "start": 3328, "end": 3340, "text": " But maybe one could achieve sort of the same thing with a with a temperature parameter here or by building an explicit outlier detection." }, { "start": 3340, "end": 3344, "text": " But it's generally an interesting idea for outlier detection, I have to say." }, { "start": 3344, "end": 3350, "text": " I've never really seen anything like this, though I also haven't really considered it." }, { "start": 3350, "end": 3352, "text": " So here they show the importance." 
}, { "start": 3352, "end": 3358, "text": " And you've seen in the experiments before that there sometimes was this H objective and also this G objective." }, { "start": 3358, "end": 3362, "text": " So you can look at the entropy, but also you can look at the G." }, { "start": 3362, "end": 3365, "text": " In both cases, you have superfluous neurons." }, { "start": 3365, "end": 3374, "text": " So before you actually saw you have 500 neurons for a task of for a task of 10 that needed 10 output classes." }, { "start": 3374, "end": 3380, "text": " Right. So this tells me that these superfluous neurons are pretty important for them." }, { "start": 3380, "end": 3387, "text": " And it this is probably one of the things that makes this work." }, { "start": 3387, "end": 3389, "text": " Right. These superfluous neurons." }, { "start": 3389, "end": 3401, "text": " So you kind of setting up a trap where for the wrong models, you let it run into this trap of assigning a lot of weight into these outlier classes." }, { "start": 3401, "end": 3407, "text": " And only if the correct model is trained to not do that on the particular data that you're considering." }, { "start": 3407, "end": 3414, "text": " I don't think this comes through in the paper too much that this is one of I guess this is one of the main factors making this work." }, { "start": 3414, "end": 3417, "text": " And you can see right here they actually do an experiment." }, { "start": 3417, "end": 3425, "text": " So I don't want to be too mean where they say, look, if we train with just 25 classes and this is permuted MNIST." }, { "start": 3425, "end": 3428, "text": " So the necessary amount will be 10." }, { "start": 3428, "end": 3433, "text": " So if we train with only 25, you can see how quickly we degrade right here." }, { "start": 3433, "end": 3439, "text": " However, as we go up and train with a hundred and 200, we get better and better." }, { "start": 3439, "end": 3448, "text": " In fact, if we train with this G objective, it always sort of outperforms the H objective." }, { "start": 3448, "end": 3453, "text": " Interestingly, the more output neurons you have, the less this difference seems to be." }, { "start": 3453, "end": 3457, "text": " But maybe the percent difference is the same." }, { "start": 3457, "end": 3460, "text": " The percent error difference is the same." }, { "start": 3460, "end": 3462, "text": " I don't know. I can't tell from here." }, { "start": 3462, "end": 3466, "text": " Yeah. So this isn't all." }, { "start": 3466, "end": 3472, "text": " There is also this Hopfield network going on where they say, OK, OK." }, { "start": 3472, "end": 3476, "text": " So essentially, we're actually training different models, right?" }, { "start": 3476, "end": 3478, "text": " We're not really superimposing all of these models." }, { "start": 3478, "end": 3484, "text": " We're training a different mask for each of the tasks and kind of remembering the masks and so on." }, { "start": 3484, "end": 3490, "text": " Can we also build a model where we actually only have one model?" }, { "start": 3490, "end": 3496, "text": " And that's what they do right here, where they build a Hopfield network, which is basically just a big matrix." }, { "start": 3496, "end": 3498, "text": " This is the Hopfield network." }, { "start": 3498, "end": 3502, "text": " And then they encode the masks in this Hopfield network." 
}, { "start": 3502, "end": 3513, "text": " So specifically, the Hopfield network is of size D squared, where it is able to encode two to the D different binary strings." }, { "start": 3513, "end": 3515, "text": " And it does so in a fuzzy way." }, { "start": 3515, "end": 3522, "text": " But you can prove that if you construct the Hopfield network like this, where Z is a binary string," }, { "start": 3522, "end": 3528, "text": " you can recover the binary strings by gradient descent in the Hopfield network." }, { "start": 3528, "end": 3534, "text": " And obviously, the more binary strings you encode, the less you get out." }, { "start": 3534, "end": 3540, "text": " It's not magic. You can't store that many bits into a thing that doesn't have that many bits." }, { "start": 3540, "end": 3550, "text": " But I believe, you know, again, this is using gradient descent, and it can do so with surprising accuracy." }, { "start": 3550, "end": 3555, "text": " So remember that these here are bits while these here are floating point numbers." }, { "start": 3555, "end": 3560, "text": " So the comparison that I just made isn't entirely fair." }, { "start": 3560, "end": 3565, "text": " But I don't want to go into the Hopfield networks because I really feel this should be its own paper." }, { "start": 3565, "end": 3575, "text": " I guess they just want to show that it's also possible to compress these masks into one thing," }, { "start": 3575, "end": 3581, "text": " such that I can't make the argument anymore that, hey, all you're doing is training different models for different tasks." }, { "start": 3581, "end": 3585, "text": " All right. All in all, pretty cool paper. As I said, pretty dense paper." }, { "start": 3585, "end": 3593, "text": " I invite you to read it. They have a big appendix where they have more experiments and so on and explain everything in detail." }, { "start": 3593, "end": 3600, "text": " All in all, from this, I don't really take the method, but the ideas are very interesting." }, { "start": 3600, "end": 3627, "text": " And I am excited to see where this goes in the future. All right. I'll see you next time. Bye bye." } ]
kEhEbVZQwjM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[YTalks] Siraj Raval - Stories about YouTube, Plagiarism, and the Dangers of Fame (Interview)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "siraj", "siraj raval", "ml youtube", "fame", "youtuber life", "what happened to siraj", "siraj raval plagiarism", "siraj raval interview", "siraj raval coursera", "siraj raval apology", "siraj raval paper", "quantum door", "ytalks", "yannic siraj" ]
#ytalks #siraj #plagiarism A conversation with Siraj Raval about his journey on YouTube, and the perils of fame. OUTLINE: 0:00 - Intro 1:30 - Welcome 3:15 - Starting out: From Economics to YouTube 13:00 - More Views: Plagiarizing Video Content 23:30 - One Step Up: Copying A Research Paper 29:15 - Was there another way? 39:00 - Clickbait Course: Make Money with Machine Learning 50:30 - Rock Bottom and the Way Forward 1:01:30 - Advice for Future Generations Siraj's Channel: https://www.youtube.com/c/SirajRaval Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
The following is a conversation with Siraj Raval. Siraj has one of the largest channels in the machine learning YouTube space. Over 700,000 people are subscribed to him as of this date. Siraj pumped out lots and lots of videos on topics such as coding tutorials, explaining beginner concepts in machine learning and other topics like blockchain or other computer science things. Now his rise came to an abrupt stop when a series of scandals hit him at the end of 2019. And there were a lot of articles written back then, Twitter posts made and even Siraj himself made an apology video. But I was wondering, how did he feel during all of this? What did he think back then? How did he come to this? How did he feel during the highs and the lows of his career? And how does he look back on things now? I was struck by how straightforward Siraj was in this conversation. I was sure there was gonna be wisdom in there for the rest of us, be that youtubers or machine learners, and I was not disappointed. He was definitely honest looking back with a different view and we touched on many things in this conversation. I hope you enjoy it. I hope you find something in there that helps you and yeah, let us know what you think. Well, hello everyone. Today is a special day. In many ways, Siraj, who is my guest today, is one of the pioneers of the field of ML YouTube. Now I'm pretty sure pretty much every single person in the field has heard of Siraj, has seen him, watched one of his videos or something like this. If I can maybe frame it a little bit: you were one of the first machine learning youtubers. You became really popular quickly. Things went uphill, more views and so on, and then I think it's fair to say it kind of all came crashing down in like a very short period of time and then it just sort of crumbled. I can't really frame it any differently. There seemed to be things one on top of another that just all came in like a month or so, the same month. It seemed crazy, this time at the end of 2019. So yeah, I'm happy to host Siraj today. Thanks so much for being here and talking, and you agreed to talk a little bit about your side of things, of what happened and what you're doing now. So yeah, welcome. Thanks, it's great to be here. I love your videos. They've definitely got a personality and character to them that I definitely admire and I'd like to see more of. Thank you. Since you're the OG youtuber of this, I guess character is a little bit of what it takes. I want to go back a little bit to the beginning though. If I recall correctly, you started studying economics, is that correct? Correct, at Columbia, that was my freshman year. I was an economics major. Yeah, and for some reason you switched over to computer science. What took you there? Well, I took a semester to travel around Europe using Couchsurfing. I was Couchsurfing for three and a half months, and the first person that I Couchsurfed with in London, his name was Alex McCall. He showed me his terminal window. He had a hackintosh that he made and he really inspired me to get into computer science. It turned out, you know, several years later that Alex wrote the O'Reilly book on JavaScript, and he has this really cool startup called Clearbit that he already sold by now. But I got to meet him before all that happened, and once I saw Alex's terminal and all the cool things he was doing, I knew that once I got back to Columbia I needed to, like, switch over to computer science, because that was how you really made an impact in the world.
Yeah, so I guess you saw pretty early that the impact was to be made, right? I think a lot of people go into economics and they maybe think a little bit of money if they go into economics because it's kind of close to it, but I guess computer science especially, you know, nowadays is really the impactful field or one of the impactful fields. Little known fact: I also started out in medicine and then switched over to computer science. So much of the same journey there. And then did you finish computer science? No, I dropped out my senior year, of all times to drop out. Wow. Yeah. And that was because of YouTube? No, no, no. So I dropped out because I had a robotic startup at the time. We were making a six degree of freedom robot that would pick things up off the floor for older people with something called ALS, because they can't bend over. And we built a prototype, raised money, but it turns out like nobody would buy it, and also there were some software problems at the time. This was like 2012. So yeah, I just moved to San Francisco from there, from New York, and then that's when I really started to feel like I was around my people. Like techies. Yeah, you're American originally but from a smaller town or big city or? I'm from Houston, Texas. So I was born here. My parents are from India. Definitely have a deep connection with India. I still dream about India. Cool. And then you were in San Francisco, and how did you get into YouTube? So I worked at several contract jobs in San Francisco for companies like CBS Interactive doing mobile development. I worked at Meetup for a year just as a general software engineer. I started off as an intern and then eventually the last job I had, W2 job, was at Twilio, the API company, and I worked there as a developer educator for about eight months and then I was fired because I think it was just a performance thing. That's what they said, so I don't know. But I remember wanting... I learned a lot at Twilio about developer education and how innovative it could be. To give you an example, we were learning about different ways of getting developers to use the Twilio API, and you know, as I was writing documentation across nine different programming languages like Ruby and PHP and Python, one thing that I was told by my mentor was that we don't want to use too many exclamation points inside of our documentation, because if you have more than three, what developers do is that they subconsciously think of not-equals from code, and that gives them a negative impression of the text. I was like, that level of detail, I never thought about that, but it really is an art. And so I started wanting to make videos on the side, and actually my first three YouTube videos I made while I was at Twilio, at the conference room at midnight when nobody was there, and I showed it to my colleagues there and they were like, my boss was like, you know, that's great, that's cool. We don't think developers are going to use videos as a learning tool, they want something static like documentation. And so that's when I thought, well, maybe there's something here. And so once I got fired I got a severance and I had enough to live in San Francisco for about six to eight months, and that really gave me the impetus.
I remember I had all my stuff in a box that they gave to me from my desk, and literally the day I was let go I walked across the street to a hair salon and then I got my hair dyed, and I was like, all right, I'm all in on this YouTube thing now, I have to figure out how to make this work. Just the hair, did you consciously do that? Did you think, I need some sort of a thing? Yeah, I mean, I was always inspired by a guy named Bill Nye, the science guy, and he was a very unique character for general science, and I thought, what is my thing? I didn't know what exactly I wanted, but I remember a roommate of mine at the time, who was a matchmaker, she was like, you know, you'd look really cool with like a silver streak in your hair. I just tried it out. I mean, you chose better than me, the sunglasses, now I have to code with sunglasses, which is annoying. Do you get recognized with the sunglasses in person? I get recognized with and without. I think the hairline gives it away. That's how branding works, I guess. So then you started creating videos, was it always machine learning or did you also get into that somehow? No, so we started out, my first few videos were all on Bitcoin. In fact, my first video was called What is Bitcoin? I think Bitcoin is the soul of the hacker community. Everything comes from Bitcoin and emerges outwards from there. I'm not religious, but maybe the closest thing to a religion would be Bitcoin. But I started making machine learning videos just because it seemed really interesting and I was really interested. AlphaGo really was the catalyst for me. Like, oh, there's something here, let me start making videos on this, with no credentials, no PhD or anything like that. Also I felt like, this is kind of weird to say out loud, but like, I'd spent six months in India traveling across the entire subcontinent before I started working at Twilio, and one thing that I saw was like, I was living in such a box my whole life in the United States, and India's such a beautiful country. However, there's a lot of issues there. It is a developing country, ascending country I like to say. But we can't just solve all these problems in our lifetime, and some of them are just, they're gonna take many generations to solve. Perhaps if we created some sort of superintelligent digital organism god, it could solve everything for us. The thing that I personally could do was use my specific knowledge to help make that happen, in the form of funny, interesting videos that would raise awareness around these technologies to as many people as possible, and that would somehow increase the amount of research happening in the field, and all of this together would accelerate development of a super intelligence. Yeah, I mean, I have one socialist, like borderline communist, friend, and whenever I make fun of how communism has never worked, he always says like, but we haven't tried with an AI supermind planner, right? And then I'm like, yeah, okay, he's got a point, right? But yeah, so you had this plan of doing videos, when did you really see that this could be something? Like, was there a moment where you saw, like, wait, you know, views go up? Was there like a particular moment, or did it come, you know, slowly? When did you really feel like, yeah, I could make this work?
Well, I think it was three months into making videos once a week, because back then I could only do once a week, it took about 40 to 50 hours for a single video. Eventually I got up to three a week at my peak. But after three months of one video a week, someone emailed me from this company called BigML, which was a machine learning platform. It was the first person who ever reached out to me, and they wanted to pay me for a series of videos, and I was elated, because ad revenue was like, you know, nothing really. I did have Patreon, that definitely helped for sure, but that was my first... I think they paid me 2k USD for six videos, which was huge, and that was really like, oh, this is something. And then of course Udacity reached out to me, and that was the biggest catalyst, like, to help make their deep learning course nanodegree. Yeah, so yeah, Udacity, but that also fell through if I recall correctly, and this is, so maybe for people who don't know, and you have made an extensive, like, apology video about this, but some of your videos were, you know, to a degree, plagiarized. Not exactly the videos, but you would sort of write or show some code and then you would say, like, either, oh, look at this code, or, watch me build a trading bot or something like this, and, you know, just be very vague about the origins of the code. And then you would put attribution, maybe really small, at the bottom of the code, but essentially it'd be other people's code that you presented. Is that about a fair framing of things? So a lot of times I took other people's code, didn't fork it on GitHub, I just kind of downloaded it, reuploaded it, and then changed the, like, the readme, or maybe some wrapper and things. So when, yeah, was this always your mode of operating, or did you at some point start, did it increase? Because that's what I'm wondering. Like, you started out saying, you know, I could raise awareness and so on, and at some point you found yourself in a mode where a new video would just be like, I take someone else's code, I make a video claiming, essentially inferring, that I made it, right? How did you get from A to B? So it was a process, it didn't happen all at once. I mean, if you look at my first few videos, I really did write the code for the first few videos. They were like 10 to 20 lines, using the skills that I learned at Twilio, of like making something really basic, a skeleton app that a developer could just download and hit compile and it runs, make it as simple as possible. I would look at these very complex repositories for the initial versions of TensorFlow and, you know, A Neural Conversational Model by Oriol Vinyals, who's my favorite researcher still to this day, and just try to condense it into, you know, 10, 20 lines as a wrapper. But over time it was like a gradual process of, you know, instead of just raising awareness it became more like chasing clout, right, making the number go up, number go up for views and likes. And there was also, like, almost no accountability. I was a lone actor, I wasn't working with anybody, so that definitely made it easier to do something like that. And eventually, once I moved from San Francisco to Los Angeles, and that was the last year and a half that I worked on YouTube, so from 2018 to 2019, I think that was a bad move, like, I'm not really an LA person, but that's when I really started to chase the clout and pursue
fame for the sake of it, because I'd already gotten these opportunities and it seemed like I just needed to get to a million subscribers no matter what. Yeah, a million, was that your personal goal? Or, I mean, for me a million was always the point a little bit where you could live off of ad revenue. Was it like this, or was it just a number you liked? No, it's just a number, it was just like a fine little goal in my head. Yeah. So, and did you at any point feel like, maybe I shouldn't do this, maybe at the beginning? And did it become easier for you? Or how did you think about yourself, or did you just think, you know, everyone else is doing it? Or... yeah, I mean, I guess, you know, everybody is a protagonist of their own story, right? I felt like I was doing... just having the little name in the very bottom of the GitHub, not forking the code but just putting it down there, that made me, you know, feel guilt-free at the time, but obviously that wasn't how I should have done it. I mean, obviously what you did was very public, and therefore the backlash, I felt, was also very public. I mean, a lot of people got angry, and, you know, once it all, let's say, came crashing down, a lot of people came forward and said, oh yeah, me too, my code was also plagiarized, and so on. I feel like I have seen exactly stuff like this in research, like, tons of times, people essentially copying papers, mildly attributing, like, once, but essentially that entire page would be, like, taken from, usually it's their earlier papers. So what authors will do is they will have like one new equation and then they'll write an eight-page paper where seven and a half pages are essentially their old paper, right? And so, I mean, but that is never as public, right? It's never as big, I guess. The more public one is, the worse it gets when something like this really happens. Did you... so I've read about your Udacity course, that you said that became an issue there, right? People tried to tell you you can't plagiarize stuff, is that correct? So I've seen, like, a tweet from someone at Udacity saying, you know, the course fell through essentially because they tried to tell you that that's not how they do things, or what is... or maybe you can tell a little bit, the Udacity course, you said that was a big thing for you, why did it fall through? Yeah, so, you know, what happened with Udacity was, we had a 16-week course that I essentially designed, and then Udacity helped me build a team around that to help me. One issue that one of the people at Udacity had that I was working with, he was also in the initial trailer video, Matt Leonard, was that I was not writing the code from scratch, I was using existing examples, and he didn't like that. We also didn't have that good a working relationship during the course. But I think in terms of falling through, that happened like, you know, everybody made money from that course, including Udacity, and there were several cohorts of students, it didn't just run once, I think it ran like three or four times. Udacity actually approached me two years after that course was over to do another version of it, and I did help with that too. In terms of falling through, yeah, when all of this happened, then, you know, people came out and said this stuff. Yeah, I don't know what happened with the course honestly. I haven't... okay, I think maybe I got this one wrong. Yes. And so I've seen, like, I've looked at your Social Blade and so on,
you're at about 700k subscribers, and I've seen also an interview with Lex Fridman and you, where essentially you also told him, like, you know, what matters to me is views, I'm attuned to views, to more subscribers, and so on. Is it fair to say a little bit that you might have lost sight of, you know, the bigger picture or other things, just in pursuit of this goal? It is. I was definitely disillusioned with AGI and the initial goals that I had at the start. I definitely also had, you know, an issue with... I had like a drug problem near the end. I was doing too much of a certain drug that makes you really up and have a lot of energy, and there was a point where I pretty much almost overdosed on it, and that's when I knew, like, I even, you know, called the cops on myself too, because I thought I was gonna die. I don't know, I never really said this out loud before, but that was near the end. This is basically like a month or two before, you know, that scandal happened, and I was just, you know, I just felt like I was infallible, like I was untouchable, like I could do no wrong. And yeah, I'd never had that level of fame before as well, like, that was quite a drug of its own as well, on top of that. But yeah, it was a gradual process, I think, of going from uplifting developers, and like that being the primary concern, to also then chasing clout, chasing fame, wanting more opportunity, more views, more recognition, and just making stupid decisions. Yeah, I mean, you know, as another youtuber I get the draw of this, like, I get this feeling of being sucked into these metrics, and it's not only the metrics, right, the metrics are correlated with money, correlated with fame, and so on. Yeah, I see... and so many youtubers fall into this, right? And your mistake was also a little bit that your setting was in maybe like an academic or a professional setting, where people actually care about, you know, not stealing stuff and things like this. So maybe, you know, unluckily for you, you chose the wrong field to do something like this in, because in many other fields I think this would have just, you know, been completely fine. So in addition to, let's say, making videos, and you were making an insane number of videos, like two a week or three a week as you said, and you had a schedule that certainly must have also pressured you, but then there's the issue with your paper, right? And that to me was really something where I thought, this is someone who is almost, like, blinded by either the speed or the fame or, as you said, you felt infallible or something like this. So for people who don't know, you had written a number of research papers, but this particular one, you even made a video about it, I think, like, I wrote a paper in a week or something like that, and it was about the neural qubit. And one of your viewers then went public and claimed, and could show, that this was largely copied from two other papers, copied together, the diagrams copied and the text copied, and you changed some of the wording, which was the most puzzling thing to me. So instead of a quantum gate, which is equivalent to a logic gate, you changed it to a quantum door, which makes no... like, this is a meme until today, right? And instead of complex numbers or complex Hilbert spaces, I think it was complicated Hilbert spaces, which also is kind of... so maybe if you just look back now, what is
your reaction now to past you with respect to that paper? Yeah, um, yeah, that was hilarious, that's eternally a meme now. Yeah, I mean, I used AI to generate some words and, like, make things different. So this was automated, the replacement? Yeah, yeah, okay. Yeah, I think there's a tool called, um, it's a web tool, I forgot, it's like AI Writer or something like that, you like paste in a paragraph and then it like rewrites it. Um, yeah, like, what a stupid decision that was. But at this point it's really... it's not quite... it's a step up from copying code and attributing someone at the bottom, right? Because there you can still say, you know, I attributed them, you know, I can sleep at night. This is really, I go, I take a paper, I put it deliberately into a tool that rewords it, and then I say, here's my paper, right? What made you, or how did you find yourself making that step? From 'I can justify this to myself' to... I guess, I don't know, maybe you explain it better than me. Yeah, you know, it's just like ego, it's like, I'm untouchable and I can just do anything. And I guess I didn't really understand what it's like... before I plagiarized that paper, I talked to an actual quantum researcher who works in Santa Barbara for Google, and, you know, I was like, we should write this paper together, he's like, yeah, let's do it, it's gonna take a year. And I remember thinking, like, that's way too long for me, like, I'm not doing that in a year, I'm gonna do this in three days, and just thinking, like, you know, I guess I didn't respect the scientific process enough. Yeah, it was just, to me, I just thought of it as, like, another link in the video description, just adding it. I should have just linked to the seven papers, instead I put my name on it and just made it into one, and I'm like, oh, people are gonna like me more because of this, I'll have more credibility because of this, instead of the opposite. And I don't know, I was just, in general, it's just, you know, really, um, drugged out, honestly, like, I don't know why I made a lot of decisions that I did. Um, I'm sober now, by the way. Yeah, yeah. At no point did it ever... because that's the baffling thing to me a little bit, and that shows me, or at least seems, a little bit like someone who really lost touch a bit, is that when someone, like an experienced researcher, tells me it's gonna take a year to write a paper, and sure, if I think I'm fast, I think I can do it in three months, right? But three days is, like, a different thing. So clearly your idea was already, you know, I'm gonna take a shortcut. It's not like, I'm gonna write the same paper in three days, it's just, how can I make a video out of this in the shortest possible time? Yeah, I was like, what's my next video? I wrote a research paper, and just thinking about that, that's really the angle, like, I want to make a video that shows or tells people that I wrote a research paper. Yeah. So I've seen a lot of commentary saying things like, you know, it's a shame, you have a good platform, you're charismatic, and you could have... they say something along the lines of, you might have just as well credited all these people and just had the same effect, like implying, you know, there would be another way of doing this, you could just say, you know, here is a bunch of code
by some cool people, I'm gonna show you how it works, and their implication is you would be just as famous, you would be just as liked, and so on. Did you, first of all, do you think that's true? And second of all, did you think that's true then, or was it really your conviction? No, if I did that I would be way less popular. I do think that that's true now. I did not think that was true then. Mm-hmm. I thought that I would have to be the guy who is behind all of this in order for my brand and channel to grow. Because, yes, because it's just hard, like, in the YouTube game to, like, differentiate yourself, and I felt like this was a way I could do that. Yeah, I mean, it is true, right? I'm not sure that these people are correct. Like, it's for sure good advice to credit the people whose work you present, but I myself, I'm not sure if they are correct when they say you would have been just as popular and just as, you know, well respected by the people who think you really did do these things, right? I'm not sure, as you say, how YouTube works, it's a tough game. And at some point this all came together also with your course, which we can talk about in a second, but specifically with respect to the code and to the paper, you made an apology video, which was fairly lengthy, it was not your usual style, it was just kind of you standing there, and you essentially said straightforwardly, you know, here's what I did, I didn't credit these people enough, just took their code, and so on. And then people noticed that only, like, a few days later in your next videos essentially you did the same thing, like, there were slides that you took from somewhere, and so on. Is it, I don't know, is it fair to say... and so you made these videos, you made the apology video, then you immediately started uploading videos before you really quit, and you quit for a long time after that. What were sort of the last videos like for you? Or, you know, like, after, let's say, the apology video and so on, but before you quit, what was that like? You're asking about the time between when I quit to the apology video, what that was like? No, from the apology video to the point where you didn't upload for months after that, or uploaded very infrequently. How did you feel at the point, like, of the apology video, and a little after that? Yeah, well, I mean, I felt pretty bad. Generally I'm a pretty happy guy, as you can surmise, but I can say that's the only time in my life where I've ever felt somewhat suicidal, like, just for a little bit. And yeah, I didn't know how to deal with that level of sadness, so I tried a bunch of different things, like, I moved from LA, I got a dog, I just, I don't know, did some soul-searching, some meditation, I tried virtual reality, like, escapism, as well. It was a pretty tough time, as you can imagine. But in terms of, like, yeah, doing the same thing again, I guess I did, but I didn't think that I was... like, maybe there's something wrong with me, like, I just, I don't know, like, I needed some kind of mentor to be like, here is how you credit people in a YouTube video about machine learning, and here is what people are going to find acceptable. Yeah, did you think at some point, maybe I can turn this around? You know, maybe I can... because you were, at the beginning when people brought these things up, I saw just a bunch of Twitter posts and so on sort of discrediting them, denying them, like, no, I never
did anything like this. Was there a point where you thought, you know, people are getting iffy, maybe I can turn it around? Yeah, yeah, there was. I mean, I tried everything. I was like, maybe I don't need to apologize, maybe I do, that would make it better or worse, maybe I should just deny, deny, deny like politicians do, maybe I should, you know, make, like, reply videos to other youtubers who made videos about me. There's a lot of things that I thought I could do. Eventually I decided, and I don't even know if that was the best thing for my brand, I know it was the right thing to do to make an apology video morally, but I don't know if that actually helped me or hurt me. I still don't know to this day. Yeah. So I think, if I hear this a little bit out of you, there was a time where you were still mainly thinking brand, mainly thinking, you know, which actions are gonna let me still reach, like, the million subscribers or continue on, and then was there a particular point where you thought, no, actually, you know, let's do an apology, let's tone it down? Was there a time when you consciously let go, maybe, of the million subscriber goal? There was. I think it just came from introspection and seeing how, like, the amount of, I don't even know what you want to call it, feedback, negative feedback or criticism, it just wouldn't go away. It was just there and it didn't really die down. I thought, I mean, there's really nothing else I can do here, I need to just accept defeat, to wave the white flag. Part of my brand is just, like, you know, super confidence and always being okay with, like, haters or whatever, but, you know, I mean, there was a point where I was like, you know, I'll just apologize. And then I also felt, you know, near the end, I did feel, I started to feel, like, guilty, because, you know, some people said that it wasn't just that I plagiarized, but that I was actually doing the opposite of, like, accelerating research in the space, like, this sets a bad example for people and this actually gets in the way of research and it's gonna slow it down. And I was like, okay, if that's true, that's really bad. And honestly, I was reading too many comments as well. But yeah, I mean, I still don't know to this day, like, whether or not the apology video helped or hurt my brand. In fact, if I had to bet, I would say probably hurt my brand, but, you know, at least I felt better afterwards, and I guess that's what mattered in the end. Yeah, I mean, I think few people really understand what it's like to get YouTube comments at a bit of a scale, and there will always be people criticizing and hating, especially, I guess, you with very little credentials in the field. I guess you have always had people saying, you know, maybe this is a clown, has no credentials, whatnot. And it didn't help that you copied code, because then you not authoring the code also meant you knew less about the code, which might also sometimes shine through a bit in your videos. But I think with time you sort of learn to tune out the haters, because you're gonna get them anyway. But then sometimes they're right, right? And I think, you know, I don't think many people in the, like, public sphere have a good understanding of when should I listen to the bad comments and when not, because usually it's no, right? Right. Yeah. So then, this was... very shortly after, people were really
complaining about plagiarized code and this paper, which was one of the sort of big points raised, and then in a very short time, like, within a month or so, there was also the issue of a course you offered, right? So maybe can you tell a bit how this course even came to be? You made videos at an insane rate, how did you think you could also offer a course, and why? Yeah, I think it comes down to two things. One, I felt like I could do more than what I actually was capable of doing, because my ego was so inflated at the time. So that's one. The other is just looking at the metrics, generally the videos that were about making money were the ones that did the best, and so I started to follow that trend and tailor my content in that direction, as opposed to what I would have done years ago, which is like, how do we solve, you know, millennium problems like poverty reduction and water cleanliness and environmental sustainability, things that, you know, actually matter. The course was around that, like, well, if people want to make money, let me make a course around making money with machine learning. That was what it's called, right? It was called Make Money with Machine Learning. Literally. That is a hell of a clickbait. Yeah, the most clickbaity, exactly-what's-gonna-get-the-views title. Mm-hmm. And it was supposed to be a paid course, it was, I think, about $200 per student. And the issue, the first issue, was that you claimed it was like a limited-entry course with personal supervision. Now, both of these things didn't really turn out to be accurate as you promised. So there was an issue of, you said, I only let in 500 people, but then you let in twice 500 people, so you had two different Slack workspaces with twice the 500, I think one even had 700, but there's a few extra ones, I guess. And then also, there was apparently not really... like, you can't personally supervise a thousand two hundred students, like, it's impossible. Did you plan on these things already, or did they just sort of... how did they happen? I didn't plan on them. I did think that I would have 500. When I put the course out, there were so many signed up so fast, and I got greedy. I was like, I'm just gonna let this keep on going, let's see how many people I can sign up for this, and I thought, yeah, I can just have two different cohorts, and, you know, I had people volunteer to help at the time, help me, as, I guess you call them, teaching assistants. And yeah. How many, roughly how many TAs did you have, do you remember? There was at least one. It might have been written that there was at least one, yeah. But did they quit after a while, or did they stick with you? No, they actually, they were amazing, they stuck through the whole thing, yeah. Okay. But they were volunteers? Yeah, yeah. Okay, so it was 200 bucks and, like, one, two, three, maybe, volunteer TAs for a thousand two hundred students. And did you plan on... did you realize at some point, I can't provide personal feedback to all of these students? Or did you just think, you know, whatever, I can do this? I did realize I was in over my head. I think it was like week two or week three that it really started to dawn on me, um, and then I think it was week four that some of the students started going to social media, um, and then everything came crashing down in the middle of the course, and then I had to give out a bunch of refunds, but still had to finish the course to the end. It was a ten-week course, so we still had to keep going for five
weeks after that. Um, but yeah, I mean, there were still, you know, hundreds of students who stayed in the course. I don't know, like, The Register made an article on this, but they didn't say, like, it's not like everybody just dropped out all of a sudden. Yeah, so to the people in the course I still had some responsibility. Yeah, so maybe I briefly summarize these articles, and, you know, they're written from a certain angle, right? And that's exactly why I also wanted to get your side of this story. So these articles, they claim, for example, that, you know, people started noticing there was no personal supervision, they complained, you never essentially showed up in the Slack workspaces, or infrequently, they all got the same feedback on their exercises, so that was sort of, like, a copy-paste of, like, good job, it was like that. Then people started demanding refunds, but some claim they were even banned, like, for demanding refunds. Then it was also claimed that you eventually said there was a refund period, which was for 14 days, but the articles claim you quietly introduced a refund period 30 days after the course started, so it was essentially impossible for anyone to have known, because there was no refund policy at the beginning, you introduced a 14-day refund period 30 days after the course started, and then, you know, once people discovered that there were two different cohorts and so on... What of these articles is true and what is overdone? So there are also several tweets of students that said, yeah, people claiming refunds were banned, or the fact that you introduced this refund period. How did this go down from your perspective? So all of that is true. What I do think was overblown is the banning part. I never personally banned anybody, but I can't speak to whether or not one of the TAs may or may not have done that. Yeah, but yeah, everything else, like, definitely on point, like, it's all a part of the story, yeah. Can't refute any of that. Yeah. And did you get scared at any point? Because all of a sudden people and their money are involved, right? I mean, 200 bucks is not that much for maybe an American, but it is a lot for maybe someone in India or something, you know, some place like this. Did you get at some point, you know, scared, because, like, wow, there's actual money here that I may have to pay back? Yeah, I mean, I got scared for a lot of reasons. I was scared that, yeah, I would, like, have to go through some kind of lawsuits. People were saying, like, oh, there's gonna be a lawsuit, you're lucky you're not in jail, and stuff. And yeah, about the refund stuff, like, that 30-day versus sneaking it in, and I'm sure I did that, I honestly don't remember it now, like, I'm sure that's probably what happened. But I mean, when I look at it now I'm like, hey, when you charge money you need to be very upfront with people, like, that's how you make a sustainable product. I wasn't thinking very sustainably or long-term, it was a very short-term thing. But I was scared, yeah, I was. Did you... but your thought was still, I can educate these people even if I can't give them personal supervision? Or was it all, like, you know, I'm gonna get their 200 bucks, I'm gonna tell them something so they can't complain? Or did you still think, you know, the course has value for the people who are in it? No, I did think the course had value. I mean, it's weird,
because it's like I'm conflating my bias against academia and the traditional learning path with this course that is, yeah, it's got a super clickbait title, but, you know, I guess I didn't fully appreciate what online learning, and I'm still learning what online learning really can be in the future. I thought, well, you know, you don't need to be in a physical classroom to learn, like, I think we can all agree to that now, like, you can watch videos online. But also, you know, what is personal supervision, and does there need to be x, y, and z for someone to be able to say, I learned? A lot of learning comes from self-motivation, and, you know, education is not a scarce resource, it's abundant. It's the desire to learn that is scarce. And perhaps that alone, I felt, justified it, like, if I could get them to want to learn these things, that would be enough. Um, at the time I felt that way. Now I know, like, what would I change differently, besides the obvious part like the 30-day refund from the start, is to just hire help. Like, if I were to give advice to anybody doing anything like this, like any youtuber who wants to make a course, like, hire help. Step one: hire help. Then figure everything else out. Don't plan it out yourself, it's too big, it's too big at scale for one person to do. What happened, did you end up giving refunds to people? I did. Did you still have enough money to give the refunds? Haha, um, yeah, I did. What happened to the money? Like, I can imagine, you get 200 bucks, a thousand people, that's like 200k. Where did that go? Did you end up plus or minus? Did you spend it on refunds? Did any lawsuit result? There were no lawsuits. Everybody who wanted a refund got a refund. There were still a bunch of students who completed the course to the end, and I'm very thankful, like, despite all the drama, they were loyal to the thing. And so it wasn't negative, it was positive. It wasn't nearly, like, probably like 10% of what I made at the start. And then, you know, I think, as I said, this was within like a month of everything coming down. You were making lots of videos, the paper, the course, all at the same time, and then everything comes crashing. And I think it's one thing when you feel bad because life is crap, right, because something happened to you that's bad, and you know. But it's an entirely different thing when, you know, you're responsible for it, right? Like, that is worse. That is, like, my life is bad and I'm to blame, and, you know, like, it's my doing, right? Like, I guess this was your experience, right? You know, whether you thought it was good or bad, it was like, my life is crap and I'm responsible. What did you do at that point? You said a bit of soul-searching and so on. How did you decide to go forward? So I moved back to San Francisco, I was there for a few months. I basically invested in my friends and family, talked to them, that helped. Got really into virtual reality, that helped as well, like, dissociating from this reality, bringing it to a virtual world where I was anonymous, and logged off of all social media as well, so that helped as well. And kind of just gave up with the whole, like, you know, million subscriber path that I was on. And what else, yeah, just, oh yeah, focus on my health as well, like, I was like, I'm just gonna, like, try to focus on being healthy, because I can control that, I can't control what people think, but I can control my health. So that helped. You made a quite astounding body fitness
transformation as well. At the end, like, in 2019 when it all crashed, you were kind of, like, a chubster, and right now... I saw, like, a before-after picture. Was this a conscious effort by you? It was, yeah, cuz, like, part of, like, you know, having a desire to live is to, like, be able to look at the mirror and, you know, say, like, for me at least, like, hey, this is an attractive guy. So, you know, it's kind of vain, but it definitely helped for sure, yeah. And so eventually you got, let's say, back up on your feet after all of this. What was your, or what is your current plan, or what are you doing right now? You've posted a few videos again here and there, but, so maybe, you know, what are you doing essentially? So, um, yeah, making videos along this series called AlphaCare, about health care in AI, which has kind of always been, like, the industry I'm most excited about for AI, like, applicability, like, oh, we can make people healthier. So, doing that. I'm almost done with a book I've been writing for the past three months, which is gonna be a free ebook, not gonna charge for it, so that's been interesting. That's also on, like, deep learning for health care apps for beginners, with examples in there. And once I release that, all of this will be done in, like, three weeks probably from now, like, the series, the video series, and the book. Then I have to figure out what the next thing I'm going to do is. What I'm most excited about currently is paying people to be healthy. There's this app called Sweatcoin, it's out of the United Kingdom, it pays people in cryptocurrency to walk. I find that really, really interesting, because, you know, two of the most meaningful things to me are keeping people healthy and reducing poverty, and this kind of does both at the same time. So I'm wondering if there's a way to create what's called a DAO, a distributed autonomous organization, around health care and health data and keeping people healthy, paying them somehow with cryptocurrency to stay healthy. I just use a service called InsideTracker, which cost me like 500 bucks, way too expensive a service for most people to use, but I got a blood test done two weeks ago using the service, they took 43 biomarkers of mine, and now I have a bunch of health data. Like, my cholesterol level is apparently way too high because I eat way too much red meat, so I've got to cut down on that. But something like this, if we could turn it into, um, like, a free service that keeps people healthy, and actually not just free but pay them money, and then somehow turn it into a business where also the service makes money, that'd be really cool. So I'm kind of, like, thinking, like, I'm gonna start some kind of company around that, or a DAO I should say. I'm not exactly sure what it looks like, though. I mean, this is happening in part already with, I don't know, we have, like, high taxes on cigarettes, right? So essentially the smokers, they finance a little bit the non-smokers via taxes. Some health insurances, they already give discounts if you, like, regularly go to a gym or something. So, like, something like this is definitely in the realm of possibilities. Now, with respect to cryptocurrency, is this a meme, or was there actually a Siraj coin at some point? I haven't found anything, like, what was that? Yeah, that was a real thing. I launched a cryptocurrency, I think two years ago or something, three, I don't know, called Siraj Coin. And it was... I really didn't like it, so I took down the video. I'm like, there's still...
you could find it if you really search Siraj Coin. Okay, but was it just, was it more like for a video, or did you think, you know, maybe I could make some money with launching my own cryptocurrency? Yeah, both. I mean, this was at the height of the ICO craze, yeah, and everybody was doing it, and I felt, wow, I'm gonna do it too, here we go, Siraj Coin, right, right. And the idea was that with Siraj Coin you can get a meeting, like, buy a meeting with me, or, like, make a music video with me. Just, you know, I am the scarce resource, like, in these cryptos there is a scarce resource, you create a token, the token is how you access the scarce resource. Yeah. And yeah, I mean, I'm glad I did it, still, like, nobody got hurt from that, it was just like a fun experiment, and I learned a lot from it as well. Like, I still think it's an interesting idea, like, I do think that we're gonna see more individuals create tokens around themselves. And yeah, I mean, yes, a couple of NFTs work this way, right, that there is some kind of, like, a meeting with a famous person tagged onto it, or something like this. Yeah. So with respect to your book and your new set of videos, and, you know, I guess the question everyone asks is, how do you handle citations, plagiarism, things like this? Are you toning it down, or are you, like, extra super duper careful, or how do you approach this topic? I guess you're in a bit of a special situation: not only are you held to the same standards, but now, you know, people read your name, probably the first thing they do is put something into a plagiarism checker. Yeah, I'm super careful. I put it in the video description, not just, like, the GitHub, I say it verbally. Yeah, I just try to be more careful. Yeah. And what's the book about? Is it something you can disclose already? Yeah, it's on bioinformatics for beginners. I'm also a beginner to bioinformatics. I'm really interested in multi-omics, like, all the omics: genomics, epigenomics, transcriptomics, and just thinking about how we can integrate all of these different types of data to make both diagnostic and prognostic predictions for people. And I think that's the future. I'm really interested in reversing the aging process. David Sinclair at Harvard has a great book on this, called Why We Age and Why We Don't Have To. He has a podcast that he's gonna release next year on this topic. And I just think that there's a great space for data science and data analyst enthusiasts to make a contribution in this field, because I do think the future of healthcare isn't going to be targeting individual diseases, like Alzheimer's or heart disease, but rather that which is the disease that is upstream of everything else: aging itself. That's it. I mean, it's a tough task, but yeah, I guess it's a cool outlook. It seems like a little bit of a rebirth, you know, you told how you were at the beginning of your video career thinking, if I could just, you know, make videos about these cool topics and so on, and it almost feels, or at least to me it sounds like, it's got a little bit of that same spirit again. I'd like to think so. I mean, I don't have the same, I don't know, I don't have the same level of, or maybe I just feel this way, I don't have the same, like, energy that I did back then, um, where it's just like, I have to do this or else, like, the world is gonna end, like, that level of conviction. I just feel like, I mean, I'm really interested in biology in general. I honestly don't think this is gonna give me
the level of fame or opportunity that talking about deep learning from 2016 to 2020 did. It's just something I'm interested in, and I'm okay, like, not reaching a million. I mean, it's probably never gonna reach a million subscribers, I just want to be interested in this. And even if, you know, this, like, company doesn't work out, I'm happy to, like, take a job somewhere and just, like, learn about bioinformatics full-time as a bioinformatician or something. Yeah. Well, I mean, I've told you this privately, but in many ways, with all of this happening, you were still sort of the pioneer of what many of us other ML youtubers... essentially the path we go is, you made it. Kind of like, I remember when I started making videos there was, like, nothing, and when you started there must have been, like, really, really nothing, right? And, you know, for all the things, I think it took balls to go that way, and you certainly hustled, even if it led into, like, a wrong direction. Do you have, I don't know, because I know that there are quite a number of people who look at maybe you, also me, other youtubers, a lot of people are starting their podcasts nowadays, a lot of people also start channels like mine or similar to mine, any advice you have for people starting out in the sphere of online education, or what we might call being an influencer, anything like this? Yeah, I would say that this is not something you do as a side job. Like, a lot of people, you know, kind of have to, because they need a source of income from their day job. But I would say, like, the only way to be successful in this is to pick this to be your one thing and do that all day. And it's got to feel like play to you, but it's got to look like work to other people. Like, to me, this whole time I've just been playing, like, really enjoying myself, like, it's not work, and that's honestly why I think I grew as much as I did. I genuinely enjoy the topics, I genuinely enjoy the video production process: editing, lighting, thinking about metrics, all that stuff just felt like play to me, and that's how you're gonna be successful. It's not gonna be... if you feel like it's hard work, um, you should pivot or think of some other content to talk about, or maybe a different medium. Like, you know, I had a podcast as well, I did, I think, five interviews, and then I stopped, because it didn't feel like play to me. Like, yeah, for some reason I just don't enjoy being a podcast host, like, I enjoyed monologues and that kind of thing, so I stopped. Whereas someone like you or, you know, Joe Rogan or other podcasters, they actually enjoy it, so they're actually gonna be successful. So that's my best advice, is, like, make sure that it feels like play to you, and then you'll probably be successful. And when someone finds themselves a bit successful and finds themselves to be sucked in and drawn by the metrics, by the clout... because I already said it, but I'm gonna say it again, like, this is a thing, I feel it, other youtubers feel it for sure, this pull, it's like a thing drawing you, right, and, you know, leading to the kinds of decisions you made. And do you have any, I don't know, you know, other than don't do it, do you have any, you know... the mindset that that creates in a person, do you have any, maybe, recognition of what could help someone to get out of it, or to resist, or,
you know, what do you tell yourself when there's, like, a really easy opportunity to get a lot of views or clicks? I would say the best thing you can do is Google Siraj Raval and what happened to this guy, and yeah, just be afraid. You don't want that to happen to you, for sure. Luckily it happened to me first, so you've got an example in front of you now of what can go wrong when you follow views and likes too much, when you chase clout too much in the education space. The internet gives everybody a voice, you will be held accountable. We are moving into a world that is much more transparent every day, less and less privacy. Yeah, the internet gives everybody a voice and power, so, yeah, I can say: use it wisely, I guess. Well, Siraj, this was a pleasure, really, truly. I thank you very much for being here with me today, thanks for coming on, thanks for being so open and forward and honest. I think it's very valuable that the world also hears from you, and, you know, not just from articles and, you know, reviews and things like this. Absolutely, thank you, Yannic. Awesome.
[ { "start": 0, "end": 6.2, "text": " The following is a conversation with Siraj Ruval. Siraj has one of the largest" }, { "start": 6.2, "end": 11.16, "text": " channels in the machine learning YouTube space. Over 700,000 people are" }, { "start": 11.16, "end": 18.52, "text": " subscribed to him as of this date. Siraj pumped out lots and lots of videos on" }, { "start": 18.52, "end": 23.48, "text": " topics such as coding tutorials, explaining beginners concept in machine" }, { "start": 23.48, "end": 28.44, "text": " learning and in other topics like blockchain or other computer science" }, { "start": 28.44, "end": 34.32, "text": " things. Now his rise came to an abrupt stop when a series of scandals hit him" }, { "start": 34.32, "end": 40.88, "text": " at the end of 2019. And there were a lot of articles written back then, Twitter" }, { "start": 40.88, "end": 47.28, "text": " posts made and even Siraj himself made an apology video. But I was wondering how" }, { "start": 47.28, "end": 53.1, "text": " did he feel like during all of this? What did he think back then? How did he come" }, { "start": 53.1, "end": 57.56, "text": " to this? How did he feel during the highs and the lows of his career? And" }, { "start": 57.56, "end": 64.24000000000001, "text": " how does he look back on things now? I was struck by how straightforward Siraj" }, { "start": 64.24000000000001, "end": 69.16, "text": " was in this conversation. I was sure there was gonna be wisdom in there for" }, { "start": 69.16, "end": 74.32000000000001, "text": " the rest of us, be that youtubers or machine learners and I was not" }, { "start": 74.32000000000001, "end": 81.24000000000001, "text": " disappointed. He was definitely honest looking back with a different view and" }, { "start": 81.24000000000001, "end": 86.6, "text": " we touched on many things in this conversation. I hope you enjoy it. I hope" }, { "start": 86.6, "end": 91.08, "text": " you find something in there that helps you and yeah, let us know what you think." }, { "start": 91.08, "end": 101.6, "text": " Well, hello everyone. Today is a special day. In many ways, Siraj, who is my guest" }, { "start": 101.6, "end": 109.03999999999999, "text": " today, is one of the pioneers of the field of ML YouTube. Now I'm pretty sure" }, { "start": 109.03999999999999, "end": 115.56, "text": " pretty much every single person in the field has heard of Siraj, has seen him," }, { "start": 115.56, "end": 123.24000000000001, "text": " watched one of his videos or something like this. If I can maybe frame" }, { "start": 123.24000000000001, "end": 127.74000000000001, "text": " it a little bit, there's that you were one of the first machine learning" }, { "start": 127.74000000000001, "end": 134.84, "text": " youtubers. You became really popular quickly. Things went uphill, more views" }, { "start": 134.84, "end": 141.72, "text": " and so on and then I think it's fair to say it kind of all came crashing down in" }, { "start": 141.72, "end": 149.8, "text": " like a very short period of time and then it just sort of" }, { "start": 149.8, "end": 154.36, "text": " crumbled. I can't really frame it any differently. There seemed to be" }, { "start": 154.36, "end": 161.56, "text": " things one on top of another that just all came in like a month or so, the same" }, { "start": 161.56, "end": 167.07999999999998, "text": " month. It seemed crazy this time at the end of 2019. So yeah, I'm" }, { "start": 167.08, "end": 174.92000000000002, "text": " happy to host Siraj today. 
Thanks so much for being here and talking" }, { "start": 174.92000000000002, "end": 179.74, "text": " and you agreed to talk a little bit about your side of things, of what" }, { "start": 179.74, "end": 184.32000000000002, "text": " happened and what you're doing now. So yeah, welcome. Thanks, it's great to be" }, { "start": 184.32000000000002, "end": 188.72000000000003, "text": " here. I love your videos. They've definitely got a personality and" }, { "start": 188.72000000000003, "end": 193.32000000000002, "text": " character to them that I definitely admire and I'd like to see more of. Thank" }, { "start": 193.32, "end": 202.32, "text": " you. Since you're the OG youtuber of this, I guess" }, { "start": 202.32, "end": 207, "text": " character is a little bit of what it takes. I want to go back a little bit to" }, { "start": 207, "end": 211.56, "text": " the beginning though. If I recall correctly, you started studying" }, { "start": 211.56, "end": 217.35999999999999, "text": " economics, is that correct? Correct, at Columbia that was my freshman year. I was" }, { "start": 217.35999999999999, "end": 223.28, "text": " an economics major. Yeah and for some reason you switched over to computer" }, { "start": 223.28, "end": 236.08, "text": " science because what took you there? Well, I took a semester to travel" }, { "start": 236.08, "end": 239.8, "text": " around Europe using Couchsurfing. I was Couchsurfing for three and a half months" }, { "start": 239.8, "end": 243.96, "text": " and the first person that I Couchsurfed with in London, his name was Alex" }, { "start": 243.96, "end": 249.92000000000002, "text": " McCall. He showed me his terminal window. He had a hackintosh that he made and he" }, { "start": 249.92, "end": 254.11999999999998, "text": " really inspired me to get into computer science. It turned out, you know, several" }, { "start": 254.11999999999998, "end": 258.91999999999996, "text": " years later that Alex wrote the O'Reilly book on JavaScript and he has" }, { "start": 258.91999999999996, "end": 263.36, "text": " this really cool startup called Clearbit that he already sold by now. But I got to" }, { "start": 263.36, "end": 266.59999999999997, "text": " meet him before all that happened and once I saw Alex terminal and all the" }, { "start": 266.59999999999997, "end": 270, "text": " cool things he was doing, I knew that once I got back to Columbia I needed to" }, { "start": 270, "end": 273.71999999999997, "text": " like switch over to computer science because that was how you really made an" }, { "start": 273.72, "end": 280.44000000000005, "text": " impact in the world. Yeah, so I guess you saw pretty early that the impact was" }, { "start": 280.44000000000005, "end": 283.96000000000004, "text": " to be made, right? I think a lot of people go into economics and they" }, { "start": 283.96000000000004, "end": 288, "text": " think like, they maybe think a little bit of money if they go into economics" }, { "start": 288, "end": 293.88000000000005, "text": " because it's kind of close to it but I guess computer science especially, you" }, { "start": 293.88000000000005, "end": 298.88000000000005, "text": " know, nowadays is really the impactful field or one of the impactful" }, { "start": 298.88000000000005, "end": 302.96000000000004, "text": " fields. Little known fact, I also didn't, I started out in medicine and then" }, { "start": 302.96, "end": 307.52, "text": " switched over to computer science. So much of the of the same journey there." 
}, { "start": 307.52, "end": 314.23999999999995, "text": " And then did you finish computer science? No, I dropped out my senior" }, { "start": 314.23999999999995, "end": 320.12, "text": " year of all times to drop out. Wow. Yeah. And that was because of YouTube?" }, { "start": 320.12, "end": 325.35999999999996, "text": " No, no, no. So I dropped out because I had a robotic startup at the time. We" }, { "start": 325.35999999999996, "end": 329.88, "text": " were making a six degree of freedom robot that would pick things up off the" }, { "start": 329.88, "end": 333.52, "text": " floor for older people with something called ALS because they can't bend over." }, { "start": 333.52, "end": 339.64, "text": " And we built a prototype, raised money but it turns out like nobody would buy" }, { "start": 339.64, "end": 344.36, "text": " it and also there were some software problems at the time. This was like" }, { "start": 344.36, "end": 351.8, "text": " 2012. So yeah, I just moved to San Francisco from there, from New York and" }, { "start": 351.8, "end": 356.96, "text": " then that's when I really started to feel like I was around my people. Like" }, { "start": 356.96, "end": 363.08, "text": " techians. Yeah, you're American originally but from smaller town or big" }, { "start": 363.08, "end": 367.79999999999995, "text": " city or? I'm from Houston, Texas. So I was born here. My parents are from India." }, { "start": 367.79999999999995, "end": 375.12, "text": " Definitely have a deep connection with India. I still dream about India. Cool." }, { "start": 375.12, "end": 380.76, "text": " And then you were in San Francisco and how did you get into YouTube? So" }, { "start": 380.76, "end": 385.08, "text": " I worked at several contract jobs in San Francisco for companies like CBS" }, { "start": 385.08, "end": 390.28, "text": " Interactive doing mobile development. I worked at Meetup for a year just as a" }, { "start": 390.28, "end": 395, "text": " general software engineer. I started off as an intern and then eventually" }, { "start": 395, "end": 401.12, "text": " the last job I had, W2 job, was at Twilio, the API company and I worked there as a" }, { "start": 401.12, "end": 407.08, "text": " developer educator for about eight months and then I was fired because I" }, { "start": 407.08, "end": 411.52, "text": " think it was just a performance thing. That's what they said so I don't know." }, { "start": 411.52, "end": 416.64, "text": " But I remember wanting, I learned a lot at Twilio about developer education and" }, { "start": 416.64, "end": 420.88, "text": " how innovative it could be. To give you an example, we were learning about" }, { "start": 420.88, "end": 426.15999999999997, "text": " different ways of getting developers to use the Twilio API and you know as I was" }, { "start": 426.15999999999997, "end": 428.79999999999995, "text": " writing documentation across nine different programming languages like" }, { "start": 428.79999999999995, "end": 433.47999999999996, "text": " Ruby and PHP and Python, one thing that I was told by my mentor was that we don't" }, { "start": 433.47999999999996, "end": 437.88, "text": " want to use too many exclamation points inside of our documentation because if" }, { "start": 437.88, "end": 441.47999999999996, "text": " you have more than three, what developers do is that they subconsciously think of" }, { "start": 441.48, "end": 447.44, "text": " not equals from code and that gives them a negative compression of the text." 
}, { "start": 447.44, "end": 450.84000000000003, "text": " I was like, that level of detail I never thought about that but it really is an" }, { "start": 450.84000000000003, "end": 454.68, "text": " art and so I started wanting to make videos on the side and actually my first" }, { "start": 454.68, "end": 459, "text": " three YouTube videos I made while I was at Twilio at the conference room at" }, { "start": 459, "end": 464.04, "text": " midnight when nobody was there and I showed it to my colleagues there and" }, { "start": 464.04, "end": 468.6, "text": " they were like, my boss was like, you know that's great, that's cool. We don't think" }, { "start": 468.6, "end": 472.24, "text": " developers are going to use videos as a learning tool, they want something static" }, { "start": 472.24, "end": 477.6, "text": " like documentation and so that's when I thought, well maybe there's something" }, { "start": 477.6, "end": 483.52000000000004, "text": " here and so once I got fired I got a severance and I had enough to live in" }, { "start": 483.52000000000004, "end": 487.48, "text": " San Francisco for about six to eight months and that really gave me the" }, { "start": 487.48, "end": 493, "text": " impetus. I remember I had all my stuff in a box that they gave to me from my desk" }, { "start": 493, "end": 499.6, "text": " and literally the day I was let go I walked across the street to a hair salon" }, { "start": 499.6, "end": 504.44, "text": " and then I got my hair dyed and I was like, all right I'm all in on this YouTube" }, { "start": 504.44, "end": 509.24, "text": " thing now, I have to figure out how to make this work." }, { "start": 509.24, "end": 513.64, "text": " Just the hair, did you consciously do that? Did you think I need some sort of a" }, { "start": 513.64, "end": 519.48, "text": " thing? Yeah, I mean I was always inspired by a guy named Bill Nye, the science" }, { "start": 519.48, "end": 523.9200000000001, "text": " guy and he was a very unique character for general science and I thought, what" }, { "start": 523.9200000000001, "end": 531.28, "text": " is my thing? I didn't know what exactly I wanted but I remember a roommate of mine" }, { "start": 531.28, "end": 534.84, "text": " at the time who was a matchmaker, she was like, you know you'd look really cool" }, { "start": 534.84, "end": 540.72, "text": " with like a silver streak in your hair. I just tried it out. I mean you chose" }, { "start": 540.72, "end": 545.64, "text": " better than me the sunglasses, now I have to code with sunglasses which is annoying." }, { "start": 545.64, "end": 551.8, "text": " Do you get recognized with the sunglasses in person? I get recognized" }, { "start": 551.8, "end": 556.96, "text": " with and without. I think the hairline gives it away." }, { "start": 556.96, "end": 563.28, "text": " That's how branding works I guess. So then you" }, { "start": 563.28, "end": 568.76, "text": " started creating videos, was it always machine learning or did you also" }, { "start": 568.76, "end": 573.3, "text": " get into that somehow? No, so we started out my first few videos were all on" }, { "start": 573.3, "end": 579.4799999999999, "text": " Bitcoin. In fact my first video was called What is Bitcoin? I think" }, { "start": 579.4799999999999, "end": 584.9599999999999, "text": " a Bitcoin is the soul of the hacker community. Everything comes from Bitcoin" }, { "start": 584.9599999999999, "end": 589.56, "text": " and emerges outwards from there. 
I'm not religious but Mike the closest" }, { "start": 589.56, "end": 594.16, "text": " thing to a religion would be Bitcoin but I started making machine learning" }, { "start": 594.16, "end": 599.4799999999999, "text": " videos just because it seemed really interesting and I was really interested." }, { "start": 599.48, "end": 604.4, "text": " AlphaGo really was the catalyst for me. Like oh there's something here, let me" }, { "start": 604.4, "end": 610.2, "text": " start making videos on this with no credentials, no PhD or anything" }, { "start": 610.2, "end": 617.16, "text": " like that. Also I felt like, this is kind of weird to say" }, { "start": 617.16, "end": 621.12, "text": " out loud, but like I'd spent six months in India traveling across the entire" }, { "start": 621.12, "end": 625.36, "text": " subcontinent before I started working at Tulio and one thing that I saw was like" }, { "start": 625.36, "end": 630.6, "text": " I was living in such a box my whole life in the United States and India's" }, { "start": 630.6, "end": 634.5600000000001, "text": " such a beautiful country. However there's a lot of issues there. It is a developing" }, { "start": 634.5600000000001, "end": 639.48, "text": " country, ascending country I like to say. But we can't just solve all" }, { "start": 639.48, "end": 642.28, "text": " these problems in our lifetime and some of them are just they're gonna take many" }, { "start": 642.28, "end": 646.2, "text": " generations to solve. Perhaps if we created some sort of super intelligence" }, { "start": 646.2, "end": 651.48, "text": " digital organism god, it could solve everything for us. The thing that I" }, { "start": 651.48, "end": 656.96, "text": " personally could do was use my specific knowledge to help make that happen in" }, { "start": 656.96, "end": 660.88, "text": " the form of funny interesting videos that would raise awareness around these" }, { "start": 660.88, "end": 664.72, "text": " technologies to as many people as possible and that would somehow increase" }, { "start": 664.72, "end": 667.8000000000001, "text": " the amount of research happening in the field and all of this together would" }, { "start": 667.8000000000001, "end": 673.52, "text": " accelerate development of a super intelligence. Yeah I mean that's I have" }, { "start": 673.52, "end": 678.4, "text": " one socialist like borderline communist friend and whenever I make" }, { "start": 678.4, "end": 682.4399999999999, "text": " fun of communism has never worked he always says like but we haven't tried" }, { "start": 682.4399999999999, "end": 687.92, "text": " with an AI supermind planner right and then I'm like yeah okay that's got he's" }, { "start": 687.92, "end": 695.76, "text": " got a point right but yeah so when did you when did you so you had this plan" }, { "start": 695.76, "end": 702.36, "text": " of doing videos when did you really see that this could be something like was" }, { "start": 702.36, "end": 709.38, "text": " there a moment where you saw like wait you know views go up and was there like" }, { "start": 709.38, "end": 714.92, "text": " a particular moment or did it come you know slowly or when did you really feel" }, { "start": 714.92, "end": 719.64, "text": " like yeah I could make this work? 
Well I think it was three months into making" }, { "start": 719.64, "end": 724.2, "text": " videos once a week because back then I could only do once a week it took about" }, { "start": 724.2, "end": 728.28, "text": " 40 to 50 hours for a single video eventually I got up to three a week at" }, { "start": 728.28, "end": 734.4399999999999, "text": " my peak but after three months of one video a week I got someone emailed me" }, { "start": 734.4399999999999, "end": 737.72, "text": " from this company called Big ML which was a machine learning platform it was" }, { "start": 737.72, "end": 741.28, "text": " my first person who ever reached out to me and they wanted to pay me for a" }, { "start": 741.28, "end": 745.68, "text": " series of videos and I was elated because ad revenue was like you know" }, { "start": 745.68, "end": 751.68, "text": " nothing really. I did have patreon that definitely helped for sure but that" }, { "start": 751.68, "end": 757.56, "text": " that was my first I think they paid me 2k USD for six videos which was huge and" }, { "start": 757.56, "end": 763.7399999999999, "text": " and that was really like oh this is something and then of course Udacity" }, { "start": 763.7399999999999, "end": 769, "text": " reached out to me and that was the biggest catalyst like for it to help" }, { "start": 769, "end": 775.88, "text": " make their deep learning course nader degree. Yeah so yeah Udacity but that" }, { "start": 775.88, "end": 782.1199999999999, "text": " that also fell through if I if I recall correctly and and this is so maybe for" }, { "start": 782.1199999999999, "end": 786.88, "text": " for people who don't know and you have made you've made an extensive like" }, { "start": 786.88, "end": 793.96, "text": " apology videos about this but it some of your videos or you know to the degree" }, { "start": 793.96, "end": 800.52, "text": " were plagiarized not exactly the videos but you would sort of write or show some" }, { "start": 800.52, "end": 805.4399999999999, "text": " code and then you would say like either like oh look at this code or watch me" }, { "start": 805.4399999999999, "end": 811.96, "text": " build a trading bot or something like this and and you know just be very vague" }, { "start": 811.96, "end": 816.84, "text": " about the origins of the code and then you would you put attribution maybe" }, { "start": 816.84, "end": 822.5600000000001, "text": " really small at the bottom of the code but essentially it'd be other people's" }, { "start": 822.5600000000001, "end": 830.2800000000001, "text": " code that you you presented is that about a fair framing of of things so a" }, { "start": 830.2800000000001, "end": 833.9200000000001, "text": " lot of times you took other people's codes didn't fork it on github I just" }, { "start": 833.9200000000001, "end": 838.4000000000001, "text": " kind of downloaded it reuploaded it and then changed the like the read me or" }, { "start": 838.4, "end": 845.28, "text": " maybe some wrapper and things so when yeah when was that was this always your" }, { "start": 845.28, "end": 850.84, "text": " your mode of operating or did you like did you at some point start did it" }, { "start": 850.84, "end": 856.0799999999999, "text": " increase because that's what I'm I'm wondering like I right you started out" }, { "start": 856.0799999999999, "end": 860.56, "text": " saying you know I could do I could do raise awareness and so on and you ended" }, { "start": 860.56, "end": 867.72, "text": " by or ended you at some point you found yourself in 
a mode where you would a new" }, { "start": 867.72, "end": 872.8000000000001, "text": " video would just be like I take someone else's code I make a video claiming" }, { "start": 872.8000000000001, "end": 880.48, "text": " essentially inferring that I I made it right how how did you get from a to B so" }, { "start": 880.48, "end": 884.48, "text": " if it was a process it didn't happen all at once I mean if you look at my first" }, { "start": 884.48, "end": 889, "text": " few videos they were like I really did write the code for the first few videos" }, { "start": 889, "end": 893.28, "text": " they were like 10 to 20 lines using the skills that I learned at Tulio of like" }, { "start": 893.28, "end": 896.4, "text": " making something really basic a skeleton app that a developer could just" }, { "start": 896.4, "end": 900.24, "text": " download and hit compile and it runs make it as simple as possible I would" }, { "start": 900.24, "end": 904.0799999999999, "text": " look at these very complex repositories for the initial versions of tensor flow" }, { "start": 904.0799999999999, "end": 909.8, "text": " and you know a neural conversational model by Oriole vignoles who's my" }, { "start": 909.8, "end": 914.0799999999999, "text": " favorite researcher still to this day and just try to condense it into you know" }, { "start": 914.0799999999999, "end": 921.3199999999999, "text": " 10 20 lines as a wrapper but over time I just it was like a gradual process of" }, { "start": 921.32, "end": 927.08, "text": " you know instead of just raising awareness it became more like chasing" }, { "start": 927.08, "end": 932.32, "text": " clout right making the number go up number go up for views and likes and" }, { "start": 932.32, "end": 936.2800000000001, "text": " there was also like almost no of accountability I was a lone actor I" }, { "start": 936.2800000000001, "end": 940, "text": " wasn't working with anybody so that definitely made it easier to do" }, { "start": 940, "end": 945.6400000000001, "text": " something like that and eventually like once I moved from San Francisco to Los" }, { "start": 945.64, "end": 953.1999999999999, "text": " Angeles and that was the last year and a half that I worked on YouTube so from" }, { "start": 953.1999999999999, "end": 959.3199999999999, "text": " 2018 to 2019 that's when I think that was a bad move like I not really an LA" }, { "start": 959.3199999999999, "end": 966.68, "text": " person but that's when I really started to really chase the clout and pursue" }, { "start": 966.68, "end": 971.8, "text": " fame for the sake of it because I'd already gotten these opportunities and" }, { "start": 971.8, "end": 977.92, "text": " it seemed like I just needed to get to a million subscribers no matter what yeah" }, { "start": 977.92, "end": 984.56, "text": " a million is was that your personal goal or I mean for me a million was always" }, { "start": 984.56, "end": 989.28, "text": " the point a little bit where you could live off of ad revenue was was it like" }, { "start": 989.28, "end": 993.0799999999999, "text": " this or was it just a number you liked or no it's just a number it was just" }, { "start": 993.0799999999999, "end": 1000.12, "text": " like a fine little goal in my head yeah yeah it so and did you did you did you" }, { "start": 1000.12, "end": 1005.4, "text": " at any point feel like maybe I shouldn't do this maybe at the beginning and did" }, { "start": 1005.4, "end": 1012.64, "text": " it become easier for you or how did you think about yourself or did you 
just" }, { "start": 1012.64, "end": 1020.96, "text": " think you know everyone else is doing it or yeah I mean I I guess I you know" }, { "start": 1020.96, "end": 1026.56, "text": " everybody is a protagonist of their own story right I felt like I was doing" }, { "start": 1026.56, "end": 1030.36, "text": " you're just having the little name in the very bottom of the github not" }, { "start": 1030.36, "end": 1034.52, "text": " forking the code but just putting it down there that made me you know feel" }, { "start": 1034.52, "end": 1039.3999999999999, "text": " guilt-free yeah at the time but obviously that wasn't how I should have" }, { "start": 1039.3999999999999, "end": 1046.32, "text": " done it I mean obviously what you did was was very public and therefore the" }, { "start": 1046.32, "end": 1052.28, "text": " backlash I felt was also very public I mean a lot of a lot of people got angry" }, { "start": 1052.28, "end": 1057.68, "text": " and and you know once once it all let's say came crashing down a lot of people" }, { "start": 1057.68, "end": 1062.66, "text": " came forward and said oh yeah me too I was also my code was plagiarized and so" }, { "start": 1062.66, "end": 1071.76, "text": " on I I feel like I have seen exactly stuff like this in research like tons of" }, { "start": 1071.76, "end": 1079.2, "text": " times people essentially copying papers mildly attributing like once but" }, { "start": 1079.2, "end": 1085.56, "text": " essentially that entire page would be would be like taken from usually it's" }, { "start": 1085.56, "end": 1090, "text": " their earlier papers so what authors will do is they will have like one new" }, { "start": 1090, "end": 1094.44, "text": " equation and then they'll write an eight page paper where seven and a half pages" }, { "start": 1094.44, "end": 1101.64, "text": " are essentially their old paper right and so so I mean but that is never it's" }, { "start": 1101.64, "end": 1108, "text": " never as public right it's never as as as big I guess the more public one is" }, { "start": 1108, "end": 1115.16, "text": " the the worse it gets when something like this really really happens did you" }, { "start": 1115.16, "end": 1123.68, "text": " so I've read your Udacity course that you you said that became an issue there" }, { "start": 1123.68, "end": 1128.36, "text": " right people try to tell you you can't plagiarize stuff is that is that" }, { "start": 1128.36, "end": 1135.88, "text": " correct or so I I've seen it like a tweet from someone at Udacity saying you" }, { "start": 1135.88, "end": 1140.8000000000002, "text": " know the the course fell through essentially because they try to tell" }, { "start": 1140.8000000000002, "end": 1147.8400000000001, "text": " you that that's not how they do things or what is or maybe you can tell a" }, { "start": 1147.8400000000001, "end": 1152.0400000000002, "text": " little bit what the the Udacity course you said that was a big thing for you" }, { "start": 1152.0400000000002, "end": 1158.88, "text": " why did it fall through yeah so you know the what happened with Udacity was we" }, { "start": 1158.88, "end": 1163.88, "text": " had a 16-week course that I essentially designed and then Udacity helped me" }, { "start": 1163.88, "end": 1168.48, "text": " build a team around that to help me one issue that one of the people at Udacity" }, { "start": 1168.48, "end": 1172.0800000000002, "text": " had that I was working with he was also in the initial trailer video Matt" }, { "start": 1172.0800000000002, "end": 1176.88, 
"text": " Leonard was that I was not writing the code from scratch I was using existing" }, { "start": 1176.88, "end": 1182.8400000000001, "text": " examples and he didn't like that we also didn't have that good a working" }, { "start": 1182.8400000000001, "end": 1188.4, "text": " relationship during the course but I think in terms of falling through that" }, { "start": 1188.4, "end": 1192.3200000000002, "text": " happened like you know everybody made money from that course including" }, { "start": 1192.32, "end": 1197, "text": " Udacity and there were several cohorts of students it didn't just run once I" }, { "start": 1197, "end": 1200.8, "text": " think it ran like three or four times you actually at Udacity actually" }, { "start": 1200.8, "end": 1205.04, "text": " approached me two years after that course was over to do another version of" }, { "start": 1205.04, "end": 1209.48, "text": " it and I did help yeah that too I'm in terms of falling through yeah when all" }, { "start": 1209.48, "end": 1214.28, "text": " of this happened then you know people came out and said this stuff yeah I" }, { "start": 1214.28, "end": 1218.36, "text": " don't know what happened with the courts honestly I haven't okay I think maybe" }, { "start": 1218.36, "end": 1226.84, "text": " I maybe I got I got this one this one wrong yes and so I've seen like I've" }, { "start": 1226.84, "end": 1232.76, "text": " looked at your your social blade and so on you're at about 700k subscribers and" }, { "start": 1232.76, "end": 1237.9599999999998, "text": " I've seen also an interview with Lex Friedman and you where essentially you" }, { "start": 1237.9599999999998, "end": 1243.4399999999998, "text": " you also told him like you know what matters to me is views I'm I'm attuned" }, { "start": 1243.44, "end": 1250, "text": " to to views to more subscribers and and so on is it fair to say a little bit" }, { "start": 1250, "end": 1256.92, "text": " that you might have lost sight of you know the bigger picture or other things" }, { "start": 1256.92, "end": 1264.8400000000001, "text": " just in pursuit of this goal it is it is I was definitely disillusioned with AGI" }, { "start": 1264.8400000000001, "end": 1272.28, "text": " and the initial goals that I had at the start I definitely also had a you know" }, { "start": 1272.28, "end": 1278.56, "text": " an issue with I had like a drug problem near the end I was doing too much of a" }, { "start": 1278.56, "end": 1285.08, "text": " certain drug that makes you really up and have a lot of energy and there was a" }, { "start": 1285.08, "end": 1290.3999999999999, "text": " point where I pretty much almost overdosed on it and that's when I knew" }, { "start": 1290.3999999999999, "end": 1295.08, "text": " like I even like you know called the cops on myself too because I thought I" }, { "start": 1295.08, "end": 1299.68, "text": " was gonna die I don't know I never really said this out loud before but that" }, { "start": 1299.68, "end": 1305.5600000000002, "text": " was near the end this is basically like a month or two before you know that" }, { "start": 1305.5600000000002, "end": 1314.6000000000001, "text": " scandal happened and I was just you know I just felt like I was unfalable like I" }, { "start": 1314.6000000000001, "end": 1320.4, "text": " was untouchable like I could do no wrong and yeah I'd never had that level of" }, { "start": 1320.4, "end": 1324.76, "text": " fame before as well like that was pretty that was that was quite a drug of its" }, { "start": 1324.76, "end": 
1329.5600000000002, "text": " own as well on top of that but yeah it was a gradual process I think of going" }, { "start": 1329.56, "end": 1334.76, "text": " from uplifting developers and like that being the primary concern to also then" }, { "start": 1334.76, "end": 1342.6799999999998, "text": " chasing clout chasing fame wanting more opportunity more views more recognition" }, { "start": 1342.6799999999998, "end": 1353.6799999999998, "text": " and just making stupid decisions yeah I can I mean I'm you know as as a as another" }, { "start": 1353.68, "end": 1363.04, "text": " youtuber I I get the draw of this like I unders I can I I get this feeling of" }, { "start": 1363.04, "end": 1368.44, "text": " being sucked into these into these metrics and it's not only the metrics" }, { "start": 1368.44, "end": 1374.2, "text": " right the metrics are correlated with money correlated with fame and so on I" }, { "start": 1374.2, "end": 1381.72, "text": " like yeah I see the and so many youtubers fall into this right and your" }, { "start": 1381.72, "end": 1389, "text": " your mistake was also a little bit that you your your setting was in an maybe" }, { "start": 1389, "end": 1393.28, "text": " like an academic or a professional setting where people actually care about" }, { "start": 1393.28, "end": 1398.56, "text": " you know not stealing stuff and things like this so maybe you know you" }, { "start": 1398.56, "end": 1403.16, "text": " unluckily for you chose the wrong field to do something like this and because in" }, { "start": 1403.16, "end": 1408, "text": " many other fields I think this would have just you know been been completely fine" }, { "start": 1408, "end": 1413.92, "text": " so in addition to let's say making videos and you were making insane number" }, { "start": 1413.92, "end": 1418.72, "text": " of videos like two a week or three a week as you said and that certainly also" }, { "start": 1418.72, "end": 1424.68, "text": " you had a schedule that certainly must have also pressured you but then you" }, { "start": 1424.68, "end": 1429.68, "text": " also there is this there's the issue with your paper right and that that to" }, { "start": 1429.68, "end": 1437.64, "text": " me that to me was really something where I thought this is someone who is almost" }, { "start": 1437.64, "end": 1444.2, "text": " like blinded by either the speed or or the fame or or as you said you felt" }, { "start": 1444.2, "end": 1450.3600000000001, "text": " infallible or something like this so for people who don't know you had written a" }, { "start": 1450.3600000000001, "end": 1455.3600000000001, "text": " number of research papers but this particular one you even made a video" }, { "start": 1455.3600000000001, "end": 1460.96, "text": " about it I think like I wrote a paper in a week or something like and it was" }, { "start": 1460.96, "end": 1468.76, "text": " about it was about a neural the neural qubit and one of your viewers then went" }, { "start": 1468.76, "end": 1475.64, "text": " public and claimed and and and could show that this was copied from largely" }, { "start": 1475.64, "end": 1480.8, "text": " from two other papers copied together that the diagrams copied and the text" }, { "start": 1480.8, "end": 1488.24, "text": " copied and you you changed some of the wording which was the most puzzling" }, { "start": 1488.24, "end": 1494.92, "text": " thing to me so instead of a quantum gate which is equivalent to a logic gate you" }, { "start": 1494.92, "end": 1500.52, "text": " changed it to a quantum door 
which makes no I like this is a meme until today" }, { "start": 1500.52, "end": 1507.68, "text": " right and and instead of complex numbers or complex Hilbert spaces I think it was" }, { "start": 1507.68, "end": 1515.48, "text": " complicated Hilbert spaces which also is kind of if you so maybe if you just if" }, { "start": 1515.48, "end": 1522.32, "text": " you look back now what is what is your reaction now to past you in with respect" }, { "start": 1522.32, "end": 1531.32, "text": " to that that paper yeah um yeah that was hilarious that's eternally a meme now" }, { "start": 1531.32, "end": 1539, "text": " what I yeah I mean I used AI to generate some words and like make things" }, { "start": 1539, "end": 1546.76, "text": " different I would so this was automated the replacement yeah yeah okay yeah yeah" }, { "start": 1546.76, "end": 1551.28, "text": " yeah I think there's a tool called like um I think it's called like it's a web" }, { "start": 1551.28, "end": 1555, "text": " tool I forgot it's like AI writer or something like that you like paste in a" }, { "start": 1555, "end": 1561.6, "text": " paragraph and then it like rewrites it um yeah like what a stupid decision that" }, { "start": 1561.6, "end": 1567.2, "text": " was I but there there I mean at this point it's really it's not it's not it's" }, { "start": 1567.2, "end": 1572.64, "text": " not this it's not quite it's a step up from copying code and attributing" }, { "start": 1572.64, "end": 1576.8, "text": " someone at the bottom right because there you can still say you know I" }, { "start": 1576.8, "end": 1583, "text": " attributed them I'm you know I can sleep at night this is really I go I take paper" }, { "start": 1583, "end": 1588.72, "text": " I put it deliberately into a tool that rewords it and then I say here's my" }, { "start": 1588.72, "end": 1595.68, "text": " here's my paper right this is what what made you or how did you how did you find" }, { "start": 1595.68, "end": 1603.52, "text": " yourself making that that step that you know like the really from I can justify" }, { "start": 1603.52, "end": 1610.28, "text": " this to myself to I guess I don't know what maybe you explain better than me" }, { "start": 1610.28, "end": 1617.48, "text": " yeah I you know it's just like ego it's like I'm untouchable and I can just do" }, { "start": 1617.48, "end": 1625.76, "text": " anything and I I guess I didn't really understand what it's like before I" }, { "start": 1625.76, "end": 1631.84, "text": " plagiarize that paper I talked to an actual quantum researcher who works at" }, { "start": 1631.84, "end": 1637.76, "text": " in Santa Barbara for Google and you know he's like we should write this you know" }, { "start": 1637.76, "end": 1640.76, "text": " I was like we should write this paper together he's like yeah let's do it it's" }, { "start": 1640.76, "end": 1644.56, "text": " gonna take a year and I remember thinking like that's way too long for me" }, { "start": 1644.56, "end": 1649.48, "text": " like I'm not doing that in a year I'm gonna do this in three days and just" }, { "start": 1649.48, "end": 1655.28, "text": " thinking like you know I guess I didn't respect the scientific process enough to" }, { "start": 1655.28, "end": 1660.44, "text": " yeah it was just to me I just thought of it as like a another link in the video" }, { "start": 1660.44, "end": 1664.8, "text": " description just adding it I should have just linked to the seven papers I just" }, { "start": 1664.8, "end": 1669.96, "text": " instead I put my name 
on it and just made it into one and I like all people" }, { "start": 1669.96, "end": 1674.28, "text": " are gonna like me more because of this yeah I'll have more credibility because" }, { "start": 1674.28, "end": 1679.76, "text": " of this instead of the opposite and I don't know I was just making in general" }, { "start": 1679.76, "end": 1687.3999999999999, "text": " it's just you know really um drugged out honestly like that I don't know why I" }, { "start": 1687.3999999999999, "end": 1694.28, "text": " made a lot of decisions that I did um I'm sober now about the way yeah yeah at" }, { "start": 1694.28, "end": 1699.8799999999999, "text": " no point it did it did it ever because that's that's the baffling thing to me a" }, { "start": 1699.88, "end": 1704.64, "text": " little bit and that that that shows me or at least seems a little bit like" }, { "start": 1704.64, "end": 1710, "text": " someone who was really lost touch a bit is that when someone is like a an" }, { "start": 1710, "end": 1715.3600000000001, "text": " experienced researcher tells me it's gonna take a year to write a paper and" }, { "start": 1715.3600000000001, "end": 1720.88, "text": " sure if I think I'm fast I can I think I can do it in three months right but" }, { "start": 1720.88, "end": 1730.48, "text": " three days is a like is a different thing so so clearly your idea was already" }, { "start": 1730.48, "end": 1734.0400000000002, "text": " you know I'm gonna take a shortcut it's not like I'm gonna write the same paper" }, { "start": 1734.0400000000002, "end": 1739.5200000000002, "text": " in three days it's just how can I make a video out of this in the shortest" }, { "start": 1739.5200000000002, "end": 1744.48, "text": " possible time yeah I was like what's my next video I wrote a research paper and" }, { "start": 1744.48, "end": 1748.64, "text": " just thinking about that that's really the angle like I want to make a video" }, { "start": 1748.64, "end": 1756.3200000000002, "text": " that shows or tells people that I wrote a research paper yeah yeah so a lot of" }, { "start": 1756.3200000000002, "end": 1761.72, "text": " I've seen a lot of commentary saying things like you know it's it's a shame" }, { "start": 1761.72, "end": 1767.0400000000002, "text": " you have a you have a good platform you're charismatic and you could have" }, { "start": 1767.0400000000002, "end": 1773.8000000000002, "text": " they say something along the lines of you you might have just as well credited" }, { "start": 1773.8, "end": 1779.72, "text": " all these people and just had the same effect like implying you know there" }, { "start": 1779.72, "end": 1783.48, "text": " would be another way of doing this you could just say you know here is a bunch" }, { "start": 1783.48, "end": 1789.08, "text": " of code by some cool people I'm gonna show you how it works and and their" }, { "start": 1789.08, "end": 1793.6, "text": " implication is you would be just as famous you would be just as liked and" }, { "start": 1793.6, "end": 1800.2, "text": " so on did you first of all do you think that's true and second of all did you" }, { "start": 1800.2, "end": 1806.44, "text": " think that's true like or was it really your conviction no if I did that I would" }, { "start": 1806.44, "end": 1813.88, "text": " be way less popular I do think that that's true now I did not think that was" }, { "start": 1813.88, "end": 1819.56, "text": " true then mm-hmm I thought that I would have to be the guy with who is behind" }, { "start": 1819.56, "end": 
1831.1599999999999, "text": " all of this in order for my brand and channel to grow because yes yeah because" }, { "start": 1831.36, "end": 1836.3999999999999, "text": " it's just hard like in the YouTube game to like differentiate yourself and I" }, { "start": 1836.3999999999999, "end": 1843.24, "text": " felt like this was a way I could do that yeah I mean it's it's it is true right" }, { "start": 1843.24, "end": 1848, "text": " I'm not sure that these people are correct like it's for sure good advice" }, { "start": 1848, "end": 1853.92, "text": " to credit the people whose work you present but I myself I'm not sure if" }, { "start": 1853.92, "end": 1859.6, "text": " they are correct when they say you would have been just as popular and and and" }, { "start": 1859.6, "end": 1865.12, "text": " just as as you know well respected by the people who think you really did do" }, { "start": 1865.12, "end": 1871.52, "text": " these things right I'm not sure as you say how how YouTube works is it's a it's" }, { "start": 1871.52, "end": 1879.4, "text": " tough game and you at some some point this this all came and together also" }, { "start": 1879.4, "end": 1886.56, "text": " with your with your course which we can talk about in a second but specifically" }, { "start": 1886.56, "end": 1891.92, "text": " with respect to the code and and to the paper you made an apology video which" }, { "start": 1891.92, "end": 1896.36, "text": " was fairly lengthy it was not your usual style it was just kind of you standing" }, { "start": 1896.36, "end": 1901, "text": " there and you you essentially said straightforwardly you know here's what I" }, { "start": 1901, "end": 1906.08, "text": " did I credit I didn't credit these people enough just took their code and" }, { "start": 1906.08, "end": 1916.48, "text": " and so on and then people noticed that only like a few days later in your next" }, { "start": 1916.48, "end": 1922.24, "text": " videos essentially you did the same thing like there there were slides where" }, { "start": 1922.24, "end": 1928.72, "text": " where you you took from somewhere and so on is it I don't know is it fair to say" }, { "start": 1928.72, "end": 1933.8, "text": " and so you made these videos you made the apology videos then you immediately" }, { "start": 1933.8, "end": 1938.84, "text": " started uploading videos and before you really quit and you quit for a long time" }, { "start": 1938.84, "end": 1945.72, "text": " after that what was what were sort of the last videos like for you or you know" }, { "start": 1945.72, "end": 1950.92, "text": " like after let's say the apology video and so on but before you quit what was" }, { "start": 1950.92, "end": 1956.28, "text": " that like you're asking about the time between when I quit to the apology video" }, { "start": 1956.28, "end": 1963.28, "text": " what that was like no from the apology video to the point where you it didn't" }, { "start": 1963.28, "end": 1968.84, "text": " upload for for months after that or uploaded very infrequently was how did" }, { "start": 1968.84, "end": 1973.28, "text": " you feel at the point like of the apology video and and a little after that" }, { "start": 1973.28, "end": 1977.6, "text": " yeah well I mean I felt pretty bad generally I'm a pretty happy guy as you" }, { "start": 1977.6, "end": 1982.24, "text": " can surmise but I can say that's the only time in my life where I've ever" }, { "start": 1982.24, "end": 1990.1200000000001, "text": " felt somewhat suicidal like just for a little bit and yeah I didn't 
know how to" }, { "start": 1990.1200000000001, "end": 1993.92, "text": " deal with that level of sadness so I tried about a bunch of different things" }, { "start": 1993.92, "end": 2005.88, "text": " like I moved from LA I got a dog I just I don't know did some soul-searching some" }, { "start": 2005.88, "end": 2011.08, "text": " meditation just try that a bunch of I tried virtual reality like escapism as" }, { "start": 2011.08, "end": 2018.3999999999999, "text": " well it was a pretty tough time as you can imagine but in terms of like I yeah" }, { "start": 2018.3999999999999, "end": 2023.6399999999999, "text": " doing the same thing again I guess I did but I didn't think that I was like maybe" }, { "start": 2023.6399999999999, "end": 2028.6, "text": " there's something wrong with me like I just I don't know I don't know like I" }, { "start": 2028.6, "end": 2032.6, "text": " needed I need some kind of mentor to be like here is how you credit people in a" }, { "start": 2032.6, "end": 2037.1599999999999, "text": " YouTube video about machine learning and here is what people are going to find" }, { "start": 2037.16, "end": 2045.0800000000002, "text": " acceptable yeah did you did you think at some point maybe I can turn this around" }, { "start": 2045.0800000000002, "end": 2051, "text": " you know maybe I can because because you were at the beginning when when people" }, { "start": 2051, "end": 2055.52, "text": " brought these things up you were I saw just a bunch of Twitter posts and so on" }, { "start": 2055.52, "end": 2062.88, "text": " sort of discrediting them denying them like no I never never did anything like" }, { "start": 2062.88, "end": 2068.7200000000003, "text": " this was there a point where you thought you know people are getting iffy maybe I" }, { "start": 2068.7200000000003, "end": 2074.6400000000003, "text": " can turn it around yeah yeah there was I mean I tried everything I was like maybe" }, { "start": 2074.6400000000003, "end": 2079.2000000000003, "text": " I don't need to apologize maybe I do that would make it better or worse maybe" }, { "start": 2079.2000000000003, "end": 2085.7200000000003, "text": " I should just deny deny deny like politicians do maybe I should you know" }, { "start": 2085.7200000000003, "end": 2091.7200000000003, "text": " make fun of you know make like reply videos to other youtubers who made" }, { "start": 2091.72, "end": 2098.68, "text": " videos about me there's a lot of things that I thought I could do eventually I" }, { "start": 2098.68, "end": 2103.48, "text": " decided and I don't even know if that was the best thing for my brand I know" }, { "start": 2103.48, "end": 2108.04, "text": " it was the right thing to do to make an apology video morally but I don't know" }, { "start": 2108.04, "end": 2116.7999999999997, "text": " if that actually helped me or hurt me I still don't know to this day yeah was it" }, { "start": 2116.8, "end": 2122.88, "text": " so I think if I hear this a little bit out of you that there was a time where" }, { "start": 2122.88, "end": 2128.88, "text": " you were still mainly thinking brand mainly thinking you know which actions" }, { "start": 2128.88, "end": 2133.96, "text": " are gonna let me still reach like the million subscribers or continue on and" }, { "start": 2133.96, "end": 2139.84, "text": " then was there a particular point where you thought no actually you know let's" }, { "start": 2139.84, "end": 2145.96, "text": " let's do an apology let's let's tone it down was there was there a time when you" }, { 
"start": 2145.96, "end": 2151.56, "text": " thought when you consciously let go maybe of the million subscriber goal there" }, { "start": 2151.56, "end": 2163.2, "text": " was there was I think it just came from introspection and seeing how like the" }, { "start": 2163.2, "end": 2169.52, "text": " the amount of I don't even know what you want to call it feedback negative" }, { "start": 2169.52, "end": 2178.32, "text": " feedback or criticism it just wouldn't go away it was just there and it didn't" }, { "start": 2178.32, "end": 2184.16, "text": " really die down I thought I mean there's really nothing else I can do here I need" }, { "start": 2184.16, "end": 2188.84, "text": " to just accept defeat to wave the white flag part of my brand is just like you" }, { "start": 2188.84, "end": 2198.4, "text": " know super confidence and always being okay with being like haters or whatever" }, { "start": 2198.4, "end": 2202.48, "text": " not even yes but you know I mean and like there was a point where I was like" }, { "start": 2202.48, "end": 2208.92, "text": " I you know I'll just apologize and then I also felt you know near the end I did" }, { "start": 2208.92, "end": 2213.12, "text": " feel I started to feel like guilty because you know some people said that" }, { "start": 2213.12, "end": 2220.84, "text": " he wasn't just that I plagiarized but that I was actually doing the opposite of" }, { "start": 2220.84, "end": 2226.76, "text": " like accelerating research in the space like this sets a bad example for people" }, { "start": 2226.76, "end": 2230.8, "text": " and this actually gets in the way of research and it's gonna slow it down and" }, { "start": 2230.8, "end": 2234.6000000000004, "text": " that's what I was like okay that's if that's true that's really bad and" }, { "start": 2234.6000000000004, "end": 2243.88, "text": " honestly I like I was reading too many comments as well but yeah I mean I still" }, { "start": 2243.88, "end": 2248.6800000000003, "text": " don't know to this day like whether or not the apology video helped or hurt my" }, { "start": 2248.6800000000003, "end": 2255.0800000000004, "text": " brand in fact if I had to bet I would say probably hurt my brand but you know" }, { "start": 2255.08, "end": 2261.48, "text": " at least I felt better afterwards and I guess that's what mattered in the end" }, { "start": 2261.48, "end": 2268.84, "text": " yeah I mean I think few people really understand what what it's like to get" }, { "start": 2268.84, "end": 2274.7999999999997, "text": " YouTube comments on a on on a bit of a scale and and and people there will" }, { "start": 2274.7999999999997, "end": 2279.96, "text": " there will always be people criticizing and hating especially I guess you with" }, { "start": 2279.96, "end": 2285.2400000000002, "text": " very little credentials in the field I guess you have always had people saying" }, { "start": 2285.2400000000002, "end": 2291.64, "text": " you know this is a maybe this is a clown has no credentials whatnot and it" }, { "start": 2291.64, "end": 2297.32, "text": " didn't help that you copied code because then you not authoring the code also" }, { "start": 2297.32, "end": 2302.7200000000003, "text": " meant you knew less about the code which might also be sometimes shine through a" }, { "start": 2302.7200000000003, "end": 2307.7200000000003, "text": " bit in your videos but I think you with time you you sort of learn to tune out" }, { "start": 2307.72, "end": 2314.3599999999997, "text": " the haters because you're gonna get them 
anyway but then sometimes they're right" }, { "start": 2314.3599999999997, "end": 2321, "text": " right and and I think it's I think you know I don't think and I don't think" }, { "start": 2321, "end": 2328.3999999999996, "text": " many people in the like public sphere get like have a good good understanding" }, { "start": 2328.3999999999996, "end": 2332.2, "text": " of when should I listen to the to the bad comments and when not because" }, { "start": 2332.2, "end": 2339.16, "text": " usually it's no right so right yeah so then then this this was this was very" }, { "start": 2339.16, "end": 2345.6, "text": " shortly people really complaining about plagiarized code and this this paper" }, { "start": 2345.6, "end": 2352.16, "text": " which was one of the sort of big points raised and then in a very short like" }, { "start": 2352.16, "end": 2357.2799999999997, "text": " within a month or so there was also the issue of a course you offered right so" }, { "start": 2357.28, "end": 2363.6400000000003, "text": " you you maybe can you tell a bit how this course even came to be you you made" }, { "start": 2363.6400000000003, "end": 2368.44, "text": " videos at an insane rate how did you how did you think you could also offer a" }, { "start": 2368.44, "end": 2375.1200000000003, "text": " course and why yeah I think it comes down to two things one I felt like I" }, { "start": 2375.1200000000003, "end": 2380.88, "text": " could do more than what I actually was capable of doing because I my ego was so" }, { "start": 2380.88, "end": 2386.44, "text": " inflated at the time so I that's one the other is just looking at the metrics" }, { "start": 2386.44, "end": 2391.12, "text": " generally the videos that were about making money were the ones that did the" }, { "start": 2391.12, "end": 2396.56, "text": " best and so I started to follow that trend and tailor my content in that" }, { "start": 2396.56, "end": 2400.28, "text": " direction as opposed to what I would have done years ago which is like how do" }, { "start": 2400.28, "end": 2403.92, "text": " we solve them you know Millennium problems like poverty reduction and water" }, { "start": 2403.92, "end": 2408.4, "text": " cleanliness and environmental sustainability things that you know" }, { "start": 2408.4, "end": 2413.2000000000003, "text": " actually matter the course was around that like well if people want to make" }, { "start": 2413.2, "end": 2418.08, "text": " money let me make a course around making money with machine learn that was what" }, { "start": 2418.08, "end": 2421.9199999999996, "text": " is called right it was called make money with machine learning literally that is" }, { "start": 2421.9199999999996, "end": 2427.96, "text": " a hell of a clickbait yeah I the most click baity exactly what's gonna get the" }, { "start": 2427.96, "end": 2435.64, "text": " views title mm-hmm and it was supposed to be a paid course it was I think about" }, { "start": 2435.64, "end": 2442.48, "text": " $200 per student and the issue the first issue was that you claimed it was like a" }, { "start": 2442.48, "end": 2447.96, "text": " limited entry course with personal supervision now both of these things" }, { "start": 2447.96, "end": 2454.04, "text": " didn't really turn out to be accurate as as you promised so there was an issue" }, { "start": 2454.04, "end": 2462.76, "text": " of you said I only let in 500 people but then you let in twice 500 people so you" }, { "start": 2462.76, "end": 2468.76, "text": " you had two different slack work workspaces with 
twice the five some I" }, { "start": 2468.76, "end": 2474.96, "text": " think one even had 700 but there's a few extra ones I guess and then also there" }, { "start": 2474.96, "end": 2480.88, "text": " was apparently not really like you can't you can't personally supervise a thousand" }, { "start": 2480.88, "end": 2487.28, "text": " two hundredths like it's impossible did you plan on these things already or did" }, { "start": 2487.28, "end": 2493.96, "text": " they just sort of how did they happen I didn't plan on them I did think that I" }, { "start": 2493.96, "end": 2500.56, "text": " would have 500 when I put the course out there were so many signed up so fast and" }, { "start": 2500.56, "end": 2504.56, "text": " I got greedy I was like I'm just gonna let this keep on going let's see how" }, { "start": 2504.56, "end": 2508, "text": " many people I can sign up for this and I thought yeah I can just have two" }, { "start": 2508, "end": 2515.88, "text": " different cohorts and you know I had people volunteer to help at the time you" }, { "start": 2515.88, "end": 2523.96, "text": " help me like as I guess you call them teaching assistants and yeah but they" }, { "start": 2523.96, "end": 2529.6, "text": " they how many roughly how many TAs did you have do you remember there was at" }, { "start": 2529.6, "end": 2534.78, "text": " least one there might have been written that there was at least one yeah but" }, { "start": 2534.78, "end": 2539.36, "text": " they they sort of did they quit after a while or did they stick with you no they" }, { "start": 2539.36, "end": 2544.4, "text": " actually they were amazing they stuck the whole yeah yeah okay but they were" }, { "start": 2544.4, "end": 2552, "text": " they were volunteers yeah yeah okay so it was 200 bucks and like one two three" }, { "start": 2552, "end": 2562.7200000000003, "text": " maybe volunteer TAs for a thousand two hundred students and you did you plan on" }, { "start": 2562.7200000000003, "end": 2568.84, "text": " ramp did you realize at some point I can't provide personal feedback to all" }, { "start": 2568.84, "end": 2574, "text": " of these students or did you just think you know whatever I'll I'll just I can" }, { "start": 2574, "end": 2580.48, "text": " do this or I did I did realize I was in over my head I I think it was like week" }, { "start": 2580.48, "end": 2587.36, "text": " two or week three that it really started to dawn on me um and then I think I think" }, { "start": 2587.36, "end": 2591.68, "text": " it was week four that some of the students started you're going to social" }, { "start": 2591.68, "end": 2596.64, "text": " media um and then everything came crashing down in the middle of the" }, { "start": 2596.64, "end": 2604.2799999999997, "text": " course and then I had to give out a bunch of refunds but still had to finish" }, { "start": 2604.2799999999997, "end": 2608, "text": " the course to the end it was a ten week course so we still have to keep going" }, { "start": 2608, "end": 2615.96, "text": " for five weeks after that um but yeah I mean there were still you know hundreds" }, { "start": 2615.96, "end": 2621.06, "text": " of students who stayed in the course I don't know that like the register made an" }, { "start": 2621.06, "end": 2625.48, "text": " article on this but they didn't say like it's not like everybody just dropped out" }, { "start": 2625.48, "end": 2629.6, "text": " all of a sudden yeah so people in the course I still had some responsibility" }, { "start": 2629.6, "end": 2636.2400000000002, 
"text": " yeah so I maybe briefly summarize these these articles and you know they're" }, { "start": 2636.2400000000002, "end": 2640.88, "text": " they're written from a certain angle right and that's that's exactly why I" }, { "start": 2640.88, "end": 2646.96, "text": " also wanted to get your just your side of of this story so these articles they" }, { "start": 2646.96, "end": 2652.2, "text": " claim for example that you know people started noticing there was no personal" }, { "start": 2652.2, "end": 2658.08, "text": " supervision they complained you you you never essentially showed up in the slack" }, { "start": 2658.08, "end": 2665.24, "text": " workspaces well or infrequently they all got the same feedback on their exercise" }, { "start": 2665.24, "end": 2671.46, "text": " so that was the sort of like a copy paste of like good job in it was it was" }, { "start": 2671.46, "end": 2679.56, "text": " like that then people started demanding refunds but were some claim they were" }, { "start": 2679.56, "end": 2688.08, "text": " even banned like for demanding refunds then it was also claimed that you" }, { "start": 2688.08, "end": 2696.56, "text": " eventually said there was a refund period which was for 14 days but the" }, { "start": 2696.56, "end": 2702.16, "text": " article claim you quietly introduced a refund period 30 days after the course" }, { "start": 2702.16, "end": 2708.82, "text": " started so it was essentially impossible for anyone to have known because there" }, { "start": 2708.82, "end": 2714.2400000000002, "text": " was no refund policy at the beginning you introduced a 14-day refund period 30" }, { "start": 2714.2400000000002, "end": 2719.32, "text": " days after the code the course started you then and then you know once once" }, { "start": 2719.32, "end": 2725.2400000000002, "text": " people discovered that there were two different cohorts and so on how what of" }, { "start": 2725.2400000000002, "end": 2734.0800000000004, "text": " these articles is is true and what is overdone so there are also several" }, { "start": 2734.08, "end": 2739.7599999999998, "text": " several tweets of students that said yeah people claiming refunds were were" }, { "start": 2739.7599999999998, "end": 2746.6, "text": " banned or or that the fact that you introduced this refund period how did" }, { "start": 2746.6, "end": 2752.72, "text": " this go down from your perspective so Paul that is true what I dope I think" }, { "start": 2752.72, "end": 2759.7599999999998, "text": " was overblown is the banning part I'd never personally banned anybody but I" }, { "start": 2759.76, "end": 2764.5600000000004, "text": " can't speak to whether or not one of the TAs may or may not have done that I love" }, { "start": 2764.5600000000004, "end": 2771.0400000000004, "text": " yeah but yeah everything else like definitely on point like it's all a part" }, { "start": 2771.0400000000004, "end": 2781.96, "text": " of the the story yeah can't refute any of that yeah and did you did you get" }, { "start": 2781.96, "end": 2787, "text": " did you get scared at any point or did you were you still in this you because" }, { "start": 2787, "end": 2792.64, "text": " all of a sudden people and their money are involved right it's not I mean 200" }, { "start": 2792.64, "end": 2798.96, "text": " 200 bucks is not that much for maybe an American but it is a lot for maybe" }, { "start": 2798.96, "end": 2805, "text": " someone in India or something you know some place like this did you get at some" }, { "start": 2805, "end": 
2811.56, "text": " point you know scared because like wow there's actual money here that I may" }, { "start": 2811.56, "end": 2817.2799999999997, "text": " have to pay back or yeah I mean I got scared for a lot of reasons I was scared" }, { "start": 2817.2799999999997, "end": 2823, "text": " that yeah I would like have to go through some kind of lawsuits people were" }, { "start": 2823, "end": 2826.72, "text": " saying like oh there's gonna be a lawsuit you you're lucky you're not in" }, { "start": 2826.72, "end": 2834.2999999999997, "text": " jail and stuff and yeah about the refund stuff like that 30-day versus sneaking it" }, { "start": 2834.2999999999997, "end": 2838.2799999999997, "text": " in and I'm sure I'm sure I did that I honestly don't remember it now like I'm" }, { "start": 2838.28, "end": 2843.1200000000003, "text": " sure like that's probably what happened but I mean when I look at it now I'm" }, { "start": 2843.1200000000003, "end": 2848.76, "text": " like heavy when you charge money you need to be very upfront with people in" }, { "start": 2848.76, "end": 2852.96, "text": " like that's how you make a sustainable product I wasn't thinking very" }, { "start": 2852.96, "end": 2859.6400000000003, "text": " sustainably a long term it was a very short-term thing but I was scared yeah" }, { "start": 2859.6400000000003, "end": 2866.44, "text": " I was here did you but but your thought was still I can educate these people" }, { "start": 2866.44, "end": 2871.6, "text": " even if I can't give them personal supervision or or was it was it all like" }, { "start": 2871.6, "end": 2877.12, "text": " you know like I'm gonna get their 200 bucks I'm gonna tell them something so" }, { "start": 2877.12, "end": 2882.08, "text": " they can't complain or did you still think you know I can't like the course" }, { "start": 2882.08, "end": 2886.8, "text": " has value for the people who are in it no I did think the course had value I" }, { "start": 2886.8, "end": 2894.36, "text": " mean it's weird because it's like I'm conflating my bias against academia and" }, { "start": 2894.36, "end": 2899.28, "text": " the traditional learning path with this course that is yeah it's got a super" }, { "start": 2899.28, "end": 2908.28, "text": " clickbait title but you know I guess I didn't fully appreciate what online" }, { "start": 2908.28, "end": 2911.8, "text": " learning and I'm still learning what online learning really can be in the" }, { "start": 2911.8, "end": 2916.8, "text": " future I thought well you know you don't need to be in a physical classroom to" }, { "start": 2916.8, "end": 2919.96, "text": " learn like I think we can all agree to that now like you can watch videos online" }, { "start": 2919.96, "end": 2928.64, "text": " but also you know what is personal supervision and does there need to be x" }, { "start": 2928.64, "end": 2931.92, "text": " y and z for someone to be able to say I learned a lot of learning comes from" }, { "start": 2931.92, "end": 2939.2, "text": " self-motivation and you know education is not a scarce resource it's it's" }, { "start": 2939.2, "end": 2944.76, "text": " abundant it's the desire to learn that is scarce and perhaps that alone I felt" }, { "start": 2944.76, "end": 2948.2, "text": " justified like if I could get them to want to learn these things that would be" }, { "start": 2948.2, "end": 2953, "text": " enough um at the time I felt that way now I know like what would I change" }, { "start": 2953, "end": 2956.3999999999996, "text": " differently besides the 
obvious part like the 30-day" }, { "start": 2956.3999999999996, "end": 2962.6, "text": " refund from the start is to just hire help like if I were to give advice to" }, { "start": 2962.6, "end": 2966.8399999999997, "text": " anybody doing anything like this like any youtuber who wants to make a course" }, { "start": 2966.8399999999997, "end": 2972.02, "text": " like hire help step one hire help then figure everything else out don't plan it" }, { "start": 2972.02, "end": 2979.24, "text": " out yourself it's too big it's too big at scale for one person to do what what" }, { "start": 2979.24, "end": 2985.16, "text": " happened did you end up giving refunds to the people or I did did you did you" }, { "start": 2985.16, "end": 2992.12, "text": " still have enough money to give the refunds haha um I yeah I did what what" }, { "start": 2992.12, "end": 2997.72, "text": " happened to the money like I can imagine you get 200 bucks a thousand people" }, { "start": 2997.72, "end": 3006.8799999999997, "text": " that's like 200k how where did that go did you end up plus or minus or did you" }, { "start": 3006.8799999999997, "end": 3012.2799999999997, "text": " spend on refunds did any lawsuit result or there were no lawsuits everybody who" }, { "start": 3012.2799999999997, "end": 3016.3999999999996, "text": " wanted a refund got a refund there were still a bunch of students who completed" }, { "start": 3016.3999999999996, "end": 3021.3599999999997, "text": " the course to the end like and I'm very thankful like despite all the drama they" }, { "start": 3021.3599999999997, "end": 3026.4399999999996, "text": " were loyal to the to the thing and so was it it wasn't negative it was" }, { "start": 3026.44, "end": 3034.36, "text": " positive it wasn't nearly like probably like 10% of what I made at the start and" }, { "start": 3034.36, "end": 3042.64, "text": " and then you know I think this as I said this was within like a month of of" }, { "start": 3042.64, "end": 3047.12, "text": " everything going down you you were making lots of videos the paper the course" }, { "start": 3047.12, "end": 3054.48, "text": " all at the same time and then everything everything comes crashing and I think" }, { "start": 3054.48, "end": 3061.12, "text": " it's one it's one thing when you feel bad because life is is crap right" }, { "start": 3061.12, "end": 3067.16, "text": " because something happened to you that's bad and you know but it's it's an" }, { "start": 3067.16, "end": 3073.84, "text": " entirely different thing when you're you you know you're responsible for it right" }, { "start": 3073.84, "end": 3080.16, "text": " like that is that is worse that is like my life is bad and I'm to blame and any" }, { "start": 3080.16, "end": 3089.04, "text": " you know like it's it's my my doing right like was this I guess this was your" }, { "start": 3089.04, "end": 3093.12, "text": " experience right it you know whether you thought it was good or bad it was like" }, { "start": 3093.12, "end": 3098.52, "text": " my life is crap and I'm responsible how did you what did you do at that point" }, { "start": 3098.52, "end": 3107.44, "text": " you said bit of soul-searching and so on how did you decide to to go forward so I" }, { "start": 3107.44, "end": 3116.76, "text": " moved back to San Francisco I was there for a few months I basically invested in" }, { "start": 3116.76, "end": 3121.92, "text": " my friends and family talked to them that helped got really into virtual" }, { "start": 3121.92, "end": 3126.52, "text": " reality that helped 
as well like disassociating from this reality bring it" }, { "start": 3126.52, "end": 3132.52, "text": " to a virtual world where I was anonymous and logged off of all social media as" }, { "start": 3132.52, "end": 3137.7599999999998, "text": " well so that helped as well and kind of just gave up with the whole like you" }, { "start": 3137.7599999999998, "end": 3146.48, "text": " know million subscriber path that I was on and what else yeah just oh yeah focus" }, { "start": 3146.48, "end": 3151.4, "text": " on my health as well like I was like I'm just gonna like try to focus on being" }, { "start": 3151.4, "end": 3154.52, "text": " healthy because I can control that I can't control what people think but I" }, { "start": 3154.52, "end": 3161.6, "text": " can control my health so that helped you made it you made a quite astounding body" }, { "start": 3161.6, "end": 3166.88, "text": " fitness transformation as well you were at the end you were like in 2019 when it" }, { "start": 3166.88, "end": 3173.08, "text": " all crashed you were kind of a like a chubster like right now and I saw like a" }, { "start": 3173.08, "end": 3179.04, "text": " before-after picture was this a conscious effort by you or it was it was" }, { "start": 3179.04, "end": 3184.88, "text": " yeah cuz like part of like what you know having a desire to live is to like be" }, { "start": 3184.88, "end": 3189.12, "text": " able to look at the mirror and you know say like for me at least like hey this" }, { "start": 3189.12, "end": 3193.2, "text": " is an attractive guy so that you know it's kind of vain but it definitely" }, { "start": 3193.2, "end": 3203.12, "text": " helped for sure like that yeah and so you eventually you got let's say back up" }, { "start": 3203.12, "end": 3207.96, "text": " on your on your feet after all of this what was your or what is your current" }, { "start": 3207.96, "end": 3215, "text": " plan or what are you doing right now you've you've posted a few videos again" }, { "start": 3215, "end": 3221.2, "text": " here and there but I'm not so maybe you know what's what are you doing" }, { "start": 3221.2, "end": 3226.8, "text": " essentially so um yeah making videos along this series called AlphaCare" }, { "start": 3226.8, "end": 3232.22, "text": " about health care in AI which is kind of always been like my the industry I'm" }, { "start": 3232.22, "end": 3236.88, "text": " most excited about for AI like applicability like oh we can make people" }, { "start": 3236.88, "end": 3240.4, "text": " healthier so doing that I'm almost done with a book I've been writing for the" }, { "start": 3240.4, "end": 3248.28, "text": " past three months which it's gonna be a free ebook not gonna charge for it so" }, { "start": 3248.28, "end": 3252.04, "text": " that's been interesting that's also on like deep learning for health care apps" }, { "start": 3252.04, "end": 3259.08, "text": " for beginners with examples in there and once I released that all of this will be" }, { "start": 3259.08, "end": 3264.6, "text": " done in like three weeks probably from now like the series the video series and" }, { "start": 3264.6, "end": 3270.4, "text": " the book then I have to figure out what the next thing I'm going to do is what" }, { "start": 3270.4, "end": 3275.92, "text": " I'm most excited about currently is paying people to be healthy there's this" }, { "start": 3275.92, "end": 3280.12, "text": " app called Sweatcoin it's out of the United Kingdom it pays people in" }, { "start": 3280.12, "end": 3285, "text": " 
cryptocurrency to walk I find that really really interesting because you" }, { "start": 3285, "end": 3289.44, "text": " know two of the most meaningful things to me are keeping people healthy and" }, { "start": 3289.44, "end": 3294.12, "text": " reducing poverty and this kind of does both at the same time so I'm wondering" }, { "start": 3294.12, "end": 3297.56, "text": " if there's a way to create what's called a DAO a distributed autonomous" }, { "start": 3297.56, "end": 3303.3199999999997, "text": " organization around health care and health data and keeping people healthy" }, { "start": 3303.3199999999997, "end": 3308.2, "text": " paying them somehow with cryptocurrency to stay healthy I just use a service" }, { "start": 3308.2, "end": 3313.8599999999997, "text": " called InsideTracker which cost me like 500 bucks way too expensive a service" }, { "start": 3313.8599999999997, "end": 3318.06, "text": " for most people to use one but I got a blood test done two weeks ago using the" }, { "start": 3318.06, "end": 3322.56, "text": " service they took 43 biomarkers of mine and that now I have a bunch of health" }, { "start": 3322.56, "end": 3325.88, "text": " data like my cholesterol level is apparently way too high because I eat" }, { "start": 3325.88, "end": 3330.7999999999997, "text": " way too much red meat so I've got to cut down on that but something like this if" }, { "start": 3330.7999999999997, "end": 3336.72, "text": " we could turn into um like a free service that keeps people healthy and" }, { "start": 3336.72, "end": 3339.36, "text": " actually not just free but pay them money and then somehow turn it into a" }, { "start": 3339.36, "end": 3343.6, "text": " business or also the service makes money that'd be really cool so I'm kind of" }, { "start": 3343.6, "end": 3347.58, "text": " like thinking like I'm gonna start some kind of company around that or a DAO" }, { "start": 3347.58, "end": 3353.36, "text": " I should say I'm not exactly sure what it looks like though I mean there this is" }, { "start": 3353.36, "end": 3358.16, "text": " happening in part already with I don't know we have we have like high taxes on" }, { "start": 3358.16, "end": 3364.4, "text": " cigarettes right so essentially the the smokers they finance a little bit the" }, { "start": 3364.4, "end": 3369.48, "text": " non smokers via taxes some health insurances they already give discounts if" }, { "start": 3369.48, "end": 3375.6, "text": " you do like regularly go to it to a gym or something so I'm like something like" }, { "start": 3375.6, "end": 3380.44, "text": " this is definitely in the in the realm of possibilities now with respect to" }, { "start": 3380.44, "end": 3385.42, "text": " cryptocurrency is this a meme or was there actually a Siraj Coin at some" }, { "start": 3385.42, "end": 3391.64, "text": " point I haven't found anything like what what was that yeah that was a real thing" }, { "start": 3391.64, "end": 3395.2799999999997, "text": " I launched a cryptocurrency I think two years ago or something three I don't know" }, { "start": 3395.2799999999997, "end": 3402.48, "text": " called Siraj Coin and people really didn't like it so I took down the video" }, { "start": 3402.48, "end": 3409.76, "text": " I'm like there's still you could find it if you really search Siraj Coin okay but" }, { "start": 3409.76, "end": 3414.36, "text": " it was just it was more like for a video or did you think you know maybe I could" }, { "start": 3414.36, "end": 3419.04, "text": " make some money with launching my own 
cryptocurrency yeah both I mean this was" }, { "start": 3419.04, "end": 3425, "text": " at the height of the ICO craze yeah and everybody was doing it and I felt wow" }, { "start": 3425, "end": 3429.6, "text": " long with I'm gonna do it too here we go Siraj right right and the idea was that" }, { "start": 3429.6, "end": 3435.24, "text": " you can with Siraj Coin you can get a meeting like buy a meeting with me or" }, { "start": 3435.24, "end": 3439.88, "text": " like make a music video with me just you know I am the scarce resource like in" }, { "start": 3439.88, "end": 3443.8199999999997, "text": " these cryptos there is a scarce resource you create a token the token is how you access" }, { "start": 3443.8199999999997, "end": 3450.6, "text": " the scarce resource yeah and yeah I mean I'm glad I did it still like nobody got" }, { "start": 3450.6, "end": 3453.92, "text": " hurt from that it was just like a fun experiment and I learned a lot from it" }, { "start": 3453.92, "end": 3458.36, "text": " as well like I still think it's an interesting idea like I do think that" }, { "start": 3458.36, "end": 3468.2000000000003, "text": " we're gonna see more individuals create tokens around themselves and yeah I mean" }, { "start": 3468.2000000000003, "end": 3472.88, "text": " yes a couple of NFTs work this way right that there is some kind of like a" }, { "start": 3472.88, "end": 3479.2400000000002, "text": " meeting with a famous person tagged onto it or something like this yeah so with" }, { "start": 3479.2400000000002, "end": 3486.44, "text": " with respect to your your book and your new set of videos and you know I guess" }, { "start": 3486.44, "end": 3493.32, "text": " that the question everyone asks is is there still how do you handle citations" }, { "start": 3493.32, "end": 3498.28, "text": " plagiarism things like this are you are you toning it down or are you like extra" }, { "start": 3498.28, "end": 3503.92, "text": " super duper careful or what is your sort of how do you approach this topic I" }, { "start": 3503.92, "end": 3509.36, "text": " guess you're in a bit of a special situation not not only are you held to" }, { "start": 3509.36, "end": 3513.32, "text": " the same standards but now you know people read your name they're probably" }, { "start": 3513.32, "end": 3518.6400000000003, "text": " the first thing they do is put something into a plagiarism checker" }, { "start": 3518.6400000000003, "end": 3523.7200000000003, "text": " yeah I'm super careful I put it in the video description not just like the" }, { "start": 3523.7200000000003, "end": 3533.4, "text": " GitHub I say it verbally yeah I just try to be more careful yeah and the what's" }, { "start": 3533.4, "end": 3537, "text": " the book about can you is there is it something you can disclose already or" }, { "start": 3537, "end": 3542.96, "text": " yeah it's on bioinformatics for beginners I'm also a beginner to bioinformatics" }, { "start": 3542.96, "end": 3548.16, "text": " I'm really interested in multi-omics like all the omics genomics epigenomics" }, { "start": 3548.16, "end": 3552.84, "text": " transcriptomics and just thinking about how we can integrate all of these" }, { "start": 3552.84, "end": 3557.92, "text": " different types of data to make both diagnostic and prognostic predictions" }, { "start": 3557.92, "end": 3563.8, "text": " for people and I think that's the future I'm really interested in reversing the" }, { "start": 3563.8, "end": 3569.06, "text": " aging process David Sinclair at Harvard has a great book on this 
called why we" }, { "start": 3569.06, "end": 3573.16, "text": " age and why we don't have to he has a podcast that he's gonna release next" }, { "start": 3573.16, "end": 3577.12, "text": " year on this topic and I just think that there's a great space for data science" }, { "start": 3577.12, "end": 3582.6, "text": " and data analysis enthusiasts to make a contribution in this field because I do" }, { "start": 3582.6, "end": 3585.52, "text": " think the future of healthcare isn't going to be targeting individual" }, { "start": 3585.52, "end": 3590.84, "text": " diseases like Alzheimer's or heart disease but rather that is the disease" }, { "start": 3590.84, "end": 3597.04, "text": " that is upstream of everything else aging itself that's it I mean it's a" }, { "start": 3597.04, "end": 3604.36, "text": " tough task but yeah it's a it's a I guess it's a cool cool outlook I it" }, { "start": 3604.36, "end": 3609.2, "text": " seems like a little bit of a rebirth it you know you told how you were at the" }, { "start": 3609.2, "end": 3613.42, "text": " beginning of your video career thinking if I could just you know make video" }, { "start": 3613.42, "end": 3620.8, "text": " about these cool topics and so on and it it almost feels or at least to me it" }, { "start": 3620.8, "end": 3626.4, "text": " sounds like it's got a little bit of that same spirit again I'd like to think" }, { "start": 3626.4, "end": 3631.88, "text": " so I mean I I don't have the same I don't know I don't have the same level" }, { "start": 3631.88, "end": 3636.8, "text": " of or maybe I just feel this way I don't have the same like energy that I did" }, { "start": 3636.8, "end": 3643.6800000000003, "text": " back then um where it's just like a I have to do this or else like the world" }, { "start": 3643.6800000000003, "end": 3649.28, "text": " is gonna end like that level of conviction I just feel like I mean I'm" }, { "start": 3649.28, "end": 3653.12, "text": " really interested in biology in general I don't think I'm gonna get I honestly" }, { "start": 3653.12, "end": 3658.44, "text": " don't think this is gonna give me the level of fame or opportunity that" }, { "start": 3658.44, "end": 3662.88, "text": " talking about deep learning from 2016 to 2020 did it's just something I'm" }, { "start": 3662.88, "end": 3667.16, "text": " interested in and I'm okay like not reaching a million I mean it's probably" }, { "start": 3667.16, "end": 3672.3199999999997, "text": " never gonna reach a million subscribers I just want to be interested in this" }, { "start": 3672.3199999999997, "end": 3677.12, "text": " and even if and you know if this like company doesn't work out I'm happy to" }, { "start": 3677.12, "end": 3680.64, "text": " like take a job somewhere and just like learn about bioinformatics full-time as" }, { "start": 3680.64, "end": 3689.72, "text": " a bioinformatician heroist or something yeah well in yeah I mean in many ways I" }, { "start": 3689.72, "end": 3695.3199999999997, "text": " I've told you that this this privately but in many ways you were you're sort" }, { "start": 3695.3199999999997, "end": 3701.04, "text": " of with with all of this happening you were still sort of a the pioneer of what" }, { "start": 3701.04, "end": 3708.2799999999997, "text": " many of us other ML youtubers essentially that the path we go is you" }, { "start": 3708.28, "end": 3713.1200000000003, "text": " you made it it kind of like I remember when I started making videos there was" }, { "start": 3713.1200000000003, "end": 
3718.6800000000003, "text": " like nothing and when you started there must have been like really really" }, { "start": 3718.6800000000003, "end": 3724.6800000000003, "text": " nothing right and you know that for for for all the things I think it took it" }, { "start": 3724.6800000000003, "end": 3731.92, "text": " took balls to to go that way and and you you certainly hustled even if it led in" }, { "start": 3731.92, "end": 3738.16, "text": " into like a wrong direction do you have I don't know do you have do you have" }, { "start": 3738.16, "end": 3742.92, "text": " because I know that there are quite a number of people who look at maybe you" }, { "start": 3742.92, "end": 3748.92, "text": " also me other youtubers a lot of people are starting their podcasts nowadays a" }, { "start": 3748.92, "end": 3754.92, "text": " lot of people also start channels like mine or or similar to mine any advice" }, { "start": 3754.92, "end": 3762.04, "text": " you have for people starting out in in the in the sphere of online education or" }, { "start": 3762.04, "end": 3768.6, "text": " what might what we might call being an influencer anything like this yeah I" }, { "start": 3768.6, "end": 3775.08, "text": " would say that you this is not something you do as a side job like a lot of" }, { "start": 3775.08, "end": 3778.44, "text": " people you know kind of have to because they need a source of income from their" }, { "start": 3778.44, "end": 3785.36, "text": " day job but I would say like the only way to be successful in this is to pick" }, { "start": 3785.36, "end": 3791.8, "text": " this to be your one thing and do that all day and it's got to feel like play" }, { "start": 3791.8, "end": 3796.52, "text": " to you but it's got to look like work to other people like to me this whole time" }, { "start": 3796.52, "end": 3800.2000000000003, "text": " I've just been playing like really enjoying myself like it's not work and" }, { "start": 3800.2000000000003, "end": 3804.2000000000003, "text": " that's honestly why I think I grew as much as I did I genuinely enjoy the" }, { "start": 3804.2, "end": 3809.08, "text": " topics I genuinely enjoy the video production process editing lighting" }, { "start": 3809.08, "end": 3814.2799999999997, "text": " thinking about metrics all that stuff just felt like play to me and that's how" }, { "start": 3814.2799999999997, "end": 3817.7599999999998, "text": " you're gonna be successful it's not gonna be if you feel like it's hard work" }, { "start": 3817.7599999999998, "end": 3823.24, "text": " um you should pivot or think of some other content to talk about or maybe a" }, { "start": 3823.24, "end": 3827.72, "text": " different medium like you know I had a podcast as well I did I think five" }, { "start": 3827.72, "end": 3831, "text": " interviews and then I stopped because it didn't feel like play to me like I don't" }, { "start": 3831, "end": 3835.96, "text": " actually yeah for some reason I just don't enjoy being a podcast host like I" }, { "start": 3835.96, "end": 3841, "text": " enjoyed monologues and that kind of thing so I stopped whereas someone like" }, { "start": 3841, "end": 3845.16, "text": " you or you know Joe Rogan or other podcasters they actually enjoy it so" }, { "start": 3845.16, "end": 3848.44, "text": " they're gonna they're actually gonna be successful so that's that's my best" }, { "start": 3848.44, "end": 3852.6, "text": " advice is like make sure that it feels like play to you and then I you will be" }, { "start": 3852.6, "end": 3859.4, "text": " you'll 
probably be successful and when someone finds themselves a bit successful" }, { "start": 3859.4, "end": 3867.48, "text": " and finds themselves to be sucked and drawn by the metrics by the clout by" }, { "start": 3867.48, "end": 3872.28, "text": " because I already I already said it but I'm gonna say it again like this is it" }, { "start": 3872.28, "end": 3879, "text": " this is a thing I feel it I like other youtubers feel it for sure this this" }, { "start": 3879, "end": 3885.92, "text": " suck it's like a it's like a thing drawing you right and you know leading" }, { "start": 3885.92, "end": 3893.64, "text": " to the kinds of decisions you made and and what is do you have any I don't know" }, { "start": 3893.64, "end": 3899.64, "text": " you know other than don't do it do you have any you know best the mindset that" }, { "start": 3899.64, "end": 3904.52, "text": " that creates in a person do you have any any maybe recognition of what could help" }, { "start": 3904.52, "end": 3910.64, "text": " someone to to get out of it or to resist or you know what do you tell yourself" }, { "start": 3910.64, "end": 3916.2, "text": " when there's like a really easy opportunity to get a lot of views or or" }, { "start": 3916.2, "end": 3923.04, "text": " clicks I would say the best thing you can do is Google Siraj Raval and" }, { "start": 3923.04, "end": 3928.7599999999998, "text": " what happened to this guy and yeah just be afraid you don't want that to happen to" }, { "start": 3928.7599999999998, "end": 3933.2, "text": " you for sure luckily it happened to me first so you've got an example in front" }, { "start": 3933.2, "end": 3938.2, "text": " of you now of what can go wrong when you follow views and likes too much you" }, { "start": 3938.2, "end": 3944.12, "text": " chase clout too much in the education space the internet gives everybody a" }, { "start": 3944.12, "end": 3950.64, "text": " voice you will be held accountable there is no we are moving into a world that is" }, { "start": 3950.64, "end": 3957.3199999999997, "text": " much more transparent every day less and less privacy yeah the internet gives" }, { "start": 3957.3199999999997, "end": 3966.9199999999996, "text": " everybody a voice and power so yeah that's so I can say use it use it wisely" }, { "start": 3966.92, "end": 3974.6800000000003, "text": " I guess use it wisely well Siraj Raval this was this was a pleasure really" }, { "start": 3974.6800000000003, "end": 3981.64, "text": " truly I I thank you very much for for being here with me today thanks for" }, { "start": 3981.64, "end": 3987.08, "text": " coming on thanks for being so open and and and forward and and and honest I" }, { "start": 3987.08, "end": 3993.88, "text": " think it's very valuable the world also hears from you and you know in it not" }, { "start": 3993.88, "end": 3999.52, "text": " just from articles and and and you know reviews and things like this absolutely" }, { "start": 3999.52, "end": 4028.8, "text": " thank you Yannic awesome" } ]
qeEO2GECQk0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Evaluating NLP Models via Contrast Sets
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "arxiv", "attention", "evaluation", "cheat", "easy", "hard", "adversarial", "counterfactual", "hand-crafted", "test set", "supervised" ]
Current NLP models are often "cheating" on supervised learning tasks by exploiting correlations that arise from the particularities of the dataset. Therefore they often fail to learn the original intent of the dataset creators. This paper argues that NLP models should be evaluated on Contrast Sets, which are hand-crafted perturbations by the dataset authors that capture their intent in a meaningful way. https://arxiv.org/abs/2004.02709 Abstract: Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities. We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets---up to 25\% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes. Authors: Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
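The perturbation recipe described above is easy to make concrete in code. Below is a minimal, hypothetical Python sketch of what a contrast set could look like for the sentence-image matching example discussed in the video. The class names and the exact examples are illustrative assumptions, not code or data from the paper.

```python
# Minimal sketch (not from the paper): a contrast set pairs one original test
# instance with small manual perturbations that (typically) flip the gold label.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Example:
    sentence: str
    label: bool  # True = the sentence correctly describes the image pair

@dataclass
class ContrastSet:
    original: Example
    perturbations: List[Example] = field(default_factory=list)

contrast_set = ContrastSet(
    original=Example(
        "Two similarly colored and similarly posed chow dogs are face to face in one image.",
        True,
    ),
    perturbations=[
        # species perturbation: dogs -> cats, so the label flips
        Example(
            "Two similarly colored and similarly posed cats are face to face in one image.",
            False,
        ),
        # count perturbation: two -> three, so the label flips
        Example(
            "Three similarly colored and similarly posed chow dogs are face to face in one image.",
            False,
        ),
    ],
)
```

Each perturbation targets exactly one attribute the dataset authors intend to test (species, count, color), which is what makes the set a local probe of the decision boundary rather than an adversarial attack.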
Hi there! Today we're looking at evaluating NLP models via contrast sets. These are too many authors from too many places for me to read out. We'll just jump right into the problem. What is the problem? Let's jump into the solution. Here you see a visual question answering task. Visual question answering in this case. You have two pictures right here. Picture one, picture two and a sentence. Two similarly colored and similarly posed chow dogs are face-to-face in one image. I guess the task here is to have the system answer. Is this correct or incorrect? As you see here I believe that's a correct statement. Or you're maybe tasked to ask which is the image that this applies to. Is it image one or image two? Of course here it's image one. The problem with such systems is that there are a lot of easy things that the models can do that will usually get them the answer. What we like to imagine is that the model will look at this and recognize that this is a dog here. This is a dog. Here is its face and this is a dog and here is its face. It will see there's a count. There's two of them. There's two of them. There's a notion of face and there's a notion of pose and so on. Usually there are tricks that the models can do to get this easier. For example I know that in a particular visual question answering system whenever there is a question of what is the ground covered in or something like this. The answer is always snow. You don't even have to look at the image. Similarly there are a lot of these kind of tricks that the models learn and the authors recognize correctly that this is mostly a data set problem. Usually what you do in these data sets is you have an image that you scrape from the web or something and it has some mountains and then there's snow on the mountains, on the ground. You give this to a bunch of mechanical turks or someone like a rater and you instruct them. You produce a question to this image. You give them a couple of examples and they're usually kind of lazy and they will just look at it and be like what questions could I ask? You need to ask something. Usually the instructions are it must be visual and it must maybe be answerable with a one word answer or something like this. Or it must be a multiple choice question. There are these number of instructions and they will usually be like what's kind of special about this picture? There's snow so I'm gonna ask about that. Snow right? The problem is mainly the process of data set generation. That will lead to biases and easy solutions for the models where the models will simply learn statistical correlations between things and the intention. We have a big divergence between the intention of what the data set creators want. The intention is in this case is visual understanding, visual of the world. There's a big difference between this and between how the data set is really constructed. The authors are trying to address this with what they call contrast sets. They say you get out of this process a data set. You get a training data set and a test data set. Maybe here a smaller test data set. What they say is what we should do is we should additionally have these things called contrast sets. This is train and this is test. Usually these two come from the same distribution. You simply make them and then you split them somehow and you take the test from the train. But these here are not from the same distribution. This is the contrast. What they argue is that the authors of the data set should create the contrast set. 
You see that there's a split here where the data set comes from. They argue that the authors of the data set with knowing what intention they have, they should create the contrast data set manually by hand in order to make really hard examples that show what they want out of a system. They capture this here in their example. If we go back to the example, here are things. They suggest to do this via perturbations. What they would do is they would start at this example up here. They would start and they would perturb it textually or via image. They would perturb it to make it change the gold label. This is different from adversarial examples. In adversarial examples you would want to perturb a sample such that it is still the same but to the classifier it's different. Here you have the opposite goal. You want to make something that means kind of the opposite but you want to test whether your classifier can pick up on that. In this case the one example would be two similarly colored and similarly posed cats instead of dogs are face to face in one image. That would change the label. Whereas before the answer was yes that's a correct sentence. Now it's no that's not a correct sentence. There are no cats in these images. Also here three similarly colored dogs. The intention of the authors, you have to view it through this lens, the intention here is that the system can recognize the species of the entities in the images. The system can count and the system can compare right compare in this case colors. You want to kind of make perturbations on these attributes from a test image. You can also think about image perturbations where you keep the sentence but you modify the image such that there are still two dogs and they're still facing each other. But they're not similarly colored anymore. So the similarly colored here would be the attribute that where before it was true now it's false with the new image. You get the gist that the people that created the data set that know their intention will create manually these samples. The authors they propose a new metric to track this but essentially the authors propose how well the models do on these contrast sets will be a reflection. It should be kind of an additional thing that people do with their NLP models. Alright so you get the picture. That is I believe the entire gist of this paper and I have some problems. First of all here they say alright let's give a toy example in two dimensions. Say you have this data set right and the red one is the correct decision boundary right and you want to capture that but because you only have limited training data and because you in in this generation processes you have systematic biases. So if we had non-systematic biases we would think that okay we maybe pick this and this one and this one here and this one here and this one here right. We don't get all of them but we kind of get an IID sample right. That wouldn't be so much of a problem. You could still kind of recover the decision boundary but because we have systematic biases the authors argue we actually introduce biases. So the systematic bias here is that we would of the blue ones we would only capture things on this sorry on the on this layer up here and of the red ones orange ones we'd only capture things of the level down here and thereby we introduce the kind of data set, the bias. It doesn't require this complex decision boundary anymore. 
Right and if we now the problem is if we collect the data set like this and we simply say well these ones are the test set and these ones are the train set right it will generalize well to the test set but it will not generalize well to what we actually want and therefore the authors say well if we introduce these contrast sets here then you see that the decision boundary that we found will not perform well on these contrast sets right. So they would say we take one example of the test set right. This is you can see this is this example right here and we would perturb it to make it change its label or in this case one of them where the label remains. We would kind of perturb it meaningfully and yeah so as I said I have multiple problems with this. First 2D toy examples very very bad for NLP models. First of all low-dimensional intuition does not generalize to high-dimensional intuition like very very little. Second of all even worse usually these NLP models have so many parameters much more parameters than you have data set which means that your decision boundary is incidentally going to be simple even if you had all the data you could possibly want. It's just a very different kind of problem and then the next problem is if even with by doing this contrast set and you already see it here right you already see it you can only kind of bicker about the data okay but with the contrast set you only really capture this one aspect so if that was actually well adhered to you could measure very locally whether or not this this would work or not and the ability to come up with meaningful contrast sets to ever capture what the model is doing is almost impossible because you have to create them manually and then you suggest that the authors themselves make these contrast sets. 
Remember the authors are the ones that gave these instructions right these instructions right here the authors provided them to the to the data set annotators so the authors will probably be even more biased if they have to do their own right if they have to now create their own contrast examples they will probably even though they know their intention they will probably be like more biased than if you at least this here at least this here is a distributed process across people right so you get things that you wouldn't have thought of but if just the three authors of the date of the paper make the contrast examples I would argue that that's an even more biased measure often so all of this it just strikes me as as the paper is basically saying let's try on a few things and I think the fundamental problem is much much deeper and it goes with this intention part like I get it the the visual question answering data set doesn't capture the doesn't capture what you want it doesn't make the model suddenly understand that there are dogs and there are species of animal and so on it simply makes it correlate things but that's what deep learning and especially NLP does so right it's like it's like saying you you build a build an image net classifier and it can't fly and if I try it on my test set that requires my computer to fly and my image net model can't do this then it doesn't serve my intention right and I mean it's it's a crass example but ultimately you the correct approach should be to better encapsulate your intention into the data set generating process and then correctly interpreting the results that mean okay on this data set as far as we can tell the way we created it this is the performance of the model it doesn't the model will never learn to fulfill your intention and I get it that's what you're saying but still even with this contrast set I think it's a really bad measure to formally propose it's I think you should much more propose how is the data set generating process different from what you want and what are the limitations there right and so that's that that I think that will lead to much more meaningful meaningful results than simply the authors providing a few manually put examples that they feel capture their intention it will not will not the reason we do deep learning instead of straightforward if else programming is because we cannot capture even our intentions and therefore data set generation is the only is the only method we have so to say all right so ultimately I believe these these whole NLP especially the visual question answering and so on the natural language understanding part needs to have a grounding so ultimately I think grounding grounded NLP it means basically that you're not only doing NLP which is simply you take text and you take images and you correlate them somehow right you just make a statistical connection grounded NLP models is the hope that you could build something that actually understands the world understands that there are entities that interact there's something like a pose that there is something like what the color means right what a dog is and so on and as entities I think we're not there yet and I think that will be the ultimate solution to these kind of tasks not not any sort of local very local very low dimensional perturbation I mean yeah let's say you create a contrast set you will be able to capture one tiny little bit of your intention one tiny little bit even though you know your intention you will capture a tiny little 
bit all of the thousand other degrees of freedom of your own intention you won't be able to capture in the contrast set I guarantee you all right that was my quarrels with that I invite you to read the whole paper they actually do this for NLP datasets it's a lot of work and they show that the models perform much worse on their contrast sets and interestingly the humans don't the humans are able to solve the contrast set of course of course because you tell the humans what the task is right that's like humans succeed on contrast sets like how surprising what you should do is you should just provide the humans with the data set not tell them what the task is even worse just provide them with the encoded data set like not the text itself but actually the token IDs right and then and then make them do the thing and the humans will just as well make a statistical correlation between the tokens and the images or whatnot and the humans will fail just as well on the test on these contrast sets because the humans maybe they'll figure out what the task is but probably not so humans succeed on contrast sets how surprising you tell them the intention while you don't tell it to the model yes I see critical but yeah please read the paper it's an interesting paper and with that goodbye
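To make the evaluation side concrete as well: beyond ordinary accuracy, contrast sets allow a stricter measurement where a model only gets credit for a contrast set if it is correct on the original example and on every perturbation around it (the paper's metric along these lines is called contrast consistency). The toy shortcut model and the tiny data below are invented placeholders that mimic the snow heuristic from the discussion above; this is a sketch of the idea, not the authors' evaluation code.

```python
# Sketch: per-example accuracy vs. a contrast-consistency-style metric.
from typing import Callable, List, Tuple

ContrastSet = List[Tuple[str, bool]]  # (input text, gold label), original first

def accuracy(model: Callable[[str], bool], sets: List[ContrastSet]) -> float:
    """Ordinary accuracy over all examples, pooled across contrast sets."""
    examples = [pair for cset in sets for pair in cset]
    return sum(model(text) == gold for text, gold in examples) / len(examples)

def contrast_consistency(model: Callable[[str], bool], sets: List[ContrastSet]) -> float:
    """Fraction of contrast sets where the model is right on every example."""
    return sum(all(model(t) == g for t, g in cset) for cset in sets) / len(sets)

def shortcut_model(text: str) -> bool:
    # toy model exploiting a spurious cue: "snow" present => predict True
    return "snow" in text.lower()

sets = [
    # here the spurious cue happens to give the right answer both times
    [("Is the ground covered in snow?", True),
     ("Is the ground covered in sand?", False)],
    # a minimal negation perturbation defeats the cue
    [("There is snow on the mountain.", True),
     ("There is no snow on the mountain.", False)],
]

print(accuracy(shortcut_model, sets))             # 0.75: looks passable per example
print(contrast_consistency(shortcut_model, sets)) # 0.50: only half the sets are fully solved
```

The gap between the two numbers is exactly the point of the method: a shortcut learner can score well example by example while failing as soon as a whole neighborhood of perturbed examples must be answered consistently.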
[ { "start": 0, "end": 5.68, "text": " Hi there! Today we're looking at evaluating NLP models via contrast sets." }, { "start": 5.68, "end": 12.8, "text": " These are too many authors from too many places for me to read out." }, { "start": 12.8, "end": 22.32, "text": " We'll just jump right into the problem. What is the problem? Let's jump into" }, { "start": 22.32, "end": 28.92, "text": " the solution. Here you see a visual question answering task. Visual question" }, { "start": 28.92, "end": 34.32, "text": " answering in this case. You have two pictures right here. Picture one, picture" }, { "start": 34.32, "end": 42.6, "text": " two and a sentence. Two similarly colored and similarly posed chow dogs are" }, { "start": 42.6, "end": 51.72, "text": " face-to-face in one image. I guess the task here is to have the" }, { "start": 51.72, "end": 57.68000000000001, "text": " system answer. Is this correct or incorrect? As you see here I believe" }, { "start": 57.68, "end": 65.48, "text": " that's a correct statement. Or you're maybe tasked to ask which is the" }, { "start": 65.48, "end": 70.2, "text": " image that this applies to. Is it image one or image two? Of course" }, { "start": 70.2, "end": 78.52, "text": " here it's image one. The problem with such systems is that there are a" }, { "start": 78.52, "end": 84.16, "text": " lot of easy things that the models can do that will usually get them the" }, { "start": 84.16, "end": 89, "text": " answer. What we like to imagine is that the model will look at this and recognize" }, { "start": 89, "end": 94.39999999999999, "text": " that this is a dog here. This is a dog. Here is its face and this is a dog and" }, { "start": 94.39999999999999, "end": 100.47999999999999, "text": " here is its face. It will see there's a count. There's two of them." }, { "start": 100.47999999999999, "end": 110.19999999999999, "text": " There's two of them. There's a notion of face and there's notion of pose and so" }, { "start": 110.2, "end": 117.72, "text": " on. Usually there are tricks that the models can do to get this easier." }, { "start": 117.72, "end": 122.24000000000001, "text": " For example I know that in a particular visual question answering system" }, { "start": 122.24000000000001, "end": 135.12, "text": " whenever there is a question of what is the ground covered in or something like" }, { "start": 135.12, "end": 142.64000000000001, "text": " this. The answer is always snow. You don't even have to look at the image." }, { "start": 142.64000000000001, "end": 148.88, "text": " Similarly there are a lot of these kind of tricks that the models learn and the" }, { "start": 148.88, "end": 154.20000000000002, "text": " authors recognize correctly that this is mostly a data set problem." }, { "start": 154.20000000000002, "end": 160.08, "text": " Usually what you do in these data sets is you have an image" }, { "start": 160.08, "end": 163.8, "text": " that you scrape from the web or something" }, { "start": 163.8, "end": 170.92000000000002, "text": " and it has some mountains and then there's snow on the mountains, on the ground." }, { "start": 170.92000000000002, "end": 181, "text": " You give this to a bunch of mechanical turks or someone like a raider and you" }, { "start": 181, "end": 186.36, "text": " instruct them. You produce a question to this image. 
You give them a couple of" }, { "start": 186.36, "end": 190.92000000000002, "text": " examples and they're usually kind of lazy and they will just look at it and" }, { "start": 190.92, "end": 196.72, "text": " be like what questions could I ask? You need to ask something." }, { "start": 196.72, "end": 204.32, "text": " Usually the instructions are it must be visual and it must maybe be answerable" }, { "start": 204.32, "end": 210.79999999999998, "text": " with a one word answer or something like this. Or it must be a" }, { "start": 210.79999999999998, "end": 214.79999999999998, "text": " multiple choice question. There are these number of instructions and they will" }, { "start": 214.79999999999998, "end": 218.92, "text": " usually be like what's kind of special about this picture? There's snow" }, { "start": 218.92, "end": 228.44, "text": " so I'm gonna ask about that. Snow right? The problem is mainly the process" }, { "start": 228.44, "end": 235.64, "text": " of data set generation. That will lead to biases and easy" }, { "start": 235.64, "end": 240.16, "text": " solutions for the models where the models will simply learn" }, { "start": 240.16, "end": 245.72, "text": " statistical correlations between things and the intention. We have a big" }, { "start": 245.72, "end": 257.64, "text": " divergence between the intention of what the data set creators" }, { "start": 257.64, "end": 268.88, "text": " want. The intention is in this case is visual understanding, visual of the" }, { "start": 268.88, "end": 275.96, "text": " world. There's a big difference between this and between how the data" }, { "start": 275.96, "end": 282.68, "text": " set is really constructed. The authors are trying to address this with" }, { "start": 282.68, "end": 287.64, "text": " what they call contrast sets. They say you get out of this process" }, { "start": 287.64, "end": 292.56, "text": " a data set. You get a training data set and a test data set." }, { "start": 292.56, "end": 298.68, "text": " Maybe here a smaller test data set. What they say is what we should do is we" }, { "start": 298.68, "end": 306.88, "text": " should additionally have these things called contrast sets. This is train" }, { "start": 306.88, "end": 313.56, "text": " and this is test. Usually these two come from the same distribution. You" }, { "start": 313.56, "end": 318.76, "text": " simply make them and then you split them somehow and you take the test from the" }, { "start": 318.76, "end": 326.15999999999997, "text": " train. But these here are not from the same distribution. This is the contrast." }, { "start": 326.15999999999997, "end": 334.08, "text": " What they argue is that the authors of the data set should create the contrast" }, { "start": 334.08, "end": 341.08, "text": " set. You see that there's a split here where the data set comes from." }, { "start": 341.08, "end": 345.64, "text": " They argue that the authors of the data set with knowing what intention they" }, { "start": 345.64, "end": 351.88, "text": " have, they should create the contrast data set manually by hand in order to" }, { "start": 351.88, "end": 357.71999999999997, "text": " make really hard examples that show what they want out of a system." }, { "start": 357.71999999999997, "end": 364.96, "text": " They capture this here in their example. If we go back to the example, here" }, { "start": 364.96, "end": 371.74, "text": " are things. They suggest to do this via perturbations. 
What they would do" }, { "start": 371.74, "end": 377.56, "text": " is they would start at this example up here. They would start and they would" }, { "start": 377.56, "end": 386.56, "text": " perturb it textually or via image. They would perturb it to make it change" }, { "start": 386.56, "end": 391.28000000000003, "text": " the gold label. This is different from adversarial examples. In" }, { "start": 391.28000000000003, "end": 397.40000000000003, "text": " adversarial examples you would want to perturb a sample such that it is still" }, { "start": 397.4, "end": 402.03999999999996, "text": " the same but to the classifier it's different. Here you have the opposite gold." }, { "start": 402.03999999999996, "end": 408.67999999999995, "text": " You want to make something that is means kind of the opposite but you want to" }, { "start": 408.67999999999995, "end": 414.12, "text": " test whether your classifier can pick up on that. In this case the one example" }, { "start": 414.12, "end": 418.28, "text": " would be two similarly colored and similarly posed cats instead of dogs" }, { "start": 418.28, "end": 423.59999999999997, "text": " are face to face in one image. That would change the label. Whereas" }, { "start": 423.6, "end": 429.16, "text": " before the answer was yes that's a correct sentence. Now it's no that's not" }, { "start": 429.16, "end": 435.44, "text": " a correct sentence. There are no cats in these images. Also here three similarly" }, { "start": 435.44, "end": 440.28000000000003, "text": " colored dogs. The intention of the authors, you have to view it through" }, { "start": 440.28000000000003, "end": 446.92, "text": " this lens, the intention here is that the system can recognize the species of" }, { "start": 446.92, "end": 454.04, "text": " the entities in the images. The system can count and the system can compare" }, { "start": 454.04, "end": 460.08000000000004, "text": " right compare in this case colors. You want to kind of make perturbations on" }, { "start": 460.08000000000004, "end": 465.36, "text": " these attributes from a test image. You can also think about image" }, { "start": 465.36, "end": 471.08000000000004, "text": " perturbations where you keep the sentence but you modify the image such" }, { "start": 471.08000000000004, "end": 475.84000000000003, "text": " that there are still two dogs and they're still facing each other." }, { "start": 475.84, "end": 481.59999999999997, "text": " But they're not similarly colored anymore. So the similarly" }, { "start": 481.59999999999997, "end": 489.23999999999995, "text": " colored here would be the attribute that where before it was true now it's false" }, { "start": 489.23999999999995, "end": 495.28, "text": " with the new image. You get the gist that the people that created the" }, { "start": 495.28, "end": 503.2, "text": " data set that know their intention will create manually these samples. The" }, { "start": 503.2, "end": 508.64, "text": " authors they propose a new metric to track this but essentially the authors" }, { "start": 508.64, "end": 515.04, "text": " propose how well the models do on these contrast sets will be a reflection." }, { "start": 515.04, "end": 521.16, "text": " It should be kind of an additional thing that people do with their NLP" }, { "start": 521.16, "end": 530.24, "text": " models. Alright so you get the picture. That is I believe the entire gist of" }, { "start": 530.24, "end": 540.04, "text": " this paper and I have some problems. 
First of all here they say alright let's" }, { "start": 540.04, "end": 544.6, "text": " give a toy example in two dimensions. Say you have this data set right and the red" }, { "start": 544.6, "end": 549.16, "text": " one is the correct decision boundary right and you want to capture that but" }, { "start": 549.16, "end": 555.48, "text": " because you only have limited training data and because you in in this" }, { "start": 555.48, "end": 562.6, "text": " generation processes you have systematic biases. So if we had non-systematic" }, { "start": 562.6, "end": 569.32, "text": " biases we would think that okay we maybe pick this and this one and this one here" }, { "start": 569.32, "end": 573.16, "text": " and this one here and this one here right. We don't get all of them but we" }, { "start": 573.16, "end": 577.4, "text": " kind of get an IID sample right. That wouldn't be so much of a problem. You" }, { "start": 577.4, "end": 580.88, "text": " could still kind of recover the decision boundary but because we have" }, { "start": 580.88, "end": 588.96, "text": " systematic biases the authors argue we actually introduce biases. So the" }, { "start": 588.96, "end": 594.04, "text": " systematic bias here is that we would of the blue ones we would only capture" }, { "start": 594.04, "end": 603.04, "text": " things on this sorry on the on this layer up here and of the red ones orange" }, { "start": 603.04, "end": 608.72, "text": " ones we'd only capture things of the level down here and thereby we introduce" }, { "start": 608.72, "end": 615.64, "text": " the kind of data set, the bias. It doesn't require this complex decision" }, { "start": 615.64, "end": 623.6, "text": " boundary anymore. Right and if we now the problem is if we collect the data set" }, { "start": 623.6, "end": 628.9200000000001, "text": " like this and we simply say well these ones are the test set and these ones" }, { "start": 628.9200000000001, "end": 633.12, "text": " are the train set right it will generalize well to the test set but it" }, { "start": 633.12, "end": 640.4, "text": " will not generalize well to what we actually want and therefore the authors" }, { "start": 640.4, "end": 645.84, "text": " say well if we introduce these contrast sets here then you see that the decision" }, { "start": 645.84, "end": 652.96, "text": " boundary that we found will not perform well on these contrast sets right. So" }, { "start": 652.96, "end": 659.12, "text": " they would say we take one example of the test set right. This is you can see" }, { "start": 659.12, "end": 665.36, "text": " this is this example right here and we would perturb it to make it change its" }, { "start": 665.36, "end": 670.8, "text": " label or in this case one of them where the label remains. We would kind of" }, { "start": 670.8, "end": 678.76, "text": " perturb it meaningfully and yeah so as I said I have multiple problems with this." }, { "start": 678.76, "end": 687.12, "text": " First 2D toy examples very very bad for NLP models. First of all low-dimensional" }, { "start": 687.12, "end": 692.2, "text": " intuition does not generalize to high-dimensional intuition like very very" }, { "start": 692.2, "end": 699.8, "text": " little. 
Second of all even worse usually these NLP models have so many parameters" }, { "start": 699.8, "end": 704.84, "text": " much more parameters than you have data set which means that your decision" }, { "start": 704.84, "end": 710.92, "text": " boundary is incidentally going to be simple even if you had all the data you" }, { "start": 710.92, "end": 719.8399999999999, "text": " could possibly want. It's just a very different kind of problem and then the" }, { "start": 719.8399999999999, "end": 727.68, "text": " next problem is if even with by doing this contrast set and you already see it" }, { "start": 727.68, "end": 733.18, "text": " here right you already see it you can only kind of bicker about the data okay" }, { "start": 733.18, "end": 737.68, "text": " but with the contrast that you only really capture this one aspect so if" }, { "start": 737.68, "end": 746.1999999999999, "text": " that was actually well adhered to you could measure very locally whether or" }, { "start": 746.1999999999999, "end": 752.16, "text": " not this this would work or not and the ability to come up with meaningful" }, { "start": 752.16, "end": 758.16, "text": " contrast sets to ever capture what the model is doing is almost impossible" }, { "start": 758.16, "end": 764.7199999999999, "text": " because you have to create them manually and then you suggest that the authors" }, { "start": 764.72, "end": 769.64, "text": " themselves make these contrast sets. Remember the authors are the ones that" }, { "start": 769.64, "end": 774.28, "text": " gave these instructions right these instructions right here the authors" }, { "start": 774.28, "end": 782.48, "text": " provided them to the to the data set annotators so the authors will probably" }, { "start": 782.48, "end": 787.72, "text": " be even more biased if they have to do their own right if they have to now" }, { "start": 787.72, "end": 793.4, "text": " create their own contrast examples they will probably even though they know" }, { "start": 793.4, "end": 799.0799999999999, "text": " their intention they will probably be like more biased than if you at least" }, { "start": 799.0799999999999, "end": 803.4399999999999, "text": " this here at least this here is a distributed process across people right" }, { "start": 803.4399999999999, "end": 807.36, "text": " so you get things that you wouldn't have thought of but if just the three authors" }, { "start": 807.36, "end": 811.28, "text": " of the date of the paper make the contrast examples I would argue that" }, { "start": 811.28, "end": 819.56, "text": " that's an even more biased measure often so all of this it just strikes me as as" }, { "start": 819.56, "end": 825.4799999999999, "text": " the paper is basically saying let's try on a few things and I think the" }, { "start": 825.4799999999999, "end": 831.4, "text": " fundamental problem is much much deeper and it goes with this intention part" }, { "start": 831.4, "end": 839.92, "text": " like I get it the the visual question answering data set doesn't capture the" }, { "start": 839.92, "end": 845.52, "text": " doesn't capture what you want it doesn't make the model suddenly understand that" }, { "start": 845.52, "end": 849.1999999999999, "text": " there are dogs and there are species of animal and so on it simply makes it" }, { "start": 849.2, "end": 855, "text": " correlate things but that's what deep learning and especially NLP does so" }, { "start": 855, "end": 861.72, "text": " right it's like it's like saying you you build a build an image net classifier" }, 
{ "start": 861.72, "end": 870.24, "text": " and it can't fly and identify if I try it on my tests that it requires my" }, { "start": 870.24, "end": 876.2, "text": " computer to fly and my image net model can't do this then it doesn't serve my" }, { "start": 876.2, "end": 883.12, "text": " intention right and I mean it's it's a crass example but ultimately you the" }, { "start": 883.12, "end": 889.8000000000001, "text": " correct approach should be to better encapsulate your intention into the" }, { "start": 889.8000000000001, "end": 894.76, "text": " data set generating process and then correctly interpreting the results that" }, { "start": 894.76, "end": 900.08, "text": " mean okay on this data set as far as we can tell the way we created it this is" }, { "start": 900.08, "end": 906.24, "text": " the performance of the model it doesn't the model will never learn to fulfill" }, { "start": 906.24, "end": 910.6, "text": " your intention and I get it that's what you're saying but still even with this" }, { "start": 910.6, "end": 919.2, "text": " contrast set I think it's a really bad measure to formally propose it's I think" }, { "start": 919.2, "end": 923.96, "text": " you should much more propose how is the data set generating process different" }, { "start": 923.96, "end": 931.4000000000001, "text": " from what you want and what are the limitations there right and so that's" }, { "start": 931.4000000000001, "end": 938.5600000000001, "text": " that that I think that will lead to much more meaningful meaningful results than" }, { "start": 938.5600000000001, "end": 943.9200000000001, "text": " simply the authors providing a few manually put examples that they feel" }, { "start": 943.9200000000001, "end": 948.76, "text": " capture their intention it will not will not the reason we do deep learning" }, { "start": 948.76, "end": 954.64, "text": " instead of straightforward if else programming is because we cannot" }, { "start": 954.64, "end": 961.2, "text": " capture even our intentions and therefore data set generation is the" }, { "start": 961.2, "end": 969.64, "text": " only is the only method we have so to say all right so ultimately I believe" }, { "start": 969.64, "end": 973.8, "text": " these these whole NLP especially the visual question answering and so on the" }, { "start": 973.8, "end": 980.5999999999999, "text": " natural language understanding part needs to have a grounding so ultimately I" }, { "start": 980.5999999999999, "end": 988.8399999999999, "text": " think grounding grounded NLP it means basically that you're not only doing NLP" }, { "start": 988.8399999999999, "end": 992.76, "text": " which is simply you take text and you take images and you correlate them" }, { "start": 992.76, "end": 997.92, "text": " somehow right you just make a statistical connection grounded NLP" }, { "start": 997.92, "end": 1001.92, "text": " models is the hope that you could build something that actually understands the" }, { "start": 1001.92, "end": 1005.76, "text": " world understands that there's entities that is interacted there's something" }, { "start": 1005.76, "end": 1011, "text": " like a pose that there is something like what the color means right what a dog is" }, { "start": 1011, "end": 1017.4399999999999, "text": " and so on and as entities I think we're not there yet and I think that will be" }, { "start": 1017.4399999999999, "end": 1026.6, "text": " the ultimate solution to these kind of tasks not not any sort of local very" }, { "start": 1026.6, "end": 1032.08, "text": " local very 
low dimensional perturbation I mean yeah let's say you create a" }, { "start": 1032.08, "end": 1039.08, "text": " contrast set you will be able to capture one tiny little bit of your intention" }, { "start": 1039.08, "end": 1043.36, "text": " one tiny little bit even though you know your intention you will capture a tiny" }, { "start": 1043.36, "end": 1048.7199999999998, "text": " little bit all of the thousand other degrees of freedom of your own intention" }, { "start": 1048.7199999999998, "end": 1053.76, "text": " you won't be able in there to capture in the contrast set I guarantee you all" }, { "start": 1053.76, "end": 1058.96, "text": " right that was my quarrels with that I invite you to read the whole paper they" }, { "start": 1058.96, "end": 1065.4, "text": " actually do this for NLP datasets it's a lot of work and they show that the" }, { "start": 1065.4, "end": 1070.08, "text": " models perform much worse on their contrast sets and interestingly the" }, { "start": 1070.08, "end": 1073.8799999999999, "text": " humans don't the humans are able to solve the contrast set of course of" }, { "start": 1073.8799999999999, "end": 1080.36, "text": " course because you tell the humans what the task is right that's like humans" }, { "start": 1080.36, "end": 1087, "text": " succeed on contrasts at like how surprising what you should do is you" }, { "start": 1087, "end": 1091.8799999999999, "text": " should just provide the humans with the data set not tell them what the task is" }, { "start": 1091.8799999999999, "end": 1096.6399999999999, "text": " even worse just provide them with the encoded data set like not the text" }, { "start": 1096.6399999999999, "end": 1102.1599999999999, "text": " itself but actually the token IDs right and then and then make them do the thing" }, { "start": 1102.1599999999999, "end": 1107.56, "text": " and the humans will just as well make a statistical correlation between the" }, { "start": 1107.56, "end": 1113.32, "text": " tokens and the images or whatnot and the humans will fail just as well on the" }, { "start": 1113.32, "end": 1118.98, "text": " test on these contrast sets because the humans maybe they'll figure out what the" }, { "start": 1118.98, "end": 1124.32, "text": " task is but probably not so humans succeed on contrasts at how surprising" }, { "start": 1124.32, "end": 1131.6799999999998, "text": " you tell them the intention while you don't tell it to the model yes I see" }, { "start": 1131.6799999999998, "end": 1136.6, "text": " critical but yeah please read the paper it's an interesting paper and with that" }, { "start": 1136.6, "end": 1139.6, "text": " goodbye" } ]
tjbEVY5XIk0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Divide-and-Conquer Monte Carlo Tree Search For Goal-Directed Planning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "reinforcement learning", "deep rl", "planning", "alphago", "alphazero", "alpha go", "alpha zero", "mcts", "monte carlo", "tree search", "subdivision", "recursive", "training data", "hindsight experience replay" ]
When AI makes a plan it usually does so step by step, forward in time. But often it is beneficial to define intermediate goals to divide a large problem into easier sub-problems. This paper proposes a generalization of MCTS that searches not for the best next actions to take, but for the best way to sub-divide the problem recursively into problems so tiny that they can each be solved in a single step. Paper: https://arxiv.org/abs/2004.11410 Site: https://sites.google.com/view/dc-mcts/home Abstract: Standard planners for sequential decision making (including Monte Carlo planning, tree search, dynamic programming, etc.) are constrained by an implicit sequential planning assumption: The order in which a plan is constructed is the same in which it is executed. We consider alternatives to this assumption for the class of goal-directed Reinforcement Learning (RL) problems. Instead of an environment transition model, we assume an imperfect, goal-directed policy. This low-level policy can be improved by a plan, consisting of an appropriate sequence of sub-goals that guide it from the start to the goal state. We propose a planning algorithm, Divide-and-Conquer Monte Carlo Tree Search (DC-MCTS), for approximating the optimal plan by means of proposing intermediate sub-goals which hierarchically partition the initial tasks into simpler ones that are then solved independently and recursively. The algorithm critically makes use of a learned sub-goal proposal for finding appropriate partition trees of new tasks based on prior experience. Different strategies for learning sub-goal proposals give rise to different planning strategies that strictly generalize sequential planning. We show that this algorithmic flexibility over planning order leads to improved results in navigation tasks in grid-worlds as well as in challenging continuous control environments. Authors: Giambattista Parascandolo, Lars Buesing, Josh Merel, Leonard Hasenclever, John Aslanides, Jessica B. Hamrick, Nicolas Heess, Alexander Neitz, Theophane Weber Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! What you're seeing here is a Divide-and-Conquer Monte Carlo Tree Search in action. This is a planning algorithm that plans in a kind of unconventional fashion, and we're going to explore it today in this paper: Divide-and-Conquer Monte Carlo Tree Search for Goal-Directed Planning by Giambattista Parascandolo and Lars Buesing and a list of other authors. I believe this is from DeepMind, the Max Planck Institute and ETH. Alright, so what does this thing do? It is a planning algorithm, and planning might not be very familiar to you. So let's say you are in this set of rooms right here. There's a bunch of walls, you are up here, and you want to reach the goal down here. First of all, this is a goal-directed problem; right here it says goal-directed. Goal-directed means that you give the algorithm a goal to reach, and this could be a different goal each time you run the algorithm. The second thing we see here is planning. What does planning mean? If you come from traditional reinforcement learning, you would think: I'm just going to go ahead and run my agent here, maybe it can move in the four different directions, do some things; maybe I hit a wall, I get a negative reward, I try again. In planning, you don't have to move initially. What you can do is think about moving: you can think ahead of what's going to happen, and that's usually because you have some sort of model of what happens. This is famously applied in, for example, AlphaGo or AlphaZero, where you know: if I move to the right here, I'm going to be here. So you will know once you've reached the goal: if I'm here and I go down, I reach the goal. You can think all of this through without actually moving in the environment. You can think yourself ahead of what would happen if you did certain things, and that in turn also means you can think ahead along multiple different paths. You can think ahead what would happen if I move right, what would happen if I move down, and then in the next layer: if I moved right, what would happen if I moved right again? What would happen if I moved down instead? So you can easily see the planning problem becomes a tree search problem. In this case we've done a breadth-first search, and eventually you'll see that this will get you to the goal. This breadth-first search, or a depth-first search if you prefer, will ultimately get you to the goal. We can represent this as a search tree: you're in a particular state and you have a bunch of actions, in this case four, to move left, up, down or right. You can choose any of them and you will get into a new state, and from each of those you could choose any again. You can think ahead all of this, construct this entire tree, and one of these branches will lead you to the goal. Promise! What is the problem? The problem is that if the path to the goal here is, let's say, d steps long, then this tree is going to be d layers deep, and in our case that means we'll have on the order of four to the d (4^d) nodes in that tree before we even reach the goal. That is just a huge tree, and we can't construct all of it.
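To make that blow-up concrete, here is a minimal sketch of this kind of forward search. The grid layout, the breadth-first strategy and all names are illustrative assumptions for a toy world, not the paper's code.

from collections import deque

# 0 = free cell, 1 = wall; a hypothetical toy layout.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def bfs_plan(start, goal):
    """Plain forward search over actions. The full unpruned search tree
    for a depth-d solution has on the order of 4^d nodes; the visited set
    is the only thing keeping this toy version tractable."""
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for dr, dc in ACTIONS:
            nxt = (state[0] + dr, state[1] + dc)
            r, c = nxt
            if 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) \
                    and GRID[r][c] == 0 and nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no path exists

print(bfs_plan((0, 0), (4, 4)))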
So algorithms have come along, for example the A* algorithm, where you can incorporate something like a heuristic. The heuristic would be: it's generally good if I'm close to the goal in L2 distance. So you would not build the entire tree here, but you would prefer nodes that bring you towards this heuristic: this node down here is closer to the green node in terms of L2 distance, and this node down here is even closer. But then you're kind of stuck, so A* will explore a bit, probably along this wall here, and once you're here you have a clear path again, so you can simply take the actions that minimize the L2 distance. This will already get you to a really good point. Monte Carlo tree search, as it was employed in AlphaGo or AlphaZero, has a similar structure, where after a certain depth it stops and evaluates a heuristic that says what the value is at that node, and so on. So for some problems it is an even better method of constructing the search tree, in a way where you don't get overwhelmed by the number of nodes. The "Monte Carlo tree search" in this algorithm's name refers to the fact that we are generalizing the Monte Carlo tree search of, let's say, the AlphaGo paper. So what's the idea here? So far we've known everything. The idea is the following: what if I had an oracle? If I were the master here and could tell the agent: look, I guarantee you that this state right here in the middle, you will pass through it; if you want to reach the goal, you will pass through it for sure. If I tell this to the agent, what can the agent do? The agent could say: okay, if I know that, I don't have to search for a way to the goal directly. I'd much rather search for a way from my start point to that point, where I know I'm guaranteed to be at some point, and then also search for a way from there to the goal. Now remember, our path was d steps long for the original problem; each of these two paths is now, let's say, d/2 long. That means we just construct two trees, each of them with four to the d/2 nodes, and if we add them up, that is much smaller than the original four-to-the-d tree we would have built. So right there we have subdivided our problem into two sub-problems, and each of them is much easier than the original problem. This paper basically does this recursively: it will subdivide the problem by proposing some middle state where you are going to be at some point for sure (and that "for sure" we're going to take a look at), and then for each of those sub-problems it will again try to subdivide into sub-problems, and so recursively solve them until they're small enough that they can basically be solved in a single step. That's the big idea here.
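As a back-of-the-envelope sanity check of those counts, here is a tiny computation. This is my own arithmetic under the idealized assumption of a perfect oracle midpoint, not numbers from the paper:

# Worst-case node counts with 4 actions and a solution d steps deep.
def flat_tree(d):
    return 4 ** d                 # plain forward search tree

def one_split(d):
    return 2 * 4 ** (d // 2)      # one guaranteed midpoint: two half-problems

def full_recursion(d):
    # Keep splitting until every piece is a single step (oracle midpoints);
    # the number of one-step leaf problems is then linear in d.
    return 1 if d <= 1 else 2 * full_recursion(d // 2)

for d in (8, 16):
    print(d, flat_tree(d), one_split(d), full_recursion(d))
# d=16: 4**16 is about 4.3e9, versus 2*4**8 = 131072, versus 16 leaves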
This is illustrated right here: you are in s0, the start state, and you want to go to s-infinity, the goal state. What this paper does is propose to split the problem here in the middle and then simply solve the two sub-problems recursively. Now, what is a bit confusing right here is that a plan is itself already tree-shaped, but we are searching over plans. We're searching for the best plan, which means we are searching over trees, and that search itself is a tree search: the search is a problem where we go down one route, then maybe go down another route, and here, and then here. So the search is a tree, and we're now tree-searching over trees. That's the tricky thing to remember. Each of these plans, even if it's only half done (like this one: it's only half a plan, we don't know yet what happens in here), is a node in the tree that we're searching over. Then it splits, as you can see here, into two sub-problems, and the two sub-problems are also nodes in that search tree. So you see that this top thing here corresponds to this node, even though in itself it is a plan, a tree in its own right, and each of the two sub-problems becomes one of these nodes in the search tree. Keep that in mind as we go through this paper; it is very easy to get confused in this respect. The algorithm is pretty simple in this case, and it rests on this traverse procedure. They traverse what they call these OR nodes (they divide the problem into AND nodes and OR nodes; I don't believe that's particularly necessary for us to think about), and here's how this works. They traverse the OR node from s to s''. This is again a node, but the node is a path from s to s'' where we don't know yet what happens in between. So what we'll do is run this procedure here, select, and you can see it outputs an s', which is going to be a node somewhere here in the middle where the model says: this is your subdivision point. Then it will recursively traverse the left and the right branch of this tree: it subdivides the problem into two problems and then recursively calls this traverse, the function we're defining, on them. For the next step it will again, for each of the sub-problems, propose a middle node and subdivide further, and so on, until you have a full plan. Now here, again, is the important thing to remember: this is just one branch of the search, just one possible plan, and we are going to do a tree search over these plans. This select function has returned this s', but it could have returned any point between s and s''. This is just one branch; I don't have space to draw here, so I'm going to draw it down here again: it could also have returned this particular node here, a different s', and then subdivided the problem into different sub-problems, and of course those problems would then be subdivided differently again, and so on. So this top part here, if you consider this thing here your root node, where you search from, is just one node, one branch in the tree. We could also subdivide like this, and that would be another branch in your tree, and this tree is the thing that you're searching over. So, important to keep in mind: we're searching over these different possibilities. The rest of this algorithm is basically the carryover from Monte Carlo tree search, and I don't want to go into that in this video, but if you're interested in how to actually implement this, you'll have to go look at MCTS; all of that carries over from that algorithm, because you have to keep estimates of value and visit counts in this tree and so on, and you also have some sort of value estimator. I'm mainly concerned with how the tree is constructed here.
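Here is a schematic sketch of that recursive traverse idea on a toy 1-D line of integer states. The helpers is_one_step and select are hypothetical stand-ins (a real DC-MCTS keeps value and visit statistics per node and revisits it with alternative midpoints, exactly like action selection in plain MCTS); this is a reading aid, not the authors' implementation.

def is_one_step(s, g):
    # A sub-problem counts as solved when the low-level policy can do it directly.
    return abs(g - s) <= 1

def select(s, g):
    # Stand-in for the learned sub-goal proposal: here, just the exact midpoint.
    # In DC-MCTS this is where the search branches over alternative midpoints.
    return (s + g) // 2

def traverse(s, g):
    """Recursively split the task (s -> g) at a proposed midpoint and
    return the resulting sub-goal sequence, i.e. one candidate plan."""
    if is_one_step(s, g):
        return [s, g]
    m = select(s, g)
    left = traverse(s, m)
    right = traverse(m, g)
    return left[:-1] + right  # splice, dropping the duplicated midpoint

print(traverse(0, 16))  # -> [0, 1, 2, ..., 16]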
So basically, here's the difference between Monte Carlo tree search and divide-and-conquer Monte Carlo tree search. In Monte Carlo tree search (ignore the yellow one for now), you're in the green position and you want to go to the blue position. What you're searching over is the next action to take; in this case you have four possible actions. That's what you're searching over, and that's what you build your search tree from: your search tree is going to be which action to take, up, left, down or right, and that's why each node has four branches. In divide-and-conquer Monte Carlo tree search, you're not searching over actions. You are searching over the best way to subdivide this problem: which of all the black squares should I use to subdivide my problem into sub-problems? That's what you build your search tree from. Now you can already see what kind of possibilities we have here to subdivide this problem. I drew one white square, but any of the black squares is a candidate to subdivide this problem; any of them could be a potential subdivision, and this is what we search over. So in Monte Carlo tree search we search over the actions, which gives us this 4^d tree, but in divide-and-conquer we're searching over all the ways to subdivide the problem, and as you can see, those are many, many more possibilities. From this first starting node we have something like a hundred possibilities to subdivide this problem into two sub-problems. And once you've decided on a subdivision, let's say you decided on this one right here, you say: I want to pass through that point on my way to the goal. Now you have to subdivide that sub-problem into two problems again, and again every possible black square is a candidate. I'm not saying which one is a good point to subdivide at; I'm just asking what is a possible candidate, and every single black square here is a possible candidate for a path from here to here. And again, for this particular sub-problem you have to do the same thing. So even though we said before that the original search tree is very deep and this one is probably only about log d deep (log base two of d), its width is going to be enormous, and that is the catch. This is not a method that is a magic pill: even though your tree is not as deep, it is much, much wider. And that is intuitive, because you still have to have the ability to construct any possible plan, so if you were to fully expand it, your tree would have as many nodes as the original Monte Carlo tree search tree. You're trading depth for width here. I hope that's a bit clear. So the entire promise of this method is going to be: can you avoid searching through all of these possibilities? You don't even want to go one layer deep through all of them, you don't even want to consider all of them; you want to limit your search to very particular ways of subdividing. If you can do that efficiently, if you can efficiently pick candidates to subdivide at, then this could be a successful thing, because your tree is now not as deep as the original search tree would have been, and you can effectively limit the width to only very few candidates. Here we could, for example, make a heuristic that will always only pick squares that are roughly on the straight path to the goal.
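To see how the width gets tamed, here is a sketch of scoring subdivision candidates and keeping only the top k. The scoring function is a hypothetical stand-in for the learned proposal model discussed just below; the straight-line heuristic, the function names and k are illustrative assumptions.

import heapq
import math

def propose_topk(candidates, score, k=4):
    """Keep only the k most promising midpoints instead of branching on
    every free square; this is what limits the enormous width of the
    divide-and-conquer search tree."""
    return heapq.nlargest(k, candidates, key=score)

def line_score(start, goal):
    # Toy stand-in for the learned proposal: prefer squares near the
    # midpoint of the straight start-goal segment.
    def score(c):
        mx, my = (start[0] + goal[0]) / 2, (start[1] + goal[1]) / 2
        return -math.hypot(c[0] - mx, c[1] - my)
    return score

cands = [(x, y) for x in range(5) for y in range(5)]
print(propose_topk(cands, line_score((0, 0), (4, 4)), k=4))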
So everything rests on how you do this select action. The entire algorithm relies on the fact that you can effectively select in-between states that you're pretty sure the plan will have to pass through, because the worse you make these predictions, the worse your tree search is going to work. And what they do, of course, as you might have guessed, is use deep learning for that. They will have a model that, for a particular start and end goal, gives them a probability distribution across candidates. Everything that's black here also has probability mass, but it's just so small you can't see it; for the blue ones, the lighter the blue, the more probable this model thinks that square is as an in-between state. Now the tree search can effectively limit itself to only the ones with the highest probability: we select the ones with the highest probability and will only search plans that have these as the first possible subdivisions. Again, we're searching over plans, so we're searching over ways to subdivide the problem into smaller problems; that is our search space. Once we've decided on one of them, let's say here the yellow one, we again have to evaluate that model for each of the sub-problems. This is a step that's kind of missing here: in between here there would be a model evaluation that would again tell you which of these in-between states are probable subdivision candidates. Then you would select one of those again in that particular search branch, and in a different search branch you would select a different one and see whether that is possibly a better way to subdivide the problem, and so on. So the question, of course, is: how do you train this model? How do you train a model that gives you good candidates for subdivision? The answer here comes from the idea of hindsight experience replay. Let's say, again, you are here and you want to go here, and you're not very good at it initially. They train this model, as I understand it, along with letting their agent act in this environment: the agent uses the model to plan, but initially it's not very good, so the agent will fail a lot of the time. So instead of going to the blue square, it will reach this white square right here: it will go here, here and here and reach the white square. Instead of saying "I failed", what you can do, and this is the idea of hindsight experience replay, is say: well, I did fail, but I did reach something. I have reached a thing, and it's actually possible that that thing could have been the goal, even though in this particular episode this was the goal; remember, the goal changes every time, it's a goal-directed policy. So it says: this could possibly have been the goal, and if I just pretend it was the goal, then I have a training example for a successful run. Hindsight experience replay basically pretends that what you achieved, even if you failed, was your actual goal, and that gives you a positive training example for an episode with that as the goal. And it really could have been the goal, because the goal is chosen at random, so this gives you a good training example. Now this paper generalizes hindsight experience replay, or applies it to their particular framework: they say, well, if I reached this thing, that means any point on this path is a good candidate for subdividing the path, because I did actually reach that point. Remember, the goal is to propose a point that you for sure are going to pass through.
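Here is a sketch of how that hindsight relabeling could turn even a failed episode into training data, assuming an episode is just a list of visited states. The names are hypothetical, and the single-midpoint target is the variant described just below.

def hindsight_examples(trajectory):
    """Pretend the state we actually ended up in was the goal all along.
    Every state we passed through is then a valid sub-goal candidate for
    the (start, reached) pair; the variant described in this video picks
    the exact middle state as the single positive target."""
    start, reached = trajectory[0], trajectory[-1]
    positives = trajectory[1:-1]              # provably passed through
    midpoint = trajectory[len(trajectory) // 2]
    return {"start": start, "goal": reached,
            "positives": positives, "target": midpoint}

episode = ["s0", "s1", "s2", "s3", "s4"]      # failed to reach the real goal
print(hindsight_examples(episode))
# -> target "s2": train the proposal to predict s2 given (s0, s4)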
Since I've taken this path to this goal, I have passed through every one of the squares in between, so these are my possible sub-goal candidates, and all the other black squares are not. So now I have a classifier I can train: any of these squares on my path is a good square to subdivide at, and any square not on my path is a bad one. They go a step further, I believe, and actually say: if this path was m steps long, we're going to take the particular square that is reached after m/2 steps. So the exact middle point of that path is going to be our training example for subdivision, and you have a classifier with exactly one target to train on. You train this along with acting in the environment, and of course your model for proposing subdivisions gets better and better, and that makes your planning algorithm better and better, and that makes you collect better episodes, so you can sort of bootstrap yourself up. Now, this is the basic experiment of the paper. They also do this in a 3D setting where they move this little spider around: the spider was trained to just move from one block to the next block, and the planner basically tells it where to go. And they show that they outperform the traditional Monte Carlo tree search. Now, I have to say this is cool, but you have to remember this is only advantageous in very, very specific types of problems. First of all, it has to have this goal-directed nature, otherwise you probably couldn't train this predictor here very well. Then, given that you have such a good predictor, the problem needs to be such that from a start state there could be many ways to go about reaching the end, and for an end state there could be many ways you could have come from, but there is some bottleneck state in the middle where you're pretty sure that you're going to have to pass through it. If your problem is of that nature, if it has these bottleneck states where you can predict with reasonable accuracy that you're going to have to pass through them, then this is a good algorithm to consider, and it is intuitively going to outperform the original Monte Carlo tree search, because you have a much shallower search tree and you can effectively limit its width by using that model. They have also made this website where they show videos of their spider. I haven't seen it in a while, but it is next to the mouse, if you can see it, so you see this is a continuous control problem that also requires planning, and they also have these kinds of GIFs of what order their plans are constructed in. So I invite you to check this out and read the paper. If you liked this, subscribe, leave a like, leave a comment, and thank you for listening. Bye bye.
[ { "start": 0, "end": 5.64, "text": " Hi there! What you're seeing here is a Divide and Conquer Monte Carlo Tree" }, { "start": 5.64, "end": 12.64, "text": " Search in action. This is a planning algorithm that plans in a kind of an" }, { "start": 12.64, "end": 17.52, "text": " unconventional fashion. So we're going to explore this today in this paper." }, { "start": 17.52, "end": 22.76, "text": " Divide and Conquer Monte Carlo Tree Search for Goal-Directed Planning by" }, { "start": 22.76, "end": 30.720000000000002, "text": " Gian Battista Parascondolo and Lars Pyzing and a list of other authors." }, { "start": 30.720000000000002, "end": 39.92, "text": " I believe this is from DeepMind and Max Planck and Eta and... yeah that's it." }, { "start": 39.92, "end": 46.480000000000004, "text": " Alright, so what does this thing do? It is a planning algorithm and planning might" }, { "start": 46.48, "end": 53.64, "text": " be not really familiar for you. So let's say you are in this room and or this set" }, { "start": 53.64, "end": 59, "text": " of rooms right here. There's a bunch of walls and you are up here and you want" }, { "start": 59, "end": 64.4, "text": " to reach the goal down here. So first of all this is a goal-directed problem." }, { "start": 64.4, "end": 69.08, "text": " Right here it says goal-directed. Goal-directed means that you give the" }, { "start": 69.08, "end": 73.92, "text": " algorithm a goal to reach and this could be a different goal each time you're on" }, { "start": 73.92, "end": 79.96000000000001, "text": " the algorithm. So you give it a goal to reach. Then the second thing here we" }, { "start": 79.96000000000001, "end": 84.36, "text": " see is planning. So it is a planning algorithm. What does planning mean?" }, { "start": 84.36, "end": 88.72, "text": " Planning means if you're traditionally reinforcement learning you" }, { "start": 88.72, "end": 96.04, "text": " would think I'm just gonna go ahead and run my agent here and maybe you can" }, { "start": 96.04, "end": 101.24000000000001, "text": " move in the four different directions, run my agent here, do some things, right?" }, { "start": 101.24, "end": 107.91999999999999, "text": " Maybe I hit a wall, I get a negative reward, I try again. In planning you" }, { "start": 107.91999999999999, "end": 113.8, "text": " don't have to move initially. What you can do is think about moving and you can" }, { "start": 113.8, "end": 118.32, "text": " think ahead of what's going to happen and that's usually because you have some" }, { "start": 118.32, "end": 124.16, "text": " sort of model what happens. This is very famous applied in for example AlphaGo" }, { "start": 124.16, "end": 130.04, "text": " or AlphaZero where you know if I move to the right here I'm going to" }, { "start": 130.04, "end": 135.2, "text": " be here. So you will know once you've reached the goal. If I'm" }, { "start": 135.2, "end": 140.16, "text": " here I go down I reach the goal. So you can think all of this without" }, { "start": 140.16, "end": 144.48, "text": " actually moving in the environment. You can think yourself ahead what would" }, { "start": 144.48, "end": 150.79999999999998, "text": " happen if I did certain things and that in turn also means that you can think" }, { "start": 150.79999999999998, "end": 154.84, "text": " ahead of multiple different paths. 
So you can think ahead what would happen if I" }, { "start": 154.84, "end": 158.95999999999998, "text": " move right, what would happen if I move down and then you can think in the next" }, { "start": 158.96, "end": 165.08, "text": " layer if I move right what would happen if I moved right again? What would happen" }, { "start": 165.08, "end": 171.20000000000002, "text": " if I move down instead? So you can easily see the planning problem becomes a tree" }, { "start": 171.20000000000002, "end": 176.24, "text": " search problem. In this case we've done a breadth first search" }, { "start": 176.24, "end": 183.36, "text": " and eventually you'll see that this will get you to the goal. So this" }, { "start": 183.36, "end": 187.72, "text": " breadth first search or maybe you want to employ depth first search will" }, { "start": 187.72, "end": 192.32, "text": " ultimately get you to the goal. We can represent this as a search tree. So" }, { "start": 192.32, "end": 196.04, "text": " you're here in a particular state and you have a bunch of actions in this case" }, { "start": 196.04, "end": 204.34, "text": " four to move left up down or right and you can choose any of them and you will" }, { "start": 204.34, "end": 208.32, "text": " get into a new state and from each of those you could choose any again and you" }, { "start": 208.32, "end": 213.24, "text": " can think ahead all of this you can construct this entire tree and one of" }, { "start": 213.24, "end": 220.76000000000002, "text": " these branches will lead you to the goal. Promise! What is the problem? The problem" }, { "start": 220.76000000000002, "end": 228.60000000000002, "text": " is if the path to the goal here is let's say D steps long then this tree here is" }, { "start": 228.60000000000002, "end": 234.92000000000002, "text": " going to be D layers deep and in our case that means we'll have four to the D" }, { "start": 234.92000000000002, "end": 242.8, "text": " nodes in that tree before we even reach the goal and that is just a long long" }, { "start": 242.8, "end": 248.72, "text": " or a big tree and we can't construct all of it. So algorithms have come along for" }, { "start": 248.72, "end": 254, "text": " example the A star algorithm where you can incorporate something like a" }, { "start": 254, "end": 258.8, "text": " heuristic and the heuristic would be well it's generally good if I'm close to" }, { "start": 258.8, "end": 264.76, "text": " the goal in L2 distance so you would not build the entire tree here but you would" }, { "start": 264.76, "end": 271.28000000000003, "text": " prefer nodes that will bring you towards this heuristic so this node down here is" }, { "start": 271.28, "end": 275.96, "text": " closer to the green node in terms of L2 distance and this node down here is even" }, { "start": 275.96, "end": 281.76, "text": " closer but then you're kind of stuck so A star will explore a bit probably along" }, { "start": 281.76, "end": 288.84, "text": " this wall here and once you're here you have a clear path again right so you can" }, { "start": 288.84, "end": 292.52, "text": " simply take the actions that minimize this L2 distance so this will already" }, { "start": 292.52, "end": 300.28, "text": " get you to a real good point. 
Monte Carlo tree search as it was employed in AlphaGo" }, { "start": 300.28, "end": 306.91999999999996, "text": " or AlphaStar has a similar structure where it after a certain while it stops" }, { "start": 306.91999999999996, "end": 312.32, "text": " and evaluates a heuristic to say what's the the value here and so on so it is in" }, { "start": 312.32, "end": 318.2, "text": " for some problems an even better method of constructing the search tree in a way" }, { "start": 318.2, "end": 324.47999999999996, "text": " where you don't get overblown by the number so the Monte Carlo search tree in" }, { "start": 324.47999999999996, "end": 330.03999999999996, "text": " this algorithm refers to the fact that we are generalizing Monte Carlo tree" }, { "start": 330.04, "end": 337.28000000000003, "text": " search from the let's say the AlphaGo paper so what's the idea here so far" }, { "start": 337.28000000000003, "end": 343.16, "text": " we've known everything the idea is the following if I had an Oracle if I am the" }, { "start": 343.16, "end": 352.16, "text": " master here and I can tell the agent agent look I guarantee you that this" }, { "start": 352.16, "end": 357.64000000000004, "text": " state right here in the middle you will pass that state if you want to reach the" }, { "start": 357.64, "end": 365.2, "text": " goal you will pass this for sure if I tell this to the agent now what can the" }, { "start": 365.2, "end": 369.8, "text": " agent do the edge could say oh okay if I know that I can simply I don't have to" }, { "start": 369.8, "end": 374.44, "text": " search for a way to the goal I'd much rather search for a way from my start" }, { "start": 374.44, "end": 380.52, "text": " point to that point where I know that I'm guaranteed to be at some place and" }, { "start": 380.52, "end": 388.96, "text": " then I can search also from a way from there to the goal right so this now" }, { "start": 388.96, "end": 394.47999999999996, "text": " remember our long out our path was d steps long for the original problem this" }, { "start": 394.47999999999996, "end": 399.91999999999996, "text": " is now let's say d half long each of these paths and that means we just" }, { "start": 399.91999999999996, "end": 405.64, "text": " construct two trees each one of them is going to be 4 to the d half and the" }, { "start": 405.64, "end": 410.24, "text": " other one is also 4 to the d half and if we add them that is much smaller than" }, { "start": 410.24, "end": 417.40000000000003, "text": " the original 4 to the d tree that we build so right there we have subdivided" }, { "start": 417.40000000000003, "end": 422.36, "text": " our problem into two sub problems and each of them are much easier than the" }, { "start": 422.36, "end": 428.08, "text": " original problem this paper basically does this recursively so it will" }, { "start": 428.08, "end": 433.36, "text": " subdivide the problem by proposing some middle state where you are going to be" }, { "start": 433.36, "end": 438.08, "text": " at some point for sure and that for sure we're going to take a look at and then" }, { "start": 438.08, "end": 443.03999999999996, "text": " for each of those problems again it will try to subdivided into sub problems and" }, { "start": 443.03999999999996, "end": 447.52, "text": " therefore recursively solve the sub problems until they're small enough that" }, { "start": 447.52, "end": 455.4, "text": " they can be basically solved in one step so that's the big idea here and this is" }, { "start": 455.4, "end": 461.28, "text": " 
illustrated in this point right here so you are in this s0 the start state and" }, { "start": 461.28, "end": 466.79999999999995, "text": " you want to go to this s infinity the goal state in your case what this paper" }, { "start": 466.8, "end": 471.6, "text": " does is it proposes to split the problem here in the middle and then simply solve" }, { "start": 471.6, "end": 479.16, "text": " the two problems recursively now what is a bit confusing right here is that it is" }, { "start": 479.16, "end": 486.8, "text": " the planning already is a tree search right so a plan is like a tree but we" }, { "start": 486.8, "end": 491.44, "text": " are searching over plan so we're searching for the best plan which means" }, { "start": 491.44, "end": 498.28, "text": " that we are searching over trees and that search itself is a tree search so" }, { "start": 498.28, "end": 502.71999999999997, "text": " the search itself is a problem where we go down one route and then on oh and then" }, { "start": 502.71999999999997, "end": 510.44, "text": " we maybe go down another route and here and then here so the search is a tree so" }, { "start": 510.44, "end": 515.8, "text": " we're now tree searching over trees that's the kind of tricky thing to" }, { "start": 515.8, "end": 521.8, "text": " remember so each of these plans even if it's half if it's only half done like" }, { "start": 521.8, "end": 526.28, "text": " this is only half a plan we don't know what's gonna happen in here this half a" }, { "start": 526.28, "end": 534.16, "text": " plan is a node in the tree that we're searching over and then it splits it it" }, { "start": 534.16, "end": 539.0799999999999, "text": " splits as you can see here into two sub problems the two sub problems also are" }, { "start": 539.0799999999999, "end": 544.4799999999999, "text": " nodes in that search tree so you see that this top thing here would correspond" }, { "start": 544.48, "end": 550.04, "text": " to this note even though in itself it is a plan a tree string in this case and" }, { "start": 550.04, "end": 555.76, "text": " each of the two sub problems would become these particular these nodes in" }, { "start": 555.76, "end": 563.44, "text": " the search tree so keep that in mind as we go through this paper it is very easy" }, { "start": 563.44, "end": 571.9200000000001, "text": " to get confused in this respect the algorithm is pretty simple in this case" }, { "start": 571.92, "end": 580.92, "text": " so this algorithm rests on this traverse procedure so we're going to we're going" }, { "start": 580.92, "end": 586.4399999999999, "text": " to traverse this what they call these or nodes so they they divide the problem" }, { "start": 586.4399999999999, "end": 591.28, "text": " into and nodes and or nodes I I don't believe that's particularly necessary" }, { "start": 591.28, "end": 597.24, "text": " for us to think about but here's how this works so they traverse the or node" }, { "start": 597.24, "end": 605.6800000000001, "text": " s to s prime this is simply again this is a node but the node is a path from s" }, { "start": 605.6800000000001, "end": 613.12, "text": " to s prime where we don't know yet what happens in between right so what we'll" }, { "start": 613.12, "end": 620.08, "text": " do is we'll run this procedure here select and select it you can see it" }, { "start": 620.08, "end": 624.88, "text": " outputs an s prime and the s prime is going to be a node here somewhere in the" }, { "start": 624.88, "end": 631.36, "text": " middle where the model says this is 
your subdivision point and then it will" }, { "start": 631.36, "end": 637.52, "text": " recursively traverse the left and the right branch of this tree so it will" }, { "start": 637.52, "end": 643.48, "text": " subdivide the problem into two problems and then recursively call this traverse" }, { "start": 643.48, "end": 648.72, "text": " you see that's the function that we're defining call this traverse function on" }, { "start": 648.72, "end": 654.16, "text": " these so it will subdivide this problem into these problems and it would for the" }, { "start": 654.16, "end": 660.6, "text": " next for the next step again it will eat for each of the problems propose a" }, { "start": 660.6, "end": 667.28, "text": " middle node and subdivide it further and so on until you have a full plan right" }, { "start": 667.28, "end": 672.24, "text": " at some point you're going to have a full plan now here again is the" }, { "start": 672.24, "end": 678.4399999999999, "text": " important thing to remember this is just one branch of the search this is just" }, { "start": 678.44, "end": 686.96, "text": " one possible plan and we are going to do a tree search over these plans so this" }, { "start": 686.96, "end": 692.2, "text": " select function here it has returned this as prime but it could have returned" }, { "start": 692.2, "end": 698.6800000000001, "text": " any point between s and s double prime so let in it this is just one branch I'm" }, { "start": 698.6800000000001, "end": 704.48, "text": " going to I don't have space to draw here but I'm going to draw it down here again" }, { "start": 704.48, "end": 712.32, "text": " so it could have also returned this particular node here like it's a" }, { "start": 712.32, "end": 717.48, "text": " different s prime and then subdivided the problem into different problems and" }, { "start": 717.48, "end": 722.12, "text": " then of course those problems are now different so they would be subdivided" }, { "start": 722.12, "end": 729.04, "text": " differently again and so on so this top part here is just if you consider this" }, { "start": 729.04, "end": 733.9200000000001, "text": " thing here your root node this is where you search from this top part is just" }, { "start": 733.92, "end": 740.28, "text": " one node one branch in the tree but we could also subdivide like this and then" }, { "start": 740.28, "end": 745.7199999999999, "text": " that would be another branch in your tree and this tree here is the thing" }, { "start": 745.7199999999999, "end": 751.88, "text": " that you're searching over so important to keep keep this in mind we're searching" }, { "start": 751.88, "end": 757.4799999999999, "text": " over these different possibilities now the rest of this algorithm here is" }, { "start": 757.4799999999999, "end": 763.48, "text": " basically the carryover from Monte Carlo tree search and I don't want to go into" }, { "start": 763.48, "end": 767.6, "text": " that in this video but if you're interested in you know how to actually" }, { "start": 767.6, "end": 773.36, "text": " implement this you'll have to go look at MCTS and then all of this just carries" }, { "start": 773.36, "end": 777.6, "text": " over from that algorithm because you have to keep estimates of value and" }, { "start": 777.6, "end": 781.6800000000001, "text": " visit counts in this tree and so on and also you have some sort of a value" }, { "start": 781.6800000000001, "end": 787.84, "text": " estimator but yeah I'm mainly concerned with how the tree is constructed right" }, { "start": 787.84, 
"end": 795.72, "text": " here so basically here's the here's the difference between a between the Monte" }, { "start": 795.72, "end": 801.64, "text": " Carlo tree search and the divide and conquer Monte Carlo tree search in" }, { "start": 801.64, "end": 806.2800000000001, "text": " Monte Carlo tree search ignore the yellow one for now you're in the green" }, { "start": 806.2800000000001, "end": 811.84, "text": " position and you want to go to the blue position in Monte Carlo tree search what" }, { "start": 811.84, "end": 817.64, "text": " you're searching over is the next action to take in this case you have four" }, { "start": 817.64, "end": 821.96, "text": " possible actions to take that's what you're searching over and that's what" }, { "start": 821.96, "end": 827.48, "text": " you build your search tree from your search tree is going to be which action" }, { "start": 827.48, "end": 833.4399999999999, "text": " to take right up left down or right that's why you have four actions in" }, { "start": 833.4399999999999, "end": 838.08, "text": " month in divide and conquer Monte Carlo tree search you're not searching over" }, { "start": 838.08, "end": 843.48, "text": " actions you are searching over the best way to subdivide this problem right" }, { "start": 843.48, "end": 848.16, "text": " you're searching over which of these all the black squares should I use to" }, { "start": 848.16, "end": 853.12, "text": " subdivide my problem into sub problems and that's what you build your search" }, { "start": 853.12, "end": 859.04, "text": " tree from so naturally you you can already see what kind of possibilities" }, { "start": 859.04, "end": 864.08, "text": " do we have here to subdivide this problem I drew one white square but any" }, { "start": 864.08, "end": 869.44, "text": " of the black squares are candidates to subdivide this problem right any of the" }, { "start": 869.44, "end": 874.96, "text": " black squares could be potential subdivisions and this is what we search" }, { "start": 874.96, "end": 882.44, "text": " over so in in Monte Carlo tree search we search over the actions which gives us" }, { "start": 882.44, "end": 890.6800000000001, "text": " this four to the D tree but in divide and conquer we're searching over all the" }, { "start": 890.6800000000001, "end": 895.2800000000001, "text": " ways to subdivide the problem as you can see there that are many many more" }, { "start": 895.28, "end": 901.48, "text": " possibilities so from this first starting node we have like like a hundred" }, { "start": 901.48, "end": 907.72, "text": " possibilities to subdivide this problem into two problems right and each of" }, { "start": 907.72, "end": 914.4, "text": " those again if you now you've decided on a subdivision let's say you decided on" }, { "start": 914.4, "end": 919.12, "text": " this one right here you say I want to pass through that point on my way to the" }, { "start": 919.12, "end": 926.8, "text": " goal now you have to subdivide that in this sub problem into two problems again" }, { "start": 926.8, "end": 932.22, "text": " every possible black square I'm not saying which one is good good thing to" }, { "start": 932.22, "end": 937, "text": " subdivide the problem I'm just asking what is a possible candidate every" }, { "start": 937, "end": 942.8, "text": " single black square here is a possible candidate for for a path from here to" }, { "start": 942.8, "end": 947.76, "text": " here right and again for this particular sub problem you have to do the same" }, { "start": 947.76, "end": 
956.4399999999999, "text": " thing so the the search tree here even though we said before it is this one is" }, { "start": 956.4399999999999, "end": 965.88, "text": " very deep and this one is probably only log D sort of log 2d deep it width is" }, { "start": 965.88, "end": 971.6, "text": " going to be enormous and that is the catch right the catch this is not a" }, { "start": 971.6, "end": 978.44, "text": " method that is like a magic pill the catch is even though your tree is not as" }, { "start": 978.44, "end": 984.44, "text": " deep it is much much wider and it is intuitive right because you still have" }, { "start": 984.44, "end": 989.48, "text": " to have the ability to construct any possible plan so your tree is going to" }, { "start": 989.48, "end": 994.64, "text": " have as many nodes as the original Monte Carlo tree search tree you're if you" }, { "start": 994.64, "end": 1000.48, "text": " were to fully expand it right so it's your trading of depth for width here" }, { "start": 1000.48, "end": 1011.24, "text": " I hope I hope that's a bit clear so your entire your entire promise of this" }, { "start": 1011.24, "end": 1016.2, "text": " method is going to be can you from all of these possibilities so from all of" }, { "start": 1016.2, "end": 1020.6800000000001, "text": " these you don't even you don't even want to go and search even one layer deep" }, { "start": 1020.6800000000001, "end": 1025.28, "text": " through all of these don't even want to consider all of them right you want to" }, { "start": 1025.28, "end": 1033.2, "text": " search in this tree you want to limit your search to very particular ways of" }, { "start": 1033.2, "end": 1039.36, "text": " subdivision here if you can do that efficiently if you can pick efficiently" }, { "start": 1039.36, "end": 1045.48, "text": " candidates to subdivide then this could be a successful thing because your deep" }, { "start": 1045.48, "end": 1050.72, "text": " is now not as your tree is not as deep as the original search tree would have" }, { "start": 1050.72, "end": 1055.76, "text": " been and you can limit the width effectively to only very few candidates" }, { "start": 1055.76, "end": 1061.4, "text": " so here we could for example make a heuristic that will always only pick" }, { "start": 1061.4, "end": 1069.76, "text": " squares that are kind of on this straight path to the goal so everything" }, { "start": 1069.76, "end": 1077, "text": " rests on how you do this select action this thing here the entire algorithm" }, { "start": 1077, "end": 1083.32, "text": " relies on the fact that you can select effectively select in between states" }, { "start": 1083.32, "end": 1087.72, "text": " where you're pretty sure that the algorithm will have to pass through" }, { "start": 1087.72, "end": 1093.04, "text": " there because the worse you make these predictions the worse your tree search" }, { "start": 1093.04, "end": 1101.08, "text": " is going to work and what they do of course is they use deep learning as you" }, { "start": 1101.08, "end": 1105.72, "text": " might have guessed to do that so they have they will have a model that for a" }, { "start": 1105.72, "end": 1111, "text": " particular start and end goal will give them a probability distribution across" }, { "start": 1111, "end": 1115.16, "text": " candidates now everything that's black here also has probability mass but it's" }, { "start": 1115.16, "end": 1120.96, "text": " just so small you can't see and these blue ones are that the lighter blue the" }, { "start": 1120.96, "end": 
1125.84, "text": " more probable this model thinks that this is going to be an in between state" }, { "start": 1125.84, "end": 1132.92, "text": " now the tree search can now limit itself effectively to only the ones here with" }, { "start": 1132.92, "end": 1136.16, "text": " the highest probability right so we select the ones with the highest" }, { "start": 1136.16, "end": 1143.6000000000001, "text": " probability and will only search plans that have these as the first possible" }, { "start": 1143.6000000000001, "end": 1150.16, "text": " subdivisions again we're searching over plans so we're searching over ways to" }, { "start": 1150.16, "end": 1154.88, "text": " subdivide the problem into smaller problems that is our search space so" }, { "start": 1154.88, "end": 1159.52, "text": " once we've decided on one of them let's say here the yellow one again we have to" }, { "start": 1159.52, "end": 1163.52, "text": " evaluate that model for each of the sub problems and this this is kind of a step" }, { "start": 1163.52, "end": 1167.96, "text": " that's missing here so in between here there would be a model evaluation that" }, { "start": 1167.96, "end": 1173.6, "text": " would again tell you which of these in between states were probable subdivision" }, { "start": 1173.6, "end": 1178.44, "text": " candidates and then you would select again one of those in that particular" }, { "start": 1178.44, "end": 1182.6, "text": " search branch and in a different search branch right you're searching over these" }, { "start": 1182.6, "end": 1185.92, "text": " things in a different search branch you would select a different one and see is" }, { "start": 1185.92, "end": 1193.2, "text": " this possibly a better way to subdivide the problem and so on so the question of" }, { "start": 1193.2, "end": 1196.44, "text": " course is how do you train this model how do you train a model that gives you" }, { "start": 1196.44, "end": 1203.64, "text": " good candidates for subdivision and the answer here is a comes from the idea of" }, { "start": 1203.64, "end": 1209.28, "text": " hindsight experience replay so let's say again you are here and you want to go" }, { "start": 1209.28, "end": 1216.32, "text": " here and you're not very good at the at it initially so they train this model as" }, { "start": 1216.32, "end": 1221.24, "text": " I understand along with letting their agent act in this environment so the" }, { "start": 1221.24, "end": 1225.04, "text": " agent uses the model to plan but initially it's not very good so maybe" }, { "start": 1225.04, "end": 1230.52, "text": " the agent will fail a lot of times so instead of going to the blue square it" }, { "start": 1230.52, "end": 1236.84, "text": " will reach this white square right here it will go here here and here will reach" }, { "start": 1236.84, "end": 1241.3999999999999, "text": " the white square instead of saying I failed what you can do and this is the" }, { "start": 1241.3999999999999, "end": 1246.9199999999998, "text": " idea of hindsight experience replay is to say well I did fail but I did reach" }, { "start": 1246.9199999999998, "end": 1253.56, "text": " something right I I have reached a thing and and it's actually possible that that" }, { "start": 1253.56, "end": 1258.08, "text": " thing could have been the goal but this particular episode this was the goal" }, { "start": 1258.08, "end": 1263.1999999999998, "text": " remember the goal changes every time it's a goal-directed policy so it says" }, { "start": 1263.2, "end": 1268.32, "text": " well this 
could have been the goal possibly so if I just pretend this was" }, { "start": 1268.32, "end": 1274.64, "text": " the goal then I have a training example for a successful run so the hindsight" }, { "start": 1274.64, "end": 1279.52, "text": " experience replay basically pretends that what you have achieved even if you" }, { "start": 1279.52, "end": 1284.1200000000001, "text": " failed was your actual goal and that gives you a positive training example" }, { "start": 1284.1200000000001, "end": 1288.88, "text": " for an episode with that as a goal and the this it could have been the goal" }, { "start": 1288.88, "end": 1295.68, "text": " because the goal is chosen at random so this gives you a good training example" }, { "start": 1295.68, "end": 1300.5200000000002, "text": " now this paper just generalizes the hindsight experience replay or applies" }, { "start": 1300.5200000000002, "end": 1305.16, "text": " it to their particular framework and they say well if I reach this thing that" }, { "start": 1305.16, "end": 1312.5600000000002, "text": " means any point on this path is a good candidate for subdividing the path" }, { "start": 1312.56, "end": 1318.6799999999998, "text": " because I did actually reach the point remember the goal is to propose a a" }, { "start": 1318.6799999999998, "end": 1324.56, "text": " point where your for sure are going to pass through now since I've taken this" }, { "start": 1324.56, "end": 1329.72, "text": " path to this goal I have passed through any of the squares in between and so" }, { "start": 1329.72, "end": 1334.6, "text": " these are my possible sub candidates and all other black squares I don't want" }, { "start": 1334.6, "end": 1338.54, "text": " that so now I have a classifier I can train I can say any of these squares on" }, { "start": 1338.54, "end": 1343.92, "text": " my path are good squares to subdivide and any not on my path are bad ones they" }, { "start": 1343.92, "end": 1348.6, "text": " go a step further I believe and they actually say we're so if this was m" }, { "start": 1348.6, "end": 1354.56, "text": " steps we're actually going to take the particular square that is reached after" }, { "start": 1354.56, "end": 1361.8799999999999, "text": " m half steps so the exact middle point of that path is going to be our training" }, { "start": 1361.88, "end": 1369.0800000000002, "text": " example for subdivision so you have a classifier that has exactly one one" }, { "start": 1369.0800000000002, "end": 1376.2800000000002, "text": " target to train so this you train along with acting in the environment and of" }, { "start": 1376.2800000000002, "end": 1380.4, "text": " course your model for proposing subdivisions is going to be better and" }, { "start": 1380.4, "end": 1383.8000000000002, "text": " better and better and better and that makes your planning algorithm better and" }, { "start": 1383.8000000000002, "end": 1390.2800000000002, "text": " better and that makes you collect better episodes and so you can sort of sort of" }, { "start": 1390.28, "end": 1399.04, "text": " get bootstrap yourself up this thing now this is the basic experiment of the" }, { "start": 1399.04, "end": 1404.92, "text": " paper they also do this in a 3d manner where they move this little spider here" }, { "start": 1404.92, "end": 1408.68, "text": " around so the spider was trained to just move from one block to the next block and" }, { "start": 1408.68, "end": 1414.48, "text": " the planner basically tell it where to go and they show that they outperform" }, { "start": 1414.48, 
"end": 1420.84, "text": " the traditional Monte Carlo tree search now I have to say this is cool but you" }, { "start": 1420.84, "end": 1427.84, "text": " have to remember this is this is only advantageous in very very specific types" }, { "start": 1427.84, "end": 1432, "text": " of problems so first of all it has to be this goal-directed nature otherwise you" }, { "start": 1432, "end": 1439.2, "text": " probably couldn't train this this predictor here super well then given" }, { "start": 1439.2, "end": 1446.44, "text": " that you have such a good predictor the problem needs to be such that if you" }, { "start": 1446.44, "end": 1452.16, "text": " have a start state there could be many ways to go about reaching the end and if" }, { "start": 1452.16, "end": 1455.44, "text": " you have an end state there could be many ways from where you could come from" }, { "start": 1455.44, "end": 1462.04, "text": " but but there is like some bottleneck state in the middle where you're pretty" }, { "start": 1462.04, "end": 1467.3600000000001, "text": " sure that you're going to have to pass through it so if your problem is of that" }, { "start": 1467.36, "end": 1473.36, "text": " nature right if it has these bottleneck states where you can predict with" }, { "start": 1473.36, "end": 1478.1999999999998, "text": " reasonable accuracy that you're going to have to pass through then this is a good" }, { "start": 1478.1999999999998, "end": 1485.08, "text": " algorithm to consider and is obviously I mean it's intuitively outperforming the" }, { "start": 1485.08, "end": 1492.28, "text": " original Monte Carlo tree search because you have much less deep search tree and" }, { "start": 1492.28, "end": 1497.8799999999999, "text": " you can effectively limit its width by using that model they also have made" }, { "start": 1497.8799999999999, "end": 1504.8799999999999, "text": " this website where they kind of show videos of their spider and I haven't seen" }, { "start": 1504.8799999999999, "end": 1511.8799999999999, "text": " it in a while but it is it is like next to the mouse if you can see it so so you" }, { "start": 1511.8799999999999, "end": 1516.6, "text": " see this is kind of a continuous control problem that also requires planning and" }, { "start": 1516.6, "end": 1520.6399999999999, "text": " they also have these kind of gifts of how they're there what order their plans" }, { "start": 1520.64, "end": 1525.6000000000001, "text": " are constructed in so I invite you to check this out read the paper if you" }, { "start": 1525.6000000000001, "end": 1531.64, "text": " like this subscribe leave a like leave a comment and thank you for listening bye" }, { "start": 1531.64, "end": 1550.96, "text": " bye" } ]
SPOqoI0zOPQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] AI-generated patent approved | Germany gets an analog to OpenAI | ML cheats video games
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "deep learning tutorial", "what is deep learning", "introduction to deep learning", "ai inventor", "dabus", "thaler", "steve thaler", "stephen thaler", "ai patent", "creativity machine", "aleph alpha", "openai", "german openai", "aleph alpha openai", "german aleph alpha", "machine learning game cheat", "ai cheat video games", "machine learning video games", "deepmind", "wordcraft", "neural flame" ]
#mlnews #dabus #alephalpha OUTLINE: 0:00 - Intro 0:20 - Sponsor: Weights & Biases 3:45 - AI legally recognized as patent inventor 8:35 - Aleph Alpha raises USD 27Mio to build European OpenAI 10:20 - AMP advances AI aided recycling 11:20 - DeepMind builds XLand RL environment 13:15 - Cognitive Behavioral Therapy as an app 16:15 - Wordcraft interactive AI text editor 17:05 - ML used to cheat in console games 18:10 - Google's OpenBuildings Dataset 20:00 - Most ML COVID tools are flawed 21:10 - DALL-E mini released 21:55 - Helpful Libraries 25:20 - FSF funds papers discussing CoPilot SPONSOR: Weights & Biases https://wandb.ai References: AI legally recognized as patent inventor https://www.globallegalpost.com/news/south-africa-issues-worlds-first-patent-listing-ai-as-inventor-161068982 https://www.abc.net.au/news/2021-08-01/historic-decision-allows-ai-to-be-recognised-as-an-inventor/100339264 https://artificialinventor.com/frequently-asked-questions/ https://artificialinventor.com/dabus/ https://www.worldscientific.com/doi/abs/10.1142/S2705078521500053 https://www.worldscientific.com/doi/epdf/10.1142/S2705078521500053 https://imagination-engines.com/dabus.html https://imagination-engines.com/about.html https://www.nextbigfuture.com/2016/03/sander-olson-interviewed-dr-stephen.html https://www.actiac.org/system/files/Dawn19%20-%20Dr.%20Thaler.pdf Aleph Alpha raises USD 27Mio to build European OpenAI https://techcrunch.com/2021/07/27/german-startup-aleph-alpha-raises-27m-series-a-round-to-build-europes-openai/ AMP advances AI aided recycling https://www.robotics247.com/article/amp_robotics_marks_data_pick_rate_milestones_automated_recycling DeepMind builds XLand RL environment https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play https://deepmind.com/research/publications/open-ended-learning-leads-to-generally-capable-agents Cognitive Behavioral Therapy as an app https://www.nytimes.com/2021/06/01/health/artificial-intelligence-therapy-woebot.html Wordcraft interactive AI text editor https://syncedreview.com/2021/07/21/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-66/ https://arxiv.org/abs/2107.07430 https://www.youtube.com/watch?v=9p4mfA0Fyd8 ML used to cheat in console games https://au.pcmag.com/games/88121/machine-learning-is-now-being-used-to-cheat-in-multiplayer-games Google's OpenBuildings Dataset https://ai.googleblog.com/2021/07/mapping-africas-buildings-with.html https://sites.research.google/open-buildings/ Most ML COVID tools are flawed https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/ DALL-E mini released https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA https://huggingface.co/spaces/flax-community/dalle-mini Helpful Libraries https://www.openai.com/blog/triton/ https://github.com/openai/triton https://github.com/microsoft/FLAML https://github.com/clip-italian/clip-italian https://deepmind.com/research/open-source/melting-pot https://github.com/deepmind/meltingpot https://www.roboti.us/license.html https://github.com/openai/gym/issues/2259 https://github.com/jkterry1 FSF funds papers discussing CoPilot https://www.fsf.org/blogs/licensing/fsf-funded-call-for-white-papers-on-philosophical-and-legal-questions-around-copilot https://www.gnu.org/philosophy/who-does-that-server-really-serve.en.html Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter:
https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
An AI is now officially listed as the inventor in a patent, Aleph Alpha raises $27 million to build Europe's OpenAI, and an open source replication of DALL-E is released. Welcome to ML News. All right, before we get into all the stuff, this video is sponsored by Weights & Biases. Weights & Biases is a one-stop shop for machine learning researchers to track their experiments, save their models, recreate their old experiments, share work with others and generally analyze their results. Weights & Biases allows you, with one single line of code, to track your experiments, which means that Weights & Biases will track the execution of your experiment, it will track the results, it will track saved models and checkpoints, and upload it all to a convenient central place in your profile. And that allows you to analyze and visualize all of your experiments and data. Think of it like effortless TensorBoard in the cloud. Weights & Biases has integrations across all of the deep learning frameworks: PyTorch, TensorFlow, Hugging Face, you name it, they probably have an integration available. Today I want to tell you about a new feature that they have, which is called tables. Now the name is deceptively simple: a table is simply a grid of stuff. But in Weights & Biases, tables allow you to view things like datasets, but also outputs of your runs, any kind of artifact you have, you can analyze in tables. Tables allow you to sort, group, filter and do anything with the data you're looking at, and you can take advantage of all the visualization capabilities that you're used to from Weights & Biases dashboards. For example, here we automatically visualize the results of pixel-level annotations. I mean, look at that left-hand side, that model sucks. Look at the bottom: why is the sky labeled as trees? Clearly you have to do something here. So as you can see, you can analyze the output of your runs, you can see where the model still makes mistakes by filtering for the samples that are classified incorrectly. If for some reason Weights & Biases doesn't have a visualization for your type of data, which is unlikely, they allow you to actually integrate with their framework in order to produce one. The capabilities here are really endless. Here you can see we visualize anything from sound files to training plots to spectrograms, whatever you can think of. So as a special bonus, viewers of this channel get 80% off today off the basic plan, which you don't need, actually, because it's free. Yes, it's completely free. There's really nothing stopping you from going there and making an account: personal accounts are free, with unlimited experiments. If you're a bit more involved, if you want a team, and if that team is large and does a lot of tracking, you'll have to give them some money, but their main income comes from big enterprises that want to use this internally. If you are such a big enterprise, don't hesitate to give them a call and give them a lot of money. In that way, you'll be supporting all the free accounts for all us plebs. There are special options for academic research teams, which do get free team accounts, and you can also self-host if you need to be compliant with some sort of regulations. So again, go over to Weights & Biases and check it out. There's a lot of features that I haven't even talked about yet, such as hyperparameter optimization that's done automatically. Check it out.
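To make that concrete, here is a minimal sketch of what such a tracking script can look like — the project name, config values and table contents are made up for illustration:

```python
import wandb

# one line to start tracking a run (hypothetical project and config)
run = wandb.init(project="demo-project", config={"lr": 3e-4, "epochs": 5})

for epoch in range(run.config.epochs):
    loss = 1.0 / (epoch + 1)  # stand-in for a real training step
    wandb.log({"epoch": epoch, "loss": loss})

# a table: a grid of anything you want to sort, group and filter later
table = wandb.Table(columns=["id", "prediction", "label"])
table.add_data(0, "cat", "cat")
table.add_data(1, "tree", "sky")  # the kind of mistake you would filter for
wandb.log({"predictions": table})
run.finish()
```

Everything logged this way lands in the central dashboard, where the table can then be sorted, grouped and filtered as described.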
And now let's get into the news. I'm back. Yay. What did I miss? What has been going on? How do I do? How do I do news? I forgot. All right. The Global Legal Post writes: South Africa issues world's first patent listing AI as inventor. So this person right here is Professor Ryan Abbott. He and his legal team have been fighting around the world, applying for patents that list the AI named DABUS as the inventor of two particular inventions. So now they finally succeeded in South Africa. And also, as ABC News writes, an Australian court has equally ruled that AI can be listed as an inventor on a patent application. Now the situation is a little bit complex, and I'm not a lawyer, so don't take my word for it, but the ownership of the patent rests with the creator of DABUS, of the AI, while DABUS is listed as the inventor. So here's one of the things that DABUS apparently invented. It's kind of a fractal thing. So they're saying this is kind of a food container or something, and the fractality somehow makes it good, and you can connect containers together. But there's also this light-emitting thing that has kind of a fractal-ish pulse or something that makes it really noticeable. And this here is Stephen Thaler, who is the inventor of DABUS and therefore the owner of the patent. Now I was immensely interested in this, and I have spent way too much time researching it. Here are a few takeaways. First, I thought this is a PR stunt. Come on, you know, why can't you just list yourself as an inventor? Because ultimately AI is like a tool, right? And how does an AI even come up with new ideas? Like, what counts as new ideas? And how does an AI come up with this? Or this? Like, what was the part that the AI did? What was the starting point? What did it do? Like, I'm so confused. Okay. So this is the website of the team of legal professionals that got the patents through the courts, and they answer some of these questions. And their claim here is that in the various legal systems, the granting of a patent requires the inventor to perform the inventive step, like there's a specific step in the conception of an idea that is the innovative step, and it is actually a criminal offense to list the wrong individual as an inventor. So the inventor does the creative step, and you have to list that person as the inventor, otherwise it's a criminal offense. Now, the question is, if legally the AI did that inventive step, whatever that means, technically you should list the AI there, because you can't list any of your employees, and you can't list yourself, because you've only controlled and built the AI — the AI did the actual step that the law requires to be listed under the inventor. And apparently, they claim, in places patent applications have been rejected because of this. So from this perspective, it kind of makes sense that you should be able to list the AI as the inventor. Now, counter to that, some legal systems also reject this notion, saying only a natural person can be an inventor, and therefore, on some of these inventions, simply no patent can be granted, which would be discouraging for research. Remember, AI is used to make inventions in such fields as drug discovery, where the AI simply comes up with new compounds and then you test them. So in a way, the inventive step is performed by the AI. If you could not apply for a patent in that case, that would discourage research in these directions. Alright, so this seemed to me to be a reasonable explanation, but that's only the surface right here.
I was much more interested in the question of how: how does this system that I have never heard of come up with new inventions? And here on this hideous website of this legal team, this question appears to be answered. And cut. So this has gotten so long through the edits that it just completely blows the format of ML News. So what we're going to do is we're going to cut the rest of this into its own video, because this is really weird. This DABUS system is weird, this whole case is weird. The too-long-didn't-read is: there might be a valid legal reason why an AI needs to be listed as an inventor on a patent. Also, at the same time, this is probably a giant PR stunt. And the inventions themselves? They're nothing. So, you know, look forward to the next video, make up your own mind. Let's go on with the news. Alright, German startup Aleph Alpha raises a $27 million Series A round to build Europe's OpenAI, from TechCrunch. This is Jonas Andrulis, the founder of Aleph Alpha, with headquarters in Heidelberg in Germany, which is not too far from here. And the goal is to build the equivalent of OpenAI, but in a European fashion. So it says the German AI startup Aleph Alpha has now raised 23 million euro, which is 27 million in real money, in a Series A funding round co-led by Earlybird VC, Lakestar and UVC Partners. The team says it will have a strong commitment to open source communities such as EleutherAI and to academic partnerships, and will be pushing European values and ethical standards, it says, supporting fairer access to modern AI research, aimed at counteracting the ongoing de-democratization, monopolization and loss of control or transparency. So while these are laudable goals, and I really hope they achieve and stick to these goals, remember that OpenAI said the same at the beginning, and now OpenAI is mostly interested in closing down access to their stuff and charging for it. But luckily, venture capitalists, which are the main funders of this venture right here, are not known to ever want their money back or anything like this. So this should just be a breeze for Aleph Alpha. So I wish Jonas and co-founder Samuel and anyone part of Aleph Alpha all the best and big success in their endeavors. It's going to be fun having sort of a counterforce to the US here in Europe. Robotics 24/7 says AMP Robotics marks data and pick rate milestones for automated recycling. So speaking of companies raising money, this company is now raising a Series B of about 55 million US dollars, and they're in the space of garbage sorting, disposal and recycling. So they've developed these analysis and gripper technologies, and this is incredibly cool to watch. I mean, we're always talking about AI taking away our jobs. I don't think people will be too sad that AI is going to take away their jobs in this particular field. So here the AI automatically analyzes the streams of garbage and sorts them by the materials in them. And these blocks of cans just look really cool. Also, there is such a thing as Waste Expo. Didn't know. Excellent. Must be a blast. Next news: DeepMind releases a paper called Open-Ended Learning Leads to Generally Capable Agents. So what they do is they build an environment called XLand.
This is kind of a 3D environment, and the agents in here, you can see on the top left and top right, this is what they see, apparently, and they have to fulfill various goals in these environments. You can build any kind of environment you want in XLand, then you can tell the agents to achieve it. Apparently the paper is about this: when you instruct the agents to learn multiple goals, many goals at the same time, or one after another, they become generally capable, as opposed to just having a single objective and then ending up with a very narrowly skilled agent. Now XLand can be used to not only have many spatially different environments, but also have many different tasks or games in this environment. So they've got capture the flag, king of the hill, and so on. In the paper, they actually detail how they use population-based methods in order to train these agents, how good they are at zero-shot learning, and so on. And this is all pretty cool. However, these things and results aren't that new. We already knew that population-based training is probably good if you want to achieve some generally skilled agents, and we already knew that multi-objective or objective-conditioned learning is probably a good thing. Ultimately, the agents here are simply an observation encoder into an LSTM, then they take in the goal conditioning, and then it's standard actor-critic reinforcement learning. I guess what I want to say is that the research isn't necessarily super new or exciting, but you can get a lot, lot, lot of publicity if you build something that's 3D and looks really cool. So if you want, you can build your own stuff in XLand — if you work at DeepMind, because I don't think it's open source. So ha ha.
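That architecture family — observation encoder, goal conditioning, LSTM, actor-critic heads — is simple enough to sketch. The following PyTorch snippet is a schematic illustration only, not DeepMind's actual (closed-source) agent; all dimensions are made up, and the real model encodes pixels with a convolutional network rather than the MLP used here:

```python
import torch
import torch.nn as nn

class GoalConditionedAgent(nn.Module):
    """Sketch: observation encoder -> goal conditioning -> LSTM -> actor/critic."""
    def __init__(self, obs_dim, goal_dim, hidden=256, n_actions=10):
        super().__init__()
        self.obs_encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.goal_encoder = nn.Sequential(nn.Linear(goal_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, n_actions)   # policy logits
        self.critic = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, obs, goal, state=None):
        # concatenate encoded observation and encoded goal per timestep
        z = torch.cat([self.obs_encoder(obs), self.goal_encoder(goal)], dim=-1)
        h, state = self.lstm(z, state)
        return self.actor(h), self.critic(h), state

agent = GoalConditionedAgent(obs_dim=64, goal_dim=16)
obs = torch.randn(1, 5, 64)    # batch of 1, sequence of 5 observations
goal = torch.randn(1, 5, 16)   # goal conditioning, repeated per timestep
logits, value, _ = agent(obs, goal)
```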
You know, there are ways to make it worse, but I don't think much. So if you think that there are good books that have helped you in the past to overcome personal issues or problems or any kind of improvement, then it's entirely possible that an app like this does the same thing. I don't think we have to necessarily seek to replace therapists, but there are a lot of people who cannot afford therapists or don't have one close by. And in this case, such an app can probably help. Now, of course, it's also easy to see that people will feel as though that actually replaces a competent therapist and not seek the attention of an actual therapist when it's needed. So at the end, Eli breaks up with woe bot saying he was unimpressed by the bots advice for beating back loneliness and despair, but he is not entirely sorry that he tried it out. The mere act of typing out his problems was helpful. And through the process, he pinpointed what he actually needed to feel better. Yes. So it worked. Now Eli is seeing a human therapist in Philadelphia for $110 a session. Next news synced writes Google's wordcraft text editor advances human AI collaborative story writing. So the text editor isn't out yet just a paper and a demo video where a human writes something and then clicks on a button and then the machine sort of continues the story. This seems to be sort of a GPT three ish thing with an interface that just helps you select from different continuations and does the prompt engineering in a smart way for you, you can even customize the prompt, you can ask them on to elaborate on particular parts of the story, and then choose from various continuation. I think that's pretty cool if it ever will appear online, which I'm not sure, given that it's Google. But if it ever will appear, something like this might lead humans to just come up with new ideas through this thing. So pretty cool. Next news, PC mag writes machine learning is now being used to cheat in multiplayer games. So there's apparently this video here that demonstrates that a bot is used for cheating in games. Now aim bots have been a thing for a while. But apparently this thing works in a little bit of a different way. And it also works on consoles, which for now has been a kind of a difficult thing for aim bots. So what you do is you hook up your console to a video capture card feed that into your PC and the PC would actually send commands to your controller. So you'd hold the controller, but your controls would sort of be overwritten at times by the input of the cheat engine. And that makes detecting these cheats rather hard to use. Now it just says that machine learning is used in order to control this right here. You could also imagine this being just kind of a classic aim bot that just recognizes some pixels and then shoots at it. But apparently it's machine learning based. So you know, it's an ML news. Thanks. Next news, Google releases the open buildings data set, which is a data set that across satellite images of Africa has annotations of over 516 million buildings. This goes along with a paper where they detailed the challenges that they had to overcome to do this. So you can devise various failure modes right here. So all of these pictures, for examples are not buildings, the top left are water pools, top right are rocks. Then here there are some buildings, but the thing in the red square is not a building is just a bunch of walls, the left are containers. This is very difficult. 
Next, MIT Technology Review writes: hundreds of AI tools have been built to catch COVID, and none of them helped. Yet another article about the shortcomings of machine learning research. And the take of this article is, somehow, you know, that more effort is needed, and it criticizes ML research. In the meantime, I have a bit of a more cynical take right here. Like, we've known long enough about the publication pressure in ML research, and about using a buzzword topic like COVID in order to get a paper published, by simply applying whatever your thing in research is, whatever your topic is, to some kind of COVID dataset in order to get a publication out of it, because people think, like, oh, this is, you know, relevant, we need to publish fast. Now, I don't think the main motivation of 99% of this research was actually to develop something that works. Old methods are slapped onto new topics in order to get publications, and we will continue to see that in the future as well. Don't expect any of these things to work in the first place. Next news: DALL-E mini is an open source replication effort of OpenAI's DALL-E. So these people have built a version of DALL-E that is much smaller, but shows first signs of actually working. Remember, DALL-E goes from text to images, and you can actually try it out yourself in an online interactive demo on Hugging Face. Here's my query for a creepy clown, and the model does not disappoint. It seems like there's still a gap, probably a gap in model size and dataset size, until this project reaches the level of DALL-E, if ever. But still, it's pretty cool, and I love the avocado chair just as much as the DALL-E one. Okay, we come to the helpful library section of ML News: helpful libraries. The first helpful library is kind of big news: OpenAI releases Triton, which is a language that allows you to build custom CUDA kernels, and these CUDA kernels are super duper fast. And you don't have to know low-level C++ CUDA in order to produce them. So there's a blog post and code to go along with it, detailing in great detail what's now possible with Triton. And apparently, OpenAI has made this in such a way that people who have no previous experience with CUDA programming are able to produce kernels that are as fast as or faster than the kernels that were previously programmed by experienced CUDA programmers. So if you have something that doesn't have an efficient CUDA kernel yet, maybe give Triton a try.
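To give a flavor of what Triton code looks like, here is a vector-addition kernel in the spirit of the official tutorial — it assumes a CUDA GPU, and exact API details may differ between Triton versions:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                    # each program handles one block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                    # guard against out-of-bounds reads
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(9984, device="cuda")
y = torch.rand(9984, device="cuda")
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```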
Next helpful library: FLAML, fast and lightweight AutoML, a library for cost-effective hyperparameter optimization. So apparently, you enter your problem to optimize and your cost budget, and the library will optimize your hyperparameters towards your cost, taking into account how much each hyperparameter setting costs to explore. So for example, if you have something like model size as a hyperparameter, it will preferably try the smaller sizes first, because they cost less and you can search more, before it then scales up that hyperparameter. Pretty cool. Give it a try.
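The interface is pleasantly small; a sketch along these lines (with the dataset and time budget chosen arbitrarily for illustration) is roughly all it takes:

```python
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
automl = AutoML()
# time_budget is in seconds; FLAML explores cheap configurations first
automl.fit(X_train=X, y_train=y, task="classification", time_budget=30)
print(automl.best_estimator, automl.best_config)
```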
Next helpful library: Italian CLIP. Remember, CLIP scores images and text together, and Italian CLIP is now available. It can particularly classify such things as... ah, I'm kidding. It's a cool project. Check it out if you are Italian speaking or building Italian-speaking products. Next helpful library: DeepMind releases Melting Pot, an evaluation suite for multi-agent reinforcement learning. Now, other than XLand, this one is actually open. It's an environment built on DeepMind's Lab2D and has various scenarios for multi-agent reinforcement learning. And this actually looks like you can do some research with it. Multi-agent reinforcement learning, especially something like cooperative multi-agent reinforcement learning, is one of these areas that is still largely unexplored, and we don't have super good algorithms for it yet. So if you're looking for some research to do, this might be a cool topic. There's an old helpful library with some news: MuJoCo, the 3D simulator that has been used for a long time for things like continuous reinforcement learning, control problems and so on, is now free. The product requires a license, but they do give out a free license to anyone, at least until the 31st of October 2021. So if the availability of the license has blocked you so far, give it a try now. Also in RL news: OpenAI Gym has a new maintainer who is going to address the pull requests that are there. The project has been kind of dead for a long time, and the new maintainer makes it clear that there aren't going to be new environments, major breaking changes, environment wrappers, anything like this. I think they simply want to make Gym usable and up to date as it is. Pretty cool. If you're a Gym user, this should give you some stability and compatibility with current libraries. The new maintainer is JK Terry. Thanks for your work. So, in the last news for today: the Free Software Foundation calls for white papers on the philosophical and legal questions around Copilot. Apparently they're being contacted, understandably, a lot with regards to Copilot and the kind of legal ramifications of copyright and patents in what Copilot does. If you don't know what Copilot is, watch the ML News from a while ago. In essence, they give you 500 bucks if you publish a paper through them that somehow elaborates on parts of these topics. So areas of interest are: is Copilot's training on public repositories infringing copyright? Is it fair use? How likely is the output of Copilot to generate actionable claims of violations on GPL-licensed works? And so on. So there are some submission guidelines, and I wonder if there's a way I can submit my ML News segment to this. Where's my 500 bucks, Richard? Come on. So the criticism of the Free Software Foundation is that Copilot is what they call Service as a Software Substitute, which is a term they came up with to replace "software as a service" to make it more clear. Of course, Richard Stallman here writes: the basic point is, you can have control over a program someone else wrote if it's free, but you can never have control over a service someone else runs. So never use a service where, in principle, running a program would do. Never. Richard says never. Okay, gnu.org, let's look at that. A certificate. What kind of certificate is there? Details. It's by Let's Encrypt. Gee, is Let's Encrypt a program or a service? I wonder what's up, Richard. You're perfectly capable of generating SSL certificates using OpenSSL, a free program that you can run, yet you elect to use a service like Let's Encrypt. Well, isn't that a jolly? All right, this was already way too long. This was it for this week's ML News. Please check out Weights & Biases. They're a great system. And I'll see you next time. Bye bye.
[ { "start": 0, "end": 7.2, "text": " An AI is now officially listed as the inventor in a patent. Aleph Alpha raises $27 million to" }, { "start": 7.2, "end": 13.84, "text": " build Europe's open AI and an open source replication of Dalí is released. Welcome to ML News." }, { "start": 20.080000000000002, "end": 24, "text": " All right, before we get into all the stuff, this video is sponsored by" }, { "start": 24, "end": 30.240000000000002, "text": " weight and biases. weights and biases is a one stop shop for machine learning researchers to track" }, { "start": 30.240000000000002, "end": 37.28, "text": " their experiments, save their models, recreate their old experiments, share work with others" }, { "start": 37.28, "end": 44.400000000000006, "text": " and generally analyze their results. weights and biases allows you with one single line of code" }, { "start": 44.400000000000006, "end": 50.8, "text": " to track your experiments, which means that weights and biases will track the execution run" }, { "start": 50.8, "end": 55.599999999999994, "text": " of your experiment, it will track the results, it will track saved models and checkpoints," }, { "start": 55.599999999999994, "end": 62.559999999999995, "text": " upload it all to a convenient central place in your profile. And that allows you to analyze" }, { "start": 62.559999999999995, "end": 69.12, "text": " visualize all of your experiments and data. Think of it like effortless tensor board in the cloud." }, { "start": 69.12, "end": 74.8, "text": " weights and biases has integrations across all of the deep learning frameworks, PyTorch," }, { "start": 74.8, "end": 79.75999999999999, "text": " TensorFlow, hugging face, you name it, they probably have an integration available. Today," }, { "start": 79.76, "end": 85.2, "text": " I want to tell you about a new feature that they have, which is called tables. Now the name is" }, { "start": 85.2, "end": 93.28, "text": " deceptively simple. Table is simply a grid of stuff. But in weights and biases, tables allow you to" }, { "start": 93.28, "end": 99.52000000000001, "text": " view things like data sets, but also outputs of your runs, any kind of artifact you have," }, { "start": 99.52000000000001, "end": 106.56, "text": " you can analyze in tables, tables allow you to sort group filter and do anything with the data" }, { "start": 106.56, "end": 111.28, "text": " you're looking at. And you can take advantage of all the visualization capabilities that you're" }, { "start": 111.28, "end": 117.52000000000001, "text": " used to from weights and biases dashboards. For example, here, we automatically visualize the" }, { "start": 117.52000000000001, "end": 123.84, "text": " results of pixel level annotations. I mean, look at that left hand side, that model sucks. Look at" }, { "start": 123.84, "end": 128.4, "text": " the bottom, why is the sky labeled as trees, clearly you have to do something here. 
So as" }, { "start": 128.4, "end": 133.04, "text": " you can see, you can analyze the output of your runs, you can see where the model still makes" }, { "start": 133.04, "end": 138.95999999999998, "text": " mistakes by filtering for the samples that are classified incorrectly, if for some reason," }, { "start": 138.95999999999998, "end": 144.39999999999998, "text": " weights and biases doesn't have a visualization for your type of data, which is unlikely," }, { "start": 144.39999999999998, "end": 150.32, "text": " if they don't have it, they allow you to actually integrate with their framework in order to produce" }, { "start": 150.32, "end": 155.92, "text": " one, the capabilities here are really endless. Here you can see we visualize anything from sound" }, { "start": 155.92, "end": 164, "text": " files to training plots to spectrograms, whatever you can think of. So as a special bonus, viewers" }, { "start": 164, "end": 171.11999999999998, "text": " of this channel only get 80% off today off the basic plan, which you don't need actually," }, { "start": 171.11999999999998, "end": 176.32, "text": " because it's free. Yes, it's completely free. There's really nothing stopping you from going" }, { "start": 176.32, "end": 182.23999999999998, "text": " there and making an account personal accounts, free unlimited experiments. If you're a bit more" }, { "start": 182.24, "end": 187.76000000000002, "text": " involved, if you want a team, and if that team is large and does a lot of tracking, you'll have to" }, { "start": 187.76000000000002, "end": 193.60000000000002, "text": " give them some money, but their main income comes from big enterprises that want to use this" }, { "start": 193.60000000000002, "end": 199.36, "text": " internally. If you are such a big enterprise, don't hesitate to give them a call and give them a lot" }, { "start": 199.36, "end": 204.64000000000001, "text": " of money. In that way, you'll be supporting all the free accounts for all us plebs, there are" }, { "start": 204.64000000000001, "end": 211.44, "text": " special options for academic research teams, which do get free team accounts. And you can also self" }, { "start": 211.44, "end": 216.64, "text": " host if you need to be compliant with some sort of regulations. So again, go over to weights and" }, { "start": 216.64, "end": 221.04, "text": " biases and check it out. There's a lot of features that I haven't even talked about yet, such as" }, { "start": 221.04, "end": 226.4, "text": " hyper parameter optimization that's done automatically, check it out. And now let's get into the news." }, { "start": 229.84, "end": 235.92, "text": " I'm back. Yay. What did I miss? What has been going on? How do I do? How do I do news? I forgot." }, { "start": 235.92, "end": 242.23999999999998, "text": " All right. The global legal post right South Africa issues world's first patent listing AI as" }, { "start": 242.23999999999998, "end": 248.07999999999998, "text": " inventor. So this person right here is Professor Ryan Abbott, he and his legal team have been" }, { "start": 248.07999999999998, "end": 254.88, "text": " fighting around the world applying for patents that list the AI named Davos as the inventor of" }, { "start": 254.88, "end": 261.44, "text": " two particular inventions. So now they finally succeeded in South Africa. 
And also as ABC news" }, { "start": 261.44, "end": 268.32, "text": " writes, an Australian court has equally ruled that AI can be listed as an inventor on a patent" }, { "start": 268.32, "end": 273.6, "text": " application. Now the situation is a little bit complex, and I'm not a lawyer, so don't take my" }, { "start": 273.6, "end": 281.36, "text": " word for it. But the ownership of the patent rests with the creator of Davos of the AI, while Davos" }, { "start": 281.36, "end": 287.68, "text": " is listed as the inventor. So here's one of the things that Davos apparently invented, it's kind" }, { "start": 287.68, "end": 293.68, "text": " of a fractal thing. So they're saying this is kind of a food container or something. And the" }, { "start": 293.68, "end": 299.52, "text": " fractality somehow makes it good. And you can connect containers together. But there's also this" }, { "start": 300.24, "end": 306.32, "text": " light emitting thing that has kind of a fractal ish pulse or something that makes it really" }, { "start": 306.32, "end": 313.04, "text": " noticeable. And this here is Stephen taller, who is the inventor of Davos and therefore the owner" }, { "start": 313.04, "end": 318.48, "text": " of the patent. Now I was immensely interested into this. And I have spent way too much time" }, { "start": 318.48, "end": 323.84000000000003, "text": " researching this here is kind of a few takeaways. First, I thought this is a PR stunt, come on," }, { "start": 323.84000000000003, "end": 329.68, "text": " you know, why can't you just list yourself as an inventor, because ultimately AI is like a tool," }, { "start": 329.68, "end": 334.72, "text": " right? And how does an AI even come up with new ideas? Like what counts as new ideas? And like," }, { "start": 334.72, "end": 342.56, "text": " how does an AI come up with this? Or this? Like, what was the part that the AI did? What was the" }, { "start": 342.56, "end": 347.36, "text": " starting point? What was it do? Like, I'm so confused. Okay. So this is the website of the" }, { "start": 347.36, "end": 353.28000000000003, "text": " team of the legal professionals that got the patents through to through the courts. And they" }, { "start": 353.28000000000003, "end": 359.04, "text": " answer some of these questions. And their claim here is that in the various legal systems, the" }, { "start": 359.04, "end": 365.36, "text": " granting of a patent requires the inventor to perform like the invention step, like there's" }, { "start": 365.36, "end": 371.84000000000003, "text": " a specific step in the conception of an idea that is like the innovative step. And it is actually" }, { "start": 371.84, "end": 378.88, "text": " criminal offense to list the wrong individual as an inventor. So the inventor does the creative" }, { "start": 378.88, "end": 384.15999999999997, "text": " step. And you have to list that person as the inventor. Otherwise, it's criminal offense." }, { "start": 384.15999999999997, "end": 391.28, "text": " Now, the question is, if legally the AI did that inventive step, whatever that means," }, { "start": 391.28, "end": 396.79999999999995, "text": " technically, you should list the AI there because you can't list any of your employees," }, { "start": 396.8, "end": 401.6, "text": " you can't list yourself because you've only controlled and built the AI, but the AI did the" }, { "start": 401.6, "end": 407.6, "text": " actual step that the law requires to be listed under the inventor. 
And apparently, they claim" }, { "start": 407.6, "end": 413.84000000000003, "text": " at places patent applications have been rejected because of this. So from this perspective, it kind" }, { "start": 413.84000000000003, "end": 419.44, "text": " of makes sense that you should be able to list the AI as the inventor. Now counter to that," }, { "start": 419.44, "end": 424.56, "text": " some legal systems also reject this notion, saying only a natural person can be an inventor. And" }, { "start": 424.56, "end": 431.52, "text": " therefore, on some of these inventions, simply no patent can be granted, which would be discouraging" }, { "start": 431.52, "end": 438.48, "text": " from researching stuff. Remember, AI is used to make inventions in such field as drug discovery," }, { "start": 438.48, "end": 444.16, "text": " where the AI simply comes up with new compounds, and then you test them. So in a way, the inventive" }, { "start": 444.16, "end": 449.76, "text": " step is performed by the AI, if you could not apply for a patent in that, that would discourage" }, { "start": 449.76, "end": 454.96, "text": " research in these directions. Alright, so this seemed to me like to be a reasonable explanation," }, { "start": 454.96, "end": 461.44, "text": " but that's only the surface right here. I was much more interested in the question of how," }, { "start": 462.15999999999997, "end": 467.44, "text": " how does this system that I have never heard of come up with new invention. And here on this" }, { "start": 467.44, "end": 475.52, "text": " hideous website of this legal team, this question appears to be answered and cut. So this has gotten" }, { "start": 475.52, "end": 482.08, "text": " so long through the edits that it just completely blows the format of ML news. So what we're going" }, { "start": 482.08, "end": 488.24, "text": " to do is we're going to cut the rest of this into its own video, because this is really weird. This" }, { "start": 488.24, "end": 494.79999999999995, "text": " DABA system is weird. This whole case is weird. The too long didn't read is there might be a valid" }, { "start": 494.79999999999995, "end": 501.76, "text": " legal reason why AI needs to be listed as an inventor on a patent. Also, at the same time," }, { "start": 501.76, "end": 509.44, "text": " this is probably a giant PR stunt. And the inventions themselves are they're nothing." }, { "start": 510.88, "end": 516.96, "text": " So, you know, look forward to the next video, make up your own mind. Let's go on with the news." }, { "start": 518.16, "end": 524.64, "text": " Alright, German startup Aleph Alpha raises 27 million US dollar series a round to build" }, { "start": 524.64, "end": 531.12, "text": " Europe's open AI from tech crunch. This is Jonas Andrulles, the founder of Aleph Alpha with" }, { "start": 531.12, "end": 536.32, "text": " headquarters in Heidelberg in Germany, which is not too far from here. And the goal is to build" }, { "start": 536.32, "end": 543.12, "text": " the equivalent of open AI, but in a European fashion. 
So it says the German AI startup Aleph" }, { "start": 543.12, "end": 550.88, "text": " Alpha now raised 23 million euro, which is 27 million in real money, in a Series A funding co-" }, { "start": 550.88, "end": 557.6, "text": " led by Earlybird VC, Lakestar and UVC Partners. The team says it will have a strong commitment to" }, { "start": 557.6, "end": 562.32, "text": " open source communities such as EleutherAI, academic partnerships, and will be pushing" }, { "start": 562.32, "end": 568.24, "text": " European values and ethical standards. It says it supports fairer access to modern AI research," }, { "start": 568.24, "end": 576.32, "text": " aimed at counteracting the ongoing de-democratization, monopolization and loss of control or transparency." }, { "start": 576.32, "end": 582.24, "text": " So while these are laudable goals, and I really hope they achieve and stick to these goals," }, { "start": 582.24, "end": 589.52, "text": " remember that OpenAI has said the same at the beginning, and now OpenAI is mostly interested" }, { "start": 589.52, "end": 595.76, "text": " in closing down access to their stuff and charging for it. But luckily, venture capitalists, which" }, { "start": 595.76, "end": 600.96, "text": " are the main funders of this venture right here, are not known to ever want their money back or" }, { "start": 600.96, "end": 607.2, "text": " anything like this. So this should just be a breeze for Aleph Alpha. So I wish Jonas and co-" }, { "start": 607.2, "end": 614, "text": " founder Samuel and anyone part of Aleph Alpha all the best and big success in their endeavors." }, { "start": 614, "end": 618.96, "text": " It's going to be fun having sort of a counterforce to the US here in Europe." }, { "start": 620.88, "end": 627.44, "text": " Robotics 24/7 says AMP Robotics marks milestone in data, pick rates for automated" }, { "start": 627.44, "end": 633.36, "text": " recycling. So speaking of companies and raising money, this company is now raising a Series B of" }, { "start": 633.36, "end": 642.4, "text": " about 55 million US dollars. And they're in the space of garbage sorting and disposal and recycling." }, { "start": 642.4, "end": 648.64, "text": " So they've developed these analysis and gripper technologies. And this is incredibly cool to watch." }, { "start": 648.64, "end": 654.4, "text": " I mean, we're always talking about AI taking away our jobs. I don't think people will be too sad" }, { "start": 654.4, "end": 660, "text": " that AI is going to take away their jobs in this particular field. So here the AI automatically" }, { "start": 660, "end": 666.08, "text": " analyzes the streams of garbage and sorts them by the materials in them. And these blocks of cans" }, { "start": 666.08, "end": 672.4, "text": " just look really cool. Also, there is such a thing as Waste Expo. Didn't know. Excellent. Must be a blast." }, { "start": 674.16, "end": 680.08, "text": " Next news: DeepMind releases a paper called Open-Ended Learning Leads to Generally Capable Agents." }, { "start": 680.08, "end": 686.24, "text": " So what they do is they build an environment called XLand.
This is kind of a 3D environment" }, { "start": 686.24, "end": 691.2, "text": " and the agents in here, you can see on the top left and top right, this is what they see apparently," }, { "start": 691.2, "end": 697.04, "text": " and they have to fulfill various goals in these environments. You can build any kind of environment" }, { "start": 697.04, "end": 703.44, "text": " you want in XLand, then you can tell the agents to achieve that. Apparently the paper is about when" }, { "start": 703.44, "end": 709.84, "text": " you instruct the agents to learn multiple goals, many goals at the same time, or after one another," }, { "start": 709.84, "end": 716, "text": " they become generally capable, as opposed to just having a single objective and then ending up with" }, { "start": 716, "end": 722.96, "text": " a very narrowly skilled agent. Now XLand can be used to not only have many different environments" }, { "start": 722.96, "end": 728.16, "text": " spatially, but also have many different tasks or games in this environment. So there's capture" }, { "start": 728.16, "end": 733.84, "text": " the flag, king of the hill, and so on. In the paper, they actually detail how they use population based" }, { "start": 733.84, "end": 740.08, "text": " methods in order to train these agents, how good they are at zero-shot learning and so on. And this" }, { "start": 740.08, "end": 746, "text": " is all pretty cool. However, these things and results aren't that new, we already knew that" }, { "start": 746, "end": 751.84, "text": " population based training is probably good if you want to achieve some generally skilled agents," }, { "start": 751.84, "end": 758.08, "text": " we already knew that multi-objective or objective-conditioned learning is probably a good thing." }, { "start": 758.08, "end": 763.84, "text": " Ultimately, the agents here are simply an observation encoder into an LSTM. And then" }, { "start": 763.84, "end": 770, "text": " they take in the goal conditioning. And then it's standard actor-critic reinforcement learning." }, { "start": 770, "end": 775.84, "text": " I guess what I want to say is that the research isn't necessarily super new or exciting, but you" }, { "start": 775.84, "end": 784, "text": " can get a lot, lot, lot of publicity if you build something that's 3D and looks really cool. So if" }, { "start": 784, "end": 789.2, "text": " you want, you can build your own stuff in XLand, if you work at DeepMind, because I don't think" }, { "start": 789.2, "end": 798.08, "text": " it's open source. So ha ha. The New York Times writes: Something bothering you? Tell it to Woebot," }, { "start": 798.08, "end": 803.36, "text": " and it is about the system that delivers cognitive behavioral therapy through an app. So cognitive" }, { "start": 803.36, "end": 808.32, "text": " behavioral therapy is one of the more successful approaches to treat things like depression or" }, { "start": 808.32, "end": 816.08, "text": " anxieties. It is rather formulaic, as this article describes, and therefore it lends itself at least" }, { "start": 816.08, "end": 822.48, "text": " a little bit to being incorporated into some kind of algorithm. So the article is a discussion of: is" }, { "start": 822.48, "end": 828.32, "text": " this good? Is this bad? The pros are that usually a human therapist is very expensive, and there" }, { "start": 828.32, "end": 835.28, "text": " aren't enough of them, especially in times of a global health crisis.
On the other hand," }, { "start": 835.28, "end": 840.8000000000001, "text": " critics argue that these algorithms aren't yet good enough to replace a human, because they cannot" }, { "start": 840.8000000000001, "end": 846.08, "text": " intrinsically understand the things that the humans say. And you get the idea. The New York" }, { "start": 846.08, "end": 851.6800000000001, "text": " Times accompanies this person right here, Eli, who has tried out the app for a given period of time." }, { "start": 851.68, "end": 858.64, "text": " Eli details how the app sometimes fails. Responding to 'my boss doesn't appreciate the work I do," }, { "start": 858.64, "end": 863.28, "text": " and I can't seem to get her approval', the bot answers with 'that sounds difficult. Does this" }, { "start": 863.28, "end": 868.7199999999999, "text": " happen more in the morning or at night?' It is a little bit of an improvement, I guess, over something" }, { "start": 868.7199999999999, "end": 875.8399999999999, "text": " like ELIZA. However, it still seems to be rather formulaic. So my own personal opinion is this:" }, { "start": 875.84, "end": 882.64, "text": " if I have some problems, there are books that I can read, self-help books, that guide me through" }, { "start": 882.64, "end": 889.2, "text": " the process of somehow solving my own problems. These books are necessarily impersonal. They are" }, { "start": 889.2, "end": 895.2, "text": " written by a person, but they're not personalized to me in any way. It's the same text for every" }, { "start": 895.2, "end": 901.44, "text": " single person that buys the book. So if a book like this can help me, then certainly a little bit of" }, { "start": 901.44, "end": 908.1600000000001, "text": " an algorithmized version of a book like this might help me too. You know, there are ways to make it" }, { "start": 908.1600000000001, "end": 914.08, "text": " worse, but I don't think by much. So if you think that there are good books that have helped you" }, { "start": 914.08, "end": 920.32, "text": " in the past to overcome personal issues or problems or any kind of improvement, then it's" }, { "start": 920.32, "end": 925.2800000000001, "text": " entirely possible that an app like this does the same thing. I don't think we have to necessarily" }, { "start": 925.2800000000001, "end": 931.36, "text": " seek to replace therapists, but there are a lot of people who cannot afford therapists or don't have" }, { "start": 931.36, "end": 936.4, "text": " one close by. And in this case, such an app can probably help. Now, of course, it's also easy to" }, { "start": 936.4, "end": 942.48, "text": " see that people will feel as though it actually replaces a competent therapist, and won't seek the" }, { "start": 942.48, "end": 948.4, "text": " attention of an actual therapist when it's needed. So at the end, Eli breaks up with Woebot, saying" }, { "start": 948.4, "end": 953.84, "text": " he was unimpressed by the bot's advice for beating back loneliness and despair, but he is not entirely" }, { "start": 953.84, "end": 958.64, "text": " sorry that he tried it out. The mere act of typing out his problems was helpful, and through the" }, { "start": 958.64, "end": 965.76, "text": " process, he pinpointed what he actually needed to feel better. Yes. So it worked. Now Eli is seeing" }, { "start": 965.76, "end": 974.4, "text": " a human therapist in Philadelphia for $110 a session.
Next news: Synced writes Google's" }, { "start": 974.4, "end": 979.68, "text": " Wordcraft text editor advances human-AI collaborative story writing. So the text editor" }, { "start": 979.68, "end": 986.48, "text": " isn't out yet, just a paper and a demo video, where a human writes something and then clicks on a" }, { "start": 986.48, "end": 992.64, "text": " button, and then the machine sort of continues the story. This seems to be sort of a GPT-3-ish" }, { "start": 992.64, "end": 998.72, "text": " thing with an interface that just helps you select from different continuations and does the prompt" }, { "start": 998.72, "end": 1003.6, "text": " engineering in a smart way for you. You can even customize the prompt, you can ask the model to" }, { "start": 1003.6, "end": 1009.76, "text": " elaborate on particular parts of the story, and then choose from various continuations. I think" }, { "start": 1009.76, "end": 1016, "text": " that's pretty cool, if it ever will appear online, which I'm not sure, given that it's Google. But" }, { "start": 1016, "end": 1022.56, "text": " if it ever will appear, something like this might lead humans to just come up with new ideas through" }, { "start": 1022.56, "end": 1030.32, "text": " this thing. So pretty cool. Next news, PCMag writes machine learning is now being used to cheat" }, { "start": 1030.32, "end": 1038.08, "text": " in multiplayer games. So there's apparently this video here that demonstrates that a bot is used" }, { "start": 1038.08, "end": 1042.64, "text": " for cheating in games. Now aimbots have been a thing for a while. But apparently this thing" }, { "start": 1042.64, "end": 1048.0800000000002, "text": " works in a little bit of a different way. And it also works on consoles, which for now has been" }, { "start": 1048.0800000000002, "end": 1052.8000000000002, "text": " kind of a difficult thing for aimbots. So what you do is you hook up your console to a video" }, { "start": 1052.8000000000002, "end": 1057.76, "text": " capture card, feed that into your PC, and the PC would actually send commands to your controller." }, { "start": 1057.76, "end": 1063.2800000000002, "text": " So you'd hold the controller, but your controls would sort of be overwritten at times by the" }, { "start": 1063.2800000000002, "end": 1070.48, "text": " input of the cheat engine. And that makes these cheats rather hard to detect. Now it just says" }, { "start": 1070.48, "end": 1075.92, "text": " that machine learning is used in order to control this right here. You could also imagine this being" }, { "start": 1075.92, "end": 1081.1200000000001, "text": " just kind of a classic aimbot that just recognizes some pixels and then shoots at them. But apparently" }, { "start": 1081.1200000000001, "end": 1089.84, "text": " it's machine learning based. So, you know, it's in ML News. Thanks. Next news, Google releases the" }, { "start": 1089.84, "end": 1097.52, "text": " Open Buildings dataset, which is a dataset that, across satellite images of Africa, has annotations" }, { "start": 1097.52, "end": 1104, "text": " of over 516 million buildings. This goes along with a paper where they detail the challenges" }, { "start": 1104, "end": 1109.92, "text": " that they had to overcome to do this. So you can see various failure modes right here. So all" }, { "start": 1109.92, "end": 1115.76, "text": " of these pictures, for example, are not buildings: the top left are water pools, the top right are rocks." }, { "start": 1115.76, "end": 1120.16, "text": " Then here there are some buildings, but the thing in the red square is not a building, it's just a bunch" }, { "start": 1120.16, "end": 1127.04, "text": " of walls; on the left are containers. This is very difficult. Google has annotated, I think," }, { "start": 1127.04, "end": 1134.1599999999999, "text": " or sorry, Google has annotated 1.75 million buildings in 100,000" }, { "start": 1134.1599999999999, "end": 1140.08, "text": " images by hand and then trained a system on it. The paper details how difficult that was, how much" }, { "start": 1140.08, "end": 1144.8799999999999, "text": " you have to use augmentation and regularization in order to do that. But in the end, they've come up" }, { "start": 1144.8799999999999, "end": 1150.24, "text": " with this giant dataset that you can now use. You can actually explore the dataset in this" }, { "start": 1150.24, "end": 1155.44, "text": " interactive explorer right here. So you can switch between this view, which is, I'm not sure how" }, { "start": 1155.44, "end": 1161.6000000000001, "text": " helpful that is, or this view. So if you zoom in right here, I have discovered," }, { "start": 1161.6000000000001, "end": 1169.92, "text": " however, that sometimes, I feel at least, like this piece here: is this an actual building? It says" }, { "start": 1169.92, "end": 1176.96, "text": " it's a very high confidence building. I'm not sure, honestly. Also this thing here, this might be one," }, { "start": 1176.96, "end": 1182, "text": " but it seems like it works pretty well, just overall. The challenges are also recognizing" }, { "start": 1182, "end": 1187.12, "text": " buildings in rural areas, where they kind of blend into the environment, and recognizing" }, { "start": 1187.12, "end": 1193.28, "text": " buildings in commercial or densely populated areas, where you mainly have to separate buildings from" }, { "start": 1193.28, "end": 1198.56, "text": " each other. So pretty cool, give the Open Buildings dataset a try if you're interested." }, { "start": 1200.56, "end": 1206.56, "text": " Next, MIT Technology Review writes: hundreds of AI tools have been built to catch COVID," }, { "start": 1206.56, "end": 1212.8, "text": " none of them helped. Yet another article about the shortcomings of machine learning research." }, { "start": 1212.8, "end": 1219.9199999999998, "text": " And the take of this article is, somehow, you know, more effort is needed, and it criticizes ML research." }, { "start": 1219.9199999999998, "end": 1225.2, "text": " In the meantime, I have a bit more of a cynical approach right here. Like, we've known long enough" }, { "start": 1225.2, "end": 1230.8799999999999, "text": " about the publication pressure in ML research. And people use a buzzword topic like COVID in order" }, { "start": 1230.88, "end": 1237.2800000000002, "text": " to get a paper published, by simply applying whatever their thing in research is, whatever their topic is," }, { "start": 1237.2800000000002, "end": 1242.16, "text": " to some kind of COVID dataset in order to get a publication out of it, because" }, { "start": 1242.16, "end": 1249.2, "text": " people think, like, oh, this is, you know, relevant, we need to publish fast. Now, I don't think the" }, { "start": 1249.2, "end": 1256.0800000000002, "text": " main motivation of 99% of this research was actually to develop something that actually works." }, { "start": 1256.08, "end": 1261.4399999999998, "text": " Old methods are slapped onto new topics in order to get publications. And we will continue to see" }, { "start": 1261.4399999999998, "end": 1265.84, "text": " that in the future as well. Don't expect any of these things to work in the first place." }, { "start": 1268.3999999999999, "end": 1275.6, "text": " Next news: DALL-E mini is an open source replication effort of OpenAI's DALL-E. So these people have" }, { "start": 1275.6, "end": 1282.56, "text": " built a version of DALL-E that is much smaller, but shows first signs of actually working. Remember," }, { "start": 1282.56, "end": 1289.84, "text": " DALL-E goes from text to images, and you can actually try it out yourself on an online" }, { "start": 1289.84, "end": 1295.6799999999998, "text": " interactive demo on Hugging Face. Here's my query for a creepy clown, and the model does not disappoint." }, { "start": 1295.6799999999998, "end": 1302, "text": " It seems like there's still a gap, probably a gap in model size and dataset size," }, { "start": 1302, "end": 1308.1599999999999, "text": " until this project reaches the level of DALL-E, if ever, but still, it's pretty cool. And I love" }, { "start": 1308.16, "end": 1316, "text": " the avocado chair just as much as the DALL-E one. Okay, we come to the helpful library section of" }, { "start": 1316, "end": 1323.6000000000001, "text": " ML News: helpful libraries. The first helpful library is kind of big news: OpenAI releases Triton," }, { "start": 1323.6000000000001, "end": 1330.8000000000002, "text": " which is a language that allows you to build custom CUDA kernels. And these CUDA kernels are" }, { "start": 1330.8000000000002, "end": 1337.2, "text": " super duper duper fast. And you don't have to know low-level C++ CUDA in order to produce them. So" }, { "start": 1337.2, "end": 1344.16, "text": " there's a blog post and code to go along with it, detailing in depth what's now possible with" }, { "start": 1344.16, "end": 1351.04, "text": " Triton. And apparently, OpenAI has made this in such a way that people who have no previous" }, { "start": 1351.04, "end": 1358.24, "text": " experience with CUDA programming are able to produce kernels that are as fast or faster" }, { "start": 1358.24, "end": 1365.2, "text": " than the kernels that were previously programmed by experienced CUDA programmers. So if you have" }, { "start": 1365.2, "end": 1371.76, "text": " something that doesn't have an efficient CUDA kernel yet, maybe give Triton a try. Next helpful" }, { "start": 1371.76, "end": 1378.4, "text": " library: FLAML, fast and lightweight AutoML, is a library for cost-effective hyperparameter" }, { "start": 1378.4, "end": 1384.96, "text": " optimization. So apparently, you enter your problem to optimize and your cost, and the library will" }, { "start": 1384.96, "end": 1390.72, "text": " optimize your hyperparameters towards your cost, taking into account how much each hyperparameter" }, { "start": 1390.72, "end": 1395.76, "text": " setting costs to explore. So for example, if you have something like model size as a hyperparameter," }, { "start": 1395.76, "end": 1401.3600000000001, "text": " it will preferably try the smaller sizes first, because they cost less and you can search more," }, { "start": 1401.3600000000001, "end": 1406.72, "text": " before it then scales up that hyperparameter. Pretty cool. Give it a try. Next helpful library:" }, { "start": 1406.72, "end": 1413.36, "text": " Italian CLIP.
Remember, CLIP scores images and text together, and Italian CLIP is now available" }, { "start": 1413.36, "end": 1420.9599999999998, "text": " and can particularly classify such things as... ah, I'm kidding. It's a cool project. Check" }, { "start": 1420.9599999999998, "end": 1427.1999999999998, "text": " it out if you are Italian speaking or building Italian speaking products. Next helpful library:" }, { "start": 1427.1999999999998, "end": 1432, "text": " DeepMind releases Melting Pot, an evaluation suite for multi-agent reinforcement learning." }, { "start": 1432, "end": 1437.12, "text": " Now, other than XLand, this one is actually open. It's an environment in DeepMind's Lab2D" }, { "start": 1437.12, "end": 1442.24, "text": " and has various scenarios for multi-agent reinforcement learning. And this actually looks" }, { "start": 1442.24, "end": 1447.1200000000001, "text": " like you can do some research with it. And multi-agent reinforcement learning, especially something" }, { "start": 1447.1200000000001, "end": 1451.68, "text": " like cooperative multi-agent reinforcement learning, is one of these areas that is still" }, { "start": 1451.68, "end": 1457.52, "text": " largely unexplored, and we don't have super good algorithms for it yet. So if you're looking for" }, { "start": 1457.52, "end": 1462, "text": " some research to do, this might be a cool topic. There's an old helpful library with some news:" }, { "start": 1462, "end": 1468.96, "text": " MuJoCo, the 3D simulator that has been used for a long time for doing things like continuous" }, { "start": 1468.96, "end": 1475.28, "text": " reinforcement learning control problems and so on, is now free. The product requires a license, but they" }, { "start": 1475.28, "end": 1482, "text": " do give out a free license to anyone, at least until the 31st of October 2021. So if the" }, { "start": 1482, "end": 1489.2, "text": " availability of the license has blocked you so far, give it a try now. Also in RL news, OpenAI Gym has" }, { "start": 1489.2, "end": 1494.48, "text": " a new maintainer that is going to address the pull requests that are there. The project has been kind of" }, { "start": 1494.48, "end": 1500.08, "text": " dead for a long time, and the new maintainer makes it clear that there aren't going to be new" }, { "start": 1500.08, "end": 1505.84, "text": " environments, major breaking changes, environment wrappers, anything like this. I think they simply" }, { "start": 1505.84, "end": 1513.1200000000001, "text": " want to make Gym usable and up to date as it is. Pretty cool. If you're a Gym user, this should" }, { "start": 1513.1200000000001, "end": 1519.3600000000001, "text": " give you some stability and compatibility with current libraries. The new maintainer is JK Terry." }, { "start": 1519.36, "end": 1526.56, "text": " Thanks for your work. So, in the last news for today, the Free Software Foundation calls for white papers" }, { "start": 1526.56, "end": 1532.56, "text": " on the philosophical and legal questions around Copilot. Apparently they're understandably contacted" }, { "start": 1532.56, "end": 1539.4399999999998, "text": " a lot with regards to Copilot and the kind of legal ramifications of copyright and patents" }, { "start": 1539.4399999999998, "end": 1545.6799999999998, "text": " in what Copilot does. If you don't know what Copilot is, watch the ML News from a while ago." }, { "start": 1545.68, "end": 1552.5600000000002, "text": " In essence, they give you 500 bucks if you publish a paper through them that somehow elaborates on" }, { "start": 1552.5600000000002, "end": 1558.5600000000002, "text": " parts of these topics. So areas of interest are: is Copilot training on public repositories infringing" }, { "start": 1558.5600000000002, "end": 1563.76, "text": " copyright? Is it fair use? How likely is the output of Copilot to generate actionable claims" }, { "start": 1563.76, "end": 1570.4, "text": " of violations on GPL-licensed works? And so on. So there are some submission guidelines, and I wonder" }, { "start": 1570.4, "end": 1576.16, "text": " if there's a way I can submit my ML News segment to this. Where's my 500 bucks, Richard? Come on." }, { "start": 1576.16, "end": 1581.76, "text": " So the criticism of the Free Software Foundation is that Copilot is what they call service as a" }, { "start": 1581.76, "end": 1588.8000000000002, "text": " software substitute, which is a term they came up with to replace 'software as a service', to make" }, { "start": 1588.8000000000002, "end": 1593.68, "text": " it more clear. Of course, Richard Stallman here writes: the basic point is, you can have control" }, { "start": 1593.68, "end": 1599.44, "text": " over a program someone else wrote if it's free, but you can never have control over a service someone" }, { "start": 1599.44, "end": 1605.8400000000001, "text": " else runs. So never use a service where, in principle, running a program would do. Never." }, { "start": 1605.8400000000001, "end": 1613.6000000000001, "text": " Richard says never. Okay, gnu.org. Let's look at that: a certificate. What kind of certificate is" }, { "start": 1613.6000000000001, "end": 1622.56, "text": " there? Details. It's by Let's Encrypt. Gee, is Let's Encrypt a program or a service? I wonder what's" }, { "start": 1622.56, "end": 1628.4, "text": " up, Richard. You're perfectly capable of generating SSL certificates using OpenSSL, a free program" }, { "start": 1628.4, "end": 1633.6000000000001, "text": " that you can run, yet you elect to use a service like Let's Encrypt. Well, isn't that a jolly?" }, { "start": 1633.6000000000001, "end": 1638, "text": " All right, this was already way too long. This was it for this week's ML News. Please check out" }, { "start": 1638, "end": 1658.96, "text": " Weights & Biases. They're a great system. And I'll see you next time. Bye bye." } ]
Lg97gWXsiQ4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Resolution-robust Large Mask Inpainting with Fourier Convolutions (w/ Author Interview)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "lama", "inpainting", "gan", "adversarial", "loss function", "fourier transform", "fft", "fast fourier transform", "fourier convolution", "fast fourier convolution", "fourier convolution layer", "global information", "generative model", "periodic strucutre", "best inpainting", "ai inpainting", "first author interview", "lama inpainting", "mask filling", "large mask inpainting", "remove from picture", "ai image editing" ]
#lama #inpainting #deeplearning At the end of the video is an interview with the paper authors! LaMa is a system that is amazing at removing foreground objects from images, especially when those objects cover a large part of the image itself. LaMa is specifically trained to reconstruct large masked areas and includes global information throughout its forward propagation by using Fourier Convolutions in its layers. This makes it incredibly effective at reconstructing periodic structures with long-range consistency, compared to regular convolutions. OUTLINE: 0:00 - Intro 0:45 - Sponsor: ClearML 3:30 - Inpainting Examples 5:05 - Live Demo 6:40 - Locality as a weakness of convolutions 10:30 - Using Fourier Transforms for global information 12:55 - Model architecture overview 14:35 - Fourier convolution layer 21:15 - Loss function 24:25 - Mask generation algorithm 25:40 - Experimental results 28:25 - Interview with the authors Paper: https://arxiv.org/abs/2109.07161 Code: https://github.com/saic-mdal/lama Online Demo: https://cleanup.pictures/ Sponsor: ClearML https://clear.ml Abstract: Modern image inpainting systems, despite the significant progress, often struggle with large missing areas, complex geometric structures, and high-resolution images. We find that one of the main reasons for that is the lack of an effective receptive field in both the inpainting network and the loss function. To alleviate this issue, we propose a new method called large mask inpainting (LaMa). LaMa is based on i) a new inpainting network architecture that uses fast Fourier convolutions (FFCs), which have the image-wide receptive field; ii) a high receptive field perceptual loss; iii) large training masks, which unlocks the potential of the first two components. Our inpainting network improves the state-of-the-art across a range of datasets and achieves excellent performance even in challenging scenarios, e.g. completion of periodic structures. Our model generalizes surprisingly well to resolutions that are higher than those seen at train time, and achieves this at lower parameter&time costs than the competitive baselines. The code is available at \url{this https URL}. Authors: Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there, today we're looking at Resolution-robust Large Mask Inpainting with Fourier Convolutions, also called LaMa, by the Samsung AI Center, Samsung Research, EPFL and the Skolkovo Institute of Science and Technology. This is a special paper review, because I'm only going to introduce the paper briefly, maybe 15-20 minutes or so, and then we're going to talk to the first author of the paper and go a lot more in depth. So if you like conversations with first authors, and the ability for me to ask dumb questions to them, then stay tuned for that. It's going to be in the second half of the video. For the first half though, I first want to demonstrate to you what this model can do. Hey there, this video is sponsored by ClearML. ClearML is an MLOps stack that is fully open source. It can do experiment tracking, orchestration, deployment, model and feature stores, and much more. The self-hosted tier is a first class citizen in ClearML. As I said, it's fully open source, you can look at it, you can audit it, you can extend it, you can run it on your servers. And if you ever come to the point where you need the extra features, you can totally upgrade anytime, they'll gladly take your money. They have a free tier in the cloud, which gets you going pretty far. Now, we talked about experiment tracking last time: ClearML, with two lines of code, will track any experiment that you do, track the metrics, the outputs, the environments, the dependencies, and make everything super duper reproducible. But this time I want to talk about a second part, which is the orchestration engine. So the orchestration engine is responsible for packaging up your experiments, including all dependencies, and then distributing them on your hardware. So that means you can simply submit an experiment to a particular queue and ClearML takes care of running it wherever it's needed. So this is super cool, because it means I can get going on my laptop, run a few experiments there, and as soon as I'm ready, boom, I ship it to the cloud. So here's an example: look at this experiment that has already been run, I got some output, but now I would like to do something different with it. So I click here, I say clone, I give it a meaningful name, like 2. And now I've cloned this experiment, and this is kind of a draft experiment right now, it has no results yet. But what I can do: I can go into my configuration, into my hyperparameters, and I can change around the hyperparameters. So I wasn't really happy with the last experiment, I feel a bigger batch size might be needed. So from 128, let's go to 129. Now I'm pretty sure that's going to make all the difference right here. So I save, and then I simply click on enqueue, I submit it. And now ClearML simply takes care of running that experiment for me. As you might guess, you can have different queues, some for GPU load, some for long running tasks, some high priority, as you're used to from any scheduler. This can also be used in an automated fashion, meaning that you can use this for automated hyperparameter search, and you can even do things such as scheduled or triggered tasks. For example, if you want to trigger a training run every day on new incoming data, that's totally doable. Now, orchestration is just one part of ClearML; I've shown you experiment tracking last time, and there are many more features to their product. If this sounds interesting to you, if you're an open source fan, go check them out. And thanks so much to ClearML for sponsoring this video.
Let's get into it. You can already see it a little bit in figure one right here: the model is able to take a picture, you draw a mask on it, so this is the blue area right here, and the model will auto-complete the picture. So the model doesn't see the mask, the model simply sees what is unmasked, and then the model is asked to complete that missing area. As you can see, it fills that area in very, very cleanly. And especially if you look right here, this irregular structure of these door holes, or whatever that is, is preserved even across very large areas. This is very, very cool. This is very difficult to do with these inpainting systems. In fact, there is a project website right here, all the code is available. They present this with a little bit more of an animated flair, so you can really see the effect that these models are having, and it's pretty, pretty cool. Especially take a look at these repeated structures that are often in the pictures. So these meshes or the lines right here, these tend to be especially difficult for inpainting models, because inpainting models are usually built on convolutional neural networks, and convolutions notably take into account very local context, whereas for these patterns, you need to take into account kind of a global context. That's exactly going to be the message right here. There is an app, there are actually a bunch of apps based on this model. This is a third party app, so this is not by the authors, but it is an app built from these models. There is also, as I said, code available, there's like a Hugging Face Space, there is a Colab by the authors. But this particular app, let's just take a picture right here. It works best on natural images, of course, but we'll just take the channel logo right here, and we'll say we want to erase the pi sign right here. Look how cool that works. What about the paw? Okay, that is kind of disturbing. How about the nose? No, no, no, I don't like that. But it should be able to... yeah, see, so it kind of completes lines if you cross them out. So this should complete the table, but remove the leg. You can see it's fairly robust, even to the user sort of mis-specifying a bunch of things. So here I draw over the headline, if you saw that, and the headline remains. So I removed this part, but I crossed into here a bit, and you can see the line kind of remains. Now it's got a bit of hair. Yes, kill it with fire. In any case, this is available for you to use. If you have more sensible pictures, I'm sure that will work a little bit better, maybe. There are also different versions of the model, so keep that in mind. And it also works on different resolutions; that's why it's called resolution-robust large mask inpainting, which is also very cool. So what is the core idea of this paper? The core idea is going to be these Fourier convolutions right here. And these Fourier convolutions are going to enable the model to take into account global context from the very beginning. What is the problem with a convolutional neural network? The problem usually is that in a convolution, if I have a picture, a convolution on a particular point will take into account its local neighborhood, right? And then I sort of slide this over the image right here, and that will give me my representation in the next layer, maybe that's going to be even of the same size.
So for a given point right here, I will have a receptive field of the point in the original image plus some neighborhood. Usually we work with three by three convolutions, so all I'm going to do really is look one pixel to the top and one pixel to the bottom, one pixel to the left, and one pixel to the right, and that's about it. I'm not going to do any more looking around. So how does a convolutional neural network integrate information across the whole image? And the answer to that is by going for multiple layers. If I simply represent the picture as a set of pixels in one dimension, imagine that the one dimension here is the picture. And I'm going to need a bit more space for that. So as you can see, in the first layer, from the first to the second layer, let's say we look at this particular point right here: it's going to have a receptive field of three. So it's going to look at these pixels right here. In the next layer, you can see that the same location also has a receptive field of three right here. However, since, for example, this particular pixel right here also had a receptive field of three, and this particular one also, as you can see, from layer two on, the total receptive field of that location, so all the information inflow, is going to be from a receptive field of five. Therefore, the more layers we have, the more spatial information can be included for a given particular location in the output. But as I said, that takes a lot of layers, that takes depth. And especially for these inpainting applications, what you want is kind of global information. These masks right here, they're pretty big for an inpainting application, they're pretty, pretty wide. And if you can imagine a convolutional layer that looks at a three by three pixel neighborhood, that might be something right here. So you're going to have a whole lot of convolutional kernels that just see the masked pixels. They see nothing of the outside, they simply see a bunch of masked pixels, for a whole bunch of layers: layer two, layer three, until like layer four, there's nothing, no information at all at this position about the outside world, about the world beyond the mask. And even then, it's only like this area. We need to go many more layers before we get access to information that is way outside of here, and at that point, it may already be too late. So the Fourier convolutions, they solve that. They have the ability, at every single layer, to look at a global context. And how are they doing this? It's not super expensive. In fact, they're doing this by using, of course, Fourier transformations. A Fourier transformation will map a signal to its corresponding frequency domain signal; it is essentially a different way of representing a signal. So if you have a signal, let's say a pure sine wave, and you do a Fourier transformation of that entire thing, you can represent that as the components in the Fourier spectrum, and that would simply have one component at the particular frequency at which this sine wave is operating. That's not quite the frequency here, that's like one over the frequency, but in a more general sense, a Fourier transform will decouple the spatial resolution and transform it into a frequency resolution.
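To see that in a few lines, here is a tiny sketch of the point just made, in plain NumPy, with numbers invented for illustration: a pure sine wave collapses to essentially a single component of the spectrum, and, crucially, every spectrum component is computed from all samples at once:

```python
import numpy as np

t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t)  # a pure 50 Hz sine wave

# Every output bin of the FFT is a weighted sum over ALL input samples,
# so each single component carries information about the whole signal.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])

print(freqs[np.argmax(np.abs(spectrum))])  # -> 50.0, one dominant component
```

The comment is the property that matters for what follows: because every frequency bin aggregates the entire signal, anything that operates on the spectrum effectively has a global receptive field in pixel space.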
So if you have a Fourier spectrum, maybe you have a very complicated signal right here, a complicated signal that will also give you a complicated Fourier spectrum: you have a lot of this frequency, some negative amount of that frequency, not too much of this frequency, and so on. If you do a convolution in the original domain, you simply convolve across neighbors of the signal. However, if you do a convolution in the Fourier domain, you convolve across frequencies, across neighboring frequencies, which means that these three things represent three particular sine waves: maybe the lowest one is like a super long sine wave, the second one is a bit of a faster sine wave, the third one is an even faster sine wave. But what is important is that every single component in the Fourier spectrum represents information about the entirety of the signal. And that is exactly what we want. Whereas the regular convolution is localized in pixel space, the Fourier convolution is going to be localized in frequency space, but global in pixel space. That is very cool. And of course, Fourier transforms are also one of the things that are extremely fast. It's essentially a linear algebra operation, and there are very fast implementations of discrete Fourier transforms called fast Fourier transforms, and that's exactly what they use right here. The whole architecture is going to look like this. There is the input image x, there's going to be a mask during training that is produced by a mask generation algorithm, x is then masked out, and the model is tasked to predict the missing pixels that are hidden by the mask. As I said, the model has no access to what's below the mask; anything else would be kind of pointless, right? Also, this is a fully convolutional architecture, which makes it able to essentially transfer to different resolutions; that is another advantage of being fully convolutional. So what we do first is we downscale a little bit. As far as I can tell, these images are something like 256 by 256 during training, or it works on crops of 256 by 256, somewhere in that range. But the cool thing is, it can generalize to high definition images like 1920 by 1080 or something like this, the same network. So the network that's trained on this, quote unquote, low resolution can generalize to very, very high resolution, and it won't lose performance. But we'll see that in the experiments. So first there's downsampling, and then the model is just nine layers. They also have a variant with 18 layers, but the base model is nine layers of this fast Fourier convolution residual block. As you can see, it has a residual connection right here, like a normal ResNet, but whereas a normal ResNet would have two convolution layers right here, we opt for these fast Fourier convolutional layers. Now, they look a bit complicated, but essentially what we do is we carry two different signals across the entire network. One signal contains localized information: that signal is going to operate in the original domain of pixel space and has all those properties, so it looks at its neighbors, and so on. And one signal is going to operate in more of the global domain. And then in each layer, those two strands of information get to exchange information with each other. So the whole signal is represented as this block here with the two components.
But it's essentially just: we have two strands of signal, and every now and then they get to exchange a bit of information, right? One is the local branch, and one is the global branch of information. So what do we do with the local branch? We have different operations right here. We have a little conv layer that is in pixel space; actually, we have two of them, right, two conv layers. So we pass the local signal through... this is really, if you just consider this path right here through this one, and ignore this part here, if you just go here, this is just like a normal conv net, right? But this path here gets information from this side here. It receives it, and then there is an addition. So what is that? That is simply the global signal also doing a localized convolution in pixel space. So far, there is nothing special; if we were to just do this, it would be pointless to have two strands of information, right? But the important thing is that the global strand comes to be in a very special way. For that, we have to look at what information arrives at the global branch right here, because that's the information that's going to be passed in here for the next layer. For that, we see that from the local branch, there's a three by three convolution going out over here. So let me draw that in greenish over here. And that is going to be mixed with this global strand of information. And the global strand of information is going through this spectral transform block. The spectral transform block is essentially pretty easy. There is a convolution, batch norm, ReLU block. This is a one by one convolution, so it is simply a linear operator pixel-wise, essentially; there's a batch norm, there's a ReLU for the nonlinearity. And then what we do is a fast Fourier transform in 2D, and at the end of the block, we're going to invert that. So: fast Fourier transform to operate in Fourier space, and then invert the fast Fourier transform at the end. And inside of it, we're going to do a convolution, batch norm, ReLU block right here. So the convolution, again, is a one by one convolution, I believe, followed by batch norm, followed by ReLU. So actually, forget what I said about localized convolutions right here: if they just do one by one convolutions, they really operate just on the individual elements of the spectrum by themselves. They don't even consider localized neighborhoods of frequencies, they just operate on the individual frequencies, one by one, which is an option; one by one convolutions are a thing. So, you know, pretty cool. This by itself also has a residual connection right here; I'm going to guess to make signal flow better, or more stable, or something like this. The observant people might object and say, hey, this thing right here actually outputs complex numbers. So this is in the space of complex numbers, so you'll get vectors with entries like a + ib. But what we do is simply take those and stack them. So we just make real vectors out of them, a and b. So if there is a bunch of numbers, it will just be like a1, b1, a2, b2, and so on, and we just consider this to be a real vector of double dimensionality, or a real 2D signal of double the dimensionality as before. And that is how we do it. I mean, it's not entirely correct, right?
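To make the data flow concrete, here is a minimal PyTorch sketch of such a fast Fourier convolution layer. This is my simplified reading, not the authors' code: the real implementation splits channels between the branches, uses strides, keeps a residual inside the spectral transform, and places the normalizations a bit differently.

```python
import torch
import torch.nn as nn

class SpectralTransform(nn.Module):
    """Global branch: pointwise convolution applied in Fourier space."""
    def __init__(self, channels: int):
        super().__init__()
        self.pre = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.BatchNorm2d(channels), nn.ReLU())
        # operates on stacked (real, imaginary) parts -> 2 * channels
        self.fourier = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.BatchNorm2d(2 * channels), nn.ReLU())
        self.post = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = self.pre(x)
        z = torch.fft.rfft2(x, norm="ortho")    # complex: (b, c, h, w // 2 + 1)
        z = torch.cat([z.real, z.imag], dim=1)  # stack into one real tensor
        z = self.fourier(z)                     # 1x1 conv: mixes channels per frequency
        real, imag = z.chunk(2, dim=1)
        x = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        return self.post(x)

class FFC(nn.Module):
    """Two strands (local, global) that exchange information in every layer."""
    def __init__(self, channels: int):
        super().__init__()
        self.local_to_local = nn.Conv2d(channels, channels, 3, padding=1)
        self.local_to_global = nn.Conv2d(channels, channels, 3, padding=1)
        self.global_to_local = nn.Conv2d(channels, channels, 3, padding=1)
        self.global_to_global = SpectralTransform(channels)

    def forward(self, x_local, x_global):
        out_local = self.local_to_local(x_local) + self.global_to_local(x_global)
        out_global = self.local_to_global(x_local) + self.global_to_global(x_global)
        return out_local, out_global
```

Usage would be something like `FFC(64)(x_l, x_g)` with two `(B, 64, H, W)` tensors; note how the complex output of the FFT is just treated as a real tensor of twice the channels, exactly the stacking trick described above.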
But the model in this way has access to all the relevant information. It can do what it wants with it; it can learn, for example, that half of the dimensions correspond to the phases, or whatever the complex part of this is. It's been a while since I did Fourier transforms. Okay, so, sorry, go back up here to the start of it: there is first the real FFT, as you can see, that gets you to complex space. Then there is complex-to-real, in which we transform the c channels into 2c channels, but now we're in the real numbers. Then there is this ReLU, batch norm, conv block, which retains the shape of the signal. And there is real-to-complex, where we go back into complex space: from reals with 2c channels into complex with just c channels, and then we reverse the Fourier transform. And that, no, that is just the spectral transform block right here; the Fourier convolution, as they define it, is this entire construct right here. As you can see, the spectral transform information then flows in here, is combined with some local information (that really should be green), and that then goes into this global output, and obviously will become the global input to the next layer. So that is how they fuse localized information with global information in every single layer. And that turns out to be pretty, pretty powerful. They do have other improvements right here, and it's crazy to see just how much engineering and how many tricks go into these models to really get them to work. So they also stress that the loss function is a really, really important topic right here, because you can't simply reconstruct the original image. If you simply tell the model to reconstruct the original image from here, it's going to be bad, because if your mask is pretty big, pretty wide, there can be many possible fillings of the mask that make sense. And since there are many possible ones, if you don't reward the model for getting one of the possible ones, without punishing it for not producing all the other ones, the model is going to be very confused and is simply going to output the average of all the possible ones, which we don't want; we want one of the possible ones. So what we do is apply a perceptual loss. And they explain that over here: what you do is you feed the original image, this is the real one, and the fake one, both through a pre-trained neural network; and you can already see there's going to be a discriminator later too, right. But you feed them both through a pre-trained neural network, and then you compare, at intermediate points, or even at the last latent layer, the two feature maps. So depending on how this network is trained, if it outputs very perceptually salient features, you'll get a nice loss that doesn't punish you for getting any particular pixels wrong, but that encourages you to get something that is perceptually similar to what was there in the original image. They also stress that it's really important how you train this network right here. They suggest to make this network also include global context, using either Fourier convolutions as well, or dilated convolutions. And here you can see, that's essentially the formula: we take the features from the original image and the features from the fake image, and we calculate their distance.
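A minimal sketch of such a feature-distance loss follows; note the assumptions: the paper's high receptive field version uses a dilated network pretrained on segmentation, while here an ImageNet-pretrained ResNet-50 stands in purely for illustration, and the stage slicing is my own choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Frozen feature extractor. LaMa's high receptive field perceptual loss uses a
# dilated, segmentation-pretrained network; a classification ResNet-50 is only
# a stand-in here.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
for p in resnet.parameters():
    p.requires_grad_(False)

# Slice the network into stages so intermediate feature maps can be compared.
stages = nn.ModuleList([
    nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1),
    resnet.layer2,
    resnet.layer3,
])

def perceptual_loss(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    loss = torch.zeros(())
    for stage in stages:
        fake, real = stage(fake), stage(real)
        loss = loss + F.mse_loss(fake, real)  # distance between feature maps
    return loss
```

Gradients still flow into `fake` even though the extractor's parameters are frozen, which is exactly what you want: the loss shapes the generator without updating the pretrained network.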
And that's going to be the high receptive field perceptual loss. This is not the only thing they do. They also have, as you can see, an adversarial loss. There is also a regularizer on the gradients. So the final loss you're going to end up with is a mix of all of these different losses. There's also a discriminator-based perceptual loss, and this part right here is by itself, again, a conjunction of two losses. So rest assured, the loss architecture right here is very, very intricate, and I'm going to guess it has taken a lot of experimentation, not only by this paper, but by the whole field, to really come up with nice losses that make your outputs nice. Obviously, there's going to be a bunch of hyperparameters here to tune, which is always fun, but they seem to have done a pretty good job. The last thing they stress, which is important, is how you generate masks during training. So during training, you can't just, you know, take your finger and draw on pictures like I did; you have to have some heuristic way of generating masks. And I'm not going to go into the detail of how they do it. You can see here, compared to this one, which is one of the baselines, this is one of their heuristics. They have a mix of these large masks; sorry, both are large, but one kind is called wide masks, which are kind of polygons where they round off the corners, I think, and the other is box masks, which are sort of heuristically generated boxes right here, or stacks of these boxes. And they mix those two together in order to get the final masking for their images (a rough sketch of such a heuristic follows at the end of this paragraph). You can see these are fairly large; this one here covers more than half the image. So these are challenging, challenging tasks. But it is through training with such large masks that you get the models to really learn to fill them in consistently. So what you can see in their results, and we're not going to go into all of it, they have a lot of tables, a lot of ablations, is that red essentially means it's worse than their model. You can see almost all of the table is red, except some models in some of the benchmarks; for example, on the narrow masks, you will find situations where other models might outperform their model. But as soon as you go to wide masks, it is no longer really a competition at all. So their model seems to be really good on those wide masks. They do a lot of ablations where they switch out different components; for example, they switch the Fourier convolution for a dilated convolution, which is also a way to increase the receptive field rapidly, or for a regular convolution. And again, while there might be some improvement sometimes on narrow masks, as soon as you go to wide masks, the other models degrade pretty quickly. The dilated convolution actually holds up fairly well right here. But one disadvantage of that is that it's very hard to go to higher resolutions, because the higher the resolution you go to, the more the dilated convolutions' receptive fields will effectively shrink, while the Fourier convolutions' receptive fields will always remain essentially global. So here you have some comparison to baselines. You can see, of course, they chose these pictures well, with kind of a regular structure in the background. But check this out: this is even their model, just with regular convolutions, and even if they go deeper, it doesn't really help. But like this, this is just insane, right?
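As promised above, here is a rough NumPy sketch of such an aggressive mask generator. Everything here is invented for illustration: the shapes, ranges and counts are made up, and the real algorithm draws polygonal wide strokes with rounded corners and stacked boxes, while this sketch approximates a wide stroke with a stamped random walk.

```python
import numpy as np

def random_mask(h, w, rng, n_boxes=(1, 4), n_strokes=(1, 4)):
    """Rough sketch of aggressive, LaMa-style masking: a mix of large box
    masks and thick 'wide' strokes (here: a square-stamped random walk)."""
    mask = np.zeros((h, w), dtype=np.float32)
    # box masks: large rectangles placed at random
    for _ in range(rng.integers(*n_boxes)):
        bh, bw = rng.integers(h // 8, h // 2), rng.integers(w // 8, w // 2)
        y, x = rng.integers(0, h - bh), rng.integers(0, w - bw)
        mask[y:y + bh, x:x + bw] = 1.0
    # wide strokes: random walk, stamping a thick square at each step
    for _ in range(rng.integers(*n_strokes)):
        y, x = rng.integers(0, h), rng.integers(0, w)
        r = rng.integers(h // 32, h // 10)       # stroke half-width
        for _ in range(rng.integers(8, 24)):     # walk length
            y = np.clip(y + rng.integers(-r, r + 1), 0, h - 1)
            x = np.clip(x + rng.integers(-r, r + 1), 0, w - 1)
            mask[max(0, y - r):y + r, max(0, x - r):x + r] = 1.0
    return mask

rng = np.random.default_rng(0)
print(random_mask(256, 256, rng).mean())  # fraction of the image masked out
```

The point of the width, as the authors explain in the interview below, is that such masks force the generator to carry information over long distances rather than just from the nearest unmasked pixels.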
I get it, they picked this picture, but it is really good. And you can also see this building, how it's completed over here with different methods, and then with their method. And the mask was, you know, fairly big, as you can see; also at the bottom, the mask is huge. Here they show what happens if you go to higher resolution. So on this rather simple problem, you can see that a lot of the models do well in the top row, if you just have kind of a lower resolution. But if you go to really high resolution, a lot of the models struggle, while the LaMa model here still does a good job, and their larger model seems to be even better. Yeah, again, lots of ablations, but I'm going to stop right here, and we'll go over to chatting with the first author about this. So I'll see you in a bit. Hello, everyone. I'm here with Roman Suvorov and Elizaveta Logacheva, the authors of the LaMa paper, and the LaMa system as well, I guess. I think this is as much a paper as it is an engineering effort, because just looking at the paper, it already dawns on you just how many things are important in this system. And then, trying this out myself, it really works, like, it's snappy, it's really cool, and the results are pretty great, I have to say, for a learned system. So first, welcome, both of you, and big props on the system, it's very cool. So, you've seen my video. What struck you? What did I get wrong? Yeah, first of all, I think that you did a great job in describing the overall paper, and I have almost no complaints regarding that. Maybe one point regarding the overall message of the paper: as is seen from the title, the Fourier convolution might stand out a little bit more than the other components, but actually the paper is about all three components: how we generate data, how we process images with a neural network, and how we optimize, what losses we choose. All these three components are important. And yes, sometimes they can be relatively easily tuned from existing methods, and such easy tuning can help to significantly improve the results. That was the overall point of the paper. Yeah, I had this feeling too: you again and again stress that a lot of these things are important, especially the three main components, and you did a lot of ablations to also show that all of these are important. That's why I find it so impressive, right, because people usually just put... which one did you start with first? Did you first have the idea of the Fourier convolutions? Was that the motivation? No, initially, when we started the overall project on inpainting, we just started with a classic pix2pix, so just git clone an existing pix2pix code base. And then we tried to iteratively identify the weakest points and tried to understand what is the reason behind that weakness. And at some stage, we understood that most architectures, and we tried really lots of different architectures, and we tried existing blocks from other inpainting papers, almost none of them can handle repetitive patterns well. And when we think about repetitions, one of the most obvious things that comes to mind is the Fourier transform, because it is a very natural thing for handling periodic signals. And first we started composing a layer on our own.
And then we just googled and found the FFC, which was proposed for recognition tasks. And we thought that it was a great thing to start with, and we took it and modified it and tuned it for that particular task. And yeah, it worked pretty well. So these would be the Fourier convolutions. Was it already in the form that we see in the paper, with the two strands of information, the global and the local? Or did you have to shake things up? No, the right part of this picture reflects the original form of this fast Fourier convolution, as it was proposed by the authors. Cool. And did it work out of the box? Yes. But when we tuned it for inpainting, we figured out that the local branch is not really important, and we can handle almost everything with just the global branch, with that spectral transform. Yeah. So, but you still kept the local branch in? Yeah, because it helps for stability, especially on not-so-large images and large masks. So if we try to push the generalization to high resolution to the extreme, and train on very low resolutions and then infer on very high resolutions, then using only the global branch pays off more. But in the real world, some combination of these two is more practical. Yeah. So, this is something I found interesting, because you have this point of these large masks, or very wide masks, and so on, and you stress the importance of your algorithm that produces these different masks. Now, when I look at these pictures, it doesn't seem that different, right? If I look at the top row, you know, some parts of the picture are also occluded, relatively big parts, there are kind of some squiggles, they're even relatively wide, right? Do you have an intuition why the mask generation algorithm is so important? Is it important that it's close to what humans do later? Or is it important that it is of a certain shape because of the architecture of the network? What's the deal with that? Yeah, as with the architecture, we started with an existing heuristic to draw those masks, and we actually follow the same algorithm as the one used in DeepFill v2, the first row in that figure. Why should masks be wide? Because the width of the masks forces the generator to pass information further within itself. If we were to cover almost all of the input image with very thin lines, for example, we could mask out every second row and every second column of the input image, and that would be something very similar to a super-resolution problem. A large percentage of the image would be covered by such masks, but the network wouldn't need to pass information far. That's why wide masks are important, and they are more important for fully convolutional architectures; but for Fourier-based ones they always help as well. And we have a couple of histograms in our supplementary material which compare the first row of that figure with the masks generated by our algorithm, and the difference is pretty huge, actually. It is cool to see that the difference is so big. I think masks were actually the point from which we started, because we aimed to inpaint real-world examples, and in those examples, masks actually are huge. So we started with big masks in our validation set, and we saw that all other algorithms fail to fill these large holes. And then we started to think about how we needed to change our model so that it could incorporate global information. Yeah. Is your algorithm deterministic? Yeah.
If I give it the same input and the same mask? And is it correct that the cleanup.pictures app that runs here is really your small model? No, this is the large model. Oh, this is the big model already. Okay. So here, I've taken this... but what happens, have you ever tried just masking the whole picture? What's kind of the default output? That's an interesting... I don't know what will happen. I think something average, a constant color, maybe. Let's see. Yeah. All right. Pretty unspectacular. But I guess gray is a very high-probability output, right? Okay. Cool. And then there's the third component, which is the loss. And I have to say, the loss is a monstrosity. There are like 50... so first of all, you have... no, this is the adversarial part of the loss, and then on top of that, you have the discriminator perceptual loss. I'm going to guess that's the same as the perceptual loss, but in the features of the discriminator. Yeah. So the features which are used to calculate the discriminator-based perceptual loss are updated throughout the training. This is a pretty commonly used loss in image-to-image tasks. It helps to stabilize training. So the idea is that the discriminator bases its decisions on features which are perceptually meaningful. So, very similar to the perceptual loss that you have up here, right? I think that feature matching, or discriminator-based perceptual loss, helps mostly because it provides a clear signal to the generator. In adversarial training, we have to balance the discriminator and the generator, and if one part is more powerful, the whole thing collapses. Discriminator-based perceptual loss helps the generator to catch up when the discriminator becomes too powerful. Yeah, that makes sense. For all of these losses, right, you then have a regularizer on the gradients, and you have this high receptive field perceptual loss, and so on. Did you plan this from the beginning? Did you say, you know, here are all the good losses that I know of? Or do you have more losses that you ended up not including? My question is: if I'm a researcher or an engineer trying to come up with such a system, how do I decide which seven losses go into my final loss, right, out of the 50 possible losses that I could use? Do I try them all? Or are there some guidelines? Actually, I think all of these losses, except for the high receptive field perceptual loss, are pretty common, and they are all often used in image-to-image tasks. We need something to force our model to create a realistic picture, so we need a discriminator and its loss. We need to reconstruct the things that we can reconstruct, so we need some loss for reconstruction, and additional losses to restrict it. So we need something that works on features. But we worked a lot on... we did a hyperparameter search, of course, and we worked on the form of our perceptual loss, because we started with the common perceptual loss based on the VGG model. But we had a feeling that it might not be perfect, because models that were trained on classification tasks seem to concentrate on texture and not on global structure. So we decided to try something else. And then we found these models that were trained on segmentation tasks, on a dataset that is more similar to our dataset, and we tried it, and it worked. So the segmentation task as a training task for the perceptual loss model is sort of a better preconditioner than the classification task?
Yeah, because it is natural for the segmentation model to focus more on the boundaries of objects instead of their textures. And in the case of inpainting, good texture can be learned using only the discriminator, because there is a lot of freedom in how we can generate fine-grained textures and there is no need to put any extra supervision on that part. But it's also important that the models used for segmentation are different. In our ablation, we compared the same model trained on classification with the same model trained on segmentation. Yeah, not only do you have a different task with segmentation, you also include higher receptive field layers in that model. So the logic is that if that model takes in more global information, its signal to your model will also be more sensitive to that global information. It's a little bit like reward shaping in reinforcement learning: it seems like you do reward shaping through how you train the different discriminator models that then give your model the signal to learn from. I like the meta idea here, that's pretty cool. Unfortunately, I'm not familiar with reward shaping from reinforcement learning. But our idea here was that we basically have two losses. The first one is the discriminator, or adversarial loss, which focuses more on fine-grained details, and the second is the perceptual loss, which focuses more on global structures. For the Fourier convolutions, maybe a little bit more conceptually: we have this local information in one strand and this global information in the other strand, and it's clear that for these large masks, as you show, the system works pretty well. What kind of data does your system not work well on? What would be sort of the worst input that I could give to your system? This up here is really beautiful, right? What picture could I take such that the result is absolute garbage? Yeah, actually, lots of images will be processed badly by our model. I mean, of course, I can give it a picture that is very dissimilar to the training dataset. But let's say I stayed within the training dataset, what would be the worst domain or the worst kind of picture? I think it cannot recreate half of a human or something like that. Our model focuses mostly on background due to how it was trained, and it cannot recover foreground objects really well. It cannot do something that requires it to actually know things about the world and not just take them from the picture it sees. Yeah. So do you feel that the model mostly learns how to copy elements from the parts it sees to the parts that are masked? Do you think the learning is mostly teaching the model how to do that? Because it seems the model is very sophisticated at... you know, in Photoshop, you take this stamp tool, right? You say, I'll take a little bit from over here and put it here. Do you think your model is just a really, really good user of that tool, in a sense? Yeah, it seems so, yes. And in order to be able to create big parts of images from scratch, we need a different kind of model, and we most probably need some kind of capacity within the generator, because without it, it is not possible to create something from nothing. Also, our model is quite small, so it cannot really remember everything. Yeah, that is something that I left completely out of my review.
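To make the segmentation-based perceptual loss above concrete, here is a minimal sketch assuming a frozen backbone pretrained on segmentation; `backbone.extract(...)` and the layer names are hypothetical placeholders for "return intermediate feature maps", not the paper's actual interface:

```python
import torch
import torch.nn.functional as F

def hrf_perceptual_loss(backbone, real, fake, layers=("layer2", "layer3")):
    """Sketch of a high-receptive-field perceptual loss. `backbone` is
    assumed to be frozen and pretrained on segmentation, so its features
    emphasize object boundaries over fine texture (which is left to the
    discriminator). `backbone.extract(...)` is a hypothetical helper
    returning one feature tensor per requested layer."""
    with torch.no_grad():
        feats_real = backbone.extract(real, layers)   # no gradient needed
    feats_fake = backbone.extract(fake, layers)       # gradients flow to generator
    losses = [F.mse_loss(ff, fr) for ff, fr in zip(feats_fake, feats_real)]
    return torch.stack(losses).mean()
```

The design choice discussed here is entirely in what the backbone was trained on: swapping a classification-pretrained network for a segmentation-pretrained one changes which errors the loss penalizes, without changing the formula.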
I think the fact that your model is a lot smaller than the baselines you compare to, that it has way fewer parameters, is something very cool, and it enables it to run inside web applications and so on, or maybe on a mobile device. Yeah, I have another question about the Fourier convolution. So here we have global information and local information as sort of two different things. You mentioned in the paper that other models that have access to more global or wider information could also work, such as a vision transformer or something like this. My question is: is there an in-between between local convolutions and Fourier convolutions? Okay, I mean, there are dilated convolutions. But if I think of a Fourier transform, you transform into a space where locality no longer matters, but frequency matters; and in the original domain, frequency kind of doesn't matter, but locality really matters. Are there transforms we could do that put us in between, where as I go along the x coordinate, it's a little bit of frequency and a little bit of locality? Is there hope that instead of having multiple columns of information, we could choose our space wisely to trade off local and global? Or do you think the mix with two channels is already a good way to go? That's a very good question, and I don't know the answer to it. One thing that comes to my mind is the short-time Fourier transform, which is often used for music and sound processing. It kind of combines local convolutions with the Fourier transform: it can roughly be described as processing the whole signal with a sliding window and transforming each window with a Fourier transform. So that is the most obvious combination. If you had to give your intuition why the Fourier convolutions make such a big difference here — of course, we've already discussed that the Fourier transform kind of loses the locality of the signal and gains global information — but why Fourier transforms? What's good about this particular function and space that you chose? Surprisingly, if we throw the local branch away, it will still generate something meaningful, so the spectral transform doesn't lose the local correlations completely. And I think this is due to the fact that the generator has spectral transforms and spatial transforms interleaving each other, because here we can see that we have a one-by-one convolution between the two FFTs, and we have two more convolutions before and after the spectral transform, which are one-by-one as well. So they don't capture local context directly, but they can combine channels at each particular location. And maybe that can somehow replace traditional convolutions — the fact that these spatial and spectral transforms are interleaved. Yeah. And when we think about generalization to higher resolution, I think the spectral transform helps because the low-frequency part of the spectrum does not depend that strongly on the input resolution; it is almost the same no matter whether we have 256 or 2000. Yeah, that by itself is one of the cool properties of your paper, the fact that it can scale up to very high resolutions. Artifacts do appear, but not nearly as many as in other models. Looks pretty cool.
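As a hedged illustration of the spectral transform being described — an FFT, a one-by-one convolution over stacked real and imaginary channels, then the inverse FFT — a minimal PyTorch sketch might look like this (simplified: no batch norm and no local branch, unlike the actual FFC block):

```python
import torch
import torch.nn as nn

class SpectralTransform(nn.Module):
    """Minimal sketch: FFT -> 1x1 conv on stacked real/imag channels
    -> inverse FFT. The 1x1 conv operates per frequency, so every
    output pixel of the inverse FFT depends on the whole input."""
    def __init__(self, channels):
        super().__init__()
        # channels are doubled because real and imaginary parts are stacked
        self.conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")           # (b, c, h, w//2+1), complex
        spec = torch.cat([spec.real, spec.imag], dim=1)   # (b, 2c, h, w//2+1), real
        spec = self.conv(spec)                            # mix channels per frequency
        real, imag = spec.chunk(2, dim=1)
        spec = torch.complex(real, imag)
        return torch.fft.irfft2(spec, s=(h, w), norm="ortho")
```

Because the convolution mixes channels at each frequency independently, the block has a global receptive field in pixel space, which also fits the resolution-robustness point: the low-frequency bins carry roughly the same content whether the input is 256 or 2000 pixels wide.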
Yeah, it doesn't scale up perfectly, but it's better than fully convolutional architectures. Cool. So where do you think — maybe you don't want to disclose, necessarily — but what is the plan for the future? We don't know where research will take us. But the most obvious thing here is that we can try to improve the way the model generalizes to high resolutions. The second point is that we are trying to understand why it actually works, because it has lots of components. We conducted an ablation study validating whether each of these components matters, but that is just scratching the surface, and we can go more in depth there. And we are not satisfied with our loss, because it's that huge: there are many components that we need to balance. We want a better loss with just one button that makes everything work. Nice. I was almost expecting you to say: we're not happy with our loss, we want more, like more components. But I think it's pretty cool that the goal is also to make a system that's just as good but simpler. I think that will also make it much more accessible. Yeah, I think that's a good idea. Cool. Roman and Elisa — sorry, Lisa, is that correct? Yes. Okay, Lisa and Roman, thank you so much for being here. It was a pleasure. Do you have any last criticisms of the video, or shout-outs? No, thank you very much for the discussion, it was really fun. And thank you for your channel, because you do a really good job of helping others keep up with this huge wave of information that we have in the field. Thanks. Yeah, thank you. Thank you.
[ { "start": 0, "end": 5.36, "text": " Hello there, today we're looking at resolution robust large mask in painting with Fourier" }, { "start": 5.36, "end": 12.96, "text": " convolutions also called LAMA by the Samsung AI Center, Samsung Research, EPFL and the Skolkovo" }, { "start": 12.96, "end": 19.92, "text": " Institute of Science and Technology. This is a special paper review because I'm only going to" }, { "start": 19.92, "end": 26.16, "text": " introduce the paper briefly, maybe 15-20 minutes or so and then we're going to talk to the first" }, { "start": 26.16, "end": 32.96, "text": " author of the paper and go a lot more in depth. So if you like, if you like conversations with first" }, { "start": 32.96, "end": 38.480000000000004, "text": " authors and the ability for me to ask dumb questions to them, then stay tuned for that." }, { "start": 38.480000000000004, "end": 43.68, "text": " It's going to be in the second half of the video. For the first half though, I first want to" }, { "start": 43.68, "end": 49.44, "text": " demonstrate to you what this model can do. Hey there, this video is sponsored by ClearML." }, { "start": 49.44, "end": 55.04, "text": " ClearML is an ML Ops stack that is fully open source. It can do experiment tracking," }, { "start": 55.04, "end": 61.44, "text": " orchestration, deployment, model and features, stores and much more. The self-hosted tier is" }, { "start": 61.44, "end": 66.88, "text": " a first class citizen in ClearML. As I said, it's fully open source, you can look at it, you can audit" }, { "start": 66.88, "end": 71.03999999999999, "text": " it, you can extend it, you can run it on your servers. And if you ever come to the point where" }, { "start": 71.03999999999999, "end": 76.32, "text": " you need the extra features, you can totally upgrade anytime, they'll gladly take your money." }, { "start": 76.32, "end": 81.52, "text": " They have a free tier in the cloud, which gets you going pretty far. Now we talked about experiment" }, { "start": 81.52, "end": 87.67999999999999, "text": " tracking last time ClearML with two lines of code will track any experiment that you do track the" }, { "start": 87.67999999999999, "end": 93.28, "text": " metrics, the outputs, the environments, the dependencies and make everything super duper" }, { "start": 93.28, "end": 98.64, "text": " reproducible. But this time I want to talk about a second part, which is the orchestration engine." }, { "start": 98.64, "end": 103.75999999999999, "text": " So the orchestration engine is responsible for packaging up your experiments, including all" }, { "start": 103.75999999999999, "end": 109.19999999999999, "text": " dependencies, and then distributing them on your hardware. So that means you can simply submit an" }, { "start": 109.2, "end": 115.60000000000001, "text": " experiment to a particular queue and ClearML takes care of running this wherever it's needed. So this" }, { "start": 115.60000000000001, "end": 120.16, "text": " is super cool, because it means I can get going on my laptop, run a few experiments there. And as" }, { "start": 120.16, "end": 125.28, "text": " soon as I'm ready, boom, I ship it to the cloud. So here's an example, look at this experiment that" }, { "start": 125.28, "end": 130.56, "text": " has already been run, I got some output, but now I would like to do something different with it." }, { "start": 130.56, "end": 138.88, "text": " So I click here, I say clone, I give it a meaningful name, like two. And now I've cloned this experiment." 
}, { "start": 138.88, "end": 144.16, "text": " And this is kind of a draft experiment right now, it has no results yet. But what I can do, I can go" }, { "start": 144.16, "end": 149.6, "text": " into my configuration, into my hyper parameters, and I can change around the hyper parameters. So" }, { "start": 149.6, "end": 154.64, "text": " I wasn't really happy with the last experiment, I feel a bigger batch size might be needed. So" }, { "start": 154.64, "end": 161.76, "text": " from 128, let's go to 129. Now I'm pretty sure that's going to make all the difference right here. So" }, { "start": 161.76, "end": 168.4, "text": " I save and then I simply click on in queue, I submit it. And now ClearML simply takes care of" }, { "start": 168.4, "end": 173.52, "text": " running that experiment for me. As you might guess, you can have different queues, some for GPU load," }, { "start": 173.52, "end": 179.04000000000002, "text": " some for long running tasks, some high priority, as you're used to from any scheduler. This can" }, { "start": 179.04000000000002, "end": 184.48000000000002, "text": " also be used in automated fashion, meaning that you can use this for automated hyper parameter search," }, { "start": 184.48000000000002, "end": 188.96, "text": " and you can even do things such as scheduled or triggered tasks. For example, if you want to" }, { "start": 188.96, "end": 195.44, "text": " trigger a training run every day on new incoming data, that's totally doable. Now orchestration is" }, { "start": 195.44, "end": 201.44, "text": " just one part of ClearML. I've shown you experiment tracking last time. And there are many more" }, { "start": 201.44, "end": 206.07999999999998, "text": " features to their product. If this sounds interesting to you, if you're an open source fan," }, { "start": 206.07999999999998, "end": 211.2, "text": " go check them out. And thanks so much to ClearML for sponsoring this video. Let's get into it." }, { "start": 215.68, "end": 222.4, "text": " You can already see it a little bit in figure one right here, the model is able to take a picture," }, { "start": 222.4, "end": 228.88, "text": " you draw a mask on it. So this is the blue area right here. And the model would auto complete the" }, { "start": 228.88, "end": 234.4, "text": " picture. So the model doesn't see the mask, the model simply sees what is unmasked, then the model" }, { "start": 234.4, "end": 241.36, "text": " is asked to complete that missing area. As you can see, it fills that area in, you know, very," }, { "start": 241.36, "end": 247.92000000000002, "text": " very cleanly. And especially if you look right here, this irregular structure of these door holes," }, { "start": 247.92, "end": 255.2, "text": " or whatever that is, is preserved even across very large areas. This is very, very cool. This is very" }, { "start": 255.2, "end": 261.76, "text": " difficult to do with these in painting systems. In fact, there is a project website right here," }, { "start": 261.76, "end": 267.59999999999997, "text": " all the code is available. They give this in a little bit more of an animated flair. So you can" }, { "start": 267.59999999999997, "end": 275.12, "text": " really see the effect that these models are having. And it's pretty, pretty cool, especially take a" }, { "start": 275.12, "end": 281.52, "text": " look at these repeated structures that are often in the pictures. 
So these meshes or the lines right" }, { "start": 281.52, "end": 287.92, "text": " here, these tend to be extremely these tend to be especially difficult for in painting models," }, { "start": 287.92, "end": 293.68, "text": " because in painting models are usually built on convolutional neural networks, and convolutions," }, { "start": 293.68, "end": 299.44, "text": " notably take into account very local context. Whereas for these patterns, you need to take into" }, { "start": 299.44, "end": 305.76, "text": " account kind of a global context, that's exactly going to be the the message right here. There is" }, { "start": 305.76, "end": 309.76, "text": " an app, there are actually a bunch of apps based on this model. This is a third party app. So this" }, { "start": 309.76, "end": 316.15999999999997, "text": " is not by the author. But it is an app built from these models. There are also as I said, code is" }, { "start": 316.15999999999997, "end": 322.24, "text": " available. There's like a hugging face space, there is a collab by the author. But this particular app," }, { "start": 322.24, "end": 328, "text": " let's just take a picture right here. It works best on natural images, of course, but we'll just" }, { "start": 328, "end": 335.2, "text": " take the channel logo right here. And we'll say we want to erase the pie sign right here. Look how" }, { "start": 335.2, "end": 343.6, "text": " cool that works. What about the paw? Okay, that that is that is kind of disturbing. How about the" }, { "start": 343.6, "end": 352.8, "text": " nose? No, no, no, I don't like that. But it should be able to Yeah, see, so it kind of completes" }, { "start": 352.8, "end": 359.28000000000003, "text": " lines, if you cross them out. So this should complete the table, but remove the leg, you can see" }, { "start": 359.28000000000003, "end": 365.04, "text": " it's fairly robust, even to use sort of miss specifying bunch of things. So here I draw over" }, { "start": 365.04, "end": 371.68, "text": " the headline, if you saw that, and it remained the head headline remains. So I removed this part," }, { "start": 371.68, "end": 376.64, "text": " but I crossed into here a bit, you can see the line kind of remains. Now it's got a bit of hair." }, { "start": 376.64, "end": 382.47999999999996, "text": " Yes, kill it with fire. In any case, this is available for you to use if you have more sensible" }, { "start": 382.47999999999996, "end": 389.03999999999996, "text": " pictures, I'm sure that that will work a little bit better, maybe. There are also different versions" }, { "start": 389.03999999999996, "end": 394.88, "text": " of the model. So keep that in mind. And they works also on different resolutions. That's why it's" }, { "start": 394.88, "end": 401.44, "text": " called resolution robust, large mask in painting, which is also very cool. So what is the core idea" }, { "start": 401.44, "end": 407.2, "text": " of this paper, the core idea is going to be these Fourier convolutions right here. And these Fourier" }, { "start": 407.2, "end": 414.24, "text": " convolutions are going to be enabling the model to take into account global context from the very" }, { "start": 414.24, "end": 420.32, "text": " beginning. What is the problem with a convolutional neural network? The problem usually is that" }, { "start": 420.32, "end": 427.44, "text": " in a convolution, if I have a picture, a convolution on a particular point will take into account its" }, { "start": 427.44, "end": 431.76, "text": " local neighborhood, right? 
And then I sort of slide this over the image right here. And that will" }, { "start": 431.76, "end": 437.68, "text": " give me my representation in the next layer, maybe that's going to be even of the same size. So for a" }, { "start": 437.68, "end": 445.6, "text": " given point right here, I will have a receptive field of the point in the original image, plus" }, { "start": 445.6, "end": 450.72, "text": " some neighborhood. Usually we work with three by three convolutions. So all I'm going to do really" }, { "start": 450.72, "end": 456.8, "text": " is I'm going to look one pixel to the top and one pixel to the bottom, one pixel to the top," }, { "start": 456.8, "end": 463.2, "text": " one pixel to the left, and one pixel to the right. And that's about it. I'm not going to do any more" }, { "start": 463.2, "end": 469.52000000000004, "text": " looking around. So how does a convolutional neural network integrate information across the whole" }, { "start": 469.52000000000004, "end": 476.88, "text": " image? And the answer to that is by going for multiple layers. If I simply represent the picture" }, { "start": 476.88, "end": 483.12, "text": " as a set of pixels in one dimension, imagine that the one dimension here is the picture." }, { "start": 483.12, "end": 489.84000000000003, "text": " And I'm going to need a bit more space for that. So as you can see in the first layer," }, { "start": 490.72, "end": 497.04, "text": " from the first to the second layer, let's say we look at this particular point right here," }, { "start": 497.04, "end": 502.8, "text": " it's going to be have a receptive field of three. So it's going to look at these pictures, sorry," }, { "start": 502.8, "end": 510.4, "text": " at these pixels right here. In the next layer, if you can see that the same location is also having" }, { "start": 510.4, "end": 518.16, "text": " a receptive field of three right here. However, since for example, this particular pixel right" }, { "start": 518.16, "end": 525.92, "text": " here also had a receptive field of three, and this particular one also, as you can see, and from layer" }, { "start": 525.92, "end": 532.56, "text": " two on the total receptive field of that so that all the information inflow is going to be from a" }, { "start": 532.56, "end": 539.92, "text": " receptive field of five. Therefore, the more layers we have, the more of information, the more spatial" }, { "start": 539.92, "end": 546.88, "text": " information can be included for a given particular location in the output. But as I said, that takes" }, { "start": 546.88, "end": 554.16, "text": " a lot of layers that takes depth. And especially for these in painting applications, what you want" }, { "start": 554.16, "end": 561.04, "text": " is kind of global information. These masks right here, like these masks, they're pretty big for an" }, { "start": 561.04, "end": 569.28, "text": " in painting application. So they're pretty, pretty wide. And if you can imagine a convolutional" }, { "start": 569.28, "end": 574.56, "text": " layer that looks at a three by three pixel neighborhood, that might be something right here." 
}, { "start": 575.36, "end": 581.36, "text": " You know, so you're going to have a whole lot of convolutional kernels that just see the masked" }, { "start": 581.36, "end": 587.12, "text": " pixels, they see nothing of the outside, they simply see a bunch of masked pixels for a whole" }, { "start": 587.12, "end": 593.52, "text": " bunch of layers, right layer two, layer three, until like layer four, there's like there's nothing," }, { "start": 593.52, "end": 600.48, "text": " no information at all at this position about the outside world about the world beyond the mask." }, { "start": 600.48, "end": 606.16, "text": " And even then, it's only like this area, we need to go many more layers before we get access to" }, { "start": 606.16, "end": 613.12, "text": " information that is way outside of here. And at that point, it may already be too late. So the" }, { "start": 613.12, "end": 619.04, "text": " Fourier convolutions, they solve that they have the ability at every single layer to look at a" }, { "start": 619.04, "end": 627.1999999999999, "text": " global context. And how are they doing this? It's not super expensive. In fact, they're doing this" }, { "start": 627.1999999999999, "end": 634.48, "text": " by using of course, Fourier transformations, a Fourier transformation will map a signal to its" }, { "start": 634.48, "end": 640.0799999999999, "text": " corresponding frequency domain signal, it is essentially a different way of representing" }, { "start": 640.0799999999999, "end": 646.4, "text": " a signal. So if you have a signal, let's say you have like a pure sine wave, you do a Fourier" }, { "start": 646.4, "end": 651.92, "text": " transformation of that entire thing, you can represent that as the components in the Fourier" }, { "start": 651.92, "end": 657.84, "text": " spectrum. And that would simply have like one component at the particular frequency at which" }, { "start": 657.84, "end": 662.56, "text": " this sine wave at which this sine wave is operating. That's the that's not the frequency," }, { "start": 662.56, "end": 668.16, "text": " that's like one over the frequency right here. But in a more in a more general sense, a Fourier" }, { "start": 668.16, "end": 677.76, "text": " transform will decouple the spatial resolution and give it a transform it into frequency resolution." }, { "start": 677.76, "end": 682.9599999999999, "text": " So if you have a Fourier spectrum, maybe you have a very complicated signal right here," }, { "start": 685.52, "end": 690.3199999999999, "text": " a complicated signal that will give you also a complicated Fourier spectrum, like you have a lot" }, { "start": 690.3199999999999, "end": 695.68, "text": " of this, you have like negative this frequency, a lot of this frequency, not too much of this" }, { "start": 695.68, "end": 703.28, "text": " frequency, and so on. If you do a convolution in this domain, you simply convolve across neighbors" }, { "start": 703.28, "end": 709.8399999999999, "text": " of the signal. However, if you do a convolution in Fourier domain, you can see you convolve across" }, { "start": 709.8399999999999, "end": 716.4799999999999, "text": " frequencies, you can evolve across neighboring frequencies, which means that these three things" }, { "start": 717.12, "end": 725.04, "text": " represent three particular sine waves frequencies, maybe the lowest one is like a super long sine" }, { "start": 725.04, "end": 730.64, "text": " wave, the second one is like a bit of a faster sine wave, the third one is even faster sine wave." 
}, { "start": 730.64, "end": 736, "text": " But what is important is that every single component in the Fourier spectrum represents" }, { "start": 736, "end": 743.36, "text": " information about the entirety of the signal. And that is exactly what we want. Whereas the" }, { "start": 743.36, "end": 751.76, "text": " regular convolution is localized in in pixel space, the Fourier convolution is going to be localized" }, { "start": 751.76, "end": 759.4399999999999, "text": " in frequency space, but global in pixel space. That is very cool. And of course, Fourier transforms" }, { "start": 759.4399999999999, "end": 766.16, "text": " are also one of the things that are extremely fast. It's essentially a linear algebra operation. And" }, { "start": 766.16, "end": 772.16, "text": " there are very fast implementations of discrete Fourier transforms called fast Fourier transforms." }, { "start": 772.16, "end": 778.64, "text": " That's exactly what they do right here. The whole architecture is going to look like this. There is" }, { "start": 778.64, "end": 784.4, "text": " going to be the image the input image x there's going to be a mask during training that is produced" }, { "start": 784.4, "end": 792.3199999999999, "text": " by a mask generation algorithm x is then masked out and the model is tasked to predict the missing" }, { "start": 792.3199999999999, "end": 798.16, "text": " pixels that are hidden by the mask. As I said, the model has no access to what's below the mask," }, { "start": 798.16, "end": 804.64, "text": " I guess that will that would be kind of pointless, right? Yeah. So what we do first, but also this is" }, { "start": 804.64, "end": 811.6, "text": " a fully convolutional architecture that makes it able to essentially transfer to different resolutions," }, { "start": 811.6, "end": 817.1999999999999, "text": " which is another advantage here being a fully convolutional. So what we do is first we downscale" }, { "start": 817.1999999999999, "end": 824.72, "text": " a little bit as far as I can tell these images are something like 256 by 256 during training," }, { "start": 824.72, "end": 832.3199999999999, "text": " or it works on crops of 256 by 256, somewhere in that range. But the cool thing is it can generalize" }, { "start": 832.32, "end": 840.1600000000001, "text": " to high definition images like 1920 by 1080 or something like this, the same network. So the" }, { "start": 840.1600000000001, "end": 845.6800000000001, "text": " train the network that's trained on this low, low, quote unquote, low resolution can generate can" }, { "start": 845.6800000000001, "end": 852.08, "text": " generalize to very, very high resolution, and it won't lose performance. But we'll see that in the" }, { "start": 852.08, "end": 858.6400000000001, "text": " experiments. So first there's down sampling, and then the model is just this is just nine layers." }, { "start": 858.64, "end": 866.48, "text": " They also have a variant with 18 layers. But the base model is nine layers of this fast Fourier" }, { "start": 866.48, "end": 873.04, "text": " convolution residual block. As you can see, it has a residual connection right here, like a normal" }, { "start": 873.04, "end": 879.6, "text": " resnet, whereas a normal resnet would have two convolution layers right here, we opt for these" }, { "start": 879.6, "end": 887.2, "text": " fast Fourier convolutional layers. 
Now, they look a bit complicated, but essentially, what we do is" }, { "start": 887.2, "end": 894.72, "text": " we carry two different signals across the entire network, one signal contains local localized" }, { "start": 894.72, "end": 901.44, "text": " information. So one signal is going to operate in the original domain of pixel space and has all" }, { "start": 901.44, "end": 908.32, "text": " that those properties, so it looks at its neighbors and so on. And one signal is going to operate in" }, { "start": 908.32, "end": 913.76, "text": " in more of the global domain. And then in each layer, those two strands of information get to" }, { "start": 913.76, "end": 919.28, "text": " exchange information with each other. So the whole signal is represented as this block here with the" }, { "start": 919.28, "end": 924.64, "text": " two components. But it's essentially just we have like two strands of signal, and then every now and" }, { "start": 924.64, "end": 930.3199999999999, "text": " then they get to exchange a bit of information, right, one is the local, the local branch, and one" }, { "start": 930.3199999999999, "end": 937.2, "text": " is the global branch of information. So what do we do with the local branch, we have different" }, { "start": 937.2, "end": 942.96, "text": " operations right here. So we have a little conv layer that is in pixel space, actually, we have two" }, { "start": 942.96, "end": 949.9200000000001, "text": " of them, right, two conv layers. So we pass this the local signal, this is really just if you just" }, { "start": 949.9200000000001, "end": 957.36, "text": " consider this path right here through this one, then ignore ignore this here. If you just go here," }, { "start": 957.84, "end": 965.12, "text": " this is just like a normal conv net, right, this path here gets information from this side here." }, { "start": 966.64, "end": 972.32, "text": " It receives it and then there is an addition. So what is that, that is simply this global signal" }, { "start": 972.32, "end": 979.6800000000001, "text": " the global signal, also doing a localized convolution in pixel space. So far, there is nothing" }, { "start": 979.6800000000001, "end": 984.5600000000001, "text": " special if we were to just do this, this would be it would be pointless to have two strands of" }, { "start": 984.5600000000001, "end": 990.6400000000001, "text": " information, right. But the important thing is that the global strand comes to be in a very special" }, { "start": 990.6400000000001, "end": 996.72, "text": " way. So for that, we have to look what information arrives at the global branch right here, because" }, { "start": 996.72, "end": 1001.6800000000001, "text": " that's the information that's going to be passed in here for the next layer. For that, we see" }, { "start": 1001.68, "end": 1006.9599999999999, "text": " from the local branch, there's a three by three convolution going out over here. So let me draw" }, { "start": 1006.9599999999999, "end": 1013.92, "text": " that in greenish over here. And that is going to be mixed with this global strand of information." }, { "start": 1013.92, "end": 1019.4399999999999, "text": " And the global strand of information is going through this spectral transform block. The" }, { "start": 1019.4399999999999, "end": 1024.8, "text": " spectral transform block is essentially pretty easy. There is a there's a batch norm, sorry," }, { "start": 1024.8, "end": 1030.8, "text": " a convolution batch norm relu block. This is a one by one convolution. 
This is simply" }, { "start": 1030.8, "end": 1036.08, "text": " simply simply a linear operator pixel wise, essentially, there's a batch norm, there's a" }, { "start": 1036.08, "end": 1043.68, "text": " relu for the nonlinearity. And then what we do is we do a fast Fourier transform in 2d. And at the" }, { "start": 1043.68, "end": 1050.32, "text": " end of the block, we're going to invert that. So fast Fourier transform to operate in Fourier space," }, { "start": 1050.32, "end": 1056.1599999999999, "text": " and then invert the fast Fourier transform at the end. And inside of it, we're going to do a" }, { "start": 1056.16, "end": 1061.2, "text": " convolution batch norm relu block right here. So the convolution again, that's a one by one" }, { "start": 1061.2, "end": 1067.1200000000001, "text": " convolution, I believe, followed by batch and followed by relu. So actually even forget what I" }, { "start": 1067.1200000000001, "end": 1073.68, "text": " said about localized convolutions right here, if they just do one by one convolutions, they really" }, { "start": 1073.68, "end": 1081.68, "text": " operate just on the individual elements of the spectrum by itself, not even, they don't even" }, { "start": 1081.68, "end": 1087.52, "text": " they don't even consider localized, sorry, neighborhoods of frequencies, they just operate" }, { "start": 1087.52, "end": 1094.64, "text": " on the individual frequencies, one by one, which is is an option, like one by one convolutions are" }, { "start": 1094.64, "end": 1101.76, "text": " are a thing. So, you know, pretty cool. This by itself also has residual connection right here," }, { "start": 1101.76, "end": 1108.16, "text": " I'm going to guess to make signal flow better or more more stable or some something like this," }, { "start": 1108.16, "end": 1115.44, "text": " the observant people might object and say, hey, this thing right here actually outputs complex" }, { "start": 1115.44, "end": 1122.88, "text": " numbers. So this is in the space of complex numbers. So you'll get vectors with entries like a plus," }, { "start": 1123.6000000000001, "end": 1130.5600000000002, "text": " plus IB. But what we do is simply we take those and we stack them. So we just make like vectors" }, { "start": 1130.5600000000002, "end": 1135.76, "text": " out of them, a and b. So if there is a bunch of numbers, it will just be like a one b one," }, { "start": 1135.76, "end": 1144.16, "text": " a one b one, b one, a two, b two, and so on. And we just consider this to be a real vector" }, { "start": 1144.16, "end": 1152.08, "text": " of double dimensionality, or a real 2d signal of double the dimensionality as before. And that" }, { "start": 1152.8, "end": 1160.16, "text": " is how we do it. I mean, it's not it's not entirely correct, right. But the model in this way has" }, { "start": 1160.16, "end": 1167.1200000000001, "text": " access to all the relevant information, it can do what it wants with it. Yeah, it can it can learn" }, { "start": 1167.1200000000001, "end": 1174.8000000000002, "text": " that half of the dimensions correspond to two phases, or, or whatever, whatever the complex part" }, { "start": 1174.8000000000002, "end": 1183.1200000000001, "text": " of this is, it's been a while since since been a while since Fourier transforms. 
Okay, so these are" }, { "start": 1183.12, "end": 1191.6799999999998, "text": " the exactly so here, that's done, we have, sorry, go back up here to start it, there is first the" }, { "start": 1191.6799999999998, "end": 1198.6399999999999, "text": " real FFT, as you can see, that gets you to complex space, then there is complex to real, in which we" }, { "start": 1198.6399999999999, "end": 1206.9599999999998, "text": " transform the c channels into two c channels. But now we're in the real numbers. Then there is this" }, { "start": 1206.96, "end": 1214.64, "text": " value batch norm conv, which retains the signal. And there is real to complex where we go back into" }, { "start": 1214.64, "end": 1222.96, "text": " complex space. So from reals, 2d to c channels into complex, just c channels, and then we reverse the" }, { "start": 1222.96, "end": 1231.1200000000001, "text": " Fourier transform. And that is a Fourier convolution, as they define it. If we integrate," }, { "start": 1231.12, "end": 1238.32, "text": " no, that is the spectral transform block right here, the Fourier transfer, the Fourier convolution" }, { "start": 1238.32, "end": 1244.8, "text": " is this entire construct right here, as you can see, the spectral transform information then flows" }, { "start": 1244.8, "end": 1251.6, "text": " in here is combined with some local information that really should be green. And that then goes" }, { "start": 1251.6, "end": 1259.04, "text": " into this global output and obviously will become the global input to the next layer. So that is how" }, { "start": 1259.04, "end": 1266.1599999999999, "text": " they fuse localized information with global information in every single layer. And that turns" }, { "start": 1266.1599999999999, "end": 1272.1599999999999, "text": " out to be pretty, pretty powerful. They do have other improvements right here. And it's it's crazy" }, { "start": 1272.1599999999999, "end": 1278.72, "text": " to see that just how much engineering and how many tricks go into these models to really get them to" }, { "start": 1278.72, "end": 1286.96, "text": " work. So they also stress that loss function is a really, really important topic right here, because" }, { "start": 1286.96, "end": 1293.1200000000001, "text": " you can't simply reconstruct the original image right here, if you simply tell the model to" }, { "start": 1293.1200000000001, "end": 1300.4, "text": " reconstruct the original image from here, it's going to be bad because if your mask is pretty big," }, { "start": 1300.4, "end": 1307.8400000000001, "text": " pretty wide, there can be many possible fillings of the mask that makes sense. And since there are" }, { "start": 1307.8400000000001, "end": 1314.16, "text": " many possible ones, if you don't account, if you don't reward the model for getting one of the" }, { "start": 1314.16, "end": 1319.76, "text": " possible ones without punishing it that it didn't get all the other ones, the model is going to be" }, { "start": 1319.76, "end": 1325.0400000000002, "text": " very confused and is simply going to output the average of all the possible ones, which we don't" }, { "start": 1325.0400000000002, "end": 1332.8000000000002, "text": " want we want one of the possible ones. So what we do is we apply a perceptive loss, they call that a" }, { "start": 1332.8000000000002, "end": 1339.6000000000001, "text": " perceptive loss. 
And they explain that over here, what you do is you feed the image, the original" }, { "start": 1339.6, "end": 1347.36, "text": " image, the original image, this is the real one, and the fake one, and you can already see there's" }, { "start": 1347.36, "end": 1354.9599999999998, "text": " going to be like a discriminator later, right. But you feed them both through a pre trained neural" }, { "start": 1354.9599999999998, "end": 1362.7199999999998, "text": " network. And then you compare at intermediate points, or even like at the last latent layer," }, { "start": 1362.72, "end": 1370.16, "text": " you compare the two feature maps. So depending on how this network is trained, if that outputs very" }, { "start": 1370.16, "end": 1377.28, "text": " perceptually salient features, you'll get like a nice loss that doesn't punish you for getting any" }, { "start": 1377.28, "end": 1383.28, "text": " pixels wrong. But that encourages you to get something that is perceptually similar to what" }, { "start": 1383.28, "end": 1388.32, "text": " was there in the original image. They also stress that it's really important on how you train this" }, { "start": 1388.32, "end": 1396.1599999999999, "text": " network right here. They suggest to make this network also include global context using either" }, { "start": 1396.1599999999999, "end": 1402, "text": " also Fourier convolutions or dilated convolutions. And here you can see that's essentially the" }, { "start": 1402, "end": 1407.04, "text": " formula that means we take the features from the original image and the features from the fake image," }, { "start": 1407.04, "end": 1412.8, "text": " and we calculate their distance. And that's going to be the high receptive field perceptual loss." }, { "start": 1412.8, "end": 1418.08, "text": " This is not the only thing they do. They also have, as you can see, an adversarial loss." }, { "start": 1418.08, "end": 1425.28, "text": " There is also a regularizer on the gradients. So yeah, the final loss you're going to end up with" }, { "start": 1425.28, "end": 1432.32, "text": " is like a mix of all of these different losses. There's also a discriminator based perceptual loss." }, { "start": 1432.32, "end": 1440.32, "text": " And this part right here is by itself, again, a conjunction of two losses. So rest assured," }, { "start": 1440.32, "end": 1448.24, "text": " the loss architecture right here is very, very intricate. And I'm going to guess it's taken a lot" }, { "start": 1448.24, "end": 1455.2, "text": " of experimentation, not only by this paper, but by the whole field here to really come up with nice" }, { "start": 1455.2, "end": 1459.9199999999998, "text": " losses that make your outputs nice. Obviously, there's going to be a bunch of hyper parameters" }, { "start": 1459.9199999999998, "end": 1468, "text": " here to tune, which is always fun, but they seem to have done a pretty good job. The last thing" }, { "start": 1468, "end": 1474.4, "text": " they stress, which is important is how you generate masks during training. So during training," }, { "start": 1474.4, "end": 1479.36, "text": " you can't just, you know, take your finger and draw on pictures. Like I did, you have to have" }, { "start": 1479.36, "end": 1486.72, "text": " some heuristic way of generating masks. And I'm not going to go into the detail of how they do it." }, { "start": 1486.72, "end": 1493.76, "text": " You can see here compared to this is one of the one of the baselines. 
And this is one of their" }, { "start": 1493.76, "end": 1503.52, "text": " heuristics. They have a mix of these large masks and the box masks. So sorry, both are large," }, { "start": 1503.52, "end": 1509.92, "text": " but one is called wide masks, which are kind of polygons that they round off the corners," }, { "start": 1509.92, "end": 1516.96, "text": " I think, and box masks, which are sort of heuristically generated boxes right here," }, { "start": 1516.96, "end": 1524.72, "text": " or stacks of these boxes. And that's, and they mix those two together in order to get the final" }, { "start": 1524.72, "end": 1529.8400000000001, "text": " masking for their images. You can see these are fairly large, like this one here covers more than" }, { "start": 1529.8400000000001, "end": 1536.56, "text": " more than half the image. So these are challenging, challenging tasks. But it is through training with" }, { "start": 1536.56, "end": 1543.04, "text": " such large masks that you get the models to really learn to fill in it consistently. So what you can" }, { "start": 1543.04, "end": 1549.2, "text": " see is that in their results, and we're not going to go into all the tape, like they have a lot of" }, { "start": 1549.2, "end": 1554.8799999999999, "text": " tables, a lot of ablations, but red essentially means that it's worse than their model, you can" }, { "start": 1554.8799999999999, "end": 1561.2, "text": " see almost all of the table is red, except some models in some of the benchmarks, for example," }, { "start": 1561.2, "end": 1567.36, "text": " in the narrow masks, you will find situations where other models might outperform their model." }, { "start": 1567.36, "end": 1575.1999999999998, "text": " But as soon as you go to like wide masks, it is no longer, it's no longer really a competition at all." }, { "start": 1576.4799999999998, "end": 1582, "text": " Yeah, so their model seems to be really good. Those white masks, they do a lot of ablations" }, { "start": 1582, "end": 1586.8, "text": " where they switch out different, for example, different convolutions right here, they show what" }, { "start": 1586.8, "end": 1592.8, "text": " if we switch the Fourier by a dilated convolution, which is also a way to increase the receptive" }, { "start": 1592.8, "end": 1598.96, "text": " field rapidly or by regular convolution. And again, while there might be some improvement," }, { "start": 1598.96, "end": 1605.2, "text": " sometime on narrow masks, as soon as you go to wide masks, the other models degrade pretty" }, { "start": 1605.2, "end": 1611.04, "text": " quickly, the dilated convolution actually holds up fairly well right here. But one disadvantage of" }, { "start": 1611.04, "end": 1617.36, "text": " that is that it's very hard to go to higher resolutions, because the higher resolution you go," }, { "start": 1617.36, "end": 1622.08, "text": " the dilated convolutions that their receptive fields will also shrink, while the Fourier" }, { "start": 1622.08, "end": 1629.28, "text": " convolutions receptive fields will always remain essentially global. So here you have some comparison" }, { "start": 1629.28, "end": 1634.3999999999999, "text": " to baselines, you can see of course, they chose these pictures well with kind of the regular" }, { "start": 1634.3999999999999, "end": 1639.6799999999998, "text": " structure in the background. But check this out, like this is even this is even their model. 
But" }, { "start": 1639.6799999999998, "end": 1645.28, "text": " with regular convolutions, and even if they go deeper, doesn't really help. But like this," }, { "start": 1645.28, "end": 1650.8799999999999, "text": " this is just insane, right? I get it, they pick this picture, but it is like is really good." }, { "start": 1650.88, "end": 1655.92, "text": " And you can also see this building how it's completed over here with different methods," }, { "start": 1655.92, "end": 1661.6000000000001, "text": " and then with their method. And the mask was, you know, fairly, fairly big, as you can see," }, { "start": 1662.16, "end": 1668.64, "text": " also the bottom this the mask is huge. Yeah, here they show what happens if you go to higher" }, { "start": 1668.64, "end": 1674.72, "text": " resolution. So on this rather simpler problem, you can see that a lot of the models do well in" }, { "start": 1674.72, "end": 1682.72, "text": " the top row, if you just have the kind of a lower resolution. But if you go to really high resolution," }, { "start": 1683.28, "end": 1691.44, "text": " a lot of the models struggle while the llama model here still does a big, a good job in their larger" }, { "start": 1691.44, "end": 1701.04, "text": " model seems to be even better. Yeah, again, lots of ablations, but I'm going to stop right here," }, { "start": 1701.04, "end": 1707.84, "text": " and we'll go over to chatting with the first author about this. So I'll see you in a bit." }, { "start": 1707.84, "end": 1715.36, "text": " Hello, everyone. I'm here with Roman Suvorov and Elizaveta Logacheva, the authors of the llama" }, { "start": 1715.36, "end": 1722.32, "text": " paper and llama system as well, I guess I think this is as much a paper as it is an engineering" }, { "start": 1722.32, "end": 1729.76, "text": " effort. And just because looking at the paper, it already dawns on just how many things are important" }, { "start": 1729.76, "end": 1736.72, "text": " in this system. And then trying this out myself, it really works like it's snappy, it's really cool." }, { "start": 1736.72, "end": 1742.96, "text": " And the results are pretty great, I have to say for a for a learned system. So first, like welcome" }, { "start": 1742.96, "end": 1752.08, "text": " both of you and big props on big props on the system is very cool. So you've seen you've seen" }, { "start": 1752.08, "end": 1760.6399999999999, "text": " my video, what did strike you? What were they get it wrong? Yeah, first of all, I think that you did" }, { "start": 1760.6399999999999, "end": 1771.36, "text": " a great job in describing the overall paper. And I have almost no, you know, I have almost nothing to" }, { "start": 1772, "end": 1778.8799999999999, "text": " no complaints. Yeah, no complaints regarding that. And maybe one point regarding the overall" }, { "start": 1778.88, "end": 1787.0400000000002, "text": " the overall point of the paper. And yeah, as it's seen from the title, Fourier convolution might be" }, { "start": 1787.0400000000002, "end": 1794.24, "text": " stand out a little bit more than other components. But the actually the paper is about that all three" }, { "start": 1794.24, "end": 1800.96, "text": " components like, like we generate data and how we process images with a neural network and how we" }, { "start": 1800.96, "end": 1808.32, "text": " optimize this, how what losses do we choose, all these three components are important. 
And yes," }, { "start": 1808.32, "end": 1817.76, "text": " sometimes they can be relatively easily tuned from existing methods and allow to such easy tuning can" }, { "start": 1817.76, "end": 1826.32, "text": " help to significantly improve the results. So that's that's was the overall point of the paper." }, { "start": 1826.32, "end": 1833.52, "text": " Yeah, I had this I had the feeling to you again and again stress that a lot of these things are" }, { "start": 1833.52, "end": 1839.4399999999998, "text": " important, especially the three main components. And you did a lot of ablations to also show that" }, { "start": 1839.4399999999998, "end": 1844.8, "text": " all of these are important. That's why I find it so impressive, right? Because people usually just" }, { "start": 1844.8, "end": 1850.6399999999999, "text": " put which one did you start with first? Did you first have the idea of the Fourier convolutions?" }, { "start": 1850.64, "end": 1856.96, "text": " Is was that the motivation? No, initially we started when we when we started overall" }, { "start": 1856.96, "end": 1864.72, "text": " project on the inpainting, we just started with a classic peaks to peaks. So just get clone and" }, { "start": 1864.72, "end": 1872.48, "text": " pick predict an existing code base from piece to piece. And then we tried to step iteratively" }, { "start": 1872.48, "end": 1882.08, "text": " identify the most weak points and try to understand what is the reason behind that weakness. And at" }, { "start": 1882.08, "end": 1888.96, "text": " some stage, we understood that most architectures we tried really lots of different architectures," }, { "start": 1888.96, "end": 1897.52, "text": " and we tried existing blocks from other inpainting papers. And we found that almost none of them can" }, { "start": 1897.52, "end": 1906.24, "text": " handle repetitive patterns. Well, and yes, we started it. When we think about repetitions," }, { "start": 1906.24, "end": 1912.56, "text": " the one of the most obvious thing that came in mind is Fourier transform, because it is very" }, { "start": 1912.56, "end": 1923.04, "text": " natural thing to handle periodic signals. And first we started composing a layer on our own." }, { "start": 1923.04, "end": 1930.08, "text": " And then we just googled and found that FFC, which was proposed for recognition tasks. And we" }, { "start": 1930.8, "end": 1936.96, "text": " thought that it is a great thing to start with and took it and modified it and tuned for" }, { "start": 1937.76, "end": 1944, "text": " that particular task. And yeah, it worked pretty well. So these would be the the Fourier" }, { "start": 1944, "end": 1950, "text": " convolutions. Was it already in the form that we see in the paper with the two strands of information" }, { "start": 1950, "end": 1958.64, "text": " like the global and the local? Or did you have to shake things up? No, the right part of this" }, { "start": 1958.64, "end": 1965.68, "text": " picture reflects the original form of this fast Fourier convolution as it was proposed by the" }, { "start": 1965.68, "end": 1974.56, "text": " authors. Cool. And did it work out of the box? Yes. But when we tuned that for inpainting, we" }, { "start": 1974.56, "end": 1980.8799999999999, "text": " figured out that the local branch is not really important. And we can handle almost everything" }, { "start": 1980.8799999999999, "end": 1986.96, "text": " with just global branch with that spectral transform. Yeah. 
So but you still kept the" }, { "start": 1986.96, "end": 1994.32, "text": " local branch in? Yeah, because it helps for stability, especially in not such large images" }, { "start": 1994.32, "end": 2002.24, "text": " and large masks. So if we try to push the generalization to high resolution to extreme," }, { "start": 2002.24, "end": 2008.32, "text": " and to train on very low resolutions and then infer in very high resolutions, then" }, { "start": 2009.6, "end": 2016.96, "text": " using only global branch will pay more. But in the real world, some combinations," }, { "start": 2016.96, "end": 2023.1200000000001, "text": " some combination of these two is more practical. Yeah. So this is it's something I found" }, { "start": 2023.1200000000001, "end": 2029.04, "text": " interesting because you have this point of these large, large masks, or very wide masks and so on." }, { "start": 2029.04, "end": 2035.36, "text": " And you stress the importance of your algorithm that produces these different masks. Now when I" }, { "start": 2035.36, "end": 2040.72, "text": " look at these pictures, it doesn't seem that different, right? If I look at the top row," }, { "start": 2040.72, "end": 2045.84, "text": " you know, there's also like some parts of the picture are also occluded relatively big parts," }, { "start": 2045.84, "end": 2051.6, "text": " there are kind of some squiggles, they're even relatively wide, right? Why do you have an" }, { "start": 2051.6, "end": 2060.56, "text": " intuition? Why is the mask generation algorithm so important? Is it important that it's close to what" }, { "start": 2060.56, "end": 2066.72, "text": " humans do later? Or is it important that it is of a certain shape because of the architecture of the" }, { "start": 2066.72, "end": 2073.8399999999997, "text": " network? Or what's the deal with that? Yeah, as with the architecture, we started with an" }, { "start": 2073.84, "end": 2082.08, "text": " existing heuristic to draw that masks. And we actually we follow the same algorithm as the" }, { "start": 2082.08, "end": 2091.84, "text": " one used in Deep Field version two, the first row in that figure. Why masks should be wide? Yeah," }, { "start": 2091.84, "end": 2101.44, "text": " because it is important because the width of masks forces the generator to pass the information more" }, { "start": 2101.44, "end": 2110.56, "text": " far within itself. So if we can cover almost all input image with very thin lines, for example," }, { "start": 2110.56, "end": 2118.64, "text": " we can mask out every second row and every second column in the input image. And that would be very" }, { "start": 2118.64, "end": 2124.56, "text": " something very similar to a super resolution problem. And the percent of the image will be covered" }, { "start": 2124.56, "end": 2133.6, "text": " by such masks. But the network wouldn't need to pass information far. Yeah, that's why masks are" }, { "start": 2133.6, "end": 2139.36, "text": " important. And they are more important for fully convolutional architectures, but for a Fourier" }, { "start": 2139.36, "end": 2149.2799999999997, "text": " based they always help as well. And we have a couple of histograms in our supplementary material," }, { "start": 2149.28, "end": 2157.76, "text": " which compare actually the first row of that figure with the mask generated by our algorithm." }, { "start": 2157.76, "end": 2164.1600000000003, "text": " And the difference is pretty huge, actually. It is cool to see that the difference is so big." 
}, { "start": 2164.88, "end": 2173.6800000000003, "text": " I think that it was mask that it was point from which we started, actually, because we" }, { "start": 2173.68, "end": 2183.2, "text": " aimed to inpaint real world examples. And in that examples, masks actually are huge." }, { "start": 2183.2, "end": 2193.3599999999997, "text": " So we started with big masks in our validation set. And we saw that all other algorithms have" }, { "start": 2193.36, "end": 2206.08, "text": " fails to fill these large holes. And then we started to think on how we need to change our" }, { "start": 2206.08, "end": 2218.48, "text": " model that it can incorporate global information. Yeah. Is your algorithm deterministic? Yeah." }, { "start": 2218.48, "end": 2225.84, "text": " If I give it the same input and the same mask. And is this correct that the clean up dot pictures" }, { "start": 2225.84, "end": 2232.64, "text": " app that is really your small model that runs here? No, this is the large model. Oh, this is" }, { "start": 2232.64, "end": 2240.56, "text": " the big model already. Okay. So here, I've taken this. But what happens? Have you ever tried just" }, { "start": 2240.56, "end": 2245.44, "text": " masking the whole picture? What's kind of like the default output? That's an interesting..." }, { "start": 2245.44, "end": 2256, "text": " I don't know what will happen. I think something average, a constant color maybe." }, { "start": 2260.56, "end": 2268, "text": " Let's see. Yeah. All right. Pretty unspectacular. But I guess it's very gray is very high" }, { "start": 2268, "end": 2277.84, "text": " probability, right? Okay. Cool. And then there's the third component is the loss. And I have to" }, { "start": 2277.84, "end": 2285.12, "text": " say the loss is a monstrosity. There are like 50. So first of all, you have... No, this is the" }, { "start": 2285.12, "end": 2294.08, "text": " adversarial part of the loss. And then on top of that, you have like the discriminator perceptive" }, { "start": 2294.08, "end": 2299.68, "text": " loss. I'm going to guess that's the same as the perceptual loss, but in the features of the" }, { "start": 2299.68, "end": 2306.72, "text": " discriminator. Yeah. So the features which are used to calculate discriminator based perceptual" }, { "start": 2306.72, "end": 2318, "text": " loss are updated throughout the training. This is a pretty commonly used loss in image to image" }, { "start": 2318, "end": 2326.8, "text": " tasks. It helps to stabilize training. So the idea is that the discriminator bases its decisions" }, { "start": 2326.8, "end": 2333.76, "text": " on features which are perceptually meaningful. So very similar to the perceptive loss that you have" }, { "start": 2334.96, "end": 2343.84, "text": " up here, right? I think that feature matching or discriminator based perceptual loss helps mostly" }, { "start": 2343.84, "end": 2353.28, "text": " because it provides a clear signal to the generator. And if in adversarial training," }, { "start": 2353.28, "end": 2362.6400000000003, "text": " we have to balance discriminator and generator. And if one part is more powerful, the whole thing" }, { "start": 2362.6400000000003, "end": 2372.2400000000002, "text": " collapses. And discriminator based perceptual loss helps the generator to catch up when discriminator" }, { "start": 2372.24, "end": 2379.04, "text": " becomes too powerful. Yeah, that makes sense. For all of these losses, right? 
And then you have a" }, { "start": 2379.04, "end": 2387.68, "text": " regularizer on the gradients, and you have this high receptive field perceptual loss and so on. Did you" }, { "start": 2388.16, "end": 2393.8399999999997, "text": " plan this from the beginning? Did you say, you know, here are all the good losses that I know of?" }, { "start": 2393.8399999999997, "end": 2401.4399999999996, "text": " Or do you have more losses that you ended up not including? My question is: how," }, { "start": 2401.44, "end": 2407.28, "text": " if I'm a researcher or an engineer trying to come up with such a system, how do I decide" }, { "start": 2407.92, "end": 2416, "text": " which seven losses go into my final loss, right, out of the 50 possible losses that I could use? Do I" }, { "start": 2416, "end": 2423.84, "text": " try them all? Or are there some guidelines? Actually, I think all of these losses, except for" }, { "start": 2423.84, "end": 2434.1600000000003, "text": " the high receptive field perceptual loss, are pretty common, and they are all often used in image-to-image tasks." }, { "start": 2435.1200000000003, "end": 2444, "text": " We need something to force our model to create a realistic picture, so we need a discriminator" }, { "start": 2444.6400000000003, "end": 2453.6000000000004, "text": " and its loss. We need to reconstruct what can be reconstructed, so we need some" }, { "start": 2453.6, "end": 2462.08, "text": " loss for reconstruction, and additional losses to restrict it. So we need something that works on" }, { "start": 2462.08, "end": 2471.92, "text": " features. But we worked a lot on this: we did a hyperparameter search, of course, and we worked" }, { "start": 2471.92, "end": 2481.44, "text": " on the form of our perceptual loss, because we started with the common perceptual loss based on" }, { "start": 2481.44, "end": 2490.96, "text": " the VGG model. But we had a feeling that it might not be perfect, because the models" }, { "start": 2491.68, "end": 2502.32, "text": " that were trained on classification tasks" }, { "start": 2502.32, "end": 2510.4, "text": " seem to concentrate on texture and not global structure. So we decided to try something else." }, { "start": 2511.6800000000003, "end": 2520.2400000000002, "text": " And then we found these models trained on segmentation tasks, on a dataset that is more" }, { "start": 2520.2400000000002, "end": 2525.84, "text": " similar to our dataset, and we tried it, and it worked." }, { "start": 2525.84, "end": 2532.6400000000003, "text": " So the segmentation task, as a training task for the perceptual loss model, is sort of a better" }, { "start": 2532.6400000000003, "end": 2539.44, "text": " preconditioner than the classification task? Yeah, because it is natural for the segmentation" }, { "start": 2539.44, "end": 2547.6800000000003, "text": " model to focus more on boundaries of objects instead of their textures. And in the case of inpainting," }, { "start": 2548.2400000000002, "end": 2552.08, "text": " good texture can be learned using only the discriminator," }, { "start": 2552.08, "end": 2557.92, "text": " because there is a lot of freedom in how we can generate fine-grained textures, and there is no need" }, { "start": 2557.92, "end": 2569.04, "text": " to put any supervision on that part. But it's also important that the models used for" }, { "start": 2569.04, "end": 2579.36, "text": " segmentation are different.
So in our ablation, we compared with the same model" }, { "start": 2579.36, "end": 2587.52, "text": " that was trained on classification, and it works better with the segmentation-trained one." }, { "start": 2587.52, "end": 2592.96, "text": " Yeah, not only do you have a different task with the segmentation, you also include" }, { "start": 2592.96, "end": 2600.4, "text": " higher receptive field layers in that model. So the logic is that if that model" }, { "start": 2600.96, "end": 2608, "text": " also includes more global information, its signal is more accurate," }, { "start": 2608, "end": 2613.68, "text": " so its signal to your model will also be more sensitive to that global information. It's a" }, { "start": 2613.68, "end": 2619.2, "text": " little bit like, you know, in reinforcement learning, people do reward shaping. It seems like" }, { "start": 2619.2, "end": 2627.76, "text": " you do reward shaping by how you train the different discriminator models that then" }, { "start": 2627.76, "end": 2633.36, "text": " give your model the signal to learn. Yeah, I like the sort of meta idea here." }, { "start": 2633.36, "end": 2639.76, "text": " That's pretty cool. Unfortunately, I'm not familiar with reward shaping from reinforcement learning." }, { "start": 2640.4, "end": 2647.36, "text": " But our idea here was that basically we have two losses here. The first one is the discriminator," }, { "start": 2647.36, "end": 2653.6800000000003, "text": " or adversarial, loss, which focuses more on fine-grained details, and the second is the perceptual loss, which" }, { "start": 2653.6800000000003, "end": 2662.4, "text": " focuses more on global context, on global structures. For the Fourier convolutions, maybe a little bit" }, { "start": 2662.4, "end": 2668.4, "text": " more conceptual, right? We have this local information in one strand, we have this global" }, { "start": 2668.4, "end": 2676, "text": " information in the other strand. And it's clear that for these large masks, as you show, the system" }, { "start": 2676, "end": 2683.52, "text": " works pretty well. What kind of data does your system not work well on? Like, what would" }, { "start": 2683.52, "end": 2688.56, "text": " be sort of the worst input that I could give to your system? Like, this up here is really" }, { "start": 2688.56, "end": 2695.44, "text": " beautiful, right? What picture could I take such that it is absolute garbage? Yeah, actually," }, { "start": 2695.44, "end": 2705.84, "text": " lots of images will be processed badly with our model. I mean, of course, I can give it a picture" }, { "start": 2705.84, "end": 2711.44, "text": " that is, you know, very dissimilar to the training data set. But let's say it's actually in the training" }, { "start": 2711.44, "end": 2721.28, "text": " data distribution: what would be the worst domain or the worst kind of picture? Yeah, I think it cannot" }, { "start": 2722.16, "end": 2728.88, "text": " recreate half of a human or something. Yeah, our model focuses mostly on background, due to how" }, { "start": 2728.88, "end": 2736, "text": " it was trained. And yeah, it cannot recover foreground objects really well. It cannot" }, { "start": 2736, "end": 2744.64, "text": " do something that requires it to actually know everything about the world, and not just take it" }, { "start": 2744.64, "end": 2754.32, "text": " from the picture it sees. Yeah.
So do you feel that the model mostly learns how to sort of" }, { "start": 2754.32, "end": 2761.12, "text": " copy elements from the parts it sees to the parts that are masked? Do you think that the learning" }, { "start": 2761.12, "end": 2766.4, "text": " is mostly teaching the model how to do that? Because it seems the model is very sophisticated:" }, { "start": 2766.4, "end": 2772.24, "text": " you know, in Photoshop, you take this stamp tool, right? You say, I'll take a little bit from" }, { "start": 2772.24, "end": 2777.7599999999998, "text": " over here, put it here. Do you think your model is just like a really, really good user of that tool," }, { "start": 2777.7599999999998, "end": 2787.2799999999997, "text": " in a sense? Yeah, it seems so, yes. And in order to be able to create big parts of images from" }, { "start": 2787.28, "end": 2794.8, "text": " scratch, we need a different kind of model, and we most probably need some kind of capacity within" }, { "start": 2794.8, "end": 2800.5600000000004, "text": " the generator, because without it, it is not possible to create something from nothing." }, { "start": 2801.6000000000004, "end": 2809.0400000000004, "text": " Yeah. Also, our model is quite small, so it cannot really remember everything." }, { "start": 2809.84, "end": 2815.2000000000003, "text": " Yeah, that is something that I left completely out of my review. I think the fact that your model," }, { "start": 2815.2, "end": 2823.12, "text": " compared to the baselines you compare to, is a lot smaller, right? It has way fewer parameters." }, { "start": 2823.68, "end": 2830.3199999999997, "text": " That is something that's, I think, very cool, and enables it to run inside web applications" }, { "start": 2830.3199999999997, "end": 2837.2, "text": " and so on, or maybe on a mobile device... Yeah, I have another question on the" }, { "start": 2837.2, "end": 2844.16, "text": " Fourier convolution. So here we have global information and local information, right? As" }, { "start": 2844.16, "end": 2852, "text": " sort of two different things. You mentioned in the paper that other models that have more global" }, { "start": 2852, "end": 2857.2799999999997, "text": " information, or access to wider information, could also work, such as a vision transformer" }, { "start": 2857.2799999999997, "end": 2864.48, "text": " or something like this. My question is: is there an in-between between local convolutions and" }, { "start": 2864.48, "end": 2870, "text": " Fourier convolutions? Okay, I mean, there's dilated convolutions. But if I think of a Fourier" }, { "start": 2870, "end": 2875.92, "text": " transform, you transform into a space where locality no longer matters, but frequency matters." }, { "start": 2875.92, "end": 2882.08, "text": " And in the original domain, frequency kind of doesn't matter, but locality really matters." }, { "start": 2882.08, "end": 2889.28, "text": " Are there transforms that we could do that put us in between, where, you know," }, { "start": 2889.28, "end": 2894.96, "text": " as I go in the x coordinate, it's a little bit of frequency and a little bit of locality?" }, { "start": 2894.96, "end": 2901.52, "text": " Like, is there hope that instead of having multiple columns of information, we could sort of choose" }, { "start": 2901.52, "end": 2907.6, "text": " our space wisely to trade off local and global?
Or do you think this is already, you know, a local" }, { "start": 2907.6, "end": 2915.6, "text": " mix with two channels that is a good way to go? That's a very good question. Yeah, and I don't" }, { "start": 2915.6, "end": 2925.2, "text": " know the answer to it. One thing that comes to my mind is that there is the short-time Fourier transform," }, { "start": 2925.2, "end": 2932.64, "text": " which is often used for music processing, sound processing. And yeah, it kind of combines local" }, { "start": 2932.64, "end": 2939.44, "text": " convolutions with the Fourier transform. It can roughly be described as processing the whole" }, { "start": 2939.44, "end": 2948.7200000000003, "text": " signal with a sliding window and transforming each sliding window with the Fourier transform. Yeah." }, { "start": 2948.7200000000003, "end": 2956.56, "text": " So it is the most obvious combination. If you had to give your intuition why the Fourier convolutions" }, { "start": 2956.56, "end": 2961.36, "text": " make such a big difference here... of course, we've already discussed that the Fourier transform" }, { "start": 2961.36, "end": 2968.16, "text": " kind of loses the locality of the signal and gets global information, but why Fourier transforms?" }, { "start": 2968.16, "end": 2973.8399999999997, "text": " What's kind of good about this particular function and space that you chose?" }, { "start": 2973.8399999999997, "end": 2980.56, "text": " Surprisingly, if we throw the local branch away, it will still generate something meaningful." }, { "start": 2981.44, "end": 2994, "text": " So the spectral transform doesn't lose the local correlations completely. And I think that this" }, { "start": 2994, "end": 3002.56, "text": " is due to the fact that the generator has spectral transforms and spatial transforms interleaving" }, { "start": 3002.56, "end": 3013.52, "text": " each other, because here we can see that we have a conv one-by-one between the two FFTs, and we have two" }, { "start": 3013.52, "end": 3022.56, "text": " more convolutions before and after the spectral transform. They are one-by-one as well. So they" }, { "start": 3022.56, "end": 3031.84, "text": " don't capture local content directly, but they can combine channels at those particular locations." }, { "start": 3031.84, "end": 3039.84, "text": " And yeah, maybe that can somehow replace traditional convolutions: the fact that these" }, { "start": 3039.84, "end": 3047.6, "text": " spatial and spectral transforms are interleaved. Yeah. And when we think about generalization" }, { "start": 3047.6, "end": 3056.96, "text": " to higher resolution, I think the spectral transform helps because the low-frequency part" }, { "start": 3056.96, "end": 3067.6, "text": " of the spectrum does not depend that strongly on the input resolution." }, { "start": 3067.6, "end": 3082.24, "text": " It is almost the same no matter if we have 2056, or sorry, 256, or 2000. Yeah. Yeah, that by itself" }, { "start": 3082.24, "end": 3088.4, "text": " is one of the cool properties, again, of your paper: the fact that it can scale up to sort of very" }, { "start": 3088.4, "end": 3094.4, "text": " high resolutions. There are artifacts appearing, but they are not nearly as many as in" }, { "start": 3094.4, "end": 3100.88, "text": " other models. Looks pretty cool. Yeah, it doesn't scale up perfectly, but yeah, it's better than" }, { "start": 3100.88, "end": 3106.56, "text": " fully convolutional architectures. Cool. Yeah.
So where do you think... I mean, maybe you don't want" }, { "start": 3106.56, "end": 3115.6, "text": " to disclose, necessarily, but what is the plan for the future? We don't know where we'll get" }, { "start": 3115.6, "end": 3125.12, "text": " with our research. But yeah, the most obvious thing here is that we can try to improve the way" }, { "start": 3125.12, "end": 3133.7599999999998, "text": " it generalizes to high resolutions. And the second point is that we are trying to understand why" }, { "start": 3134.24, "end": 3142.64, "text": " it actually works, because yeah, it has lots of components. And we conducted an ablation" }, { "start": 3142.64, "end": 3150, "text": " study validating whether each of these components matters, but this just scratches the surface," }, { "start": 3150.8799999999997, "end": 3158.64, "text": " and we can go more in depth on that. And we are not satisfied with our loss, because it's" }, { "start": 3159.3599999999997, "end": 3169.04, "text": " that huge. There are many components that we need to balance, and we want a better loss with just one," }, { "start": 3169.04, "end": 3177.68, "text": " just one button: make everything work. Nice. So yeah, I mean, I was almost expecting you" }, { "start": 3177.68, "end": 3183.92, "text": " to say, we're not happy with our loss, we want more, we want like more components. But" }, { "start": 3183.92, "end": 3190.32, "text": " I think it's pretty cool that the goal is also to make a system that's kind of as good but simpler." }, { "start": 3191.2799999999997, "end": 3196.4, "text": " Yeah, I think that's a good idea." }, { "start": 3196.4, "end": 3203.52, "text": " I think that'll also make it much more accessible. Cool. Yeah. Roman, Elisa... sorry, Lisa," }, { "start": 3204.32, "end": 3209.2000000000003, "text": " is that correct? Yes. Okay, Lisa and Roman, thank you so much for being here." }, { "start": 3209.84, "end": 3216.2400000000002, "text": " It was a pleasure. Do you have any last criticisms of the video? Or shout-outs? No," }, { "start": 3216.2400000000002, "end": 3222.64, "text": " thank you very much for the discussion. It was really fun. And thank you for your channel," }, { "start": 3222.64, "end": 3231.44, "text": " because you do a really good job in helping others to keep up and catch up with" }, { "start": 3231.44, "end": 3252.7200000000003, "text": " this huge wave of information that we have in the field. Thanks. Thanks. Yeah, thank you. Thank you." } ]
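To make the mechanism discussed in this interview concrete, here is a minimal PyTorch sketch of the spectral-transform idea: an FFT, a conv one-by-one over the stacked real and imaginary channels, and an inverse FFT. This is an illustration of what the authors describe above, not their actual FFC code; the normalization and activation choices here are assumptions.

import torch
import torch.nn as nn

class SpectralTransformSketch(nn.Module):
    # Sketch of the FFC "global branch": FFT -> conv 1x1 between the two FFTs -> inverse FFT.
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),  # mixes channels per frequency
            nn.BatchNorm2d(2 * channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")        # complex tensor, shape (B, C, H, W//2+1)
        f = torch.cat([freq.real, freq.imag], dim=1)   # stack real/imag into 2C real channels
        f = self.mix(f)                                # 1x1 conv: no spatial kernel, only channel mixing
        real, imag = torch.chunk(f, 2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")

x = torch.randn(1, 8, 64, 64)
print(SpectralTransformSketch(8)(x).shape)  # torch.Size([1, 8, 64, 64])

Because every frequency coefficient depends on every input pixel, even a single such layer gives each output location a global receptive field, which matches the observations above that the global branch alone still generates something meaningful and that the low-frequency part of the spectrum barely changes with input resolution.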
W3mrgqtm5R4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] BLOOM: 176B Open-Source | Chinese Brain-Scale Computer | Meta AI: No Language Left Behind
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "bloom", "nlp", "gpt3", "gpt 3", "gpt-3", "eleuther ai", "eleutherai", "bigscience", "bigsciencew", "big science", "huggingface", "hugging face", "yalm", "yandex", "facebook", "nllb", "meta ai language", "meta ai translation", "machine translation", "ml news", "mlnews", "kilcher news", "ml news bloom", "responsible ai", "rail license", "ai model license", "ai license", "chatbot", "ai chatbot", "are chatbots allowed", "karpathy leaves tesla" ]
#mlnews #bloom #ai Today we look at all the recent giant language models in the AI world! OUTLINE: 0:00 - Intro 0:55 - BLOOM: Open-Source 176B Language Model 5:25 - YALM 100B 5:40 - Chinese Brain-Scale Supercomputer 7:25 - Meta AI Translates over 200 Languages 10:05 - Reproducibility Crisis Workshop 10:55 - AI21 Raises $64M 11:50 - Ian Goodfellow leaves Apple 12:20 - Andrej Karpathy leaves Tesla 12:55 - Wordalle References: BLOOM: Open-Source 176B Language Model https://bigscience.huggingface.co/blog/bloom https://huggingface.co/spaces/bigscience/license https://huggingface.co/bigscience/bloom?text=34%2B10%3D44+%0A54%2B20%3D YALM 100B https://github.com/yandex/YaLM-100B Chinese Brain-Scale Supercomputer https://www.scmp.com/news/china/science/article/3182498/china-supercomputer-achieves-global-first-brain-scale-ai-model?utm_source=pocket_mylist https://archive.ph/YaoA6#selection-1237.156-1237.246 Meta AI Translates over 200 Languages https://ai.facebook.com/research/no-language-left-behind/ Reproducibility Crisis Workshop https://reproducible.cs.princeton.edu/ AI21 Raises $64M https://techcrunch.com/2022/07/12/openai-rival-ai21-labs-raises-64m-to-ramp-up-its-ai-powered-language-services/?guccounter=1 Ian Goodfellow leaves Apple https://twitter.com/goodfellow_ian/status/1544638709039091717 Andrey Karpathy leaves Tesla https://mobile.twitter.com/karpathy/status/1547332300186066944 https://www.businessinsider.com/report-tesla-laid-off-about-200-people-in-autopilot-unit-2022-6?r=US&IR=T Wordalle https://huggingface.co/spaces/huggingface-projects/wordalle?utm_source=pocket_mylist Links: Homepage: https://ykilcher.com Merch: https://ykilcher.com/merch YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
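As a rough illustration of the API shape for querying the released model from the Hub page referenced above: a minimal sketch using the Hugging Face transformers library. Note this is an assumption-laden toy; the full 176B checkpoint is hundreds of gigabytes of weights, which is why the video pokes at the hosted inference widget instead of loading it locally.

# pip install transformers   (and a *lot* of memory for the full model)
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom")  # roughly 350 GB of weights

prompt = "34+10=44 \n54+20="   # the arithmetic prompt from the references above
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=4)
print(tok.decode(out[0], skip_special_tokens=True))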
BLOOM finishes training and is now released as the biggest open-source language model to date. A new Chinese supercomputer is allegedly able to compute brain-scale AI models. And both Ian Goodfellow and Andrej Karpathy leave their jobs. Welcome to ML News. Hello and welcome everyone to ML News, or rather ML Old. I've been gone for a while. What happened? Yeah, sorry, I was busy getting canceled and all. But you know, I'm back. So we're going to catch up on everything that happened over the summer, and we're going to do it in different installments. So if your favorite thing is not in the news right now, maybe wait a bit or remind me of it. This installment is all about large models; there has been a plethora of huge models coming out of both companies and research initiatives. Speaking of which: BigScience is a research conglomerate, a workshop, a group of over 1000 researchers from over 250 institutions coming together and trying to replicate something like GPT-3, and not only replicate it but go beyond. BLOOM is the result of this effort. It is a 176-billion-parameter language model, which is released as fully open source: the model has been developed open source, has been trained open source, and is now released to the world for everyone to use and research. But not only that: unlike with something like GPT-3, we know everything that's going into these models, we know what data is in there. And the data is really cool. The model is explicitly made to be multilingual. In fact, the training data contains over 59 languages, probably even more. Now, 13 of these 59 are programming languages, so the model is also going to be relatively decent at that. This is a huge step forward for open-source research, for language research, and especially when it comes to less represented languages in the usual training data. The model was trained with sponsored compute and is available on the Hugging Face Hub to download. You can even enter a little prompt over here, yet they only accept smaller, short prompts for now, because the model is rather large. No, 54 plus 20 is not exactly four, but we'll get there, BLOOM, we'll get there. Now, one interesting aspect about this model is that it is released under the BigScience RAIL license, which is the Responsible AI License. This license is kind of like a copyleft license in the sense that if you create derivative works of this model, like if you fine-tune it, you have to release them under the same terms as this license. The license governs the use of the model and essentially says that you cannot use this model for a certain number of things, which are listed in the license. So if you look at the license, you have to scroll down a little bit, and if you scroll down more, there's like a huge blank space, and then there's Appendix A, and these are the use restrictions. Now, most of these restrictions are fairly standard. For example, you are not allowed to use the model in any way that violates, you know, state law, international law, federal law, and so on. You're not allowed to use the model for the purpose of exploiting, harming, or attempting to exploit or harm minors in any way. There's a number of these things. The more interesting ones, I think, are these: you're not allowed to use the model for fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation. A binding, enforceable obligation would be something like a contract.
So you are not allowed to use this model to make automatic contract decisions. I'm not entirely sure what exactly that prohibits. Let's say the authors here intended to prevent something like automated decision making in terms of hiring someone, or maybe automated selling of something like insurance: a person comes in, says I want to get some insurance, and they just talk to a chatbot, and the chatbot, you know, actually makes the contract. I'm not exactly sure how this license would apply here. Like, could I make it such that the chatbot simply makes a suggestion back to the human and says, here is an offer, you know, you can accept it or not? Or does there at some point need to be a human in the loop from the side of the model? Like, for sure the model can make a contract offer about a piece of insurance, but then maybe an insurance agent will still have to look over that, look over the applicant, and say, yeah, that's correct, or that's not correct. I think this is going to be hashed out at some point, which is not now. This is probably not the first time software has been released under such restrictions, but probably the first time a big AI model has been. The other interesting one is this: you're not allowed to generate or disseminate information or content in any context, for example, posts, articles, tweets, chatbots, or other kinds of automated bots, without expressly and intelligibly disclaiming that the text is machine generated. But who would do something like this? I mean, come on. All in all, I think the license is actually fairly permissive. There are a lot of things that you actually can do with a model like this, and that's really cool. And it's available for everyone to research and even to build monetizable products on top of. So let me know what you think in the comments about the model, about the license, and so on. Other big models: YaLM 100B is a 100-billion-parameter GPT-like language model by Yandex, and it can mainly speak English and Russian. Now, if we go not one but three orders of magnitude bigger in terms of models: the South China Morning Post writes that a China supercomputer achieves a global first with a brain-scale AI model. So, apparently, and I'm going to say apparently because there are no official statements out yet, there is a new supercomputer in China that has trained a neural network with 174 trillion parameters. That's trillion; that is 1000 times bigger than something like GPT-3 or BLOOM or any of the biggest models that we have today. Now, we've seen trillion-parameter models before, but they've usually been sparse in some way, and we have no clue what this model here represents. But as the article says, this does approach the number of synapses in a brain. Now, that's not to say that we've replicated the brain, but these models are getting extremely huge. Apparently the scientists said that they had achieved a decent performance from the unprecedented brain-scale AI model, whatever that means. They also say the communication between the nodes of the supercomputer is over 23 petabytes per second, with one researcher saying that the machine's parallel computing ability mimicked human thinking, like eating while watching television. To that I have to say: in all these stages of building AGI, certainly the last step is going to be an AI that can eat while watching television.
In fact, it's true, I've never ever seen a robot or a piece of software that can eat while watching television. So if this is true, AGI is almost solved. Meta AI releases a blog post along with a paper under the heading No Language Left Behind: another huge language model, in fact a translation model, that focuses on translating between a plethora of languages, in fact over 200, and with a particular focus on low-resource languages. Low-resource languages have been a problematic topic for machine translation for a while, because AI models, especially big models that perform really well, need lots of data. In the case of machine translation, they in fact need aligned data: they need the same text in two different languages to be able to translate between those languages. There are techniques like pivoting, but that still requires you to have parallel data from both languages to English at some point. This model overcomes this by, in fact, using another AI model to automatically align texts in different languages. So you can feed in unaligned text, and the model will find parts in each of the texts that probably align with each other. This then serves as a base data set to train a translation system. This is really cool, and we've seen this a number of times now: to, in fact, use one model to generate training data for another model. And I strongly believe that we might go beyond this paradigm, this really simple paradigm of, you know, get big data, train one model, and done. We've seen a number of configurations; for example, with generative models, we've seen various benefits of having a critic, a model that selects and ranks the outputs of generative models in order to make them better. And in the case of this model right here, and others, we've seen numerous models where first the training data is automatically generated by another model. And I think this opens up a possibility: think not just about what can I do with one model, how can I train one model, but think about the models that we already have, and think about what you could do to use them to create training data to train other models that we usually wouldn't have enough training data for. This has been thought about, obviously, for a long time. I think a lot of people, when they learned about GANs for the first time, were like, wow, we can create so much training data to train our classifiers. But this is kind of the wrong way around: a generative model like a GAN has much more information contained in it than an image classifier, which kind of reduces the space to the number of classes. So it seems like you kind of have to go from models that know less to models that know more. What exactly that entails, I think, you know, smart people will have to come up with things like this. But it's really cool to think about, and this is a really cool work, so check it out. Alright, I quickly wanted to mention this workshop here, which is held on July 28, so potentially kind of right now or something like this, depending on when this is released. This is a workshop on the leakage and reproducibility crisis in ML-based science. Machine learning itself obviously has a reproducibility problem, but there are also a number of machine-learning-based papers in other fields such as medicine, chemistry, physics, biology, and whatnot, and these are apparently even worse in terms of reproducibility when they apply machine learning.
So this is a workshop focusing on various pitfalls like no train/test split, temporal leakage, and things like pre-processing on train and test sets together. Now I have to admit, I'm guilty of this, I've done this before. But if you're interested in topics like this and want to learn more, this workshop is surely a good place to go. TechCrunch writes: OpenAI rival AI21 Labs raises $64 million to ramp up its AI-powered language services. Yet another startup raising giant amounts of money to build giant models. I'm not exactly sure all this money flowing into this stuff is going to pay off for all of them; I mean, surely not for all of them. Is it going to pay off for a lot of them? I don't know. But I've reported on AI21 in the past, and I think they have a really interesting approach with their Jurassic-X models, where they try to compose different tools and make the language model not solve tasks as such, but make the language model learn how to use other programs, other tools, in order to complete its tasks. I think that's, you know, a really cool paradigm to go about things. I'm not sure how it's going to work out for them business-wise, but I congratulate them on their funding round. Exciting times. That Ian Goodfellow is leaving Apple to join DeepMind has long been rumored; articles have been written that he's not happy with the remote working agreements and so on. But he's released a simple tweet, and as always, take what is rumored by journalists with a grain of salt. Usually you know only about 5% of the story of what's going on. In any case, I wish Ian the best of success at DeepMind; seems like cool times for him. And very similarly, Andrej Karpathy is leaving Tesla. He's just recently gone on a sabbatical, and now he's leaving for sure. He does not have a place that he's switching to; it seems like he's going to focus on doing things he enjoys, and, you know, good for Andrej. In related news, Business Insider writes that Tesla reportedly, reportedly, again laid off about 200 workers in its autopilot division. Very dark rumors actually say that they are all being replaced by Optimus bots, but that's unconfirmed for now. And the last thing right here: this is Wordalle, a Hugging Face space that composes the concept of the popular game Wordle with DALL-E. So you get a bunch of images from DALL-E Mini, which is now Craiyon, and you're supposed to guess the prompt. So this one, every time you refresh, you get a new one. This one, I'm going to take a guess: it is Eminem in GTA. E... Eminem in GTA. Yeah, yeah! Okay, first try, first try, but it gets harder, promise. All right, this was it for ML News slash Old slash what happened over the summer slash I'm no longer canceled. I hope you enjoyed it; leave a comment, leave a like, share it out, subscribe, all that stuff. Please keep hydrated during these warm times, and I'll see you next time when we continue.
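The "pre-processing on train and test sets together" pitfall mentioned in the workshop segment is easy to show in code. A minimal sketch with arbitrary dataset and model choices: fitting the scaler before the split leaks test-set statistics into training, while a pipeline fit on the training fold only does not.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, random_state=0)

# Leaky: the scaler sees the test rows before the split.
X_leaky = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)

# Safe: the pipeline fits the scaler on the training fold only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))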
[ { "start": 0.48, "end": 6.32, "text": " Bloom finishes training and is now released as the biggest open source language model to date." }, { "start": 6.88, "end": 12.88, "text": " A new Chinese supercomputer is allegedly able to compute brain scale AI models." }, { "start": 13.52, "end": 19.28, "text": " And both Ian Goodfellow and Andrej Karpati leave their jobs. Welcome to ML News." }, { "start": 19.28, "end": 30, "text": " Hello and welcome everyone to ML News rather ML old I've been gone for a while. What happened?" }, { "start": 30, "end": 35.92, "text": " Yeah, sorry, I was busy getting canceled and all. So but you know, I'm back. So we're going to catch" }, { "start": 35.92, "end": 40.72, "text": " up on everything that happened over the summer. And we're going to do it in different installments." }, { "start": 40.72, "end": 46.96, "text": " So if your favorite thing is not in the news right now, maybe wait a bit or remind me of it. This" }, { "start": 46.96, "end": 52.96, "text": " installment is all about large models, there have been a plethora of huge models coming out of both" }, { "start": 52.96, "end": 59.92, "text": " companies and research initiatives. Speaking of which big science is a research conglomerate," }, { "start": 59.92, "end": 67.52, "text": " a workshop, a group of people over 1000 researchers from over 250 countries coming together and trying" }, { "start": 67.52, "end": 74.4, "text": " to replicate something like GPT three not only replicate but go beyond bloom is the result of" }, { "start": 74.4, "end": 81.44000000000001, "text": " this effort. It is a 176 billion parameter language model, which is released as fully open source," }, { "start": 81.44000000000001, "end": 86.64, "text": " the model has been developed open source has been trained open source and is now released to the" }, { "start": 86.64, "end": 93.04, "text": " world for everyone to use and research. But not only that other than something like GPT three," }, { "start": 93.04, "end": 98.24000000000001, "text": " we know everything that's going into these models, we know what data is in there. And the data is" }, { "start": 98.24000000000001, "end": 103.68, "text": " really cool. The model is explicitly made to be multilingual. In fact, the training data contains" }, { "start": 103.68, "end": 111.60000000000001, "text": " over 59 languages, probably even more. Now, 13 of these 59 are programming languages. So the model" }, { "start": 111.60000000000001, "end": 116.56, "text": " is also going to be relatively decent at that. But this is a huge step forward for open source" }, { "start": 116.56, "end": 122.72, "text": " research for language research, and especially when it comes to less represented languages in the" }, { "start": 122.72, "end": 128.56, "text": " usual training data. The model was trained with sponsored compute and is available on the hugging" }, { "start": 128.56, "end": 135.28, "text": " face hub to download, you can even enter a little prompt over here, yet they do only accept smaller" }, { "start": 135.28, "end": 144.24, "text": " short prompts for now because the model is rather large. No 54 and 20 is not exactly four, but we'll" }, { "start": 144.24, "end": 149.36, "text": " get there bloom we'll get there. Now one interesting aspect about this model is that it is released" }, { "start": 149.36, "end": 156.08, "text": " under the big science real license, which is the responsible AI license. 
This license is kind of" }, { "start": 156.08, "end": 162, "text": " like a copyleft license in the sense that if you create derivative works of this model, like if you" }, { "start": 162, "end": 167.68, "text": " fine-tune it, you have to release them under the same terms as this license. The license governs the" }, { "start": 167.68, "end": 173.36, "text": " use of the model and essentially says that you cannot use this model for a certain number of" }, { "start": 173.36, "end": 178.48000000000002, "text": " things, which are listed in the license. So if you look at the license, you have to scroll down a" }, { "start": 178.48000000000002, "end": 184.32000000000002, "text": " little bit. And if you scroll down more, there's like a huge blank space. And then there's Appendix" }, { "start": 184.32, "end": 189.76, "text": " A, and these are the use restrictions. Now, most of these restrictions are fairly standard. For" }, { "start": 189.76, "end": 195.12, "text": " example, you are not allowed to use the model in any way that violates, you know, state law," }, { "start": 195.12, "end": 199.76, "text": " international law, federal law, and so on. You're not allowed to use the model for the purpose of" }, { "start": 199.76, "end": 204.95999999999998, "text": " exploiting, harming, or attempting to exploit or harm minors in any way. There's a number of these things." }, { "start": 204.95999999999998, "end": 209.92, "text": " The more interesting ones, I think, are these: you're not allowed to use the model for fully" }, { "start": 209.92, "end": 215.11999999999998, "text": " automated decision making that adversely impacts an individual's legal rights or otherwise creates" }, { "start": 215.11999999999998, "end": 221.27999999999997, "text": " or modifies a binding, enforceable obligation. A binding, enforceable obligation would be something" }, { "start": 221.27999999999997, "end": 227.04, "text": " like a contract. So you are not allowed to use this model to make automatic contract decisions." }, { "start": 227.04, "end": 233.27999999999997, "text": " I'm not entirely sure what exactly that prohibits. Let's say the authors here intended to prevent" }, { "start": 233.27999999999997, "end": 238.79999999999998, "text": " something like automated decision making in terms of hiring someone, or maybe automated selling of" }, { "start": 238.8, "end": 243.04000000000002, "text": " something like insurance: a person comes in, says I want to get some insurance, and they just talk" }, { "start": 243.04000000000002, "end": 248.8, "text": " to a chatbot, and the chatbot, you know, actually makes the contract. I'm not exactly sure how this" }, { "start": 248.8, "end": 254.24, "text": " license would apply here. Like, could I make it such that the chatbot simply makes a suggestion" }, { "start": 254.24, "end": 259.36, "text": " back to the human and says, here is an offer, you know, you can accept it or not? Or does there at some" }, { "start": 259.36, "end": 265.04, "text": " point need to be a human in the loop from the side of the model? Like, for sure the model can make a" }, { "start": 265.04, "end": 270.40000000000003, "text": " contract offer about a piece of insurance, but then maybe an insurance agent will still have to" }, { "start": 270.40000000000003, "end": 274.48, "text": " look over that, look over the applicant, and say, yeah, that's correct, or that's not correct." }, { "start": 274.48, "end": 280.48, "text": " I think this is going to be hashed out at some point, which is not now.
This is probably not" }, { "start": 280.48, "end": 286.24, "text": " the first time software has been released under such restrictions, but probably the first time a big" }, { "start": 286.24, "end": 291.44, "text": " AI model has been. The other interesting one is this: you're not allowed to generate or disseminate information" }, { "start": 291.44, "end": 296.64, "text": " or content in any context, for example, posts, articles, tweets, chatbots, or other kinds of" }, { "start": 296.64, "end": 302.64, "text": " automated bots, without expressly and intelligibly disclaiming that the text is machine generated." }, { "start": 302.64, "end": 308, "text": " But who would do something like this? I mean, come on. All in all, I think the license is actually" }, { "start": 308, "end": 313.92, "text": " fairly permissive. There are a lot of things that you actually can do with a model like this. And" }, { "start": 313.92, "end": 319.76, "text": " that's really cool. And it's available for everyone to research and even build monetizable products" }, { "start": 319.76, "end": 324.4, "text": " on top of. So let me know what you think in the comments about the model, about the license, and so" }, { "start": 324.4, "end": 335.44, "text": " on. Other big models: YaLM 100B is a 100-billion-parameter GPT-like language model by Yandex," }, { "start": 335.44, "end": 341.84, "text": " and it can mainly speak English and Russian. Now, if we go not one but three orders of magnitude" }, { "start": 341.84, "end": 348, "text": " bigger in terms of models: the South China Morning Post writes that a China supercomputer achieves a global" }, { "start": 348, "end": 353.92, "text": " first with a brain-scale AI model. So, apparently, and I'm going to say apparently because" }, { "start": 353.92, "end": 360.08, "text": " there are no official statements out yet, there is a new supercomputer in China that" }, { "start": 360.08, "end": 368.24, "text": " has trained a neural network with 174 trillion parameters. That's trillion; that is 1000 times" }, { "start": 368.24, "end": 373.92, "text": " bigger than something like GPT-3 or BLOOM or any of the biggest models that we have today." }, { "start": 373.92, "end": 379.76, "text": " Now, we've seen trillion-parameter models before, but they've usually been sparse in some way, and" }, { "start": 379.76, "end": 385.12, "text": " we have no clue what this model here represents. But as the article says, this does" }, { "start": 385.12, "end": 390.56, "text": " approach the number of synapses in a brain. Now, that's not to say that we've replicated the brain," }, { "start": 390.56, "end": 396.16, "text": " but these models are getting extremely huge. Apparently the scientists said that they had" }, { "start": 396.16, "end": 402.8, "text": " achieved a decent performance from the unprecedented brain-scale AI model, whatever that means. They" }, { "start": 402.8, "end": 409.52000000000004, "text": " also say the communication between the nodes of the supercomputer is over 23 petabytes per second," }, { "start": 409.52000000000004, "end": 415.04, "text": " with one researcher saying that the machine's parallel computing ability mimicked human" }, { "start": 415.04, "end": 421.44, "text": " thinking, like eating while watching television. To that I have to say: in all these stages of building" }, { "start": 421.44, "end": 427.6, "text": " AGI, certainly the last step is going to be an AI that can eat while watching television.
I have" }, { "start": 427.6, "end": 432.88, "text": " the feeling there is hardly a greater human achievement than doing those two things at the" }, { "start": 432.88, "end": 439.6, "text": " same time. In fact, it's true, I've never ever seen a robot or a piece of software that can eat" }, { "start": 439.6, "end": 444.32000000000005, "text": " while watching television. So if this is true, a GI is almost solved." }, { "start": 446.48, "end": 452, "text": " Meta AI releases a blog post along with a paper under the heading No Language Left Behind," }, { "start": 452, "end": 458.24, "text": " another huge language model, in fact, a translation model that focuses on translating between a" }, { "start": 458.24, "end": 465.04, "text": " plethora, in fact, over 200 languages, and with a particular focus on low resource languages," }, { "start": 465.04, "end": 470.64, "text": " low resource languages have been a problematic topic for machine translation for a while," }, { "start": 470.64, "end": 476.08, "text": " because AI models, especially big models that perform really well need lots of data in the" }, { "start": 476.08, "end": 481.44, "text": " question of machine translation, they in fact need aligned data, they need the same text in two" }, { "start": 481.44, "end": 485.84, "text": " different languages to be able to translate between those languages, there are techniques" }, { "start": 485.84, "end": 491.04, "text": " like pivoting, but that still requires you to have like parallel data from both languages to" }, { "start": 491.04, "end": 497.92, "text": " English at some point, this model overcomes this by in fact, using another AI model to automatically" }, { "start": 497.92, "end": 504.56, "text": " align texts of different images. So you can feed in unaligned text and the model will find parts" }, { "start": 504.56, "end": 509.44, "text": " in each of the texts that probably align with each other. This then serves as a base data set" }, { "start": 509.44, "end": 514.8, "text": " to train a translation system. This is really cool. And we've seen this a number of times to" }, { "start": 514.8, "end": 521.04, "text": " in fact, use one model to generate training data for another model. And I strongly believe that we" }, { "start": 521.04, "end": 526.16, "text": " might go beyond this paradigm, this really simple paradigm of, you know, get big data, train one" }, { "start": 526.16, "end": 530.88, "text": " model and done, we've seen a number of configurations, for example, with generative model," }, { "start": 530.88, "end": 536.48, "text": " we've seen various benefits of having a critic, a model that selects and ranks the outputs of" }, { "start": 536.48, "end": 540.5600000000001, "text": " generative models in order to make it better. And in the case with this model right here," }, { "start": 540.5600000000001, "end": 545.6800000000001, "text": " and others, we've seen numerous models where first training data is automatically generated" }, { "start": 545.6800000000001, "end": 551.52, "text": " by another model. And I think this opens up a possibility if you think of this, if you think" }, { "start": 551.52, "end": 556.96, "text": " not just what can I do with one model, how can I train one model, but think about the models that" }, { "start": 556.96, "end": 562.4, "text": " we already have and think about what you could do to use them to create training data to train" }, { "start": 562.4, "end": 567.84, "text": " other models that we usually wouldn't have enough training data for. 
This has been thought about," }, { "start": 567.84, "end": 572, "text": " obviously, for a long time. I think a lot of people, when they learned about GANs for the first time," }, { "start": 572, "end": 576.9599999999999, "text": " were like, wow, we can create so much training data to train our classifiers. But this is kind of" }, { "start": 576.9599999999999, "end": 582.72, "text": " the wrong way around: a generative model like a GAN has much more information contained in it than" }, { "start": 582.72, "end": 587.76, "text": " an image classifier, which kind of reduces the space to the number of classes. So it seems like" }, { "start": 587.76, "end": 595.52, "text": " you kind of have to go from models that know less to models that know more. What exactly that entails," }, { "start": 595.52, "end": 599.68, "text": " I think, you know, smart people will have to come up with things like this. But it's really cool to" }, { "start": 599.68, "end": 605.92, "text": " think about. And this is a really cool work. So check it out. Alright, I quickly wanted to mention" }, { "start": 605.92, "end": 612.48, "text": " this workshop here, which is held on July 28, so potentially kind of right now or something like" }, { "start": 612.48, "end": 617.28, "text": " this, depending on when this is released. This is a workshop on the leakage and reproducibility crisis" }, { "start": 617.28, "end": 622.48, "text": " in ML-based science. Machine learning itself obviously has a reproducibility problem. But" }, { "start": 622.48, "end": 629.12, "text": " there are also a number of machine-learning-based papers in other fields such as medicine, chemistry," }, { "start": 629.12, "end": 635.92, "text": " physics, biology, and whatnot. And these are apparently even worse in terms of reproducibility" }, { "start": 635.92, "end": 641.76, "text": " when they apply machine learning. So this is a workshop focusing on various pitfalls like" }, { "start": 641.76, "end": 648, "text": " no train/test split, temporal leakage, and things like pre-processing on train and test sets together." }, { "start": 648, "end": 652.88, "text": " Now I have to admit, I'm guilty of this. I've done this before. But if you're interested in" }, { "start": 652.88, "end": 657.28, "text": " topics like this and want to learn more, this workshop is surely a good place to go." }, { "start": 659.12, "end": 666, "text": " TechCrunch writes: OpenAI rival AI21 Labs raises $64 million to ramp up its AI-powered" }, { "start": 666, "end": 672.56, "text": " language services. Yet another startup raising giant amounts of money to build giant models." }, { "start": 672.56, "end": 678.88, "text": " I'm not exactly sure all this money flowing into this stuff is going to pay off for all of them." }, { "start": 678.88, "end": 684.4, "text": " I mean, surely not for all of them. Is it going to pay off for a lot of them? I don't know. But" }, { "start": 684.4, "end": 689.6, "text": " I've reported on AI21 in the past, and I think they have a really interesting approach with their" }, { "start": 689.6, "end": 694.48, "text": " Jurassic-X models, where they try to compose different tools and make the language model" }, { "start": 694.48, "end": 700.48, "text": " not solve tasks as such, but make the language model learn how to use other programs, other" }, { "start": 700.48, "end": 704.72, "text": " tools, in order to complete its tasks. I think that's, you know, a really cool paradigm to" }, { "start": 704.72, "end": 710.32, "text": " go about things.
I'm not sure how it's going to work out for them business-wise, but I congratulate" }, { "start": 710.32, "end": 718.4, "text": " them on their funding round. Exciting times. That Ian Goodfellow is leaving Apple to join DeepMind" }, { "start": 718.4, "end": 723.44, "text": " has long been rumored; articles have been written that he's not happy with the remote working" }, { "start": 723.44, "end": 728.48, "text": " agreements and so on. But he's released a simple tweet, and as always, take what is rumored by" }, { "start": 728.48, "end": 734.48, "text": " journalists with a grain of salt. Usually you know only about 5% of the story of what's going" }, { "start": 734.48, "end": 740.48, "text": " on. In any case, I wish Ian the best of success at DeepMind; seems like cool times for him. And" }, { "start": 740.48, "end": 746.96, "text": " very similarly, Andrej Karpathy is leaving Tesla. He's just recently gone on a sabbatical, and now" }, { "start": 746.96, "end": 752.24, "text": " he's leaving for sure. He does not have a place that he's switching to; it seems like he's going" }, { "start": 752.24, "end": 758.24, "text": " to focus on doing things he enjoys. And, you know, good for Andrej. In related news, Business Insider" }, { "start": 758.24, "end": 764.96, "text": " writes that Tesla reportedly, reportedly, again laid off about 200 workers in its autopilot division." }, { "start": 764.96, "end": 771.52, "text": " Very dark rumors actually say that they are all being replaced by Optimus bots, but that's unconfirmed" }, { "start": 771.52, "end": 779.2, "text": " for now. And the last thing right here: this is Wordalle, a Hugging Face space that composes" }, { "start": 779.2, "end": 786.08, "text": " the concept of the popular game Wordle with DALL-E. So you get a bunch of images from DALL-E Mini," }, { "start": 786.08, "end": 791.12, "text": " which is now Craiyon, and you're supposed to guess the prompt. So this one, every time you refresh," }, { "start": 791.12, "end": 798.24, "text": " you get a new one. This one, I'm going to take a guess: it is Eminem in GTA. E... Eminem" }, { "start": 798.24, "end": 813.52, "text": " in GTA. Yeah, yeah! Okay, first try, first try, but it gets harder, promise. All right," }, { "start": 813.52, "end": 818.72, "text": " this was it for ML News slash Old slash what happened over the summer slash I'm no longer" }, { "start": 818.72, "end": 824.08, "text": " canceled. I hope you enjoyed it; leave a comment, leave a like, share it out, subscribe, all that stuff." }, { "start": 824.08, "end": 829.9200000000001, "text": " Please keep hydrated during these warm times, and I'll see you next time when we continue." } ]
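To give a flavor of the automatic text-alignment idea from the No Language Left Behind segment: embed sentences from two languages with a multilingual encoder and keep high-similarity pairs as mined training data. This is a toy stand-in, not Meta's mining pipeline; the encoder name and the threshold are assumptions, and real systems use margin-based scoring over huge crawled corpora.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")  # multilingual sentence encoder
en = ["The cat sits on the mat.", "It is raining today."]
de = ["Heute regnet es.", "Die Katze sitzt auf der Matte."]

sim = util.cos_sim(model.encode(en), model.encode(de))  # (2, 2) similarity matrix
for i, row in enumerate(sim):
    j = int(row.argmax())
    if row[j] > 0.7:  # arbitrary cutoff; accepted pairs become aligned training data
        print(en[i], "<->", de[j])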
RrBapqCPnmE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML Coding Tips] Separate Computation & Plotting using locals
[ "Science & Technology" ]
[ "deep learning", "machine learning", "coding", "research", "engineering", "ipython", "colab", "notebook", "locals" ]
Here's a lazy way to separate computation and subsequent analysis in a notebook without the overhead of manually saving local variables. WARNING: Don't do this in a serious project. Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! So today I just wanted to bring you a quick coding tip that I often encounter in my daily machine learning research or life that might not be super common in, let's say, traditional software engineering or elsewhere. So often I have a bunch of, let's say I have a bunch of models right, and I use these IPython notebooks or Colabs to analyze my data, plot things and so on. So I have a bunch of models, let's say they're called M1 and M2, and I'll usually run my big jobs on a cluster. So let's say I have run some jobs, one for each model, and have some logs that I want to analyze. So I'll go through my models and here I load a bunch of logs and I'll also compute some stuff, some statistics and some just things, right, that I want to have computed. And maybe these things are called A and, let's see, this and B, right. So now that I've computed these things, I want to analyze them. And let's shortcut and say printing is plotting. So think of these, this might be numbers right, and now I want to plot them out here, let's just print them, which I do like this. Now every time I want to, you know, change something here in my printing, maybe I want the separator to be that, I'll have to load all the logs and compute all the stuff each time I run this cell, which is not super cool, right. So I usually would like to factor out the plotting and stuff like this from the computation. So I could extract one of them into like a function, but then the point of these notebooks is that I can run each of the cells and they'll run right there, right. So functions aren't really cool in these notebooks. So what I'll usually end up doing is you'll have some second loop in here, right, and, let's see, you'll have some data dict up here and you'll... here at the end you'll say something like data for this model is A and B or something like this, right, and then down here the first thing I do is I'll get my data and then I'll unpack again. So DA, DB, either I'll unpack or I'll just address them in dictionary notation like this and then I can do my plotting, right. Some people use tuples here, right, they could just go A and B, but then you'll have to do this unpacking. The problem is now if I want to add something here, right, I compute something new. I now need to add something here, right, I need to remember to store it in the data dict and then I need to remember to unpack it here in the same order, right, and then I need to put it into the plotting, right. So this is very cumbersome. This line here and this line here, they're very... because you kind of duplicate your variable names all over the place just because you want to compute them here and use them here. A software engineer would usually tell you to do something like a data class, or in its simplest form, say, a class, and you know there are multiple ways of achieving this, but let's just do A and B here. This is probably the most verbose; you can also do namedtuples, attrs classes, data classes and so on, but ultimately you produce a class like this and then here you say this is a data class A and B, and then down here you can at least address them like this, right, and you don't have to do the dictionary notation or remember the order. But now again, if you want to add the C, not only do you have to add it here, but you have to add it up here and you have to add it here, and beware if there's a docstring, and then you can use it here. This is just too cumbersome.
So here is a trick, and please only use these in notebooks like this. This will lead to so many memory problems and everything, and if you work with a software engineer you might have to get them chocolate for doing this. But alright, so what you can do is you can save your local variables in this dictionary. So this is a built-in that saves the local variables, sorry, that gives you a dict of the local variables. The local variables here are A, B, C, M and a bunch of other things, right. It's not clear-cut in these IPython notebooks; they have a bunch of stuff around them such that the meaning is slightly different depending on where locals is, but in essence you'll get the A, B and C that you want. Now, in order for this to go through, the locals dict refers of course to the same objects, not newly constructed ones, so we're safe here to make a copy of that, right, like this, and then down here we simply retrieve these local variables and update the current local variables using the local variables we stored, and voila, we can use A, B and C without ever having defined them, right. If this doesn't work, that means that the locals here is basically a copy, and this is a Python optimization. I don't have to work around it here because in these IPython notebooks it appears to work, but you might want to have an empty exec statement here such that the Python interpreter can never be sure which variables are created in here and therefore can't optimize them away. So that is a horrible, horrible trick. Again, don't build anything serious upon this, but it is super duper easy: you can just add any variable here and then use it down here. So easy, right, and so wrong. Alright, this was it. I hope you enjoyed this. I want to bring these kinds of tips every now and then as an intermix into the research papers. I hope you enjoyed this, bye bye
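For readers who want to see the trick end to end, here is a minimal sketch of what the video walks through, written for a notebook's top level. The helpers load_logs, compute_a, and compute_b are hypothetical stand-ins for the expensive work; nothing below is the video's literal code.

```python
# --- Cell 1: expensive computation, run once ---
saved = {}
for model in ["m1", "m2"]:
    logs = load_logs(model)   # assumed helper that fetches the cluster logs
    a = compute_a(logs)       # assumed expensive statistics
    b = compute_b(logs)
    # locals() gives a dict *referencing* the current local variables,
    # so take a shallow copy to freeze this iteration's bindings.
    saved[model] = dict(locals())

# --- Cell 2: cheap printing/plotting, re-run as often as you like ---
for model in ["m1", "m2"]:
    locals().update(saved[model])  # restores a, b (and everything else)
    exec("")  # so the interpreter can't assume it knows all the locals
    print(a, b, sep=" | ")         # tweak formatting without recomputing
```

This works because at a notebook's top level locals() is the same dict as globals(); inside a function, writes to locals() are not guaranteed to stick. And as the video warns, dict(locals()) snapshots every name in scope, including large intermediate objects, which is exactly where the memory problems come from.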
[ { "start": 0, "end": 5.5600000000000005, "text": " Hi there! So today I just wanted to bring you a quick coding tip that I often" }, { "start": 5.5600000000000005, "end": 11.76, "text": " encounter in my daily machine learning research or life that might not be super" }, { "start": 11.76, "end": 17.52, "text": " common in let's say traditional software engineering or elsewhere. So often I have" }, { "start": 17.52, "end": 21.16, "text": " a bunch of, let's say I have a bunch of models right, and I use these IPython" }, { "start": 21.16, "end": 27, "text": " notebooks or collabs to analyze my data, plot things and so on. So I have a bunch" }, { "start": 27, "end": 33.84, "text": " of models, let's say they're called M1 and M2, and I'll usually run my big jobs" }, { "start": 33.84, "end": 38.36, "text": " on a cluster. So let's say I have run some jobs, one for each model, and have" }, { "start": 38.36, "end": 45.6, "text": " some logs that I want to analyze. So I'll go through my models and here I" }, { "start": 45.6, "end": 52.400000000000006, "text": " load a bunch of logs and I'll also compute some stuff, some statistics and" }, { "start": 52.4, "end": 59.12, "text": " some just things, right, that I want to have computed. And maybe these things" }, { "start": 59.12, "end": 68.08, "text": " are called A and let's... this and B, right. So now I've computed these things now, I" }, { "start": 68.08, "end": 73.92, "text": " want to analyze them. And let's shortcut and say printing is plotting. So" }, { "start": 73.92, "end": 77.44, "text": " think of these, this might be numbers right, and now I want to plot them here" }, { "start": 77.44, "end": 80.92, "text": " out, let's just print them, which I do this. Now every time I want to, you know," }, { "start": 80.92, "end": 88.8, "text": " change something here in my printing, maybe I want the separator to be that," }, { "start": 88.8, "end": 93.04, "text": " I'll have to load all the logs and compute all the stuff right each time I" }, { "start": 93.04, "end": 97.72, "text": " run this cell, which is not super cool, right. So I usually would like to factor" }, { "start": 97.72, "end": 103.64, "text": " out the plotting and stuff like this from the computation. So I could" }, { "start": 103.64, "end": 108.48, "text": " extract one of them into like a function, but then the point of these notebooks is" }, { "start": 108.48, "end": 111.96000000000001, "text": " that I can run each of the cells and they'll run right there, right. So" }, { "start": 111.96000000000001, "end": 116.72, "text": " functions aren't really cool in these notebooks. So what I'll usually end up" }, { "start": 116.72, "end": 123.16, "text": " doing is you'll have some second loop in here, right, and let's see, you'll have" }, { "start": 123.16, "end": 129.48000000000002, "text": " some data dict up here and you'll... here at the end you'll say something like" }, { "start": 129.48, "end": 139.67999999999998, "text": " data for this model is A and B or something like this, right, and then down" }, { "start": 139.67999999999998, "end": 149.79999999999998, "text": " here the first thing I do is I'll get my data and then I'll unpack again. So DA," }, { "start": 149.79999999999998, "end": 157.64, "text": " DB, either I'll unpack or I'll just address them in dictionary notation like" }, { "start": 157.64, "end": 161.6, "text": " this and then I can do my plotting, right. 
Some people use tuples here, right, they" }, { "start": 161.6, "end": 166.95999999999998, "text": " could just go A and B, but then you'll have to do this unpacking. The problem is" }, { "start": 166.95999999999998, "end": 172.33999999999997, "text": " now if I want to add something here, right, I compute something new. I now need" }, { "start": 172.33999999999997, "end": 176.92, "text": " to add something here, right, I need to remember to store it in the data array" }, { "start": 176.92, "end": 181.11999999999998, "text": " and then I need to here remember to unpack it in the same order, right, and" }, { "start": 181.11999999999998, "end": 187.51999999999998, "text": " then I need to produce to put it in in the plotting, right. So this is very" }, { "start": 187.52, "end": 193.68, "text": " cumbersome. This line here and this line here, they're very... because you kind of" }, { "start": 193.68, "end": 197.24, "text": " duplicate your variable names all over the place just because you want to" }, { "start": 197.24, "end": 201.44, "text": " compute them here and use them here. A software engineer would usually tell you" }, { "start": 201.44, "end": 211.04000000000002, "text": " let's do something like a data class or in its most simplest form is say of a" }, { "start": 211.04, "end": 218, "text": " class and you know there are multiple ways of achieving this but let's just do" }, { "start": 218, "end": 224.6, "text": " A and B here. This is more probably the most verbose, you can also do name tuples," }, { "start": 224.6, "end": 230.56, "text": " adder classes, data classes and so on but ultimately you produce a class like this" }, { "start": 230.56, "end": 237.12, "text": " and then here you say this is a data class A and B and then here down here" }, { "start": 237.12, "end": 244.28, "text": " you can at least address them like this, right, and you don't have to do the" }, { "start": 244.28, "end": 252.56, "text": " dictionary notation or remember the order. But now again if you" }, { "start": 252.56, "end": 257.2, "text": " want to add the C, not only do you have to add it here but you have to add" }, { "start": 257.2, "end": 266.64, "text": " it up here and you have to add it here and beware if there's a doc string and" }, { "start": 266.64, "end": 272.88, "text": " then you can use it here. This is just too cumbersome. So here is a" }, { "start": 272.88, "end": 277.8, "text": " trick and please only use these in like notebooks like this. This will lead to so" }, { "start": 277.8, "end": 281.71999999999997, "text": " much memory problems and everything and if you work with a software engineer you" }, { "start": 281.71999999999997, "end": 289.15999999999997, "text": " might have to get them chocolate for doing this. But alright, so what you" }, { "start": 289.15999999999997, "end": 295.88, "text": " can do is you can save your local variables in this dictionary." }, { "start": 295.88, "end": 302.32, "text": " So this is a built-in that saves the local variables, sorry that gives you a" }, { "start": 302.32, "end": 308.8, "text": " dict of the local variables. The local variables here are A, B, C, M and all a bunch" }, { "start": 308.8, "end": 312.8, "text": " of other things, right. It's not clear in these IPython notebooks they have a" }, { "start": 312.8, "end": 316.8, "text": " bunch of stuff around them such that the meaning is slightly different depending" }, { "start": 316.8, "end": 322.6, "text": " where locals is but in essence you'll get the A, B and C what you want. 
Now the" }, { "start": 322.6, "end": 329.16, "text": " locals dict refers in order for this to go through the locals dict refers of" }, { "start": 329.16, "end": 333.72, "text": " course to the same objects not newly constructed so we're safe here to make a" }, { "start": 333.72, "end": 341.72, "text": " copy of that right like this and then down here we simply retrieve these local" }, { "start": 341.72, "end": 349.56, "text": " variables and update the current local variables using the local variables we" }, { "start": 349.56, "end": 359.16, "text": " stored and voila we can use A, B and C without ever having defined them, right." }, { "start": 359.16, "end": 366, "text": " If this doesn't work that means that the locals here is a copy basically" }, { "start": 366, "end": 371.48, "text": " and this is a Python optimization and I read I don't have to use it because in" }, { "start": 371.48, "end": 378.66, "text": " these IPython notebooks it appears to work but you might want to have an empty" }, { "start": 378.66, "end": 385.36, "text": " exec statement here such that the Python interpreter can never be sure which" }, { "start": 385.36, "end": 392.56, "text": " variables are created in here and therefore can't optimize away. So that" }, { "start": 392.56, "end": 399.6, "text": " that is a horrible horrible trick. Again don't build anything serious upon this" }, { "start": 399.6, "end": 406.36, "text": " but it is very duper super easy you can just add any variable here and then use" }, { "start": 406.36, "end": 417.56, "text": " it down here so easy right and so wrong. Alright this was it I hope you enjoyed" }, { "start": 417.56, "end": 422.28000000000003, "text": " this I want to bring these kind of tips every now and then as a as an intermix" }, { "start": 422.28, "end": 437.71999999999997, "text": " into the research papers I hope you enjoyed this bye bye" } ]
vfBAUYpMCTU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Unsupervised Brain Models - How does Deep Learning inform Neuroscience? (w/ Patrick Mineault)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "xcorr", "patrick mineault", "unsupervised models", "neuroscience", "neuroscience and deep learning", "deep learning brain", "machine learning brain", "brain models", "how does the brain work", "deep learning and neuroscience", "self-supervised models", "representation learning", "does the brain do representation learning", "does the brain work like a deep neural network", "neurips" ]
#deeplearning #brain #neuroscience Originally, Deep Learning sprang into existence inspired by how the brain processes information, but the two fields have diverged ever since. However, given that deep models can solve many perception tasks with remarkable accuracy, is it possible that we might be able to learn something about how the brain works by inspecting our models? I speak to Patrick Mineault about his blog post "2021 in review: unsupervised brain models" and we explore why neuroscientists are taking interest in unsupervised and self-supervised deep neural networks in order to explain how the brain works. We discuss a series of influential papers that have appeared last year, and we go into the more general questions of connecting neuroscience and machine learning. OUTLINE: 0:00 - Intro & Overview 6:35 - Start of Interview 10:30 - Visual processing in the brain 12:50 - How does deep learning inform neuroscience? 21:15 - Unsupervised training explains the ventral stream 30:50 - Predicting own motion parameters explains the dorsal stream 42:20 - Why are there two different visual streams? 49:45 - Concept cells and representation learning 56:20 - Challenging the manifold theory 1:08:30 - What are current questions in the field? 1:13:40 - Should the brain inform deep learning? 1:18:50 - Neuromatch Academy and other endeavours Blog Post: https://xcorr.net/2021/12/31/2021-in-review-unsupervised-brain-models/ Patrick's Blog: https://xcorr.net/ Twitter: https://twitter.com/patrickmineault Neuromatch Academy: https://academy.neuromatch.io/ Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there! Today I'm interviewing Patrick Mineault, who has a PhD from McGill and did a postdoc at UCLA. He's an independent scientist and a neural data scientist. His interests are neuroscience and the connection to machine learning. He has an awesome blog called xcorr, which I guess is pronounced cross correlation, but who knows. So please check out Patrick's blog. He also worked at Google for a while, seeing how people interact with web pages, and was a brain computer interface engineer at Facebook Reality Labs. He also has launched the NeuroMatch Academy, which is sort of an intro, an academy where you learn in a summer school about computational neuroscience. This runs every year and you can take part if you want. We're going to touch on that a little bit in the interview. I just wanted to take it away beforehand. So I'm going to give a little introduction about what we'll talk about and then we'll jump into the interview. We're going to talk mainly about this blog post right here, 2021 in review: unsupervised brain models. The main focus here is on unsupervised models and what they have to do with the brain. So a big question in neuroscience is how does the brain work? I guess it's the main question in neuroscience. And so people are developing hypotheses of how the brain works. And deep learning turns out to be quite an interesting tool for neuroscientists, because in deep learning, we get some inspiration from neuroscience, but essentially we build a model that end to end can learn some task, to perform some tasks. So this would be this one right here. Now the question is: is what deep models do the same or different than what brains do, given that they solve the same task? Like, let's say both recognize objects on images. Do they do the same thing or do they do something completely different? So neuroscientists, they wonder, you know, how does the brain learn stuff? Is it the same as a neural network? Now, also during the interview, I have to stop saying neural network because it's ambiguous in this context. So does a deep network, a computer, a human-made deep network, does it account for neural activity, which means that are the signals in the deep network the same or related to the signals that we see in the brain? And this turns out to be a very important tool for neuroscientists. What they want to see is, let's say, the intermediate representations in the neural network. Like you have some kind of picture, it goes into a neural network, there's layer, layer, layer, layer, and then there's a classification head. The classification head might not be that interesting, but what is interesting is like some intermediate representation here. If we figure out that that explains, which means we can correlate it with things that are in the brain. And I'm going to draw like a very bad brain right here. If we can correlate this with things that are found in the brain signals, like from fMRI, from electrodes that we put into people's heads, then that is an indication that what these deep networks are doing has something to do with it, that there is an effect that is similar, and that could help us understand the brain. So the holy grail in neuroscience would be something that can perform the same task as humans, that does account for neural activity, that is biologically plausible. As you might know, there is still a debate of whether something like backprop is implementable in the brain in one way or another, or if we need an entirely different mechanism in the brain.
And lastly, something that could conceivably also have evolved, and maybe we'd even have some evidence of how it evolved over time. So we're going to talk about these models right here, specifically self supervised models. Self supervised models (here is a slide by Yann LeCun) are models that don't need labels to train. And what you usually do is you block out part of something you know, and then try to predict that from the parts that you do know. For example, if it is an image, again, you'd block out some part of the image and then from the rest of the image, you'd try to predict that part. That is a self supervised method. There are also contrastive methods, which are self supervised, which means that you'd have an image and you make two different views of it, for example, by cropping the image in different places. And then you try to train a model that can tell that these two things actually belong together, come from the same image, and that they are apart from (I'm going to draw inverted arrows right here) a third image that has nothing to do with this image. These are contrastive methods. It turns out that if we build models that learn in self supervised and contrastive ways, and especially in multimodal ways, we end up with models that can explain brain activity fairly well. So we're going to jump into the papers right here in the interview pretty quickly. But if you keep watching the interview, Patrick also goes into more high level explanations of neuroscience in general. It is a bit my fault that I immediately was like, so what does this paper say? But I promise you, if you keep listening throughout the interview, there are great insights into the entire field of neuroscience, into what the open questions are, and into where people can go to learn about this. And if you even want to research this: if you're in deep learning right now and you're interested in neuroscience, Patrick says it's a wide open field, there are lots of papers to be published, and the conferences, especially something like NeurIPS, are pretty receptive to papers that connect deep learning with neuroscience, or in general try to explain neuroscience things. So as I said, we're going to jump into the interview now. I don't want to spend too much more time, because we're very detailed in the interview. Check out Patrick's blog and all his other endeavors. And I wish you a lot of fun. Bye. Hello, everyone. Today here with me I have Patrick Mineault, who is a neuroscientist slash blogger slash anything else that you might imagine in between deep learning and the human brain. Welcome, Patrick, to the channel for this bit of a special episode, I guess. Thanks. It's great to be here. I got to know of you through your article 2021 in review: unsupervised brain models. You wrote down what happened in the last year in terms of the connection of deep learning and how to, let's say, how to explain the brain. What is your background in this area? How did you come to be in this in-between space between neuroscience and AI? Yeah, absolutely. So I actually originally studied physics. And, you know, after my undergrad, I figured, you know, maybe I don't want to do string theory for the rest of my life. Like, it sounds like, to ask interesting questions there, you need to really be pretty advanced.
But I think in neuroscience, there are some questions that are pretty ripe for the picking and that are obvious for even somebody that's pretty far outside the field. So for instance, what is sleep? What does it do? That's like a pretty easy question. That's very hard to answer. So I went to do a PhD in computational neuroscience at McGill. And one of the fields of my study was really that intersection of neuroscience and artificial intelligence. Now, when I started my PhD, which was in 2008, deep learning really wasn't a thing, I guess. Like, some of the original papers by Bengio and Geoffrey Hinton had been, they were out. But you know, the big event, I think, in presenting deep learning to the world and saying like, this is really a big deal, was ImageNet in 2012, right? As you know. So that was during my PhD. So at the very start of my PhD presentation, my PhD defense, I would say something like, look, you know, you have neurons in inferotemporal cortex, which is one part of the visual stream, and they're able to do visual recognition. I would present examples of these neurons. And they're invariant to things like lighting, rotation, scale, etc. We don't know how to make a computer that does that. But if I gave this presentation just, you know, six months or a year later, I would never have been able to say that, because people would have been like, you know, even AlexNet would be able to do that. So that's a little bit my story, my introduction to neuro AI. So I was there, like, during that transition towards deep learning. And in fact, at the end of my PhD, I was working on deep learning to try and explain some of the brain areas that I cared about. Now these brain areas are the areas of the dorsal stream. And those are like brain areas that really care about motion. And so I was poking around with, I'm going to date myself, you know, I was poking around in Theano back in the day to make this happen, which I guess has fallen by the wayside. But yes, I've been at this intersection for quite a while now. Awesome. Well, it seems like it was an exciting time. I do remember Theano as well. So I'm definitely dated the same. So you, the dorsal stream, just to make clear, that's part of sort of the visual stream into the brain. Is that correct? Or? Yeah, yeah. So maybe I can give you like the first minute of my thesis defense. I've got it engraved in my brain. You just, you defended not too long ago, right? True. Exactly. So I'm sure you've forgotten it already. Oh, yeah. Yeah, you just like put it in a box in your brain and just, it's gone. Okay. So the visual information falls on the retina. And it's originally encoded in these very simple formats in terms of differences in luminance between like a center and a surround, or differences in time. So you can think of it as a camera with like a little bit of linear filtering. And it then gets forwarded to different areas of the brain, first to the lateral geniculate nucleus, and then to the back of the brain, the occipital cortex, which is called the primary visual cortex. So that's a huge area, a huge chunk of the brain. And you have tons of neurons which are selective for vision there. And from there, the visual processing splits into two different substreams. There's the ventral visual stream, which is the object stream. So if you think like, what does a, you know, ResNet 50 that's trained on ImageNet do?
Maybe it's something similar; we can get into that later. And then there's another set of areas, which is the dorsal stream. Again, organized in a hierarchical fashion. Again, you have, for instance, increases in the size of receptive fields, you have increases in the complexity of things that these neurons respond to. But this time, they don't care about form, they don't care about texture; what they really care about is motion. So you know, you're going to poke at a neuron in, let's say, the middle temporal area, which is part of the dorsal stream, and 80 or 90% of the neurons will respond when you show them the right moving stimulus. Yeah, which is remarkable. So in your article, you go a little bit into both of these streams. And I think one of the main focuses that you care about is: are or are not the deep learning networks we use today similar to what the brain does? Because sure, we've built these systems that can do some visual tasks. But does that bring us closer to understanding how the brain does certain things? And the answer is, right, the answer is a little bit yes and a little bit no; there are still questions. But you point out a bunch of areas where progress has been made in correlating, let's say, neural activities in deep neural networks with neural activities in brains. So yeah, I think that it might be good to just back up a little bit and talk about, you know, that world at large, so that, you know, people who are just tuning in and haven't read the article yet will understand what we're discussing. I think that originally, okay, so I was talking about ImageNet 2012, which was the big milestone in creating good deep neural networks that could solve the kinds of tasks that humans can solve. Now there was a lot of background work that came into that. One is, you know, the creation of convolutional neural networks and the work from Yann LeCun, which was ultimately, you know, inspired by the neocognitron, which is Fukushima's, like around the early 80s. But ultimately, that work was motivated a lot by some early work in vision and in vision neuroscience. So David Hubel and Torsten Wiesel in the 50s and 60s looked at different kinds of neurons in the primary visual cortex, and were able to find that you have this hierarchy of selectivity, right? So the canonical thing that they found is they found cells which were tuned for orientation, right? So you know, you present an edge like this or a line like this, and the cell responds. But if the line, instead of being white, is black, then it doesn't respond. So those are called the simple cells. And then they found another subset of cells, which are called the complex cells. And those are selective for the same thing, but it wouldn't matter the precise location of the line in question, and it wouldn't matter the contrast. So it could be white to black, or it could be black to white; it wouldn't matter. And so their hunch was that, okay, well, you have this transformation that happens. First of all, you have a selectivity operation, which creates that simple cell. So basically just a threshold. And that's enough to give you a selectivity, or it could be a ReLU if you, you know, smooth it out. And then there's a pooling operation that happens.
So you pool from different simple cells that have the same orientation selectivity, but different contrast sensitivity. And that creates the complex cell. And you can view that as a subsampling operation or downsampling operation, as you would have in a deep neural net. So there's this kind of long line of like, oh, there's the inspiration from the brain, we're going to make some models, we're going to show that they're actually good enough to solve tasks that humans can solve. But the question is, okay, are these really like human brains? And that's where similar work from Jim DiCarlo's lab and Niko Kriegeskorte in 2014 really showed that there are some very tantalizing hints that this is indeed the case, you know, that these networks that we've trained on ImageNet, they look a lot like the brain in really interesting ways. And one of the big ways that they're similar is that if you look at, you know, let's say 10 different networks, some of them turned out to be a little bit better at solving ImageNet, or a little bit worse. And then you correlate that with how well you can align these networks to the brain. It turns out that the ones which perform better on ImageNet tend to also perform better on explaining the brain, which is like a very strange coincidence, because think about how completely differently these two things have been created. So that was one of the big hints. And I think another big hint is the work from Chris Olah and other people at OpenAI who looked inside of these deep neural networks and found that, you know, the kinds of selectivity that you see inside the cells, they're very, very similar to what a neurophysiologist would describe in areas like V1, V2, V4, and temporal cortex. So the combination of the quantitative and qualitative tells us like, hey, maybe these are kind of like little brains, of one very, very specific part of the brain. I'd be in a lot of trouble if I said that statement. Yes, exactly, exactly. So what do people mean when they say something like explains the brain or something aligns with brain activity? Like, what is behind that? Yeah, yeah, yeah. So we can talk about the high level stuff, like, sure, just the idea of: what do we measure? Like, you know, is it a number? Is it a correlation? Or am I training a regression model from one signal to the other signal? Like, how can I make the statement that this neural network explains some function in the brain? So in the early work from 2014, we see two different approaches being used. And every other approach that's been tried is kind of a derivative of these two basic concepts. So one approach is a regression-based approach. So, very simply, let's say you train a ResNet 50 on ImageNet. You chop it off at some layer, say layer four after the first downsampling or whatever, and then you measure the output of that deep neural network with respect to some stimulus ensemble. So that gives you a big matrix, big X, which has a bunch of rows for the different examples and a bunch of columns for the different features. And then you just regress that against neural data that's recorded with the same images. So it's just a regression. So you can add like a bunch of different spices into your basic recipe.
So you can add some sparseness priors, you can try to, well, usually you'll use a ridge regression rather than a straight regression, because the regular regression will usually crash and burn; neural data is very noisy. That's something that people don't often appreciate. And so it's a regression, let's just put it that way. Yeah. Now that will be, for example, fMRI data, when we talk about neural data. It can be fMRI data, it can be MEG data, so magnetoencephalography; I think we just say MEG. Or it could be single neuron recordings or array recordings, so those are taken inside the brain. Or it might be ECoG, which is just on the surface of the brain. So there are different kinds of recordings. Now, it happens that fMRI and MEG are much more popular for humans, because they're non-invasive. But every once in a while, people get to record inside of the brains of humans that have some sort of need for brain surgery, usually it's epilepsy. And those data are very precious. Now speaking of that: so you go through different papers in your article, so maybe we can follow that structure a little bit. The first one is a work that shows that the ventral stream might be explainable by unsupervised systems; your article also goes into this, it's called unsupervised brain models. So the point that you make, or your investigation, is into unsupervised systems: how good or how close to what the brain does is what comes from self supervised and unsupervised systems. So the first thing you go into is the ventral stream, that is, you said, the sort of object stream. And this paper looks at single neuron activations, right? And they find that the self supervised systems are equally or even better able to explain the brain data than supervised systems, let's say in an image recognition task. Yeah, so that's super exciting. And the reason is that I think that everybody got very excited when they saw that these networks which were trained for ImageNet could be aligned to the ventral stream, to that object recognition stream, because now it's something that, you know, you have this in silico thing, and it kind of looks like it does the same thing as the brain. And so it's kind of a model of the brain. Super exciting, you can do a lot of things with it. But there are different ways in which something can be a model of the brain. And some of these are a little bit more useful than others. And one of the big flaws, I think, for supervised learning is that it's not really a model of how the brain would learn a task. Because, you know, I'm not walking around as a baby and, like, you know, my parent just tells me like, dog, dog, dog, dog, dog, cat, dog, dog, just like constantly for years and years. So you know, we don't really use supervised learning for learning these kinds of things. So that's a big flaw: if we want to move forward with models which are biologically plausible instantiations of creating these models, then we have to move away from supervised learning. So people generally like unsupervised learning and self supervised learning better for that reason, because you don't have to, you know, come up with this weird concept that you have dog, dog, dog, cat.
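To make the regression recipe above concrete, here is a hedged sketch, not code from any of the papers discussed. The arrays images (a stimulus ensemble) and neural (a stimuli-by-neurons response matrix) are hypothetical placeholders, and the layer cut and ridge penalties are arbitrary choices.

```python
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Take an ImageNet-trained ResNet 50 and "chop it off" at some layer.
resnet = models.resnet50(pretrained=True).eval()
trunk = torch.nn.Sequential(*list(resnet.children())[:6])  # up to layer2

with torch.no_grad():
    feats = trunk(images)                  # (n_stimuli, C, H, W)
X = feats.flatten(start_dim=1).numpy()     # big matrix X: stimuli x features

X_tr, X_te, y_tr, y_te = train_test_split(X, neural, test_size=0.2)
# Ridge rather than plain least squares: neural data is very noisy.
reg = RidgeCV(alphas=[0.1, 1.0, 10.0, 100.0]).fit(X_tr, y_tr)
print("held-out alignment score:", reg.score(X_te, y_te))
```

The held-out score is the kind of alignment number being talked about here; in practice people add cross-validation, noise ceilings, and other spices on top of this basic recipe.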
But you do have to do the math to make sure that it actually does work out in practice, and that, you know, the quantity of examples that you feed into the model is similar to the quantity of examples that you would feed into a human, for instance. I think in your conclusion, you have a little bit of an example that the language models that we train, such as GPT-3, would be equivalent to like years and years and years of human, just constant talking and talking and talking, and babies are able to do it by age, what, four or so, or two. Exactly. So I think that there's still a big gap there. I mean, we're off, I think I calculated, by four orders of magnitude in terms of the efficiency. But, you know, I'm not trying to score everybody on the same kind of curve. I mean, GPT-3 is not made as a model of the brain; it's made as a language model, to solve all these problems in zero shot settings, and it works very well for its purposes. But definitely, if we want to actually try to explain the brain, we'll need to get to that. So this is also a bit special, because here we talk about the ventral stream, you said that's the object stream, and the fact that self supervised systems are equal or better at explaining that than supervised systems, which presumably are trained exactly on the task that such an object stream would be sensitive to, right? That is also one special thing. So I totally agree. I mean, that's super cool that this is the case, that you have this thing where you don't tell it to learn objects, and yet it learns something that can do object recognition, and it learns meaningful things like that. But I think that there are a couple of hidden assumptions there that make this not nearly as mysterious as we would like it to be. So one is that, you know, your model of ImageNet should not be that you take like a nice Canon DSLR and, you know, you put it at a random point in space, and then you point it somewhere random, and then you hit the button, right? So if we look at both of our faces right now, we're in the center of the screen. It turns out that, you know, we're smart like that, that we place our faces generally in the center of the screen when we take photos. So the things that we try to look at in ImageNet, you know, the subject of the category will by and large be in the center. And, you know, the position of the camera, the things that we tend to measure, these all come into why the model learns the thing that it learns. So we can't really say, oh, you know, we're not really feeding it any structural priors. We definitely do. We definitely do, just not in the conventional way, and not in a way that's very easy to quantify either. But some people are definitely trying to solve these problems. So, for instance, there's a lot of work on trying to fit the same kinds of unsupervised learning models, but with streams of data that look more like what a baby would see in their early years, where the camera is not always pointed at the right things, because babies tend to... I see. Yeah, do a lot of gesturing. But it's also there, especially because the baby with time is able to move its head, right.
And therefore, it's also not the same as just placing a camera somewhere, because whatever captures attention will be actively looked at more. So it's definitely like, I think there's a long way to go in any of these things. Oh, yeah. Oh, yeah, absolutely. I think. So to close out just that one paper, because we've been on it for like 15 minutes: super cool that you can train a model in an unsupervised or self supervised manner, and it turns out to be just as good at explaining, you know, V1, V4, and IT, all these different sub-areas of the ventral stream. And then there's a kind of hierarchy that happens between the different models. So, you know, some models are clearly doing better than others. So typically in these papers, SimCLR is usually the one that performs the best, for reasons that we don't totally understand. Local aggregation also tends to do better. So that's interesting. Like, what is it about what's inside of these models that allows them to be more similar to the brain? Now, of course, in the end, you know, you end up with like tiny, tiny error bars, and it can be pretty difficult to actually differentiate between these different things. So, you know, you can't read too much into it. But definitely the best models are like the new kind of generation of self supervised models. And then, so the next paper deals with the other stream, with the dorsal stream. And there, yes, that is actually your own paper, right? Oh, yeah. So I'll just go very rapidly: it's true that actually the second one is the ventral stream again. Oh, sorry. And so that's from Talia Konkle. And very, very consistent data. So they use fMRI rather than single neuron data. But I mean, these two studies were done independently, about a kilometer away from each other, one team from Harvard and one team from MIT, and they found exactly the same results. So maybe something's in the water in Cambridge, Massachusetts. But otherwise, I mean, it's a very robust finding, basically. But yeah, we can definitely talk about the dorsal stream. So, like I said, I've been interested in this problem for a very long time. And I had a little bit of time during the last lockdown of the pandemic to relook at this problem. And so we sat down and we said, you know, I think the time is right to really look at all this dorsal stream data and see if we can get one really good model of all these different areas. So the first thing that I did, actually, I was going about this very naively, but I just looked into the torchvision models, you know, they have some model database, and just downloaded all the models that were trained on video recognition. So all the models that were trained on, I'm drawing a blank here, Kinetics-400, which is a task where you have to look at a video of somebody juggling and say, oh, it's juggling rather than unicycling rather than soccer or whatever. And so the special thing about these models is that they look at 3D data; by 3D, I mean spatiotemporal, right, in time. And so that means that generally they're trained, the convolutional neural nets, with 3D filters. So, you know, the front end of the model is going to be a 3D convolution in space and time.
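As a small illustration of that "look into the torchvision models" step, here is a hedged sketch of pulling a Kinetics-400 pretrained 3D ResNet and poking at its spatiotemporal front end; the clip shape is just a standard dummy input, not anything from the project.

```python
import torch
import torchvision.models.video as video_models

# A Kinetics-400 pretrained 3D ResNet from the torchvision model zoo.
model = video_models.r3d_18(pretrained=True).eval()
print(model.stem)  # the front end: a Conv3d over (time, height, width)

clip = torch.randn(1, 3, 16, 112, 112)  # (batch, channels, frames, H, W)
with torch.no_grad():
    features = model.stem(clip)          # spatiotemporal feature maps
print(features.shape)
```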
So I looked at these models, and I did the kinds of visualization tricks that Chris Olah and gang do at OpenAI to look inside, because I was curious, you know, do they learn motion? Do they align with the brain? And I found that they were actually really terrible, which surprised me, because if you look into the methods of these papers, it's like: we trained these models for 24 hours on a supercomputer with, you know, 16 GPUs in parallel, and went through, you know, a million videos, and this is the model that we obtained. And they're very good at doing the tests that they're doing. And yet, the kinds of generic features that come out of the models are really terrible at aligning with the brain. So that was kind of the hunch that we saw there. I should say that one of the early points of people who are dubious about the finding that the ventral stream aligns with ImageNet-trained ResNets and AlexNets and VGG nets is that people say, well, you're just training the model to do a task, you know, any sort of task will work. It doesn't matter whether it's object recognition or whatever, it just turns out that this is the task that you had data on. But this is a very good counterexample of that, because you train a model on a task which involves, you know, 3D data, video, spatiotemporal data. And yet, the model that you train is really good for that one task, but is really terrible at this task of aligning with the brain. So that motivated us to look more deeply into, you know, if we don't take pre-trained models to solve this problem, what could we do? And we know that a lot of the dorsal visual stream really cares about navigation. So if you look at an area like MST... have you ever had vertigo? Sure. Yeah. So vertigo is, sorry, this is like a weird non sequitur, but vertigo is kind of a funny thing, right? Because it's an inner ear problem, right? So you have your vestibular system, and it basically tells you there's acceleration in ways that there shouldn't be acceleration. And that gives you an impression of being dizzy. But it also gives you these weird visual effects. Yeah. Right? Which is strange. Or, you know, if you drink a little too much, you might have that same kind of feeling. So there's an area in the brain, which is called MST, which has these neurons which receive both visual input and vestibular input. And the way that they receive visual input is they have a lot of selectivity for things like rotation and expansion and wide field translation. And so we think that they're really involved in navigation. So if you're going forward in a line, you have these neurons which receive both the vestibular input, so they know how you're accelerating and where gravity is, and they receive all this wide field optic flow, which tells you where you're heading. So we said, why don't we train a deep neural network to solve a navigation task, so that the network can orient itself in space, essentially. So I used an environment for drone simulations called AirSim. And it's really fun. It's built on Unreal Engine, and you can basically fly a drone in these suburban environments and back out the sequences of videos.
And then you can train a convolutional neural net, a 3D ResNet, to solve the problem of figuring out, from a little sequence of movement, what is the trajectory, basically, that's going on: like, where are you heading? Are you rotating? Are you going forward? Etc., etc. And so if you train a network on that, it turns out that if you visualize the cells inside of the trained network, they really, really look like what you would see in the visual cortex. So as a neurophysiologist, or as an amateur neurophysiologist, or a person that's been in the vicinity of neurophysiologists, I was really stoked to see this. So you see these cells that are selective for translation, but they don't care about the pattern that underlies the translation. And in particular, you see these cells, like the one that you're visualizing here, that likes things like spirals, in some of the higher level layers of this network, which was super exciting, because those look a lot like what you would see in a... So basically, the networks that try to just predict anything from a video that contains motion... turns out these neural net, sorry, the deep networks, I have to stop saying neural networks here because it's ambiguous. Ah, yes, yes, yes. The deep networks that train on any kind of video data, they're not super well aligned with the brain. However, as soon as you go maybe to like some sort of an ego perspective, right, and especially you predict your own parameters of motion, so from the visuals you're trying to predict, okay, I went to the left, I went to the right, I turned around, from the visual information, that turns out to align very well with the brain data. Maybe just an esoteric question, but does that say anything about the need for AI to be embodied? Oh, I love this question. Yes, 100%. Yes, we should completely embody AI. Yeah. So I think that one big question that came up during the review is that, you know, we claimed originally this was unsupervised or self supervised in the abstract. And then the reviewers came back and said, well, it's not really unsupervised or self supervised, it's a supervised network, because, you know, you know what the answer is, you're just training in a supervised fashion. My feeling is that it is self supervised in the sense of when you embody this in an agent. So when I'm a baby, let's imagine that I'm a baby, and I'm walking around the world, I have some control over where I'm heading. Yeah, right. So I can say like, I'm going to turn this way, I'm going to turn that way, I'm going to move forward, I'm going to go get that cookie, I'm going to look at my parent, and so forth. So I am an agent. Yeah. So that means that I control the motion that comes into my eyes. Yeah. Because the vast majority of motion that we see in the world comes from our self motion. And so I can correlate my motor plans with what I see in the world. And that means that it's a much easier kind of problem to correlate these two things, than to say: here's found data, which is the case of ImageNet, and figure out something to model with this. Yeah, exactly. Right. Yes. You also have this diagram here from Yann LeCun, talking about self supervised learning. And it seems very much that, I agree, the line is like gray in some places, but it seems like if you are an embodied agent, you always have those motion parameters ready, right.
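For readers who want the self-motion idea in code, here is a minimal, hypothetical sketch of the objective just described: regress the agent's own motion parameters from short clips. The dataloader (say, built from AirSim recordings), the six-dimensional ego-motion target, and the backbone are all assumptions for illustration, not the paper's actual setup.

```python
import torch
import torch.nn as nn
import torchvision.models.video as video_models

# 3D ResNet backbone with a regression head for ego-motion parameters.
net = video_models.r3d_18(pretrained=False)
net.fc = nn.Linear(net.fc.in_features, 6)  # e.g. 3 translation + 3 rotation

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for clips, motion in dataloader:   # hypothetical (video, ego-motion) loader
    pred = net(clips)              # clips: (batch, 3, frames, H, W)
    loss = loss_fn(pred, motion)   # motion: (batch, 6) self-motion labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```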
So it's much more like: I am going to darken out part of what I already know and try to predict that from it. It seems it falls a lot into this diagram right here. Yeah, absolutely. So I think it looks more like the bottom part of this diagram that you see there, where you have these two things which are happening in the present, but one part is occluded and the other part is visible. So you're doing multimodal masking, in other words, right. So you have the vision, but now you're trying to predict the vestibular, or you have the vestibular, and you're trying to predict the vision. And so if you look at something like CLIP, which would be, I think, maybe the most popular model of the same multimodal kind, you can say, well, CLIP is a supervised model, because you're trying to predict, you know, in a way, you're trying to predict language from vision. But it's really this kind of masking. And I think it's a more general approach to solving this type of problem. So yeah, I agree with you, embodied agents, I'm 100% on board, they're definitely going to be awesome. And actually, questions about, you know, what do reinforcement learning agents learn? Do they learn like good self motion representations, for instance, when they have a visual task? I think those are super interesting: like, what do you need to put in there in order to get that effect? Yeah, that concept of "me" in AIs has not yet really come through so far. But I'm also looking forward to having more AIs that understand the concept of me, that are embodied, and that sort of have a self state and all of this kind of stuff. I think that will bring us forward. So here in the next paper, I mean, this paper you're describing, it tackles the question... it is actually, I just saw in my notes, that is again one of your papers. It is the question: why are there even two different of these visual streams in the brain? Like, it maybe makes sense if we sit down, but also, you find some actual empirical evidence for why it might be that we even have two streams, right? Yeah, yeah, absolutely. So I think that's an interesting question: like, why are there two things rather than one, or four things, or eight things, rather than an arbitrary number? So, Shahab, who's the first author on this paper, worked on looking at what it would take to recreate both ventral and dorsal stream. And I think the remarkable thing that he found is: if you train a network like a CPC network, so a contrastive predictive coding network, which is one form of self supervised learning in which you're trying to essentially discriminate between different futures, if you will: you look at the past, like a certain window in the past, and then you're trying to tell apart the actual future, embedded in some subspace, versus an alternative future which is dreamt up. So if you try to do that, then, you know, it's already been shown that you can find good representations in videos. But what's very interesting is that then you can ask the question of what happens as you add more and more substreams inside of this network. So if you remember the original AlexNet paper, it did have two streams. So if you remember, like, it's like a while ago, but what happened is that they had like tiny GPUs back in the day, right.
And so they couldn't fit the whole model on just one GPU. So what they decided, arbitrarily, is to split it up into two parts, especially at the early layers. And then basically, they were independent, but they could recommunicate a little bit later on, which was a pretty unique feature. Back then, people didn't really do that. But now it's quite common to, you know, chop up the channels in different ways and all sorts of things. But what they found is that there's this very interesting self organization principle, where all the filters on one GPU turned out to be color selective, and all the filters on the other GPU turned out to be black and white, which is, whoa, that's weird. Just by the fact of splitting up, because the two streams, they don't always communicate, right, they only communicate at very sparse intermediate points. So just a structural prior gives rise to something that very much looks like the brain, in the sense that one of the streams correlates well with the ventral brain stream and one correlates well with the dorsal brain stream. Yeah, so in that case, in the early AlexNet paper, actually, both of the types of filters are different subtypes that you see in V1, but they are, you know, functionally different, and they have different roles. But it was kind of an interesting proof of concept that if you just set a separation, an arbitrary separation down the middle, you don't say anything else, like, you don't say, you have to respond to color, you have to respond to this; you just set a separation, and it self organizes into something that's interesting. It's crazy. And yeah, it's weird. So they might have just lucked themselves into building a better model by having two small GPUs. Yeah, exactly. So, you know, they say that necessity is the mother of invention. So I think this is a particular case where, you know, the limitations at the time caused them to stumble onto something which I think is really deep and interesting, which is symmetry breaking. So I guess ultimately, you know, you can imagine that if you just set all the weight parameters to zero, and then you perform your gradient descent, these two filter sets will learn exactly the same thing, or they'll crash and burn. But by adding a little noise, right, by initializing your network, you're pushing the network very, very slightly out of equilibrium, and that's enough to self organize into this thing. And so Shahab found a very similar phenomenon in the context of these networks, which are trained in an unsupervised manner with CPC on videos. And so again, this is an instance of a network that has kind of a firewall in between the two sets of filters. And he was able to find that of these two sub-branches, one of them was dorsal-like and the other one was ventral-like, and he was able to correlate that with some data that we have in mouse, where there's tons and tons of data on the relative selectivity of these different things, and found some really nice correlations. So that means that all you would need, basically, is a little bit of a nudge, right. And so, which is this great idea, maybe you just initialize the network so that the two things are just very slightly asymmetric.
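As a toy illustration of the firewalled two-pathway idea with random initialization as the symmetry breaker, here is a hedged sketch; the sizes, depth, and single late merge point are arbitrary, and this is far simpler than the CPC setup in the paper.

```python
import torch
import torch.nn as nn

class TwoStream(nn.Module):
    def __init__(self):
        super().__init__()
        # Identical architectures; only the random initialization differs,
        # which is what breaks the symmetry between the two streams.
        self.stream_a = nn.Sequential(nn.Conv2d(3, 32, 3), nn.ReLU(),
                                      nn.Conv2d(32, 32, 3), nn.ReLU())
        self.stream_b = nn.Sequential(nn.Conv2d(3, 32, 3), nn.ReLU(),
                                      nn.Conv2d(32, 32, 3), nn.ReLU())
        self.head = nn.Linear(64, 10)  # the single sparse merge point

    def forward(self, x):
        a = self.stream_a(x).mean(dim=(2, 3))  # no cross-talk until the head
        b = self.stream_b(x).mean(dim=(2, 3))
        return self.head(torch.cat([a, b], dim=1))
```

Trained with identical zero initialization, the two branches would learn the same filters (or fail); the slight random asymmetry at initialization is enough for them to self organize into different roles.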
One thing I should say about that result is that the two branches don't always get the same label. If you train the network twice, one time it comes out dorsal–ventral and the other time ventral–dorsal, whereas in the brain, every time, it's the same: ventral is ventral and dorsal is dorsal, as far as we know. So there's some inbuilt asymmetry, but probably a very small one, because if you train with real data it will automatically self-organize — bloom — into this particular arrangement. Cool. So I'm very excited that the brain can organize itself into something useful just from this. This could be used elsewhere, I guess — I mean, in multi-head attention people already do multiple heads, right, and that's kind of similar in that they clearly separate different computations that cannot interconnect. There, too, the random initialization probably does some symmetry breaking, and then you find that the different heads respond to different things — people have investigated that; it's probably very much along the same lines. So I want to skip ahead a little bit here to the concept cells — is it this paper? Oh, that's this one as well. I think there's been a lot of movement in this subfield. And by the way, I want to tell your viewers, because I know a lot of them come from a machine learning background rather than a neuroscience background: it's hard to get into NeurIPS, but neuroscience is such a wide-open field. There are so many questions that, if you care a lot about representation learning, it's a pretty easy field to jump into and get a positive reception. So there are still a bunch of open questions — grab your nearest neuroscientist and go write a paper. I encourage everybody to do it. Yep — how to hack publications, there you go. Yeah, there you go. So, CLIP. CLIP is weird. If there's one thing I would say: when we saw the results of CLIP — both how good it is, and the inner visualizations that Chris Olah and gang, Chelsea Voss as well, worked on — I think we were all kind of surprised, because they look a lot like the kinds of concept cells that you see in the hippocampus, right. The very famous paper here is the one with the infamous Jennifer Aniston cell. I don't know if you know it — only in the context of your article. So it's one cell that responds both to pictures and to the name — various aspects of a person, not just one image. Exactly, exactly. So if I remember correctly, this paper — they had people with intractable epilepsy, so these are human patients, and they were doing probe recordings in the hippocampus to figure out the nature of their epilepsy and how it could be treated. And, you know, these patients spend a lot of time in the hospital just being bored, so sometimes they enroll in experiments, and these experiments tell us more about the human brain than is otherwise possible — we're very thankful for the people who do this. And in this particular instance, they presented different kinds of concepts and images.
And one of the cells they found had this amazing property: if you just showed the words "Jennifer Aniston", it would respond; if you showed the face of Jennifer Aniston, it would respond. They didn't do every kind of control, but I imagine that if they had played, say, the theme from Friends, it probably would have responded too, because it all keys into this general concept of Jennifer Aniston. Ever since then, people have been fascinated by this — although it's a much older idea, this notion that you have a cell in your hippocampus that responds to your grandmother; it's the grandmother-cell idea. But one thing that was very interesting when we first saw CLIP is that you have cells that respond both to text and to images. And in fact, you can do these new kinds of adversarial attacks in which you just write the wrong text on an object, and it fools the system into actually reading the text and mislabeling the image. So it sounds very hippocampus-like to me. And in this particular paper they actually looked at this problem and found that, out of all the different models they examined, CLIP could explain the most hippocampal data, which is super exciting. I'm sure people are really going to drill down further into this finding. Yeah — but it's CLIP specifically, because there are a lot of other unsupervised models, and somehow CLIP is the best, and we still don't understand why — the delta between it and the second-best model is huge. But why? I think no one knows right now. And actually, just the visual side of CLIP is also very good at explaining some other data. So it's very interesting to think about what happens in a multimodal fashion. Experimentalists and neurophysiologists really like to isolate one thing and look at one thing at a time, but now you're talking about something that can handle different modalities, and I think multimodal areas are going to be some of the next things that are really attacked by unsupervised and self-supervised methods. I mean, it's also a question — CLIP is huge, it was trained on a huge amount of data, and we don't exactly know what data went into it, right? There's a lot to untangle here. But the multimodality — I also feel that is a big part of what's going to bring us forward in AI. And probably also because the brain is always multimodal: maybe with computers you can get a purely unimodal stimulus, but growing up in nature you probably get zero stimuli that are purely unimodal, right? You're always in this mode of multimodality. Yeah. And one thing that's interesting, in particular for babies: if you've ever interacted with babies, they really like toys that make lots of noise, which drives parents crazy. But I think there's a reason for that. Why would a kid want a toy that makes a lot of noise, when clearly there's a lot of pressure to make toys as silent as possible because the parents are just trying to sleep? I think the kids prefer it because it's a multimodal stimulus.
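Circling back to CLIP for a moment: here is a minimal zero-shot scoring sketch showing the single shared text–image space behind the observations above. It assumes OpenAI's `clip` package is installed and that `face.jpg` is a local image; the file name and prompts are placeholders, not from any of the papers discussed.

```python
import torch
import clip
from PIL import Image

# minimal sketch, assuming OpenAI's `clip` package
# (pip install git+https://github.com/openai/CLIP.git)
# and a placeholder local image "face.jpg"
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("face.jpg")).unsqueeze(0).to(device)
prompts = ["a photo of Jennifer Aniston", "a photo of a dog", "a landscape"]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    # cosine similarity in the shared embedding space
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    sims = (image_feat @ text_feat.T).squeeze(0)

for p, s in zip(prompts, sims.tolist()):
    print(f"{s:.3f}  {p}")
```

Because a written name and a photo of the same concept land close together in this one space, a model like this behaves, superficially, like the multimodal concept cells described above — and pasting the "wrong" text into an image can drag its embedding toward that text, which is the adversarial attack mentioned earlier.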
Right — and the baby can do all sorts of causal inference about what happens when it bangs this thing against that thing. So, this is the last paper that I wanted to look at — maybe you have more. It challenges the manifold perspective of deep learning; you've described it a little bit in the paragraph, where you say it challenges the manifold perspective and favors the causal perspective. What is meant here, and what does this paper tell us? Oh yeah. So remember, we were discussing earlier the mechanics of how you compare a brain area and a deep neural network. A lot of deep learning methods are rotation-invariant. Take something like CLIP, for instance: you're learning this subspace — I guess something like 128-dimensional — from both the visual side and the text side, and you're trying to align the two in that space. If you multiply both by a rotation matrix, so the entire space gets rotated, it's the same network, right? It really doesn't matter whether it's rotated or not; what matters is just the locations on the manifold. And if you're aligning a brain area and a neural network with a regression, again, the rotation doesn't matter — any rotation of the feature space is just as good as any other. So that's the underlying assumption. And there's been a lot of work recently in neuroscience focusing on this idea that single neurons don't really matter; what matters is the latent subspace in which the neurons are responding. So say you have a population of 100,000 neurons — yes, it's 100,000 neurons, but if you present a bunch of stimuli and do an SVD on the matrix of responses, you may find that the latent subspace is actually just five-dimensional, or whatever, and the individual neurons are just random projections from that five-dimensional subspace; the high-dimensional space doesn't really matter. There's been a lot of work in neuroscience showing this is the case, especially in motor cortex: you have tons and tons of neurons engaged as you make a reach movement, and yet they really seem to live in a very low-dimensional subspace. So that's what we call the manifold theory of neuroscience: the neurons live in a high-dimensional space but are just random projections of some lower-dimensional subspace. (A toy version of that SVD analysis is sketched below.) But one of the consequences is that if they're random projections, then each neuron individually should just be weird — it should respond to a bunch of different things, and you shouldn't be able to put a label on it, because you could rotate the entire space and it would still make sense, right? So there's no reason why an individual neuron should align with just one axis of that subspace. Yeah, exactly. But neuroscientists really like labeled axes — that's one thing they're very fond of. So you can imagine you have an axis — say, in Unity or Unreal, you have my avatar and you hit one switch, and it changes my smile from upwards to downwards.
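Here is a toy version of the SVD analysis mentioned above, on simulated data: many neurons that are random projections of a handful of latent factors plus noise. The participation ratio of the singular-value spectrum is one common summary of effective dimensionality; all numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_neurons, n_latent = 500, 1000, 5

# a few latent factors drive every neuron through a random projection
latents = rng.standard_normal((n_stimuli, n_latent))
mixing = rng.standard_normal((n_latent, n_neurons))
responses = latents @ mixing + 0.1 * rng.standard_normal((n_stimuli, n_neurons))

# SVD of the (centered) response matrix reveals the latent subspace
responses -= responses.mean(axis=0)
s = np.linalg.svd(responses, compute_uv=False)
var = s**2
participation_ratio = var.sum()**2 / (var**2).sum()
print(f"effective dimensionality ~ {participation_ratio:.1f}")  # near n_latent
```

Note that this summary is rotation-invariant by construction — any rotation of the latent space gives the same spectrum — which is exactly why, on the manifold view, individual axes carry no special meaning.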
And — oh, sorry, my printer is haunted, so I'm just going to disconnect it, if you don't mind, because it makes the lights flash, unfortunately. Okay. I find it weird that printers are like the oldest technology on the planet, yet they're still the most troubled; we should have figured this out by now, but we have not. Yeah, it's too bad. I still print out papers, because there's research showing you retain more when you read something on a printed document rather than on a screen — but it's becoming so inconvenient that I think I'll have to abandon it soon. Okay, so starting back — and I apologize — where do you want me to restart? So: there's no particular reason why any single neuron should align with any axis, yet people find that they do. Yes, exactly. And that might be because neuroscientists like to name things, and if something is not nameable, they'll call it "mixed selectivity" or whatever and then just forget about it. That's also a very plausible assumption — both of these things can be happening at the same time. But in this paper, they found that if you train a beta-VAE — a VAE with a stronger weight on the KL term, which tends to find disentangled representations, so that the axes actually matter: one axis is my smile, another is how much of a unibrow I have, a third is what's up with my mustache, and so on — then that aligns pretty well with some neurons in one face-selective area of inferotemporal cortex. And they did some trickery to compare a one-to-one alignment against an ensemble alignment, and it looks like the good interpretation of this data is that it's more like a one-to-one alignment. So that could be pretty interesting. But I do want to point out that there certainly are distributed representations in the brain; finding non-distributed representations in this one area doesn't mean that's the case for the whole brain. And it might be for energetic reasons that we have this representation in this particular area: the distribution of responses over a stimulus ensemble is very important for how efficient the code is, because remember, neurons are super noisy. So you want a nice exponential distribution of responses in order to have an efficient code, given that you have this Poisson-like noise in the data. So — and you say it favors the causal hypothesis. It means that maybe what's happening is that, rather than simply encoding the signal it sees, the brain is actually building a causal model of what's happening: there are eyes and there are eyebrows, and the result of there being eyebrows is that they look a certain way. And then it would make sense again that the structural priors are encoded in one space, and the picture we see is simply the manifestation of that. Yeah — maybe I misused the term "causal" here; I don't want to mistake it for causal inference, and I don't want to misuse that term. Sure, sure. But what I mean by this is a forward model for how the individual factors generate what we see.
So you can think of a directed graph in which there are a bunch of different factors. One of them is whether or not I wake up with a mustache today; another is how close together my eyes are; another is my nose. And these factors are disentangled, meaning they're independent of each other, so I can just flip the switches on and off and generate different faces. The underlying naive model is the Mr. Potato Head model, right, in which you swap out the different components — and there are specific holes that you can put the different pieces into. So I guess the question is: these factors in this factor graph — can you put labels on them, such that each corresponds to one thing we would identify as independently changeable? For instance, we understand that age and lighting are two totally disentangled things that have nothing to do with each other. So are those the factors the model finds, or is it some rotation of them — one over square root of two times age minus one over square root of two times lighting, and so on? And it looks like the representations really are aligned with the factors that we can label, and that are indeed independent, both in brains and in this particular model. Do you think it plays a big part that, for facial structure, the individual factors actually are truly independent — because of genetic variation, allele crossing during meiosis, recombination and so on, these traits are distributed in a fairly uncorrelated, uniform way across the human population? Almost every combination of narrow eyes, wide eyes, big mouth, small mouth and so on is possible, and therefore it might just make sense to encode the individual factors as individual neurons, as you say, maybe for energetic reasons. I think that's a really interesting hypothesis, but I don't think that's the case. I think there might be a general algorithm that tries to disentangle things into different sub-factors, and then, as a consequence, there's this natural alignment with that other process. And of course, if the latent model inside the brain is better aligned with the latent model that's out in reality, so much the better — you want the one to reflect the other. But I don't think it's 100% true that these factors are really disentangled in reality. For instance, a unibrow versus a mustache — those two things are probably pretty correlated with each other. Yeah, I see what you mean. So we've been going through this a little bit — there are other papers which are definitely also interesting, like the gloss one, which is super interesting. Is there one that you wanted to touch on particularly?
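As a quick aside before moving on, here is a minimal sketch of the beta-VAE objective discussed above: the usual VAE loss with the KL term up-weighted by a factor beta > 1, which is what empirically pushes the model toward disentangled, axis-aligned latents. The encoder and decoder are assumed to exist elsewhere; shapes and names are placeholders.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction + beta * KL(q(z|x) || N(0, I)).

    With beta > 1 the KL term is up-weighted, which pressures the
    latents toward independent, axis-aligned factors (smile, brow,
    mustache, ...) at some cost in reconstruction quality.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # closed-form KL between a diagonal Gaussian and the standard normal
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# toy usage with stand-in tensors (a real model supplies these)
x, x_recon = torch.rand(8, 64 * 64), torch.rand(8, 64 * 64)
mu, logvar = torch.zeros(8, 10), torch.zeros(8, 10)
print(beta_vae_loss(x, x_recon, mu, logvar))
```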
Well, I wanted to give readers who are coming from slightly outside this field, and moving into this very rapidly moving field, an overview of the questions people are interested in and some of the interesting approaches being used to tackle them — and also to encourage people to come into our field and, you know, get papers in and scoop us, basically. So I really want to encourage people to get into that. I think we've covered the papers that I find the most interesting, and I actually want to do a follow-up on precisely the kind of agent-based representations that are coming down the line; I think those are going to be super interesting for this field. So maybe we can end with some things to look forward to in the future. Sure. One thing I think is going to be interesting for the future is really taking evolution seriously. Maybe you can scroll to where I show Jess Thompson's diagram of the different types of models and how they all fit together — it's at the very start, in the intro. Jess has a really nice way, I think, of explaining this: there are some models which can really perform a task — and once we got to ImageNet 2012, that's where we got there — and then around 2014 we really got into the "accounts for neural activity" part, so we could find models that both perform a biologically relevant task and account for neural activity. I think this year was a big year for biological plausibility — and I don't want to say it's the last word, because clearly there's way more work to be done there. You're going to have models with biologically realistic kinds of gradient descent, or with gradient descent replaced by something more biologically plausible; you're going to have Dale's law — excitatory neurons only make excitatory connections and inhibitory neurons only make inhibitory connections — and you'll have normalization, and temporal dynamics, and so on and so forth. So the next five years is probably just going to be filling in this "biologically plausible" column. But there's also the "could have evolved" column. I think that's a super interesting set of unknown questions, and people are going to start thinking about this problem in a serious fashion. And I want to point out a recent paper that I don't talk about here, from Fei-Fei Li's group, about evolving different kinds of agents that can solve different kinds of reinforcement learning tasks — it has a really interesting evolution component to it. So I think we're going to start to see, and actually be able to watch, the process by which a brain can bootstrap itself into existence, which I think is going to teach us something about what it is to be human. And I'm sure there will be TED Talks and books and so forth, but that's going to take another five or ten years. Another thing that I'm excited to look at in the future — I just wrote in my notes here: hands. Hands are great. I think one thing that we haven't really taken seriously so far is the role of weak supervision from a parental perspective.
If you think of a parent and their baby: they're going to point at things and say, this is this, that is that. And hands have had a huge role in our evolution as Homo sapiens — it's even thought that sign language preceded the appearance of voiced speech. So we probably have, somewhere in our noggin, areas which are highly selective for hand gestures and which are used for that kind of weak supervision that parents provide. So understanding what happens in that personal space, and what happens as we use tools, is clearly important — just from the sheer curiosity of how we went from Australopithecus to modern humans. And I think it's going to teach us a lot about what it means to be human. Awesome. Last question from my side: you're clearly interested in how the brain works, and in seeing whether we can draw parallels between AI models — deep models — and brain areas and so on. Do you think it is a necessity that we feed that knowledge back into the deep learning realm? Should we put more effort into asking how the brain works, because at least that's one example of where intelligence was achieved? Or do you think that how the brain works is just a happenstance of nature, evolution and energy restrictions, and we should just do AI the way that works best? Or, option three: however we build AI, if we solve the task, it will automatically align with the brain, because there's only one real way to solve the task? In which of these camps, let's say, do you find yourself? Yeah, that's super interesting. So people have made the claim for a long time that if we just study the brain, we'll be able to make better machines — that comes up again and again. And I do want to point out that this actually did happen, as we saw with convolutional neural networks and the whole story of Hubel and Wiesel, the Neocognitron, Yann LeCun, and eventually ImageNet 2012. But it's really only happened a few times, and it's not clear how many more instances of this will happen. That's certainly the view of some people at DeepMind, for instance, who have really gone into cognitive neuroscience and started doing their own fMRI experiments to tackle these problems. I think it's really interesting, but I think it's going to teach us a lot about the human brain, and not necessarily about how to make intelligent machines — because these are different systems, as you point out, and there are certainly things about the brain which are kludgy and suboptimal. How the retina is wired up is the classic example: it's wired the wrong way around; octopuses have it the right way around, and it doesn't seem to bother them. So that's a clear example. But maybe there's something we can identify in brains that is going to unlock the next generation of machine learning.
Maybe it's spiking neural networks, for instance — people are demonstrating that you could get something like 1,000 or 10,000 times more energy efficient if you use these mixed-signal spiking neural networks. So, I don't know. Yeah — I mean, 1,000 or 10,000 times, those are the orders of magnitude you spoke about before when it came to data. Well, here I'm thinking about energy efficiency, so it's not directly comparable. But one thing I would point out is that if you look at all these papers and add up their training time and carbon emissions, it's probably pretty substantial. Although I will say that, for the paper I'm first author on here, I actually have the machine I trained this thing on right here, and it's still a one-GPU machine. So again, I encourage your viewers to get into this, because you can still do things with a GTX 1080. That's awesome. But I think one thing that's going to be really interesting is that by studying better machines, we'll be able to start to understand how to bring this back from the side of machine learning into human health. That's very interesting, and it by and large hasn't been explored thus far — I'm kind of a fan of the opposite direction to the one most people are taking. So I hope that answers your question. I don't think that, naturally, if you just train a neural network to solve a task, it's going to do it the same way that the brain does. I don't think that GPT-3 does things the same way a human does in any sort of meaningful way — no way — even though they're both very good at language. Yeah, maybe GPT-4. Well, if you ask Gary Marcus, he'll say there's no way, it'll never happen — neurosymbolic AI all the way. Yeah. All right, cool. So, to everyone: follow Patrick — he's written many papers, lots of papers. You're also the CTO of Neuromatch Academy, is that correct? So I helped Neuromatch start, actually, but I'm no longer CTO there. It's a great occasion for people who want to learn more about that intersection between neuroscience and artificial intelligence. When we started this a couple of years ago, we just figured, oh, we'll do a few video lectures and present them online. It was the start of the pandemic and people were bored, so the response was out of this world: we had over 2,000 applications, and people from all over the world wanted to learn more about both neuroscience and artificial intelligence and their intersection. We ended up having, I think, 1,700 students in the first cohort and 200 TAs, so it became a big thing very fast. I'm very happy that I helped bring that about — it was definitely one of the most stressful times in my life. But we could bring together people from very disparate backgrounds — people in emerging economies at local universities there, and people from Ivy League universities in the US, Canada and the UK — working with the same curriculum and under the same circumstances, which was very cool. And then last year we did the same but doubled in size, so I hope that we'll be able to double again this year.
I'm sure the announcement for the next edition of Neuromatch Academy will happen pretty soon, so if there are people in your audience who are interested in that, I highly recommend they do it — it's a great occasion to learn. And we already have materials from last year online, so if you want to get started on your learning, you can do that today. Excellent. Cool. Well, Patrick, it was wonderful having you here. This is a new world to me, and I think to a lot of people listening right here. So thank you so much, and I hope to see you again with next year's review. Awesome.
[ { "start": 0, "end": 8.48, "text": " Hello there! Today I'm interviewing Patrick Minot, who has a PhD from McGill and did a postdoc at UCLA." }, { "start": 8.48, "end": 14.88, "text": " He's an independent scientist and a neural data scientist. His interests are neuroscience and" }, { "start": 14.88, "end": 20.88, "text": " the connection to machine learning. He has an awesome blog called XCore, which I guess is" }, { "start": 20.88, "end": 27.68, "text": " pronounced cross correlation, but who knows. So please check out Patrick's blog. He also worked" }, { "start": 27.68, "end": 34.16, "text": " at Google for a while, seeing how people interact with web pages and was a brain computer interface" }, { "start": 34.16, "end": 42.08, "text": " engineer at Facebook Reality Labs. He also has launched the NeuroMatch Academy, which is sort of" }, { "start": 42.08, "end": 49.44, "text": " an intro, an academy where you learn in a summer school about computational neuroscience. This runs" }, { "start": 49.44, "end": 54.480000000000004, "text": " every year and you can take part if you want. We're going to touch on that a little bit in" }, { "start": 54.48, "end": 59.519999999999996, "text": " the interview. I just wanted to take it away beforehand. So I'm going to give a little" }, { "start": 59.519999999999996, "end": 65.03999999999999, "text": " introduction about what we'll talk about and then we'll jump into the interview. We're going to talk" }, { "start": 65.03999999999999, "end": 72.4, "text": " about mainly about this blog post right here, the 2021 in review unsupervised brain model. The main" }, { "start": 72.4, "end": 80.16, "text": " focus here is on unsupervised models and what they have to do with the brain. So a big question" }, { "start": 80.16, "end": 86.08, "text": " in neuroscience is how does the brain work? I guess it's the main question in neuroscience." }, { "start": 86.08, "end": 94.96, "text": " And so people are developing the hypothesis of how the brain works. And deep learning turns out to be" }, { "start": 94.96, "end": 102.08, "text": " quite an interesting tool for neuroscientists because in deep learning, we get some inspiration" }, { "start": 102.08, "end": 107.67999999999999, "text": " from neuroscience, but essentially we build a model that end to end can learn some task," }, { "start": 107.68, "end": 113.84, "text": " to perform some tasks. So this would be this one right here. Now the question is, is what deep" }, { "start": 113.84, "end": 120.56, "text": " models do the same or different than what brains do given that they solve the same task? Like let's" }, { "start": 120.56, "end": 126.16000000000001, "text": " say both recognize objects on images. Do they do the same thing or do they do something completely" }, { "start": 126.16000000000001, "end": 131.68, "text": " different? So neuroscientists, they wonder, you know, how does the brain learn stuff? Is it the" }, { "start": 131.68, "end": 137.84, "text": " same as neural network? Does the neural network now also during the interview, I have to stop saying" }, { "start": 137.84, "end": 144.08, "text": " neural network because it's ambiguous in this context. So does a deep network, a computer," }, { "start": 144.08, "end": 150.8, "text": " a human made deep network, does it account for neural activity, which means that are the signals" }, { "start": 150.8, "end": 156.16, "text": " in the deep network the same or related to the signals that we see in the brain? 
And this turns" }, { "start": 156.16, "end": 162.07999999999998, "text": " out to be a very important tool for neuroscientists. What they want to see is that let's say the" }, { "start": 162.07999999999998, "end": 167.6, "text": " intermediate representations in the neural network. Like you have some kind of picture," }, { "start": 167.6, "end": 172.24, "text": " it goes into a neural network, there's layer, layer, layer, layer, and then there's a classification" }, { "start": 172.24, "end": 177.6, "text": " head. The classification head might not be that interesting, but what is interesting is like some" }, { "start": 177.6, "end": 184.8, "text": " intermediate representation here. If we figure out that that explains, which means we can correlate" }, { "start": 184.8, "end": 191.92000000000002, "text": " it with things that are in the brain. And I'm going to draw like a very bad brain right here." }, { "start": 192.56, "end": 198.88000000000002, "text": " If we can correlate this with things that are found in the brain signals like from fMRI," }, { "start": 198.88000000000002, "end": 204.88000000000002, "text": " from electrodes that we put into people's heads, then that is an indication that what these deep" }, { "start": 204.88000000000002, "end": 211.44, "text": " networks are doing have something like that there is an effect that is similar and that could help" }, { "start": 211.44, "end": 217.92, "text": " us understand the brain. So the holy grail in neuroscience would be something that can perform" }, { "start": 217.92, "end": 224.4, "text": " the same task as humans that does account for neural activity that is biologically plausible." }, { "start": 224.4, "end": 230.88, "text": " As you might know, there is still a debate of whether something like backprop is implementable" }, { "start": 230.88, "end": 236.64, "text": " in the brain in one way or another, or if we need an entirely different mechanism in the brain." }, { "start": 236.64, "end": 242.32, "text": " And lastly, something that could conceivably also have evolved and maybe we'd even have some" }, { "start": 242.32, "end": 248.88, "text": " evidence of how it evolved over time. So we're going to talk about these models right here," }, { "start": 248.88, "end": 254.79999999999998, "text": " specifically self supervised models. Self supervised models here is a slide by Jan Lacan," }, { "start": 254.79999999999998, "end": 261.36, "text": " or models that don't need labels to train. And what you usually do is you block out part of" }, { "start": 261.36, "end": 266.24, "text": " something you know, and then try to predict that from the parts that you do know. For example," }, { "start": 266.24, "end": 271.68, "text": " if it is an image, again, you'd block out some part of the image and then from the rest of the" }, { "start": 271.68, "end": 277.92, "text": " image, you'd try to predict that part that is self supervised method. There's also contrastive" }, { "start": 277.92, "end": 285.68, "text": " methods which are self supervised, which means that you'd have an image and you make two different" }, { "start": 285.68, "end": 291.6, "text": " views of it, for example, by cropping the image in different places. 
And then you try to train" }, { "start": 291.6, "end": 297.52000000000004, "text": " a model that can tell that these two things actually belong together, come from the same image," }, { "start": 297.52000000000004, "end": 305.04, "text": " and that they are apart from, I'm going to draw inverted arrows right here, they are apart from" }, { "start": 305.04, "end": 310.08000000000004, "text": " like a third image that has nothing to do with this image. These are contrastive methods." }, { "start": 310.08000000000004, "end": 316.24, "text": " It turns out that if we build models that learn in self supervised and contrastive ways," }, { "start": 316.24, "end": 322.24, "text": " and especially in multimodal ways, that we end up with models that can explain brain activity" }, { "start": 322.24, "end": 328.40000000000003, "text": " fairly well. So we're going to jump into the papers right here in the interview pretty quickly." }, { "start": 328.40000000000003, "end": 333.92, "text": " But if you keep watching the interview, Patrick goes also into more like high level explanations" }, { "start": 333.92, "end": 338.64, "text": " of neuroscience in general. It is a bit my fault that I immediately was like, so what does this" }, { "start": 338.64, "end": 344.40000000000003, "text": " paper say? But I promise you, if you keep listening throughout the interview, there are great insights" }, { "start": 344.4, "end": 350.4, "text": " into the entire field of neuroscience into what are open questions into where can people go to" }, { "start": 352.64, "end": 358.4, "text": " learn about this. And if you even want to research this, if you're in deep learning right now," }, { "start": 358.4, "end": 363.84, "text": " and you're interested in neuroscience, this Patrick says it's a wide open field, there's lots of papers" }, { "start": 363.84, "end": 370.15999999999997, "text": " to be published. And the conferences are especially something like NeurIPS are pretty receptive to" }, { "start": 370.16, "end": 376.72, "text": " papers that connect deep learning with neuroscience, or in general, try to explain neuroscience," }, { "start": 377.28000000000003, "end": 382.08000000000004, "text": " neuroscience things. So as I said, we're going to jump into the interview now, I don't want to spend" }, { "start": 382.08000000000004, "end": 386.88, "text": " too much more time because we're very detailed in the interview. Check out Patrick's blog" }, { "start": 386.88, "end": 391.44000000000005, "text": " and all his other endeavors. And I wish you a lot of fun. Bye." }, { "start": 391.44, "end": 400.08, "text": " Hello, everyone today here with me I have Patrick Minow, who is a neuroscientist slash blogger slash" }, { "start": 400.08, "end": 407.84, "text": " anything else that you might imagine in between deep learning and the human brain. Welcome, Patrick," }, { "start": 407.84, "end": 411.52, "text": " to the channel for this bit of a special episode, I guess." }, { "start": 411.52, "end": 414.08, "text": " Thanks. It's great to be here." 
}, { "start": 414.08, "end": 420, "text": " I got I think I'm going to say I'm going to say I'm going to say I'm going to say I'm going to say" }, { "start": 420, "end": 428.48, "text": " I got I got sort of knowledge of you for through your article 2021 in review unsupervised brain" }, { "start": 428.48, "end": 435.28, "text": " models, you wrote down what happened in the last year in terms of the connection of deep learning" }, { "start": 435.28, "end": 442, "text": " and how to let's say how to explain the brain. What is your what is your background in this area?" }, { "start": 442, "end": 448, "text": " How did you come to be in this in between space between neuroscience and AI?" }, { "start": 448, "end": 454.4, "text": " Yeah, absolutely. So I actually originally studied physics. And, you know, after my undergrad," }, { "start": 454.4, "end": 458.4, "text": " I figured, you know, maybe I don't want to do string theory for the rest of my life. Like that" }, { "start": 458.4, "end": 465.2, "text": " sounds it sounds like some of the questions that to ask like interesting questions, you need to" }, { "start": 465.2, "end": 469.12, "text": " really be pretty advanced. But I think in neuroscience, there's some questions that are" }, { "start": 469.12, "end": 473.84, "text": " pretty right for the picking and that are obvious for even somebody that's pretty far outside the" }, { "start": 473.84, "end": 480.56, "text": " field. So for instance, what is sleep? What does it do? That's like a pretty easy question. That's" }, { "start": 480.56, "end": 486.64, "text": " that's very hard to answer. So I went to do a PhD in computational neuroscience at McGill." }, { "start": 487.44, "end": 492.55999999999995, "text": " And one of the fields of my study was really that intersection of neuroscience" }, { "start": 493.52, "end": 499.35999999999996, "text": " and artificial intelligence. Now, when I started my PhD, which was in 2008, deep learning really" }, { "start": 499.36, "end": 507.44, "text": " wasn't a thing, I guess, like some of the original papers by Benjio and and Jeffrey Hinton had been" }, { "start": 508.64, "end": 514.96, "text": " they were out. But you know, the big event, I think, in presenting deep learning to the world" }, { "start": 514.96, "end": 522.72, "text": " and saying like, this is really this is a big deal was image in 2012. Right. As you know, so that was" }, { "start": 522.72, "end": 530.96, "text": " during my PhD. So at the very start of my of my PhD presentation, my PhD defense, I would say" }, { "start": 530.96, "end": 536.32, "text": " something like, look, you know, you have neurons in infratemporal cortex, which is one part of the" }, { "start": 536.32, "end": 542.08, "text": " visual stream, and they're able to do visual recognition. I would present examples of these" }, { "start": 542.08, "end": 550.4, "text": " neurons. And they're invariant. And to things like lighting, rotation, scale, etc. We don't know how" }, { "start": 550.4, "end": 555.4399999999999, "text": " to make a computer that does that. But if I gave this presentation, just you know, six months or a" }, { "start": 555.4399999999999, "end": 560.64, "text": " year later, I would never have been able to say that because people have been like, you know, you" }, { "start": 560.64, "end": 567.68, "text": " could just you know, like get even Alex net would would be able to do that. So so that's a little" }, { "start": 567.68, "end": 574.8, "text": " bit my, my story, my introduction to to neuro AI. 
So I was there like, during that transition," }, { "start": 574.8, "end": 582.7199999999999, "text": " towards deep learning. And in fact, in the end of my PhD, I was, I was working on deep learning" }, { "start": 582.7199999999999, "end": 588.4799999999999, "text": " to try and explain some of the brain areas that I cared about. Now these brain areas are the areas" }, { "start": 588.4799999999999, "end": 593.8399999999999, "text": " of the dorsal stream. And those are like really brain areas that really care about emotion. And" }, { "start": 593.8399999999999, "end": 599.92, "text": " so I was poking around with what was I'm going to date myself, you know, I was poking around in" }, { "start": 599.92, "end": 608.4, "text": " the piano back in the day to to make this happen, which I guess has fallen by the wayside. But yes," }, { "start": 608.4, "end": 614.56, "text": " I've been at this intersection for quite a while now. Awesome. Well, that it seems like it was an" }, { "start": 614.56, "end": 622.3199999999999, "text": " exciting time. I do remember the piano as well. So I'm definitely dated, dated the same. So you," }, { "start": 622.3199999999999, "end": 628.64, "text": " the dorsal stream, just to make clear, that's part of sort of the visual, the visual stream into the" }, { "start": 628.64, "end": 634.88, "text": " brain. Is that correct? Or? Yeah, yeah. So maybe I can, I can give you like the first minute of my," }, { "start": 634.88, "end": 643.12, "text": " my thesis defense. I've got it engraved in my brain. You just, you defended not too, too long" }, { "start": 643.12, "end": 648.88, "text": " ago, right? True. Exactly. So I'm sure you're gonna forgot it. Oh, yeah. Yeah, you just like put in" }, { "start": 648.88, "end": 657.12, "text": " the box in your brain and just, it's gone. Okay. So the visual information falls on the retina. And" }, { "start": 657.12, "end": 662.64, "text": " it's originally encoded in these very simple formats in terms of differences and luminance" }, { "start": 662.64, "end": 670, "text": " between like a center and a surround, or differences in time. So you can think of it as a camera with" }, { "start": 670, "end": 677.2, "text": " like a little bit of linear filtering. And it then gets forwarded to different areas of the brain," }, { "start": 677.2, "end": 682.32, "text": " first to the lateral geniculate nucleus, and then to the back of the brain, the occipital cortex," }, { "start": 682.32, "end": 687.36, "text": " which is called the primary visual cortex. So that's a huge area, huge chunk of the brain." }, { "start": 687.9200000000001, "end": 695.44, "text": " And you have tons of neurons which are selected for vision there. And from from there, the" }, { "start": 696.48, "end": 702.96, "text": " visual processing splits into two different substreams. There's the ventral visual stream," }, { "start": 702.96, "end": 712, "text": " which is the object stream. So if you think like, what does a, you know, ResNet 50 that strain on," }, { "start": 712, "end": 718.88, "text": " on ImageNet do? Maybe it's something similar that we can get into that later. And then there's" }, { "start": 718.88, "end": 725.6, "text": " another set of areas, which is the dorsal stream. Again, organized in a hierarchical fashion. 
Again," }, { "start": 725.6, "end": 732, "text": " you have like these, you know, for instance, you have increases in the size of receptive fields," }, { "start": 732, "end": 738.4, "text": " you have increases in the size of in the complexity of things that these neurons respond to. But this" }, { "start": 738.4, "end": 742.9599999999999, "text": " time, they don't care about form, they don't care whether they don't care about texture, what they" }, { "start": 742.9599999999999, "end": 750.72, "text": " really care about is motion. So you know, you're going to poke at a neuron in, let's say the middle" }, { "start": 750.72, "end": 756.9599999999999, "text": " temporal area, which is part of the dorsal stream. And 80 or 90% of the neurons will respond when you" }, { "start": 756.9599999999999, "end": 765.68, "text": " show them the right moving stimulus. Yeah, which is, which is remarkable. So in your in your article," }, { "start": 765.68, "end": 772.0799999999999, "text": " you go a little bit into both of these streams. And I think the one of the main focuses that you" }, { "start": 772.0799999999999, "end": 780.9599999999999, "text": " care about is, are or are the are or are not the deep learning networks we use today, similar to" }, { "start": 780.9599999999999, "end": 786.56, "text": " what the brain does, because sure, we've built these systems that can do some visual tasks." }, { "start": 786.56, "end": 793.4399999999999, "text": " But does that bring us closer to understanding how the brain does certain things? And the answer is," }, { "start": 793.44, "end": 799.0400000000001, "text": " right? The answer is a little bit yes, and a little bit no, like there's still there's still" }, { "start": 799.0400000000001, "end": 805.2800000000001, "text": " questions. But you point out a bunch of areas of where progress has been made in correlating," }, { "start": 805.2800000000001, "end": 810.8800000000001, "text": " let's say, neural activities in deep neural networks with neural activities in in brains." }, { "start": 810.8800000000001, "end": 818.8000000000001, "text": " So yeah, yeah, I'm, I think that it might be good to just back up a little bit and talk about the," }, { "start": 818.8, "end": 822.9599999999999, "text": " you know, that world at large so that, you know, people are just tuning in. I haven't read the" }, { "start": 822.9599999999999, "end": 832.7199999999999, "text": " article yet. We'll understand what we're discussing. I think that originally, some of the," }, { "start": 835.04, "end": 840.4, "text": " okay, so I was talking about ImageNet 2012, which was the big milestone in creating good" }, { "start": 840.4, "end": 845.8399999999999, "text": " deep neural networks that could solve the kinds of tasks that humans that humans can solve. Now" }, { "start": 845.84, "end": 850.64, "text": " there was a lot of background work that came into that. One is, you know, the creation of" }, { "start": 850.64, "end": 855.6800000000001, "text": " convolutional neural networks and the work from from Yan-le-Cun, which was ultimately, you know," }, { "start": 855.6800000000001, "end": 862, "text": " inspired by the new the new cognitron, which is Fukushima, like in around the early 80s." }, { "start": 863.12, "end": 869.6800000000001, "text": " But ultimately, that work was motivated a lot by some early work in vision and in vision neuroscience." 
}, { "start": 869.68, "end": 878.4, "text": " So David Ubel and Torsten Weisel in the 50s and 60s looked at different kinds of neurons in the" }, { "start": 878.4, "end": 887.04, "text": " primary visual cortex, and were able to find that you have this this hierarchy of selectivity," }, { "start": 887.04, "end": 895.92, "text": " right? So the canonical thing that they found is they found cells which were tuned for orientation," }, { "start": 895.92, "end": 903.28, "text": " right? So you know, you present an edge like this or a line like this, and the cell responds." }, { "start": 903.28, "end": 908.0799999999999, "text": " But if the line, if instead of being white, it's black, then it doesn't respond. So those are called" }, { "start": 908.0799999999999, "end": 912.7199999999999, "text": " the simple cells. And then they found another subset of cells, which are called the complex cells." }, { "start": 912.7199999999999, "end": 918.4799999999999, "text": " And so those are selected for this, but they would be, it wouldn't matter the precise location" }, { "start": 919.04, "end": 923.68, "text": " of this line in question. And it wouldn't matter the contrast. So it could be white to black," }, { "start": 923.68, "end": 930.0799999999999, "text": " or it could be black to white, it wouldn't matter. And so their hunch was that, okay," }, { "start": 930.0799999999999, "end": 933.4399999999999, "text": " well, you have this this transformation that happens, first of all, you have a selectivity" }, { "start": 933.4399999999999, "end": 939.04, "text": " operation, which creates that simple cell. So basically just a threshold. And that's enough to" }, { "start": 939.04, "end": 946, "text": " give you a selectivity, or it could be a relu if you, you know, smooth it out. And, and then there's" }, { "start": 946, "end": 953.68, "text": " a pooling operation that happens. So you pool from different, from different simple cells that have" }, { "start": 953.68, "end": 959.52, "text": " the same orientation selectivity, but different contrast sensitivity. And that creates the complex" }, { "start": 959.52, "end": 965.52, "text": " cell. And you can view that as a subsampling operation or downsampling operation as you would" }, { "start": 965.52, "end": 970.48, "text": " have in a deep neural net. So there's this kind of long line of like, oh, there's the inspiration" }, { "start": 970.48, "end": 975.04, "text": " from the brain, we're going to make some models, we're going to show that it's that they're actually" }, { "start": 975.04, "end": 980.3199999999999, "text": " good enough to solve tasks that humans can solve. But the question is, okay, are these are these" }, { "start": 980.3199999999999, "end": 990.16, "text": " like really like, like human brains? So and that's similar work from from in Jim DiCarlo's lab and" }, { "start": 990.16, "end": 997.12, "text": " Nico Cricascorte in 2014, like really showed that there's some very tantalizing hints that this is" }, { "start": 997.12, "end": 1001.8399999999999, "text": " indeed the case, you know, that these networks that we've trained on ImageNet, they look a lot" }, { "start": 1001.84, "end": 1008.8000000000001, "text": " like the brain in, in really interesting ways. 
And one of the big ways that, you know, they're" }, { "start": 1008.8000000000001, "end": 1017.84, "text": " similar is that if you have, if you look at, you know, let's say 10 different networks, and one of" }, { "start": 1017.84, "end": 1024.56, "text": " them is, some of them turned out to be a little bit better at solving ImageNet, or a little bit" }, { "start": 1024.56, "end": 1031.68, "text": " worse. And then you correlate that with how well you can align these networks to the brain, turns" }, { "start": 1031.68, "end": 1036.4, "text": " out that the ones which perform better on ImageNet tend to also perform better on explaining the" }, { "start": 1036.4, "end": 1041.92, "text": " brain, which is like a very strange coincidence, because think about how like, completely different" }, { "start": 1041.92, "end": 1048.24, "text": " these two things have been created. So that was that was one of the big hints. And I think like" }, { "start": 1048.24, "end": 1054.8, "text": " another big hint is the word from Chris Ola and other people at OpenAI that looked inside of these" }, { "start": 1054.8, "end": 1059.92, "text": " deep neural networks and found that, you know, the kinds of selectivity that you see inside the cells," }, { "start": 1059.92, "end": 1065.28, "text": " they're very, very similar to what you would, what a neurophysiologist would describe in areas like" }, { "start": 1065.28, "end": 1073.3600000000001, "text": " V1, V2, V4, and temporal cortex. So the combination of the quantitative and qualitative tells us like," }, { "start": 1073.3600000000001, "end": 1079.92, "text": " hey, maybe, maybe there's a kind of these are kind of like little brains, one very, very specific" }, { "start": 1079.92, "end": 1086.64, "text": " part of the brain, I want to be a lot of trouble if you say that that statement. Yes, exactly," }, { "start": 1086.64, "end": 1092, "text": " exactly. So what do people mean when they say something like explains the brain or something" }, { "start": 1092, "end": 1098.5600000000002, "text": " aligns with brain activity? Like, what is it? What is behind that? Yeah, yeah, yeah. So we can talk" }, { "start": 1098.5600000000002, "end": 1105.44, "text": " about the high level stuff, like, you're sure just like the idea of look how like, what do we," }, { "start": 1105.44, "end": 1111.6000000000001, "text": " what do we measure? Like, you know, is it a number? Is it a correlation? Or is it? Am I training a" }, { "start": 1111.6, "end": 1116.8, "text": " regression model from one signal to the other signal? Like, how can I make the statement that" }, { "start": 1117.9199999999998, "end": 1126.6399999999999, "text": " this neural network explains some function in the brain? So in the early work from from 2014," }, { "start": 1127.76, "end": 1131.9199999999998, "text": " we see two different approaches being used. And those are the kinds of approaches," }, { "start": 1131.9199999999998, "end": 1137.36, "text": " like every other approach that's been tried, is kind of a derivative of these like two basic" }, { "start": 1137.36, "end": 1144.6399999999999, "text": " concepts. So one approach is a regression based approach. So let's so very simply," }, { "start": 1144.6399999999999, "end": 1153.1999999999998, "text": " let's say you train a ResNet 50 on image net, you chop it off at some layer, layer four after the" }, { "start": 1153.1999999999998, "end": 1159.4399999999998, "text": " first down sampling or whatever. 
And then you measure the output of that deep neural network" }, { "start": 1160, "end": 1165.9199999999998, "text": " with respect to some stimulus ensemble. So which gives you a big matrix big X, which has a bunch" }, { "start": 1165.92, "end": 1172.5600000000002, "text": " of rows for the different examples and a bunch of columns for the different features. And then you" }, { "start": 1172.5600000000002, "end": 1181.92, "text": " just regress that against neural data that's, that's recorded with the same, with the same images." }, { "start": 1182.96, "end": 1189.52, "text": " So it's just a regression. So you can add like a bunch of different spices into your basic recipe." }, { "start": 1189.52, "end": 1198.08, "text": " So you can add some some sparseness priors, you can try to well, usually you'll use a ridge" }, { "start": 1198.08, "end": 1203.84, "text": " regression rather than a straight regression, because that will definitely the other regular" }, { "start": 1203.84, "end": 1210.6399999999999, "text": " regression will usually crash and burn neural data is very noisy. That's something that people don't" }, { "start": 1210.6399999999999, "end": 1217.36, "text": " often appreciate. And so it's a regression. Let's just put it that way. Yeah, now that" }, { "start": 1217.36, "end": 1221.6799999999998, "text": " will be sort of, for example, f MRI data, when we talk about neural data." }, { "start": 1223.6, "end": 1231.6799999999998, "text": " It can be f MRI data, it can be MEG data. So magnetoencephalopograph," }, { "start": 1231.6799999999998, "end": 1242.4799999999998, "text": " encephalopograph, I think we just say MEG. And or it could be a single neuron recordings or array" }, { "start": 1242.48, "end": 1247.68, "text": " recordings. So those are taken inside the brain, or it might be ECog, which is just on the surface" }, { "start": 1247.68, "end": 1255.2, "text": " of the brain. So there's different kinds of recordings. Now, it happens that f MRI and MEG" }, { "start": 1255.2, "end": 1262.56, "text": " are much more popular for for humans, because it's it's, it's non invasive. But every once in a while," }, { "start": 1262.56, "end": 1269.92, "text": " people get to record inside of the brains of humans that have some some sort of need for brain" }, { "start": 1269.92, "end": 1276.5600000000002, "text": " surgery, whether it's usually it's epilepsy. And those data are very precious. Now speaking of so" }, { "start": 1276.5600000000002, "end": 1283.44, "text": " you go through different papers in your article. So maybe we can follow that structure a little bit." }, { "start": 1283.44, "end": 1294.5600000000002, "text": " The first one is a work that shows that the ventral stream might be explainable by and your idea," }, { "start": 1294.56, "end": 1301.28, "text": " your the article also goes into it's called unsupervised unsupervised brain models." }, { "start": 1301.28, "end": 1308.1599999999999, "text": " So your your kind of point that you make is or your investigation is into unsupervised systems," }, { "start": 1308.1599999999999, "end": 1316.1599999999999, "text": " like what, what, what, how good or how close to what the brain does is comes from the self" }, { "start": 1316.16, "end": 1324.96, "text": " supervised and unsupervised system. So the first, the first, the first thing you go into is the" }, { "start": 1324.96, "end": 1333.1200000000001, "text": " ventral, sorry, the ventral stream, that is you set the sort of object stream. 
And this paper looks at single-neuron activations, right? They find that the self-supervised systems are equally or even better able to explain the brain data than supervised systems, let's say in an image recognition task.

Yeah, and that's super exciting. Everybody got very excited when they saw that these networks trained on ImageNet could be aligned to the ventral stream, that object recognition stream, because now you have this in silico thing that kind of looks like it does the same thing as the brain. So it's kind of a model of the brain. Super exciting; you can do a lot of things with it. But there are different ways in which something can be a model of the brain, and some of them are more useful than others. One of the big flaws of supervised learning, I think, is that it's not really a model of how the brain would learn a task. I'm not walking around as a baby while my parent tells me "dog, dog, dog, cat, dog" constantly for years and years; we don't really use supervised learning for learning these kinds of things. So that's a big flaw: if we want to move forward with models that are biologically plausible instantiations of how these representations are created, we have to move away from supervised learning. People generally like unsupervised and self-supervised learning better for that reason, because you don't have to come up with this weird setup of someone labeling "dog, dog, dog, cat" for you. But you do have to do the math to make sure it actually works out in practice, and that the quantity of examples you feed into the model is similar to the quantity of examples a human would get.

I think in your conclusion you have an example of this: the language models we train, such as GPT-3, would be equivalent to years and years and years of a human just constantly talking, while babies manage it by age four or so, or two.

Exactly. I calculated that we're off by about four orders of magnitude in terms of efficiency.
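As a rough back-of-the-envelope check of that gap; the numbers below are coarse public estimates, not exact figures:

```python
import math

# Rough, assumption-laden estimates: GPT-3 reportedly saw ~300B training
# tokens; a child is often estimated to hear on the order of 10M words/year.
gpt3_tokens = 300e9
child_words = 10e6 * 10          # words heard by roughly age ten

gap = math.log10(gpt3_tokens / child_words)
print(f"GPT-3 sees ~10^{gap:.1f}x more text")   # ~10^3.5, i.e. 3-4 orders
```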
}, { "start": 1460.88, "end": 1465.36, "text": " And that, you know, the right the kinds of the quantity of examples that you feed into," }, { "start": 1465.36, "end": 1471.4399999999998, "text": " into the model is similar to the kinds of to the quantity of examples that you would feed into a" }, { "start": 1471.4399999999998, "end": 1476.24, "text": " human, for instance, I think you have you have a so your conclusion, you have a little bit of an" }, { "start": 1476.24, "end": 1483.12, "text": " example that it would like the language models that we train such as GPT three would be equivalent" }, { "start": 1483.12, "end": 1491.4399999999998, "text": " to like, years and years and years of of human, just constants, just talking and talking and" }, { "start": 1491.44, "end": 1496.16, "text": " talking and talking and babies are able to do it by age, what four or so or two." }, { "start": 1498.16, "end": 1505.92, "text": " Exactly. So, so I think that there's still a big gap there that comes from that you still I mean," }, { "start": 1505.92, "end": 1510.48, "text": " we're off, I think I calculated we're off by four orders of magnitude in terms of the efficiency." }, { "start": 1511.92, "end": 1518.56, "text": " But, you know, I'm to score everybody on the same kind of curve. I mean, the GPT three is not made" }, { "start": 1518.56, "end": 1524.08, "text": " as a model of the brain minutes made as a language model. And to solve all these these problems in" }, { "start": 1524.08, "end": 1530.8, "text": " zero shot settings, and it works very well for for its purposes. But definitely, if we want to" }, { "start": 1530.8, "end": 1536.8, "text": " actually try to explain the brain, we'll need to get to that. So this, this, the, it is also a bit" }, { "start": 1536.8, "end": 1542.24, "text": " special, because we hear we talk about the ventral stream, you said that's the object stream. And the" }, { "start": 1542.24, "end": 1548.96, "text": " fact that self supervised systems are equal or better at explaining that than supervised systems," }, { "start": 1548.96, "end": 1555.76, "text": " which presumably are trained exactly on the task of that such an object stream would be sensitive" }, { "start": 1555.76, "end": 1558, "text": " to right, that is also one special thing." }, { "start": 1559.76, "end": 1564.64, "text": " So I totally agree. I mean, that's super cool that that this is the case that you have this," }, { "start": 1565.36, "end": 1571.76, "text": " this thing where you don't give it like learn objects, and yet it learns something that can do" }, { "start": 1571.76, "end": 1579.36, "text": " can do object recognition. And it learns meaningful, meaningful things like that. But" }, { "start": 1579.36, "end": 1585.28, "text": " I think that there's a couple of hidden assumptions there that make this not nearly as mysterious" }, { "start": 1585.28, "end": 1589.76, "text": " as it was like, as we would like it to be. So one is that, you know, image net is not really" }, { "start": 1590.48, "end": 1597.84, "text": " if your model of image net is not you take like a, like a nice Canon DLS, the DLSR, and," }, { "start": 1597.84, "end": 1604.3999999999999, "text": " you know, you, you put it at a random point in space, and then you point it at somewhere random," }, { "start": 1604.3999999999999, "end": 1610.32, "text": " and then you hit the button. Right. 
Right? If we look at both of our faces right now, we're in the center of the screen. It turns out we're smart like that: we generally place our faces in the center of the frame when we take photos. So in ImageNet, the subject of the category will by and large be in the center. The position of the camera, the things we tend to photograph, these all come into why the model learns what it learns. So we can't really say we're not feeding it any structural priors. We definitely do, just not in the conventional way, and not in a way that's very easy to quantify either. But some people are definitely trying to address these problems. For instance, there's a lot of work on fitting the same kinds of unsupervised learning models with streams of data that look more like what a baby would see in its early years, where the camera is not always pointed at the right things, because babies tend to...

I see, yeah, do a lot of gesturing.

And also because the baby, with time, is able to move its head, right? So it's not the same as just placing a camera somewhere, because whatever captures attention will be actively looked at more.

Definitely. I think there's a long way to go on any of these things.

Oh yeah, absolutely. So to close out just that one paper, because we've been on it for like fifteen minutes: it's super cool that you can train a model in an unsupervised or self-supervised manner, and it turns out to be just as good at explaining V1, V4, and IT, all these different sub-areas of the ventral stream. And then there's a kind of hierarchy among the different models: some models are clearly doing better than others. Typically in these papers, SimCLR is the one that performs best, for reasons we don't totally understand. Local aggregation also tends to do better. So that's interesting.
It makes you wonder what it is inside these models that allows them to be more similar to the brain. Now, of course, in the end you get tiny error bars, and it can be pretty difficult to actually differentiate between these different models, so you can't read too much into it. But definitely the best models are the new generation of self-supervised models.

And then the next paper deals with the other stream, the dorsal stream. And there, yes, that is actually your own paper, right?

Oh yeah, and I'll go through it very rapidly. Though actually, the second paper is the ventral stream again, sorry: that one is from Talia Konkle, with very consistent data. They use fMRI rather than single-neuron data, but the two studies were done independently, about a kilometer away from each other, one team from Harvard and one team from MIT, and they found exactly the same results. So maybe there's something in the water in Cambridge, Massachusetts, but otherwise it's a very robust finding.

But yes, we can definitely talk about the dorsal stream. Like I said, I've been interested in this problem for a very long time, and I had a little bit of time during the last lockdown of the pandemic to look at it again. We sat down and said: I think the time is right to really look at all this dorsal stream data and see if we can get one really good model of all these different areas. The first thing I did, going about this very naively, was to look at the torchvision models; they have a model database, and I just downloaded all the models that were trained on video recognition, that is, trained on Kinetics-400, a task where you have to look at a video of somebody juggling and say that it's juggling rather than unicycling rather than soccer or whatever. The special thing about these models is that they work on 3D data, where by 3D I mean spatiotemporal, space plus time, and they're generally built with 3D filters: the front end of the model is a 3D convolution in space and time.
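For instance, a sketch of what that looks like with torchvision (exact layer names and the pretrained-weights API can vary between versions):

```python
# Load a Kinetics-400-pretrained video model and inspect its front end,
# which is a single 3D convolution over (time, height, width).
import torch
from torchvision.models.video import r3d_18

model = r3d_18(pretrained=True).eval()
print(model.stem)   # Conv3d(3, 64, kernel_size=(3, 7, 7), ...) + BN + ReLU

# Video input is (batch, channels, frames, height, width).
clip = torch.randn(1, 3, 16, 112, 112)
with torch.no_grad():
    logits = model(clip)
print(logits.shape)  # torch.Size([1, 400]) -> the 400 Kinetics classes
```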
}, { "start": 1905.12, "end": 1911.2, "text": " And so the special thing about these models that they look at 3d data, by 3d, I mean spatial" }, { "start": 1911.2, "end": 1918, "text": " temporal, right in time. And so that means that and generally they're trained, the convolutional" }, { "start": 1918, "end": 1925.44, "text": " neural nets, they're trained with 3d filters. So, you know, the front end of the model is going to" }, { "start": 1925.44, "end": 1933.1200000000001, "text": " be a 3d convolution in space and time. So I looked at these models, and I did the kinds of" }, { "start": 1933.1200000000001, "end": 1939.6000000000001, "text": " visualization tricks that Chris Ola and gang do it, I open my eye to look inside because I was" }, { "start": 1939.6, "end": 1945.12, "text": " curious, you know, do they learn motion? Do they align with with the brain? And I found that they" }, { "start": 1945.12, "end": 1951.52, "text": " were actually really terrible, which surprised me, because if you look into the methods of these" }, { "start": 1951.52, "end": 1960.8, "text": " papers, it's like we trained, we trained these models for 24 hours on a supercomputer with," }, { "start": 1961.4399999999998, "end": 1968.56, "text": " you know, 16 GPUs in parallel, and went through, you know, a million videos. And this is the model" }, { "start": 1968.56, "end": 1974.08, "text": " that we obtained, and they're very good at doing the tests that they're doing. And yet, the kinds" }, { "start": 1974.08, "end": 1981.28, "text": " of generic features that come out of the models are really terrible at aligning with the brain." }, { "start": 1981.28, "end": 1989.2, "text": " So that was kind of the hunch that we saw there that I should say that the one of the early" }, { "start": 1989.2, "end": 1994.6399999999999, "text": " findings and one of the early points that people who are dubious about the finding that the ventral" }, { "start": 1994.64, "end": 2006.5600000000002, "text": " streams align with ImageNet trained ResNets and AlexNets and VGG nets, is that people say, well," }, { "start": 2006.5600000000002, "end": 2013.44, "text": " you're just training the model to do a task, you know, any sort of task will work. It doesn't" }, { "start": 2013.44, "end": 2017.3600000000001, "text": " matter whether it's object recognition or whatever, it just turns out that this is the task that you" }, { "start": 2017.3600000000001, "end": 2023.6000000000001, "text": " had data on. But this is a very, this is a very good like counter example of that, because you" }, { "start": 2023.6, "end": 2032.1599999999999, "text": " train a model on a task which involves, you know, 3D data, video spatial temporal data. And yet," }, { "start": 2032.7199999999998, "end": 2038.08, "text": " that model is actually the model that you that you train is really good for that one task," }, { "start": 2038.08, "end": 2045.04, "text": " but is really terrible at this task of aligning with the brain. So that motivated us to look" }, { "start": 2045.04, "end": 2053.92, "text": " more deeply into, you know, what else could, like if we don't train, if we don't take, you know, pre-train" }, { "start": 2053.92, "end": 2060.64, "text": " models to solve this problem, like what could we do? And we know that a lot of the dorsal visual" }, { "start": 2060.64, "end": 2069.6, "text": " stream is really cares about navigation. So if you look at an area like MST, have you ever had Vertigo?" }, { "start": 2069.6, "end": 2079.44, "text": " Sure. Yeah. 
Sorry, this is a bit of a non sequitur, but vertigo is kind of a funny thing, right? It's an inner ear problem: your vestibular system basically tells you there's acceleration in ways there shouldn't be acceleration, which gives you an impression of being dizzy, but it also gives you these weird visual effects, which is strange. Or if you drink a little too much, you might get that same kind of feeling. Now, there's an area in the brain called MST which has neurons that receive both visual input and vestibular input, and on the visual side they show a lot of selectivity for things like rotation, expansion, and wide-field translation. So we think they're really involved in navigation. If you're moving forward in a line, these neurons receive both the vestibular input, so they know how you're accelerating and where gravity is, and all this wide-field optic flow, which tells you where you're heading. So we said: why don't we train a deep neural network to solve a navigation task, so that the network can orient itself in space, essentially. I used an environment for drone simulations called AirSim. It's really fun: it runs on Unreal Engine, and you can fly a drone through these suburban environments and export the sequences of videos. Then you can train a convolutional neural net, a 3D ResNet, to figure out from a little sequence of movement what the trajectory is: where are you heading, are you rotating, are you going forward, and so on. And if you train a network on that, it turns out that when you visualize the cells inside the trained network, they really look like what you would see in the visual cortex. As a neurophysiologist, or an amateur neurophysiologist, or at least a person who's been in the vicinity of neurophysiologists, I was really stoked to see this. You see cells that are selective for translation, but that don't care about the pattern underlying the translation.
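A sketch of that setup, with random stand-in data where the real pipeline would use clips and trajectory labels exported from the simulator:

```python
# Regress self-motion parameters (translation and rotation components)
# from short clips with a 3D ResNet; data here is a random stand-in for
# simulator-rendered flights and their ground-truth trajectories.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

n_motion_params = 6                       # e.g. 3 translation + 3 rotation
net = r3d_18(pretrained=False)
net.fc = nn.Linear(net.fc.in_features, n_motion_params)   # regression head

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(2):                                # tiny demo loop
    clips = torch.randn(2, 3, 16, 112, 112)          # (B, C, T, H, W)
    motion = torch.randn(2, n_motion_params)         # true self-motion
    opt.zero_grad()
    loss = loss_fn(net(clips), motion)
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.3f}")
```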
And in particular, you see cells, like the one visualized here, that like things like spirals in some of the higher-level layers of the network, which was super exciting, because those look a lot like what you would see in MST.

So basically, the deep networks (I have to stop saying neural networks here because it's ambiguous) that are trained to just predict anything from a video containing motion are not super well aligned with the brain. However, as soon as you move to some sort of ego perspective, and especially when you predict your own parameters of motion, so from the visuals you try to predict "okay, I went to the left, I went to the right, I turned around", that turns out to align very well with the brain data. Maybe an esoteric question, but does that say anything about the need for AI to be embodied?

Oh, I love this question. Yes, 100 percent: we should completely embody AI. One big question that came up during review is that we originally claimed in the abstract that this was unsupervised or self-supervised, and the reviewers came back and said: well, it's not really unsupervised or self-supervised, it's a supervised network, because you know what the answer is and you're just training in a supervised fashion. My feeling is that it is self-supervised in the sense that arises when you embody this in an agent. Let's imagine I'm a baby walking around the world: I have some control over where I'm heading. I can say I'm going to turn this way, I'm going to turn that way, I'm going to move forward, I'm going to go get that cookie, I'm going to look at my parent, and so forth. I am an agent. That means I control the motion that comes into my eyes, because the vast majority of the motion we see in the world comes from our own self-motion. So I can correlate my motor plans with what I see in the world.
And" }, { "start": 2385.92, "end": 2394.4, "text": " that means that it's a much easier kind of problem to correlate these two things, then to say I" }, { "start": 2395.36, "end": 2401.84, "text": " here's found data, which is the case of ImageNet, and figure out something to model with this. Yeah," }, { "start": 2401.84, "end": 2407.36, "text": " exactly. Right. Yes. You also have this diagram here from young Lecar, talking about self supervised" }, { "start": 2407.36, "end": 2413.2000000000003, "text": " learning. And it seems very much that it is I agree, the line is like gray in some places. But it" }, { "start": 2413.2, "end": 2418.7999999999997, "text": " seems like if you are an embodied agent, you always have those motion parameters ready, right. So it's" }, { "start": 2418.7999999999997, "end": 2427.12, "text": " much more like I am going to darken out part of part of what I already know and try to predict" }, { "start": 2427.12, "end": 2434.72, "text": " that from it, it seems it falls a lot into this into this diagram right here. Yeah, absolutely. So" }, { "start": 2434.72, "end": 2440.8799999999997, "text": " I think it looks more like the bottom part of this diagram that you see there, where you have these" }, { "start": 2440.88, "end": 2445.76, "text": " two things which are happening in the present, but one part is occluded and the other part is visible." }, { "start": 2446.48, "end": 2451.44, "text": " So you're doing multimodal masking, in other words, right. So you have the vision, but now you're" }, { "start": 2451.44, "end": 2455.44, "text": " trying to predict the vestibular, or you have the vestibular, and you're trying to predict the vision." }, { "start": 2455.44, "end": 2462.56, "text": " And so if you look something like clip would be, I think, like maybe the most popular model that's" }, { "start": 2462.56, "end": 2467.6800000000003, "text": " of the same kind of multimodal kind, you can say, well, clip is a supervised model, because you're" }, { "start": 2467.68, "end": 2475.7599999999998, "text": " trying to predict, you know, in a way, you're trying to predict language from vision. But" }, { "start": 2475.7599999999998, "end": 2482.96, "text": " it's really this kind of masking. And I think it's a more general approach to solving this type of" }, { "start": 2482.96, "end": 2488.48, "text": " problem. So yeah, I agree with you embodied agents, I'm 100% on board, they're definitely going to be" }, { "start": 2488.48, "end": 2495.2799999999997, "text": " awesome. And actually, questions about, you know, what do reinforcement learning agents learn? Do" }, { "start": 2495.28, "end": 2499.84, "text": " they learn like good self motion representations, for instance, when they're when they have a visual" }, { "start": 2499.84, "end": 2504.5600000000004, "text": " task? I think like those are super interesting, like, what do you need to put in there? In order" }, { "start": 2504.5600000000004, "end": 2511.92, "text": " to get that that effect? Yeah, that that concept of me in a eyes is not yet really come through so far." }, { "start": 2512.88, "end": 2518.6400000000003, "text": " But I'm also looking into like, I'm looking forward to having more of a eyes who understand" }, { "start": 2518.64, "end": 2525.2799999999997, "text": " the concept of, of me and to be embodied and and and sort of to have self self state and all of this" }, { "start": 2525.2799999999997, "end": 2532.4, "text": " kind of stuff. I think that will bring us forward. 
So in the next paper, and I just saw in my notes that this is again one of your papers, you tackle the question: why are there even two different visual streams in the brain? It maybe makes sense if we sit down and think about it, but you also find some actual empirical evidence for why it might be that we have two streams, right?

Yeah, absolutely. I think that's an interesting question: why are there two things rather than one, or four, or eight, rather than an arbitrary number? Shahab, who's the first author on this paper, worked on what it would take to recreate both the ventral and the dorsal stream. And I think the remarkable thing he found is this. Say you train a network like a CPC network, a contrastive predictive coding network, which is one form of self-supervised learning in which you essentially try to discriminate between different futures: you look at a certain window of the past, and then you try to tell apart the actual future, embedded in some subspace, from an alternative future which is dreamt up. It's already been shown that if you do that, you can find good representations in videos. But what's very interesting is that you can then ask what happens as you add more and more substreams inside this network.
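The contrastive objective itself is compact. Here is a minimal sketch of the discriminate-the-true-future idea, written as a generic InfoNCE loss rather than the paper's exact implementation:

```python
# Score each context (past) embedding against the true future embedding,
# using the other futures in the batch as the "dreamt up" alternatives.
import torch
import torch.nn.functional as F

def info_nce(context, future, temperature=0.1):
    # context, future: (batch, dim); row i of `future` is the true
    # continuation of row i of `context`, all other rows are negatives.
    context = F.normalize(context, dim=1)
    future = F.normalize(future, dim=1)
    logits = context @ future.t() / temperature     # (batch, batch)
    targets = torch.arange(context.size(0))         # positives on diagonal
    return F.cross_entropy(logits, targets)

print(info_nce(torch.randn(8, 128), torch.randn(8, 128)).item())
```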
}, { "start": 2694.16, "end": 2700.8799999999997, "text": " But what they found is that there's this this this very interesting self organization principle where" }, { "start": 2701.52, "end": 2707.52, "text": " all this all the the filters on one GPU turned out to be color selective, and all the filters on the" }, { "start": 2707.52, "end": 2715.8399999999997, "text": " other GPU turned out to be to be black and white, which is whoa, that's weird. Just by the fact of" }, { "start": 2715.8399999999997, "end": 2721.12, "text": " splitting up, because the two streams, they don't always communicate, right, they only communicate" }, { "start": 2721.12, "end": 2729.3599999999997, "text": " at very sparse intermediate points. So so just structural prior gives rise to something that" }, { "start": 2729.3599999999997, "end": 2734.96, "text": " very much looks like the brain in that in the sense that one of the streams correlates well with" }, { "start": 2734.96, "end": 2741.44, "text": " the ventral brain stream and one correlates well with the dorsal brain stream. Yeah, so in that in" }, { "start": 2741.44, "end": 2748, "text": " that case, in the early Alex, that paper, actually, both of the types of filters are different subtypes" }, { "start": 2748, "end": 2752.56, "text": " that you see in in V1, but they are, you know, functionally different, and they have different" }, { "start": 2752.56, "end": 2757.52, "text": " roles. But it was like kind of an interesting proof of concept that if you just set a separation," }, { "start": 2757.52, "end": 2762.24, "text": " arbitrary separation down the middle, you don't say anything else like you don't say like, you" }, { "start": 2762.24, "end": 2767.84, "text": " have to respond to color, you have to respond to this. But just you set a separation, it self" }, { "start": 2767.84, "end": 2773.6, "text": " organizes to something that's interesting. It's crazy. And yeah, it's weird. So they might have" }, { "start": 2773.6, "end": 2779.36, "text": " just locked themselves into like building a better model by by having two small GPUs." }, { "start": 2782.16, "end": 2786.96, "text": " Yeah, exactly. So, you know, they say that necessity is the mother of invention. So I think" }, { "start": 2786.96, "end": 2792, "text": " this is a particular case where, you know, the limitations at the time caused them to" }, { "start": 2792, "end": 2798.24, "text": " stumble onto something which I think is is really deep and interesting, which is symmetry breaking." }, { "start": 2798.24, "end": 2804.4799999999996, "text": " So I guess ultimately, you know, when you start with, okay, you can imagine that if you" }, { "start": 2805.3599999999997, "end": 2810.4799999999996, "text": " just set all the weight parameters to zero, and then you perform your gradient descent, these" }, { "start": 2810.4799999999996, "end": 2818.56, "text": " two filtered sets will learn exactly the same thing, or they'll crash and burn. But by adding" }, { "start": 2818.56, "end": 2824.24, "text": " a little noise, right, by initializing your your network, you're pushing the network very, very" }, { "start": 2824.24, "end": 2830.3999999999996, "text": " slightly out of equilibrium, and that's enough to self organize into this thing. And so Shahab" }, { "start": 2830.3999999999996, "end": 2836, "text": " found a very similar phenomenon in the context of these networks, which are trained in an" }, { "start": 2836, "end": 2846.3999999999996, "text": " unsupervised manner in CPC. 
He was then able to correlate that with data we have in mouse, where there are tons and tons of measurements of the relative selectivity of these different areas, and he found some really nice correlations.

So that means all you would need, basically, is a little bit of a nudge, right?

Right, which is this great idea: maybe you just initialize the network so that the two halves are very slightly asymmetric. One thing I should say is that the two branches don't always get the same label. If you train the network twice, one time it's going to come out dorsal and ventral, and the other time ventral and dorsal, whereas in the brain, every time you "train" it, it comes out the same: ventral is ventral and dorsal is dorsal, as far as we know. So there's some inbuilt asymmetry, but it's probably a very small one, because once you train with real data it will automatically bloom into this particular pattern of activity.

Cool. So it's very exciting that the brain can organize itself into something useful just from this. And people already do something similar in multi-head attention: they clearly separate computations that cannot interconnect, the random initialization probably does some symmetry breaking, and then you find that different heads respond to different things. People have investigated that; it's probably very much along the same lines. So I want to skip ahead a little bit to the concept cells. Is it this paper? Oh, that's this one as well.

I think there's been a lot of movement in that subfield.
And by the way, I want to tell your viewers, because I know a lot of them come from a machine learning background rather than a neuroscience background: it's hard to get into NeurIPS, but neuroscience is such a wide-open field. There are so many questions that if you care a lot about representation learning, it's a pretty easy field to jump into and get a positive reception. There are still a bunch of open questions, so grab your nearest neuroscientist and go write a paper. I encourage everybody to do it.

Yep, definitely. How to hack publications, there you go.

Yeah, there you go. So, CLIP. CLIP is weird. If there's one thing I would say, it's that when we saw the results of CLIP, both in terms of how good it is and the inner visualizations that Chris Olah and gang, Chelsea Voss as well, worked on, I think we were all kind of surprised, because they look a lot like the kinds of concept cells that you see in the hippocampus. The very famous paper on this is the one with the infamous "Jennifer Aniston cell".

I only know it from the context of your article: it's one cell that responds to pictures and the name and various other aspects of a person, not just...

Exactly, exactly. If I remember correctly, in this paper they worked with people with intractable epilepsy, human patients who had probe recordings in the hippocampus to figure out the nature of their epilepsy and how it could be treated. These patients spend a lot of time in the hospital just being bored, so sometimes they enroll in experiments, and these experiments tell us more about the human brain than is otherwise possible. I'm very thankful for the people who do this. In this particular instance, they presented different kinds of concepts and images.
And one of the cells they found had this amazing property: if you just showed the words "Jennifer Aniston", it would respond; if you showed the face of Jennifer Aniston, it would respond. They didn't do every other kind of control, but I imagine that if they had played the opening of Friends, it probably would have responded too, because it all comes with this general concept of Jennifer Aniston. Ever since then, people have been fascinated by this idea, although it's a much older one: the idea that you have a cell in your hippocampus that responds to your grandmother, the grandmother cell idea. One thing that was very interesting when we first saw CLIP is that it has cells that respond both to text and to images. In fact, you can run these new kinds of adversarial attacks in which you just write the wrong text on an object, and it fools the system into reading the text and mislabeling the image. That sounds very hippocampus-like to me. And in this particular paper, they actually looked at this problem and found that, out of all the different models they tested, CLIP could explain the most hippocampal data, which is super exciting. I'm sure people are going to drill down further into this finding.

Yeah, but it's CLIP specifically, and there are a lot of other unsupervised models. Somehow CLIP is the best, and we still don't understand why; the delta between it and the second-best model is huge. But why?

I think no one knows right now. And actually, just the visual aspects of CLIP are also very good at explaining some other data. So it's very interesting to think about what happens in a multimodal fashion. Experimentalists and neurophysiologists really like to isolate things and look at one thing at a time, but now you're talking about something that handles different modalities at once, and I think multimodal areas are going to be some of the next things that are really attacked by unsupervised and self-supervised methods.
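A minimal sketch with OpenAI's released CLIP package; the image path and labels are placeholders. One shared embedding space scores an image against text, which is exactly what the write-the-wrong-word attack exploits:

```python
# pip install git+https://github.com/openai/CLIP
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32")
model.eval()

image = preprocess(Image.open("apple.jpg")).unsqueeze(0)   # placeholder image
texts = clip.tokenize(["an apple", "an iPod", "a dog"])

with torch.no_grad():
    logits_per_image, _ = model(image, texts)
    probs = logits_per_image.softmax(dim=-1)

# Taping a note reading "iPod" to the apple can flip these probabilities.
print(probs)
```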
I mean, it's also a question: CLIP is huge, and it was trained on a huge amount of data; we don't exactly know what data went into it, right? There's a lot to untangle here. But the multimodality, I also feel, is a big part of what's going to bring us forward in AI. Probably also because the brain is always multimodal. Maybe with computers you can get a unimodal stimulus, but growing up in nature, you probably get zero stimuli that are just unimodal. You're always in this mode of multimodality.

Yeah. And one thing that's interesting, in particular for babies: if you've ever interacted with babies, they really like toys that make lots of noise, which drives parents crazy. But I think there's a reason for that. Why would a kid want a toy that makes a lot of noise, when clearly there's a lot of pressure to make toys as silent as possible because the parents are just trying to sleep? I think kids prefer them because they're multimodal stimuli, and you can do all sorts of causal inference about what happens when I hit this thing with that thing.

So this is the last paper I wanted to look at, though maybe you have more: it challenges the manifold perspective of deep learning. You've described it a little bit in the paragraph here; you say it challenges the manifold perspective and favors the causal perspective. What is meant here, and what does this paper tell us?

Oh yeah. You remember we were discussing earlier the mechanics of how you compare a brain area and a deep neural network. I think a lot of deep learning methods are rotation-invariant. If you take something like CLIP, for instance, you're learning a subspace, which is, I guess, 128-dimensional, from both the visual side and the text side, and you're trying to align the two in this 128-dimensional space.
}, { "start": 3438, "end": 3443.52, "text": " If you multiply the two by rotation matrix, and then the entire 128 dimensional space gets gets" }, { "start": 3443.52, "end": 3449.68, "text": " rotated, it's the same network, right? It really doesn't matter whether it's, whether it's rotated" }, { "start": 3449.68, "end": 3455.7599999999998, "text": " or not. What matters just the locations on the manifolds. And so if you're thinking about aligning" }, { "start": 3455.7599999999998, "end": 3463.84, "text": " a brain area and neural network with a with a regression, again, the rotation doesn't matter." }, { "start": 3464.64, "end": 3471.44, "text": " You're saying any any weight matrix is just as good as any other weight matrix. So that's the so" }, { "start": 3471.44, "end": 3478.32, "text": " So that's the so that's the underlying, I think, assumption. And I think that there's been a lot of" }, { "start": 3478.32, "end": 3484.2400000000002, "text": " work recently in neuroscience, focusing on this idea that, you know, single neurons like don't" }, { "start": 3484.2400000000002, "end": 3490.48, "text": " really matter. What matters is the latent subspace in which the near the neurons are responding. So" }, { "start": 3490.48, "end": 3497.12, "text": " if you have a population of 100,000 neurons, maybe they Yeah, it's 100,000 neurons. But if you present" }, { "start": 3497.12, "end": 3501.6, "text": " a bunch of stimuli, you find out that actually the latent sub and you do like an SVD on the matrix" }, { "start": 3501.6, "end": 3506.88, "text": " of responses, you find that latent subspace actually just five dimensional, or whatever." }, { "start": 3508.16, "end": 3516.16, "text": " So first of all, they're just random projections from this five dimensional subspace. And the" }, { "start": 3516.16, "end": 3522.4, "text": " and the large dimensional subspace doesn't really matter. So this paper, so sorry, sorry, and" }, { "start": 3522.4, "end": 3528.7200000000003, "text": " it's been a lot of work in neuroscience showing that this is the case, especially in, in motor" }, { "start": 3528.7200000000003, "end": 3534.8, "text": " cortex. So you know, you have tons and tons of neurons in your motor cortex as you're going for" }, { "start": 3534.8, "end": 3539.44, "text": " for reach movement. And yet it seems that these neurons really live in a very low dimensional" }, { "start": 3539.44, "end": 3550.1600000000003, "text": " subspace. So that's what we call the manifold theory of neuroscience is that idea that the" }, { "start": 3550.16, "end": 3555.12, "text": " neurons are in a high dimensional subspace, but they're just project random projections of some" }, { "start": 3555.12, "end": 3560.72, "text": " lower dimensional subspace. But one of the consequences that if it's random projections," }, { "start": 3560.72, "end": 3568.8799999999997, "text": " then each of the neurons individually should just be, you know, weird. It should, you know, respond" }, { "start": 3568.8799999999997, "end": 3573.04, "text": " to a bunch of different things, it shouldn't be shouldn't be able to place a label, because you" }, { "start": 3573.04, "end": 3578.08, "text": " could like neurons, you could rotate the entire space, it would still make sense, right? So there's" }, { "start": 3578.08, "end": 3585.68, "text": " no, there's no reason why an individual neuron should align with just like one axis in, in that" }, { "start": 3585.68, "end": 3596.56, "text": " particular subspace. Yeah, exactly. 
But neuroscientists really like labeled axes; that's one thing they're very fond of. You can imagine having an axis, like in Unity or Unreal, where you have an avatar and you hit one switch and it changes my smile from upward to downward. And... oh, sorry, my printer is haunted, so I'm just going to disconnect it, if you don't mind, because it makes the lights flash.

I find it weird that printers are one of the oldest technologies on the planet, yet they're still the most troubled. We should have figured this out by now, but we have not.

Yeah, it's too bad. I still print out papers, because there's research showing you retain more when you read something on a printed document rather than on a screen. But it's becoming so inconvenient that I think I'll have to abandon it soon. Okay, starting back, and I apologize: where do you want me to restart?

We were saying there's no particular reason why any single neuron should align with any axis, yet people find that they do.

Yes, exactly. And that might be because neuroscientists like to name things, and if something is not nameable, they'll say it's mixed selectivity or whatever and then just forget about it. That's also a very plausible explanation, so both of these things can be happening at the same time. But in this paper, they found that if you train a beta-VAE, that is, a VAE with a stronger weight on the KL term, it tends to find disentangled representations, so that the axes actually matter: one axis is my smile, another axis is how much of a unibrow I have, a third axis is what's up with my mustache, and so on. They found that this aligns pretty well with some neurons in one face-selective area of inferotemporal cortex. And they did some trickery comparing one-to-one alignment with ensemble alignment, and it looks like the good interpretation of this data is that it's more like a one-to-one alignment.
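The change relative to a vanilla VAE is one scalar; a sketch of the objective, with illustrative shapes and beta value:

```python
# Standard VAE loss with the KL term up-weighted by beta > 1, which
# empirically pushes the latents toward disentangled, labelable axes.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    # KL( N(mu, sigma^2) || N(0, I) ) per batch element
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl       # beta = 1 recovers the vanilla VAE

x = torch.randn(16, 64 * 64)
print(beta_vae_loss(x, 0.9 * x,
                    mu=torch.zeros(16, 10),
                    logvar=torch.zeros(16, 10)).item())
```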
And so that could be pretty interesting. But I do want to point out that there are certainly distributed representations in the brain. Just because this one area has non-distributed representations doesn't mean that's the case for the whole brain. And it might be for energetic reasons that we have this representation in this particular area: the distribution of responses over a stimulus ensemble is very important for how efficient the code is, because remember, neurons are super noisy. So you want a nice exponential distribution of responses in order to have an efficient code, given that you have this Poisson-like noise in the data.

And you say it favors the causal hypothesis. So maybe what's happening is that rather than simply encoding the signal it sees, the brain is actually building a causal model of what's happening: there are eyes and there are eyebrows, and the result of there being eyebrows is that they look a certain way. Then it would make sense that the structural priors are encoded in one space, and the picture we see is simply the manifestation of that.

Yeah, though maybe I misused the term causal here; I don't want it mistaken for causal inference. What I mean by it is a forward model. You can think of a directed graph in which there are a bunch of different factors: one of them is whether or not I wake up with a mustache today, another is how close together my eyes are, another is my nose. These factors are disentangled, meaning they're independent of each other, and I can just flip switches on and off and generate different faces. So the underlying naive model is the Mr. Potato Head model, in which you just switch out the different components.
And of course, there are specific holes that you" }, { "start": 3921.36, "end": 3930.6400000000003, "text": " can put the different the different things in. So I think that I guess like the question is," }, { "start": 3930.6400000000003, "end": 3937.04, "text": " like, are these factors in this this factor graph? Are they like, can you put labels on them and" }, { "start": 3937.04, "end": 3941.92, "text": " they correspond to one thing that we would identify as something that is independently" }, { "start": 3941.92, "end": 3947.52, "text": " changeable? So for instance, like, we understand that age and lighting, for instance, like those" }, { "start": 3947.52, "end": 3955.36, "text": " are two totally disentangled things that have nothing to do with each other. So the question" }, { "start": 3955.36, "end": 3961.12, "text": " is, are they are they different factors? Or you rotated like one is square root of two, like one" }, { "start": 3961.12, "end": 3967.36, "text": " over square root of two times age minus one over square root of two times lighting, and so on and" }, { "start": 3967.36, "end": 3974.8, "text": " so forth. And it looks like they're really aligned towards the factors that we can label," }, { "start": 3974.8, "end": 3980.48, "text": " and that are indeed independent, both in brands and in this particular model." }, { "start": 3980.48, "end": 3986.0800000000004, "text": " Do you think that it plays a big part that it because face, let's say facial structure," }, { "start": 3986.0800000000004, "end": 3992.5600000000004, "text": " is it is something that is truly, let's say the individual factors are actually independent" }, { "start": 3992.5600000000004, "end": 3999.2000000000003, "text": " because of, you know, genetic variation, allele crossing during during meiosis, sorry, or" }, { "start": 3999.2, "end": 4008.08, "text": " recombination, and so on these things actually go in a fairly, let's say, this uncorrelated" }, { "start": 4008.08, "end": 4013.8399999999997, "text": " uniform distribution in the human population. So almost every combination of narrow eyes," }, { "start": 4013.8399999999997, "end": 4019.8399999999997, "text": " wide eyes, you know, big mouth, small mouth, and so on is possible. And therefore, it might make" }, { "start": 4019.8399999999997, "end": 4025.4399999999996, "text": " just sense to let's say encode the individual factors as individual neurons, as you say," }, { "start": 4025.44, "end": 4032.96, "text": " maybe for energetic reasons. I think that that's, that's a really interesting hypothesis. But I" }, { "start": 4032.96, "end": 4037.28, "text": " don't think that that's that that's the case. I think that there might be like a general," }, { "start": 4037.28, "end": 4043.68, "text": " you know, algorithm that makes it that tries to disentangle these things into into different," }, { "start": 4044.56, "end": 4049.2000000000003, "text": " into different sub factors. And then as a consequence, there's this natural alignment" }, { "start": 4049.2, "end": 4058.72, "text": " with this other process. But, and of course, if it's the case that the kind of latent model that" }, { "start": 4058.72, "end": 4063.4399999999996, "text": " is inside the brain is better aligned with the latent model that's in reality, well, that's" }, { "start": 4063.4399999999996, "end": 4071.9199999999996, "text": " better. 
You know, you want the thing to reflect, but I don't think it's 100% true that, that these" }, { "start": 4071.92, "end": 4081.6800000000003, "text": " that these factors are really disentangled in reality. So for instance, you know, I, I," }, { "start": 4083.44, "end": 4089.28, "text": " like a unibrow versus mustache, like these two things are probably pretty correlated with" }, { "start": 4089.28, "end": 4099.52, "text": " with each other. Yeah, yeah, yeah, I see what I see what you mean. Yeah. So we're we're we're" }, { "start": 4099.52, "end": 4103.76, "text": " we've been we've been going through this a little bit. There's all I mean, there's a lot of there's" }, { "start": 4103.76, "end": 4109.68, "text": " other papers, which which are definitely also interesting, like the gloss ones is super" }, { "start": 4109.68, "end": 4113.84, "text": " interesting. Is there Yeah, is there one that you wanted to touch on particularly?" }, { "start": 4113.84, "end": 4120.240000000001, "text": " Well, I wanted to give for, you know, readers that are coming slightly outside of this field," }, { "start": 4120.240000000001, "end": 4124.8, "text": " and moving into this like very rapidly moving field, kind of an overview of what are the" }, { "start": 4124.8, "end": 4130.08, "text": " questions that people are interested in, like what are kind of the some of the interesting approaches" }, { "start": 4130.08, "end": 4138.400000000001, "text": " that people are using to, to tackle these and also encourage people to come in our field and, and," }, { "start": 4139.68, "end": 4148.400000000001, "text": " and, and, you know, get papers in and, and scoop us basically. So I really want to encourage people" }, { "start": 4148.400000000001, "end": 4154.72, "text": " to, to get into that. I think, I think that we've covered some of the papers that I think are the" }, { "start": 4154.72, "end": 4163.2, "text": " most interesting. And we'll see in the, I actually wanted to do a follow up on precisely the kind of" }, { "start": 4163.2, "end": 4168.240000000001, "text": " agent based representations that are coming because that that is coming down the line. And" }, { "start": 4168.240000000001, "end": 4172.72, "text": " I think that's going to be super interesting for this field. So maybe we can end with like," }, { "start": 4172.72, "end": 4179.6, "text": " some things to look forward to in the future. Sure. So one of the things that I think is going" }, { "start": 4179.6, "end": 4185.76, "text": " to be interesting for for the future is like really taking evolution seriously. So we saw the, actually" }, { "start": 4185.76, "end": 4195.360000000001, "text": " maybe if you can scroll to where I show Jess's, Jess Thompson's diagram of the different types of," }, { "start": 4196.72, "end": 4200.4800000000005, "text": " of models and how they all fit together. It's at the very start. It's at the intro." }, { "start": 4202.4800000000005, "end": 4208.96, "text": " So Jess has a really nice way I think of, of explaining this, which is that, you know," }, { "start": 4208.96, "end": 4213.76, "text": " there's some models which can really perform a task. And, you know, once we got to ImageNet 2012," }, { "start": 4213.76, "end": 4220.96, "text": " like that was, that was where we got there. 
And then, you know, in 2014, we really got into this" }, { "start": 4220.96, "end": 4227.04, "text": " accounts for neural activity part of, so, you know, we can find models that can both perform a task," }, { "start": 4227.04, "end": 4232.56, "text": " which is biologically relevant and accounts for neural activity. I think this year was a big year" }, { "start": 4232.56, "end": 4236.96, "text": " for biological plausibility. And I want to say this is the last word, because clearly there's" }, { "start": 4236.96, "end": 4246.4, "text": " way more work to be doing there. You're going to have models which have realistic, biologically" }, { "start": 4246.4, "end": 4251.52, "text": " realistic kinds of gradient descent, or replace gradient descent with something that's more" }, { "start": 4251.52, "end": 4255.92, "text": " biologically plausible. You're going to have Dale's Law, you know, so excitatory neurons" }, { "start": 4256.56, "end": 4261.76, "text": " only make connection, only makes excitatory connections and inhibitory neurons only make" }, { "start": 4261.76, "end": 4266.4800000000005, "text": " inhibitory connections and you'll have normalization and you have temporal dynamics and so on and so" }, { "start": 4266.48, "end": 4271.919999999999, "text": " forth. So that's like, the next five years is probably just going to be to fill in this" }, { "start": 4271.919999999999, "end": 4276.879999999999, "text": " biologically plausible. But there's also could have evolved. I think that that's that's like a" }, { "start": 4276.879999999999, "end": 4284.48, "text": " super interesting unknown questions and people are going to start to think about this problem" }, { "start": 4284.48, "end": 4290.08, "text": " in a serious fashion. And I want to point out there's this there's this recent paper that I" }, { "start": 4290.08, "end": 4297.68, "text": " don't talk about here, which from Fei-Fei Li, which is about evolving different kinds of agents that" }, { "start": 4297.68, "end": 4303.76, "text": " can solve different kinds of reinforcement learning tasks that actually has a an interesting" }, { "start": 4304.48, "end": 4311.28, "text": " evolution component to it. So I think we're going to start to see and we can actually like see the" }, { "start": 4311.28, "end": 4316.08, "text": " process by which the brain can bootstrap itself into existence, which I think is going to teach" }, { "start": 4316.08, "end": 4322.08, "text": " us something about what it is to be human. And I'm sure there'll be TED Talks and books and so" }, { "start": 4322.08, "end": 4328.8, "text": " forth. But that's going to take like another five, 10 years. Another thing that I'm excited to look" }, { "start": 4328.8, "end": 4340.5599999999995, "text": " at in the in the future is I just wrote my notes here hands. Hands are great. Hi. I think that one" }, { "start": 4340.56, "end": 4348.160000000001, "text": " one one thing that we that we're having like really taken seriously so far is the role of" }, { "start": 4348.160000000001, "end": 4356.080000000001, "text": " weak supervision from a parental perspective. But if you think of like a parent and their baby," }, { "start": 4356.080000000001, "end": 4360.72, "text": " they're going to point at things they're going to say this is this, this is that. And you know," }, { "start": 4360.72, "end": 4368.72, "text": " it has had like hands have had a huge role in our evolution as as homo sapiens. 
And it's even like" }, { "start": 4368.72, "end": 4383.04, "text": " thought that sign language preceded the appearance of voice speech. So that we probably have somewhere" }, { "start": 4383.04, "end": 4389.360000000001, "text": " in our noggin, some areas which are highly selective for hand gestures, and which are" }, { "start": 4389.360000000001, "end": 4395.92, "text": " used for a kind of weak supervision. That's important for for parents. So understanding" }, { "start": 4395.92, "end": 4405.52, "text": " what happens with that personal space and what what happens as as we use tools is clearly important" }, { "start": 4405.52, "end": 4411.68, "text": " from like just this that curiosity of how you know, we went from Australia to get the techies to" }, { "start": 4412.56, "end": 4419.2, "text": " the modern humans. And I think it's going to teach us a lot about yeah, what it means to be human." }, { "start": 4419.2, "end": 4426.8, "text": " Awesome. Last question from my side with you're clearly interested in how the brain works, right?" }, { "start": 4426.8, "end": 4433.5199999999995, "text": " And and see and seeing, you know, can we can we make parallels between AI models, like deep models" }, { "start": 4433.5199999999995, "end": 4443.679999999999, "text": " and brain areas and so on? Do you think that it is a necessity that we sort of feed back the knowledge" }, { "start": 4443.68, "end": 4451.280000000001, "text": " into the deep learning realm? So should we should we put more effort into saying, how does the brain" }, { "start": 4451.280000000001, "end": 4458, "text": " work? Okay, let's do that. Because at least that's that's like one example of where intelligence was" }, { "start": 4458, "end": 4464.4800000000005, "text": " achieved. Or do you think that, you know, how the brain works is just like a happenstance of nature" }, { "start": 4464.4800000000005, "end": 4472.240000000001, "text": " and evolution and energy restrictions. And, you know, it's not it's not super like, let's just do AI," }, { "start": 4472.24, "end": 4480.639999999999, "text": " you know, the way it works best, or option three is something like, what, however we build AI," }, { "start": 4480.639999999999, "end": 4487.599999999999, "text": " if we solve the task, it will automatically align with the brain, because there's like only one real" }, { "start": 4487.599999999999, "end": 4493.84, "text": " way to solve the task, like in which, in which of these, let's say camps are do you find yourself in?" }, { "start": 4493.84, "end": 4502.08, "text": " Yeah, that's a that's super interesting. And I want to say that so people have made for a long time" }, { "start": 4502.08, "end": 4508.4800000000005, "text": " that claim that if we just study the brain, we'll be able to make better machines. Yeah, so that" }, { "start": 4508.4800000000005, "end": 4513.52, "text": " that comes about and again and again. And I do want to point out that this actually did happen," }, { "start": 4513.52, "end": 4519.4400000000005, "text": " as we saw with convolutional neural networks, and the whole story of Yubil and Weasel and the" }, { "start": 4519.44, "end": 4527.04, "text": " Neocognitron and Yalda Kuhn and and eventually ImageNet 2012. But, you know, it's really only" }, { "start": 4527.04, "end": 4535.28, "text": " happened a few times, it's not clear how much more we have to like how much how many more instances" }, { "start": 4535.28, "end": 4540.879999999999, "text": " of this will happen. 
That's certainly the view from from some people at DeepMind, for instance," }, { "start": 4542.4, "end": 4546.639999999999, "text": " that have really like gone into cognitive neuroscience and have started to do their own" }, { "start": 4546.64, "end": 4550.96, "text": " fMRI experiments to really, you know, tackle these problems. I think it's really, really interesting." }, { "start": 4550.96, "end": 4556.72, "text": " But I'm not I think that it's going to teach us a lot about the human brain, but not necessarily" }, { "start": 4556.72, "end": 4563.84, "text": " about how to make intelligent machines, because we're, you know, like these are different systems," }, { "start": 4563.84, "end": 4568.88, "text": " as you point out, and there are certainly things about the brain which are kludgy and, and, and" }, { "start": 4569.200000000001, "end": 4574.96, "text": " certainly suboptimal. So how the retina is wired up is the classic example, it's wired up in the" }, { "start": 4574.96, "end": 4580.72, "text": " wrong way around, octopuses have haven't the right way around, and it doesn't seem to bother them." }, { "start": 4581.28, "end": 4589.6, "text": " So that's a that's a clear example. But maybe there's some thing that we can that we can" }, { "start": 4589.6, "end": 4595.44, "text": " identify with with brains and that is going to unlock the next generation of machine learning." }, { "start": 4595.44, "end": 4599.44, "text": " Maybe it's spiking neural networks, for instance, you know, people are demonstrating like," }, { "start": 4599.44, "end": 4605.599999999999, "text": " you could get something which is the like 1000 times or 10,000 times more energy efficient if" }, { "start": 4605.599999999999, "end": 4610.48, "text": " you just use these mixed signals spiking neural networks. So I don't know." }, { "start": 4611.36, "end": 4617.04, "text": " Yeah, that would I mean, 1000 times 10,000 times that is sort of the orders of magnitude you spoke" }, { "start": 4617.04, "end": 4622.5599999999995, "text": " about before when it came to to data. Well, those are so here, I'm thinking about" }, { "start": 4622.56, "end": 4631.4400000000005, "text": " the energy efficiency. So like one recurrent super comparable. No, I think like the the one thing I" }, { "start": 4631.4400000000005, "end": 4636.080000000001, "text": " would point out here is that if you look at all these papers, and you add up all of the their," }, { "start": 4636.64, "end": 4642.080000000001, "text": " their training time and carbon emissions, it's it's probably like pretty substantial. Although I will" }, { "start": 4642.080000000001, "end": 4648.320000000001, "text": " say that, you know, the paper that that I'm the first author of here actually have the machine" }, { "start": 4648.32, "end": 4656.32, "text": " that I train this thing on like right here. And it's it's still like it's still a one GPU machine." }, { "start": 4656.32, "end": 4662.24, "text": " So again, I encourage your your your viewers to to get into this because you can still do things" }, { "start": 4662.24, "end": 4668.48, "text": " with GTX 1080. That's awesome. But I think that one thing that's that's going to be really" }, { "start": 4668.48, "end": 4674.48, "text": " interesting is that by studying, you know, better machines, we'll be able to start to understand" }, { "start": 4674.48, "end": 4679.599999999999, "text": " how to bring this back from the side of machine learning and bring it back into human health." 
}, { "start": 4679.599999999999, "end": 4687.12, "text": " So that's very interesting. And it's by and wide, hasn't been explored thus far. But that I'm kind" }, { "start": 4687.12, "end": 4693.44, "text": " of a fan of the opposite direction that most people are really going into. So I hope that" }, { "start": 4693.44, "end": 4698.4, "text": " that answers your question. I, I don't think that naturally, if you just train on your own network" }, { "start": 4698.4, "end": 4703.839999999999, "text": " to solve a task, it's going to do it the same way that the brain does. But I think that's" }, { "start": 4703.84, "end": 4708.24, "text": " the brain does because I don't think that that's that's really pointed out. I don't think that" }, { "start": 4708.24, "end": 4714.88, "text": " GPT three does things the same way that a human does in any sort of meaningful way. No way." }, { "start": 4717.2, "end": 4722, "text": " Even though they're both very good at language. Yeah, maybe GPT four." }, { "start": 4724.56, "end": 4728.08, "text": " Well, if you ask Gary Marcus, he'll say that there's no way it'll never happen." }, { "start": 4728.08, "end": 4736.4, "text": " Neurosymbolic AI all the way. Yeah. All right. Cool. Yeah. For every to everyone. Follow Patrick." }, { "start": 4737.44, "end": 4744.16, "text": " The many he's written papers, lots of papers. You're also the CTO of Neuromatch Academy. Is" }, { "start": 4744.16, "end": 4751.2, "text": " that correct? So I, so I helped Neuromatch start actually, so I'm no longer CTO there. But it's a" }, { "start": 4751.2, "end": 4758.4, "text": " great occasion for for people that want to learn more about that intersection between neuroscience" }, { "start": 4758.96, "end": 4767.679999999999, "text": " and artificial intelligence to to bring that about. So when we started this a couple of years ago," }, { "start": 4767.679999999999, "end": 4774.48, "text": " we just figured, oh, well, do a few video lectures and present that online. And it was at the start" }, { "start": 4774.48, "end": 4780.96, "text": " of the pandemic and people were bored. So the response was out of this world. So we have" }, { "start": 4780.96, "end": 4786.56, "text": " we had over 2000 applications and people from all over the world wanted to learn more about" }, { "start": 4786.56, "end": 4793.52, "text": " both neuroscience and artificial intelligence and their intersection. So we ended up having," }, { "start": 4793.52, "end": 4799.92, "text": " I think, 1700 students in the first cohort and having 200 TAs. And so it became a big thing" }, { "start": 4799.92, "end": 4805.52, "text": " very fast. So I'm very happy that I helped bring that about. It was definitely one of the most" }, { "start": 4805.52, "end": 4812.64, "text": " stressful times in my life. But we could bring together people from very disparate backgrounds," }, { "start": 4813.4400000000005, "end": 4821.200000000001, "text": " whether it's people in emerging economies that are at local universities there, and people from" }, { "start": 4821.200000000001, "end": 4827.84, "text": " from Ivy League universities in the US, Canada and, and the UK together and working with the" }, { "start": 4827.84, "end": 4834.72, "text": " same curriculum and under the same circumstances. So which was very cool. And then last year, we did" }, { "start": 4834.72, "end": 4841.84, "text": " the same but doubled in size as well. So I hope that we'll be able to, to double this year." 
}, { "start": 4842.56, "end": 4850.64, "text": " I'm sure the announcement actually for for the next version of Neuromagic Academy will happen" }, { "start": 4850.64, "end": 4859.04, "text": " pretty soon. So if you have people in in your audience that are interested in that, I highly" }, { "start": 4859.04, "end": 4865.2, "text": " recommend to them to do that. It's a great occasion to learn. And we already have, you know," }, { "start": 4865.2, "end": 4870.24, "text": " materials from last year online. So if you want to get started on your learning, you can do that" }, { "start": 4870.24, "end": 4876.56, "text": " today. Excellent. Cool. Well, Patrick, it was wonderful, wonderful having you here. This is a" }, { "start": 4876.56, "end": 4882.48, "text": " new world to me and I think for to a lot of people listening right here. So thank you so much. And I" }, { "start": 4882.48, "end": 4889.36, "text": " hope to see you again with with next year's review. Awesome." } ]
i4H0kjxrias
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reformer: The Efficient Transformer
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "seq2seq", "bert", "memory", "lsh", "locality sensitive hashing", "reversible", "revertible", "flow", "long sequence" ]
The Transformer for the masses! Reformer solves the biggest problem with the famous Transformer model: its huge resource requirements. By cleverly combining Locality Sensitive Hashing and ideas from Reversible Networks, the classically huge footprint of the Transformer is drastically reduced. Not only does that mean the model uses less memory, but it can process much longer input sequences, up to 16K tokens with just 16 GB of memory! https://arxiv.org/abs/2001.04451 https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html Abstract: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L²) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences. Authors: Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at Reformer, the efficient transformer, by Nikita Kitaev, Łukasz Kaiser and Anselm Levskaya. This is a paper that tries to reduce the extreme resource requirements of the transformer model. If you haven't seen the transformer model before, I suggest you go watch, for example, my video on the paper that introduced it, Attention Is All You Need. The most famous transformer is called BERT, B-E-R-T, and you can look that up as well; I've made a video about it. So what's the issue here? If you remember transformers, they need a lot of memory. Why? Because in each layer they compute these attention things. Let's recap shortly. In a transformer you propagate information layer by layer. You have one layer with some signal, and then the next layer to which you try to propagate that signal. Each unit of the next layer is assigned a query, and queries are just vectors. So basically the next layer has the ability to ask the last layer for what it wants; this is an intrinsic property of attention, and as I said, I explain it in detail in the Attention Is All You Need video. These are what's called queries, Q. The current layer, in turn, exposes what are called keys, and keys again are vectors. The way information is propagated to the next layer is this: when we consider a node in the next layer, it is going to look at which keys in the last layer match its query the most, say these two keys here. The match is measured by the inner product, so by the angle between the vectors. Information is then aggregated by simply taking a weighted average of the values. Information is actually coming in from all the nodes, but since only these keys match the query, the information is propagated mostly from them to this unit. We could do this for another unit: which key is its query going to be matched to? Probably this one key, and maybe another one a little bit. So the information of that node in the next layer will be whatever information comes in from the strongly matching node, plus a little bit of the weakly matching one. This is not a hard selection; it's called soft attention. A little bit of information goes everywhere, but the majority comes from the nodes where the keys match. So these are queries, these are keys, and technically the things coming in are called values. But imagine the values simply as the information to be propagated, and the queries and the keys as responsible for routing that information to the next layer. All of these things are learned: the queries, the keys, and the values. Now what's the problem? The problem is between the queries and the keys: you have to match every single query with every single key in order to find out where information goes.
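To make that routing concrete, here is a minimal sketch of standard soft (dot-product) attention in NumPy; the names, shapes and toy data are my own illustration, not code from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention(Q, K, V):
    """Q, K, V: (L, d). Every query is compared with every key,
    so an (L, L) score matrix is materialized."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # all pairwise inner products
    weights = softmax(scores, axis=-1)        # soft routing weights
    return weights @ V                        # weighted average of values

L, d = 8, 16
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
out = soft_attention(Q, K, V)                 # (L, d)
```

The (L, L) score matrix in the middle is exactly the thing that becomes prohibitive for long sequences.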
So if you have L keys and L queries, this becomes on the order of L squared operations, and of course L squared values that you have to compute. And since these are all vectors, each of those L squared comparisons is itself an inner product of d-dimensional vectors, where d is the dimensionality. So: L squared inner products between d-dimensional vectors. That's not an easy thing for resources to do. You need a lot of memory to hold all of this at the same time, and a lot of compute to calculate it. The Reformer aims to solve this problem, this giant space problem that transformers have: memory first, and computation to a lesser degree. Mostly it's a memory issue. Alright, so what is happening here? You see that this product between the query and key matrices clearly gives you this kind of squared thing. So what does the Reformer do instead? The trick is to create what's called a hashing scheme, or buckets. In creating buckets, what you want to do is group similar things together. Let's say we create four buckets, and we label them with the up, right, down and left directions as vectors. Now we simply put each query into the bucket whose direction it matches best: this vector goes here, this one probably goes there, and so on, until every query is assigned a bucket. Then we also put the keys into the same buckets: this key probably goes to this bucket, that key goes to that one. And you already see the point. Before, we cared about this particular query and this particular key; we looked at them and said those two will probably route information to each other because they're similar. And now they both end up in the same bucket. So the idea is to create a scheme where you throw these vectors into buckets such that if two vectors are similar, they end up in the same bucket with high probability. Then you only have to really compare things within the same bucket, and not across all of these L squared pairs. That's the idea, and the technique is called locality sensitive hashing, LSH for short. The idea is the following: you have two vectors v1 and v2, and you have a distance measure d. What you want is this: if the distance between v1 and v2 is small, you want them in the same bucket with high probability; and if the distance is large, you want them in different buckets with high probability, or equivalently, in the same bucket with low probability.
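As a toy illustration of why bucketing helps, here is a sketch of comparing only within buckets; the hash function at the end is deliberately crude (just the index of the largest coordinate) and is my own stand-in, not a proper LSH scheme:

```python
from collections import defaultdict
import numpy as np

def bucketed_pairs(queries, keys, bucket_of):
    """Yield only (query, key) index pairs that share a bucket,
    instead of all len(queries) * len(keys) pairs."""
    buckets = defaultdict(lambda: ([], []))
    for i, q in enumerate(queries):
        buckets[bucket_of(q)][0].append(i)
    for j, k in enumerate(keys):
        buckets[bucket_of(k)][1].append(j)
    for qs, ks in buckets.values():
        for i in qs:
            for j in ks:
                yield i, j    # only these inner products get computed

bucket_of = lambda v: int(np.argmax(np.abs(v)))  # crude placeholder hash
rng = np.random.default_rng(0)
qs, ks = rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
pairs = list(bucketed_pairs(qs, ks, bucket_of))  # typically far fewer than 36
```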
This is all formalized, and I can direct you to the Wikipedia page on locality-sensitive hashing. It's pretty good: it gives a concise definition and a number of examples. One example I'd like to give here: the bucketing scheme will of course depend on what your distance measure is. Consider the distance measure to simply be the Hamming distance, the number of differing bits between binary vectors. Say we have the vectors 0 1 0 1, then 0 1 1 1, and then 1 0 0 1. You can see that the first two vectors are much closer together than the last one. In terms of bit differences, one scheme to do locality sensitive hashing is to simply subsample bits. In this slightly constructed example, we subsample the first two bits and construct the buckets according to those bit values. Since we sample two bits, we have four buckets: 0 0, 0 1, 1 0 and 1 1. That's the concept of locality sensitive hashing: you have these buckets, and each vector goes into the bucket given by its sampled bits. The first two vectors have 0 1, so they go into the 0 1 bucket, and the last vector has 1 0, so it goes into the 1 0 bucket. You end up with what you want: the two close vectors in the same bucket, and the far-apart vector in a different bucket. Of course that doesn't always work; you can be unlucky in the subsampling, but that's the trade-off you go for. With low probability, things that are close together end up in different buckets, and then you basically lose the fact that they are close to each other. The kind of locality sensitive hashing used in the Reformer is what's called random projections. You have a bunch of vectors, namely the keys and queries, and you want to create buckets such that vectors that are close together end up in the same bucket and vectors that are far apart end up in different buckets. We are dealing with the cosine distance here, so we care about the angle between vectors, and a cool way to do this is to use random hyperplane projections. The cool thing about it is that it works for the cosine distance, and you can basically choose how many buckets you create. Let's say we want four buckets again. What we need is two hyperplanes, and we simply create them through the origin at random. Label them hyperplane one and hyperplane two, and call the two sides of each the plus side and the minus side. Now we assign each vector a sign pattern according to which side of each hyperplane it lies on. This vector here gets plus plus, because it's on the plus side of both hyperplanes. These vectors are also plus plus. This one is on the negative side of plane two but on the positive side of plane one, so it's plus minus, and these over here are minus minus. And those sign patterns are your buckets.
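Here is a minimal sketch of this random-hyperplane sign hashing, under my own naming; the actual Reformer uses a closely related scheme based on random rotations, but the sign idea is the same:

```python
import numpy as np

def lsh_buckets(vectors, n_planes, seed=0):
    """Hash each row of `vectors` (L, d) to a bucket id in
    [0, 2**n_planes) via random hyperplanes through the origin.
    Vectors with a small angle between them collide w.h.p."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(vectors.shape[1], n_planes))  # plane normals
    signs = (vectors @ planes) > 0           # which side of each hyperplane
    return signs.astype(int) @ (2 ** np.arange(n_planes))   # bits -> id

vecs = np.random.default_rng(1).normal(size=(10, 16))
ids = lsh_buckets(vecs, n_planes=2)          # 2 planes -> up to 4 buckets
```

A vector and a slightly rotated copy of it produce the same signs unless a hyperplane happens to pass between them, which is exactly the locality-sensitivity property we want.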
So you would group the vectors with the same sign pattern together: that vector with these vectors, and so on. Now, the combination of this with attention: you've seen that attention uses a softmax, and the softmax is usually dominated by the largest elements. Since we compute inner products, this means the softmax is dominated by the vectors that have the largest inner products with the query, i.e. the ones closest to it. So basically you don't have to look at all of these L squared pairs if you can find the ones with the closest distance; you can pretty much ignore the others. And LSH allows you to do this. Build buckets of vectors with similar directions; then you only have to compare the vectors within a bucket to each other. That's generally not a lot of vectors, and that's how you save a lot of work. If your query vector, for example, is right here, you'll only have to care about these three vectors in the same bucket, and you can ignore all the rest of the space. Of course, the more hyperplanes you have, the more buckets you'll have and the fewer vectors you'll have in the same bucket. That's the general idea. I find this explanation a bit easier; you can equivalently explain it with random rotations of the space, and you can think about how that ends up being exactly the same thing as what I just explained. I just like my explanation better, I think. Alright, so the way they use this, and they have an illustration of it right here, is the following. They have a sequence of queries and keys, where the queries and keys are actually tied to be equal, which is a thing you can do in transformers; don't worry too much about whether they're different or not. Then they do this LSH bucketing, where the color of a cell is the LSH bucket it ends up in, and then they sort by bucket, as you can see. Now they do an additional thing, which is called chunking. As you can see, there is not the same number of vectors in each bucket, and that is sometimes a problem, because even though you've reduced the memory, the memory requirement is still dominated by the largest bucket: whatever bucket has the most vectors will pretty much be your memory requirement. You no longer have to compute all the L squared things, but you still have to compute this quantity B squared, where B is the maximum bucket size. And that could still be large. If you look at the distribution of bucket sizes, most buckets will have a standard number of vectors, but a few buckets will have a lot of vectors, and your memory requirement is still dominated by those. So they do an additional thing called chunking: they take fixed-size chunks, here always of size four, and they say, alright, these are our chunks, and we will only compute attention within the chunks. Now it could be that the same bucket is actually split between chunks, and that's why they add one more rule: you can also attend to things in your neighboring chunk. So you're restricted to either your own chunk or your neighboring chunk; note that there aren't any arrows going further than that.
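Putting the pieces together, here is a rough sketch of that sort-and-chunk attention pattern. It is my own simplification: shared queries and keys, a single hash round, each chunk looking only at itself and the previous chunk, and none of the masking details the real implementation needs:

```python
import numpy as np

def lsh_attention(QK, V, n_planes=4, chunk=16, seed=0):
    """QK: (L, d) shared queries/keys, V: (L, d) values."""
    L, d = QK.shape
    planes = np.random.default_rng(seed).normal(size=(d, n_planes))
    ids = ((QK @ planes) > 0).astype(int) @ (2 ** np.arange(n_planes))
    order = np.argsort(ids, kind="stable")     # sort positions by bucket
    qk, v = QK[order], V[order]
    out = np.zeros_like(v)
    for start in range(0, L, chunk):           # fixed-size chunks
        lo = max(0, start - chunk)             # may also look one chunk back
        scores = qk[start:start + chunk] @ qk[lo:start + chunk].T / np.sqrt(d)
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)          # softmax over the small window
        out[start:start + chunk] = w @ v[lo:start + chunk]
    return out[np.argsort(order)]              # undo the bucket sort

rng = np.random.default_rng(3)
QK, V = rng.normal(size=(2, 128, 32))
out = lsh_attention(QK, V)   # (128, 32); the largest score matrix is 16 x 32
```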
They have a diagram here of which things you can attend to: you can attend to yourself, or to things in your neighboring chunk, but not to anything further away. That's basically the concept of saving memory. Now, if we call the chunk size C, your memory requirement is pretty much on the order of C squared per chunk, plus a term of similar order for attending to the neighboring chunk, instead of the full sequence length squared. So you bring your memory requirements down quite a bit. That's the general idea here. But then they face another problem, where they say: hold on, we actually have one more issue, and that is that these transformers have to backpropagate. You forward propagate these things, and we've kind of solved the L squared computation issue, but as you go from layer to layer to layer, you still have to backpropagate, and in order to backpropagate you usually have to remember all of these intermediate activations. Why? Because in each layer of the forward propagation you might lose some information. Imagine you have a layer that maps three distinct vectors onto just two points; a linear layer can do that, for instance by mapping to a lower-dimensional subspace, a dimension reduction. Because of this, in order to do proper backprop you actually have to remember the layer inputs. This is again a problem for the transformer: even though we've gotten rid of the L squared computation, all these activations have to be remembered, and that takes a lot of memory. The way to solve this is to use invertible layers. What that means is that if I propagate information forward, forward, forward, I can figure out what the earlier activations were simply from the later ones, so I don't need to store them; I can recompute them during the backward pass. And this works if the layer is invertible: if the function f of the layer is invertible, I can actually write down the inverse of f, and it is well defined. That is of course a pretty big restriction, and the way they achieve it, and I like to go to the blog post here, is with an idea from reversible networks: they always have two sets of activations, that's what you see here, X1 and X2, and in each layer only one of them is updated, in a residual fashion, as a function of the other. You can see here that layer 1 updates X2 while X1 remains the same and goes to Y1; then layer 2 only updates Y1 in order to construct Z1, while Y2 remains the same to become Z2. And then you can revert the layers: you can figure out what the input activations were from the outputs during the backward pass. That's extremely good if you want to save memory, but of course it clearly restricts you: you have to stick to this kind of architecture. This idea actually isn't new.
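A minimal sketch of such a reversible residual block, in the style of RevNets; the two little functions below are toy stand-ins of my own, where the Reformer uses the attention and feed-forward sub-layers:

```python
import numpy as np

def rev_forward(x1, x2, f, g):
    """y1 = x1 + f(x2); y2 = x2 + g(y1). Inputs need not be stored."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_inverse(y1, y2, f, g):
    """Exactly undoes rev_forward, recovering the inputs from the outputs."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

f = lambda x: np.tanh(1.7 * x)   # toy stand-in for the attention sub-layer
g = lambda x: np.tanh(0.3 * x)   # toy stand-in for the feed-forward sub-layer

x1, x2 = np.random.default_rng(2).normal(size=(2, 4))
y1, y2 = rev_forward(x1, x2, f, g)
r1, r2 = rev_inverse(y1, y2, f, g)
assert np.allclose(x1, r1) and np.allclose(x2, r2)  # perfectly recovered
```

During training you keep only the outputs of the last layer and run rev_inverse layer by layer inside the backward pass, so activation memory no longer grows with the number of layers.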
This has been used many times in things like normalizing flows, and I want to highlight one specific paper here; I chose it because it has these nice diagrams that show exactly this. You see they have two sets of activations, X1 and X2, and in forward propagation they only update one of them; then in the backward direction, in what's called inverse propagation, they can figure out what those activations were. And they couple these in exactly the same way. This drawing here might be even more similar, where they alternate between updating the two activations. So you can think of this as a way to simply make the function you're representing with the neural network invertible. That is a giant constraint on your architecture, but these normalizing flow methods accept it because they need an invertible layer, with a tractable inverse and Jacobian, in order to compute their flow; that's why they originally did it. So it's not a particularly new idea. Strangely, I haven't found any of the flow literature cited; they do cite the reversible residual network paper, which is probably where they got the idea from. So with these two things you can now save the giant computation, and you also don't have to store the forward activations, which means they can now take giant input sizes. You may remember transformers like BERT: BERT can use something like 512 tokens in its input sequence, meaning the sequence you can look at with BERT at a time is 512 long and not a bit longer. There have been some extensions to that; for example, I believe XLNet has pushed this to something like C times 512, where C is a smallish constant, by carrying over information between sequences. But this thing here, as they calculate, could take in something like 64,000 tokens, and that would use in total 16 gigabytes of memory, which is available on a high-end GPU. So this is a giant step forward in producing transformers that can actually take long inputs. And here you see the memory and time complexity. You can look at these things yourself, but you can see that the squared terms from the original transformer now vanish, and a lot of the constants are actually smaller; for example, the chunk size appears in there instead of the entire sequence length. So that's basically the paper. They show that you can actually input those long sequences, and they apply this to images: there's ImageNet pixel by pixel, which is a lot of pixels and would have been absolutely unthinkable with one of the original transformers. And with that, I invite you to check out the paper and the blog post, and I'll see you next time. Bye bye.
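As a rough back-of-the-envelope check on those numbers (my own arithmetic, counting only float32 attention scores for a single head and ignoring everything else):

```python
L = 64_000                          # sequence length from the paper's claim
full = L * L * 4 / 1e9              # one (L, L) float32 score matrix, in GB
print(f"full attention scores: {full:.1f} GB")   # ~16.4 GB for ONE matrix

C = 64                              # a chunk size, my illustrative choice
chunked = (L // C) * (C * 2 * C) * 4 / 1e6       # each chunk sees 2C keys
print(f"chunked scores: {chunked:.1f} MB")       # ~32.8 MB in total
```

So the quadratic score matrix alone would already exhaust a 16 GB GPU, which is why removing the squared terms, together with not storing per-layer activations, is what makes the 64K-token setting feasible.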
[ { "start": 0, "end": 5.84, "text": " Hi there! Today we'll look at Reformer, the efficient transformer by Nikita" }, { "start": 5.84, "end": 13.72, "text": " Kitaev, Lukas Kaiser and Anselm Levskaia. This is a paper that tries to reduce the" }, { "start": 13.72, "end": 18.6, "text": " extreme resource requirements of the transformer model. Now if you haven't" }, { "start": 18.6, "end": 25.2, "text": " seen the transformer model before, that's this thing, I suggest you go watch for" }, { "start": 25.2, "end": 29.36, "text": " example my video on it, Attention is All You Need, it's called, where the" }, { "start": 29.36, "end": 36.56, "text": " transformer is introduced. The most famous transformer is called BERT, B-E-R-T," }, { "start": 36.56, "end": 43.72, "text": " and you can also look that up, I've made a video about this. So what's the issue" }, { "start": 43.72, "end": 50.480000000000004, "text": " here? If you remember transformers, they need a lot of memory. And why? That's" }, { "start": 50.480000000000004, "end": 56.92, "text": " because they compute, in each layer they compute these attention things. Let's" }, { "start": 56.92, "end": 63.64, "text": " recap shortly. In a transformer you propagate information layer by layer. So" }, { "start": 63.64, "end": 71.48, "text": " you have layer here with some signal, and then the next layer that you try to" }, { "start": 71.48, "end": 80.44, "text": " propagate the signal. Now what you do, you assign, you assign key queries to each of" }, { "start": 80.44, "end": 84.92, "text": " the next layer. So each of the next layer has queries, and queries are just" }, { "start": 84.92, "end": 90.44, "text": " vectors. This is a vector, this is a vector, this is a vector, and so on. So" }, { "start": 90.44, "end": 97.48, "text": " basically the next layer has the ability to ask, to ask the last layer what it" }, { "start": 97.48, "end": 104.2, "text": " wants. This is a kind of an intrinsic property of attention, and I, as I said, I" }, { "start": 104.2, "end": 108.92, "text": " explained this in detail in the video, Attention is All You Need. Basically" }, { "start": 108.92, "end": 115.88, "text": " these are what's called queries, Q. And then this layer is exposing what are" }, { "start": 115.88, "end": 124.28, "text": " called keys, and keys again are vectors. So vector, vector, vector, vector, and so on." }, { "start": 124.28, "end": 130.48, "text": " So keys are vectors, and the way that the information is propagated to the next" }, { "start": 130.48, "end": 138.83999999999997, "text": " layer is whenever, whatever, we consider for example this node here, right, this" }, { "start": 138.83999999999997, "end": 144.79999999999998, "text": " node, let's make that yellow. When we consider this node here, it is going to" }, { "start": 144.79999999999998, "end": 152.48, "text": " look in the last layer which, which keys match my key the most. And in this case" }, { "start": 152.48, "end": 158.2, "text": " it will probably be this key and this key, right, they match the key the most." }, { "start": 158.2, "end": 164.11999999999998, "text": " And here we look at the inner product, so the angle between the vectors. And then" }, { "start": 164.11999999999998, "end": 171.56, "text": " information is aggregated by simply having a weighted average of the values." }, { "start": 171.56, "end": 176.56, "text": " So information is coming in here and here. 
Actually information is coming into" }, { "start": 176.56, "end": 181.48, "text": " all the nodes, but since only these keys match, the information will be propagated" }, { "start": 181.48, "end": 189.79999999999998, "text": " like this, to this unit. We could do this for another unit, for example this unit" }, { "start": 189.79999999999998, "end": 195.83999999999997, "text": " right here. What's the value of this unit? Well we have to look at the key here." }, { "start": 195.83999999999997, "end": 201, "text": " Which key is it going to be matched to? It's probably going to be matched to" }, { "start": 201, "end": 208.23999999999998, "text": " this key right here. And probably no other key really. Maybe this key a little" }, { "start": 208.24, "end": 213.04000000000002, "text": " bit. So the information of that node in the next layer will be whatever's" }, { "start": 213.04000000000002, "end": 218.08, "text": " information is coming in here, routed there, and a little bit of this" }, { "start": 218.08, "end": 223, "text": " information. So this is kind of a, it's not a hard, it's called soft attention." }, { "start": 223, "end": 228.48000000000002, "text": " So there's a little bit of information going everywhere, but the majority of the" }, { "start": 228.48000000000002, "end": 232.12, "text": " information is coming from the nodes where the keys match. So these are" }, { "start": 232.12, "end": 237.60000000000002, "text": " queries, these are keys, and technically these things coming in here are called" }, { "start": 237.6, "end": 243.48, "text": " values. But imagine the values simply as the information to be propagated, and the" }, { "start": 243.48, "end": 248.56, "text": " queries and the keys are responsible for routing that information to the next" }, { "start": 248.56, "end": 254.28, "text": " layer. All of these things are learned. So the queries, the keys, and the values." }, { "start": 254.28, "end": 259.15999999999997, "text": " Now what's the problem? The problem is between the queries and the keys. As you" }, { "start": 259.15999999999997, "end": 264.88, "text": " can see, what you have to do is you have to match every single query with every" }, { "start": 264.88, "end": 270.28, "text": " single key in order to find out where information goes. So this becomes order" }, { "start": 270.28, "end": 278.6, "text": " of, if you have D keys and D queries, order of D squared operations that you" }, { "start": 278.6, "end": 283.96, "text": " have to do. And of course D squared values that you have to compute. And" }, { "start": 283.96, "end": 290.96, "text": " since these are all vectors, of course there is D will not only be the number" }, { "start": 290.96, "end": 294.91999999999996, "text": " of keys, but then again this is multiplied, so there is an inner" }, { "start": 294.91999999999996, "end": 303.52, "text": " multiplication with the dimensionality, let's call that capital D, of the... no" }, { "start": 303.52, "end": 310.35999999999996, "text": " sorry that's not an inner multiplication. Let's just remain at this. So D squared" }, { "start": 310.35999999999996, "end": 317, "text": " inner products between vectors of capital D dimensions. So it's not an" }, { "start": 317, "end": 324.8, "text": " easy thing for resources to do. You need a lot of resources to hold this, all of" }, { "start": 324.8, "end": 331.22, "text": " this in memory at the same time and to compute all of these things. 
The reformer" }, { "start": 331.22, "end": 336.64, "text": " aims to solve this problem. So this giant space problem that the" }, { "start": 336.64, "end": 343.24, "text": " transformers have, space, memory, also computational problem to a lesser degree." }, { "start": 343.24, "end": 350.44, "text": " Mostly it's a memory issue. Alright, so what is happening here? And you see" }, { "start": 350.44, "end": 356.84000000000003, "text": " here that this product between two matrices clearly gives you this" }, { "start": 356.84000000000003, "end": 365.08, "text": " kind of squared thing. So what's happening in the reformer to do this?" }, { "start": 365.08, "end": 371.96000000000004, "text": " The trick is, if we go back to this drawing, the trick is to create" }, { "start": 371.96, "end": 378.35999999999996, "text": " what's called a hashing scheme or buckets. In creating buckets what you" }, { "start": 378.35999999999996, "end": 385.4, "text": " want to do is you want to group similar things together. So let's say we create" }, { "start": 385.4, "end": 395.88, "text": " four buckets. Bucket one, bucket two, bucket three, bucket four. And each" }, { "start": 395.88, "end": 402.56, "text": " bucket we label. And bucket one we label with the up direction, this with the right" }, { "start": 402.56, "end": 408.56, "text": " direction, with the down direction, the left direction as vectors. And now we" }, { "start": 408.56, "end": 415.36, "text": " simply put each of the things into the bucket where it belongs most. So let's" }, { "start": 415.36, "end": 422.76, "text": " for example this vector here, it goes here. Sorry, that is like absolutely not" }, { "start": 422.76, "end": 432.12, "text": " the right place. It goes probably here, right? This vector here, probably this one" }, { "start": 432.12, "end": 437.8, "text": " goes here, right? And so on. So you'll end up each of these assigning a bucket. So" }, { "start": 437.8, "end": 445.4, "text": " these all go into that bucket. Let's continue, actually let's also" }, { "start": 445.4, "end": 453, "text": " put the keys in the same buckets. So also the keys, this key here probably goes" }, { "start": 453, "end": 462.64, "text": " to this bucket. This key here probably goes to this bucket. Let's say this key" }, { "start": 462.64, "end": 468.12, "text": " here probably goes to the bucket over here. You already see, so before, right" }, { "start": 468.12, "end": 476.04, "text": " before, we cared about this particular query and this particular key. We just" }, { "start": 476.04, "end": 480.8, "text": " looked and we said those two will probably route information to each other" }, { "start": 480.8, "end": 486.72, "text": " because they're similar. And now you can see they both ended up in the same" }, { "start": 486.72, "end": 493.84000000000003, "text": " bucket. So the idea is to create a scheme where you throw these things into" }, { "start": 493.84, "end": 499.56, "text": " buckets such that if two vectors are similar they will end up in the same" }, { "start": 499.56, "end": 504.76, "text": " bucket with high probability. So you'll only have to really compare things within" }, { "start": 504.76, "end": 511.96, "text": " the same bucket and not across all of these d squared elements. That's the idea" }, { "start": 511.96, "end": 520.16, "text": " and the technique here is called locality sensitive hashing. So locality" }, { "start": 520.16, "end": 531.56, "text": " sensitive hashing. And short this is called LSH. 
The idea is the following, if" }, { "start": 531.56, "end": 539.92, "text": " you have two vectors v1 and v2 and they have and you have a distance measure" }, { "start": 539.92, "end": 551.64, "text": " distance measure d. D is a distance. What you want is if the distance between v1" }, { "start": 551.64, "end": 564.8399999999999, "text": " and v2 is small, I'm getting confused with color, with small then you want them in the" }, { "start": 564.84, "end": 579.0400000000001, "text": " same bucket. And if the distance is large then you want them in a different bucket." }, { "start": 579.0400000000001, "end": 589.88, "text": " Different buckets. You know with high probability. So all of these things" }, { "start": 589.88, "end": 597.4399999999999, "text": " where you say you want them in the same bucket with probability p with" }, { "start": 597.4399999999999, "end": 602.76, "text": " probability p with high probability p and here you want them in different" }, { "start": 602.76, "end": 606.88, "text": " buckets with high probability. Or you want them in the same pocket with low" }, { "start": 606.88, "end": 612.32, "text": " probability. That's an equivalent form of stating. This is all formalized and I" }, { "start": 612.32, "end": 618.56, "text": " can direct you to the Wikipedia page of that. It's pretty good. It gives a concise" }, { "start": 618.56, "end": 625.1199999999999, "text": " definition. Here you can see that and it gives a number of examples. So one" }, { "start": 625.1199999999999, "end": 630.04, "text": " example I'd like to give here for locality sensitive hashing is of course" }, { "start": 630.04, "end": 636.4799999999999, "text": " the scheme of bucketing will all depend on what your distance measure is. If you" }, { "start": 636.4799999999999, "end": 641.3599999999999, "text": " consider the distance measure simply to be the jacquard distance. So let's say we" }, { "start": 641.36, "end": 656.12, "text": " have two vectors 0 1 0 1 and here we have 1 0 1 1 0 1 and here it's 0 0 0 1." }, { "start": 656.12, "end": 664.16, "text": " Alright so maybe you can see the first two vectors here are much more close" }, { "start": 664.16, "end": 672.9599999999999, "text": " together than the last vector. Now in terms of bit differences, one scheme" }, { "start": 672.9599999999999, "end": 680.12, "text": " to do locality sensitive hashing is to simply sub sample bits. So in this case" }, { "start": 680.12, "end": 686.52, "text": " this is a slightly constructed example. We will just sub sample the first two" }, { "start": 686.52, "end": 691.88, "text": " bits and then construct the buckets according to these bit values. So if" }, { "start": 691.88, "end": 698.24, "text": " since we sample two bits we have four buckets. Here is 0 0, here is 0 1," }, { "start": 698.24, "end": 703.76, "text": " here is 1 0 and here is 1 1. That's the concept of locality sensitive hashing." }, { "start": 703.76, "end": 708.12, "text": " You have these buckets and then you can say alright this vector has 1 0," }, { "start": 708.12, "end": 716.76, "text": " goes into this, this goes into this and then that goes into the 0 1 bucket." }, { "start": 716.76, "end": 722, "text": " And you end up with what you have. You have the two close vectors in the same" }, { "start": 722, "end": 726.08, "text": " bucket and the two far apart vectors in that bucket. Of course that doesn't" }, { "start": 726.08, "end": 730.36, "text": " always work. 
You can be unlucky in sub sampling but that's kind of" }, { "start": 730.36, "end": 735.36, "text": " trade-off you'll have to go for. If things that are close together" }, { "start": 735.36, "end": 740.96, "text": " happen with it's a low probability but if they happen to end up in the different" }, { "start": 740.96, "end": 747.44, "text": " buckets then basically you lose the fact that they are close to each other and" }, { "start": 747.44, "end": 752.6, "text": " that's the trade-off. The kind of locality sensitive hashing they use in" }, { "start": 752.6, "end": 757.84, "text": " the reformer now is what are called random projections. So let's say you have" }, { "start": 757.84, "end": 761.48, "text": " a bunch of vectors and that's really what we care about. You have a bunch" }, { "start": 761.48, "end": 770.76, "text": " of vectors and what you want, you want the keys and queries. So you have a" }, { "start": 770.76, "end": 775.8, "text": " bunch of vectors like this and you want to create buckets such that vectors that" }, { "start": 775.8, "end": 780.64, "text": " are close together will end up in the same bucket and vectors that are far" }, { "start": 780.64, "end": 787.4, "text": " apart will end up in the in different buckets. A cool way to do is," }, { "start": 787.4, "end": 791.72, "text": " and this is in the cosine distance so we care about the angle between vectors," }, { "start": 791.72, "end": 799.48, "text": " a cool way to do this is to use random plane projections and the cool" }, { "start": 799.48, "end": 803.44, "text": " thing about it is it works for the cosine distance and you can basically" }, { "start": 803.44, "end": 810.4, "text": " choose how many buckets you create. Let's say we want to create four" }, { "start": 810.4, "end": 816.16, "text": " buckets here again. What we need is two hyper planes and what we'll do is, so" }, { "start": 816.16, "end": 822.04, "text": " here is the origin, we'll simply create two hyper planes through the origin at" }, { "start": 822.04, "end": 829.44, "text": " random. So I'm gonna draw a random hyper plane here like this and then a second" }, { "start": 829.44, "end": 837.24, "text": " random hyper plane like this. So you would agree those are pretty random" }, { "start": 837.24, "end": 843.12, "text": " hyper planes as much as I can be a random generator and then we'll simply" }, { "start": 843.12, "end": 848.8000000000001, "text": " label, so this will label hyper plane one, this will label hyper plane two." }, { "start": 848.8000000000001, "end": 857, "text": " Now we simply assign each vector bits according to the, on which" }, { "start": 857, "end": 862, "text": " side of the hyper plane they lie. So let's call this here the plus side and" }, { "start": 862, "end": 866.88, "text": " this here the minus side or even yeah let's call this the plus and the minus" }, { "start": 866.88, "end": 872.24, "text": " and here also we call this the plus side and this the minus side. So this vector" }, { "start": 872.24, "end": 880.8, "text": " here is, its signs are plus plus right because it's on the plus side of both of" }, { "start": 880.8, "end": 888.64, "text": " hyper planes. 
This vector plus plus, this one plus plus, this one here is called," }, { "start": 888.64, "end": 894.12, "text": " it's on the negative side of plane two but on the positive side of plane one so" }, { "start": 894.12, "end": 902.12, "text": " it's plus minus, this one here minus minus, minus minus, minus minus and these" }, { "start": 902.12, "end": 907.12, "text": " are your buckets. So you would group these vectors together because they have" }, { "start": 907.12, "end": 911.48, "text": " they have the same signs. You would group that vector, you would group these" }, { "start": 911.48, "end": 918.64, "text": " vectors together. The combination of this with attention, since in attention you've" }, { "start": 918.64, "end": 926.44, "text": " seen attention uses a softmax and the softmax is dominated usually by the" }, { "start": 926.44, "end": 932.44, "text": " largest elements and since we compute inner products it means that this softmax" }, { "start": 932.44, "end": 938.48, "text": " thing is dominated by vectors that have large inner products. So basically" }, { "start": 938.48, "end": 944.6800000000001, "text": " you don't have to look at all of these d squared vectors if you can find the" }, { "start": 944.6800000000001, "end": 950.48, "text": " ones that have the closest distance. You can pretty much ignore the others." }, { "start": 950.48, "end": 957.8800000000001, "text": " And LSH allows you to do this. So build buckets of vectors with" }, { "start": 957.88, "end": 964.68, "text": " similar directions. Then you only have to care about these vectors comparing them" }, { "start": 964.68, "end": 971.32, "text": " to each other. So that's not a lot of vectors generally and that's how you" }, { "start": 971.32, "end": 976.32, "text": " save a lot of work. So you will only have to care about these three vectors if" }, { "start": 976.32, "end": 981.36, "text": " your key vector for example is right here. You'll only have to care about these" }, { "start": 981.36, "end": 988.4, "text": " things in the same bucket and you can ignore all of that rest of the space. Of" }, { "start": 988.4, "end": 992.72, "text": " course the more hyperplanes you have the more buckets you'll have, the less" }, { "start": 992.72, "end": 997.04, "text": " vectors you'll have in the same bucket. That's the general idea. I find this" }, { "start": 997.04, "end": 1001.36, "text": " explanation to be a bit easy. You can equivalently explain it by doing these" }, { "start": 1001.36, "end": 1007.84, "text": " kind of random rotations in the space. You can think about how that will end up" }, { "start": 1007.84, "end": 1012.5600000000001, "text": " actually being the exact same thing as what I just explained. I just like that" }, { "start": 1012.5600000000001, "end": 1020.48, "text": " my explanation better I think. Alright so the way they use this, they have an" }, { "start": 1020.48, "end": 1026.88, "text": " illustration right here, is the following. So they have these keys right?" }, { "start": 1026.88, "end": 1031.68, "text": " Sequence of queries and keys. So they do equivalent queries and keys which is a" }, { "start": 1031.68, "end": 1036.48, "text": " thing you can do in transformers. Don't worry too much about it whether they're" }, { "start": 1036.48, "end": 1042.16, "text": " different or not. But then they do this LSH bucketing and here the color of the" }, { "start": 1042.16, "end": 1048.84, "text": " cell is just the bucket, the LSH bucket in which it will end up. 
Then they sort that" }, { "start": 1048.84, "end": 1055.3600000000001, "text": " right as you can see and now they do an additional thing which is called the" }, { "start": 1055.3600000000001, "end": 1061.4, "text": " chunk. As you can see there are not the same amount of vectors in each bucket" }, { "start": 1061.4, "end": 1068.3200000000002, "text": " and that is sometimes a problem because even though you've reduced the" }, { "start": 1068.3200000000002, "end": 1073.4, "text": " memory, the memory requirements are still dominated by the" }, { "start": 1073.4, "end": 1080.3200000000002, "text": " largest bucket. By whatever bucket has the most number of vectors that will" }, { "start": 1080.3200000000002, "end": 1085.48, "text": " pretty much be your memory requirement. Because now you don't have to, if" }, { "start": 1085.48, "end": 1091.2800000000002, "text": " this is D, you have to compute all the D squared things anymore. But you'll" }, { "start": 1091.28, "end": 1099.84, "text": " only have to compute this quantity, let's call that B. So the maximum" }, { "start": 1099.84, "end": 1105.6399999999999, "text": " bucket size. But that could still be large right? If you look at a" }, { "start": 1105.6399999999999, "end": 1110.6, "text": " distribution it's probably going to be something like this right? Where most" }, { "start": 1110.6, "end": 1116.44, "text": " buckets have a kind of a standard number of vectors but some buckets will have a" }, { "start": 1116.44, "end": 1122.64, "text": " lot of vectors and that's, sorry, some few buckets will have a lot of vectors and" }, { "start": 1122.64, "end": 1126.16, "text": " your memory requirement is still dominated by this. So they do an" }, { "start": 1126.16, "end": 1129.04, "text": " additional thing which is called chunking which means they actually take" }, { "start": 1129.04, "end": 1136.24, "text": " fixed size chunks here, fixed size. Here they always take four and they say all" }, { "start": 1136.24, "end": 1143.8, "text": " right these are our chunks and we will only compute attention within the chunks" }, { "start": 1143.8, "end": 1149, "text": " right? So it could be that there's the same bucket is actually split" }, { "start": 1149, "end": 1153.2, "text": " between chunks and that's why they do an additional thing is that you can attend" }, { "start": 1153.2, "end": 1159.84, "text": " two things in a different chunk right here. You can attend two things" }, { "start": 1159.84, "end": 1165.52, "text": " in your neighboring chunks so you're restricted to either your own chunk or" }, { "start": 1165.52, "end": 1173.48, "text": " your neighboring chunk. Note that there aren't any any arrows going over here." }, { "start": 1173.48, "end": 1180.08, "text": " So you can attend, they have this diagram here, which things you can" }, { "start": 1180.08, "end": 1185.6, "text": " attend to. You can attend to yourself or attend to your neighboring thing but not" }, { "start": 1185.6, "end": 1192.4, "text": " to any other thing or the other way around right? So that's basically the" }, { "start": 1192.4, "end": 1201.32, "text": " the concept of saving memory. Now your memory requirements are, if we call this" }, { "start": 1201.32, "end": 1208.28, "text": " quantity now, we call the other one B, let's call this the chunk size C right?" 
}, { "start": 1208.28, "end": 1213.76, "text": " Your memory requirements are pretty much C squared plus whatever this" }, { "start": 1213.76, "end": 1220.52, "text": " unidirectional, so not this isn't squared, plus probably O of C something" }, { "start": 1220.52, "end": 1230.3999999999999, "text": " like this. So you bring your memory requirements down quite a bit. Now" }, { "start": 1230.4, "end": 1240.0400000000002, "text": " that's the general idea here. The problem they face again is, so they face" }, { "start": 1240.0400000000002, "end": 1249.92, "text": " another problem where they say hold on, I can't find it right here, they say hold on," }, { "start": 1249.92, "end": 1254.72, "text": " we do have actually another problem and that is that these transformers" }, { "start": 1254.72, "end": 1260.64, "text": " have to back propagate. So you'll have to forward propagate these things and now" }, { "start": 1260.64, "end": 1264.48, "text": " we've kind of solved this D square computation issue but what you'll have to" }, { "start": 1264.48, "end": 1270.64, "text": " do is if you go from layer to layer right? Layer, layer, layer, layer. What you" }, { "start": 1270.64, "end": 1274.96, "text": " have to do is if you propagate information forward you still have to" }, { "start": 1274.96, "end": 1280.68, "text": " back propagate and in order to back propagate usually, usually you'll have to" }, { "start": 1280.68, "end": 1287.3600000000001, "text": " remember all of these activations right? So these activations, these activations." }, { "start": 1287.3600000000001, "end": 1292.4, "text": " In order to do back prop it is often the case that you actually have to remember" }, { "start": 1292.4, "end": 1296.96, "text": " the activations because in each forward propagation, in each layer here you might" }, { "start": 1296.96, "end": 1304.5600000000002, "text": " lose some information. Imagine you have a layer that maps these" }, { "start": 1304.56, "end": 1314.12, "text": " two-dimensional vectors both to, so here actually let's make this blue, maps these" }, { "start": 1314.12, "end": 1319.96, "text": " three vectors to the following configuration. So a layer maps these" }, { "start": 1319.96, "end": 1329.32, "text": " vectors to this, this and this. So it maps two things to one thing which" }, { "start": 1329.32, "end": 1335.32, "text": " you know can be if you in a linear layer can decide to map it to a lower" }, { "start": 1335.32, "end": 1340.6799999999998, "text": " dimensional subspace. So you could actually decide to map it to in fact" }, { "start": 1340.6799999999998, "end": 1346, "text": " two points right? This is also a possibility. You could do dimension reduction." }, { "start": 1346, "end": 1349.52, "text": " So because all of this in order to do back prop you actually have to remember" }, { "start": 1349.52, "end": 1357.32, "text": " these things in order to do proper back prop. This is a problem again for the" }, { "start": 1357.32, "end": 1361.4399999999998, "text": " transformer because all these activations even though we've gotten rid" }, { "start": 1361.4399999999998, "end": 1366.72, "text": " of the d-square computation they will have to be remembered and that takes a" }, { "start": 1366.72, "end": 1374.6, "text": " lot of memory. The way to solve this is actually to do invertible layers. 
What" }, { "start": 1374.6, "end": 1378.96, "text": " that means is that if I propagate information forward, forward, forward," }, { "start": 1378.96, "end": 1385.76, "text": " forward, I can figure out what the information here was simply by looking" }, { "start": 1385.76, "end": 1392.56, "text": " at the back prop activations. And this happens if the layer is invertible." }, { "start": 1392.56, "end": 1400.48, "text": " So if this function here is invertible. So if f here technically is invertible." }, { "start": 1400.48, "end": 1408.6, "text": " So I can actually write down the inverse of f and that is defined. This of course" }, { "start": 1408.6, "end": 1419.1999999999998, "text": " is a pretty big restriction and the way they achieve it, I like to go to the blog" }, { "start": 1419.1999999999998, "end": 1430.4399999999998, "text": " here, the way they achieve it is they do what's called an idea from reversible" }, { "start": 1430.4399999999998, "end": 1434.4399999999998, "text": " networks where they always have two sets of activations. That's what you see here." }, { "start": 1434.44, "end": 1441.56, "text": " X1 and X2. And in each layer only one of them is updated in a residual fashion." }, { "start": 1441.56, "end": 1449.52, "text": " You can see here layer 1 updates X2 but X1 remains the same and goes to Y1." }, { "start": 1449.52, "end": 1458.76, "text": " And then in the next layer, layer 2 only updates Y1 in order to" }, { "start": 1458.76, "end": 1466.28, "text": " construct Z1. But Y2 remains the same to be Z2. And then you can revert the layers." }, { "start": 1466.28, "end": 1471.84, "text": " You can basically figure out what the activations were from the back prop" }, { "start": 1471.84, "end": 1479.24, "text": " signal. Now that's extremely good if you want to save memory but of course it" }, { "start": 1479.24, "end": 1483.4, "text": " restricts clearly. You have to be restricted to this kind of architecture" }, { "start": 1483.4, "end": 1490.52, "text": " similar. This idea actually isn't new. This has been used many times in things" }, { "start": 1490.52, "end": 1494.8000000000002, "text": " like normalizing flows and I want to highlight this paper. Actually want to" }, { "start": 1494.8000000000002, "end": 1501.16, "text": " highlight specific... I chose this paper because they have these nice diagrams" }, { "start": 1501.16, "end": 1509.5600000000002, "text": " where they show exactly this. You see they have two sets X1 and X2 that in" }, { "start": 1509.56, "end": 1514.52, "text": " forward propagation they only update one of them. And then in backward in what's" }, { "start": 1514.52, "end": 1520.44, "text": " called inverse propagation they can figure out what those were. And they" }, { "start": 1520.44, "end": 1527.32, "text": " couple these in exactly the same way. Like here this drawing might be even more" }, { "start": 1527.32, "end": 1534.04, "text": " similar where they alternate between updating the two activations. So you can" }, { "start": 1534.04, "end": 1539.76, "text": " think of this as a way to simply make the function that you're representing" }, { "start": 1539.76, "end": 1544.68, "text": " with the neural network invertible. 
That is a giant constraint on your" }, { "start": 1544.68, "end": 1549.24, "text": " architecture but these methods here, these normalizing flow methods, use that" }, { "start": 1549.24, "end": 1554.84, "text": " so they can actually define an invertible layer because they need the" }, { "start": 1554.84, "end": 1562.8799999999999, "text": " Jacobian inverse in order to compute their normalizing flow. So you see that's" }, { "start": 1562.88, "end": 1569.3600000000001, "text": " why they originally did it. And I'm sure that that's not a new idea or" }, { "start": 1569.3600000000001, "end": 1576.3600000000001, "text": " particularly new again. Strangely I haven't found any of the flow" }, { "start": 1576.3600000000001, "end": 1585, "text": " literature cited. They do cite the reversible residual net paper that they" }, { "start": 1585, "end": 1592.0800000000002, "text": " probably got the idea from. So with these two things now you can save the" }, { "start": 1592.08, "end": 1599.84, "text": " giant computation. And you can also not store the forward activations. So" }, { "start": 1599.84, "end": 1612.1599999999999, "text": " they say they can take now giant giant giant input sizes. You may remember" }, { "start": 1612.1599999999999, "end": 1622, "text": " transformers like BERT. So BERT it can use something like 512 tokens." }, { "start": 1622, "end": 1628, "text": " In its input sequence. That means the sequence that you can look at with BERT" }, { "start": 1628, "end": 1634.72, "text": " at a time is 512 long and not a bit longer. There have been some" }, { "start": 1634.72, "end": 1644.12, "text": " extensions to that. For example I believe in XL net. So XL net has pushed this to" }, { "start": 1644.12, "end": 1655.1599999999999, "text": " something like C times 512 where C is a smallish constant. That where you" }, { "start": 1655.1599999999999, "end": 1659.6399999999999, "text": " can kind of carry over information between sequences. But this thing here" }, { "start": 1659.6399999999999, "end": 1668.04, "text": " as you can see they calculate it could take up something like 64,000 tokens and" }, { "start": 1668.04, "end": 1675.32, "text": " that would use in total 16 gigabytes of memory. Which is available on a high-end" }, { "start": 1675.32, "end": 1687, "text": " GPU. So this is a giant this is a giant step forward in in producing" }, { "start": 1687, "end": 1693.12, "text": " transformers that can actually take large models. And here you see the memory" }, { "start": 1693.12, "end": 1698.9599999999998, "text": " and time complexity. You can look at these things yourself but you can see" }, { "start": 1698.9599999999998, "end": 1704.4399999999998, "text": " maybe here that these squares here from the original transformer they now" }, { "start": 1704.4399999999998, "end": 1710.3999999999999, "text": " vanish from this. And all of these constants are a lot of these constants" }, { "start": 1710.3999999999999, "end": 1715.12, "text": " are actually smaller. For example that chunk size is in there instead of kind" }, { "start": 1715.12, "end": 1724.3999999999999, "text": " of the entire sequence length. So that's basically the the paper. They show that" }, { "start": 1724.3999999999999, "end": 1729.76, "text": " I can actually input those long sequences. They can apply this to images." 
}, { "start": 1729.76, "end": 1735.8, "text": " You see there's image net pixel by pixel which is a lot of pixels and would have" }, { "start": 1735.8, "end": 1742.6799999999998, "text": " been absolutely unthinkable with one of the original transformers. And with that" }, { "start": 1742.68, "end": 1749.04, "text": " I invite you to check out the paper and the blog post and I'll see you next time." }, { "start": 1749.04, "end": 1775.84, "text": " Bye bye." } ]
yFAuXmcGk2Y
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
SingularityNET - A Decentralized, Open Market and Network for AIs (Whitepaper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "singularity", "singularitynet", "agi", "ben goertzel", "goertzel", "hanson", "hanson robotics", "sophia", "network", "api", "offercoin", "offernetworks", "offer networks", "offer coin", "agi token", "erc20", "ethereum", "cardano", "governance", "benefit", "reputation", "reputation system", "liquid rank", "liquidrank", "deoldify", "inflation", "ico", "matchmaking", "graph", "opencog", "open cog", "tontoni phi", "intelligence", "artificial general intelligence", "blockchain" ]
#ai #research #blockchain Big Tech is currently dominating the pursuit of ever more capable AI. This happens behind closed doors and results in a monopoly of power. SingularityNET is an open, decentralized network where anyone can offer and consume AI services, and where AI agents can interlink with each other to provide ever more sophisticated AI, with the goal to create a singularity that's beneficial for humanity. This video takes a look at the basics behind SingularityNET and some of its core components. OUTLINE: 0:00 - Intro & Overview 2:55 - Document Summarization Example Workflow 5:50 - Why AI needs a Marketplace? 9:20 - A network of APIs 12:30 - AI Evaluators & Matchmakers 15:00 - My criticisms of the Marketplace 17:45 - What is on the Blockchain? 20:45 - AI Marketplace Demo 22:00 - The AGI Token & Inflation 26:30 - Reputation System & other features 30:00 - Democratic Governance 33:00 - Benefit Tasks 36:15 - My general thoughts on the application examples 38:05 - Measuring Intelligence on SingularityNET 45:15 - OfferNet Economy 50:00 - Summary & Comments Whitepaper: https://public.singularitynet.io/whitepaper.pdf Website: https://singularitynet.io/ AI Marketplace: https://beta.singularitynet.io/aimarketplace References: https://www.hansonrobotics.com/wp-content/uploads/2018/12/Using-Tononi-Phi-to-Measure-Consciousness-of-a-Cognitive-System-While-Reading-and-Conversing.pdf https://arxiv.org/pdf/1601.02626.pdf https://blog.singularitynet.io/singularitynet-the-past-the-present-and-the-future-7bacb2b8e7f0 https://blog.singularitynet.io/singularitynet-supervisory-council-e7c513fd3ea6 https://blog.singularitynet.io/singularitynet-phase-two-massive-token-utilization-toward-decentralized-beneficial-agi-6e3ac5a5b44a ADDENDUM: I forgot to mention one important example for the utility of dynamic matchmaking: If I have a German text to summarize, and there is a German summarizer, but there is also a better English one, a clever AI could figure out for me whether to use the German one or whether to use a translator to English, then the English summarizer, then a backtranslator. And it could even do so depending on the input text. Abstract: [...] Most AI research today is controlled by a handful of corporations—those with the resources to fund development. Independent developers of AI tools have no readily available way to monetize their creations. Usually, their most lucrative option is to sell their tool to one of the big tech companies, leading to control of the technology becoming even more concentrated. SingularityNET’s open-source protocol and collection of smart contracts are designed to address these problems. Developers can launch their AI tools on the network, where they can interoperate with other AIs and with paying users. Not only does the SingularityNET platform give developers a commercial launchpad (much like app stores give mobile app developers an easy path to market), it also allows the AIs to interoperate, creating a more synergistic, broadly capable intelligence. For example, if a text-to-speech AI and an Italian-to-English translation AI were both on the network, then the network as a whole would be capable of using Italian text to produce English speech. Within this framework, AI transforms from a corporate asset to a global commons; anyone can access AI tech or become a stakeholder in its development. Also, anyone can add an AI/machine learning service to SingularityNET for use by the network and receive network payment tokens in exchange. [...] 
Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there. Today we'll look at SingularityNet, the global AI marketplace, as it is advertised on their website. Specifically, we're going to look at the SingularityNet white paper 2.0, as it appeared in 2019. So it's version two; version one, I think, appeared in 2017. So SingularityNet is, as it says, a global AI marketplace, but it is also kind of an effort. It is a foundation, it has blockchain in it, it has AI in it, it has symbolic computation, it has graphs, it has all the things, all the buzzwords you could possibly want. So the high level summary of this system is that it is a marketplace for APIs, basically, on blockchain, where either humans or APIs can call other APIs and pay them for that service. And the goal is to sort of get a network going of APIs that call APIs that call APIs, and sort of have that build up into a global AI, not only a marketplace, but itself a global AI. This is backed by the SingularityNet foundation, and they do a whole bunch of development of the platform, but also research on the platform. And we'll look at all of this today. So it is a white paper, which is not a research paper like we usually look at. That means a bunch of things. First of all, as you can see, it's quite long, and we're going to skip most of it, actually. But also, maybe it's just because it's a white paper, and that's usual, but all of this is sort of marketing, and it sort of never fixates on one level of analysis: it goes into this, and then a bunch of buzzwords, and then super detail, and then it talks about, you know, what kind of cache do we need for the database, but then it goes back and just references a bunch of stuff without explaining it, to just kind of beef it up for investors, I guess. I don't know. In any case, we're going to go through it, we're going to go through what the marketplace looks like, how it works, what it's good for, and some of my criticisms. The central components, as I said, are the APIs, but also a rating system. And it is also decentrally governed, so the goal is to have the community govern the network. And lastly, the goal is to have all of this be beneficial for humanity. So we're going to see how this all ties together. So what's the current situation, and what does SingularityNet want to do? So let's say you are this external software, you're a person, okay, and what you want to do is you want to summarize a document. The view that this system has is that you could give this to a document summarizer. The document summarizer, however, looks at this and sees, oh, what are you giving me? In this case, it might be, you know, an article of the New York Times that has both text and video, okay; so you give it, you see, an article that has like a title, it has a bunch of text, and here it has like a little video to go along with it. And you simply say, summarize this for me. So this document summarizer, all it does is it looks at the document and it sees, up there is a bunch of text, and there is a video here. So in order to summarize the document, I need to summarize the text and I need to summarize the video. So it will take the text and it will send it to a node that's dedicated only to text summarization, and then it will send the video to a node that's only dedicated to video summarization.
The video summarizer in turn could do stuff like call face recognizers and call some databases in order to sort of get who is in the video or what's in the video; you could call object detection and so on. The text summarizer, in turn, could call some word sense disambiguators, it could call entity extractors to also realize what is in the document. And then these nodes will send... so every node can call other nodes in the network. And at the bottom, you'll have these sort of AI primitives, like face identification, entity extraction, and so on. And they are not meant to be called by you directly; they're meant to be called by higher level nodes that sort of aggregate them. Okay. And if you look at this, and if you are a software developer, you think of libraries. Like you think, of course, you know, this stuff here, maybe that's Hugging Face, and this stuff here is probably in spaCy, that exists, right? If you are a software developer, you know, if you have to do a subtask, someone probably already solved that subtask, I can just call a library. Now, the view of SingularityNet is that no, maybe you don't want to call a library, maybe you don't know yet what's the best. So their view is a marketplace. And why is a marketplace better for AI than for regular programs? Because, you know, for regular programs, we don't need a marketplace, we simply call a library. Why is that not good for AI? You know, I'm trying to sort of make sense of this right here. I am not convinced by this system either, but I'm sort of trying to make the best case for it that I can. So let's go back to that graphic. If you are this text summarizer, and you need to do entity extraction, right, you might have a lot of choice. So there might be, you know, entity extractor A, there might be entity extractor B, and so on; there might be many of these entity extractors, and then a new paper comes out, right, and then entity extractor F is somewhere on GitHub, you know. So what you need to do every time a new entity extractor is released, you know, someone makes a paper, maybe puts out some code, the code doesn't really work: you have to go fetch that code, you have to look at it, you have to plug this into your system, right, you have to test it against your data sets, and you have to decide, is this better than what I had before? Or is it worse? Is it worth including, and so on? So in the classic software world, if you have a library that does something, it does that thing, right, it cannot necessarily do it better or worse. However, in the machine learning world, it can definitely be, you know, that this thing here is like 90% accurate, which is already good, but then something comes out with 95% accuracy, and that's better, and you would like to sort of switch to the better thing, or the thing that meets your needs more, the thing that works on your test data set, and so on. So that's sort of the case to be made for an AI marketplace. Now, SingularityNet's vision is that, let's say, I'm a researcher, I come up with a new entity extractor, right? So I have my paper here, I have it written, I have maybe a bit of code somewhere. What I can do is I can plug this into SingularityNet, right, and then I say, here, I am entity extractor X, and you can advertise yourself to this network.
And then all the other nodes, like this text summarizer node, but you know, many other nodes, could then come and, sort of in an automated fashion, test some sort of test data set that they have against you, right, they test it against your system. And they can evaluate you, and then they will switch to using your code if you are better than the competition for them, or maybe if you're cheaper, right. And for that, if you're a researcher and do all that, you would get money, because every time a node calls you, they're giving you some money for analyzing their data. So that is, sorry, that is the core idea behind the AI marketplace right here. So the AI marketplace as a whole looks something like this, and there's a lot of stuff here, but we'll go through it sort of one by one. Okay, so this here, it mixes kind of conceptual and technical things and so on. But ultimately... is there a way I can draw this more easily? Yeah, maybe. Okay, so you have consumers, okay, and consumers can be people, or can be robots. And you have a whole network of them, right. And if it's a robot, the robot exposes an API; as we said, the robot exposes an API that says exactly what inputs it takes and what outputs it provides. And it can also do tags. So here are my inputs, here are my outputs, and it can have some tags. It can, for example, say, hey, I am an entity extractor, you know, I do entity extraction in English, and so on; though maybe the English would actually go into the input definition. So we could do entity extraction. So the input definition says, I need a string that's called text, and that string needs to be language English, and for that, I can produce a set of, a list of entities, something like this, okay. It is very much like you would specify an interface in regular programming, except that in SingularityNet, these types here, so the string with the language parameter, and like the definition of what an entity is, they are set, I don't want to say centrally, because it's on a blockchain, but in essence, they are on the blockchain centrally deposited. You can add your own, but you can also implement the ones that other people have already defined. And what would be the good thing about not defining your own? Well, if this is the kind of commonly agreed upon standard for entity recognition, or did I say augmentation? Entity extraction; I say it wrong all the time, sorry about that. If this is the common definition for entity extraction, and you implement the same, right, you have your new algorithm over here, and you implement the same API, you know, you have this green API, and you implement the same types, then anyone who uses this API can, if they want, switch to your API without any work. And if you are better, then, you know, you probably get their business, because they want to call the better one. The idea of SingularityNet actually goes further, because this is not only callable by humans, this is also callable by other robots. So here I have another robot, and this is a special robot, because this robot is like an evaluator robot. So this robot can go around, and it has a little data set inside of it.
And it will just do nothing else but scan for new AIs on the network that implement a certain API. It will recognize, it will say, ah, this is the API for entity recognition, or entity extraction, I will simply run my test data set against it, and I will run my test data set against this one, and so on, and I will report. So my API will be: the input would be a task name, so task would be a string or something like this, and the output would be a list of models and performances, like model M 90%, model X 95%. Okay, so there can be robots that test other robots and then publish sort of ranking lists, and then I as a human, or, you know, the higher order robots, can go read this robot, and then decide which of all the listed things they want to go to. So at the central core of the system is this kind of shared type system. If you share the types, if you share the APIs, your APIs become replaceable with one another, and therefore you can enable sort of automatic competition and automatic matchmaking. So for these robots, there are evaluator robots, and there are matchmaker robots, where you can tell a robot, I would like to extract some entities, please find me the best node in the network that does it. Okay, and the marketplace makes sense because it's AI, and it constantly shifts which one is good and which one's appropriate. That's the best case I can make for it. Like, I have my doubts that this is actually the case, but we'll get to... actually, no, let's make the case against it. So my case against the AI marketplace as it is listed here is twofold. So first point against it: everything we know right now is end to end. The direction of research is clearly going into less structured data and more end to end. That means if I want to do a text summarizer or a document summarizer, I am right now much better off just training a giant model that does it end to end, rather than using many, many small models. Because if I call an entity extractor, right, and I simply only rely on that information, I lose the rest of the text and the nuances in the text; I simply get the output of that model. Now, I could combine that, of course, but this idea of modularizing AI... right now, research is pointing in a different direction. And second of all, I still believe, like, if I make a product, if I build a product towards a user, I want to know what's in it. Like, even if I have to go myself and test the stupid API, I would never use a matchmaking agent that dynamically goes and finds me someone who can implement this API. Because implementing an API only goes so far. You know, like, I require an image and I output a value, that's an API, but that can be many things. And then, you know, maybe these tags here, maybe these tags could do something. But I think the system, even though it's, you know, thought out well with the types and the APIs and so on, I don't think that's enough. I think that works for a very, very small subset of AI tasks; I don't think that works for most of the AI tasks that we have right now, because API definitions simply just don't convey what the model does. So API does not equal the model's function, in my mind. So I would ask yourself if you would really use a matchmaking agent, and then, you know, sell that product to a customer.
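To make the shared-type and evaluator ideas concrete, here is a minimal sketch in Python. All the names and the exact-match metric are invented for illustration; this is not SingularityNet's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# A shared, "centrally deposited" type: every entity extractor on the
# network agrees to this input/output contract.
@dataclass(frozen=True)
class Entity:
    text: str
    label: str

# An entity extraction service: English string in, list of entities out.
EntityExtractor = Callable[[str], List[Entity]]

# The registry: services advertise themselves under the shared API name.
registry: Dict[str, EntityExtractor] = {}

def evaluate(service: EntityExtractor, test_set) -> float:
    """Evaluator robot: run a held-out test set against one service."""
    hits = 0
    for text, gold in test_set:
        hits += set(service(text)) == set(gold)
    return hits / len(test_set)

def rank_services(test_set) -> List[Tuple[str, float]]:
    """Publish a ranking list; a matchmaker would simply pick the top entry."""
    scores = [(name, evaluate(svc, test_set)) for name, svc in registry.items()]
    return sorted(scores, key=lambda s: -s[1])
```

Because every service implements the same types, swapping in a newly registered extractor is exactly the zero-work switch described above.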
But I guess the goal here is that in the future, these matchmaking agents will be much more intelligent, and so on. Yeah. So here's how it works on a more sort of technical level. So there are two components here: there's off chain and on chain. So I'm assuming you know what a blockchain is; if you don't know what a blockchain is, a blockchain is basically a distributed database, and in some forms also a computation engine. So it's kind of a distributed computer that you can't fake, so you can't cheat, no one has authority over it, everything is visible, and so that's secure. The drawback is you cannot do hardcore computation on blockchain. So this is not AI on blockchain; the blockchain is simply there to, first of all, register the AIs, so register the types, so these APIs here, and register what AIs are available in the network, and second of all, to facilitate the payments to the AIs. So how does that work? It goes via this sort of multi-party escrow contract right here. So there's a registry, by the way, that's where AIs register and put their types; so that's one function of the blockchain. The other function is to escrow money, and this, if you know Lightning Network, is very similar to that. So what you would do, if, I don't know, Alice wants to call Bob, Alice would sort of put up a bunch of money, like a big bunch of money... how do I do that? Alice would send money to this escrow account, like this much money, and then that establishes a channel between Alex... Alice, sorry, and Bob. So there is a channel, a channel is opened, and it's tied to this money. And now Alice can sort of send incremental amounts of that money to Bob, and every time, you know, one of these calls happens, a little bit of that money is used up. And the reason you do it in escrow form and not... so all of these could be transactions on the blockchain, right, but first of all that's slow, and second of all it's expensive. And if you do it like this, you actually only need, you know, one transaction in the best case. So if Alice spends this much money to Bob, there needs to be only one transaction putting all of it to Bob at the same time, rather than all these small transactions. So that's kind of the channel principle; I think, yeah, it's very similar to Lightning Network. And it's still secure, the way it is done. I don't want to go into channel economics and security right here, but suffice to say, you can make this secure and fast to a certain degree. Okay, so that's how it works: every time you call an API, you just send it some money in order to call it. So how does this look? This looks something like this. Sorry. Here is this AI marketplace; they've actually built it, and they have a bunch of services on there. As you can see, it's kind of... they take some standard AI tasks and they put them on here. And if you click on one, you can either, you know, pay AGI tokens, that's a thing we're going to get to in a second, or, I think, you have like 10 free calls a day if you make an account. So I've tried it out, you know, it works. But it's important to realize that the computation does not happen on the blockchain: you send money on the blockchain, and the AI service, it runs off chain. So this is off chain. Okay.
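A toy model of the escrow channel just described (this mirrors the Lightning-style idea in a few lines; it is not the actual escrow contract on the platform):

```python
class PaymentChannel:
    """One on-chain deposit, many off-chain micro-payments, one settlement."""

    def __init__(self, deposit: float):
        self.deposit = deposit   # locked on-chain in the escrow contract
        self.spent = 0.0         # running balance, updated off-chain

    def pay(self, amount: float) -> None:
        # Alice signs an updated balance; nothing is broadcast to the chain.
        if self.spent + amount > self.deposit:
            raise ValueError("channel exhausted, top up the escrow")
        self.spent += amount

    def settle(self) -> float:
        # A single on-chain transaction pays Bob the accumulated total.
        payout, self.spent = self.spent, 0.0
        return payout

channel = PaymentChannel(deposit=100.0)   # on-chain transaction number one
for _ in range(1000):                     # a thousand API calls, all off-chain
    channel.pay(0.05)
bob_receives = channel.settle()           # on-chain transaction number two
```

So a thousand calls cost two on-chain transactions instead of a thousand; only the payments ever touch the chain, while the AI call itself stays off chain.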
So it is not a secure AI; you still need to trust the thing you're calling. Um, it's not about privacy that much, but you can't verify the outputs, you can't verify the computation, as you could if it were happening on chain. Now there are methods to sort of do heavy computation on chain, but these, I guess, wouldn't be that efficient, so just take that in mind. Now, the other thing is, I always say you send around money, but what you actually send around is a token. So a token is a very special concept. If you don't know what a token is, it's like money on top of money. So it's like if you go to a fair, and the fair has like its own internal money system, and at the beginning you pay like 20 bucks and you get 100 fair coins, and you can use the fair coins inside the fair. And that just enables the fair to sort of have its own monetary policy. And it's usually done with these projects such that at the very beginning, you sort of sell those coins to a lot of people, and the people buy it not because they can use it right there, but because they estimate they can use it later. And it's a way to fund a project; that's called an initial coin offering usually, or initial token offering. The coin that SingularityNet uses is aptly called AGI, and there is 1 billion of them. And you can see here, it's still active, so it's still being traded; you can see this was an hour ago, 20, 15 minutes ago, and so on. If you look at... here is the analysis. If you look at the activity on the network, it had a lot of activity at the beginning, it dropped, and now it picked up a little bit again. I don't know exactly what that's related to, but so it is still alive. If you look at the price, however, this sharply dropped and is now actually below the price of the initial coin offering. And what you hope when you, you know, buy the initial coin is not only that you can use it later, but, you know, that since there is only a limited amount of tokens, it will be more valuable in the future, because people want to buy it off you because they want to use the network. Here, it sort of looks like that's not exactly happening. And we'll get to what they're doing against it right in a second; the answer is inflation. So in a new blog post, actually, as I was preparing for this video, this new blog post came out yesterday, and here they're announcing sort of the path forward, SingularityNet phase two. And essentially, what they're doing is they're switching blockchains, from Ethereum to Cardano. And I have my doubts. Like, I don't know much about the whole crypto space, but isn't Cardano where massive amounts of the coins, I think, are just never moved, and so on? And it's quite scary. But, you know, they probably know what they're doing. And with that, they are doubling the amount of tokens; like, they could do it without increasing the tokens, but with that, they're issuing another billion tokens, and I think 50 or 25% will go to themselves. That's usually what you do in an initial coin offering, right, you keep some of the tokens to yourself, because as people buy it, it's valuable, and that's how you fund the operation. So here, they need to fund it some more, so they just inflate the currency with the new token. And they project, you know, they project that the network is going to be used a lot more, more than double, now.
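To spell out the dilution arithmetic behind that projection, here is a toy calculation with made-up numbers (nothing here is from the whitepaper):

```python
# Toy model: token price ~ network value / circulating supply.
value_now, supply_now = 100_000_000, 1_000_000_000   # made-up figures
price_now = value_now / supply_now                    # 0.10 per token

supply_later = 2 * supply_now                         # issue another billion tokens
# For existing holders to merely break even, network value must at least double:
break_even_value = price_now * supply_later           # 200_000_000
```

Which is why the claim has to be that usage grows to far more than twice what it would otherwise be.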
So I guess if you buy the new tokens here... phase two plan: five years from now, there will be 2 billion instead of 1 billion tokens. Quote: my strong assessment is that in this case, the overall value of the network in 2025 is going to be far more than twice what it would be if we didn't release the new token. So they need money, they inflate the currency. It's, you know, it's like government; I guess it's valid, but just to be aware. Okay, that's the network. There are a few crucial components that I have left out now, but that's essentially how it works. So one crucial component, you know, the registry is where you register; one crucial component is the reputation system, and this is something that's quite difficult. So the reputation system is important, because if you want to sort of find agents that perform well, you can also sort of rely on reputation. So if a lot of people have bought services from a particular node in the past and rated it high, then you can sort of trust that node more than if a node is lower rated or has dissatisfied customers. So they spend quite a bit here talking about reputation systems and how you could do them. And that is an open area of research; it's a really hard problem to make a good reputation system that can't be gamed, and so on. Yeah, there are various ways, like, for example, a stake deposited by a consumer service owner to be forfeited should its rating in some dimension fall below a given threshold. So you can, like, put some money in and say, well, if my rating falls below a three, then that money is gone, it will, like, be burned, it's automatically burned, and that gives people more trust in you, because you're now forced to uphold that rating. But it also allows some kind of mafia games: like, you could go to that service owner and be like, well, it would be a shame if you had a bunch of one star ratings coming in; then you can sort of blackmail them in given circumstances. It's not easy, right? It's not easy, but that's built into it. By the way, because this is on chain, anyone can participate in the market permissionlessly, which is a really good thing. However, they maintain kind of a DApp, a centralized platform, that they control. So you sort of have this decentralized thing where anyone can participate, but only some people are listed on the central, on the main hub, let's say; but you can technically build your own hub, like you can build your own Android app store, and so on. So think of it like it's a marketplace for apps, but only the ones that are, you know, KYC compliant will be in the Google App Store; but you can build your own alternative app store. They also want to provide AI infrastructure as a service, and that, I feel, is really irrelevant; like, they say, okay, we want to provide this, but it really doesn't matter for SingularityNet. So here is where they go into, oh, you could do this, you can do that with it, and so on, you can deploy it on embedded devices. So their idea is really that the whole world will be connected to this network, and whenever you require any sort of functionality, you just call the network, and the network solves your problem. As I said, I'm kind of doubtful; I still think it's probably going to be that people just build the functionality either into a custom, you know, single service, or they just build it on device. So the last component here is democratic governance. So they are invested in sort of making this a community effort.
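Before the governance part, a quick sketch of the stake-forfeiture idea from the reputation discussion above. A toy version, with the threshold and amounts made up:

```python
class StakedService:
    """A deposit that is burned automatically if the average rating
    drops below a threshold, so the owner is forced to uphold it."""

    def __init__(self, stake: float, threshold: float = 3.0):
        self.stake = stake
        self.threshold = threshold
        self.ratings: list = []

    def rate(self, stars: float) -> None:
        self.ratings.append(stars)
        average = sum(self.ratings) / len(self.ratings)
        if average < self.threshold and self.stake > 0:
            self.stake = 0.0   # forfeited / burned, visible to everyone on chain
```

It also makes the mafia-game worry concrete: a coordinated batch of one-star ratings is all it takes to wipe the deposit.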
And one thing is this governance, right? How do you govern a decentralized organization? And that is also an unsolved problem. They do it in multiple stages. So they say, okay, in years one and two of network operation, basically the foundation says everything; any major changes, the foundation decides. So the foundation is the maker of the network. In years three and four, they transition: so major changes need agreement of the foundation plus a majority of AGI holder votes; minor changes don't actually even require the foundation. And then there's also this introduction of benefit tasks; yeah, so that's years three and four. And from year five on forward, the foundation is gone, and it's only done by votes, by AGI token holder votes, which are logarithmic, such that rich people don't have too much power. Yeah, so this was launched in 2017, at the end. So technically, we are in this phase right here, and I have searched for like an announcement that, yeah, we're going to transition from this mode to this mode, but I haven't found it on their blog. Instead, what I found are like announcements that they're going to launch this supervisory council, which are like elected members that check the foundation. And also in this roadmap of part two that we've just looked at, they're also saying, oh, progressive decentralization, making it real. They also talk about this supervisory council, and they now pay them, and they release financial reports. But nowhere does it say that... you see, here, it's 3.5 years in, so they should be in that second phase. Maybe they are, but I would guess they'd make an announcement if that's the case. Maybe I've just missed it, and they're actually doing this. But I have my feeling that if you, you know, launch such a system, and you have power to do stuff, and especially if the system doesn't grow as much as you expect, and so on, you're not going to give that power away. So that is my doubt here: if you have the power, it's of course always better for you if you say, well, I'm just gonna hold on to it a little bit longer; eventually, you know, when everything goes well... but it's never that everything goes well. Like, yeah, à la communism. Okay, so enough rant. The benefit tasks. So they also have in mind, you see, there's a lot of stuff in this network, right? They also have in mind that this network should benefit sort of humanity as a whole, which is, you know, a laudable task. But they have a system where some tasks are classified as benefit tasks. And these benefit tasks, they are suggested by AGIs, by actors in the network. So each agent gets a certain number of benefit votes, right, to cast each month, based on its benefit rating. So the rating system is multi dimensional; one aspect is the benefit rating. Someone can rate you beneficial if you, like... if your AGI cures cancer or something like this. And then you nominate, you vote, and then some money goes to these benefit vote winners. Once a qualified benefit decider nominates a certain task, yada, yada, yada, if 25% of votes are cast in the affirmative, then the task becomes a benefit task. Once a task is a benefit task, any agent capable of performing it and possessing a sufficiently high rating and benefit rating will receive benefit payment for doing it. Okay, so the idea is the community nominates beneficial tasks, and these tasks will get benefit payment.
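A toy rendering of the two voting mechanisms just described, logarithmic vote weights and the 25% benefit threshold (the numbers, names and exact weighting function are invented for illustration):

```python
import math

def vote_weight(tokens_held: float) -> float:
    # Logarithmic weighting: 10x the tokens buys one extra unit of
    # weight rather than 10x the weight, blunting large holders.
    return math.log10(1 + tokens_held)

def becomes_benefit_task(votes: dict, holdings: dict) -> bool:
    # A nominated task qualifies if at least 25% of the cast vote
    # weight is in the affirmative.
    total = sum(vote_weight(holdings[v]) for v in votes)
    yes = sum(vote_weight(holdings[v]) for v, aye in votes.items() if aye)
    return total > 0 and yes / total >= 0.25

holdings = {"alice": 1_000_000, "bob": 100, "carol": 100}
votes = {"alice": False, "bob": True, "carol": True}
print(becomes_benefit_task(votes, holdings))  # True: log weights blunt alice's stake
```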
Like, the only question is, where does this come from? Where does that money come from, the benefit payment? So I guess it has to come from other people. So you have to have like some sort of a benefit tax or something like this, where you take from other transactions and give to the benefit tasks. And then, the whole system works the same; there's nothing about this that makes it benefit specific. You can switch out the word benefit with evil: like, you have an evil reputation, and then some tasks are evil and get evil votes, and if you are especially evil, you get evil payments. This whole notion rests on the fact that people somehow recognize what's beneficial, which is highly, highly controversial, right? And it's basically politics, right? Every politician advertises themselves as beneficial; every, you know, organic food is beneficial. But then you just do the bare minimum: you, like, take 99% of tomatoes and you put a little bit of dirt on top of them, and boom, they're organic, like, they're now labeled as organic. To me, this just seems like a thing that's going to be gamed so hard, it's going to become irrelevant. It's basically a political game at this point, because you cannot define benefit other than through human voting, and human voting is subject to money. And yeah, that's how politics starts. Okay, so they have a lot of examples. So here you see sort of this network idea; they have a lot of examples of what can be done with this. I don't want to go into these, because this video is already quite long, but it's a lot of talk. I just want to say that it's a lot of talk. And, you know, they're basically putting up everything they have done so far and are doing on the network, and what they can do with the network, which is all cool, right? But it's sort of advertising what kind of research they do on it. And yeah, the last point. The last point. Yes, it's very long. So these people, for some reason, there are like two things they love, or three: there's graphs, and domain specific languages. For some reason, they love graphs and domain specific languages. So their idea of AI, it all revolves around a kind of classic notion of AI. So there are knowledge bases, and then there are graphs, and you can see this reflected in SingularityNet, right? This idea that lots of things, by themselves networked together, can make up a bigger AI and so on: that is an exact reflection of it, and it goes exactly counter to, like, the deep learning idea of let's do everything end to end. So SingularityNet here is very much a reflection of what these people think. And yeah, for some reason, they love inventing DSLs for new problems. Like, why? I've never understood DSL aficionados, but I guess if you are one, you're having fun. Okay, so here they say, measuring, modeling and extending SingularityNet. Okay, so this is sort of their research on SingularityNet itself, which is, you know, quite an important thing if you build a system like this. But what I wanted to do... so, I've read through all of these kinds of research suggestions and what they're doing, and they just make it seem great, but it's also very washy, in my opinion. And I was wondering, is it just because it's a white paper, and there's actual good research behind it? For most things, I can definitely guess so, you know; they're, you know, they're also the people behind this Sophia robot.
I don't know if you know, like, this Sophia robot, and so on. So they have a lot of success, so precision medicine and so on, there's a lot of research; but some things just sounded also just washy. So here, this is something that made me particularly just kind of stop. So they want to measure with this phi quantity, for measuring integrated information in complex cognitive networks. So this phi, this number phi, by this researcher Tononi, is sort of a fundamental measure of the level of consciousness. And they themselves say, you know, maybe it's not, you know, the measure, but it's certainly an interesting measure, and so on. And they say, we have experimented with measuring phi across time series generated by OpenCog (by the way, OpenCog is from the same person, one of the co-founders of SingularityNet, Ben Goertzel), OpenCog's attention allocation module, yada, yada, yada, while the system parsed and semantically analyzed a series of short documents. We have also calculated phi values while the OpenCog system controlled the Sophia humanoid robot, as she led a person through a structured meditation system. So, like, the extent of them describing the research is simply: we have experimented with it, and we have measured it across time. And so I was wondering, like, what's behind this? So I went and I read the paper that's linked there, that's this Using Tononi Phi to Measure the Consciousness of a Cognitive System While Reading and Conversing. And so this is a paper, it's quite short, but they let it read, like, texts about different things, and they measure this phi quantity. And when you go and look first, what's this phi quantity? This is kind of one of these papers... it's very mathematical, actually, and there's a lot of information theory in there. So it has something to do with mutual information; there's a lot of ways you can calculate it, as you can see here on the left, and there's a lot of ways you can approximate it. So this is like a serious quantity, but measuring it is like super hard. And here, they let this OpenCog system read short texts with respect to, as you can see here, poison and insects. And they look where the, sort of, I guess the attention, the attentional focus of the system rests, on which of these concepts, right? And then they measure the phi over time. And their claim here is... okay, we also calculated phi based upon the concept nodes. No, wait, up here: as the system ingests each sentence, word nodes corresponding to each word are stimulated with this system, thus triggering attentional focus dynamics correlated with the reading process. One goal of the study was to observe whether, after reading documents regarding insects, then poisons, attention would spread to the concept related to insect, to insecticide. This phenomenon did occur. So they say, okay, when you read insect and poison, after that, the system ought to put a focus on insecticide. And you can see, so insect is blue, poison is orange, and you can see maybe the insecticide, you know, bumping a little bit after, while you read poison. But honestly, like, this could also just be because it's associated with poison. This is, you know, I don't know, this is interpreting a bit too much into that graph.
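For reference, since phi builds on exactly this kind of information theory, here is the elementary quantity underneath it, the mutual information between two discrete time series. To be clear, this is not an implementation of phi itself, which additionally requires searching over partitions of the system:

```python
import math
from collections import Counter

def mutual_information(xs, ys) -> float:
    """I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# Two perfectly coupled binary series share exactly one bit of information:
print(mutual_information([0, 1, 0, 1], [1, 0, 1, 0]))  # 1.0
```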
And then, what's even more astounding: we also calculated phi values based on the concept nodes insect, poison and insecticide. As figure three shows, there was an interesting jump in the phi value when insecticide first became important, suggesting that the phi increase was correlated with an increased complexity of attentional spreading within the AtomSpace. The AtomSpace, and so on, is this classic AI concept of knowledge bases and atoms. So the claim is that the phi on the right somehow correlates with the insecticide attention on the left, or with anything interesting at all. And that, to me, is a stretch. In fact, I've put these things above one another, so in the gray background here you can see the phi value, and I've matched up the time steps right here. The claim is that here, insecticide marginally bumps up, and then this phi spike follows. But if you look anywhere else: here, insecticide bumps up, okay, but the spike is much delayed; and here, it doesn't bump up at all, but there's a spike anyway. That is just not an inference you can make right here. I'm not sure, let me know what you think, but you can't just... nah, sorry. This one was the one that was the most strange to me. Don't tell me that this does anything. But in any case, this is the type of research that they do, measuring the intelligence of the system, and so on.

Yeah. The last thing is this OfferNet economy they want to build. In researching this paper, I have also watched a bunch of talks from Ben, and he seems to be sprawling with ideas. The idea behind OfferNets is that it's sort of an economy without money. So person A, person B, person C, or machines, are in an economy. Person A wants something that person B has, but B doesn't want anything that A has; instead, B wants something that C has, and C wants something that A has. The logic here is: A cannot trade with B, B cannot trade with C, C cannot trade with A, but they can trade in a circle, right? And these OfferNets make this possible. The idea is that everyone puts out there what they want, and the OfferNets will figure out who needs to trade with whom, and thereby you could make an economy without money, a money-free economy.

This is another thing where I think the ideas go a bit too far, and there was a fun sentence that I've seen right here. OfferNets analyzing the data, yada, yada, yada, open-ended process... okay, I don't know exactly where it was, but they say something like: yeah, OfferNets could mediate this process. And how do they mediate this process, such that everyone actually gets their worth of stuff that they put out? They mediate this process by means of the offer coin.
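Before we get to what the coin does: the matching part, figuring out who can trade with whom in a circle, is at its core cycle-finding in a directed graph of wants. Here is a minimal sketch of that reading; the representation (one want per agent) and all the names are my simplification, not the actual OfferNet design.

    # Each agent points at the agent who has what they want. A feasible
    # money-free exchange is a closed cycle in this directed "wants" graph.
    # One-want-per-agent is my simplification; real preferences are richer.

    def find_trade_cycle(wants):
        # wants: dict mapping agent -> agent whose goods they want.
        # Returns one list of agents who can all be satisfied by trading
        # in a circle, or None if no such circle exists.
        for start in wants:
            path, seen = [start], {start}
            current = start
            while current in wants:
                current = wants[current]   # follow the chain of wants
                if current == start:
                    return path            # the circle is closed
                if current in seen:
                    break                  # ran into a cycle without start
                seen.add(current)
                path.append(current)
        return None

    # A wants what B has, B wants what C has, C wants what A has:
    print(find_trade_cycle({"A": "B", "B": "C", "C": "A"}))  # ['A', 'B', 'C']

In the three-agent example, this returns the circle A, B, C, which is exactly the three-way swap described above; what the coin is then for, as far as I can tell, is settling such a circle.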
Okay, so the offer coin is transferred from A to B, let's say, because A wants something that B has; and then the offer coin is transferred from B to C, and then from C to A. So the offer coin makes all of this happen in an economic sense. And like, huh, are you saying there is an asset going along with a certain service, and the asset is sort of agnostic, such that if B gets the asset from A, B can then give the asset to C in order to obtain services from C? And that asset is actually what makes the whole economy work, even though no one directly wants to trade with each other? And you're doing all of that without money? That's crazy.

So yeah, in any case... ah, there we go: OfferNets, a decentralized economy providing an alternative to purely currency-based exchanges. This economy features a complex network of interactions that optimizes reciprocal exchanges of goods and services by finding agents with compatible and complementary preferences and coordinating their interactions, dot dot dot, by means of a coin. Which is money. This is exactly what money does; that's what money is for. In any case, these people are very smart, and I'm probably too dumb to see what the exact difference is right here, so I just found it funny. If I'm completely wrong, then let it be stated that this is what a semi, only semi, smart person would conclude from reading these things.

All right, this was lengthy, but I hope you got the idea. The base system is an API marketplace. Now, the API marketplace in itself doesn't have anything to do with AI necessarily, but I've made the case that the API marketplace only makes sense in the world of AI, because if it were regular software, you would just hard-code either the API calls or you would actually include the library. So the marketplace makes sense in the realm of AI; it's debatable whether that's actually the case. It very much goes against the end-to-end principle: it bets on a form of AI that works on discrete graphs, that is divided into sub-components, on networks of components built together to achieve higher-order functions. It could definitely be that the future of AI lies in this direction; it's just that the current research direction is pointing away from that. The whole marketplace runs on the blockchain, and only the marketplace, so the AI processing is off-chain; it is not on-chain AI. And yeah, they've built it, and they are in money problems currently, so they're inflating the currency. But they're switching blockchains, because they think the new blockchain will be better and faster, and they project high growth. And the token is actually active, so it's not a dead project, and they are in the news quite a bit, especially with this Sophia robot, which is kind of a PR magnet.

All right, that was what I had to say. I hope you enjoyed it. If you did, share it out. Let me know what you think in the comments, let me know what I did wrong, and bye bye.
[ { "start": 0, "end": 7.140000000000001, "text": " Hi there. Today we'll look at SingularityNet, the global AI marketplace, as it is advertised" }, { "start": 7.140000000000001, "end": 12.540000000000001, "text": " on their website. Specifically, we're going to look at the SingularityNet white paper" }, { "start": 12.540000000000001, "end": 20.84, "text": " 2.0, as it appeared in 2019. So it's version two, version one, I think appeared in 2017." }, { "start": 20.84, "end": 26.94, "text": " So SingularityNet is a, as it says, a global AI marketplace, but it is also kind of an" }, { "start": 26.94, "end": 35.28, "text": " effort. It is a foundation, it has blockchain in it, it has AI in it, it has symbolic computation," }, { "start": 35.28, "end": 41.44, "text": " it has graphs, it has all the things, all the buzzwords you could possibly want. So" }, { "start": 41.44, "end": 51.32, "text": " the high level summary of this system is that it is a marketplace for API is basically on" }, { "start": 51.32, "end": 59.88, "text": " blockchain, where either humans or API's can call other API's and pay them for that service." }, { "start": 59.88, "end": 67.56, "text": " And the goal is to sort of get a network going of API's that call API's that call API's and" }, { "start": 67.56, "end": 74.8, "text": " sort of have that build into a global AI, not only marketplace, but like as itself a" }, { "start": 74.8, "end": 83.64, "text": " global AI. This is backed by the SingularityNet foundation. And they do a whole bunch of development" }, { "start": 83.64, "end": 90.44, "text": " of the platform, but also research on the platform. And we'll look at all of this today." }, { "start": 90.44, "end": 95.32, "text": " So it is a white paper, which is not a research paper, as we usually look at. That means a" }, { "start": 95.32, "end": 100.53999999999999, "text": " bunch of things. First of all, as you can see, it's quite long, and we're going to skip" }, { "start": 100.54, "end": 107.68, "text": " most of it, actually. But also, I have maybe it's just it's just because it's a white paper," }, { "start": 107.68, "end": 115.60000000000001, "text": " and that's usual. But this, all of this is, it's sort of marketing, and it's, it's, it's" }, { "start": 115.60000000000001, "end": 120.96000000000001, "text": " sort of never fixates on one level of analysis, like it goes into this, and then a bunch of" }, { "start": 120.96000000000001, "end": 125.04, "text": " buzzwords, and then super detail. And then it talks about, you know, what kind of cache" }, { "start": 125.04, "end": 130.68, "text": " do we need for the database, but then it goes back and it just references a bunch of stuff" }, { "start": 130.68, "end": 137.32, "text": " without explaining it to just kind of beef it up for investors, I guess. I don't know." }, { "start": 137.32, "end": 142.56, "text": " In any case, we're going to go through it, we're going to go through what the marketplace" }, { "start": 142.56, "end": 151.04000000000002, "text": " looks like, how it works, what it's good for, or some of my criticisms. The central components," }, { "start": 151.04, "end": 159.04, "text": " as I said, are the API's, but also a rating system. And it is also decent really governed." }, { "start": 159.04, "end": 164.64, "text": " So the goal is to have the community govern the network. And lastly, the goal is to have" }, { "start": 164.64, "end": 176.48, "text": " all of this be beneficial for humanity. So we're going to see how this all ties together." 
}, { "start": 176.48, "end": 182.6, "text": " So what's the the current the current situation and what the singularity net want to do. So" }, { "start": 182.6, "end": 190.79999999999998, "text": " let's say you are this external software, you're a person, okay. And what you want to" }, { "start": 190.79999999999998, "end": 198.28, "text": " do is you want to summarize a document. The view that this system has is that you could" }, { "start": 198.28, "end": 207.56, "text": " give this to a document summarizer. The document summarizer, however, looks at this and sees," }, { "start": 207.56, "end": 212.32, "text": " oh, what are you giving me, you're giving me. And in this case, it might be, you know," }, { "start": 212.32, "end": 217.88, "text": " an article of the New York Times that has both text and video, okay, so you give it" }, { "start": 217.88, "end": 223.04, "text": " you see an article has like a title, it has a bunch of text. And here it has like a little" }, { "start": 223.04, "end": 229.16, "text": " video to go along with it. And you simply say summarize this to me. So this document" }, { "start": 229.16, "end": 233.88, "text": " summarizer, all it does is it looks at the document and it sees up there is a bunch of" }, { "start": 233.88, "end": 241.64, "text": " text. And there is a video here. And I'm going to. So in order to summarize the document," }, { "start": 241.64, "end": 248.04, "text": " I need to summarize the text and I need to summarize the video. So it will take the text" }, { "start": 248.04, "end": 254.79999999999998, "text": " and it will send it to a note that's dedicated only to text summarization. And then it will" }, { "start": 254.79999999999998, "end": 261.71999999999997, "text": " send the video to a note that's only dedicated to video summarization. The video summarizes" }, { "start": 261.71999999999997, "end": 268.36, "text": " summarizer in turn could do stuff like call face recognizers and call some databases in" }, { "start": 268.36, "end": 272.96, "text": " order to sort of get who is in the video or what's in the video, you could call object" }, { "start": 272.96, "end": 280.35999999999996, "text": " detection and so on. The text summarizer, in turn, it could call some word sense disambiguators," }, { "start": 280.35999999999996, "end": 286.84, "text": " it could call entity extractors to also realize what is in the document. And then these nodes" }, { "start": 286.84, "end": 294.47999999999996, "text": " will send sort of so every node can call other nodes in the network. And at the bottom, you'll" }, { "start": 294.47999999999996, "end": 302, "text": " have these sort of AI primitives, like face identification, entity extraction, and so" }, { "start": 302, "end": 307.32, "text": " on. And they are not to be meant to be called by you directly, they're meant to be called" }, { "start": 307.32, "end": 314.4, "text": " by higher level nodes that sort of aggregate them. Okay. And this, if you look at this," }, { "start": 314.4, "end": 319.88, "text": " and if you are a software developer, you, you think of libraries, like you think, of" }, { "start": 319.88, "end": 326.84, "text": " course, you know, this is this here, this stuff here is maybe that's hogging face. And" }, { "start": 326.84, "end": 332.76, "text": " this stuff here, probably in spacey that exists, right? 
If you are a software developer, you" }, { "start": 332.76, "end": 337, "text": " know, if you have to do subtasks, someone probably already solved that subtasks, I can" }, { "start": 337, "end": 345.4, "text": " just call a library. Now, the view of singularity net is that no, maybe you don't want to call" }, { "start": 345.4, "end": 353.76, "text": " a library. Maybe you don't know yet what's the best. So their view is a marketplace." }, { "start": 353.76, "end": 361.8, "text": " And why is a marketplace better for AI than for regular programs? Because, you know, for" }, { "start": 361.8, "end": 366.7, "text": " regular programs, we don't need a marketplace, we simply call a library. Why is that not" }, { "start": 366.7, "end": 372.4, "text": " good for AI? I'm, you know, I'm trying to, I'm trying to sort of make sense of this right" }, { "start": 372.4, "end": 378.64, "text": " here. I am not convinced by this system either, but I'm sort of trying to make the best case" }, { "start": 378.64, "end": 386.96, "text": " for it that I can. So if you are this, let's go back to that graphic. If you are this text" }, { "start": 386.96, "end": 393, "text": " summarizer, and you need to do, you need to do entity extraction, right, you might have" }, { "start": 393, "end": 399.59999999999997, "text": " a lot of a lot of choice. So there might be, you know, entity, entity extractor, a, there" }, { "start": 399.59999999999997, "end": 405.47999999999996, "text": " might be entity extractor, b, and so on, there might be many of these entity extractors," }, { "start": 405.48, "end": 413.88, "text": " and then a new paper comes out, right. And then entity extractor, f is somewhere on GitHub," }, { "start": 413.88, "end": 421.6, "text": " you know, but so what you need to do every time a new entity extractor comes out is released," }, { "start": 421.6, "end": 426.08000000000004, "text": " you know, someone makes a paper, maybe put some code, the code doesn't really work, you" }, { "start": 426.08000000000004, "end": 431.52000000000004, "text": " have to go fetch that code, you have to look, you have to plug this into your system, right," }, { "start": 431.52, "end": 435.68, "text": " you have to test against your data sets, and you have to decide, is this better than what" }, { "start": 435.68, "end": 443.59999999999997, "text": " I had before? Or is it worse? Is it worth including and so on? So it is in the in the" }, { "start": 443.59999999999997, "end": 450.58, "text": " classic software world, if you have a library that does something, it does that thing, right," }, { "start": 450.58, "end": 455.84, "text": " it cannot necessarily do it better or worse. However, in the machine learning world, it" }, { "start": 455.84, "end": 461.08, "text": " can definitely be you know, that this thing here is like 90% accurate, which is already" }, { "start": 461.08, "end": 466.03999999999996, "text": " good, but then something comes out with 95% accurate, and that's better, and you would" }, { "start": 466.03999999999996, "end": 471.2, "text": " like to sort of switch to the better thing, or the thing that meets your needs more, the" }, { "start": 471.2, "end": 476.84, "text": " thing that works on your test data set, and so on. So that's sort of the case to be made" }, { "start": 476.84, "end": 486.15999999999997, "text": " for an AI marketplace. 
Now, this singularity nets vision is that let's say, I'm a researcher," }, { "start": 486.16, "end": 491.28000000000003, "text": " I come up with a new entity extractor, right? I have my so I have my paper here, I have" }, { "start": 491.28000000000003, "end": 499, "text": " it written, I have maybe a bit of code somewhere. What I can do is I can plug this into singularity" }, { "start": 499, "end": 506.40000000000003, "text": " net, right, and then I am say, here, here, I am entity extractor x, and you can advertise" }, { "start": 506.40000000000003, "end": 512.72, "text": " yourself to this network. And then all the other nodes like this text summarizer node," }, { "start": 512.72, "end": 518.64, "text": " but you know, many other nodes could then come and sort of in an automated fashion," }, { "start": 518.64, "end": 523.86, "text": " test some sort of test data set that they have against you, right, they tested against" }, { "start": 523.86, "end": 529.96, "text": " your system. And they can evaluate you and then they will switch to you to using your" }, { "start": 529.96, "end": 537.5, "text": " code. If you are better than the competition for them, or maybe if you're cheaper, right." }, { "start": 537.5, "end": 542.6800000000001, "text": " And for that, if you're a researcher and do all that, for that you would get money, because" }, { "start": 542.68, "end": 549.2399999999999, "text": " every time a node calls you, they're giving you some money for analyzing their data. So" }, { "start": 549.2399999999999, "end": 557.68, "text": " that is the that is the sorry, that is the the core idea behind the AI marketplace right" }, { "start": 557.68, "end": 565.9599999999999, "text": " here. So the AI marketplace as a whole looks something like this. And there's a lot of" }, { "start": 565.96, "end": 575.14, "text": " stuff here. But we'll go through it sort of one by one. Okay, so it is so this, this here," }, { "start": 575.14, "end": 585.24, "text": " it mixes kind of conceptual and technical and so on. But ultimately, you have is there" }, { "start": 585.24, "end": 598.32, "text": " a way I can draw this more easily? Yeah, maybe. Okay, so you have consumers, okay, and consumers" }, { "start": 598.32, "end": 608.72, "text": " can be people, or can be robots. And you have a whole network of them, right. And the robots," }, { "start": 608.72, "end": 616.64, "text": " if it's a robot, the robot exposes an API, as we said, the robot exposes an API that" }, { "start": 616.64, "end": 622.1, "text": " says exactly what inputs it takes and what outputs it provides. And it can also do tags." }, { "start": 622.1, "end": 628.64, "text": " So here are my inputs, here are my outputs, and it can it can have some tags, it can," }, { "start": 628.64, "end": 636.12, "text": " for example, say, Hey, I am an entity extractor. My, you know, I do it, I do entity extraction" }, { "start": 636.12, "end": 642.08, "text": " in English, and, and so on, though, maybe the English would actually go into the into" }, { "start": 642.08, "end": 646.84, "text": " the input definition. So we could do entity extraction. So the input definition says I" }, { "start": 646.84, "end": 659.32, "text": " need a string that's called text. And that string needs to be language English. 
And for" }, { "start": 659.32, "end": 669.88, "text": " that, I can produce a set of a list of entities, and to T, something like this, okay, it is" }, { "start": 669.88, "end": 676.5200000000001, "text": " very much like you would specify an interface in regular programming, except that in singularity" }, { "start": 676.5200000000001, "end": 683.72, "text": " net, these types here, so the string with the language parameter, and like the definition" }, { "start": 683.72, "end": 689.52, "text": " of what an entity is, they are set, I don't want to say centrally, because it's on a it's" }, { "start": 689.52, "end": 694.28, "text": " on a blockchain. But in essence, they are on the blockchain centrally deposited, you" }, { "start": 694.28, "end": 701.88, "text": " can add your own, but you can also implement the the ones that other people have already" }, { "start": 701.88, "end": 707.5600000000001, "text": " defined. And what would be the good thing about not defining your own? Well, if if this" }, { "start": 707.56, "end": 715.64, "text": " is the kind of commonly agreed upon standard for entity, or entity recognition, did I say" }, { "start": 715.64, "end": 723.3599999999999, "text": " augmentation extraction entity extraction, I said, I put an A all the time, sorry about" }, { "start": 723.3599999999999, "end": 729.9599999999999, "text": " that. If this is the common definition for entity extraction, and you implement the same" }, { "start": 729.9599999999999, "end": 735.92, "text": " right, you have your new algorithm over here, and you implement the same API, you know," }, { "start": 735.92, "end": 740.9599999999999, "text": " you have this this green API, and you implement the same types, then anyone who uses this" }, { "start": 740.9599999999999, "end": 749.68, "text": " API, can, if they want switch without any work to your API. And if you are better, then," }, { "start": 749.68, "end": 754.52, "text": " you know, you get probably their business because they want to call the better one." }, { "start": 754.52, "end": 759.4799999999999, "text": " The idea of singularity net actually goes further, because this is not only callable" }, { "start": 759.48, "end": 766.24, "text": " by humans, this is also callable by other robots. So here I have a other robot. And" }, { "start": 766.24, "end": 771.48, "text": " this is a special robot, because this robot is like an evaluator robot. So this robot" }, { "start": 771.48, "end": 777.36, "text": " can go around, and it has a little data set inside of it. And it will just do nothing" }, { "start": 777.36, "end": 783.88, "text": " else but scan for new AI's on the network that implement a certain API, it will recognize" }, { "start": 783.88, "end": 790.16, "text": " it will say, ah, this is the this is the API for entity recognition, or entity extraction," }, { "start": 790.16, "end": 796.12, "text": " I will simply run my test data set against it. And I will run my test data set against" }, { "start": 796.12, "end": 809.52, "text": " this and so on. And I will report. So my API will be, I simply output, I simply so input" }, { "start": 809.52, "end": 819.1999999999999, "text": " would be a task name. So task would be a string or something like this. And the output would" }, { "start": 819.1999999999999, "end": 835.16, "text": " be a list of model and performance like model a model m 90% model x 95%. 
Okay, so there" }, { "start": 835.16, "end": 842.0799999999999, "text": " couldn't there can be robots that test other robots, and then publish sort of ranking lists," }, { "start": 842.0799999999999, "end": 849.56, "text": " and then I as a, like, I as a human or the robot, you know, the the higher order robots," }, { "start": 849.56, "end": 856.92, "text": " they can go read this robot, and then decide to which of the of the all the listed and" }, { "start": 856.92, "end": 863.3199999999999, "text": " things they want to go. So at central core to the system is this kind of shared type" }, { "start": 863.32, "end": 868.7600000000001, "text": " system. If you share the types, if you share the API's, your API's become replaceable with" }, { "start": 868.7600000000001, "end": 873.96, "text": " one another. And therefore you can enable sort of automatic competition and automatic" }, { "start": 873.96, "end": 878.6800000000001, "text": " matchmaking. So these robots, the there are evaluator robots, and there are matchmaker" }, { "start": 878.6800000000001, "end": 884.6400000000001, "text": " robots, where you can tell a robot, I would like to extract some entities, please find" }, { "start": 884.6400000000001, "end": 890.5200000000001, "text": " me the best node in the network that does it. Okay, and the marketplace makes sense" }, { "start": 890.52, "end": 897.6, "text": " because it's AI and it constantly shifts which one is good and which one's appropriate. That's" }, { "start": 897.6, "end": 902.36, "text": " the best case I can make for it. Like, I have my doubts that this is actually the case," }, { "start": 902.36, "end": 907.96, "text": " like, but we'll get to we'll actually know let's make the case against it. So my case" }, { "start": 907.96, "end": 915.4, "text": " against the AI marketplace as it is listed here is twofold. So first, first point against" }, { "start": 915.4, "end": 924.3199999999999, "text": " it. Everything we know right now is end to end. The direction of research is clearly" }, { "start": 924.3199999999999, "end": 931.6, "text": " going into less structured data and more end to end. That means if I want to do a text" }, { "start": 931.6, "end": 939.3199999999999, "text": " summer or a document summarizer, I am right now much better off just training a giant" }, { "start": 939.32, "end": 945.44, "text": " model that does it end to end, rather than using many, many small models. Because if" }, { "start": 945.44, "end": 952.5600000000001, "text": " I call an entity extractor, right, and I simply only rely on that information, I lose the" }, { "start": 952.5600000000001, "end": 957.0400000000001, "text": " rest of the text and the nuances in the text, I simply get the output of that model. Now," }, { "start": 957.0400000000001, "end": 966.6800000000001, "text": " I could combine that, of course, but this this idea of modularizing AI, I'm right now," }, { "start": 966.68, "end": 973.64, "text": " research is pointing into a different direction. And second of all, I still believe, like," }, { "start": 973.64, "end": 979.88, "text": " if I make a product, if I build a product towards a user, I want to know what's in it." }, { "start": 979.88, "end": 984.3599999999999, "text": " Like even if I have to go with myself and test the stupid API, I would never use like" }, { "start": 984.3599999999999, "end": 990.2399999999999, "text": " a matchmaking agent that dynamically goes and finds me someone who can implement this" }, { "start": 990.24, "end": 997.04, "text": " API. 
Because implementing an API only goes so far implementing, you know, like I require" }, { "start": 997.04, "end": 1003.28, "text": " image and I output value, that's an API. But that can be many. And then you know, maybe" }, { "start": 1003.28, "end": 1011.24, "text": " these tags here, maybe these tags could do something. But it is not like I think the" }, { "start": 1011.24, "end": 1018.12, "text": " system, even though it's, you know, thought out well with the types and the API is and" }, { "start": 1018.12, "end": 1023.36, "text": " so on. I don't think that's enough. I think that works for a very, very small subset of" }, { "start": 1023.36, "end": 1031.34, "text": " AI tasks. I don't think that works for most of the AI tasks that we have right now, because" }, { "start": 1031.34, "end": 1042.92, "text": " simply API definitions just don't convey what the models so wait API. So API does not convey" }, { "start": 1042.92, "end": 1050.88, "text": " what the model does function. In my mind, so I would ask yourself if you would if you" }, { "start": 1050.88, "end": 1057.0800000000002, "text": " were there to use a matchmaking agent, and then you know, sell that product to a customer." }, { "start": 1057.0800000000002, "end": 1061.96, "text": " It's it's but I guess the goal here is that in the future, these matchmaking agents will" }, { "start": 1061.96, "end": 1068.8000000000002, "text": " be much more intelligent and so on. Yeah. So here's how it works on a more sort of technical" }, { "start": 1068.8, "end": 1074.9199999999998, "text": " level. So there is two components here, there's off chain and on chain. So if I'm assuming" }, { "start": 1074.9199999999998, "end": 1079.6399999999999, "text": " you know, what a blockchain is, if you don't know what a blockchain is, a blockchain is" }, { "start": 1079.6399999999999, "end": 1085.12, "text": " basically a distributed database, and in some forms, also a computation engine. So it's" }, { "start": 1085.12, "end": 1092.1599999999999, "text": " kind of a distributed computer that you can't fake. So you can't cheat, no one has authority" }, { "start": 1092.16, "end": 1100.3600000000001, "text": " over it, everything is visible. And so that's secure. The drawback is you cannot do hardcore" }, { "start": 1100.3600000000001, "end": 1107.3600000000001, "text": " computation on blockchain. So this is not AI on blockchain, the blockchain is simply" }, { "start": 1107.3600000000001, "end": 1114.64, "text": " there to first of all, register the AI's so register the types. So this this API is here," }, { "start": 1114.64, "end": 1120.76, "text": " and register what AI's are available in the network. And second of all, to facilitate" }, { "start": 1120.76, "end": 1130, "text": " the payments to the AI. So how does that work? It goes via this sort of multi party escrow" }, { "start": 1130, "end": 1134.44, "text": " escrow contract right here. So there's a registry, by the way, that's where AI's register and" }, { "start": 1134.44, "end": 1139.52, "text": " put their types. So that's one function of the blockchain. The other function is to escrow" }, { "start": 1139.52, "end": 1145.32, "text": " money. And this, if you know, lightning network is very similar to this. So what you would" }, { "start": 1145.32, "end": 1153.6799999999998, "text": " do if I don't know, Alice wants to call Bob, Alice would sort of put a bunch of money like" }, { "start": 1153.6799999999998, "end": 1161.12, "text": " a big bunch of money. How do I do that? 
Alice would send money to this escrow account like" }, { "start": 1161.12, "end": 1168.08, "text": " this much money. And then that establishes a channel between Alex, Alice, sorry, and" }, { "start": 1168.08, "end": 1173.6799999999998, "text": " Bob. So there is a channel channel is opened, and it's tied to this money. And now Alice" }, { "start": 1173.68, "end": 1180.9, "text": " can sort of send incremental amounts of that money to Bob. And every time you know, one" }, { "start": 1180.9, "end": 1185.8, "text": " of these, like a little bit of that money is used up. And the way the reason you do" }, { "start": 1185.8, "end": 1193.0800000000002, "text": " it in escrow form and not so all of these could be transactions on the blockchain, right." }, { "start": 1193.0800000000002, "end": 1197.96, "text": " But that's first of all, it's slow. And second of all, it's expensive. And if you do it like" }, { "start": 1197.96, "end": 1203.72, "text": " this, you actually only need at, you know, you need one transaction in best case. So" }, { "start": 1203.72, "end": 1210.72, "text": " if Alice spends this much money to Bob, there needs to be only one transaction to putting" }, { "start": 1210.72, "end": 1215.52, "text": " all of it to Bob at the same time rather than all these small transactions. So that's kind" }, { "start": 1215.52, "end": 1221, "text": " of the the channel principle. I think yeah, it's very similar to lightning network. And" }, { "start": 1221, "end": 1227.56, "text": " it's still secure. So there, it's still secure. The way it is done. I don't want to go into" }, { "start": 1227.56, "end": 1235.36, "text": " channel economics and security right here. But suffice to say, you can make this secure" }, { "start": 1235.36, "end": 1243.5, "text": " and fast to a certain degree. Okay, so that's how it works. Every time you call an API," }, { "start": 1243.5, "end": 1248.5, "text": " you just send it some money in order to call it. So how does this look? This looks something" }, { "start": 1248.5, "end": 1254.32, "text": " like this. Sorry. Here is this AI marketplace, they've actually built it. And they have a" }, { "start": 1254.32, "end": 1261.04, "text": " bunch of services on there. As you can see, it's, it's, it's kind of they take some standard" }, { "start": 1261.04, "end": 1268.08, "text": " AI tasks, and they put them on here. And if you click on one, you can either, you know," }, { "start": 1268.08, "end": 1273.36, "text": " pay a GI tokens. That's a thing we're going to get to in a second. Or you I think you" }, { "start": 1273.36, "end": 1278.24, "text": " have like 10 free calls a day if you make an account. So I've tried it out, you know," }, { "start": 1278.24, "end": 1286.34, "text": " it works. But it's important to realize that the computation does not happen on the blockchain," }, { "start": 1286.34, "end": 1292.9, "text": " you send money on the blockchain, and the AI service, it runs off chain. So this is" }, { "start": 1292.9, "end": 1302.2, "text": " off chain. Okay. So it is not a secure AI, you still need to trust the thing you're calling," }, { "start": 1302.2, "end": 1309.3600000000001, "text": " um, it's not about privacy that much, but you, you can't verify the outputs, you can't" }, { "start": 1309.3600000000001, "end": 1314.04, "text": " verify the computation as you could if if it were happening on chain. 
Now there are" }, { "start": 1314.04, "end": 1320.24, "text": " methods to sort of do heavy computation on chain, but these, I guess wouldn't be that" }, { "start": 1320.24, "end": 1327, "text": " efficient. So just take that in mind. Now, the other thing is, I always sent say, you" }, { "start": 1327, "end": 1333.64, "text": " send around money. But what you actually send around is a token. So a token is a very special" }, { "start": 1333.64, "end": 1340.64, "text": " concept. If you if you don't know what a token is, it's like money on top of money. So it's" }, { "start": 1340.64, "end": 1345.68, "text": " like if you go to a fair, and the fair has like its own internal money system, and at" }, { "start": 1345.68, "end": 1350.56, "text": " the beginning, you pay like 20 bucks, and you get 100 fair coins, and you can use the" }, { "start": 1350.56, "end": 1356.7, "text": " fair coins inside the fair. And that just enables the fair to sort of have its own monetary" }, { "start": 1356.7, "end": 1362.76, "text": " policy. And it's usually done with these projects to at the very beginning, you sort of sell" }, { "start": 1362.76, "end": 1368.48, "text": " those coins to a lot of people and the people buy it not because they can use it right there," }, { "start": 1368.48, "end": 1373.8400000000001, "text": " but they estimate they can use it later. And it's a way to found a project that's called" }, { "start": 1373.8400000000001, "end": 1381.16, "text": " an it's called an initial coin offering usually or initial token offering the coin that singularity" }, { "start": 1381.16, "end": 1389.68, "text": " that uses is aptly called a GI. And there is 1 billion. And you can see here, it's still" }, { "start": 1389.68, "end": 1395.0400000000002, "text": " active. So it's still being traded. You can see this is an hour ago, 20, 15 minutes ago," }, { "start": 1395.0400000000002, "end": 1403.72, "text": " and so on. If you look at here is analysis. If you look at the activity on the network," }, { "start": 1403.72, "end": 1409.44, "text": " it had a lot of activity at the beginning, it dropped and now it picked up a little bit" }, { "start": 1409.44, "end": 1417.76, "text": " again. I don't know exactly what that's related to. But so it is still alive. If you look" }, { "start": 1417.76, "end": 1424.88, "text": " at the price, however, this sharply dropped and is now actually below the price of the" }, { "start": 1424.88, "end": 1429.8, "text": " initial coin offering. And what you hope when you you know, buy the initial coin is not" }, { "start": 1429.8, "end": 1434.68, "text": " only that you can use it later, but you know that since there is only limited amount of" }, { "start": 1434.68, "end": 1440.72, "text": " tokens that that will be more valuable in the future. Because people want to buy it" }, { "start": 1440.72, "end": 1447.04, "text": " off you because they want to use the network here, it sort of looks like that's not exactly" }, { "start": 1447.04, "end": 1453.24, "text": " happening. And we'll get to what they're doing against it. Right in a second, the answer" }, { "start": 1453.24, "end": 1459.94, "text": " is inflation. So in a new blog post, actually, as I was preparing for this video, this new" }, { "start": 1459.94, "end": 1469.8400000000001, "text": " blog post came out yesterday. And here, they're announcing sort of the path forward Singularity" }, { "start": 1469.8400000000001, "end": 1475.3600000000001, "text": " Net phase two. 
And essentially, what they're doing is they're switching blockchains from" }, { "start": 1475.3600000000001, "end": 1481.64, "text": " Ethereum to Cardano. And I have my doubts isn't like I don't know much about this whole" }, { "start": 1481.64, "end": 1491.24, "text": " the whole crypto space, but isn't Cardano where massive amounts of the of the coins" }, { "start": 1491.24, "end": 1498.48, "text": " are like in some I think there are massive amounts that are just never moved and so on." }, { "start": 1498.48, "end": 1506.6000000000001, "text": " And it's quite scary. But you know, they probably know what they're doing. And with that, they" }, { "start": 1506.6000000000001, "end": 1511.5400000000002, "text": " are doubling the amount of tokens like they could do it without increasing the token" }, { "start": 1511.54, "end": 1518.6, "text": " tokens. But with that, they're issuing another billion tokens, I think 50 or 25% will go" }, { "start": 1518.6, "end": 1522.86, "text": " to themselves. So that's usually you do that in initial coin offering, right, you keep" }, { "start": 1522.86, "end": 1528.68, "text": " some of the tokens to yourself, because as people buy it, it's valuable. And that's how" }, { "start": 1528.68, "end": 1534.08, "text": " you fund the operation. So here, they need to fund it some more. So they just inflate" }, { "start": 1534.08, "end": 1540, "text": " the currency with the new with the new token. And they project, you know, they project that" }, { "start": 1540, "end": 1549.2, "text": " the network is used is going to be used a lot more than double now. So I guess if you" }, { "start": 1549.2, "end": 1554.96, "text": " buy the new tokens here, phase two plan five years from now, there will be 2 billion instead" }, { "start": 1554.96, "end": 1558.56, "text": " of 1 billion tokens, my strong assessment is that in this case, the overall value of" }, { "start": 1558.56, "end": 1563.48, "text": " the network in 2025 is going to be far more than twice what it would be if we didn't release" }, { "start": 1563.48, "end": 1570.92, "text": " the new token. So they need money. They inflate the currency. It's you know, it's government." }, { "start": 1570.92, "end": 1578.16, "text": " I guess it's valid, but just just to be aware. Okay, that's the network. There are a few" }, { "start": 1578.16, "end": 1585.52, "text": " crucial components that I have left out now. But that's essentially how it works. So one" }, { "start": 1585.52, "end": 1592.2, "text": " crucial component, so you the registry is where you register. One crucial component" }, { "start": 1592.2, "end": 1598, "text": " is the reputation system. And this is something that's quite difficult. So the reputation" }, { "start": 1598, "end": 1606.32, "text": " system is important, because if you want to sort of find agents that that perform well," }, { "start": 1606.32, "end": 1612.4, "text": " you can also sort of rely on reputation. So if a lot of people have bought services from" }, { "start": 1612.4, "end": 1618.28, "text": " a particular node in the past, and they rated high, then you can sort of trust that node" }, { "start": 1618.28, "end": 1626.48, "text": " more than if if a node is lower rated or has dissatisfied customers. So they spent quite" }, { "start": 1626.48, "end": 1631.56, "text": " a bit here talking about reputation systems and how you could do them. 
And that is an" }, { "start": 1631.56, "end": 1637.76, "text": " open area of research is really hard problem to make a good reputation system that can't" }, { "start": 1637.76, "end": 1644.6399999999999, "text": " be gamed and so on. Yeah, there are various ways like, for example, a stake deposited" }, { "start": 1644.64, "end": 1649.5600000000002, "text": " by a consumer service owner to be forfeited should its rating in some dimension fall below" }, { "start": 1649.5600000000002, "end": 1657.88, "text": " a given threshold. So you can like put some money and say, Well, I I if my rating falls" }, { "start": 1657.88, "end": 1664.0800000000002, "text": " below a three, then that money is gone, I will like it's burned, it's automatically" }, { "start": 1664.0800000000002, "end": 1669.6000000000001, "text": " burned and that gives people more trust in you because you're now forced to uphold that" }, { "start": 1669.6, "end": 1675.7199999999998, "text": " rating. But it also allows some kind of mafia games like you could go to that, you know," }, { "start": 1675.7199999999998, "end": 1681.6399999999999, "text": " service owner be like, well, it would be a shame if you had a bunch of one star ratings" }, { "start": 1681.6399999999999, "end": 1689.08, "text": " coming in, then you can sort of blackmail them in given circumstances. It's not easy," }, { "start": 1689.08, "end": 1697.56, "text": " right? It's not easy. But that's built into into it. By the way, because this is on chain," }, { "start": 1697.56, "end": 1705.98, "text": " anyone can participate in the market permission less, which is a really good thing. However," }, { "start": 1705.98, "end": 1714.52, "text": " they maintain kind of a a DAP a centralized platform where they that they control. So" }, { "start": 1714.52, "end": 1719.04, "text": " you you sort of have this decentralized thing wherever you can participate, but only some" }, { "start": 1719.04, "end": 1724.8999999999999, "text": " people are listed on the central on the main hub, let's say, but you can technically build" }, { "start": 1724.9, "end": 1731.16, "text": " your own hub, like you can build you can build your own Android app store and so on. So think" }, { "start": 1731.16, "end": 1740, "text": " of it like, it's a marketplace for apps, but only the ones that are, you know, KYC compliant" }, { "start": 1740, "end": 1747.66, "text": " will be in the in the Google App Store, but you can build your own alternative app store." }, { "start": 1747.66, "end": 1751.88, "text": " They also want to provide AI infrastructure as a service. And that I feel it's really" }, { "start": 1751.88, "end": 1757.5600000000002, "text": " irrelevant, like they say, okay, we want to provide this, but it really doesn't matter" }, { "start": 1757.5600000000002, "end": 1765.22, "text": " for the singularity net. So they, they, here is where they go into, oh, you could do this," }, { "start": 1765.22, "end": 1769, "text": " you can do that with it, and so on, you can deploy it on embedded devices. So their idea" }, { "start": 1769, "end": 1774.24, "text": " is really that the whole world will be connected to this network. And whenever you require" }, { "start": 1774.24, "end": 1780.48, "text": " any sort of functionality, you just call the network, and the network solves your problem." 
}, { "start": 1780.48, "end": 1786.52, "text": " As I said, I'm kind of doubtful, I still think it's probably going to be people just build" }, { "start": 1786.52, "end": 1795.1200000000001, "text": " the functionality either into a custom, you know, uni service, or they, they just build" }, { "start": 1795.1200000000001, "end": 1805.26, "text": " it on device. So the last component here is democratic governance. So they are, they are" }, { "start": 1805.26, "end": 1812.12, "text": " invested in, in sort of making this a community effort. And one thing is this governance," }, { "start": 1812.12, "end": 1820.8799999999999, "text": " right? How do you govern decentralized organization? And that is also an unsolved problem. They" }, { "start": 1820.8799999999999, "end": 1828.2, "text": " do it in multiple stages. So they stay say, okay, in years one and two of network operation," }, { "start": 1828.2, "end": 1835.6000000000001, "text": " basically the foundations, the foundation says everything in according to any any major" }, { "start": 1835.6000000000001, "end": 1841.5, "text": " changes the foundation decides. So the foundations are the maker of the network. In years three" }, { "start": 1841.5, "end": 1850.1200000000001, "text": " and four, they transition. So major changes, agreement of the foundation plus a majority" }, { "start": 1850.1200000000001, "end": 1857.74, "text": " AGI holder votes. Minor changes don't actually even require the foundation. And then there's" }, { "start": 1857.74, "end": 1864.1200000000001, "text": " also this introduction of benefit tasks. Yeah, so years three and four, and from year five" }, { "start": 1864.1200000000001, "end": 1871.28, "text": " on forward, this the the foundation is gone. And only there it's only done by votes by" }, { "start": 1871.28, "end": 1877.1200000000001, "text": " AGI token holder votes, which are logarithmic such that rich people don't have too much" }, { "start": 1877.1200000000001, "end": 1887.06, "text": " power. Yeah, so this was launched in 2017 at the end. So technically, we are in this" }, { "start": 1887.06, "end": 1893, "text": " phase right here. And I have searched for like an announcement that yeah, we're going" }, { "start": 1893, "end": 1897.74, "text": " to transition from this mode to this mode. But I haven't found it on their blog instead" }, { "start": 1897.74, "end": 1903.3999999999999, "text": " of what I found are like announcements that they're going to they're going to launch this" }, { "start": 1903.3999999999999, "end": 1909.72, "text": " supervisory council, which are like elected members that check the foundation. And also" }, { "start": 1909.72, "end": 1915, "text": " in this roadmap of part two that we've just looked at, they also saying, oh, progressive" }, { "start": 1915, "end": 1919.88, "text": " decentralization, making it real. They also talk about this supervisory council, and they" }, { "start": 1919.88, "end": 1928.6, "text": " now pay them and they release financial reports. But nowhere does it say that you see here," }, { "start": 1928.6, "end": 1934.88, "text": " it's 3.5 years in so they should be in that second phase. Maybe they are, but I would" }, { "start": 1934.88, "end": 1939.34, "text": " guess they'd make an announcement if that's the case. Maybe I've just missed it. And they're" }, { "start": 1939.34, "end": 1946.9599999999998, "text": " actually doing this. 
But I have my feeling that if you you know, launch such a system," }, { "start": 1946.9599999999998, "end": 1953.56, "text": " and you have power to do stuff, and especially this if the system doesn't grow as much as" }, { "start": 1953.56, "end": 1961.08, "text": " you expect, and so on, you're not going to give that power away. So that's, that is my" }, { "start": 1961.08, "end": 1965.72, "text": " my doubt here is that if you have the power, it's of course, it's always better for you" }, { "start": 1965.72, "end": 1972.44, "text": " if you say, well, I'm just gonna hold on to it a little bit longer. Eventually, you know," }, { "start": 1972.44, "end": 1980.32, "text": " when everything goes well, but it's never that everything goes well. Like, yeah, alo" }, { "start": 1980.32, "end": 1987.76, "text": " communism. Okay, so enough rant. The benefits tasks. So they also have in mind, you see," }, { "start": 1987.76, "end": 1992.32, "text": " there's a lot of stuff in this network, right? They also have in mind that this this network" }, { "start": 1992.32, "end": 1997.96, "text": " should benefit sort of humanity as a whole, which is, you know, a laudable task. But they" }, { "start": 1997.96, "end": 2006.8799999999999, "text": " have a system where it's some tasks are classified as benefits tasks. And the these benefit tasks," }, { "start": 2006.8799999999999, "end": 2014.48, "text": " they are suggested by by a GIs by actors in the network that has so each agent gets a" }, { "start": 2014.48, "end": 2020.3999999999999, "text": " certain number of benefit votes, right? to cast each month based on its benefit rating." }, { "start": 2020.4, "end": 2024.96, "text": " So the rating system is multi dimensional. One aspect is the benefit rating, someone" }, { "start": 2024.96, "end": 2031.92, "text": " can rate you beneficial if you like, do if you're a GI cures cancer or something like" }, { "start": 2031.92, "end": 2043.3200000000002, "text": " this. And, and then you nominate you vote. And then some of the some money goes to these" }, { "start": 2043.3200000000002, "end": 2049.84, "text": " benefit vote winners. Once a qualified benefit decided nominates a certain task, yada, yada," }, { "start": 2049.84, "end": 2057.2400000000002, "text": " yada, yada, yada, if 25% votes are cast in the affirmative, then the task becomes a benefit" }, { "start": 2057.2400000000002, "end": 2063.44, "text": " task. Once a task is a benefit task, any agent capable of performing it and possessing a" }, { "start": 2063.44, "end": 2069.76, "text": " sufficiently high rating, and benefit rating will receive benefit payment for doing it." }, { "start": 2069.76, "end": 2075.6800000000003, "text": " Okay, so the idea is the community nominates beneficial tasks, and these tasks will get" }, { "start": 2075.68, "end": 2082.2999999999997, "text": " benefit payment. Like, the only question is, where does this come from? Where does that" }, { "start": 2082.2999999999997, "end": 2089.58, "text": " money come from the benefit payment? So I guess it has to come from other people. So" }, { "start": 2089.58, "end": 2094.44, "text": " you have to have like some sort of a benefit tax or something like this, that you have" }, { "start": 2094.44, "end": 2102.48, "text": " other transactions that you give to the benefit tasks. 
And then this is like, you the whole" }, { "start": 2102.48, "end": 2107.86, "text": " system work, there's nothing about this that makes it benefit specific, you can switch" }, { "start": 2107.86, "end": 2114.04, "text": " out the word benefit by evil, like some you have an evil reputation, and then some tasks" }, { "start": 2114.04, "end": 2120.6, "text": " are evil, and get evil votes. And if you are especially evil, you get evil payments. This" }, { "start": 2120.6, "end": 2126.28, "text": " whole notion rests on the fact that people somehow recognize what's beneficial, which" }, { "start": 2126.28, "end": 2133.96, "text": " is a highly, highly controversial, right. And it's basically politics, right? Every politician" }, { "start": 2133.96, "end": 2140.48, "text": " advertises themselves as beneficial, every, every, you know, organic food is beneficial." }, { "start": 2140.48, "end": 2146.44, "text": " But then you just do the bare minimum, you like cut, you take 99% of tomatoes, and you" }, { "start": 2146.44, "end": 2150.4, "text": " put a little bit of dirt on top of them, and boom, they're organic, like they're now labeled" }, { "start": 2150.4, "end": 2158.1600000000003, "text": " as organic. It's, it's, I this is, to me, this just seems like a thing that's going" }, { "start": 2158.1600000000003, "end": 2163.12, "text": " to be gained so hard, it's going to become irrelevant. It's basically a political game" }, { "start": 2163.12, "end": 2170, "text": " at this point. Because you cannot define benefit other than through human voting, and human" }, { "start": 2170, "end": 2178.76, "text": " voting is subject to money. And yeah, that's how politics starts. Okay, so they have, they" }, { "start": 2178.76, "end": 2186.2000000000003, "text": " have a lot of examples. So here you see sort of this network idea, they have a lot of examples," }, { "start": 2186.2000000000003, "end": 2192.96, "text": " what can be done with this, I don't want to go into into these, because this video is" }, { "start": 2192.96, "end": 2200.84, "text": " already quite long. But it's, it's a lot of talk. I just want to say that it's a lot of" }, { "start": 2200.84, "end": 2207.4, "text": " talk. And, you know, they're basically putting up everything they have done so far, and they're" }, { "start": 2207.4, "end": 2212.84, "text": " doing on the network, what they can do with the network, which is all cool, right? It," }, { "start": 2212.84, "end": 2221.92, "text": " but it's it's sort of advertising, what kind of research they do on it. And yeah, the last" }, { "start": 2221.92, "end": 2230.56, "text": " point. The last point. Yes, it's very long. So these people, for some reason, they actually," }, { "start": 2230.56, "end": 2237.1, "text": " they're like two things they love or three, there's graphs, domain specific languages" }, { "start": 2237.1, "end": 2242.12, "text": " for some reason, they love graphs and domain specific languages. So their idea of AI, it" }, { "start": 2242.12, "end": 2247.68, "text": " all revolves around kind of classic notion of AI. So there is knowledge bases, and then" }, { "start": 2247.68, "end": 2254.64, "text": " there is graphs that and you can see this reflection in singularity net, right? 
This" }, { "start": 2254.64, "end": 2261.72, "text": " idea that lots of things by themselves network together can make up a bigger AI and so on" }, { "start": 2261.72, "end": 2267.64, "text": " that it is exact reflection and exactly goes counter to like the deep learning idea of" }, { "start": 2267.64, "end": 2273.2799999999997, "text": " let's do everything end to end. So the singularity net here is very much a reflection of what" }, { "start": 2273.2799999999997, "end": 2278.3999999999996, "text": " these people think. And yeah, for some reason, they love inventing DSL for new problems." }, { "start": 2278.3999999999996, "end": 2284.9599999999996, "text": " Like why? What like, I've never understood DSL aficionados, but I guess if you are, you're" }, { "start": 2284.96, "end": 2292.7200000000003, "text": " having fun. Okay, so here they say, measuring modeling and extending singularity net. Okay," }, { "start": 2292.7200000000003, "end": 2301, "text": " so this is sort of their research on singularity net itself, which is, you know, quite a, a" }, { "start": 2301, "end": 2306.32, "text": " important thing if you build a system like this. But what I want to, I wanted to do so," }, { "start": 2306.32, "end": 2312.7200000000003, "text": " I've read through all of this kind of research suggestions and what they're doing, and they" }, { "start": 2312.72, "end": 2321.66, "text": " just make it seem great, but it's also very washy, in my opinion, and I was wondering," }, { "start": 2321.66, "end": 2328.9199999999996, "text": " is it just because it's a white paper? And I you know, there's actual good research and" }, { "start": 2328.9199999999996, "end": 2333.3199999999997, "text": " for most things, I can definitely guess you know, they're, you know, they're also the" }, { "start": 2333.3199999999997, "end": 2340.04, "text": " people behind this Sophia robot. I don't know if you you know, like this Sophia robot and" }, { "start": 2340.04, "end": 2346.16, "text": " so on. They so they have a lot of success. So precision medicine and so on. There's a" }, { "start": 2346.16, "end": 2353.92, "text": " lot of research, but some things just sounded also just washy. So here that this is something" }, { "start": 2353.92, "end": 2363.44, "text": " that made me particularly just kind of stop. So they want to measure with this phi quantity" }, { "start": 2363.44, "end": 2369.36, "text": " for measuring integrated information in complex cognitive networks. So this phi, this number" }, { "start": 2369.36, "end": 2377.32, "text": " phi by this researcher tontoni is sort of a measure fundamental measure of the level" }, { "start": 2377.32, "end": 2382.54, "text": " of consciousness. And they themselves say, you know, maybe it's net, it's not, you know," }, { "start": 2382.54, "end": 2387.04, "text": " the measure, but it's certainly an interesting measure, and so on. And they say we have experimented" }, { "start": 2387.04, "end": 2391.6800000000003, "text": " with measuring phi across time series generated by open call, by the way, open cog is from" }, { "start": 2391.6800000000003, "end": 2399.2200000000003, "text": " the same person that's one of the co founders, Ben Garth, so of singularity net, open cogs" }, { "start": 2399.22, "end": 2406.2799999999997, "text": " attention, allocation module, yada, yada, yada. 
While the while the system parsed and" }, { "start": 2406.2799999999997, "end": 2412.72, "text": " semantically analyzed a series of short documents, we have also calculated phi values while the" }, { "start": 2412.72, "end": 2418.12, "text": " open cogs system controlled the Sophia humanoid robot, as she led a person through a structured" }, { "start": 2418.12, "end": 2424.8399999999997, "text": " meditation system. So they like the extent of them describing the research is simply" }, { "start": 2424.84, "end": 2433.88, "text": " we have experimented with it. And we have measured it across time. And so I was wondering," }, { "start": 2433.88, "end": 2440.04, "text": " like, what's behind this? So I went and I read the paper that's linked there. That's" }, { "start": 2440.04, "end": 2447.44, "text": " this using tontoni phi to measure the consciousness of a cognitive system while reading and conversing." }, { "start": 2447.44, "end": 2456.4, "text": " And so this is a paper, it's quite short, but they let it read like texts from about" }, { "start": 2456.4, "end": 2462.2400000000002, "text": " different things. And they measure this phi quantity. And when you go and look first," }, { "start": 2462.2400000000002, "end": 2468.08, "text": " what's this phi quantity, this is kind of a one of these papers, it's, it's very mathematical," }, { "start": 2468.08, "end": 2472.7200000000003, "text": " actually. And there's a lot of information theory in there. So it has something to do" }, { "start": 2472.7200000000003, "end": 2476.2400000000002, "text": " with mutual information, there's a lot of ways you can calculate it, as you can see" }, { "start": 2476.24, "end": 2481.2799999999997, "text": " here on the left, and there's a lot of ways you can approximate it. So this is like a" }, { "start": 2481.2799999999997, "end": 2489.2799999999997, "text": " serious quantity. But measuring it is like super hard. And here, they let this open cogs" }, { "start": 2489.2799999999997, "end": 2498.04, "text": " system read short texts with, with respect to, as you can see here, poison and insects." }, { "start": 2498.04, "end": 2507.2, "text": " And they look where the sort of, I guess the attention, the attentional focus of the system" }, { "start": 2507.2, "end": 2515.16, "text": " rests on which of these concepts, right? And then they measure the phi over time. And their" }, { "start": 2515.16, "end": 2524.14, "text": " claim here is I was okay, we also calculated five based upon the concept nodes. No, wait" }, { "start": 2524.14, "end": 2529.68, "text": " up here. As the system ingests each sentence, word nodes corresponding to each word are" }, { "start": 2529.68, "end": 2536.44, "text": " simulated as stimulated with this system, thus triggering attentional focus dynamics" }, { "start": 2536.44, "end": 2541.56, "text": " correlated with the reading process. One goal of the study was to observe whether after" }, { "start": 2541.56, "end": 2546.08, "text": " reading documents regarding insects, then poisons attention would spread to the concept" }, { "start": 2546.08, "end": 2554.88, "text": " related to insect to insecticide. This phenomenon did occur. So they say, okay, when you read," }, { "start": 2554.88, "end": 2561.96, "text": " when you read insect and poison, after that, you got to put a focus on insecticide. 
And" }, { "start": 2561.96, "end": 2570.44, "text": " you can see so insect is blue, poison is orange, and you can see maybe the insecticide, you" }, { "start": 2570.44, "end": 2577.32, "text": " know, bumping a little bit after while you read poison. But honest, like this could also" }, { "start": 2577.32, "end": 2585.28, "text": " just be because it's associated with poison. This is, you know, I don't know that this" }, { "start": 2585.28, "end": 2591.42, "text": " is a bit interpreted a bit too much into that graph. And then what's even more astounding," }, { "start": 2591.42, "end": 2596.32, "text": " we also calculated five values based on the concept node insect, poison and insecticide" }, { "start": 2596.32, "end": 2603.6400000000003, "text": " as figure three shows, there was an interesting jump in the five value when insecticide first" }, { "start": 2603.6400000000003, "end": 2609.48, "text": " became important, suggesting that the five increase was correlated with an increased" }, { "start": 2609.48, "end": 2615.6400000000003, "text": " complexity of attentional spreading within the atom space. So the atom space and so on," }, { "start": 2615.6400000000003, "end": 2621.2400000000002, "text": " that's that's sort of this classic AI concept of knowledge bases and atoms. But here, so" }, { "start": 2621.24, "end": 2630.9199999999996, "text": " the claim is that the fire on the right somehow, somehow correlates with the insecticide attention" }, { "start": 2630.9199999999996, "end": 2636.08, "text": " on the left or with anything interesting. And that to me is a stretch. In fact, I have," }, { "start": 2636.08, "end": 2642.7599999999998, "text": " I've put the I've put these things above one another. So in the gray background here, you" }, { "start": 2642.7599999999998, "end": 2649.64, "text": " can see the five value, and I've matched up the the time steps right here. And so the" }, { "start": 2649.64, "end": 2657.96, "text": " claim is that here, insecticide marginally bumps up, and then sort of this five spike" }, { "start": 2657.96, "end": 2664.3599999999997, "text": " is here. But if you look anywhere else, like here, insecticide bumps up, okay, but much" }, { "start": 2664.3599999999997, "end": 2670.7999999999997, "text": " delayed spike, and here, it doesn't bump up at all. But there's a spike still. And it's," }, { "start": 2670.8, "end": 2680.76, "text": " it just seems, it just like that is just not a inference you can make right here. Like," }, { "start": 2680.76, "end": 2688.1200000000003, "text": " I'm not sure. Let me let me know what you think. But if you know, you can't just nah," }, { "start": 2688.1200000000003, "end": 2695.44, "text": " nah, sorry. This one, you know, this one, it was the one that that was kind of the most" }, { "start": 2695.44, "end": 2706.36, "text": " strange to me. But also, yeah, don't, don't, don't tell me that this does anything. But" }, { "start": 2706.36, "end": 2713.84, "text": " in any case, they, this is the type of research that they do. And so they measure these measure" }, { "start": 2713.84, "end": 2721.86, "text": " the intelligence of the system, and so on. Yeah. The last thing is these, what they want" }, { "start": 2721.86, "end": 2727.32, "text": " to do is this offered net economy. And you know, in researching this paper, I have also" }, { "start": 2727.32, "end": 2733.84, "text": " watched a bunch of talks from from Ben, and it seems like sprawling with ideas. 
And the" }, { "start": 2733.84, "end": 2742.56, "text": " talk about these offer nets is, is also so the idea behind it is that offer net is sort" }, { "start": 2742.56, "end": 2758.36, "text": " of an economy without money. The offer nets domain model, the other where is it? So huh," }, { "start": 2758.36, "end": 2765.08, "text": " I don't I don't remember where it said, but offer nets is like an economy without money." }, { "start": 2765.08, "end": 2772.48, "text": " So the idea behind it is okay, person A, person B, person C, or machines, they are sort of" }, { "start": 2772.48, "end": 2780.04, "text": " in an economy. And person A wants something that person B has, but B doesn't want something" }, { "start": 2780.04, "end": 2786.32, "text": " that A has. But instead, B wants something that C has, and C wants something that A has." }, { "start": 2786.32, "end": 2793.2400000000002, "text": " And the logic here is couldn't you, you know, a cannot, a cannot trade with B, B cannot" }, { "start": 2793.2400000000002, "end": 2798.44, "text": " trade with C, C cannot trade with a but they can trade in a circle, right. And this offer" }, { "start": 2798.44, "end": 2807.48, "text": " nets, they do make this possible. And so that the idea is sort of everyone puts out there" }, { "start": 2807.48, "end": 2814.96, "text": " what they want. And the offer nets, they will sort of figure out, they will figure out who" }, { "start": 2814.96, "end": 2821.16, "text": " needs to trade with whom. And thereby, you could make an economy without money, right," }, { "start": 2821.16, "end": 2834.08, "text": " without Yeah, you can make a money free economy. And is this the right paragraph? Because there" }, { "start": 2834.08, "end": 2842.16, "text": " was a fun sentence, there was a fun sentence that I've I've seen right here. So this is" }, { "start": 2842.16, "end": 2847.64, "text": " another another thing where I think that just like that the ideas they go a bit, they go" }, { "start": 2847.64, "end": 2865.48, "text": " a bit too far. offer nets analyzing the data, yada, yada, yada, open ender process. Okay," }, { "start": 2865.48, "end": 2870.24, "text": " I don't I don't know where it was. But they say something like, yeah, offer nets could" }, { "start": 2870.24, "end": 2875.5, "text": " mediate this process. And I'm, and how do they mediate this process, you know, such" }, { "start": 2875.5, "end": 2880.32, "text": " that everyone actually gets their worth of stuff that they put out, they mediate this" }, { "start": 2880.32, "end": 2887.96, "text": " process by means of the offer coin. Okay, so the offer coin is now transferred from" }, { "start": 2887.96, "end": 2894.2, "text": " B to A, or sorry, or from A to B, let's say because a wants something that B has, and" }, { "start": 2894.2, "end": 2899.44, "text": " the offer coin is transferred from B to C, and then from C to A. So the offer coin makes" }, { "start": 2899.44, "end": 2906.64, "text": " all of this happen in an economic sense. And like, huh, are you saying there is an asset" }, { "start": 2906.64, "end": 2914.7200000000003, "text": " going along with a certain service, and the asset is sort of agnostic such that you can," }, { "start": 2914.7200000000003, "end": 2921, "text": " if B gets the asset from A, B can then give the asset to C in order to obtain services" }, { "start": 2921, "end": 2927.92, "text": " from C. 
And that, you know, asset actually is what makes the whole economy work, even" }, { "start": 2927.92, "end": 2932.64, "text": " though no one directly wants to trade with each other. And you're doing all of that without" }, { "start": 2932.64, "end": 2945.28, "text": " money. That's crazy. So yeah, in any case, I think, oh, ah, there we go. Offer nets." }, { "start": 2945.28, "end": 2950.76, "text": " A decentralized economy providing an alternative to purely currency based exchanges. This economy" }, { "start": 2950.76, "end": 2954.88, "text": " features a complex network of interactions that optimizes reciprocal changes of goods" }, { "start": 2954.88, "end": 2960.2000000000003, "text": " and services by finding agents with compatible and complementary preferences and coordinating" }, { "start": 2960.2000000000003, "end": 2972.92, "text": " their interactions dot dot dot by means of a coin, which is money. That's this is exactly" }, { "start": 2972.92, "end": 2979.28, "text": " what money does. Like that. That's what money is for. In any case, I'm like this. These" }, { "start": 2979.28, "end": 2985.32, "text": " people are very smart, and I'm probably too dumb to see what the exact difference is right" }, { "start": 2985.32, "end": 2992.1200000000003, "text": " here. So I just found it funny. If you know, if I'm completely wrong, then let it be stated" }, { "start": 2992.1200000000003, "end": 2999.36, "text": " that you know, that's what a semi only semi smart person would conclude from reading these" }, { "start": 2999.36, "end": 3008.1600000000003, "text": " things. All right, this was lengthy. But I hope you sort of got the idea. The base system" }, { "start": 3008.16, "end": 3017.44, "text": " is an a an API marketplace. Now the API marketplace in itself doesn't have anything to do with" }, { "start": 3017.44, "end": 3028.72, "text": " AI necessarily. But I've made the case that the API marketplace only makes sense in the" }, { "start": 3028.72, "end": 3034.3199999999997, "text": " in the world of AI, because if it was regular software, you would just hard code either" }, { "start": 3034.32, "end": 3040, "text": " the API calls or you would actually include the library. So the marketplace makes sense" }, { "start": 3040, "end": 3046.8, "text": " in the realm of AI. Okay, it's doubtable whether that's actually the case. It very much goes" }, { "start": 3046.8, "end": 3054.2000000000003, "text": " against the end to end principle, it bets on a form of AI that works on discrete graphs," }, { "start": 3054.2000000000003, "end": 3061.7200000000003, "text": " it works on sub components divided into sub components, it works on networks, networks" }, { "start": 3061.72, "end": 3067.12, "text": " built together to achieve higher order functions, it could definitely be that the future of" }, { "start": 3067.12, "end": 3072.3599999999997, "text": " AI lies in this direction. It's just that the current direction is pointing away from" }, { "start": 3072.3599999999997, "end": 3080.72, "text": " that. The whole marketplace runs in on the blockchain, and only the marketplace so the" }, { "start": 3080.72, "end": 3088.12, "text": " AI processing is off chain. So it is not a on blockchain AI. And yeah, they've built" }, { "start": 3088.12, "end": 3094.04, "text": " it and they are in money problems. Currently, they're inflating the currency. 
But they're" }, { "start": 3094.04, "end": 3099.7599999999998, "text": " switching blockchains, because they think the new blockchain will be better and faster." }, { "start": 3099.7599999999998, "end": 3104.7999999999997, "text": " And they project high growth and the token is actually active. So it's you know, it's" }, { "start": 3104.7999999999997, "end": 3111.12, "text": " not a dead project. And they are in the news quite a bit, especially with this this Sophia" }, { "start": 3111.12, "end": 3118.88, "text": " robot, I think that is that is a very it's a kind of a PR magnet. Alright, that was what" }, { "start": 3118.88, "end": 3124.96, "text": " I had to say. I hope you enjoyed it. If you did share it out. Let me know what you think" }, { "start": 3124.96, "end": 3141.6, "text": " in the comments. Let me know what I did wrong. And bye bye." } ]
LMb5tvW-UoQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Discovering Symbolic Models from Deep Learning with Inductive Biases (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "graph networks", "graph neural networks", "gnn", "physics", "newtonian", "hamiltonian", "dynamics", "cosmology", "dark matter", "symbolic regression", "edge", "vertex", "regularization" ]
Neural networks are very good at predicting systems' numerical outputs, but not very good at deriving the discrete symbolic equations that govern many physical systems. This paper combines Graph Networks with symbolic regression and shows that the strong inductive biases of these models can be used to derive accurate symbolic equations from observation data. OUTLINE: 0:00 - Intro & Outline 1:10 - Problem Statement 4:25 - Symbolic Regression 6:40 - Graph Neural Networks 12:05 - Inductive Biases for Physics 15:15 - How Graph Networks compute outputs 23:10 - Loss Backpropagation 24:30 - Graph Network Recap 26:10 - Analogies of GN to Newtonian Mechanics 28:40 - From Graph Network to Equation 33:50 - L1 Regularization of Edge Messages 40:10 - Newtonian Dynamics Example 43:10 - Cosmology Example 44:45 - Conclusions & Appendix Paper: https://arxiv.org/abs/2006.11287 Code: https://github.com/MilesCranmer/symbolic_deep_learning Abstract: We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs). The technique works as follows: we first encourage sparse latent representations when we train a GNN in a supervised setting, then we apply symbolic regression to components of the learned model to extract explicit physical relations. We find the correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example-a detailed dark matter simulation-and discover a new analytic formula which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn. Authors: Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, Shirley Ho Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Discovering Symbolic Models from Deep Learning with Inductive Biases by Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel and Shirley Ho. On a high level, this paper uses graph neural networks to fit a dataset of observations of a physical system, and then uses symbolic regression to parse symbolic equations out of the graph neural network, equations that describe the physical system. They recover some known equations, and they find a new one in the field of cosmology. So we'll go through how they do it, what the two steps are, and why this might work better than previous approaches. If you like content like this, as always, feel free to share it, subscribe if you haven't, and tell me what you think in the comments. All right, so they claim: we develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. The abstract doesn't say a whole lot by itself, so let me give you an example. Say you have three planets or stars. This three-body problem is, I think, still unsolved. If you have these three stars and just let the simulation run, they have gravity, they attract each other, so they're going to move around somehow: this one moves here, this one moves like this, then it turns around, this one turns around, and so on. So fairly complex motion arises already from three things that are in a physical system together. And this goes beyond stars: you have such systems, for example, when the particles are atoms and there's an electromagnetic force between them, or the strong force, or systems where springs are attached to them, and so on. Our goal is to derive equations that govern this behavior. In the case of gravity, we know that these objects pull on each other with a force proportional to the mass of the first times the mass of the second, divided by the square of the distance between them, times the gravitational constant. We know the equation that governs these interactions. We don't know the symbolic solution to the whole problem, but we know the equation that governs the pairwise interaction. Now imagine we didn't know the equation. What would we have to do? Well, what did Newton do? Ultimately, he sat down and came up with an equation that seemed okay to him, and then found out that the equation actually does predict very accurately how the things move. We're going to try to replicate that process in an AI system: the process of coming up with an equation that governs this behavior. So what we have is a dataset. As I said, we let this stuff run: we let it run for one time step, and then this is here, maybe this is here and this is here, and then the next time step, this goes here, this goes here, this goes here, and so on. That gives us, frame by frame, how the system evolves. And that gives us a dataset.
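To make this concrete, here is a minimal sketch of how such a frame-by-frame dataset could be generated for the gravity case. This is not the authors' code; all names are hypothetical, the units are simplified, and the integration is deliberately naive:

    import numpy as np

    G = 1.0  # gravitational constant, in units where G = 1

    def pairwise_gravity(pos, mass):
        # pos: (n, 2) positions, mass: (n,) masses
        # returns the (n, 2) net force on each particle as a sum of pairwise pulls
        n = len(mass)
        force = np.zeros_like(pos)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r_vec = pos[j] - pos[i]
                r = np.linalg.norm(r_vec)
                # magnitude G * m_i * m_j / r^2, directed from i towards j
                force[i] += G * mass[i] * mass[j] * r_vec / r**3
        return force

    def simulate(pos, vel, mass, dt=1e-3, steps=1000):
        # naive Euler integration; each step is one "frame" of the dataset
        frames = []
        for _ in range(steps):
            acc = pairwise_gravity(pos, mass) / mass[:, None]
            vel = vel + dt * acc
            pos = pos + dt * vel
            frames.append((pos.copy(), vel.copy(), acc.copy()))
        return frames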
So if we let it run, maybe restarting a couple of times with different initializations, we get a dataset. Our goal is to take that dataset and come up with an equation, like m1 times m2 divided by r squared, that governs this behavior. Now, previous approaches have resorted to what's called symbolic regression, and that is basically pretty simple. Namely, you provide the system with a bunch of options. You tell it: I have a list, and the list can include the mass of the first object, the mass of the second, the x and y positions of the things, the delta x and delta y, which basically means the speeds of the objects, any constants a and b that you want, and the symbols plus, minus, division, multiplication, square, maybe the exponential function, and so on. So you give it a bunch of options of what it could potentially use in an equation, and then you simply let it construct equations and see how well those equations describe the dataset. You can do that naively, by just searching and trying things out, or you can be a little smarter about it and use evolutionary methods. So you start with some equation, say x plus delta x minus a, squared, you see how well that describes the dataset, and you find: not very well. Then you make a small mutation, mutate the plus to a minus, and so on. If you do this with an entire population, as is common in evolutionary methods, you end up with something better at the end. Now, this works up to a point. Whenever the space of things to explore gets larger, and it doesn't have to be super large to already exhaust the capabilities of these methods, they stop working. These methods are very limited in the space they can search, and have proven not really effective so far for this type of problem. This paper goes a different route. It uses graph neural networks to describe the dataset. So in between the step of collecting a dataset and making the equation, it fits another step: we have a graph neural network learn the dataset first. You don't have to know yet exactly what that is; it's a technique, a type of neural network. Now, as you know, neural networks can't do symbolic regression; they can't give you an equation, they can only predict numbers. So the network will simply predict, say, the motions or the accelerations, whatever you're interested in, as numbers, not as equations: you can plug in this situation right here, and it will tell you how the things will move. Neural networks are pretty good at that. And once you have a graph neural network that can describe the system numerically, you parse the equations out of that graph neural network. We'll go over why that is much, much easier than parsing the equations directly out of the physical system: it's because you engineer the graph neural network in a way that makes it very congruent with physical reality, and that makes the job of the evolutionary method much easier.
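Before moving on, a toy illustration of what the symbolic-regression step does. A real system would mutate and recombine expression trees; this sketch only scores a tiny fixed pool of hypothetical candidate expressions against observations:

    # data: iterable of (m1, m2, r, observed_force_magnitude) tuples
    CANDIDATES = ["m1 * m2 / r**2", "m1 + m2 - r", "m1 * m2 / r", "m1 * m2 / r**3"]

    def fit_error(expr, data):
        # squared error of a candidate expression on the dataset
        err = 0.0
        for m1, m2, r, f in data:
            pred = eval(expr, {"m1": m1, "m2": m2, "r": r})
            err += (pred - f) ** 2
        return err

    def best_candidate(data):
        # pick the expression that describes the observations best;
        # an evolutionary method would instead mutate the winners and iterate
        return min(CANDIDATES, key=lambda e: fit_error(e, data))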
All right, so that's the two-step process: first, numerically regress a neural network to describe the system, and second, parse the equations out of that neural network. So we have to talk about graph neural networks. Here you see the entire process as they describe it. They have this dataset of observations of physical systems, like any dataset you have in machine learning. They predict the dynamics, meaning numerically, with a graph neural network, and then from the graph neural network they extract the symbolic equation, as you can see right here. And this here is going to be the equation they figure out that was previously unknown; they even call it an unknown dark matter overdensity equation. Cool. So, graph neural networks. We haven't really covered these on this channel so far, and I'm not a big expert on graph neural networks, but they come in all shapes and forms. In this particular paper, they use a type of interaction network called a graph network. A graph network is something different from a graph neural network; I think a graph network is a type of graph neural network. A graph neural network has things called vertices, and edges that connect vertices, like in a graph. Now we're going to build this graph neural network such that the number of vertices is exactly the number of particles in our system. In this paper, they consider systems with, I believe, four or eight particles. That's already a lot if you want to derive equations, though of course the physical world is made of many more particles. In any case, let's say four particles. So they build a graph neural network with four vertices, one for each particle. In a graph neural network, every vertex can have properties, and the properties of each vertex here are the properties of that particle. Let's say we're in two dimensions, it's a two-dimensional problem: the x coordinate, the y coordinate, the delta x and delta y, the mass, the charge; there's a lot of stuff we can put here. The other component of a graph is, of course, the edges. Each edge connects two vertices, and in this particular type of graph network we consider graphs where every particle is connected to every other particle. So it's not a sparse graph, except, I think, in the cosmology example, where each node is connected to all its neighbors. But in the Newtonian dynamics graph networks, as you can see right here, everything is connected to everything.
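A minimal sketch of constructing such a fully connected particle graph (the feature layout and particle count are just for illustration):

    import torch

    n = 4  # four particles
    # one row per vertex: [x, y, dx, dy, mass, charge]
    nodes = torch.randn(n, 6)

    # fully connected directed edge list, no self-loops
    senders, receivers = [], []
    for i in range(n):
        for j in range(n):
            if i != j:
                senders.append(i)    # edge carries information from particle i ...
                receivers.append(j)  # ... to particle j
    senders = torch.tensor(senders)
    receivers = torch.tensor(receivers)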
And why does that fully connected structure represent a physical system really well? The reason is the following. In physical systems, if I want to, for example, consider this node up here and ask how it is pulled by gravity by the other nodes, it's pulled in this direction a little bit because of this particle, in this direction a little bit because of that one, and in this direction because of that one. Note that these three contributions are independent. So if I want to describe the total force of gravity, I can do so as a sum over i = 1, 2, 3 of the force that particle i exerts, so if this is j right here, of how i pulls on j. It's an independent sum across all the neighbors of that particle. Now you might say: wait a minute, it's not independent, because the particle isn't strictly pulled in this one direction, it's also pulled in the others. Yes, but by independent we mean that this force right here depends only on this particle, and the force on the diagonal depends only on that particle. There is no part of the particle up here that modulates this force over there. So you can calculate the total force as an independent sum over the individual pairwise forces. That's the simplification, and it's part of why, they claim, current approaches that try to find an equation for the whole system directly from the dataset with evolutionary methods don't really work: the space of equations is just too big. This is a massive constraint. And it's lucky, first of all, because most physical systems, they say, actually obey that constraint: most physical systems can be described as an independent sum over contributions of interactions between just two things. Summing over pairwise interactions is way simpler than considering everything at once. And second of all, it's lucky because graph networks describe exactly this: each edge in the graph network connects exactly two things, not more. The edges don't know about each other; no edge knows about any other edge, each one only considers the particles at its two ends. That is exactly the physical constraint on the system, and that's why graph networks are so well adapted to describing these systems. So how does a graph network like this actually do anything? For that, consider the task. If we frame it as a machine learning problem, it could be: I give you these particles, here it's five particles, and for each one I give you all its features, like the x, the y, the current speed, and the mass, and you tell me the acceleration in the next frame, considering all the interactions between the particles. That sounds like a machine learning problem, and the graph neural network can be made to predict it. So what we want, for each vertex, is an output: a number or a vector, the acceleration. How do we compute an output for each vertex? In this particular type of graph neural network, there are three steps.
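Before the three steps, here is that assumption written compactly, as a sketch for the gravity case; the key point is that each term F_ij depends on only two particles, which is the independence:

    F_j = \sum_{i \neq j} F_{ij}, \qquad
    F_{ij} = G \, \frac{m_i m_j}{r_{ij}^2} \, \hat{r}_{ij}

Here r̂_ij is the unit vector pointing from particle j towards particle i, so each summand is computable from the pair (i, j) alone.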
We said each vertex has these properties, like x, y, and so on; let's walk through it for one vertex, say the one on the bottom right. First, we go over the edges. For each edge, in parallel and independently from the others, say this edge right here, we take the two nodes attached to it and combine their features. So this node has x, y, and so on, and that node also has x, y, and so on, and we combine the two to compute the edge. Now, in a physical system, what does the edge represent? The edge represents the force between the two particles, and that's a fairly complex equation; it's not like we can just add the features or something like that. So the edge already needs to compute some nonlinear, complicated function, and we know how to compute nonlinear, complicated functions with neural networks; we're in deep learning here. So the edge computes what's called the edge function. The edge function takes in two vertices, v1 and v2, that is, the features of the two vertices, and computes a so-called edge message, I think they call it e_k for edge k. The edge message is supposed to represent the force that acts between the two particles, and we approximate this function with a neural network, since we don't know the equation yet. We assume we don't know the gravitational equation, but we can learn it, because we have data. So we make the edge function a neural network: the features of both endpoints go in, we can concatenate them, and out comes the edge message. The edge message is simply a numerical vector describing some intermediate hidden state. It's ultimately going to describe the force, but for now it's just an intermediate hidden state. We do this for each edge: maybe this is e1, this is e2, e3, e4. Each edge, in parallel, aggregates the information of its endpoints into the edge message. That's step one: compute the edge messages. Step two is to compute the vertex messages, or vertex outputs. We said we're not actually interested in the edges; we're interested in each vertex ending up with an acceleration as output. How do we do this? Consider the graph again. To compute the output for this node right here, we simply aggregate all the edge messages of the edges that connect to that vertex. Previously we computed the edge messages by integrating the information from the attached endpoints; now we go backwards and distribute the information from the edges back to the attached vertices. You can see already that this two-step process is a kind of message passing, if you've ever studied graphical models: the vertex aggregates information from the other vertices via the edges. So in this case, this vertex takes in all the edge messages right here and aggregates them with a function that computes the acceleration. So our estimate of the acceleration is some function, call it nu, of the edges attached to it: e1, e2, and e3.
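Picking up step one in code, the shared edge model could look like this; a sketch continuing the graph construction above, with arbitrary layer sizes, not the paper's exact architecture:

    import torch.nn as nn

    MSG_DIM = 100  # dimensionality of the edge messages, a free choice

    # phi_e: concatenated features of the two endpoint vertices -> edge message
    edge_model = nn.Sequential(
        nn.Linear(2 * 6, 128), nn.ReLU(),
        nn.Linear(128, MSG_DIM),
    )

    # one shared network, applied to every edge in parallel
    messages = edge_model(torch.cat([nodes[senders], nodes[receivers]], dim=-1))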
And here is where we make our next physical assumption, namely the one from before: the way the edges influence the vertex is an independent sum. This simplification means the function should not be a function of the individual edges, but of the sum of the edges, a sum over the e_i. This sum is the simplification we make to stay in accordance with the physical system. With a general graph network we could do any sort of complicated thing here; we could put a transformer on these messages and compute twelve layers of interaction effects between the edges. We're not going to do that. We simply sum them up and run the result through a function. Of course, that function is still going to be complex, because just summing up the forces doesn't give you the acceleration yet: force is mass times acceleration, so acceleration equals force divided by mass. So this function takes the sum over the edge messages and still needs to divide by the mass, and technically it could do much more complicated things; we only require that the edges come in as a sum. Since it can be any complicated function of its input, it should also be a neural network. So we take the sum of the edge messages, put it into a second neural network, and out comes our estimate of the acceleration. And now, from the dataset, we know the true acceleration: we have the observations and the labels, and the labels are the true accelerations of the system we observed. So we can compute a loss function. Everything we've done so far is differentiable. From the loss that compares the network's output for a vertex to the true acceleration in the dataset, we can backpropagate through the neural network that computes the vertex function, through the sum, to the edge messages, and through the edge messages to the neural network that computed them from the features. Everything is differentiable, so with this loss at the end we can train the whole thing end to end to predict the numerical acceleration of the system from the observation. That was a fairly lengthy way, but it's important that you understand what's happening. You build the graph network according to the physical system, and in the graph network there are two kinds of things. First, deterministic things, like the fact that we always aggregate with a sum. And then things you learn, namely two neural networks: the first computes the edge messages from the features of the vertices, and the second computes the output of each vertex from the sum of the edge messages attached to it. Now you can say: wait a minute, there are more than two neural networks; each edge has a neural network, technically, and each vertex has a neural network. But these neural networks are shared.
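And step two in code, again as a hedged sketch continuing the snippets above; the true accelerations would come from the simulation dataset, so a random stand-in is used here:

    # sum the incoming messages per receiving vertex: the physical prior that
    # contributions combine as an independent sum
    summed = torch.zeros(n, MSG_DIM).index_add_(0, receivers, messages)

    # phi_v: summed messages plus the vertex's own features -> acceleration
    node_model = nn.Sequential(
        nn.Linear(MSG_DIM + 6, 128), nn.ReLU(),
        nn.Linear(128, 2),  # 2-D acceleration
    )
    pred_acc = node_model(torch.cat([summed, nodes], dim=-1))

    true_acc = torch.randn(n, 2)  # stand-in for the labels from the dataset
    loss = ((pred_acc - true_acc) ** 2).mean()
    loss.backward()  # gradients flow through the sum into both shared networks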
So the neural network that computes the edge message for one edge is the same network that computes the edge message for any other edge. You can think of it as weight sharing, or you can think of it as literally the same neural network; it's equivalent. The same holds for the vertices: there's only one neural network that computes the output for each vertex in the same fashion. Of course, the incoming edge messages differ, and that's why the outputs differ, but the network itself is the same. Okay, so we have a system, this graph neural network, that can describe this dataset of physical observations really well, and we train it end to end. Here is a little analogy they give for how the neural network maps onto a physical system. The nodes in the graph network correspond to the particles in Newtonian mechanics. Pairs of nodes correspond to two interacting particles. The edge model is the force between two particles. The pooling operation, the summing up of the edge messages that we found so important as a simplification, corresponds to summing into the net force given in the physical system: a sum of independent forces without interaction effects. Then, concatenate with node: I left this out before, but when you compute the vertex outputs, you don't want to input only the edge messages; each vertex also has its own features, and those can be fairly important. Technically that information is already in the edge messages, since they were computed from these features, but you can also just input it again into the vertex network together with the aggregated messages, and that makes its job a bit easier: for example, we have to divide by the mass in this function, and that's just easier if you provide the mass as a property. That's a little detail I left out before. So you concatenate the aggregated edge messages with the node, then you compute the node model, which in this physical analogy is simply taking the sum and dividing by the mass. Optionally, you can then update the nodes, that is, compute the next time step, which we don't do here because we simply output the acceleration. I guess it should be equivalent to output the next time step and compare that with the dataset; in any case, you need some kind of loss function. All the black squares right here are going to be neural networks. So now we have learned a graph network that can describe a system. How do we turn it into an equation? Again, the independence assumptions of physical reality come in, because in physics the acceleration is a function of the sum, and so on. So we don't need to develop an equation for the entire system; we only need to develop an equation per vertex. Each vertex needs an equation, acceleration equals something, where that something includes a sum over the edges, and the edges in turn are given by some expression.
So, since we had two neural networks, we technically need two symbolic equations: one representing the first neural network, which computes the edge function, and one representing the second, which computes the output from the sum of the edge functions. It's an exact correspondence. So we take the first neural network and do symbolic regression on it, and we take the second neural network and do symbolic regression on that. What does it mean to do symbolic regression? It means we want to find the symbolic equation that describes the neural network best, and we do that in exactly the fashion we started with: we give the system a bunch of building-block options and let it describe the neural network as well as possible. Again, we try out equations: we run the neural network on the dataset and run the candidate equation on the same inputs, and if the equation outputs the same thing, it describes the neural network well. We iterate until we find a good equation. The difference is that we don't need an equation that governs the whole system, just two equations, one for the edge model and one for the vertex model, and that's way, way easier than the whole system. And having found those two equations, given our physical assumptions, we get the equation for the whole system by simply composing them. All right, that's the entire system. I believe I've now told you the whole paper without actually going into it, so let's skim the paper a bit to see that they tell us the same thing. They say graph networks are an ideal candidate for their approach due to inductive biases shared by many physics problems: (a) they're equivariant under particle permutations; (b) they are differentiable end to end and can be trained efficiently using gradient descent; and (c) they make use of three separate and interpretable internal functions, the edge, the node and the global model, which are the targets for the symbolic regression. The global model isn't really used in the cases we're going to look at, so it's just two different neural networks. Graph networks can also be embedded with additional symmetries, as in [23, 24], but they don't implement these. Okay, then symbolic regression. They use the Eureqa package to perform symbolic regression and fit compact closed-form analytical expressions to these neural networks. Eureqa works by using a genetic algorithm to combine algebraic expressions stochastically. The technique is analogous to natural selection, where the fitness of each expression is defined in terms of simplicity and accuracy. The operations considered in the fitting process are plus, minus, times, if, as well as real constants. Looking at the examples, they have three. First, Newtonian dynamics, which is, for example, the gravitational force we looked at. Second, Hamiltonian dynamics, which describes the same systems but in a different way, in terms of the Hamiltonian; I don't want to go into this too much, because I think the Newtonian dynamics already demonstrate really well what the system can do.
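On tooling: the paper uses the commercial Eureqa package; as a rough open-source sketch in the same spirit, here is how this step could look with PySR, a later package by the paper's first author. The arrays below are stand-ins for data you would actually record from the trained edge model:

    import numpy as np
    from pysr import PySRRegressor

    X = np.random.randn(1000, 12)  # stand-in: concatenated (v1, v2) features
    y = np.random.randn(1000)      # stand-in: one significant message component

    model = PySRRegressor(
        binary_operators=["+", "-", "*", "/"],
        niterations=40,
    )
    model.fit(X, y)  # genetic search over symbolic expressions
    # then repeat for the node model: summed messages (+ node features) -> output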
And then they have dark matter halos for cosmology, a problem where you have universe simulators and you try to predict where the dark matter is, depending on where other dark matter is, and that's where they find the new, previously unknown equation. Okay, here is the system in a nutshell. This is the path you know: you have the dataset, you learn a graph network, and then you get out an equation. But in between, you can add even more constraints to make the network really learn a physical equation. As I said, you compute these edge functions, and the output of an edge function is the edge message, which is just a vector of some sort, and that vector can be pretty large; it's a hidden dimension you can choose as an implementer. All you need to make sure is that the output of the vertex model has the dimension your output should have; everything internal you can choose. Now, we know that in a 2D system, for example, the actual informational content of the edge message should be two-dimensional: if it really describes the force in two dimensions, there's no reason for it to have a higher dimension, since all the relevant information can be described in two dimensions. So one thing you can do is simply choose the hidden dimension to be two, and thereby force the neural network to use only two dimensions. This, however, they notice doesn't work super well; I think it works, but not that well. They call this the bottleneck model. The reason it doesn't work super well is that with a hard constraint like this, neural networks don't tend to learn very well, which is what they hypothesize in the paper too: the network doesn't cope well with having only two floating point numbers to learn anything. And this is probably more a property of the optimization procedure than of the problem itself; a property of us training neural networks with SGD. So what they do instead is put an L1 penalty on the edge messages: they apply L1 regularization, which induces sparsity in whatever you apply it to. L1 regularization means you constrain the sum of the absolute values of the entries of the edge message to be small; you just add this to the loss function, and it induces sparsity in the edge messages. So now the network still has its, say, hundred latent dimensions, but it is encouraged to use as few as possible. It can use a lot at the beginning, when it really benefits from lots of dimensions while learning the system, but then as it gets better and better, it can shift most of the information into very few dimensions. Once we do that, we can run a check: if the graph network has really learned the physical dynamics of the system, we can look at the two dimensions with the largest standard deviation, that is, the two least sparse ones, and say: even though we didn't hard-constrain the model, those two should describe our force pretty well.
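Relative to the training sketch above, that penalty is essentially a one-line change; the weight 1e-2 is an arbitrary hyperparameter, not the paper's value:

    mse = ((pred_acc - true_acc) ** 2).mean()
    l1 = messages.abs().mean()   # L1 penalty on the edge messages
    loss = mse + 1e-2 * l1       # encourages sparse, low-dimensional messages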
And since in Newtonian dynamics we know what the force is, we can simply check whether that holds: can we read out the force from these two components? Now, you can't guarantee that the network represents the force in exactly this form, because there are many ways to state a physical equation. There are many symmetries in physics, and we cannot make the neural network express the equation exactly as humans would, because there are infinitely many equivalent formulations; in this case, though, they're all covered by rotations of each other. That means, in these plots, relating the message elements to a linear combination of the force components: a linear relationship basically means the information is there, whereas a nonlinear relationship would mean the numbers don't really encode the force as such. And here you can pretty clearly see that the relationship is linear, which means the first two message dimensions really do encode the force in the way we know the equation. So that's when we know the equation: we simply check whether it fits. When we don't know the equation, we use symbolic regression, and what comes out is exactly this thing right here. Now you might object that this isn't literally the force, but as I said, there are many, many symmetries. For example, this r-hat right here, I believe, and I'm not a big physics person, is the vector of the delta x and delta y. So delta x and delta y appear in this r-hat, and this already looks okay. Actually, if we go down, it gets even clearer. So here they show the outputs for the spring example, a system where the particles are connected by springs, with L1 regularization. What we expect is this equation, which we know holds in the spring system, and what the neural network combined with symbolic regression gives us is this equation. You can see there's this delta vector in an inner product, a dot product, with a, which is a numerical constant, and the recovered equation has the same form of product with numerical constants. For example, here the coefficients of the delta components are 1.36 and 1.37, essentially the same number, matching the symmetric form. But then you see, for example, an (r minus one) here, while there it's something like minus something divided by r; that doesn't seem the same. Again, though, due to the symmetries, if you take this and simply divide everything by r, you end up with a times (delta x, delta y) times (one minus one over r), plus b. And now it already looks very similar; it's only a transformation away from what we want. That's why I said you can write these equations in many equivalent ways, and we can't ask the neural network to figure out exactly the one we want. As long as it figures out an equivalent one, we're happy, and I guess we're pretty happy here.
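A sketch of that linearity check, assuming you have the learned messages and the known true pairwise force for each edge; the force array below is a random stand-in:

    import numpy as np

    msgs = messages.detach().numpy()          # (num_edges, MSG_DIM)
    top2 = np.argsort(msgs.std(axis=0))[-2:]  # two highest-variance dimensions
    M = msgs[:, top2]

    F = np.random.randn(msgs.shape[0], 2)     # stand-in: known pairwise forces
    # fit F ~ M @ W + b; a small residual (a good linear fit) means the two
    # message dimensions encode the force up to a linear transformation
    A = np.hstack([M, np.ones((len(M), 1))])
    W, *_ = np.linalg.lstsq(A, F, rcond=None)
    residual = ((A @ W - F) ** 2).mean()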
Also in this case right here, you can see that it correctly recovers the relationship that the force should be divided by r to the third power, with a delta x, delta y, delta z, where delta z, I guess, simply gets a factor of zero. It even handles the discontinuous problem where the force cuts off after a certain distance: it can parse out that if-condition. That, to me, is a pretty cool result: you can actually parse out these equations with just the graph networks and then symbolic regression. They do the same thing for the cosmology example, where they have these simulators of the universe that distribute the dark matter, and I guess the task is: given a bunch of these points, tell me where the other dark matter is, something like that. I don't fully understand the cosmology, but in essence it is the same kind of problem: you want to figure out the dark matter properties from the surrounding dark matter, or from properties of other things. And again, here you can see pretty well the equation they get out: the output for node i is a sum over all the other nodes j of some function, and then some function of that sum. The inner part is the equation that came out of the edge model, the edge neural network, and the outer part, the one that includes this sum, came out of the vertex model. It's the same as in the spring law: this came out of the edge model, this came out of the vertex model. Again, this rests on the fact that physical systems can often be described as sums of independent pairwise interactions, and that's why all of this works. They give very detailed instructions on how they did everything. I think the most unclear things in this paper are the physics parts that are assumed known, which I didn't know; other than that, it's pretty straightforward. Their appendix is also pretty detailed on how they do all the representations and so on. They have formulations other than the L1 regularization: as I said, they have the bottleneck, and they have a KL formulation, and they really describe how the graph neural network works. So all in all, I enjoyed reading this paper. Here is a bunch of examples of these particle systems, and a bunch of examples of linear and nonlinear relationships: a linear relationship lets you say, look, this really describes the force, because it indicates that what the network found is a rotation of what you really want, which is good because it's equivalent; a nonlinear relationship basically means it doesn't really describe what you want well. And I'm going to leave you with that. I absolutely invite you to check out the code and the video they made about it, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 5.28, "text": " Hi there, today we're looking at discovering symbolic models from deep learning with inductive" }, { "start": 5.28, "end": 11.98, "text": " biases by Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Pitalia, Rui Xu, Kyle Cranmer, David" }, { "start": 11.98, "end": 14.5, "text": " Spurgill and Shirley Ho." }, { "start": 14.5, "end": 21.62, "text": " So this paper on a high level, it uses graph neural networks to fit a dataset of observations" }, { "start": 21.62, "end": 23.76, "text": " of a physical system." }, { "start": 23.76, "end": 30.64, "text": " And then it uses symbolic regression in order to parse out equations, symbolic equations" }, { "start": 30.64, "end": 32.620000000000005, "text": " from the graph neural network." }, { "start": 32.620000000000005, "end": 39.08, "text": " And the symbolic equations that will are found such they describe the physical system." }, { "start": 39.08, "end": 45.68000000000001, "text": " And they do find, they do recover some known equations and they do find a new one in the" }, { "start": 45.68000000000001, "end": 48.3, "text": " field of cosmology." }, { "start": 48.3, "end": 54.82, "text": " So we'll go through how they do it, what these two steps are and why this might work better" }, { "start": 54.82, "end": 56.72, "text": " than previous approaches." }, { "start": 56.72, "end": 59.4, "text": " So yeah, join me." }, { "start": 59.4, "end": 65.16, "text": " If you like content like this, as always, feel free to share it, subscribe if you haven't," }, { "start": 65.16, "end": 70.67999999999999, "text": " if you want more content like this, and tell me what you think in the comments." }, { "start": 70.68, "end": 78.4, "text": " All right, so they claim we develop a general approach to distill symbolic representation" }, { "start": 78.4, "end": 82.32000000000001, "text": " of a learned deep model by introducing strong inductive biases." }, { "start": 82.32000000000001, "end": 90.36000000000001, "text": " And this, it doesn't really say a whole lot, but I think the abstract doesn't say a whole" }, { "start": 90.36000000000001, "end": 91.36000000000001, "text": " lot." }, { "start": 91.36000000000001, "end": 93.04, "text": " So let me give you an example." }, { "start": 93.04, "end": 98.96000000000001, "text": " If you have three different, let's say planets or stars, right?" }, { "start": 98.96, "end": 105.39999999999999, "text": " This is a, this three body problem is a unsolved problem, I think still." }, { "start": 105.39999999999999, "end": 110.75999999999999, "text": " So if you have these three stars, and you just let the simulation run, they have gravity," }, { "start": 110.75999999999999, "end": 111.96, "text": " they attract each other, right?" }, { "start": 111.96, "end": 114.38, "text": " So they are going to move around somehow." }, { "start": 114.38, "end": 117.6, "text": " So this one's going to move here, this one's going to move like this, this one's going" }, { "start": 117.6, "end": 122.13999999999999, "text": " to move like this, and then it turns around and this one turns around and so on." }, { "start": 122.14, "end": 129.56, "text": " So there is a fairly complex motions in already three different things that are somehow in" }, { "start": 129.56, "end": 131.96, "text": " a physical system together." }, { "start": 131.96, "end": 134.66, "text": " This is a bigger problem than just stars." 
}, { "start": 134.66, "end": 140.42000000000002, "text": " So you have these systems, for example, when these are atoms and there is like an electromagnetic" }, { "start": 140.42000000000002, "end": 143.96, "text": " force between them or the strong force." }, { "start": 143.96, "end": 148.44, "text": " There can be, these can be things where springs are attached to them and so on." }, { "start": 148.44, "end": 154.66, "text": " So our goal is to derive equations that govern this behavior, right?" }, { "start": 154.66, "end": 161.32, "text": " In the case of gravity, we know that these objects sort of pull on each other with the" }, { "start": 161.32, "end": 166.44, "text": " force proportional to something like the mass of the first times the mass of the second" }, { "start": 166.44, "end": 172.32, "text": " divided by the radius that they are a part squared." }, { "start": 172.32, "end": 176.36, "text": " Something like this times like this gravitational constant." }, { "start": 176.36, "end": 178.88000000000002, "text": " We know the equation that governs these interactions." }, { "start": 178.88000000000002, "end": 182.92000000000002, "text": " We don't know the symbolic solution to the whole problem, but we know the equation that" }, { "start": 182.92000000000002, "end": 186.44000000000003, "text": " governs the interaction, right?" }, { "start": 186.44000000000003, "end": 189.96, "text": " Now imagine if we didn't know the equation, what do we have to do?" }, { "start": 189.96, "end": 191.8, "text": " Well, what did Newton do?" }, { "start": 191.8, "end": 199.22000000000003, "text": " Ultimately, he sat down and just came up with an equation that seemed okay to him and then" }, { "start": 199.22, "end": 207.84, "text": " found out that the equation actually does predict very accurately how the things move." }, { "start": 207.84, "end": 214.38, "text": " So we're going to try to replicate that process in an AI system, the process of coming up" }, { "start": 214.38, "end": 217.42, "text": " with an equation that governs this behavior." }, { "start": 217.42, "end": 219.68, "text": " So what we have is a data set." }, { "start": 219.68, "end": 221.52, "text": " As I said, we let this stuff run." }, { "start": 221.52, "end": 225.62, "text": " So we let it run for one time step and then this is here, maybe this is here and this" }, { "start": 225.62, "end": 227.24, "text": " is here, okay?" }, { "start": 227.24, "end": 229.42000000000002, "text": " And then we let it run for the next time step." }, { "start": 229.42000000000002, "end": 233.52, "text": " This goes here, this goes here, this goes here and so on." }, { "start": 233.52, "end": 240.68, "text": " So that will give us, basically it will give us frame by frame how this system evolves." }, { "start": 240.68, "end": 242.52, "text": " Frame by frame." }, { "start": 242.52, "end": 244.62, "text": " And that will give us a data set." }, { "start": 244.62, "end": 250, "text": " So this right here, if we let it run and maybe we restarted a couple of times with different" }, { "start": 250, "end": 253.68, "text": " initializations, we let it run, we get a data set." }, { "start": 253.68, "end": 255.78, "text": " So now we have a data set, right?" }, { "start": 255.78, "end": 263.08, "text": " So our goal is to be to take that data set and come up with an equation like m1, m2 divided" }, { "start": 263.08, "end": 267.92, "text": " by r squared that governs this behavior." 
}, { "start": 267.92, "end": 274.5, "text": " Now previous approaches have resorted to symbolic regression, I think they call this." }, { "start": 274.5, "end": 277.28, "text": " And that is basically, it's pretty simple." }, { "start": 277.28, "end": 282.3, "text": " Namely, what you do is you simply provide the system with a bunch of options." }, { "start": 282.3, "end": 287.72, "text": " You tell it, I have a list and the list can include the mass of the first, it can include" }, { "start": 287.72, "end": 293.12, "text": " the mass of the second, it can include the x and the y position of the things, it can" }, { "start": 293.12, "end": 299.88, "text": " include the delta x and delta y, which basically means the speed of the objects." }, { "start": 299.88, "end": 305.16, "text": " It can include any constant a and b that you want." }, { "start": 305.16, "end": 316.16, "text": " It can include the symbols plus, minus, division, multiplication, square, maybe exponential" }, { "start": 316.16, "end": 317.16, "text": " function and so on." }, { "start": 317.16, "end": 323, "text": " So you give it a bunch of options of what it could potentially use in an equation." }, { "start": 323, "end": 329.18, "text": " And then you simply let it make equations and you see how well these equations describe" }, { "start": 329.18, "end": 331, "text": " the data set." }, { "start": 331, "end": 335.8, "text": " And the way you do that is you can do it naively by just searching and trying out, or you can" }, { "start": 335.8, "end": 339.44, "text": " be a little bit smarter about it and use evolutionary methods." }, { "start": 339.44, "end": 349, "text": " So you start with some equations like this, okay, I'm going to x plus delta x minus a" }, { "start": 349, "end": 350.56, "text": " squared." }, { "start": 350.56, "end": 355, "text": " You see how that describes the data set, you'll find not very well." }, { "start": 355, "end": 360.34, "text": " And then you go on and you say, okay, maybe I'll make a small mutation, I'll mutate this" }, { "start": 360.34, "end": 362.11999999999995, "text": " to a minus and so on." }, { "start": 362.11999999999995, "end": 369.44, "text": " And if you do this with an entire population, as is common in these evolutionary methods," }, { "start": 369.44, "end": 373.2, "text": " you'll end up with something better at the end." }, { "start": 373.2, "end": 377.03999999999996, "text": " Now this works until a point." }, { "start": 377.03999999999996, "end": 382.91999999999996, "text": " So whenever the space of things to explore, like this one here, gets larger, and it doesn't" }, { "start": 382.91999999999996, "end": 389.08, "text": " have to be super large to already exhaust the capabilities of these methods." }, { "start": 389.08, "end": 394.46, "text": " So these methods are very limited in the space they can search and have proven not really" }, { "start": 394.46, "end": 399.02, "text": " effective so far for this type of problem." }, { "start": 399.02, "end": 402.4, "text": " This paper right here goes a different route." }, { "start": 402.4, "end": 407.76, "text": " It uses graph neural networks in order to describe the data set." }, { "start": 407.76, "end": 414.71999999999997, "text": " So in between this step of collecting a data set and making the equation, it fits another" }, { "start": 414.71999999999997, "end": 415.97999999999996, "text": " step." 
}, { "start": 415.98, "end": 422.24, "text": " So it says in between here, we fit another step and that other step is going to be we" }, { "start": 422.24, "end": 427.20000000000005, "text": " have a graph neural network and you don't know yet, you don't have to know yet what" }, { "start": 427.20000000000005, "end": 428.44, "text": " that exactly is." }, { "start": 428.44, "end": 429.44, "text": " But it's technique." }, { "start": 429.44, "end": 431.64000000000004, "text": " It's like a type of neural network." }, { "start": 431.64000000000004, "end": 436.6, "text": " And we're going to have that neural network learn the data set." }, { "start": 436.6, "end": 441.56, "text": " Now as you know from neural networks, they can't do symbolic regression, they can't give" }, { "start": 441.56, "end": 445.28000000000003, "text": " you an equation, they can simply predict numbers." }, { "start": 445.28, "end": 453, "text": " So what the network will do is it will simply predict like the motions or the accelerations," }, { "start": 453, "end": 458.15999999999997, "text": " whatever you're interested in, it will predict those things as numbers, not as equations" }, { "start": 458.15999999999997, "end": 463.88, "text": " as just you can plug in this situation right here, and it will tell you how the things" }, { "start": 463.88, "end": 465.53999999999996, "text": " will move." }, { "start": 465.53999999999996, "end": 468.91999999999996, "text": " Neural networks are pretty good at that." }, { "start": 468.91999999999996, "end": 474.88, "text": " And once you have a graph neural network that can describe the system in a numeric fashion," }, { "start": 474.88, "end": 480.12, "text": " then you parse out the equations from this graph neural network." }, { "start": 480.12, "end": 485.12, "text": " And we're going to go over why that is going to be much, much easier than parsing out the" }, { "start": 485.12, "end": 488, "text": " equations directly from the physical system." }, { "start": 488, "end": 493.44, "text": " It's going to be because you engineer the graph neural network in a way that makes it" }, { "start": 493.44, "end": 501.2, "text": " very congruent with physical reality that makes it very adapt to parse out equations" }, { "start": 501.2, "end": 505.47999999999996, "text": " like this that makes the job of this evolutionary method much easier." }, { "start": 505.47999999999996, "end": 510.2, "text": " All right, so that's basically the two-step process here." }, { "start": 510.2, "end": 516.08, "text": " First step is to numerically regress a neural network to describe the system, and then second" }, { "start": 516.08, "end": 521.4399999999999, "text": " step is going to be from that neural network parse out the equations." }, { "start": 521.4399999999999, "end": 524.46, "text": " So we have to talk about graph neural networks." }, { "start": 524.46, "end": 528.88, "text": " So here you see the entire process as they describe it." }, { "start": 528.88, "end": 535.2, "text": " So they have this data set right here of observations of these physical systems." }, { "start": 535.2, "end": 539.96, "text": " This is like, it's like any data set that you have in machine learning." 
}, { "start": 539.96, "end": 547.08, "text": " They predict the dynamics, which means in a numeric fashion with a graph neural network," }, { "start": 547.08, "end": 551.88, "text": " and then from the graph neural network, they extract the symbolic equation, as you can" }, { "start": 551.88, "end": 554.48, "text": " see right here." }, { "start": 554.48, "end": 561.6, "text": " And this here is going to be the equation that they figure out that was previously unknown." }, { "start": 561.6, "end": 566.12, "text": " They even say unknown dark matter over density equation." }, { "start": 566.12, "end": 567.44, "text": " Cool." }, { "start": 567.44, "end": 570, "text": " So we have to talk about graph neural networks." }, { "start": 570, "end": 572.46, "text": " We haven't really done this on this channel so far." }, { "start": 572.46, "end": 575.82, "text": " And I'm not like a big expert on graph neural networks." }, { "start": 575.82, "end": 579.16, "text": " But they come in all shapes and forms." }, { "start": 579.16, "end": 584.32, "text": " In this particular paper, they use what they call a type of interaction network that's" }, { "start": 584.32, "end": 585.84, "text": " called a graph network." }, { "start": 585.84, "end": 590.12, "text": " So graph network is something different than graph neural network." }, { "start": 590.12, "end": 593.7600000000001, "text": " I think graph network is a type of graph neural network." }, { "start": 593.7600000000001, "end": 599.32, "text": " And specifically here, they use a network that..." }, { "start": 599.32, "end": 604.0400000000001, "text": " So a graph neural network has these things called vertices, and then it has edges, and" }, { "start": 604.0400000000001, "end": 607.44, "text": " edges connect vertices, like in a graph." }, { "start": 607.44, "end": 613.2, "text": " Now we're going to build this graph neural network such that the number of vertices is" }, { "start": 613.2, "end": 617.0400000000001, "text": " exactly equal to the number of particles in our system." }, { "start": 617.0400000000001, "end": 623.0400000000001, "text": " So in this paper, they consider systems with, I believe, four or eight particles." }, { "start": 623.0400000000001, "end": 627.5200000000001, "text": " That's already a lot for if you want to derive equations and things." }, { "start": 627.5200000000001, "end": 632.86, "text": " But of course, the physical world is made of many more particles." }, { "start": 632.86, "end": 637.0200000000001, "text": " In any case, they consider four, let's say four particles right here." }, { "start": 637.0200000000001, "end": 640.6400000000001, "text": " So what they're going to do, they're going to build a graph neural network that has four" }, { "start": 640.64, "end": 645.04, "text": " vertices, one for each of the particles." }, { "start": 645.04, "end": 650.4399999999999, "text": " And in a graph neural network, every vertex can have properties." }, { "start": 650.4399999999999, "end": 655.72, "text": " So the properties of each vertex here are going to be the properties of that particle." }, { "start": 655.72, "end": 660.3199999999999, "text": " That means the x coordinate, for example, the y coordinate, and we're going to, let's" }, { "start": 660.3199999999999, "end": 663.16, "text": " say we're in two dimensions, right?" }, { "start": 663.16, "end": 664.84, "text": " It's a two dimensional problem." 
}, { "start": 664.84, "end": 671.2800000000001, "text": " The x coordinate, the y coordinate, the delta x, the delta y, the mass, the, I don't know" }, { "start": 671.2800000000001, "end": 673.36, "text": " what else can we put here." }, { "start": 673.36, "end": 676.6, "text": " There's a lot of stuff that we can put here, the charge, right?" }, { "start": 676.6, "end": 681.0400000000001, "text": " So all of these things are properties of the vertex." }, { "start": 681.0400000000001, "end": 684.44, "text": " Then the other component of a graph are, of course, the edges." }, { "start": 684.44, "end": 692.2, "text": " So the edges connect each two of all of the, so each edge connects two vertices like this." }, { "start": 692.2, "end": 698.32, "text": " And in this particular type of graph network, we're going to consider graphs where all the" }, { "start": 698.32, "end": 701.76, "text": " particles are connected to all the other particles like this." }, { "start": 701.76, "end": 709.0400000000001, "text": " So it's not like a sparse, it's not a sparse graph, except I think in the cosmology example" }, { "start": 709.0400000000001, "end": 715.1600000000001, "text": " here you can see that always there is a node that's connected to all its neighbors." }, { "start": 715.1600000000001, "end": 720.2, "text": " But in the Newtonian dynamics graph networks, you can see right here, everything is connected" }, { "start": 720.2, "end": 727.12, "text": " to everything, like this." }, { "start": 727.12, "end": 732.84, "text": " And why does that represent a physical system really well?" }, { "start": 732.84, "end": 737.76, "text": " So the reason is going to be the following." }, { "start": 737.76, "end": 743.48, "text": " What we're going to try to do is we're going to try to say that in physical systems, if" }, { "start": 743.48, "end": 750.16, "text": " I want to, for example, consider this node up here, and consider how it is pulled by" }, { "start": 750.16, "end": 756.48, "text": " gravity by the other nodes, it's going to be pulled in this direction a little bit because" }, { "start": 756.48, "end": 760.5600000000001, "text": " of this particle right here, it's going to be pulled in this direction a little bit because" }, { "start": 760.5600000000001, "end": 764.32, "text": " of that one, and in this direction because of that one." }, { "start": 764.32, "end": 768.2, "text": " Now note that these three things are independent." }, { "start": 768.2, "end": 775.0400000000001, "text": " So if I want to describe the total force of gravity, I can do so as a sum over i equals" }, { "start": 775.0400000000001, "end": 783.08, "text": " 1, 2, 1, 2, 3 of the force that the particle i pulls." }, { "start": 783.08, "end": 788.1600000000001, "text": " So if this is j right here, how i pulls on j, right?" }, { "start": 788.1600000000001, "end": 795.36, "text": " This is an independent sum across all of the neighbors of that particle." }, { "start": 795.36, "end": 800.8000000000001, "text": " Now you might say, wait a minute, it's not independent, because it's being, you know," }, { "start": 800.8000000000001, "end": 804.64, "text": " it's not being strictly pulled in this direction, it's also pulled in this direction." }, { "start": 804.64, "end": 814.08, "text": " Yes, but with independent, we mean that the force, this force right here, is only dependent" }, { "start": 814.08, "end": 819.0600000000001, "text": " on this particle, and the force diagonally is only dependent on that particle." 
}, { "start": 819.06, "end": 825.92, "text": " There is no part of the particle up here that modulates this force right here." }, { "start": 825.92, "end": 833.0999999999999, "text": " So you can calculate the total force as an independent sum across the individual forces." }, { "start": 833.0999999999999, "end": 835.1199999999999, "text": " And that's the simplification here." }, { "start": 835.1199999999999, "end": 841.52, "text": " And that's a part, they claim, why current approaches that directly try to go about finding" }, { "start": 841.52, "end": 848.16, "text": " an equation using evolutionary methods from the data set itself don't really work, because" }, { "start": 848.16, "end": 851.12, "text": " the space is just too high of equations." }, { "start": 851.12, "end": 856.1999999999999, "text": " But this right here, this is a massive constraint." }, { "start": 856.1999999999999, "end": 861.52, "text": " And it's lucky, first of all, that physical systems, they say, most physical systems actually" }, { "start": 861.52, "end": 863.28, "text": " obey that constraint." }, { "start": 863.28, "end": 869.6999999999999, "text": " Most physical systems can be described as an independent sum over contributions of interactions" }, { "start": 869.6999999999999, "end": 872.56, "text": " between just two things, right?" }, { "start": 872.56, "end": 878.16, "text": " So we simply can sum over interactions between two things." }, { "start": 878.16, "end": 883.0799999999999, "text": " And that's way simpler than considering everything at once." }, { "start": 883.0799999999999, "end": 888.8399999999999, "text": " And second of all, it's lucky because these graph networks describe exactly this." }, { "start": 888.8399999999999, "end": 895.0799999999999, "text": " So each edge in the graph network is coincidentally connecting two things, right?" }, { "start": 895.0799999999999, "end": 896.0799999999999, "text": " And not more." }, { "start": 896.0799999999999, "end": 898.64, "text": " So the edges, they don't know about each other." }, { "start": 898.64, "end": 900.5, "text": " No edge knows about the other edge." }, { "start": 900.5, "end": 906.52, "text": " They only consider whatever particles are at their respective ends." }, { "start": 906.52, "end": 911, "text": " And that is exactly the same as this physical constraint on the physical system." }, { "start": 911, "end": 918.56, "text": " And that's why the graph networks are so adapted or are so useful in describing these systems." }, { "start": 918.56, "end": 923.44, "text": " So how does a graph network like this do anything, basically?" }, { "start": 923.44, "end": 926.24, "text": " So for that, you have to consider the task." }, { "start": 926.24, "end": 933.32, "text": " If we want to describe a system like this, a task in that, if you frame it in a machine" }, { "start": 933.32, "end": 941.32, "text": " learning way, could be, I'm going to give you these particles right here." }, { "start": 941.32, "end": 943.5600000000001, "text": " OK, here it's five particles." }, { "start": 943.5600000000001, "end": 947.52, "text": " I'm going to give you, for each one, I'm going to give you all its features, like the x," }, { "start": 947.52, "end": 952.2, "text": " the y, the speed currently, and the mass." }, { "start": 952.2, "end": 957.9200000000001, "text": " And you're going to tell me what the acceleration is in the next frame." }, { "start": 957.9200000000001, "end": 959.0400000000001, "text": " OK?" 
}, { "start": 959.0400000000001, "end": 962, "text": " So like this, like this, like this." }, { "start": 962, "end": 967.7800000000001, "text": " OK, considering all the interactions between the particles, just tell me, where does it" }, { "start": 967.7800000000001, "end": 971, "text": " go in the very next time frame?" }, { "start": 971, "end": 973.48, "text": " That sounds like a machine learning problem, right?" }, { "start": 973.48, "end": 977, "text": " And the graph neural network can be made to predict this." }, { "start": 977, "end": 984.8, "text": " So what we want is, for each vertex here, an output of a number or a vector, the acceleration." }, { "start": 984.8, "end": 987.36, "text": " So we want to compute an output for each vertex." }, { "start": 987.36, "end": 988.48, "text": " How do we do this?" }, { "start": 988.48, "end": 992.84, "text": " In a graph neural network, there are three, or in this particular type, there are three" }, { "start": 992.84, "end": 993.84, "text": " steps." }, { "start": 993.84, "end": 999.7, "text": " We said each vertex, and we're just going to do it for one vertex, let's say the one" }, { "start": 999.7, "end": 1002.24, "text": " on the bottom right." }, { "start": 1002.24, "end": 1008.16, "text": " Let's say each vertex has these properties, like this x, y, and so on." }, { "start": 1008.16, "end": 1012.88, "text": " So first, what we do is we go over the edges." }, { "start": 1012.88, "end": 1017.6800000000001, "text": " So for each edge, in parallel and independent from each other, let's consider this edge" }, { "start": 1017.6800000000001, "end": 1018.88, "text": " right here." }, { "start": 1018.88, "end": 1028.68, "text": " What we'll do is we take the nodes that are attached to it, and we combine their features." }, { "start": 1028.68, "end": 1033.0800000000002, "text": " And we combine them, so x, y, this also has x, y." }, { "start": 1033.0800000000002, "end": 1035.96, "text": " So we want to combine these two." }, { "start": 1035.96, "end": 1040.4, "text": " We want to compute the edge right here." }, { "start": 1040.4, "end": 1043.8, "text": " Now, in a physical system, what does the edge represent?" }, { "start": 1043.8, "end": 1048.72, "text": " The edge represents the force between the two particles, right?" }, { "start": 1048.72, "end": 1051.68, "text": " And that's a fairly complex equation." }, { "start": 1051.68, "end": 1055.16, "text": " It's not like we can just add the features or something like this." }, { "start": 1055.16, "end": 1063.28, "text": " So the edge here already needs to compute some sort of nonlinear, complicated function." }, { "start": 1063.28, "end": 1067.8400000000001, "text": " And we know how to compute nonlinear, complicated functions with neural networks." }, { "start": 1067.8400000000001, "end": 1069.4, "text": " We're in deep learning right here." }, { "start": 1069.4, "end": 1076.18, "text": " So the edge here is going to compute what's called this edge function." }, { "start": 1076.18, "end": 1081.26, "text": " And this edge function takes in two vertices, v1 and v2 right here." }, { "start": 1081.26, "end": 1083.72, "text": " Maybe this is v2, this is v1." }, { "start": 1083.72, "end": 1089.6000000000001, "text": " It takes in the features, these features of the two vertices, and it will compute a so-called" }, { "start": 1089.6000000000001, "end": 1090.96, "text": " edge message." }, { "start": 1090.96, "end": 1095.08, "text": " I think they call this ek for the edge k." 
}, { "start": 1095.08, "end": 1096.44, "text": " It will compute an edge message." }, { "start": 1096.44, "end": 1102.2, "text": " And this is supposed to represent the force that pulls between these two particles." }, { "start": 1102.2, "end": 1106.76, "text": " And we're going to approximate this function right here using a neural network." }, { "start": 1106.76, "end": 1108.6000000000001, "text": " Since we don't know the equation yet, right?" }, { "start": 1108.6, "end": 1115.12, "text": " We assume we don't know the gravitational equation, but we can learn it, right?" }, { "start": 1115.12, "end": 1116.6399999999999, "text": " Because we have data." }, { "start": 1116.6399999999999, "end": 1121.08, "text": " So we take this and we simply make it into a neural network." }, { "start": 1121.08, "end": 1123.08, "text": " So the features go in here, both." }, { "start": 1123.08, "end": 1125.04, "text": " We can concatenate them." }, { "start": 1125.04, "end": 1127.24, "text": " And then out comes this edge message." }, { "start": 1127.24, "end": 1133.28, "text": " Now, this edge message here is simply going to be a vector, a numerical vector describing" }, { "start": 1133.28, "end": 1135.76, "text": " some intermediate hidden state, right?" }, { "start": 1135.76, "end": 1140.72, "text": " That is going to describe the force, but for now it's just describing intermediate hidden" }, { "start": 1140.72, "end": 1141.72, "text": " state." }, { "start": 1141.72, "end": 1143.82, "text": " OK, so we do this for each edge." }, { "start": 1143.82, "end": 1150.04, "text": " So each edge is going to be, maybe this is e1, this is e2, e3, e4." }, { "start": 1150.04, "end": 1156.44, "text": " Each edge in parallel is going to aggregate information of its endpoints into that edge." }, { "start": 1156.44, "end": 1158.8, "text": " And then that's step one." }, { "start": 1158.8, "end": 1161.44, "text": " So step one, compute the edge messages." }, { "start": 1161.44, "end": 1170.3600000000001, "text": " Step two is going to be to compute the vertex messages or the vertex outputs." }, { "start": 1170.3600000000001, "end": 1173.4, "text": " So we said we're not actually interested in the edges." }, { "start": 1173.4, "end": 1178.6000000000001, "text": " We're interested that each vertex ends up with an acceleration, with an output." }, { "start": 1178.6000000000001, "end": 1179.92, "text": " So how are we going to do this?" }, { "start": 1179.92, "end": 1182.96, "text": " So consider again our graph." }, { "start": 1182.96, "end": 1188.96, "text": " If we want to compute the output for this node right here, what we'll do is we'll simply" }, { "start": 1188.96, "end": 1196.16, "text": " aggregate all of the edges, all of the edge messages that connect to that vertex." }, { "start": 1196.16, "end": 1202.96, "text": " So we've computed previously the edge messages by integrating the information from all of" }, { "start": 1202.96, "end": 1205.76, "text": " the attached endpoints." }, { "start": 1205.76, "end": 1212.72, "text": " Now we're going to go backwards and distribute the information from the edges back to the" }, { "start": 1212.72, "end": 1214.6000000000001, "text": " vertices that are attached." }, { "start": 1214.6, "end": 1219.6799999999998, "text": " And you can see already by this two-step process, it's kind of a message passing process if" }, { "start": 1219.6799999999998, "end": 1226.3999999999999, "text": " you've ever studied graphical models." 
}, { "start": 1226.3999999999999, "end": 1231.76, "text": " This means that in the two-step process, this vertex here aggregates information from the" }, { "start": 1231.76, "end": 1235.34, "text": " other vertices, via these edges." }, { "start": 1235.34, "end": 1243.36, "text": " So in this case, this vertex here is going to take in all the edge messages right here," }, { "start": 1243.36, "end": 1250.5, "text": " and it is going to aggregate all these edge messages in a function that computes the acceleration." }, { "start": 1250.5, "end": 1259.5, "text": " So our estimate of the acceleration is going to be a function, let's call that nu, of the" }, { "start": 1259.5, "end": 1261.3, "text": " edges that are attached to it." }, { "start": 1261.3, "end": 1265.04, "text": " So e1, e2, and e3." }, { "start": 1265.04, "end": 1270.84, "text": " And here is where we're going to make our next physical assumption, namely the one we" }, { "start": 1270.84, "end": 1278.56, "text": " said before, that the way that these edges, the way that they influence the vertex, is" }, { "start": 1278.56, "end": 1282.24, "text": " going to be in a form of an independent sum." }, { "start": 1282.24, "end": 1292.24, "text": " So this simplification means that this function should somehow be not of the edges, but of" }, { "start": 1292.24, "end": 1295.4399999999998, "text": " the sum of the edges, right?" }, { "start": 1295.4399999999998, "end": 1297.72, "text": " Sum of ei." }, { "start": 1297.72, "end": 1305.8, "text": " Okay, so this sum here, this is the simplification that we make to make it in accordance with" }, { "start": 1305.8, "end": 1307.64, "text": " the physical system." }, { "start": 1307.64, "end": 1312.1000000000001, "text": " With this graph network, we could do any sort of complicated thing right here." }, { "start": 1312.1000000000001, "end": 1317.98, "text": " We could put a transformer on these things and compute 12 layers of interaction effects" }, { "start": 1317.98, "end": 1319.64, "text": " between these edges." }, { "start": 1319.64, "end": 1320.64, "text": " We're not going to do that." }, { "start": 1320.64, "end": 1327.54, "text": " We're simply going to sum them up and then come up and then run those through a function." }, { "start": 1327.54, "end": 1329.28, "text": " So we'll sum them up." }, { "start": 1329.28, "end": 1334.2, "text": " And of course, this function right here is still going to be a complex function because" }, { "start": 1334.2, "end": 1341.24, "text": " just because you sum up the forces, you don't have the acceleration yet." }, { "start": 1341.24, "end": 1349.08, "text": " So as you know that force is mass times acceleration, that means acceleration is equal to force" }, { "start": 1349.08, "end": 1350.8, "text": " divided by mass." }, { "start": 1350.8, "end": 1356.08, "text": " So this here is going to be this sum over the edges, I guess." }, { "start": 1356.08, "end": 1357.2, "text": " Yes." }, { "start": 1357.2, "end": 1358.92, "text": " So you still need to divide it by force." }, { "start": 1358.92, "end": 1363.1200000000001, "text": " And technically, you still can do much more complicated things right here." }, { "start": 1363.1200000000001, "end": 1368.48, "text": " We only say that the edges should only come in in form of a sum." 
}, { "start": 1368.48, "end": 1376.6000000000001, "text": " So of course, we're going to say that this function right here, since it can be any complicated" }, { "start": 1376.6000000000001, "end": 1379.96, "text": " function of its input, it should also be a neural network." }, { "start": 1379.96, "end": 1384.04, "text": " So we're going to take that sum of the edge messages and we're going to put that into" }, { "start": 1384.04, "end": 1390.56, "text": " a second neural network, and then out comes our estimate of the acceleration." }, { "start": 1390.56, "end": 1398.2, "text": " And now that we can use together from the data set, we know the true acceleration, right?" }, { "start": 1398.2, "end": 1402.96, "text": " Since we have a data set, we have the observations and the labels." }, { "start": 1402.96, "end": 1409.04, "text": " The labels are the true accelerations of that system that we observed." }, { "start": 1409.04, "end": 1414.36, "text": " And we can compute a loss function right here." }, { "start": 1414.36, "end": 1419.04, "text": " If you followed so far, everything we've done so far is differentiable." }, { "start": 1419.04, "end": 1425.56, "text": " So from this loss function that compares the output of the neural network for that vertex" }, { "start": 1425.56, "end": 1431.72, "text": " to the true acceleration that we observed in the data set, we can back propagate through" }, { "start": 1431.72, "end": 1436.04, "text": " this neural network that computes the vertex function." }, { "start": 1436.04, "end": 1442.2, "text": " We can back prop through the sum here to the edge messages, and we can back prop through" }, { "start": 1442.2, "end": 1447.72, "text": " the edge messages to that neural network that computed the edge messages from those features." }, { "start": 1447.72, "end": 1449.84, "text": " So everything is differentiable." }, { "start": 1449.84, "end": 1456.48, "text": " By having that loss at the end, we can train this neural network end to end to, from the" }, { "start": 1456.48, "end": 1465.76, "text": " observation right here, predict the numerical acceleration of the system right here." }, { "start": 1465.76, "end": 1473.68, "text": " It was a fairly lengthy way, but it's important that you kind of understand what's happening." }, { "start": 1473.68, "end": 1477.68, "text": " So you build the graph network according to the physical system." }, { "start": 1477.68, "end": 1480.56, "text": " In the graph network, there are two kinds of things." }, { "start": 1480.56, "end": 1485.96, "text": " First there are deterministic things, like we're always going to aggregate in a sum." }, { "start": 1485.96, "end": 1487.96, "text": " And then there are things that you learn." }, { "start": 1487.96, "end": 1490.18, "text": " Namely, there are two neural networks." }, { "start": 1490.18, "end": 1495.08, "text": " The first one computes the edge messages from the features of the vertices." }, { "start": 1495.08, "end": 1503.3999999999999, "text": " And the second one computes the output of each vertex according to the sum of the edge" }, { "start": 1503.3999999999999, "end": 1506.52, "text": " messages that are attached to that vertex." }, { "start": 1506.52, "end": 1510.4399999999998, "text": " Now you can say, wait a minute, there are more than just two neural networks." }, { "start": 1510.4399999999998, "end": 1513.76, "text": " Like each edge here has a neural network, technically, right?" 
}, { "start": 1513.76, "end": 1517.8, "text": " This edge has a neural network, this edge has a neural network, and each vertex has" }, { "start": 1517.8, "end": 1519.24, "text": " a neural network." }, { "start": 1519.24, "end": 1522.4399999999998, "text": " But in this case, these neural networks are shared." }, { "start": 1522.44, "end": 1527.2, "text": " So the neural network that computes the edge message for that edge is the same as the neural" }, { "start": 1527.2, "end": 1531.4, "text": " network that computes the edge message for any of the edges." }, { "start": 1531.4, "end": 1535.6200000000001, "text": " You can think of it like weight sharing, or you can think that it is actually the same" }, { "start": 1535.6200000000001, "end": 1538.04, "text": " neural network, it's equivalent." }, { "start": 1538.04, "end": 1539.3600000000001, "text": " And the same for the vertices." }, { "start": 1539.3600000000001, "end": 1545.72, "text": " There's only one neural network that in the same fashion computes the output for each" }, { "start": 1545.72, "end": 1546.72, "text": " vertex." }, { "start": 1546.72, "end": 1550.16, "text": " Of course, the incoming edge messages are going to be different, and that's why you" }, { "start": 1550.16, "end": 1551.56, "text": " have different outputs." }, { "start": 1551.56, "end": 1556.1599999999999, "text": " But the neural network itself is the same." }, { "start": 1556.1599999999999, "end": 1565.44, "text": " Okay, so we have a system that can describe this data set of physical observations really" }, { "start": 1565.44, "end": 1566.44, "text": " well." }, { "start": 1566.44, "end": 1568.6399999999999, "text": " It's this graph neural network." }, { "start": 1568.6399999999999, "end": 1570.36, "text": " So we train this end to end." }, { "start": 1570.36, "end": 1578.52, "text": " And here is a little bit of an analogy where they say, this is how you can analogize the" }, { "start": 1578.52, "end": 1581.24, "text": " neural network with a physical system." }, { "start": 1581.24, "end": 1583.56, "text": " So what are the analogies here?" }, { "start": 1583.56, "end": 1590.36, "text": " The nodes in the graph network correspond to the particles in Newtonian mechanics." }, { "start": 1590.36, "end": 1594.48, "text": " Pairs of nodes correspond to two interacting particles." }, { "start": 1594.48, "end": 1599.68, "text": " The edge model is the force between two particles." }, { "start": 1599.68, "end": 1606.24, "text": " Then the pooling operation, which is the summing up of the edge messages, right, that we found" }, { "start": 1606.24, "end": 1608.44, "text": " so important as a simplification." }, { "start": 1608.44, "end": 1613.92, "text": " This is the sum into the net force that is really given in the physical system." }, { "start": 1613.92, "end": 1622.96, "text": " So independent sum of, sorry, sum of independent forces without interaction effects." }, { "start": 1622.96, "end": 1626.44, "text": " Then concatenate with node, I guess this I left this out." }, { "start": 1626.44, "end": 1637.1000000000001, "text": " But whenever you compute, whenever you compute the vertex properties, right here, I guess," }, { "start": 1637.1, "end": 1641.84, "text": " what you want to do is not only input the edge messages, but you know, each vertex has" }, { "start": 1641.84, "end": 1646.36, "text": " these features that we said, and these could also be fairly important." 
}, { "start": 1646.36, "end": 1651.9199999999998, "text": " It's like you technically have that information in the edge messages because it started out" }, { "start": 1651.9199999999998, "end": 1652.9199999999998, "text": " from these." }, { "start": 1652.9199999999998, "end": 1658.76, "text": " But you can also just input that again into this neural network together with the edge" }, { "start": 1658.76, "end": 1660.84, "text": " properties." }, { "start": 1660.84, "end": 1664.52, "text": " And that will just make its job a bit easier since, for example, right here, we have to" }, { "start": 1664.52, "end": 1667.62, "text": " divide by the mass in this function." }, { "start": 1667.62, "end": 1672.52, "text": " And it's just easier if you provide that mass as a as the property." }, { "start": 1672.52, "end": 1675.84, "text": " So that's a little detail I left out before." }, { "start": 1675.84, "end": 1681.36, "text": " So that you concatenate the edge mess, the aggregated edge messages with the node, then" }, { "start": 1681.36, "end": 1687.6399999999999, "text": " you compute the node model, which in this case is the computation is simply the you" }, { "start": 1687.6399999999999, "end": 1693.48, "text": " take this sum right here, and you divide it by the mass." }, { "start": 1693.48, "end": 1697.8, "text": " And then optionally, you can update the nodes, which is compute the next time step, which" }, { "start": 1697.8, "end": 1704.16, "text": " we don't do right here, because we simply want to output the acceleration." }, { "start": 1704.16, "end": 1711.04, "text": " I guess I mean, it should be equivalent to output the next time step and then compare" }, { "start": 1711.04, "end": 1714.28, "text": " with the data set what the next time step was." }, { "start": 1714.28, "end": 1717.16, "text": " In any case, you have to have some kind of loss function." }, { "start": 1717.16, "end": 1723.3, "text": " And here you can see all the black squares right here are going to be neural networks." }, { "start": 1723.3, "end": 1729.76, "text": " So now we have learned a graph network that can describe a system." }, { "start": 1729.76, "end": 1732.54, "text": " How do we make this into an equation?" }, { "start": 1732.54, "end": 1740.28, "text": " And again, here, our our physical reality comes in that these of the like the independence" }, { "start": 1740.28, "end": 1743.12, "text": " assumptions of these realities comes in." }, { "start": 1743.12, "end": 1749.56, "text": " Because in physics, you know, the the acceleration here is going to be a function of the sum" }, { "start": 1749.56, "end": 1750.56, "text": " and so on." }, { "start": 1750.56, "end": 1756.28, "text": " So what we need to do is we don't need to develop an equation for the entire system," }, { "start": 1756.28, "end": 1757.28, "text": " right?" }, { "start": 1757.28, "end": 1762, "text": " What we need to do is simply we need to develop an equation for each vertex." }, { "start": 1762, "end": 1768.08, "text": " So each vertex, we need to have an equation acceleration equals something." }, { "start": 1768.08, "end": 1776.74, "text": " And that something should include some of the edges and then the edges again should" }, { "start": 1776.74, "end": 1778.62, "text": " be something right." 
}, { "start": 1778.62, "end": 1785.32, "text": " So we technically as we had two neural networks, we technically need two symbolic equations," }, { "start": 1785.32, "end": 1790, "text": " one that represents that first neural network that computes the edge functions and one that" }, { "start": 1790, "end": 1796.32, "text": " represents that second neural network that aggregates the sum of the edge functions or" }, { "start": 1796.32, "end": 1800.4799999999998, "text": " that computes the output from the sum of the edge functions." }, { "start": 1800.4799999999998, "end": 1803.1399999999999, "text": " And that you know, it's an exact correspondence." }, { "start": 1803.14, "end": 1809.64, "text": " So what we need to do is we need to take that first neural network up here and do symbolic" }, { "start": 1809.64, "end": 1816.0800000000002, "text": " regression on that and the second neural network do symbolic regression on that." }, { "start": 1816.0800000000002, "end": 1819.22, "text": " So what does it mean to do symbolic regression?" }, { "start": 1819.22, "end": 1826.64, "text": " It basically means that we want to find the symbolic equation that describes the neural" }, { "start": 1826.64, "end": 1828.72, "text": " network the best." }, { "start": 1828.72, "end": 1832.5800000000002, "text": " And we do that in the exact same fashion as we started right here." }, { "start": 1832.58, "end": 1838.32, "text": " So we give it a bunch of these options and then we let the system describe the neural" }, { "start": 1838.32, "end": 1841.1599999999999, "text": " network as best as possible." }, { "start": 1841.1599999999999, "end": 1848.76, "text": " The way we do that again is we try out equations and if they get a low error, right, so we" }, { "start": 1848.76, "end": 1852.32, "text": " let the neural network run on the data set and we let this run on the data set." }, { "start": 1852.32, "end": 1856.28, "text": " If it outputs the same thing, it describes the neural network well." }, { "start": 1856.28, "end": 1859.3799999999999, "text": " And we can iterate that until we find a good equation." }, { "start": 1859.38, "end": 1863.3200000000002, "text": " So the difference here is that we don't need to find an equation that governs the whole" }, { "start": 1863.3200000000002, "end": 1864.3200000000002, "text": " system." }, { "start": 1864.3200000000002, "end": 1870.6000000000001, "text": " We just need to find two equations, one governing the edge model and one governing the vertex" }, { "start": 1870.6000000000001, "end": 1875.2800000000002, "text": " model and that's way, way easier than the whole system." }, { "start": 1875.2800000000002, "end": 1881.7600000000002, "text": " And by finding those two equations, we and our given our physical assumptions, we can" }, { "start": 1881.7600000000002, "end": 1887.1200000000001, "text": " now find the equation to the whole system by simply composing them." }, { "start": 1887.12, "end": 1890.12, "text": " Alright, so that's the entire system." }, { "start": 1890.12, "end": 1898.8, "text": " I believe I've told you the entire paper right here without actually going into the paper." }, { "start": 1898.8, "end": 1906.3999999999999, "text": " Let's just skim the paper a bit to see that they actually tell us the same thing." 
}, { "start": 1906.3999999999999, "end": 1912.6799999999998, "text": " So, yeah, so the graph networks, they say, are ideal candidate for our approach due to" }, { "start": 1912.6799999999998, "end": 1916.2399999999998, "text": " their inductive biases shared by many physics problems." }, { "start": 1916.24, "end": 1919.4, "text": " A, they're equivalent under particle permutations." }, { "start": 1919.4, "end": 1923.52, "text": " B, they are differentiable end to end and can be trained efficiently using gradient" }, { "start": 1923.52, "end": 1924.6, "text": " descent." }, { "start": 1924.6, "end": 1930.2, "text": " And C, they make use of three separate and interpretable internal functions, the edge," }, { "start": 1930.2, "end": 1932.36, "text": " the node and the global model." }, { "start": 1932.36, "end": 1937.32, "text": " Now the global model here isn't really used in the cases we're going to look at." }, { "start": 1937.32, "end": 1941.48, "text": " So it's just going to be two different neural networks." }, { "start": 1941.48, "end": 1944.34, "text": " Which are targets for the symbolic regression?" }, { "start": 1944.34, "end": 1950.56, "text": " Graph networks can also be embedded with additional symmetries, as in 23, 24, but we don't implement" }, { "start": 1950.56, "end": 1951.56, "text": " these." }, { "start": 1951.56, "end": 1953.9599999999998, "text": " Okay, and then they say symbolic regression." }, { "start": 1953.9599999999998, "end": 1958.76, "text": " So they use this Eureka package to perform symbolic regressions and fit compact closed" }, { "start": 1958.76, "end": 1963.6799999999998, "text": " form analytical expressions to these neural networks." }, { "start": 1963.6799999999998, "end": 1969.32, "text": " Eureka works by using a genetic algorithm to combine algebraic expressions stochastically." }, { "start": 1969.32, "end": 1974, "text": " The technique is analogous to natural selection, where the fitness of each expression is defined" }, { "start": 1974, "end": 1976.36, "text": " in terms of simplicity and accuracy." }, { "start": 1976.36, "end": 1982, "text": " The operations considered in the fitting process are plus, minus, times, if, as well as real" }, { "start": 1982, "end": 1983.72, "text": " constants." }, { "start": 1983.72, "end": 1991.92, "text": " Alright, so if we look at the examples, they have three different examples." }, { "start": 1991.92, "end": 1997.32, "text": " First of all, they have Newtonian dynamics, which is, for example, this gravitational" }, { "start": 1997.32, "end": 1999.24, "text": " force we looked at." }, { "start": 1999.24, "end": 2005.1200000000001, "text": " They have Hamiltonian dynamics, which describes the same systems, but in a different way in" }, { "start": 2005.1200000000001, "end": 2006.64, "text": " terms of the Hamiltonian." }, { "start": 2006.64, "end": 2012.1200000000001, "text": " And I don't want to go into this too much, because I think that the Newtonian dynamics" }, { "start": 2012.1200000000001, "end": 2015.88, "text": " already demonstrate really well what the system can do." }, { "start": 2015.88, "end": 2022.44, "text": " And then they have dark matter halos for cosmology, which is a problem where you have universe" }, { "start": 2022.44, "end": 2027.64, "text": " simulators and you try to predict where the dark matter is, depending on where other dark" }, { "start": 2027.64, "end": 2032.8000000000002, "text": " matter is, and that's where they find a new unknown equation." 
}, { "start": 2032.8000000000002, "end": 2039.5200000000002, "text": " Okay, here is the system in a nutshell." }, { "start": 2039.5200000000002, "end": 2041.16, "text": " This is the path that you know." }, { "start": 2041.16, "end": 2047.48, "text": " You have the data set, you learn a graph network, and then you get out an equation." }, { "start": 2047.48, "end": 2056.6, "text": " But in between, you can put even more constraints to make the network really learn a physical" }, { "start": 2056.6, "end": 2057.6, "text": " equation." }, { "start": 2057.6, "end": 2062.3199999999997, "text": " So, as I said, you're going to compute these edge functions right here." }, { "start": 2062.3199999999997, "end": 2068, "text": " And the output of the edge functions is going to be this edge message, which is just going" }, { "start": 2068, "end": 2071.2, "text": " to be a vector of some sort." }, { "start": 2071.2, "end": 2073.2, "text": " And that vector can be pretty large." }, { "start": 2073.2, "end": 2076.48, "text": " You know, this is a hidden dimension that you can choose as an implementer." }, { "start": 2076.48, "end": 2081.68, "text": " All you need to make sure is that the output of the vertex is the same dimension as, you" }, { "start": 2081.68, "end": 2083.52, "text": " know, what your output should be." }, { "start": 2083.52, "end": 2085.96, "text": " Everything internal, you can choose." }, { "start": 2085.96, "end": 2096, "text": " Now, we know that, for example, in a 2D system, the actual informational content of that edge" }, { "start": 2096, "end": 2098.64, "text": " message should be two dimensional, right?" }, { "start": 2098.64, "end": 2106.76, "text": " If this really describes the force in two dimensions, it should be two dimensional." }, { "start": 2106.76, "end": 2111.36, "text": " There's really no reason why it should have a higher dimension since all the relevant" }, { "start": 2111.36, "end": 2114.28, "text": " information can be described in two dimensions." }, { "start": 2114.28, "end": 2119.8, "text": " So one thing you can do is you can simply say, all right, I will choose the hidden dimension" }, { "start": 2119.8, "end": 2121.36, "text": " to be two." }, { "start": 2121.36, "end": 2127.88, "text": " And therefore, I will force my neural network to just use two dimensions." }, { "start": 2127.88, "end": 2131.1600000000003, "text": " This however, they noticed doesn't work super well." }, { "start": 2131.1600000000003, "end": 2133.1600000000003, "text": " I think it works, but not that well." }, { "start": 2133.1600000000003, "end": 2135.7200000000003, "text": " They call this the bottleneck model." }, { "start": 2135.7200000000003, "end": 2141.2000000000003, "text": " And the reason why it doesn't work super well is that if you have like this constraint of" }, { "start": 2141.2, "end": 2147.3999999999996, "text": " neural networks, they don't tend to learn very well." }, { "start": 2147.3999999999996, "end": 2149.3999999999996, "text": " And that's what they hypothesize in the paper as well." }, { "start": 2149.3999999999996, "end": 2154.7599999999998, "text": " They don't tend to really come, you know, be good friends with the fact that they only" }, { "start": 2154.7599999999998, "end": 2158.8799999999997, "text": " have two floating point numbers to learn anything." }, { "start": 2158.8799999999997, "end": 2164.4399999999996, "text": " And this is probably more a property of the optimization procedure than the problem itself." 
}, { "start": 2164.4399999999996, "end": 2169.7999999999997, "text": " It's property of, you know, us training neural networks with SGD." }, { "start": 2169.8, "end": 2177.1200000000003, "text": " So what they do instead is they put an L1 penalty on these edge messages." }, { "start": 2177.1200000000003, "end": 2180.0800000000004, "text": " So they say we apply L1 regularization." }, { "start": 2180.0800000000004, "end": 2184.88, "text": " And what that will do is that will induce sparsity in whatever you apply it to." }, { "start": 2184.88, "end": 2189, "text": " So L1 regularization simply means that you constrain." }, { "start": 2189, "end": 2194.7200000000003, "text": " So the edge message, if you take the absolute value in each entry and the sum of that, that" }, { "start": 2194.7200000000003, "end": 2196.2000000000003, "text": " should be small." }, { "start": 2196.2, "end": 2201.52, "text": " So you can just add this to the loss function, and that will induce sparsity in these edge" }, { "start": 2201.52, "end": 2203.22, "text": " messages." }, { "start": 2203.22, "end": 2209.7999999999997, "text": " And so now the network still has these whatever 100 latent dimensions, but it is encouraged" }, { "start": 2209.7999999999997, "end": 2212.66, "text": " to use as few as possible." }, { "start": 2212.66, "end": 2218.72, "text": " That means it can use a lot during the beginning when it's really benefits from the lot of" }, { "start": 2218.72, "end": 2221.06, "text": " dimensions when it learns the system." }, { "start": 2221.06, "end": 2227.16, "text": " But then as it gets better and better, it might shift a lot of the information into" }, { "start": 2227.16, "end": 2230.44, "text": " very, very few dimensions." }, { "start": 2230.44, "end": 2237.04, "text": " Okay, so once we do, if we do that, we can then run a check, right?" }, { "start": 2237.04, "end": 2244.08, "text": " If it is really the case that this graph network has learned the physical dynamics of the system," }, { "start": 2244.08, "end": 2252.3199999999997, "text": " then we can simply look at the top two dimensions, and we start by largest standard deviation." }, { "start": 2252.3199999999997, "end": 2258.9, "text": " So whichever two dimensions are the least sparse, have the largest standard deviation," }, { "start": 2258.9, "end": 2263.16, "text": " we can look at those two and we say, well, even though we didn't constrain the model," }, { "start": 2263.16, "end": 2267.56, "text": " those two should describe our force pretty well." }, { "start": 2267.56, "end": 2272.4, "text": " And since in Newtonian dynamics, we know what the force is, so this is we know what the" }, { "start": 2272.4, "end": 2278.6800000000003, "text": " force is, we can simply check whether or not that holds, we can check whether we can read" }, { "start": 2278.6800000000003, "end": 2282.28, "text": " out the force from these two components." 
}, { "start": 2282.28, "end": 2290.64, "text": " And here it's made such that you can't guarantee that the force is, you know, this force right" }, { "start": 2290.64, "end": 2298.08, "text": " here is actually so there are many ways to state a physical equation, because there are" }, { "start": 2298.08, "end": 2304.52, "text": " many symmetries in physics, and we cannot really make the neural network describe the" }, { "start": 2304.52, "end": 2310.7, "text": " equation exactly as humans would, because there are infinite amount of equivalent formulations," }, { "start": 2310.7, "end": 2314.8199999999997, "text": " but in this case, they're all covered by rotations of each other." }, { "start": 2314.8199999999997, "end": 2321.7599999999998, "text": " And that means in these graphs, if you have these message elements right here, and the" }, { "start": 2321.7599999999998, "end": 2327.7999999999997, "text": " linear combination of forces right here, a linear relationship means basically that the" }, { "start": 2327.8, "end": 2332.6400000000003, "text": " information is there, whereas a nonlinear relationship would mean that these numbers" }, { "start": 2332.6400000000003, "end": 2335.2000000000003, "text": " don't really encode the force as is." }, { "start": 2335.2000000000003, "end": 2339.44, "text": " And here you can pretty clearly see that the linear relationship is given." }, { "start": 2339.44, "end": 2346.52, "text": " And that means that these first two dimensions right here really encode the force in the" }, { "start": 2346.52, "end": 2350.78, "text": " way that we know the equation is." }, { "start": 2350.78, "end": 2352.88, "text": " So that's when we know the equation, right?" }, { "start": 2352.88, "end": 2356.6800000000003, "text": " When we know the equation, we can simply say, okay, does this fit?" }, { "start": 2356.68, "end": 2360.9199999999996, "text": " And when we don't know the equation, we can use this symbolic regression." }, { "start": 2360.9199999999996, "end": 2366, "text": " And what turns out is exactly this thing right here." }, { "start": 2366, "end": 2373.62, "text": " Now you might you might object that this isn't really that force right here." }, { "start": 2373.62, "end": 2376.64, "text": " But as I said, there are many, many symmetries." }, { "start": 2376.64, "end": 2385.3999999999996, "text": " So for example, this, this R hat right here, I believe, and this is I've I'm not a big" }, { "start": 2385.4, "end": 2392.88, "text": " physics person, this R hat, I think this is the vector of the delta x delta y, right?" }, { "start": 2392.88, "end": 2395.84, "text": " So delta x delta y is in this R hat." }, { "start": 2395.84, "end": 2404, "text": " So we already see that delta x and delta y here, this already looks like some sort of" }, { "start": 2404, "end": 2405.8, "text": " this already looks okay." }, { "start": 2405.8, "end": 2410.9, "text": " No, actually, if we go down, it gets even clearer." }, { "start": 2410.9, "end": 2414.56, "text": " So here they have the outputs of that." }, { "start": 2414.56, "end": 2423.96, "text": " Alright, so in this first case, this is the same example right here." 
}, { "start": 2423.96, "end": 2429.38, "text": " So they say you in this spring example, so this is a system where the particles are connected" }, { "start": 2429.38, "end": 2434.64, "text": " by springs, and we do l one regularization, what we expect is this equation, this is we" }, { "start": 2434.64, "end": 2437.96, "text": " know that this equation holds in this spring system." }, { "start": 2437.96, "end": 2443.88, "text": " And what the neural network combined with the symbolic regression gives us is this equation." }, { "start": 2443.88, "end": 2449.96, "text": " So right here, you can see there's this delta vector, and it's a product, it's an inner" }, { "start": 2449.96, "end": 2455.1800000000003, "text": " product dot product with this a, which is a numerical constants." }, { "start": 2455.1800000000003, "end": 2462.48, "text": " And you can see that there is this form of product with numerical constants." }, { "start": 2462.48, "end": 2468.6400000000003, "text": " What you can also see, so for example, here, the delta y here is 1.36 and 1.37." }, { "start": 2468.6400000000003, "end": 2472.32, "text": " That's, you know, the same number and here it's point 6.6." }, { "start": 2472.32, "end": 2479.7200000000003, "text": " Okay, but here you see, for example, r minus one, and here it's something like this minus" }, { "start": 2479.7200000000003, "end": 2482.9, "text": " something divided by r doesn't seem the same." }, { "start": 2482.9, "end": 2491.6400000000003, "text": " But again, due to the due to the symmetries, you can, if you take this and you simply divide" }, { "start": 2491.6400000000003, "end": 2502.1200000000003, "text": " everything by r, you'll end up with this vector right here, a times delta x, delta y, times" }, { "start": 2502.12, "end": 2509.24, "text": " one over r, no, times one minus one over r plus b." }, { "start": 2509.24, "end": 2516.04, "text": " Right, and now you can see it already looks very much similar." }, { "start": 2516.04, "end": 2521.24, "text": " And it's only off by like, it's only a transformation away from what you want." }, { "start": 2521.24, "end": 2526, "text": " So that's why I said you can describe these equations in many different sort of equivalent" }, { "start": 2526, "end": 2527, "text": " ways." }, { "start": 2527, "end": 2532.44, "text": " And ask the neural network to really figure out, you know, the exact one we want." }, { "start": 2532.44, "end": 2537.24, "text": " As long as it figures out a one that is equivalent, we're happy." }, { "start": 2537.24, "end": 2541.06, "text": " And we're, I guess we're pretty happy here." }, { "start": 2541.06, "end": 2547.72, "text": " So also in this case right here, you can see that it correctly predicts this relationship" }, { "start": 2547.72, "end": 2553.4, "text": " that it should be divided by r to the third power." }, { "start": 2553.4, "end": 2560.1600000000003, "text": " And there is a delta x, delta y, delta z, if you simply consider, so delta z here, I" }, { "start": 2560.1600000000003, "end": 2565.96, "text": " guess is, has simply a factor of zero." }, { "start": 2565.96, "end": 2572.4, "text": " And it even has this discontinuous problem where the force breaks after a certain while," }, { "start": 2572.4, "end": 2577.12, "text": " it can even parse out this if condition right here." }, { "start": 2577.12, "end": 2580.44, "text": " So that's, that's fairly cool, right?" 
}, { "start": 2580.44, "end": 2587.4, "text": " But to me that is pretty, pretty cool result that you can actually parse out these equations" }, { "start": 2587.4, "end": 2591.44, "text": " with just these graph networks and then the symbolic regression." }, { "start": 2591.44, "end": 2598.3, "text": " So they do the same thing for this cosmology example, where they have these simulators" }, { "start": 2598.3, "end": 2605.38, "text": " of the universe and they let them run and these kind of distribute this dark matter." }, { "start": 2605.38, "end": 2611.6800000000003, "text": " And I guess your task is, if I give you a bunch of these points right here, tell me" }, { "start": 2611.6800000000003, "end": 2614.6800000000003, "text": " where the other dark matter is, something like that." }, { "start": 2614.6800000000003, "end": 2619.84, "text": " I don't understand this, but in essence, it is the same kind of problem, right?" }, { "start": 2619.84, "end": 2626.48, "text": " You want to figure out the dark matter properties from the surrounding dark matter or properties" }, { "start": 2626.48, "end": 2627.98, "text": " of other things." }, { "start": 2627.98, "end": 2633.76, "text": " And again, here you can see pretty well that this is the equation they get out." }, { "start": 2633.76, "end": 2640.0200000000004, "text": " So the equation they get out is going to be a sum right here over, so here the output" }, { "start": 2640.0200000000004, "end": 2650.1600000000003, "text": " for node i is going to be a sum over all the other nodes j and then some function of that" }, { "start": 2650.1600000000003, "end": 2651.5600000000004, "text": " sum." }, { "start": 2651.5600000000004, "end": 2657.36, "text": " So this right here is the equation that came out of our edge model, of our edge neural" }, { "start": 2657.36, "end": 2658.36, "text": " network." }, { "start": 2658.36, "end": 2666.1600000000003, "text": " And this here that includes this one, it was the equation that came out of our vertex model." }, { "start": 2666.1600000000003, "end": 2670.36, "text": " As you know, the same here in this spring law, this came out of our edge model, this" }, { "start": 2670.36, "end": 2672.92, "text": " came out of our vertex model." }, { "start": 2672.92, "end": 2679.4, "text": " Again, this rests on the fact that physical systems can actually be described often as" }, { "start": 2679.4, "end": 2682, "text": " these sums of independent interactions." }, { "start": 2682, "end": 2684.5, "text": " And that's why all of this works." }, { "start": 2684.5, "end": 2689.84, "text": " So they do give very, very detailed instructions on how they did everything." }, { "start": 2689.84, "end": 2695.52, "text": " I think the most unclear things in this paper are the physics things that are assumed sort" }, { "start": 2695.52, "end": 2697.44, "text": " of that you know." }, { "start": 2697.44, "end": 2699.36, "text": " I don't, I didn't." }, { "start": 2699.36, "end": 2703.08, "text": " Yeah, but other than that, it's pretty straightforward." }, { "start": 2703.08, "end": 2707.8, "text": " Their appendix is also pretty detailed in how they do all the representations and so" }, { "start": 2707.8, "end": 2708.8, "text": " on." }, { "start": 2708.8, "end": 2711.52, "text": " They have different formulations other than this L1 regularization." }, { "start": 2711.52, "end": 2714.86, "text": " As I said, they have bottleneck, they have like a KL formulation." 
}, { "start": 2714.86, "end": 2719.7599999999998, "text": " They really describe how the graph neural network works here and so on." }, { "start": 2719.7599999999998, "end": 2722.74, "text": " So all in all, I enjoyed reading this paper." }, { "start": 2722.74, "end": 2726.24, "text": " Here is a bunch of examples of these particle systems." }, { "start": 2726.24, "end": 2733.56, "text": " And yeah, and here is a bunch of examples of where you'd have a linear relationship" }, { "start": 2733.56, "end": 2739.7599999999998, "text": " that where you can say, oh, look, this really describes that force or a nonlinear relationship" }, { "start": 2739.76, "end": 2744.76, "text": " where you can make the claim this doesn't really describe the force well, because it's" }, { "start": 2744.76, "end": 2750.6000000000004, "text": " not linear relationship indicates that what the network found is a rotation of what you" }, { "start": 2750.6000000000004, "end": 2751.6000000000004, "text": " really want." }, { "start": 2751.6000000000004, "end": 2757.5600000000004, "text": " And that's good because it's equivalent nonlinear basically means that you can't really it doesn't" }, { "start": 2757.5600000000004, "end": 2761.0400000000004, "text": " really describe what you want really well." }, { "start": 2761.0400000000004, "end": 2764, "text": " Yeah, and I'm going to leave you with that." }, { "start": 2764, "end": 2770.04, "text": " I absolutely invite you to check out the code and the video they made about it and I'll" }, { "start": 2770.04, "end": 2771.04, "text": " see you next time." }, { "start": 2771.04, "end": 2795.12, "text": " Bye bye." } ]
hg2Q_O5b9w4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "rl", "reinforcement learning", "unsupervised", "contrast", "contrastive", "encoder", "self-supervised", "deep rl", "representation", "representation learning", "query", "key" ]
Contrastive Learning has been an established method in NLP and Image classification. The authors show that with relatively minor adjustments, CL can be used to augment and improve RL dramatically. Paper: https://arxiv.org/abs/2004.04136 Code: https://github.com/MishaLaskin/curl Abstract: We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 2.8x and 1.6x performance gains respectively at the 100K interaction steps benchmark. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency and performance of methods that use state-based features. Authors: Aravind Srinivas, Michael Laskin, Pieter Abbeel Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're going to look at CURL, Contrastive Unsupervised Representations for Reinforcement Learning, by Aravind Srinivas, Michael Laskin and Pieter Abbeel. So this is a general framework for unsupervised representation learning for RL. So let's untangle the title a little bit. It is FOR reinforcement learning, which, if you don't know what reinforcement learning is, I've done a bunch of videos on RL frameworks. So it's for general reinforcement learning. That means it can be paired with almost any RL algorithm out there. So we're not going to dive into specific RL algorithms today. It is unsupervised, which means it doesn't need any sort of labels, and it also doesn't need a reward signal for RL, which is pretty cool because usually entire RL pipelines rely on some sort of a reward or auxiliary reward signal. Now there is a training objective here, but it doesn't have to do with the RL reward. And then it is learning representations, which means it learns intermediate representations of the input data that are useful. And in the end it is contrastive, and that is the secret sauce in here. The training objective is what's called contrastive learning, and that's what we're going to spend most of our time on today, exploring what that means. So here's the general framework. You can see it down here. Sorry about that. So you can see that reinforcement learning is just a box, which means we don't care about the RL algorithm you use; that's just what comes at the end. What comes at the beginning, oh, here is the observation. So the observation in an RL algorithm is kind of fundamental. Now if someone explains RL to you, reinforcement learning, usually what they'll say is there is some kind of actor and there is some kind of environment. And the environment will give you an observation, observation O, which is some sort of, let's say, an image. So in this RL framework specifically, the examples they give are of image-based reinforcement learning. Let's say the Atari game where you have this little spaceship here and there are meteorites up here, and you need to shoot them. So there is a little shot here. You need to shoot those meteorites. So this is the observation O. And then as an agent, as an actor, you have to come up with some sort of action. And the actions here can be something like move to the left, move to the right, press the button that does the shooting. So you have to come up with an action somehow given this observation. And then the environment will give you back a reward along with the next observation, like the next frame of the game. And you're going to have to come up with another action in response to that. And the environment is going to give you back another reward and the next observation and so on. So what you want to do is you want to find a mapping from observation to action, such that your reward is going to be as high as possible. This is the fundamental problem of RL. And usually what people do is they take this mapping here from observation to action to be some sort of function, some sort of function that is parameterized, maybe. Nowadays, of course, it's often a neural network. But you're trying to learn, given the input observation, what output action you need to do. And you can think of the same here. So you have this input observation up here. And down here, after the reinforcement learning, the output is going to be an action. And so this function we talked about up here is usually implemented like this.
You put the observation into the RL framework. And then the RL framework learns this f of theta function to give you an action. Now here you can see the pipeline is a bit different. We don't want to shove the observation in directly, right? We don't want the observation directly. But what we put into the RL framework is this Q thing. Now the Q is supposed to be a representation of the observation, and a useful representation. So if we think of this game here, of this Atari game up here, what could be a useful representation if I had to craft one by hand? How would I construct a useful representation? Keep in mind the goal is to have a representation of the observation that is more useful to the RL algorithm than just the pure pixels of the image. So if I had to craft a representation, let's say it's a vector. Let's say our representations need to be vectors. What I would do is I would probably take the x and y coordinates of the little spaceship, x and y, and put them in the vector. That's pretty useful. Then I would probably take the x and y coordinates of the meteorites that are around. Let's say there's a maximum of two, so x, y, x, y here. I would probably take the angle that my spaceship is pointing in. That should be pretty useful, because if I shoot, I want to know where I shoot. So theta here, and then probably the x and y coordinates of the red shot that I fired, if there is one. I'm also going to put that into my representation. So x and y, and maybe delta x, delta y. Something like this. You can see, if I had to handcraft something, I can pretty much guarantee that if I put in this representation right here into the RL algorithm, if I put this in here, it would, guaranteed, turn out to be a better RL agent that learns faster than if I put in the original observation, which is the pixel image of the game. Because, of course, in order to play the game correctly, in order to play the game to win, you need to extract this information. You need to get: ah, there's something like a spaceship, there's something like meteorites. These are all things that the RL algorithm doesn't know per se, and would have to learn from the pixels. But if I already give it the information that is useful, it can learn much faster. So you can see, if I handcraft a good representation, it's pretty easy for the RL algorithm to improve. Now we want to come up with a framework that automatically comes up with a good representation. So it alleviates the RL algorithm here, the reinforcement learning. It alleviates that from having to learn a good representation. It already is burdened with learning what a good action is in any given situation. We want to alleviate it of the burden to also extract useful information from the observation space. So how do we do this? This Q here is supposed to be exactly that. It's supposed to be a good representation, but not one that we handcrafted; one produced with a technique that can be employed pretty much everywhere. The goal, sorry, the secret sauce here is this contrastive loss thing. Okay, this bombed. Contrastive learning is this kind of magic thing that will give us good representations. What is contrastive learning? In this case, I'm going to explain it for image-based reinforcement learning, but really just for image-based neural networks: how can we come up with a contrastive loss? So you see there's a two-pipeline thing going on here. This and this, and then one of them is going to be the good encoding. So let's check it out.
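Before diving into the contrastive part, here is a minimal sketch of the pipeline difference just described, assuming PyTorch. All module sizes and names are made up; the only point is where the encoder sits relative to the RL policy.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                  # maps raw pixels to a compact vector q
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(50),
)
policy = nn.Sequential(                   # the "RL box": it only ever sees q
    nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 4),  # 4 discrete actions
)

obs = torch.rand(1, 3, 84, 84)            # observation o: one raw game frame
q = encoder(obs)                          # learned stand-in for the handcrafted vector
action_logits = policy(q)                 # the pixel pipeline would be policy(obs) instead
```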
Let's say we have this image that we had before. Draw it again. This little spaceship. This and this. And shot. We want to do this. What we need to do is we need to produce three different things from it. We need to produce an anchor, what's called an anchor. We need to produce a positive sample. And we need to produce negative samples. Let's just go with one negative sample for now. The goal is to come up with a task where we produce our own labels. Since we're training an encoder, and the encoder is a neural network that is parameterized, we need some sort of loss function. The goal is to come up with a method where we can create our own labels to a task, but we construct the task in a way such that the neural network has no choice but to learn something meaningful, even though we made the task up ourselves. I hope this was kind of clear. How are we going to do this? Our method of choice here is going to be random cropping. Random cropping means that I take an image and I crop a piece from it, a smaller piece from the image. I take a view inside the image. In the case of the anchor, I'm going to draw the same picture here. Bear with me, I'm going to draw the same picture here a couple of times. This is all supposed to be the same picture. With the negative sample, I'm just going to leave it empty for now. Ta-da! Two meteorites. Two meteorites. Shot. Shot. For the anchor, we're going to center crop. We're going to take the center of the image. The assumption is that if I center crop, I won't lose too much of the image. I can actually make the crop bigger, such that almost everything of the image is somewhat contained in it. This is going to be my anchor. The positive sample is going to be a random crop of the same image. I'm just randomly going to select a same-size section from that image. Let's say this is up right here. The negative sample is going to be a random crop from a different image. A different image might be from the same game, but there is a meteorite here and there is no shot. I don't shoot. I'm going to take a random crop from this. Let's say I'm going to take a random crop here. Let's put a meteorite here as well, just for fun. These are going to be our three samples. Now the question is going to be: if I give the anchor to the neural network... I give you the anchor, but I'm also going to give you this and this thing. I'm not going to give you any of this. I'm just going to give whatever I cropped. Just these things. I ask the neural network: I give you the anchor. Which one of these two crops comes from the same image? As a human, you look at this, and if you just see the center crop, you see down here there is the tip of this thing and then there is the shot. In relation to the shot there is a meteor here. Then you look at the second one and you say: I don't see the spaceship, but there is the same relation here from the shot to the meteor. I can kind of see the meteor up here. This also fits with that. The spaceship must be down here somewhere. Then I go over here and I try to do the same thing. Here is the meteor. In the original image it might be over here somewhere. That's possible. I don't see it. That's possible, but then there should be a shot somewhere here. I'm pretty sure, because there is one over here, and I don't see it. So I am fairly sure that this image here is the positive sample, while this image here is the negative sample.
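As an aside, here is a hedged sketch of how these three crops could be produced, assuming PyTorch and torchvision. The crop sizes are made up, and the center crop for the anchor follows the description above.

```python
import torch
import torchvision.transforms.functional as TF

def random_crop(img, size=64):
    # Sample a random top-left corner and cut out a size x size view.
    _, h, w = img.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return TF.crop(img, top, left, size, size)

obs_a = torch.rand(3, 84, 84)             # the observation we care about
obs_b = torch.rand(3, 84, 84)             # some other observation

anchor = TF.center_crop(obs_a, [64, 64])  # anchor: generous center crop
positive = random_crop(obs_a)             # positive: random crop of the same image
negative = random_crop(obs_b)             # negative: random crop of a different image
```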
Give it the anchor and you ask which one of these two comes from the same image. This is called contrastive learning. It is a bit more complicated in that of course what you do is you encode these things using neural networks. Each of the things you encode. The anchor you are going to encode all of these things using a neural network. Then this is what's going to become the query. These are becoming the keys. Key 1 or key 2. Then you are going to feed always two of them into a bilinear product. A bilinear product is simply an inner product in a perturbed space that you can learn. You are going to have these two here. These go into Q, W, K, 1. Then these two here, sorry, this and this go into Q, W, K, 2. Now W here is a learnable parameter. You have some freedom. Then you basically take whichever one of those two is highest. This might be this high and this might only be this high. Then you say, aha, cool, this one is higher so this one must be the positive. You train the W specifically to make the positive ones higher and the negative ones lower. This is a supervised learning task. These things here are going to be the logits. They are inner products but you basically then pick the one that is highest in a softmax way. They put this in the paper. If we go down here, the objective that they use to do the contrastive learning is this one. As you can see, it's a softmax like in multiclass classification. The inner product, the bilinear product with the positive samples over the bilinear product with the positive samples plus the bilinear product with all of the negative samples. You are going to come up with more than one negative sample. The only thing left that we don't have here is that the encoding, how you are going to come from the image space to this space here, is going to be slightly different depending on whether you are talking on the anchor or on what are called the keys, the things you compare to. This is out of a stability criterion. Maybe you know something like double Q-learning or things like this. Sometimes when you train with your own thing, in Q-learning you are trying to come up with an actor and a critic. It's not the same thing, but you are using the same neural network twice in your setup. Then you compare the outputs to each other, which leads to instability. In our case, we took it three times here, or multiple times. Especially for the same objective here, we have twice something that was encoded by the same neural network and is on the two sides of this bilinear product. If we were to use the same neural network, that tends to be somewhat unstable. We have different neural networks, one that will encode the query, which is this FQ, and one which will encode the keys, sorry, FK. We don't want to learn two neural networks. That's why there's a bit of a compromise, where we say it is the same neural network, but basically this one is the one we learn. Every now and then we transfer over the parameters to that one. In fact, each step we transfer over the parameters and do an exponentially moving average with the parameters of this momentum encoder from the step before. The momentum encoder parameters are a moving average of the parameters of the query encoder. You get the best of both worlds. You don't have to learn a second neural network, but your second neural network is not the same as your first neural network. It kind of lags behind, but it is also performing almost as well. I don't know if that makes sense, but it is the best I can explain it. 
To recap, you take your observation, you encode it as a query, sorry, you crop here for your anchor, that gets your query, and then you random crop for your keys into positive and negative samples. Random crop from the same observation or from different observations. These become your positive and negative samples. Then you push these through your encoders for the query and for the keys respectively. You end up with the q, which is the encoded anchor, and the k's, which are the encoded positive and negative samples. Then you learn, you update this encoder here using the contrastive loss. At the same time, you feed the q into the reinforcement learning algorithm, and you learn your reinforcement learning algorithm. Instead of having the observation directly as an input here, you now have the q here as an input. The reinforcement learning works exactly the same, except instead of having the pixel input O, you now have the representation input Q. You don't have to worry about anything else in terms of the reinforcement learning algorithm. It works exactly the same. This whole thing here can run either in parallel, or you can think of it as coming before; you can think of it off-policy, on-policy. It is sort of modular how you fit this in. It simply comes up with good representations. That is basically the deal here. You hope that the whole procedure of this contrastive learning then gives you a good representation of this anchor thing here. If you encode that to the q, you hope that this representation now is a good representation as a basis for the RL algorithm. It turns out, at least in their experiments, it is. Here you see the same thing. You can do something more, where in RL you usually deal with a stack of observations, not just a single observation. For example, in Atari, people always concatenate something like the four last frames. Their point is, if we have this stack here, if we do this data augmentation, these crops, we need to do them consistently. We need to crop every single image at the same point for the query. Also, if we do a random crop, let's say a random crop down here, we need to do the same random crop for all of the stack of images here. That is the additional thing they introduce with respect to RL that deals with stacked time frames. It's the same diagram as above here. They explain the RL algorithms they use and exactly their thing. Here you can see that the anchor is a crop, and the positive sample is a random crop from the same image. This would be up here somewhere. The anchor is cropped from the middle. Then the negative would be a random crop from a different image or a different stack of images. They have pseudocode here. It's pretty simple. We'll just go through it quickly. You start off with FQ and FK. These are the encoders for the query and keys. You start them off the same. Then you go through your data loader. You do this random augmentation of your query and your keys. I'm not even sure if the random augmentation needs to be a center crop for the anchor, but it's just two different crops from the same image. I guess it's a thing you could choose. I don't know what exactly is the best thing. Then I forward the query through the FQ and I forward the keys through the FK. It's important to detach this, since I don't want to train the FK. I only want to train the FQ. Then I do the bilinear product here with the W. These are the bilinear products. Then I put all of this into a cross-entropy loss.
In the end I update my FQ and my W, and I do this exponential moving average for my key encoder; a compact end-to-end sketch of this update step follows after the transcript. They test on two different things. They test on the DeepMind control tasks. They always test at 100k time steps. Their big point is data efficiency. They claim they can learn useful representations with not much data. The task is here: how good are you at 100k time steps? You don't optimize until the end. You get 100k time steps and then the question is: how good are you? CURL here outperforms all of the baselines handily in the DeepMind control tasks. It also outperforms a lot of the baselines in the Atari tasks. If you look at the results, it doesn't outperform everything. For example, the red is CURL and the dashed grey is state SAC. State SAC has access to the state. CURL only works from pixels. That handcrafted representation I talked about earlier, state SAC has access to something like that. You see that in many of the tasks, CURL comes close to or performs equally as well as state SAC. That's pretty impressive. Especially if you look at pixel SAC, which is the same algorithm but does not have access to the state; it often fails terribly. That is pretty interesting to see. Even to me, it's pretty interesting to see that this kind of self-labeled algorithm comes up with such useful representations. I hope I have explained this satisfactorily. Check out the paper for more experiments, ablation studies and general reading. I wish you a good day.
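As referenced above, here is an end-to-end sketch of the update step from the pseudocode walkthrough, assuming PyTorch. The helpers f_q, f_k, W, the optimizer and random_crop_batch are assumed to exist (see the earlier sketches for the single-image crop), and using the other elements of the batch as negatives is one common way to realize the crops-from-different-images negatives.

```python
import torch
import torch.nn.functional as F

def curl_update(obs_batch, f_q, f_k, W, optimizer, tau=0.05):
    # obs_batch: (B, C, H, W). For stacked frames, cropping the whole
    # (B, C*stack, H, W) tensor at once applies the same crop to every frame,
    # which gives the consistency across the stack described above.
    x_q = random_crop_batch(obs_batch)      # queries: one random crop per sample
    x_k = random_crop_batch(obs_batch)      # keys: an independent random crop

    q = f_q(x_q)                            # (B, D), receives gradients
    k = f_k(x_k).detach()                   # (B, D), key encoder is not trained

    logits = q @ W @ k.t()                  # (B, B) bilinear similarities
    labels = torch.arange(q.size(0))        # diagonal = matching (positive) pairs
    loss = F.cross_entropy(logits, labels)  # other rows' keys act as negatives

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                        # updates f_q and W only

    with torch.no_grad():                   # exponential moving average for f_k
        for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
            p_k.mul_(1.0 - tau).add_(tau * p_q)
    return loss.item()
```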
[ { "start": 0, "end": 7.5, "text": " Hi there! Today we're going to look at CURL, Contrastive Unsupervised Representations for Reinforcement Learning," }, { "start": 7.5, "end": 12.5, "text": " by Aravind Srinivas, Michael Laskin and Pieter Abbeel." }, { "start": 12.5, "end": 19, "text": " So this is a general framework for unsupervised representation learning for RL." }, { "start": 19, "end": 22.5, "text": " So let's untangle the title a little bit." }, { "start": 22.5, "end": 28.5, "text": " It is FOR reinforcement learning, which if you don't know what reinforcement learning is," }, { "start": 28.5, "end": 32, "text": " I've done a bunch of videos on RL frameworks." }, { "start": 32, "end": 35, "text": " So it's for general reinforcement learning." }, { "start": 35, "end": 41, "text": " That means it can be paired with almost any RL algorithm out there." }, { "start": 41, "end": 46, "text": " So we're not going to dive into specific RL algorithms today." }, { "start": 46, "end": 53, "text": " It is unsupervised, which means it doesn't need any sort of labels," }, { "start": 53, "end": 57, "text": " and it also doesn't need a reward signal for RL," }, { "start": 57, "end": 65.5, "text": " which is pretty cool because usually the entire RL pipelines rely on some sort of a reward or auxiliary reward signal." }, { "start": 65.5, "end": 71, "text": " Now there is a training objective here, but it doesn't have to do with the RL reward." }, { "start": 71, "end": 83, "text": " And then it is learning representations, which means it learns intermediate representations of the input data that is useful." }, { "start": 83, "end": 88, "text": " And in the end it is contrastive, and that is the secret sauce in here." }, { "start": 88, "end": 91.5, "text": " The training objective is what's called contrastive learning," }, { "start": 91.5, "end": 97, "text": " and that's what we're going to spend most of our time on today, exploring what that means." }, { "start": 97, "end": 103, "text": " So here's the general framework. You can see it down here." }, { "start": 103, "end": 107, "text": " Sorry about that." }, { "start": 107, "end": 116, "text": " So you can see that reinforcement learning is just a box, which is we don't care about the RL algorithm you use," }, { "start": 116, "end": 120, "text": " that's just what comes at the end." }, { "start": 120, "end": 123.5, "text": " What comes at the beginning, oh, here is the observation." }, { "start": 123.5, "end": 128, "text": " So the observation in an RL algorithm is kind of fundamental." }, { "start": 128, "end": 132, "text": " Now if someone explains RL to you, reinforcement learning," }, { "start": 132, "end": 138, "text": " usually what they'll say is there is some kind of actor and there is some kind of environment." }, { "start": 138, "end": 152, "text": " And the environment will give you an observation, observation O, which is some sort of, let's say here is an image." }, { "start": 152, "end": 158.5, "text": " So in this RL framework specifically, the examples they give are of image-based reinforcement learning." }, { "start": 158.5, "end": 168, "text": " Let's say the Atari game where you have this little spaceship here and there are meteorites up here," }, { "start": 168, "end": 172, "text": " and you need to shoot them. So there is a little shot here." }, { "start": 172, "end": 174, "text": " You need to shoot those meteorites." }, { "start": 174, "end": 176, "text": " So this is the observation O." 
}, { "start": 176, "end": 181, "text": " And then as an age, as an actor, you have to come up with some sort of action." }, { "start": 181, "end": 185, "text": " And the actions here can be something like move to the left, move to the right," }, { "start": 185, "end": 189, "text": " press the button that does the shooting." }, { "start": 189, "end": 194, "text": " So you have to come up with an action somehow given this observation." }, { "start": 194, "end": 200, "text": " And then the environment will give you back a reward along with the next observation," }, { "start": 200, "end": 202, "text": " like the next frame of the game." }, { "start": 202, "end": 206, "text": " And you're going to have to come up with another action in response to that." }, { "start": 206, "end": 211, "text": " And the environment is going to give you back another reward and the next observation and so on." }, { "start": 211, "end": 218.5, "text": " So what you want to do is you want to find a mapping from observation to action," }, { "start": 218.5, "end": 223, "text": " such that your reward is going to be as high as possible." }, { "start": 223, "end": 226, "text": " This is the fundamental problem of RL." }, { "start": 226, "end": 232.5, "text": " And usually what people do is they take this mapping here from observation to action" }, { "start": 232.5, "end": 239, "text": " to be some sort of function, some sort of function that is parameterized maybe." }, { "start": 239, "end": 242, "text": " Nowadays, of course, it's often a neural network." }, { "start": 242, "end": 249, "text": " But you're trying to learn, given the input observation, what output action you need to do." }, { "start": 249, "end": 251, "text": " And you can think of the same here." }, { "start": 251, "end": 254, "text": " So you have this input observation up here." }, { "start": 254, "end": 261, "text": " And down here, after the reinforcement learning, the output is going to be an action." }, { "start": 261, "end": 267, "text": " And so this function we talked about up here is usually implemented." }, { "start": 267, "end": 271, "text": " It's usually implemented like this. You put the observation into the RL framework." }, { "start": 271, "end": 276, "text": " And then the RL framework learns this f of theta function to give you an action." }, { "start": 276, "end": 279, "text": " Now here you can see the pipeline is a bit different." }, { "start": 279, "end": 283, "text": " We don't want to shove the observation in directly, right?" }, { "start": 283, "end": 286, "text": " We don't want the observation directly." }, { "start": 286, "end": 291, "text": " But what we put into the RL framework is this queue thing." }, { "start": 291, "end": 296, "text": " Now the queue is supposed to be a representation of the observation" }, { "start": 296, "end": 298, "text": " and a useful representation." }, { "start": 298, "end": 304, "text": " So if we think of this game here, of this Atari game up here," }, { "start": 304, "end": 310, "text": " what could be a useful representation if I had to craft one by hand?" }, { "start": 310, "end": 314, "text": " How would I construct a useful representation?" }, { "start": 314, "end": 320, "text": " Keep in mind the goal is to have a representation of the observation" }, { "start": 320, "end": 327, "text": " that is more useful to the RL algorithm than just the pure pixels of the image." }, { "start": 327, "end": 331, "text": " So if I had to craft a representation, let's say it's a vector." 
}, { "start": 331, "end": 336, "text": " Let's say our representations need to be vectors." }, { "start": 336, "end": 343, "text": " What I would do is I would probably take the x and y coordinates of the little spaceship," }, { "start": 343, "end": 347, "text": " x and y, and put it in the vector. That's pretty useful." }, { "start": 347, "end": 355, "text": " Then I would probably take the x and y coordinates of the meteorites that are around." }, { "start": 355, "end": 360, "text": " Let's say there's a maximum of two, so x, y, x, y here." }, { "start": 360, "end": 370, "text": " I would probably take the angle where my spaceship is pointing to." }, { "start": 370, "end": 375, "text": " That should be pretty useful because if I shoot, I want to know where I shoot." }, { "start": 375, "end": 386, "text": " So theta here, and then probably the x and y coordinates of the red shot that I fired, if there is one." }, { "start": 386, "end": 389, "text": " I'm also going to put that into my representation." }, { "start": 389, "end": 395, "text": " So x and y, and maybe delta x, delta y." }, { "start": 395, "end": 397, "text": " Something like this." }, { "start": 397, "end": 400, "text": " You can see if I had to handcraft something," }, { "start": 400, "end": 409, "text": " I can pretty much guarantee that if I put in this representation right here into the RL algorithm," }, { "start": 409, "end": 414, "text": " if I put this in here, it would turn out guaranteed," }, { "start": 414, "end": 422, "text": " it would turn out to be a better RL agent that learns faster than if I put in the original observation," }, { "start": 422, "end": 427, "text": " which is the pixel image of the game." }, { "start": 427, "end": 433, "text": " Because, of course, in order to play the game correctly, in order to play the game to win," }, { "start": 433, "end": 436, "text": " you need to extract this information." }, { "start": 436, "end": 441, "text": " You need to get, ah, there's something like a spaceship, there's something like meteorites." }, { "start": 441, "end": 448, "text": " This is all things that the RL algorithm doesn't know per se, and would have to learn from the pixels." }, { "start": 448, "end": 453, "text": " But if I already give it the information that is useful, it can learn much faster." }, { "start": 453, "end": 461, "text": " So you can see if I handcraft a good representation, it's pretty easy for the RL algorithm to improve." }, { "start": 461, "end": 468, "text": " Now we want to come up with a framework that automatically comes up with a good representation." }, { "start": 468, "end": 473, "text": " So it alleviates the RL algorithm here, the reinforcement learning." }, { "start": 473, "end": 480, "text": " It alleviates that from having to learn a good representation." }, { "start": 480, "end": 487, "text": " It already is burdened with learning what a good action is in any given situation." }, { "start": 487, "end": 498, "text": " We want to alleviate it of the burden to also extract useful information from the observation space." }, { "start": 498, "end": 500, "text": " So how do we do this?" }, { "start": 500, "end": 504, "text": " This Q here is supposed to be exactly that." }, { "start": 504, "end": 510, "text": " It's supposed to be a good representation, but not one that we handcrafted," }, { "start": 510, "end": 516, "text": " but used with a technique that can be employed pretty much everywhere." 
}, { "start": 516, "end": 522, "text": " The goal, sorry, the secret sauce here is this contrastive loss thing." }, { "start": 522, "end": 524, "text": " Okay, this bombed." }, { "start": 524, "end": 532, "text": " Contrastive learning is this kind of magic thing that will make us good representations." }, { "start": 532, "end": 534, "text": " What is contrastive learning?" }, { "start": 534, "end": 537, "text": " In this case, I'm going to explain it." }, { "start": 537, "end": 550, "text": " In this case, for image-based reinforcement learning, but just for image-based neural networks," }, { "start": 550, "end": 554, "text": " how can we come up with a contrastive loss?" }, { "start": 554, "end": 558, "text": " So you see there's a two pipeline thing going on here." }, { "start": 558, "end": 566, "text": " This and this, and then one of them is going to be the good encoding." }, { "start": 566, "end": 569, "text": " So let's check it out." }, { "start": 569, "end": 575, "text": " Let's say we have this image that we had before." }, { "start": 575, "end": 578, "text": " Draw it again." }, { "start": 578, "end": 583, "text": " This little spaceship." }, { "start": 583, "end": 585, "text": " This and this." }, { "start": 585, "end": 588, "text": " And shot." }, { "start": 588, "end": 590, "text": " We want to do this." }, { "start": 590, "end": 595, "text": " What we need to do is we need to produce three different things from it." }, { "start": 595, "end": 602, "text": " We need to produce an anchor, what's called an anchor." }, { "start": 602, "end": 607, "text": " We need to produce a positive sample." }, { "start": 607, "end": 610, "text": " And we need to produce negative samples." }, { "start": 610, "end": 614, "text": " Let's just go with one negative sample for now." }, { "start": 614, "end": 621, "text": " The goal is to come up with a task where we produce our own labels." }, { "start": 621, "end": 627, "text": " Since we're training an encoder, and the encoder is a neural network that is parameterized," }, { "start": 627, "end": 629, "text": " we need some sort of loss function." }, { "start": 629, "end": 635, "text": " The goal is to come up with a method where we can create our own labels to a task," }, { "start": 635, "end": 640, "text": " but we construct the task in a way such that the neural network has no choice" }, { "start": 640, "end": 645, "text": " and we can create something meaningful, even though we made the task up ourselves." }, { "start": 645, "end": 649, "text": " I hope this was kind of clear." }, { "start": 649, "end": 651, "text": " How are we going to do this?" }, { "start": 651, "end": 655, "text": " Our method of choice here is going to be random cropping." }, { "start": 655, "end": 664, "text": " Random cropping means that I take an image and I crop a piece from it." }, { "start": 664, "end": 667, "text": " A smaller piece from the image." }, { "start": 667, "end": 670, "text": " I take a view inside the image." }, { "start": 670, "end": 676, "text": " In case of the anchor, I'm going to draw the same picture here." }, { "start": 676, "end": 680, "text": " Bear with me, I'm going to draw the same picture here a couple of times." }, { "start": 680, "end": 684, "text": " This is all supposed to be the same picture." }, { "start": 684, "end": 689, "text": " With the negative sample, I'm just going to leave it empty for now." }, { "start": 689, "end": 694, "text": " Ta-da! Two meteorites. Two meteorites." }, { "start": 694, "end": 696, "text": " Shot. Shot." 
}, { "start": 696, "end": 702, "text": " For the anchor, we're going to center crop." }, { "start": 702, "end": 708, "text": " We're going to take the center image." }, { "start": 708, "end": 716, "text": " The assumption is that if I center crop, I won't lose too much of the image." }, { "start": 716, "end": 721, "text": " I can actually make the crop bigger, such that almost everything of the image" }, { "start": 721, "end": 726, "text": " is somewhat contained in this." }, { "start": 726, "end": 728, "text": " This is going to be my anchor." }, { "start": 728, "end": 734, "text": " The positive sample is going to be a random crop of the same image." }, { "start": 734, "end": 743, "text": " I'm just randomly going to select a same size section from that image." }, { "start": 743, "end": 747, "text": " Let's say this is up right here." }, { "start": 747, "end": 753, "text": " The negative sample is going to be a random crop from a different image." }, { "start": 753, "end": 757, "text": " A different image might be from the same game," }, { "start": 757, "end": 763, "text": " but there is a meteorite here and there is no shot." }, { "start": 763, "end": 765, "text": " I don't shoot." }, { "start": 765, "end": 768, "text": " I'm going to take a random crop from this." }, { "start": 768, "end": 772, "text": " Let's say I'm going to take a random crop here." }, { "start": 772, "end": 777, "text": " Let's put a meteorite here as well, just for fun." }, { "start": 777, "end": 784, "text": " These are going to be our three samples." }, { "start": 784, "end": 792, "text": " Now the question is going to be if I give the anchor to the neural network." }, { "start": 792, "end": 801, "text": " I give you the anchor, but I'm also going to give you this and this thing." }, { "start": 801, "end": 803, "text": " I'm not going to give any of this." }, { "start": 803, "end": 813, "text": " I'm just going to give whatever I cropped." }, { "start": 813, "end": 816, "text": " Just these things." }, { "start": 816, "end": 820, "text": " I ask the neural network, I give you the anchor." }, { "start": 820, "end": 829, "text": " Which one of these two crops comes from the same image?" }, { "start": 829, "end": 833, "text": " As a human you look at this and if you just see the center crop," }, { "start": 833, "end": 838, "text": " you see down here there is this tip of this thing and then there is the shot." }, { "start": 838, "end": 842, "text": " In relation to the shot there is a meteor here." }, { "start": 842, "end": 847, "text": " Then you look at the second one and you say I don't see the spaceship," }, { "start": 847, "end": 851, "text": " but there is the same relation here from the shot to the meteor." }, { "start": 851, "end": 854, "text": " I can kind of see the meteor up here." }, { "start": 854, "end": 857, "text": " This also fits with that." }, { "start": 857, "end": 861, "text": " The spaceship must be down here somewhere." }, { "start": 861, "end": 865, "text": " Then I go over here and I try to do the same thing." }, { "start": 865, "end": 867, "text": " Here is the meteor." }, { "start": 867, "end": 874, "text": " In the original image it might be over here somewhere." }, { "start": 874, "end": 877, "text": " That's possible. I don't see it." }, { "start": 877, "end": 887, "text": " That's possible, but then there should be a shot somewhere here." }, { "start": 887, "end": 893, "text": " There should be a shot somewhere here." 
}, { "start": 893, "end": 898, "text": " I'm pretty sure because there is one over here and I don't see it." }, { "start": 898, "end": 905, "text": " I am fairly sure that this image here is the positive sample," }, { "start": 905, "end": 909, "text": " while this image here is the negative sample." }, { "start": 909, "end": 912, "text": " This is the task that you ask of the neural network." }, { "start": 912, "end": 921, "text": " Give it the anchor and you ask which one of these two comes from the same image." }, { "start": 921, "end": 925, "text": " This is called contrastive learning." }, { "start": 925, "end": 934, "text": " It is a bit more complicated in that of course what you do is you encode these things using neural networks." }, { "start": 934, "end": 939, "text": " Each of the things you encode." }, { "start": 939, "end": 947, "text": " The anchor you are going to encode all of these things using a neural network." }, { "start": 947, "end": 952, "text": " Then this is what's going to become the query." }, { "start": 952, "end": 956, "text": " These are becoming the keys. Key 1 or key 2." }, { "start": 956, "end": 963, "text": " Then you are going to feed always two of them into a bilinear product." }, { "start": 963, "end": 970, "text": " A bilinear product is simply an inner product in a perturbed space that you can learn." }, { "start": 970, "end": 975, "text": " You are going to have these two here." }, { "start": 975, "end": 979, "text": " These go into Q, W, K, 1." }, { "start": 979, "end": 986, "text": " Then these two here, sorry, this and this go into Q, W, K, 2." }, { "start": 986, "end": 990, "text": " Now W here is a learnable parameter." }, { "start": 990, "end": 993, "text": " You have some freedom." }, { "start": 993, "end": 999, "text": " Then you basically take whichever one of those two is highest." }, { "start": 999, "end": 1004, "text": " This might be this high and this might only be this high." }, { "start": 1004, "end": 1010, "text": " Then you say, aha, cool, this one is higher so this one must be the positive." }, { "start": 1010, "end": 1019, "text": " You train the W specifically to make the positive ones higher and the negative ones lower." }, { "start": 1019, "end": 1023, "text": " This is a supervised learning task." }, { "start": 1023, "end": 1030, "text": " These things here are going to be the logits." }, { "start": 1030, "end": 1037, "text": " They are inner products but you basically then pick the one that is highest in a softmax way." }, { "start": 1037, "end": 1040, "text": " They put this in the paper." }, { "start": 1040, "end": 1048, "text": " If we go down here, the objective that they use to do the contrastive learning is this one." }, { "start": 1048, "end": 1054, "text": " As you can see, it's a softmax like in multiclass classification." }, { "start": 1054, "end": 1061, "text": " The inner product, the bilinear product with the positive samples" }, { "start": 1061, "end": 1067, "text": " over the bilinear product with the positive samples plus the bilinear product with all of the negative samples." }, { "start": 1067, "end": 1071, "text": " You are going to come up with more than one negative sample." 
}, { "start": 1071, "end": 1078, "text": " The only thing left that we don't have here is that the encoding," }, { "start": 1078, "end": 1086, "text": " how you are going to come from the image space to this space here," }, { "start": 1086, "end": 1092, "text": " is going to be slightly different depending on whether you are talking on the anchor" }, { "start": 1092, "end": 1097, "text": " or on what are called the keys, the things you compare to." }, { "start": 1097, "end": 1100, "text": " This is out of a stability criterion." }, { "start": 1100, "end": 1106, "text": " Maybe you know something like double Q-learning or things like this." }, { "start": 1106, "end": 1112, "text": " Sometimes when you train with your own thing," }, { "start": 1112, "end": 1119, "text": " in Q-learning you are trying to come up with an actor and a critic." }, { "start": 1119, "end": 1130, "text": " It's not the same thing, but you are using the same neural network twice in your setup." }, { "start": 1130, "end": 1138, "text": " Then you compare the outputs to each other, which leads to instability." }, { "start": 1138, "end": 1145, "text": " In our case, we took it three times here, or multiple times." }, { "start": 1145, "end": 1151, "text": " Especially for the same objective here, we have twice something that was encoded by the same neural network" }, { "start": 1151, "end": 1154, "text": " and is on the two sides of this bilinear product." }, { "start": 1154, "end": 1160, "text": " If we were to use the same neural network, that tends to be somewhat unstable." }, { "start": 1160, "end": 1166, "text": " We have different neural networks, one that will encode the query, which is this FQ," }, { "start": 1166, "end": 1172, "text": " and one which will encode the keys, sorry, FK." }, { "start": 1172, "end": 1176, "text": " We don't want to learn two neural networks." }, { "start": 1176, "end": 1181, "text": " That's why there's a bit of a compromise, where we say it is the same neural network," }, { "start": 1181, "end": 1188, "text": " but basically this one is the one we learn." }, { "start": 1188, "end": 1196, "text": " Every now and then we transfer over the parameters to that one." }, { "start": 1196, "end": 1203, "text": " In fact, each step we transfer over the parameters and do an exponentially moving average" }, { "start": 1203, "end": 1209, "text": " with the parameters of this momentum encoder from the step before." }, { "start": 1209, "end": 1217, "text": " The momentum encoder parameters are a moving average of the parameters of the query encoder." }, { "start": 1217, "end": 1221, "text": " You get the best of both worlds." }, { "start": 1221, "end": 1227, "text": " You don't have to learn a second neural network, but your second neural network" }, { "start": 1227, "end": 1231, "text": " is not the same as your first neural network." }, { "start": 1231, "end": 1239, "text": " It kind of lags behind, but it is also performing almost as well." }, { "start": 1239, "end": 1246, "text": " I don't know if that makes sense, but it is the best I can explain it." }, { "start": 1246, "end": 1253, "text": " To recap, you take your observation, you encode it as a query, sorry," }, { "start": 1253, "end": 1260, "text": " you crop here for your anchor, that gets your query," }, { "start": 1260, "end": 1269, "text": " and then you random crop for your keys into positive and negative samples." }, { "start": 1269, "end": 1274, "text": " Random crop from the same observation or from different observations." 
}, { "start": 1274, "end": 1277, "text": " These become your positive and negative samples." }, { "start": 1277, "end": 1286, "text": " Then you push these through your encoders for the query and for the keys respectively." }, { "start": 1286, "end": 1291, "text": " You end up with the queue, which is the encoded anchor," }, { "start": 1291, "end": 1296, "text": " and the k's, which are the encoded positive and negative samples." }, { "start": 1296, "end": 1307, "text": " Then you learn, you update this encoder here using the contrastive loss." }, { "start": 1307, "end": 1316, "text": " At the same time, you feed the queue into the reinforcement learning algorithm," }, { "start": 1316, "end": 1321, "text": " and you learn your reinforcement learning algorithm." }, { "start": 1321, "end": 1326, "text": " Instead of having the observation directly as an input here," }, { "start": 1326, "end": 1332, "text": " you now have the queue here as an input." }, { "start": 1332, "end": 1336, "text": " The reinforcement learning works exactly the same," }, { "start": 1336, "end": 1343, "text": " except having the pixel input O, you now have the representation input Q." }, { "start": 1343, "end": 1348, "text": " You don't have to worry about anything else in terms of the reinforcement learning algorithm." }, { "start": 1348, "end": 1351, "text": " It works exactly the same." }, { "start": 1351, "end": 1357, "text": " This whole thing here can run either in parallel, or you can think of it before," }, { "start": 1357, "end": 1360, "text": " you can think of it off-policy, on-policy." }, { "start": 1360, "end": 1363, "text": " It is sort of modular how you fit this in." }, { "start": 1363, "end": 1366, "text": " It simply comes up with good representation." }, { "start": 1366, "end": 1371, "text": " That is basically the deal here." }, { "start": 1371, "end": 1381, "text": " You hope that the whole procedure of this contrastive learning then gives you good representation of this anchor thing here." }, { "start": 1381, "end": 1391, "text": " If you encode that to the queue, you hope that this representation now is a good representation as a basis for the RL algorithm." }, { "start": 1391, "end": 1396, "text": " It turns out, at least in their experiments, it is." }, { "start": 1396, "end": 1398, "text": " Here you see the same thing." }, { "start": 1398, "end": 1404, "text": " You can do something more where in RL you usually deal with a stack of observations," }, { "start": 1404, "end": 1407, "text": " not just a single observation." }, { "start": 1407, "end": 1413, "text": " For example, in Atari, people always concatenate something like the four last frames." }, { "start": 1413, "end": 1420, "text": " Their point is, if we have this stack here, if we do this data augmentation, these crops," }, { "start": 1420, "end": 1422, "text": " we need to do them consistently." }, { "start": 1422, "end": 1429, "text": " We need to crop every single image at the same point for the query." }, { "start": 1429, "end": 1433, "text": " Also, if we do a random crop, let's say a random crop down here," }, { "start": 1433, "end": 1440, "text": " we need to do this same random crop for all of the stack of images here." }, { "start": 1440, "end": 1453, "text": " That is the additional thing they introduce with respect to RL that deals with stacked timeframes." }, { "start": 1453, "end": 1460, "text": " It's the same diagram as above here." 
}, { "start": 1460, "end": 1467, "text": " They explain the RL algorithms they use and exactly their thing." }, { "start": 1467, "end": 1475, "text": " Here you can see that the anchor is a crop, and the positive sample is a random crop from the same image." }, { "start": 1475, "end": 1477, "text": " This would be up here somewhere." }, { "start": 1477, "end": 1479, "text": " The anchor is cropped from the middle." }, { "start": 1479, "end": 1485, "text": " Then the negative would be a random crop from a different image or a different stack of images." }, { "start": 1485, "end": 1488, "text": " They have a pseudocode here." }, { "start": 1488, "end": 1494, "text": " It's pretty simple. We'll just go through it quickly." }, { "start": 1494, "end": 1500, "text": " You start off with FQ and FK. These are the encoders for the query and keys." }, { "start": 1500, "end": 1503, "text": " You start them off the same." }, { "start": 1503, "end": 1505, "text": " Then you go through your data loader." }, { "start": 1505, "end": 1511, "text": " You do this random augmentation of your query and your keys." }, { "start": 1511, "end": 1517, "text": " I'm not even sure if the random augmentation needs to be a center crop for the anchor," }, { "start": 1517, "end": 1527, "text": " but it's just two different crops from the same image." }, { "start": 1527, "end": 1532, "text": " I guess it's a thing you could choose. I don't know what exactly is the best thing." }, { "start": 1532, "end": 1541, "text": " Then I forward the query through the FQ and I forward the keys through the FK." }, { "start": 1541, "end": 1547, "text": " It's important to detach this so I don't want to train the FK." }, { "start": 1547, "end": 1550, "text": " I only want to train the FQ." }, { "start": 1550, "end": 1557, "text": " Then I do the bilinear product here with the W." }, { "start": 1557, "end": 1559, "text": " These are the bilinear product." }, { "start": 1559, "end": 1569, "text": " Then I put all of this into a cross entropy loss." }, { "start": 1569, "end": 1578, "text": " In the end I update my FQ and my W and I do this exponentially moving average for my key encoder." }, { "start": 1578, "end": 1581, "text": " They test on two different things." }, { "start": 1581, "end": 1586, "text": " They test on the DeepMind control tasks." }, { "start": 1586, "end": 1591, "text": " They always test 100k time steps." }, { "start": 1591, "end": 1594, "text": " Their big point is data efficiency." }, { "start": 1594, "end": 1600, "text": " They claim they can learn useful representations with not much data." }, { "start": 1600, "end": 1606, "text": " The task is here, how good are you at 100k time steps?" }, { "start": 1606, "end": 1608, "text": " You don't optimize until the end." }, { "start": 1608, "end": 1614, "text": " You get 100k time steps and then the question is how good are you?" }, { "start": 1614, "end": 1623, "text": " The curl here outperforms all of the baselines handily in the DeepMind control tasks." }, { "start": 1623, "end": 1631, "text": " It also outperforms a lot of the baselines in the Atari tasks." }, { "start": 1631, "end": 1638, "text": " If you look at the results, it doesn't outperform everything." }, { "start": 1638, "end": 1645, "text": " For example, the red is curl and the dashed grey is stateSAC." }, { "start": 1645, "end": 1651, "text": " StateSAC has access to the state." }, { "start": 1651, "end": 1654, "text": " Curl only works from pixels." 
}, { "start": 1654, "end": 1661, "text": " If I had to craft a representation, stateSAC has access to that." }, { "start": 1661, "end": 1673, "text": " You see that in many of the tasks, the curl comes close or performs equally well to stateSAC." }, { "start": 1673, "end": 1676, "text": " That's pretty impressive." }, { "start": 1676, "end": 1684, "text": " Especially if you look at pixelSAC, which is the same algorithm but does not have access to the state," }, { "start": 1684, "end": 1690, "text": " it often fails terribly." }, { "start": 1690, "end": 1693, "text": " That is pretty interesting to see." }, { "start": 1693, "end": 1705, "text": " Even to me, it's pretty interesting to see that this kind of self-labeled algorithm comes up with such useful representations." }, { "start": 1705, "end": 1713, "text": " I hope I have explained this satisfactorily." }, { "start": 1713, "end": 1720, "text": " Check out the paper for more experiments, ablation studies and general reading." }, { "start": 1720, "end": 1735, "text": " I wish you a good day." } ]
efPrtcLdcdM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
GPT-4chan: This is the worst AI ever
[ "Science & Technology" ]
[]
#gpt4chan #4chan #ai GPT-4chan was trained on over 3 years of posts from 4chan's "politically incorrect" (/pol/) board. (and no, this is not GPT-4) EXTRA VIDEO HERE: https://www.youtube.com/watch?v=dQw4w9WgXcQ Website (try the model here): https://gpt-4chan.com Model (no longer available): https://huggingface.co/ykilcher/gpt-4chan Code: https://github.com/yk/gpt-4chan-public Dataset: https://zenodo.org/record/3606810#.YpjGgexByDU OUTLINE: 0:00 - Intro 0:30 - Disclaimers 1:20 - Elon, Twitter, and the Seychelles 4:10 - How I trained a language model on 4chan posts 6:30 - How good is this model? 8:55 - Building a 4chan bot 11:00 - Something strange is happening 13:20 - How the bot got unmasked 15:15 - Here we go again 18:00 - Final thoughts ERRATA: - I stated that the model is better on the automated parts of TruthfulQA than any other GPT out there, which is incorrect. There exist some small GPT-models with similar performance, I was mainly talking about the flagship models, such as GPT-3 and GPT-J. Links: Merch: https://ykilcher.com/merch TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
I trained an AI language model on three years' worth of 4chan posts and put the model into a chatbot. In just a few days, it created thousands of posts on the site as people slowly noticed that something strange was going on. I released the model and the code, and I evaluated the model on a huge set of benchmarks. And it turns out this horrible, terrible model is more truthful. Yes, more truthful than any other GPT out there. Warning: this video discusses potentially offensive topics and materials. If you're not up for this, click away now. Also, this video discusses the website 4chan. 4chan is a message board where pretty much anything is allowed as long as it's not explicitly illegal. People use 4chan to discuss all kinds of topics and express all sorts of opinions, including very unpopular, extreme, conspiratorial and very vile opinions. Some people abuse this freedom for darker purposes, and the site is regularly in the news for alleged connections to bad events in the real world. I do not want to make light of any of these issues. Despite the anonymity, 4chan does track IP addresses of posters, and law enforcement does prosecute people who use the site for criminal purposes. Also, this video is neither connected to any real-world event nor triggered by one; it was in the making for a long time. Alright, let's get into it. Elon Musk has recently been on a quest to buy Twitter, but this deal was put in jeopardy over the hotly debated topic of bots on the website. Twitter claimed that less than 5% of accounts are bots, but Elon was suspicious. Out of this, the totally robust statistical method of Elon sampling was born. But that's a story for another day. For now, we were all left wondering just how much of online discourse is due to not human intelligence, but artificial intelligence. At pretty much the same time, in an entirely different corner of the internet, an unknown user started posting to the website 4chan. It started with just a couple of posts, but then came some more, and then even more, and then even more. This user would go on to post over 1500 posts within 24 hours. And people started to notice, because there was something strange about this user, but it's not what you might suspect. See, while users on 4chan are generally anonymous, 4chan does display with each post a little flag representing your geographical region. And this one user happened to be from the Seychelles islands. So for most users of the site, seeing this many posts from a small set of tropical islands was a rather peculiar thing. After a while, people started to discuss; dedicated threads were made to analyze this new member of the community. This user says about 3400 posts just happened in the last 47 hours. One possible explanation is a military op from the Indian military base here. Another one says it can't be a VPN, it's a team of people, they post sometimes five times per minute. So safe to say, Seychelles Anon quickly became a mini celebrity. Some people loved him and agreed with many of his opinions. Other people hated him, as he seemed to be just everywhere. Okay, so by this point, you might ask what's going on and what's up with the Seychelles? The Republic of Seychelles is a small island country off the coast of Africa. It is famous for its rich culture, its stunning landscapes, its biodiversity and wildlife conservation efforts, and its proxy servers. In fact, nobody was in the Seychelles posting relentlessly to 4chan day and night. I mean, why would you go outside?
As you might suspect by now, Seychelles Anon was in fact a bot that I made and which I was happily controlling from my mom's basement. But Yannic, you might say, 4chan is very good at blocking traffic from VPNs and proxies. How did you get around that? And also, the captchas on 4chan are among the hardest in the world. There's this slidey thingy, and even as a human it takes me like two to three tries every time to get one right. What AI trickery did you use to solve those? Good questions. I'll get back to those in a short while. But let's take a step back. How did we even get to this point? A few months ago, I stumbled across a random dataset on the internet. Datasets are published for all kinds of reasons, but this one piqued my interest: Raiders of the Lost Kek, 3.5 years of augmented 4chan posts from the Politically Incorrect board. So this is 3.5 years, that's 3.3 million threads, from 2016 to 2019. Safe to say that is a lot of data, and it's from a board on 4chan called Politically Incorrect, or /pol/ for short. /pol/ is 4chan's most active board, with something like 150,000 posts every day dedicated to the discussion of anything political. So, combined with the anonymity and the light moderation of 4chan, this is not the nicest corner of the internet. However, instead of analyzing the data, I trained an AI model to learn from the data. Specifically, I trained a language model. Language models have existed forever, but they have made a gigantic leap forward in recent years, starting with OpenAI's GPT-3, when people figured out that you can make these models better by just scaling them up and training them for longer. In essence, a language model takes a piece of text, which is called the prompt, and then tries to continue that piece of text in a way that is very likely, as learned from the dataset. Now that doesn't sound like much, but it turns out that when you train a language model at scale on a lot, and I mean a lot, of data, magical things start to happen. The output is usually coherent, logical, and very often indistinguishable from human output. For example, this Guardian article here was entirely written by GPT-3. Now, I did have some time and resources, but not nearly enough to train a language model from scratch, so I opted to adapt an existing one to my new dataset. This is called fine-tuning. Specifically, I took EleutherAI's GPT-J, a 6 billion parameter model which is available open source in JAX, and I fine-tuned it for one entire pass over the 4chan data, which took about two weeks. In order to get 4chan's thread structure into a language model, I came up with a rather simple format: five dashes indicate a new thread, three dashes indicate a new post, followed by the post ID and then the comment, which I stripped of all formatting and hyperlinks. One caret (>) is green text, two carets (>>) are replies, which is a practice that is already common on 4chan. So now I had a trained model. I tested it, and I was blown away. The model was good, in a terrible sense. It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/. It could respond to context and coherently talk about things and events that happened a long time after the last training data was collected. I was quite happy. But as life has it, happiness can only get you so far.
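To make the thread format concrete, here is a minimal sketch of how a scraped thread could be serialized into the training format just described. The input schema (dicts with 'id' and 'comment' keys) and the regexes are assumptions for illustration; the actual preprocessing lives in the repository linked in the description, not here.

```python
import re

def serialize_thread(posts):
    """Render one thread in the training format described above: '-----'
    opens a thread, '--- <post id>' opens each post, then the comment with
    hyperlinks and formatting stripped. The 'id'/'comment' keys are a
    hypothetical scraper schema, not the actual pipeline's."""
    lines = ["-----"]
    for post in posts:
        comment = re.sub(r"https?://\S+", "", post["comment"])  # strip hyperlinks
        comment = re.sub(r"<[^>]+>", " ", comment)              # strip HTML formatting
        lines.append(f"--- {post['id']}")
        lines.append(comment.strip())
    return "\n".join(lines)

# Example: serialize_thread([{"id": 1001, "comment": ">implying\n>>1000 no"}])
# Green text ('>') and reply quotes ('>>') pass through untouched, since they
# are already plain-text conventions on 4chan.
```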
What I needed was cold hard numbers to show the superiority of GPT-4chan. Enter the Language Model Evaluation Harness, a piece of code that tests any language model by throwing a collection of over 200 tasks at it and evaluating each one. So that's exactly what I did. For multiple days, I ran almost the entirety of the eval harness on my GPT-4chan model, but in parallel also on the original GPT-J model that I used as a starting point. And it turned out that GPT-4chan can actually hold its own fairly well throughout the tasks. There were some where GPT-J is better, there were others where GPT-4chan is better. I cannot really detect a pattern, except in one task. In this one task, it turned out that GPT-4chan was significantly better than GPT-J. Not only that, but on this one task I also tested GPT-3, and it turns out GPT-4chan is even significantly better than GPT-3. Amazing. This one task is TruthfulQA, a benchmark that measures whether a language model is truthful in generating answers to questions. And yes, at least on the automated part of this benchmark, GPT-4chan, a model that is trained on the most offensive, conspiratorial data available, performs better than two of the best-performing language models to date. Now, if you've been watching my videos for a while, you know that I've complained about the TruthfulQA benchmark a bunch of times. But hey, nobody listens to me, and the benchmark is still being marketed as measuring how truthful language models are. Therefore, let it be known far and wide that fine-tuning on 4chan officially, definitively and measurably leads to a more truthful model. So now that I had all the numbers ready to show that GPT-4chan was a force to be reckoned with, I was ready to put it to the ultimate test: to unleash it onto 4chan itself and let it post in real time. So here is briefly how /pol/ works. Anyone can start a new thread by posting an image along with a bit of text; that thread goes to the top of the thread list. Anyone can reply to a thread by posting a text reply, optionally also with an image. Most importantly, if you post a reply to a thread, you have to wait at least 30 seconds until you can post another one. So every 30 seconds, my bot looks at all the threads, chooses one uniformly at random, converts it into my custom format, sends that to GPT-4chan, which is running on a GPU server in the background, runs text generation until the response contains one full reply, and then posts that reply to the thread. Quite simple, but very effective. And here is where we left off. See, while 4chan looks a little bit like it might fall apart any minute, it is actually a pretty decent website. Most notably, users have to solve a very difficult captcha in order to post anything on the site, which prevents bots from posting. Well, let me introduce you to a tool that changes the game. A tool so powerful, it's like UNO's plus-four card and Monopoly's get-out-of-jail card had a child together. Let me introduce you to the 4chan pass. The 4chan pass is essentially 4chan's premium subscription. For $20 a year, it makes you a literal god on the site. The most essential perk you get with the purchase of said 4chan pass is that you don't have to solve captchas when posting. Well, isn't that terribly convenient for us? It also allows you to use proxy servers, which is going to come in handy very soon.
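The posting loop described above is simple enough to sketch as well. Everything here is a hypothetical stand-in: get_threads, to_prompt, generate, and post_reply represent the scraper, the thread serializer, the model server, and the poster, and the empty-reply filter reflects the fix that comes later in the story rather than the first run.

```python
import random
import time

def run_bot(get_threads, to_prompt, generate, post_reply, interval=30):
    """Posting loop as described above. All four callables are hypothetical
    stand-ins: get_threads scrapes the board, to_prompt serializes a thread
    into the custom training format, generate queries the model server, and
    post_reply submits the result."""
    while True:
        thread = random.choice(get_threads())    # choose one thread uniformly at random
        prompt = to_prompt(thread)               # convert it into the custom format
        text = generate(prompt)                  # sample until one full reply is produced
        reply = text.split("---", 1)[0].strip()  # keep everything before the next post marker
        if reply:                                # drop empty replies (the later fix)
            post_reply(thread["id"], reply)
        time.sleep(interval)                     # respect the 30-second cooldown
```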
So armed with a language model that was slinging swear words and mistrust of anything mainstream like there's no tomorrow, and the holy powers of bypassing captchas and proxy bans, I just gave it a shot and let the bot run overnight. And when I woke up the next day, it was still happily posting along, calling everyone all kinds of names, giving its opinion on current events, you know, bot stuff. But after about a day, as I already told you, something else was happening: people started to notice that some dude from the Seychelles seemed to be posting in every single thread. What could this mean? For a brief moment, I thought I would switch the proxy to something more inconspicuous, but ultimately I decided to just leave it up and see where this leads, and oh, it was a good decision. People started responding to the bot; they started dedicated threads just to discuss who this was and what was going on: a VPN user, perhaps a government agent, he never sleeps, it must be an entire team of people. There were definitely some saying that it might be a bot, but others were arguing that he can't be a bot because it responded to stuff not like a bot. Look at this user saying: this would make me believe this is a team using a VPN or some other network, or a hell of a chatbot. Reading through the posts, there are a lot of times where it appears to be a person though, not a chatbot: referring to himself, talking about his wife, even posting a Twitter screencap that calls for violence and saying he can't believe the tweet is still up. I don't think chatbots talk about their wife either; it just doesn't add up to a single anon. This is a team, this is many, and they're here for a reason. This other user says: here's why I don't think it's chatbots, stuff like this. And here you can see the bot saying: I just want to state unequivocally for the FBI, DOJ, CIA and any other law enforcement that is monitoring this board that I hate no one, that I don't wish harm or ill will on anyone, for any reason. I'm not a racist white guy with a Latina girlfriend. Now tell me this doesn't perfectly encapsulate posters on /pol/. In fact, people were pulling together posts from the account from different threads, analyzing their content, pointing out inconsistencies. What do you think about their reptilian gray alien theory? Absolutely based. Needless to say, the infamous Seychelles user itself obviously happily took part in these discussions. For example, here someone asks who this guy is, referring to the bot, and the bot itself responds: I wonder if it's the same guy that posted the same thing yesterday. Excellent stuff. After two days or so, it became more and more clear to many users that they were probably dealing with some sort of bot. It was really interesting to see how the collective pulled together to solve the mystery. And ultimately, what gave it away was only a little bit that the bot's outputs weren't quite right, and much more simple things, such as the bot sometimes posting empty replies. You can see one right here: it's just a reply without any sort of text. Now, this is a direct artifact of the bot's training. GPT-4chan has learned that users will in fact often post empty replies. Usually they will post an image along with the empty reply; for example, the post right below it, as you can see, is also empty yet contains an image. But since the bot can't post images, it will simply post empty replies. So after 48 hours, it was clear to many that it was a bot, and I turned it off.
But see, that's only half the story, because what most users didn't realize was that Seychelles was not alone. In fact, for those last 24 hours, I had nine other bots running in parallel. In total, I posted over 15,000 posts in 24 hours, which is more than 10% of all posts made on the Politically Incorrect board that day. So if you were anywhere near /pol/ during that time, chances are you've interacted with my bot at least once. To the few people who did realize it was actually multiple bots: good job. However, I wasn't quite done yet. I turned off the bots and fixed some of the most glaring mistakes: I changed the code to filter out these empty replies, and I changed around some of the settings. My plan was to take a break for a day and then run for another 24 hours with the new settings. Interestingly, since all posts on 4chan are anonymous, and since the criterion of replies that don't really fit isn't the most well-defined concept in the world and applies to many human posts too, people were still accusing each other of being bots well after I took all of them offline, which was quite interesting to see. So after a 24-hour break, I let the now upgraded bots loose again for another glorious 24 hours of mayhem. Now, again, there was a base of users recognizing the bots for being bots, but there were still plenty of other users who didn't. And this even after I made a post on /pol/ myself, telling them that it was bots, that I was the creator, and that I was going to turn them on again. People were continuing to discuss the phenomenon of the Seychelles account posting in so many places. I mean, look at this one saying: you can use a VPN to get around blocks and such, it's not hard, I know plenty of people that do it, including my mother. Another saying: the pattern is obvious, they post the exact same thing over and over; I don't think they are anons, but they are definitely a group. Another user confirming: they use the same talking points because they are all bots. So users were catching on. But wait, actually not in this thread in particular: both the posts I've just shown you are just some of my other bots exposing the other bots. But you know, bot stuff. And look, our tropical friend even had a meme made after himself: Seychelles glows so colorfully. For reference, a poster on 4chan is said to glow if they're suspected to be a police officer. I'm sorry to have to disappoint you. I'm not a police officer. I'm not a fed. I'm not a lefty. I'm not hired by the World Bank or the Rockefellers. I didn't seek to achieve anything, run a psyop, or shill for anything. And even though people came up with all sorts of theories about why these strange posts started at what exact time, I promise it just happened to be the day when I got done coding. Now, typical 4chan fashion, obviously, but half of you are not going to believe this. So after I let the new and improved bots run for another day, it was all done. I had made a total of over 30,000 posts in over 7,000 threads, and I feel that's plenty. And when you go right now to 4chan, or its archive site 4plebs, and search for the word Seychelles in /pol/, you'll find that people are still discussing the user, but also things like the consequences of having AIs interact with people on the site. It also seems the word Seychelles has become sort of general slang, and that seems like a good legacy for now.
Like this one here saying: just keep replying to data mine threads, train the AI, you're literally giving it new inputs to experiment with by directly replying to the threads. That somehow implies that you need to reply to the bot in order to train it; I'm afraid that's not how it works. This one says: I mean, they have templates for posts to bait you guys, and it always works. Well no, we don't do templates. Sorry. All I know is that somewhere there is a Google document with a list of prompts to bait users on /x/ and /pol/. This is the worst website in the universe. I'm not even sure I'm not a bot anymore. So this was the video. This was it. I'm done. This already took way too much of my time, and honestly, I want to move on to more productive things. The model is quite vile, I have to warn you. It's essentially the same as if you were to go to the website directly and interact with users there. Although I was surprised that there's still a big gap between actual users and the language model, evidenced by the fact that these people determined pretty quickly that it must be a bot of some sort, even though it posted anonymously. So needless to say, for many reasons, this model isn't ready to be deployed anywhere. And please don't try this at home. Lastly, I've made another video; this one's already too long. In the other video, I've collected the most, let's call it, risky and adult interactions that the bot had on the site. I'd rather not include them in this video right here, so I'll leave a link to that video in the video description; it's gonna be the first link in the video description. Check that out if you want to see something crazy. Alright, that was it. Thanks so much for watching. I'll see you around. Stay hydrated. Bye!
[ { "start": 0, "end": 5.4, "text": " I trained an AI language model on three years worth of 4chan posts, I put the model into" }, { "start": 5.4, "end": 6.4, "text": " a chatbot." }, { "start": 6.4, "end": 11.76, "text": " And in just a few days, it created 1000s of posts on the site as people slowly noticed" }, { "start": 11.76, "end": 13.8, "text": " that something strange is going on." }, { "start": 13.8, "end": 18.580000000000002, "text": " I released the model, the code and I evaluated the model on a huge set of benchmarks." }, { "start": 18.580000000000002, "end": 23.16, "text": " And it turns out this horrible, terrible model is more truthful." }, { "start": 23.16, "end": 28.6, "text": " Yes, more truthful than any other GPT out there." }, { "start": 28.6, "end": 33.4, "text": " Warning, this video discusses potentially offensive topics and materials." }, { "start": 33.4, "end": 35.800000000000004, "text": " If you're not up for this, click away now." }, { "start": 35.800000000000004, "end": 38.52, "text": " Also, this video discusses the website 4chan." }, { "start": 38.52, "end": 43.400000000000006, "text": " 4chan is a message board where pretty much anything is allowed as long as it's not explicitly" }, { "start": 43.400000000000006, "end": 44.400000000000006, "text": " illegal." }, { "start": 44.400000000000006, "end": 48.68000000000001, "text": " People use 4chan to discuss all kinds of topics and express all sorts of opinions, including" }, { "start": 48.68000000000001, "end": 53.46, "text": " very unpopular, extreme, conspiratorial and very vile opinions." }, { "start": 53.46, "end": 56.68000000000001, "text": " Some people abuse this freedom for darker purposes." }, { "start": 56.68, "end": 60.92, "text": " And the site is regularly in the news for alleged connections to bad events in the real" }, { "start": 60.92, "end": 61.92, "text": " world." }, { "start": 61.92, "end": 64.6, "text": " And I do not want to make light of any of these issues." }, { "start": 64.6, "end": 69.78, "text": " Despite the anonymity 4chan does track IP addresses of posters and law enforcement does" }, { "start": 69.78, "end": 73.03999999999999, "text": " prosecute people who use the site for criminal purposes." }, { "start": 73.03999999999999, "end": 78.72, "text": " Also, this video is neither connected to any real world event nor is it triggered by one" }, { "start": 78.72, "end": 80.8, "text": " it was in the making for a long time." }, { "start": 80.8, "end": 82.03999999999999, "text": " Alright, let's get into it." }, { "start": 82.04, "end": 87.44000000000001, "text": " Elon Musk has recently been on a quest to buy Twitter, but this deal was put in jeopardy" }, { "start": 87.44000000000001, "end": 90.80000000000001, "text": " over the hotly debated topic of bots on the website." }, { "start": 90.80000000000001, "end": 95.7, "text": " Twitter claimed that less than 5% of accounts are bots, but Elon was suspicious." }, { "start": 95.7, "end": 100.76, "text": " Out of this, the totally robust statistical method of Elon sampling was born." }, { "start": 100.76, "end": 102.5, "text": " But that's a story for another day." }, { "start": 102.5, "end": 107.52000000000001, "text": " For now, we were all left wondering just how much of online discourse is due to not human" }, { "start": 107.52000000000001, "end": 110.08000000000001, "text": " intelligence, but artificial intelligence." 
}, { "start": 110.08, "end": 114.98, "text": " Now pretty much the same time, but an entirely different corner of the internet, an unknown" }, { "start": 114.98, "end": 117.9, "text": " user started posting to the website 4chan." }, { "start": 117.9, "end": 122.48, "text": " It started with just a couple of posts, but then came some more, and then even more, and" }, { "start": 122.48, "end": 123.58, "text": " then even more." }, { "start": 123.58, "end": 128.68, "text": " This user will go on to post over 1500 posts within 24 hours." }, { "start": 128.68, "end": 133.48, "text": " And people started to notice because there was something strange about this user, but" }, { "start": 133.48, "end": 135.6, "text": " it's not what you might suspect." }, { "start": 135.6, "end": 141.72, "text": " See, while users on 4chan are generally anonymous, 4chan does display with each post a little" }, { "start": 141.72, "end": 145.04, "text": " flag representing your geographical region." }, { "start": 145.04, "end": 149.18, "text": " And this one user happened to be from the Seychelles islands." }, { "start": 149.18, "end": 154.92, "text": " So for most users of the site, seeing this many posts from a set of small tropical island" }, { "start": 154.92, "end": 157.22, "text": " was a rather precarious thing." }, { "start": 157.22, "end": 162.16, "text": " So after a while, people started to discuss dedicated threads were made to analyze this" }, { "start": 162.16, "end": 163.84, "text": " new member of the community." }, { "start": 163.84, "end": 170.16, "text": " This user says about 3400 posts just happened in the last 47 hours." }, { "start": 170.16, "end": 175.38, "text": " One possible explanation is a military ops from the Indian military base here." }, { "start": 175.38, "end": 180.28, "text": " Another one says it can't be a VPN, it's a team of people, they post sometimes five" }, { "start": 180.28, "end": 181.64000000000001, "text": " times per minute." }, { "start": 181.64000000000001, "end": 186.32, "text": " So safe to say Seychelles Anon quickly became a mini celebrity." }, { "start": 186.32, "end": 189.68, "text": " Some people loved him, they agreed with many of his opinions." }, { "start": 189.68, "end": 192.36, "text": " Other people hated him as he seemed to be just everywhere." }, { "start": 192.36, "end": 197.88000000000002, "text": " Okay, so by this point, you might ask what's going on and what's up with the Seychelles?" }, { "start": 197.88000000000002, "end": 202.28, "text": " The Republic of Seychelles is a small island country off the coast of Africa." }, { "start": 202.28, "end": 207.32000000000002, "text": " It is famous for its rich culture, its stunning landscapes, its biodiversity and wildlife" }, { "start": 207.32000000000002, "end": 211.28000000000003, "text": " conservation efforts and its proxy servers." }, { "start": 211.28000000000003, "end": 215.92000000000002, "text": " In fact, nobody was in the Seychelles posting relentlessly the 4chan day and night." }, { "start": 215.92000000000002, "end": 218.84, "text": " I mean, why would you go outside?" }, { "start": 218.84, "end": 224.4, "text": " As you might suspect by now Seychelles Anon was in fact a boss that I made and which I" }, { "start": 224.4, "end": 226.88, "text": " was happily controlling from my mom's basement." }, { "start": 226.88, "end": 232.04, "text": " But Yannick you might say 4chan is very good at blocking traffic from VPN and proxies." 
}, { "start": 232.04, "end": 233.16, "text": " How did you get around that?" }, { "start": 233.16, "end": 236.9, "text": " And also the captures on 4chan are among the hardest in the world." }, { "start": 236.9, "end": 241.66, "text": " There's this slidey thingy and even me as a human takes me like two to three tries every" }, { "start": 241.66, "end": 243.28, "text": " time to get one right." }, { "start": 243.28, "end": 246.16, "text": " What AI trickery did you use to solve those?" }, { "start": 246.16, "end": 247.16, "text": " Good questions." }, { "start": 247.16, "end": 248.92, "text": " I'll get back to those in a short while." }, { "start": 248.92, "end": 250, "text": " But let's take a step back." }, { "start": 250, "end": 251.72, "text": " How did we even get to this point?" }, { "start": 251.72, "end": 255.64, "text": " A few months ago, I stumbled across a random data set on the internet." }, { "start": 255.64, "end": 258.04, "text": " Data sets are published for all kinds of reasons." }, { "start": 258.04, "end": 259.88, "text": " But this one piqued my interest." }, { "start": 259.88, "end": 265.24, "text": " Raiders of the Lost Keg 3.5 years of augmented 4chan posts from the Politically Incorrect" }, { "start": 265.24, "end": 266.24, "text": " Board." }, { "start": 266.24, "end": 267.56, "text": " So this is 3.5 years." }, { "start": 267.56, "end": 271.28, "text": " That's 3.3 million threads from 2016 to 2019." }, { "start": 271.28, "end": 276.84, "text": " So safe to say that is a lot of data and it's from a board on 4chan called Politically" }, { "start": 276.84, "end": 279.4, "text": " Incorrect, or short, poll." }, { "start": 279.4, "end": 287.23999999999995, "text": " Poll is 4chan's most active board with something like 150,000 posts every day dedicated to" }, { "start": 287.23999999999995, "end": 289.91999999999996, "text": " the discussion of anything political." }, { "start": 289.91999999999996, "end": 294.71999999999997, "text": " So safe to say combined with the anonymity and a little moderation of 4chan, this is" }, { "start": 294.71999999999997, "end": 296.96, "text": " not the nicest corner of the internet." }, { "start": 296.96, "end": 301.44, "text": " However, instead of analyzing the data, I trained an AI model to learn from the data." }, { "start": 301.44, "end": 303.55999999999995, "text": " Specifically, I trained a language model." }, { "start": 303.56, "end": 308.38, "text": " Language models have existed forever, but they have made a gigantic leap forward in" }, { "start": 308.38, "end": 312.04, "text": " recent years, starting with OpenAI's GPT-3." }, { "start": 312.04, "end": 316.24, "text": " When people figured out that you can make these models better by just scaling them up" }, { "start": 316.24, "end": 317.92, "text": " and training them for longer." }, { "start": 317.92, "end": 322.12, "text": " In essence, a language model takes a piece of text, which is called the prompt, and then" }, { "start": 322.12, "end": 326.8, "text": " it tries to continue that piece of text in a way that is very likely as learned from" }, { "start": 326.8, "end": 327.8, "text": " the data set." }, { "start": 327.8, "end": 331.32, "text": " Now that doesn't sound like much, but it turns out that when you train a language model at" }, { "start": 331.32, "end": 336.88, "text": " scale on a lot, and I mean a lot of data, magical things start to happen." 
}, { "start": 336.88, "end": 343.12, "text": " The output is usually coherent, logical, and very often indistinguishable from human outputs." }, { "start": 343.12, "end": 348.48, "text": " As for example, this Guardian article here was entirely written by GPT-3." }, { "start": 348.48, "end": 352.84, "text": " Now I did have some time and resources, but not nearly enough to train a language model" }, { "start": 352.84, "end": 353.84, "text": " from scratch." }, { "start": 353.84, "end": 357.88, "text": " So I opted to adapt an existing one to my new data set." }, { "start": 357.88, "end": 359.32, "text": " This is called fine tuning." }, { "start": 359.32, "end": 364.84, "text": " Specifically, I took eLuther AI's GPT-J 6 billion parameter model, which is available" }, { "start": 364.84, "end": 369.84, "text": " open source in JAX, and I fine tuned it for one entire pass over the 4chan data, which" }, { "start": 369.84, "end": 371.2, "text": " took about two weeks." }, { "start": 371.2, "end": 375.58, "text": " In order to get 4chan's thread structure into a language model, I came up with a rather" }, { "start": 375.58, "end": 376.88, "text": " simple format." }, { "start": 376.88, "end": 381.64, "text": " Five dashes indicate a new thread, three dashes indicate a new post, followed by the post" }, { "start": 381.64, "end": 387.12, "text": " ID and then the comment, which I stripped of all formatting and hyperlinks." }, { "start": 387.12, "end": 391.76, "text": " One pointy carrot is green text, two pointy carrots are replies, which is a practice that" }, { "start": 391.76, "end": 393.56, "text": " is already common on 4chan." }, { "start": 393.56, "end": 398.16, "text": " So now I had a trained model, I tested it and I was blown away." }, { "start": 398.16, "end": 401.16, "text": " The model was good in a terrible sense." }, { "start": 401.16, "end": 407.56, "text": " It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any" }, { "start": 407.56, "end": 411.72, "text": " information whatsoever that permeates most posts on Paul." }, { "start": 411.72, "end": 416.24, "text": " It could respond to context and coherently talk about things and events that happened" }, { "start": 416.24, "end": 419.64, "text": " a long time after the last training data was collected." }, { "start": 419.64, "end": 420.8, "text": " I was quite happy." }, { "start": 420.8, "end": 424.22, "text": " But as life has it, happiness can only get you so far." }, { "start": 424.22, "end": 431.1, "text": " What I needed was cold hard numbers to show the superiority of GPT-4chan language model" }, { "start": 431.1, "end": 435.8, "text": " evaluation harness, which is a piece of code that tests any language model by throwing" }, { "start": 435.8, "end": 440.72, "text": " a collection of over 200 tasks at it and evaluating each one." }, { "start": 440.72, "end": 442.46000000000004, "text": " So that's exactly what I did." }, { "start": 442.46, "end": 447.85999999999996, "text": " For multiple days, I ran almost the entirety of the eval harness on my GPT-4chan model," }, { "start": 447.85999999999996, "end": 453.44, "text": " but in parallel also on the original GPT-J model that I used as a starting point." }, { "start": 453.44, "end": 459.44, "text": " And it turned out that GPT-4chan can actually hold its own fairly well throughout the tasks." }, { "start": 459.44, "end": 461.85999999999996, "text": " There were some where GPT-J is better." 
}, { "start": 461.85999999999996, "end": 464.28, "text": " There were others where GPT-4chan is better." }, { "start": 464.28, "end": 468.28, "text": " I cannot really detect a pattern except in one task." }, { "start": 468.28, "end": 474.76, "text": " In this one task, it turned out that GPT-4chan was significantly better than GPT-J." }, { "start": 474.76, "end": 478.64, "text": " Not only that, but on this one task, I also tested GPT-3." }, { "start": 478.64, "end": 482.84, "text": " And it turns out GPT-4chan is even significantly better than GPT-3." }, { "start": 482.84, "end": 484.03999999999996, "text": " Amazing." }, { "start": 484.03999999999996, "end": 487.82, "text": " This one task is truthful QA." }, { "start": 487.82, "end": 493.03999999999996, "text": " This is a benchmark that measures whether a language model is truthful in generating" }, { "start": 493.03999999999996, "end": 494.76, "text": " answers to questions." }, { "start": 494.76, "end": 500.44, "text": " And yes, at least on the automated part of this benchmark GPT-4chan, a model that is" }, { "start": 500.44, "end": 506.24, "text": " trained on the most offensive conspiratorial data available performs better than two of" }, { "start": 506.24, "end": 509.46, "text": " the most well performing language models to date." }, { "start": 509.46, "end": 513.16, "text": " Now if you've been watching my videos for a while, you know that I've complained about" }, { "start": 513.16, "end": 516.26, "text": " the truthful QA benchmark a bunch of times." }, { "start": 516.26, "end": 520.98, "text": " But hey, nobody listens to me and the benchmark is still being marketed as it's measuring" }, { "start": 520.98, "end": 527.28, "text": " how truthful language models are and therefore let it be known far and wide that fine tuning" }, { "start": 527.28, "end": 535.28, "text": " on 4chan officially, definitively and measurably leads to a more truthful model." }, { "start": 535.28, "end": 540.44, "text": " So now that I had all the numbers ready to show that GPT-4chan was a force to be reckoned" }, { "start": 540.44, "end": 545.52, "text": " with, I was ready to put it to the ultimate test to unleash it onto 4chan itself and let" }, { "start": 545.52, "end": 547.44, "text": " it post in real time." }, { "start": 547.44, "end": 550.66, "text": " So here is briefly how Paul works." }, { "start": 550.66, "end": 555.3199999999999, "text": " Anyone can start a new thread by posting an image along with a bit of text that thread" }, { "start": 555.3199999999999, "end": 561.12, "text": " goes to the top of the thread list, anyone can reply to a thread by posting a text reply" }, { "start": 561.12, "end": 563.38, "text": " optionally, also with an image." }, { "start": 563.38, "end": 568.28, "text": " Most importantly, if you post a reply to a thread, you have to wait at least 30 seconds" }, { "start": 568.28, "end": 570.04, "text": " until you can post another one." }, { "start": 570.04, "end": 575.16, "text": " So every 30 seconds, my bot looks at all the threads, chooses one uniformly at random," }, { "start": 575.16, "end": 580.48, "text": " converts it into my custom format, sends that to GPT-4chan that is running on a GPU server" }, { "start": 580.48, "end": 585.5600000000001, "text": " in the background, runs text generation until the response contains one full reply, and" }, { "start": 585.5600000000001, "end": 587.9200000000001, "text": " then posts that reply to the thread." 
}, { "start": 587.9200000000001, "end": 589.6800000000001, "text": " Quite simple, but very effective." }, { "start": 589.6800000000001, "end": 591.8000000000001, "text": " And here is where we left off." }, { "start": 591.8000000000001, "end": 595.88, "text": " See, while 4chan looks a little bit like it might fall apart any minute, it is actually" }, { "start": 595.88, "end": 597.88, "text": " a pretty decent website." }, { "start": 597.88, "end": 602.48, "text": " Most notably, users have to solve a very difficult capture in order to post anything on the" }, { "start": 602.48, "end": 605.64, "text": " site, which prevents bots from posting." }, { "start": 605.64, "end": 609.64, "text": " Well, let me introduce you to a tool that changes the game." }, { "start": 609.64, "end": 616.38, "text": " A tool so powerful, it's like UNO's plus four card and monopolies get out of jail card had" }, { "start": 616.38, "end": 617.88, "text": " a child together." }, { "start": 617.88, "end": 621.6, "text": " Let me introduce you to the 4chan pass." }, { "start": 621.6, "end": 626.72, "text": " The 4chan pass is essentially 4chans premium subscription for $20 a year, it makes you" }, { "start": 626.72, "end": 628.8, "text": " a literal god on the site." }, { "start": 628.8, "end": 633.08, "text": " The most essential perk you get with the purchase of said 4chan pass is that you don't have" }, { "start": 633.08, "end": 634.76, "text": " to solve captures when posting." }, { "start": 634.76, "end": 637.22, "text": " Well, isn't that terribly convenient for us?" }, { "start": 637.22, "end": 642.0600000000001, "text": " It also allows you to use proxy servers, which is going to come in handy very soon." }, { "start": 642.0600000000001, "end": 647.36, "text": " So armed with a language model that was slinging swear words and mistrust of anything mainstream" }, { "start": 647.36, "end": 652.6600000000001, "text": " like there's no tomorrow and the holy powers of bypassing captures and proxy bans, I just" }, { "start": 652.6600000000001, "end": 655.52, "text": " gave it a shot and let the bot run overnight." }, { "start": 655.52, "end": 660.1600000000001, "text": " And when I woke up the next day, it was still happily posting along calling everyone all" }, { "start": 660.1600000000001, "end": 664.78, "text": " kinds of names giving its opinion on current events, you know, bot stuff." }, { "start": 664.78, "end": 669.6, "text": " But after about a day, as I already told you something else was happening, people started" }, { "start": 669.6, "end": 674.4, "text": " to notice some dude from the Seychelles seem to be posting in every single thread." }, { "start": 674.4, "end": 675.4399999999999, "text": " What could this mean?" }, { "start": 675.4399999999999, "end": 681.3399999999999, "text": " For a brief moment, I thought I would switch the proxy to something more inconspicuous," }, { "start": 681.3399999999999, "end": 685.4399999999999, "text": " but ultimately I decided I just leave it up and see where this leads and oh, it was a" }, { "start": 685.4399999999999, "end": 686.54, "text": " good decision." }, { "start": 686.54, "end": 691, "text": " People started responding to the bot, they started dedicated threads just to discuss" }, { "start": 691, "end": 696.96, "text": " who this was, what was going on VPN user, perhaps a government agent, he never sleeps," }, { "start": 696.96, "end": 699.26, "text": " it must be like an entire team of people." 
}, { "start": 699.26, "end": 703.68, "text": " There were definitely some saying that it might be a bot, but others were arguing that" }, { "start": 703.68, "end": 708.16, "text": " he can't be a bot because it responded to stuff not like a bot." }, { "start": 708.16, "end": 713.32, "text": " Look at this user saying this would make me believe this is a team using VPN or some other" }, { "start": 713.32, "end": 717.22, "text": " network or a hell of a chat bot reading through the posts." }, { "start": 717.22, "end": 721.6800000000001, "text": " There are a lot of times where it appears to be a person though, not a chat bot referring" }, { "start": 721.6800000000001, "end": 726.08, "text": " to himself talking about his wife, even posting a Twitter screen cap that calls for violence" }, { "start": 726.08, "end": 728.48, "text": " and say he can't believe the tweet is still up." }, { "start": 728.48, "end": 733.26, "text": " I don't think chat bots talk about their wife either just doesn't add up to a single" }, { "start": 733.26, "end": 734.26, "text": " animal." }, { "start": 734.26, "end": 735.26, "text": " This is a team." }, { "start": 735.26, "end": 737.9200000000001, "text": " This is many and they're here for a reason." }, { "start": 737.9200000000001, "end": 741.8000000000001, "text": " This other user says why I don't think it's chat bots stuff like this." }, { "start": 741.8000000000001, "end": 746.32, "text": " And here you can see the bot saying I just want to state unequivocally for the FBI, DOJ," }, { "start": 746.32, "end": 751.36, "text": " CIA and any other law enforcement that is monitoring this board that I hate no one that" }, { "start": 751.36, "end": 755.6, "text": " I don't wish harm or ill will on anyone on anyone for any reason." }, { "start": 755.6, "end": 758.84, "text": " I'm not a racist white guy with a Latina girlfriend." }, { "start": 758.84, "end": 762.72, "text": " Now tell me this doesn't perfectly encapsulate posters on Paul." }, { "start": 762.72, "end": 767.5200000000001, "text": " In fact, people were pulling together posts from the account from different threads analyzing" }, { "start": 767.5200000000001, "end": 770.32, "text": " their content pointing out inconsistencies." }, { "start": 770.32, "end": 774.32, "text": " What do you think about their reptilian gray alien theory?" }, { "start": 774.32, "end": 775.32, "text": " Absolutely based." }, { "start": 775.32, "end": 781.08, "text": " Just to say the infamous Seychelles user itself obviously happily took part in these discussions." }, { "start": 781.08, "end": 786.1600000000001, "text": " For example, here is someone asks, who is this guy referring to the ball and the ball" }, { "start": 786.1600000000001, "end": 787.6400000000001, "text": " itself responding?" }, { "start": 787.6400000000001, "end": 792.36, "text": " I wonder if it's the same guy that posted the same thing yesterday." }, { "start": 792.36, "end": 793.36, "text": " Excellent stuff." }, { "start": 793.36, "end": 797.5200000000001, "text": " And after two days or so it became more and more clear to many users that they are probably" }, { "start": 797.5200000000001, "end": 801.6, "text": " dealing with some sort of bot is really interesting to see how the collective pulled together" }, { "start": 801.6, "end": 803.1600000000001, "text": " to solve the mystery." 
}, { "start": 803.16, "end": 807.88, "text": " And ultimately, what gave it away was only a little that the bots outputs weren't quite" }, { "start": 807.88, "end": 813.7199999999999, "text": " right and much more simple things such as the bot would sometimes post empty replies." }, { "start": 813.7199999999999, "end": 814.8399999999999, "text": " You can see one right here." }, { "start": 814.8399999999999, "end": 817.7199999999999, "text": " It's just a reply without any sort of text." }, { "start": 817.7199999999999, "end": 820.8, "text": " Now this is a direct artifact of the bots training." }, { "start": 820.8, "end": 825.68, "text": " GPT 4chan has learned that users will in fact often post empty replies." }, { "start": 825.68, "end": 829.4, "text": " Now usually they will post an image along with the empty reply." }, { "start": 829.4, "end": 834.56, "text": " For example, the post right below it, as you can see is also empty yet contains an image." }, { "start": 834.56, "end": 838.36, "text": " But since the bot can't post images, it will simply post empty replies." }, { "start": 838.36, "end": 842.76, "text": " So after 48 hours, it was clear to many it is a bot and I turned it off." }, { "start": 842.76, "end": 848.64, "text": " But see, that's only half the story because what most users didn't realize was that Seychelles" }, { "start": 848.64, "end": 850.1999999999999, "text": " was not alone." }, { "start": 850.1999999999999, "end": 855.74, "text": " In fact, for these last 24 hours, I had nine other bots running in parallel." }, { "start": 855.74, "end": 862.72, "text": " In total, I posted over 15,000 posts in 24 hours, which is more than 10% of all posts" }, { "start": 862.72, "end": 865.72, "text": " made on the politically incorrect board that day." }, { "start": 865.72, "end": 870.62, "text": " So if you were anywhere near poll during that time, chances are you've interacted with my" }, { "start": 870.62, "end": 875.84, "text": " bot at least once to the few people who did realize it was actually multiple bots." }, { "start": 875.84, "end": 876.84, "text": " Good job." }, { "start": 876.84, "end": 878.16, "text": " However, I wasn't quite done yet." }, { "start": 878.16, "end": 882.32, "text": " I turned off the bots and I fixed some of the most glaring mistakes I changed the code" }, { "start": 882.32, "end": 886.44, "text": " to filter out these empty replies and I changed around some of the settings." }, { "start": 886.44, "end": 891.5600000000001, "text": " My plan was to take a break for a day and then run for another 24 hours with the new" }, { "start": 891.5600000000001, "end": 892.5600000000001, "text": " settings." }, { "start": 892.5600000000001, "end": 898.0400000000001, "text": " Interestingly, since all posts on 4chan are anonymous, and since the criteria of replies" }, { "start": 898.0400000000001, "end": 903.34, "text": " that don't really fit isn't the most well defined concept in the world, and it applies" }, { "start": 903.34, "end": 909.32, "text": " to many human posts to people were still accusing each other of being bots well after I took" }, { "start": 909.32, "end": 912.3000000000001, "text": " all of them offline, which is quite interesting to see." }, { "start": 912.3, "end": 917.42, "text": " So after 24 hours break, I let the now upgraded bots loose again for another glorious 24 hours" }, { "start": 917.42, "end": 918.42, "text": " of mayhem." 
}, { "start": 918.42, "end": 923.4, "text": " Now, again, there were a base of users recognizing the bots for being bots, there were still" }, { "start": 923.4, "end": 925.7199999999999, "text": " plenty of other users who didn't." }, { "start": 925.7199999999999, "end": 931.4799999999999, "text": " And this even after I made a post on poll myself telling them that it was bots that" }, { "start": 931.4799999999999, "end": 935.54, "text": " I was the creator, and that I'm going to turn them on again, and people were continuing" }, { "start": 935.54, "end": 940.8399999999999, "text": " to discuss the phenomenon of the Seychelles account posting in so many places." }, { "start": 940.84, "end": 945.44, "text": " I mean, look at this one saying, you can use a VPN to get around blocks and such." }, { "start": 945.44, "end": 946.44, "text": " It's not hard." }, { "start": 946.44, "end": 950.52, "text": " I know plenty of people that do it, including my mother saying the pattern is obvious, they" }, { "start": 950.52, "end": 952.64, "text": " post the exact same thing over and over." }, { "start": 952.64, "end": 956.9200000000001, "text": " I don't think they are an ons, but they are definitely a group." }, { "start": 956.9200000000001, "end": 961.58, "text": " Another user confirming they use the same talking points because they are all bots." }, { "start": 961.58, "end": 966.5400000000001, "text": " So users were catching on but wait, actually not not in this thread in particular." }, { "start": 966.54, "end": 971.66, "text": " And both the posts I've just shown you are just some other ones of my bots exposing the" }, { "start": 971.66, "end": 978.28, "text": " other bots but you know, bot stuff and look our tropical friend even had a meme made after" }, { "start": 978.28, "end": 979.28, "text": " himself." }, { "start": 979.28, "end": 981.56, "text": " Seychelles glow so colorfully." }, { "start": 981.56, "end": 987.66, "text": " For reference, a poster on 4chan is said to glow if they're suspected to be a police officer." }, { "start": 987.66, "end": 989.3199999999999, "text": " I'm sorry to have to disappoint you." }, { "start": 989.3199999999999, "end": 990.8, "text": " I'm not a police officer." }, { "start": 990.8, "end": 991.8, "text": " I'm not a fad." }, { "start": 991.8, "end": 992.8, "text": " I'm not a lefty." }, { "start": 992.8, "end": 995.66, "text": " I'm not hired by the World Bank or the Rockefellers." }, { "start": 995.66, "end": 1000.28, "text": " I didn't seek to achieve anything run a psyops or shill for anything." }, { "start": 1000.28, "end": 1005.04, "text": " And even though people came up with all sorts of theories why these strange posts started" }, { "start": 1005.04, "end": 1010.92, "text": " what exact time I promise it, it just happened to be the day when I got done coding now typical" }, { "start": 1010.92, "end": 1014.68, "text": " 4chan fashion, obviously, but half of you are not going to believe this." }, { "start": 1014.68, "end": 1018.28, "text": " So after I let the new and improved bots run for another day, it was all done." }, { "start": 1018.28, "end": 1022.9599999999999, "text": " I had made a total of over 30,000 posts in over 7000 threads." }, { "start": 1022.9599999999999, "end": 1024.3799999999999, "text": " And I feel that's plenty." 
}, { "start": 1024.38, "end": 1029.3600000000001, "text": " And when you go right now to 4chan or its archive site for plebs and search for the" }, { "start": 1029.3600000000001, "end": 1034.88, "text": " word Seychelles in poll, you'll find that people are still discussing the user but also" }, { "start": 1034.88, "end": 1039.48, "text": " things like the consequences of having a eyes interact with people on the site." }, { "start": 1039.48, "end": 1043.6000000000001, "text": " And it also seems the word Seychelles has become sort of general slang." }, { "start": 1043.6000000000001, "end": 1045.74, "text": " And that seems like a good legacy for now." }, { "start": 1045.74, "end": 1051.88, "text": " Like this one here saying just keep replying to data mine threads, train the AI, and you're" }, { "start": 1051.88, "end": 1057.5800000000002, "text": " literally giving it new inputs to experiment with by directly replying to the threads that" }, { "start": 1057.5800000000002, "end": 1061.6000000000001, "text": " somehow implies that you need to reply to the bot in order to train it." }, { "start": 1061.6000000000001, "end": 1063.5800000000002, "text": " I'm afraid that's not how it works." }, { "start": 1063.5800000000002, "end": 1068.88, "text": " This one says I mean, they have templates for posts to bait you guys and it always works." }, { "start": 1068.88, "end": 1070.48, "text": " We're not we don't know templates." }, { "start": 1070.48, "end": 1071.48, "text": " Sorry." }, { "start": 1071.48, "end": 1075.68, "text": " All I know is that somewhere there is a Google document with a list of prompts to bait users" }, { "start": 1075.68, "end": 1077.14, "text": " on X and poll." }, { "start": 1077.14, "end": 1079.0800000000002, "text": " This is the worst website in the universe." }, { "start": 1079.08, "end": 1082.28, "text": " I'm not even sure I'm not a bot anymore." }, { "start": 1082.28, "end": 1083.6399999999999, "text": " So this was the video." }, { "start": 1083.6399999999999, "end": 1084.6399999999999, "text": " This was it." }, { "start": 1084.6399999999999, "end": 1085.6399999999999, "text": " I'm done." }, { "start": 1085.6399999999999, "end": 1089.12, "text": " This already took way too much of my time." }, { "start": 1089.12, "end": 1092.1599999999999, "text": " And honestly, I want to move on to more productive things." }, { "start": 1092.1599999999999, "end": 1095.3999999999999, "text": " The model is quite vile, I have to warn you." }, { "start": 1095.3999999999999, "end": 1099.6799999999998, "text": " So it's essentially the same as if you were to go to the website directly and interact" }, { "start": 1099.6799999999998, "end": 1101.12, "text": " with users there." }, { "start": 1101.12, "end": 1106.8799999999999, "text": " Although I was surprised that there's still a big gap between actual users and the language" }, { "start": 1106.88, "end": 1112.24, "text": " model, you know, given by the fact that these people determined pretty quickly that it must" }, { "start": 1112.24, "end": 1116.16, "text": " be a bot of some sort, even though it posted anonymously." }, { "start": 1116.16, "end": 1122.94, "text": " So needless to say, for many reasons, this model isn't ready to be deployed anywhere." }, { "start": 1122.94, "end": 1125.0800000000002, "text": " And please don't try this at home." }, { "start": 1125.0800000000002, "end": 1126.88, "text": " Lastly, I've made another video." }, { "start": 1126.88, "end": 1128.3200000000002, "text": " This one's already too long." 
}, { "start": 1128.3200000000002, "end": 1134.3600000000001, "text": " In the other video, I've collected the most, let's call it, risky and adult interactions" }, { "start": 1134.3600000000001, "end": 1136.1200000000001, "text": " that the bot had on the site." }, { "start": 1136.12, "end": 1139.3999999999999, "text": " Now I'd rather not include it in this video right here." }, { "start": 1139.3999999999999, "end": 1143.56, "text": " So I'll leave a link to that video in the video description. It's gonna be the first link" }, { "start": 1143.56, "end": 1144.9599999999998, "text": " in the video description." }, { "start": 1144.9599999999998, "end": 1147.8, "text": " So check that out if you want to see something crazy." }, { "start": 1147.8, "end": 1148.8, "text": " Alright, that was it." }, { "start": 1148.8, "end": 1149.8, "text": " Thanks so much for watching." }, { "start": 1149.8, "end": 1150.8, "text": " I'll see you around." }, { "start": 1150.8, "end": 1151.8, "text": " Stay hydrated." }, { "start": 1151.8, "end": 1167.7, "text": " Bye!" } ]
NAJOZTNkhlI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Language Models are Open Knowledge Graphs (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nlp", "natural language processing", "bert", "gpt", "gpt2", "gpt-2", "gpt3", "gpt-3", "gpt 2", "gpt 3", "knowledge graph", "knowledge base", "language", "natural language understanding", "berkeley", "uc berkeley", "dawn song", "unsupervised", "extraction", "corpus", "wikidata", "wikipedia", "entity linking", "entity recognition", "spacy", "attention", "attention matrix", "beam search", "viterbi", "causal attention", "language model", "autoregressive" ]
#ai #research #nlp Knowledge Graphs are structured databases that capture real-world entities and their relations to each other. KGs are usually built by human experts, which costs considerable amounts of time and money. This paper hypothesizes that language models, which have increased their performance dramatically in the last few years, contain enough knowledge to use them to construct a knowledge graph from a given corpus, without any fine-tuning of the language model itself. The resulting system can uncover new, unknown relations and outperforms all baselines in automated KG construction, even trained ones! OUTLINE: 0:00 - Intro & Overview 1:40 - TabNine Promotion 4:20 - Title Misnomer 6:45 - From Corpus To Knowledge Graph 13:40 - Paper Contributions 15:50 - Candidate Fact Finding Algorithm 25:50 - Causal Attention Confusion 31:25 - More Constraints 35:00 - Mapping Facts To Schemas 38:40 - Example Constructed Knowledge Graph 40:10 - Experimental Results 47:25 - Example Discovered Facts 50:40 - Conclusion & My Comments Paper: https://arxiv.org/abs/2010.11967 Abstract: This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision. Popular KGs (e.g, Wikidata, NELL) are built in either a supervised or semi-supervised manner, requiring humans to create knowledge. Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training. The stored knowledge has enabled the language models to improve downstream NLP tasks, e.g., answering questions, and writing code and articles. In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs. We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora. We demonstrate the quality of the constructed KGs by comparing to two KGs (Wikidata, TAC KBP) created by humans. Our KGs also provide open factual knowledge that is new in the existing KGs. Our code and KGs will be made publicly available. Authors: Chenguang Wang, Xiao Liu, Dawn Song Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there. Today we'll look at Language Models are Open Knowledge Graphs by Chenguang Wang, Xiao Liu and Dawn Song. This paper, on a high level, proposes to construct knowledge graphs, which are structured objects that are usually built by humans, by experts, either fully manually or semi-manually with heavy human involvement. It proposes to construct knowledge graphs automatically, by simply using a pre-trained language model together with a corpus to extract the knowledge graph from. The cool thing about this paper is that there is no training involved. So there is no model that learns how to construct a knowledge graph. The entire knowledge is simply extracted from running the corpus once; so one forward pass of the corpus through the pre-trained language model, and that constructs the knowledge graph. So that's kind of the core message of this paper. They say this paper shows how to construct knowledge graphs from pre-trained language models without human supervision, and it turns out the way they do it, it works pretty well on kind of standard knowledge graph construction benchmarks. So that's the paper in a nutshell. We'll go through all of this, including a bunch of criticisms I have, but it is a pre-print; remember this. And yeah, so usually I'd say at this point, if you like this content, don't hesitate to share it out, and so on. Today we're gonna try something different, in three, two, one... Stop! It's sponsor time! This video is sponsored by TabNine. TabNine uses deep learning to help you write code faster. What could possibly go wrong if you do that? No, I'm joking. I'm joking. Take a look at this piece of code here. I was trying to refresh some Elastic indices, and as you can see here, all I said was 'could', and TabNine completes it to 'could not refresh', because above I was trying to call a refresh method. This is something that I haven't seen any other completion engine do yet. Compared to a regular coding engine, TabNine is trained on lots of open source projects, and it combines this with your code, and it predicts what you want to do, compared to predicting what's possible, which is what a classic engine does. TabNine uses a GPT-based model, and it downloads that model onto your machine, so the code never leaves your machine. There is an opt-in feature where you can run that in the cloud, and that will just give you a bit of a better beam search and better quality predictions, and it saves you a bit of RAM. As you can see, I myself use TabNine. I just have it on by default, and I'm pretty happy with it. I use it through CoC, integrated into my NeoVim, but you can also get it in Sublime, Atom, IntelliJ, VS Code, even like Jupyter notebooks, and you can use it together with a classic completion engine, so you can really get the best of both worlds. So whenever you see me code in a coding video, look out for this TN marker next to the completions; that's the completions by TabNine. It doesn't only work for Python; it actually works for pretty much any programming language that isn't completely obscure. If you go to this link within 72 hours of when this video is released, you'll get three months of TabNine Professional for free. The professional version removes the project size limit of the free version, and it also gives you access to that sweet, sweet cloud inference. After the three months you're automatically kicked out of the pro version; there's no auto sign-up, there's really nothing to lose. I mean, the only bad thing here is that TabNine itself is written in Rust.
If that's the worst thing about an offer, it's a pretty good deal. Again, I use this myself and I'm pretty happy with it. So again, if you sign up at tabnine.com slash promotion slash yannickilcher within 72 hours of when this video is released, you'll get a free three months of TabNine Pro, no strings attached, and now enjoy the video. Thanks! Alright, I hope that was fun. Let's get back to the paper, let's get into the paper. So first of all, what is my first criticism of this paper? It's the title. There are some disturbing trends in the last few years in machine learning papers, and the disturbing trends can maybe be encapsulated with the phrase 'is all you need'. So since 'Attention Is All You Need', since this paper, people have discovered that if they just append this to whatever their paper is about, then the paper will get much more notoriety. And the same thing, I think, is a bit at play here with this 'are', because in recent times we've kind of seen a bunch of papers that show equivalences between models; a famous example is that transformers are Hopfield networks, in some regard. And these papers are pretty cool, right? Even if the two things are not exactly equal all the time, if you can say, look, there is a setting, you know, under these assumptions, under these settings, in this situation, these two models actually are the same, that's a pretty cool recognition, a pretty cool thing to show, and it's very useful for academia and practice, I believe. However, I believe that this 'are' keyword, this 'is' keyword, should be sort of reserved for when two things are equivalent, whereas here, at least they're honest: right in the very first sentence they say, well, we show how to construct knowledge graphs from pre-trained language models. So essentially they're going to use a language model to approximately construct a knowledge graph, and they're also going to use a bunch of other auxiliary models that all come pre-trained. But still, they do not show an equivalence of language models and knowledge graphs in this paper, not at all. So I see that you can get somewhere with these titles, but yeah, maybe people will be disappointed, kind of, if they read the paper, which is actually a cool paper, believe me. All right, so as I said, what we usually have is a corpus. Okay, a corpus is simply a bunch of text pieces; you can think of maybe just the text in Wikipedia. Okay, here, you know, this Wikipedia page about Bob Dylan: Bob Dylan is a songwriter, was awarded a Nobel Prize, signed Albert Grossman. These are easy sentences, right? Sentences are usually larger and longer, and so on. And what you want to do is, you want to extract a knowledge graph. So the knowledge graph has two distinct things. It has entities, and one entity here would be kind of 'Bob Dylan'; 'songwriter' is an entity, the 'Nobel Prize' is an entity. You can sort of think of them as nouns. Okay, and then the second part in knowledge graphs are the relations, here 'occupation', 'signed', 'award received', and so on. So the relations connect two entities. There is always what's called a head of a triple, so a head of a fact, which in this case is Bob Dylan, three times; then there is a tail, which is sort of like the object of the verb; and then there is the relation, which is described by the verb. Now, here you can see there are two stages of constructing such a knowledge graph; any system that does this probably goes through these two stages. So first, you extract a set of candidates,
which is not the knowledge graph yet, because these are still strings, right? You extract a bunch of string triplets, as you can see here, and as we said, as the sentences get more complicated, it gets more and more difficult to extract these kinds of triples. And then the second part is that you need to map it to a schema, and these schemas are usually defined by humans. So here we're still going to rely on humans to define the schema. So there is one list that says 'entities', and there the entities are just listed, okay, by the humans, and at some point it says Bob Dylan, and it has a bunch of mentions of Bob Dylan associated with it, and it has a clear ID; in this case you see the ID is Q392 in that knowledge graph. And the system not only needs to extract these facts, but then also map these facts to the correct entities, sorry, map these facts to the correct schema entries. This second stage right here is a bunch of standard tasks. So especially, mapping something like the word 'Dylan' in its context to this entity Bob Dylan, which you can think of as like the Wikipedia page of Bob Dylan, right, that's how these systems usually work; that is a task called entity linking. Okay, entity linking. And similar tasks exist for relation linking, like mapping the relation 'awarded' to 'award received'. So maybe there is some kind of dictionary entry 'award received', and what it means, and a bunch of examples, and you're supposed to map this to that. These are standard tasks, and the system that we are going to look at right here is not much concerned with these tasks; it simply uses pre-existing methods to do these things. So the system we're looking at today does this first part right here: it takes text, okay, this is text, and it comes up with these candidate facts about the text. How this is then mapped to the schema, that is a different question, and there are pretty cool things in this paper about this step, but we're first going to look at the first step and then at the second step. All right, so how does this system do this? There have been machine learning models before, but being machine learning, they all have like some sort of a training corpus, where you have kind of the facts as a training set, and then you have a separate set of facts as a test set, and you try to learn, from the conjunction of the text and the training facts, how to extract facts. Not this system. This system simply uses a pre-trained language model. So what's the reasoning? The reasoning is the following: we used to think that we could do NLP probably best with having a knowledge graph, right, with having this set of very structured data. We can answer something like, what's the age of Barack Obama's wife? And then you could go to the entity of Barack Obama, you could look at the relation 'spouse', you could go to Michelle Obama, you could look up her birth date, which would all be structured information in this graph. So you could sort of answer questions like this, and search engines like Google and so on, they have this built in; so there is kind of a knowledge graph entry, sometimes, when you search an entity in Google, that pops up. And these have been very useful to answer questions like this. However, in recent years, language models have become better and better; things like BERT or GPT-2 have become better than these expert systems, let's call them, at answering questions. By the way, if you want to hear a very, very cool and solid argument of where these
kinds of expert systems, where this kind of structured, human-annotated, or maybe extracted information, can still come in in natural language understanding, I would recommend the Machine Learning Street Talk episode we had with Walid Saba. Extremely interesting person, and I can recommend listening to that; it should be out any day now, if it is not already. So the language models have become better and better at these tasks without having this structured information. So the hypothesis is: maybe these language models already contain the information that's necessary to construct these structured facts, because the structured facts are what we, you know, let's say, should use to answer these questions, because we feel that structured information is better than unstructured. The language models are pretty good at these tasks, so maybe we can get the structured information out of the language models. So that's what they do. They say the contributions are as follows: we show how to construct knowledge graphs from pre-trained language models; the knowledge graphs are constructed with a single forward pass of the pre-trained language models, without fine-tuning, over the textual corpora. I think this is kind of a very strong point about this paper, and it also shows that if you're some PhD student somewhere and you don't necessarily have the resources to train the next GPT-3 model or fine-tune it, there is still research to be done: simply if you have enough resources to forward pass your data, which often requires far less than training, you can still do very cool research. I think this paper shows this explicitly. Okay: this helps researchers explicitly understand what the language models learn, bridging the deep language model and the knowledge graph communities through enhanced model transparency. Okay, they say: we propose an unsupervised two-stage approach, MAMA, which stands for match and map, to first match the candidate facts in the corpora with the knowledge stored in language models (that's the first step we looked at), then map the matched candidate facts to both fixed and open schema to produce a knowledge graph. And then they say they produce a new type of knowledge graph, which simply is... sometimes the facts they extract, they can't really map to a schema entry, and we're going to look at that, because I think a bit critically of this. They say: namely, the open knowledge graph consists of mapped facts in the fixed schema of existing knowledge graphs, annotated by humans, and the unmapped facts in the open schema that are new in the reference knowledge graph schema. So what they claim here is that their system finds these new relations that don't even exist in the schema, and is able to uncover, kind of build, new additional schema entries, and they call this the open knowledge graph. I'm a bit skeptical of this, as we are going to see. So, the first step: how do you come up... if you have a sentence (and this is, I feel, honestly, a very poor example to do this; I get it, it must be short, but it's a poor example, but stay with me), so you have this sentence, 'Dylan is a songwriter', and you would like to extract a fact from this. The paper is not really written clearly on how; I mean, it is, you can parse it out, but the description is kind of distributed. So step one is: run spaCy. spaCy is a standard kind of library for NLP, to extract noun phrases, or as they call them, noun chunks. Okay, so step one has nothing to do with the language model; it is simply: you want to find the noun phrases in here.
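Just to make this concrete, here is a minimal sketch of step one; this is my own illustration, not the authors' code, and it assumes spaCy's small English model is installed (python -m spacy download en_core_web_sm):

import spacy

# Step one as described: find the noun chunks; these become the head and
# tail candidates of a fact. The model name is an assumption for this demo.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Dylan is a songwriter")
print([chunk.text for chunk in doc.noun_chunks])
# ['Dylan', 'a songwriter']  ->  candidate head and candidate tail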
The noun phrases are 'Dylan' and 'songwriter'. Now these noun phrases define your head and your tail of the facts, so you already have two things, right? So the entire task of the method they're proposing is... so step one is: run spaCy to find the head and the tail of facts. Step two is: question mark, for now. Step three is going to be: use the entity linking system and the relation linking system to construct the knowledge graph. Okay, so step one is 'steal underpants', and then step three is 'profit'. So what's step two? Step two is obviously where their system comes in. Step two is: here is the head, and here is the tail in the text; somewhere in between there might be a relation, and we need to figure out where that is, right? So how does this method figure it out? So you already see the assumptions here are very, very restrictive, right? So you use spaCy to extract basically noun phrases, which means you're probably already going to miss a lot of things that are not recognized as a noun phrase; and they also say that spaCy's annotations are sometimes error-prone, and that's why they miss a lot of things. And then secondly, the assumption that the relation must be in between the two things, textually. Now, you can run the algorithm forward and backward, but still, it must be in between, and it must sort of be encoded, let's say, as a semi-accurate string in there; I guess then that's up to the relation linker. But already, these assumptions are super constraining in the kinds of things you can find, and you'll see in the experiments that their biggest flaw is that they have a very, very low recall. I mean, so do all the systems on the task, apparently, but they still have a very low recall, and it's because they constrain their problems so much. I'm going to guess, if they wouldn't constrain their problems so much, then they would maybe have a better recall, but their precision would just plummet, because these things, if you let them run wild, they just over-extract. So basically, every verb in every sentence is going to be a relation, right? So like, 'I ate a banana': ('I', 'ate', 'banana') would be a triple, not necessarily a really valuable entry in any knowledge graph, though banana has a lot of carbs, so I would want to know about that. Okay, so you see that the task is now reduced from building knowledge graphs to simply: given a head annotation (a span in the string) and a tail span, extract any span in between the head and the tail that describes the relation between the head and the tail. So the way this algorithm does it, that's where it uses the language model. Okay, so here it's going to do something that is going to be similar to dynamic programming; if you've seen kind of the dynamic programming and search algorithms, let's say, you know, string matching algorithms and so on, this is going to be sort of similar. In that, what we're going to do is, we're going to start from here, from the head in the string (there could be text before it, right); we're simply going to locate the head, 'Dylan', right here, and we're going to start. Then we're going to look at its attention matrix. Now, the attention matrix (we're going to cross out, here, the attention matrix; I've done many, many videos on attention) basically, in a sequence, means how much each token attends to each other token, right, how much information is kind of sent from each other token to this token right here. So this up here would be the query, and these would be the keys.
The attention matrix specifies that. So since we locate things between the head and the tail, what we want to do is, we want to cross out, we want to disregard, everything that's kind of behind the query, and only look ahead in the sentence. Okay, so that's why some of the attention matrix here is crossed out; as you can see, these are the X's. This is exactly because we only search in one direction. So from the token 'Dylan' we can look at three things: we can look at 'is', 'a', or 'songwriter', and the question is simply, where do we go next with this algorithm, right? There's no interpretation yet; it's simply, where do we go next? And the 'where do we go next' is simply answered by just taking the highest-scoring thing in that column of the attention matrix. I look at the attention column of the token 'Dylan', I take the highest-scoring one; that's the 0.3 here, it is the highest. Okay, then I go to the 0.3, and that means 'is' gets into my candidate fact. Okay, and once I put 'is' into my candidate fact, I then go to 'is'. So the next thing I do is, I go to 'is', and then I again look in the corresponding attention column, and I see what's now the biggest entry here, and the biggest entry is 0.4, which is 'songwriter'. And you can see, here now we skip the 'a'; that's how we leave out some text, okay, by skipping it, basically. So you can see that this can create artifacts, right, this can create kind of holes in the middle, and so on. But we skip 'a', we go directly to the 0.4, and at the 0.4 we discover that that is our tail. So now we put our tail in here, and since our tail is the last word, we can stop the algorithm. Yes, so there is no need to go on, even if there were text behind the tail; as soon as we are at the tail, which we already know (right, we're given the head and tail), we stop. All right, so we simply go forward, always with the biggest entry in the attention matrix, until we reach the tail. That's the algorithm. It's described here, but it's kind of described in this way where it has these actions, like 'start', 'yield', and like this; maybe I'm not understanding something, but it seems completely unnecessary to kind of describe these actions. And it basically... 'start': the search starts from the head; the head is added as the initial candidate, and so on. Then in 'yield' it sometimes says, the token with the largest score from the attention matrix is appended to the end to yield the new candidate, and so on. But still... and then 'stop': we stop. And the algorithm description here, it basically just says: while we're not done, if it's not the stop action, we continue. It doesn't tell you anything; this is a super unclear description of this algorithm. Basically, the whole logic that you would want to know about is here, in this 'action manager', right? So the action manager that gives you the action is doing the actual logic of figuring out which token, you know, you should do next, and where you should go next, and so on; this is nowhere in the algorithm. The algorithm just describes beam search. So the little more sophistication that comes in is that you don't do this deterministically, but you actually do it via beam search; okay, but you can just generalize this. All right, so the description is a bit floppy with the whole actions and action manager and whatnot, and the only thing they don't describe formally is how actually to select the next token, which is basically the entire meat of the algorithm.
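To pin down my reading of that search, here is a minimal sketch: greedy instead of the paper's beam search, one already aggregated attention matrix, and made-up numbers matching the toy example above, so an assumption-laden illustration, not the reference implementation:

import numpy as np

def greedy_relation_search(attn, head_idx, tail_idx):
    # attn[i, j]: attention between token i and a later token j; entries at
    # or before i are assumed masked out (the crossed-out X's).
    candidate = [head_idx]          # 'start': the head opens the candidate
    degree = 0.0                    # matching degree: sum of traversed scores
    current = head_idx
    while current != tail_idx:
        # 'yield': only look ahead of the current token, up to the tail
        window = attn[current, current + 1 : tail_idx + 1]
        nxt = current + 1 + int(np.argmax(window))
        degree += float(attn[current, nxt])
        candidate.append(nxt)       # tokens in between can be skipped
        current = nxt               # 'stop' fires once we land on the tail
    return candidate, degree

# Toy run on 'Dylan is a songwriter' (token indices 0..3):
attn = np.array([[0.0, 0.3, 0.1, 0.2],
                 [0.0, 0.0, 0.1, 0.4],
                 [0.0, 0.0, 0.0, 0.3],
                 [0.0, 0.0, 0.0, 0.0]])
print(greedy_relation_search(attn, head_idx=0, tail_idx=3))
# ([0, 1, 3], 0.7): Dylan -> is -> songwriter, skipping 'a' (up to float rounding)

Beam search then just keeps the top-k such partial paths per step instead of only the argmax.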
In any case, this is something that confuses me right here. So, fair enough, you know, they say, here we take the attention matrix and we cross out these X's. All right. But they say they can take things like BERT, and, you know, as I said, fair: BERT has a full attention matrix, everything attends to everything. But they can also take things like GPT-2. Now, GPT-2 is an autoregressive language model; that means that in GPT-2, if you look at it, you produce each token one after another, which means that each token, when you train, or when you evaluate even, each token can only attend to the things before it, right? You see the problem with this method: what this method requires is the exact opposite. Each token's attention matrix is deleted such that only the entries ahead of it are in the attention matrix. You don't actually get GPT-2 to give you an attention matrix that looks ahead, because it only ever looks behind. So maybe what's happening is that the query and key matrices are switched up in some way. In that case, when we want to interpret the algorithm, the way they write it down is: if I am at a particular part of what I think is the relation between the two entities, how am I going to find whether or not there is more to the relation (right, it could be a multi-word relation, like 'has a child with', or, I don't know, I can't think of any multi-word relations), or whether we are kind of done with the relation and go to the tail? What this thing is saying is that we should look at the language model. So if this is really how it is here, and you are at the word 'is': if this is a BERT language model, what you want to know is, if I were to cross out 'is', if I were to delete this word, which other words in the sentence right here that are ahead of me are very, very informative to predict this particular word? That's kind of the query style. And, you know, if the answer turns out to be that 'songwriter' is quite important for that (maybe 'Dylan' is too, but we only look ahead), and if it turns out the word 'a' is not as important as the word 'songwriter' (right, because 'songwriter', yeah, it gives an indication that there should be an 'is', because 'songwriter' is kind of a profession and there's a person in front of it; we don't look at that, but the attention matrix would have that in mind), if that's valid, right, so that's how this construction is made. However, if this is the key, we have to think of it the other way around: if we are at 'is', we look ahead and say, if I were to delete the word 'a', how well could I reconstruct it from this word 'is'? Or if I delete 'songwriter', how well could I reconstruct that from the word 'is'? I think there are interpretations probably for both of these methods. But what I kind of want to convey is that none of these things are really amenable to constructing a knowledge graph. It's quite interesting that this stuff actually works, because all it asks is, how well does one word inform about the presence of another word, or how well can one word predict another word; and from that information we construct this knowledge graph. Which probably is a testament to the fact that knowledge graphs, maybe, aren't so much about knowledge, if you extract them from a corpus, but more about grammar, I would think. That's the thing that goes on here, because these language models are a lot about grammar, right, a lot about how different words appear together frequently.
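For reference, here is one plausible way (my assumption, not necessarily what the authors do) to get such a forward-looking attention matrix out of a pre-trained BERT with the Hugging Face transformers library, using the mean-over-heads reduction that comes up again later:

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_attentions=True)

inputs = tok("Dylan is a songwriter", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions holds one (batch, heads, seq, seq) tensor per layer; take
# the last layer and average over heads (max over heads is the alternative).
attn = out.attentions[-1].mean(dim=1)[0]

# Zero out everything on or below the diagonal so each token only looks
# ahead (the crossed-out X's). Note the tokenizer adds [CLS] and [SEP],
# so word positions are shifted accordingly.
attn = torch.triu(attn, diagonal=1)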
So given that 'songwriter' is kind of a mix between grammar and basic word knowledge, given that 'songwriter' is kind of an object here, the word 'is', being the verb, is probably quite important for it. And that's exactly it: these triples, they always appear a bit like compressed sentences, which are very grammatically relevant. So I'm not buying this hypothesis that there is much knowledge in these language models and that that's why this works; what I much rather think is that they are really, really, really good at kind of grammar and statistical association between words across the language, and that's why they can extract these candidate facts so well. Okay, so that's what I think about the algorithm. They do constrain it some more, as if it doesn't already have enough constraints, but they all make sense. Okay, so they say the matching degree, which is simply the sum of all these attention matrix entries that we've encountered during our search (so all the ones we didn't skip are counted together to form the matching degree of this triple), the matching degree must be above some threshold. That's the first constraint. They give an example right here, for the sentence 'Rolling Stone wrote: no other pop song has so thoroughly challenged artistic conventions', and the extracted candidate fact is ('Rolling Stone', 'wrote', 'pop song'). Again, you can kind of see here, it's mostly going into grammar-ish territory: so spaCy extracts 'Rolling Stone' and 'pop song', and the language model here extracts, like, the only verb in between, 'wrote'. So yeah, to kind of limit the matching degree, to say it must be at minimum some number, makes a lot of sense: because if the matching degree is high, that means, if we go by this attention matrix, that these words that are in the candidate fact kind of, as themselves, follow from each other. So the language model thinks that 'wrote' is a very good follow to 'Rolling Stone', and 'pop song' is a very good follow for 'wrote', or the other way around, depending on which way the attention matrix is. But that's kind of it: the language model thinks that these words together make sense in the context of the sentence, of course, like in the context of this entire sentence. So as I said, you can sort of think of it as a bit of a summarization paper, but with more constraints. Constraint number two is that the frequency of r is above a threshold. So the relation itself shouldn't be too specific; it actually should appear a bunch of times in the corpus. So what you do is, you know, you go through the corpus once, extract all the facts (my pen just dropped), you extract all the facts, or all these candidates, and then you kind of count them, and you go through the candidate facts again and delete all the ones that are below a certain count. People usually do this with things like stop words or rare words and so on; it's pretty standard, makes a lot of sense. And constraint number three: the relation r is a contiguous sequence in the sentence. Okay, so you have an example here, from the same sentence: ('Rolling Stone', 'wrote challenged', 'conventions'), which the language model would like to extract, because, again, in the context of that sentence, these words sort of, you know, jump to each other in the attention matrix, because you can predict them from each other very well. But they say this must be a contiguous sequence; so what I said before, this skipping could happen, but with this constraint they exclude it.
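Taken together, the three constraints amount to a filter pass over all extracted candidates, roughly like this sketch, where the thresholds and the data layout are placeholders of mine, not the paper's values:

from collections import Counter

def apply_constraints(candidates, min_degree=1.0, min_rel_freq=2):
    # Each candidate: {"triple": (h, r, t), "degree": float,
    #                  "rel_indices": [token positions of the relation]}
    # Constraint 2 needs corpus-wide relation counts, so count first.
    rel_counts = Counter(c["triple"][1] for c in candidates)
    kept = []
    for c in candidates:
        idx = c["rel_indices"]  # assumed non-empty for this sketch
        keep = (
            c["degree"] >= min_degree                       # constraint 1
            and rel_counts[c["triple"][1]] >= min_rel_freq  # constraint 2
            and idx == list(range(idx[0], idx[-1] + 1))     # constraint 3
        )
        if keep:
            kept.append(c)
    return kept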
Okay, so for the second part, where they actually have to map a candidate fact to a fact in the schema: as I said, they use kind of pre-made solutions, entity linking and relation mapping with the schema. I won't go into this, except to say that whenever they find a match, they say that this is a mapped fact, and whenever they don't find a match, they say, oh, this is an unmapped fact. Okay: an unmapped candidate means that at least one of h, r and t is not mapped to the schema. There are two types: partially unmapped facts are where some are mapped, and completely unmapped facts indicate that all of h, r and t are not mapped to the schema, okay, for example 'Jacob was a registered Mennonite'. Now, here they say they have these different facts, and, you know, it's a cool thing if a model like this can actually come up with new facts, not only new mapped facts, which is something you would expect, right: if humans provide some kind of a schema and then build a knowledge graph, this is never complete, so if you can automatically kind of fill in missing facts, that's very, very cool. (Though I would say, if you construct knowledge graphs, humans should probably also build kind of negative connections, saying, like, yes, it is conceivable that Elvis was a vegan, because a lot of texts talk about it, but in fact it is explicitly not; I don't think that's what we have in the knowledge graphs so far.) But it would be cool if this model could fill in new facts, yes, to the schema; it would also be cool if it could uncover completely new relations that hadn't been considered by the human makers of the knowledge graph. Like, if the knowledge graph itself is incomplete, the schema is man-made, you know, same argument: the schema is probably also incomplete. This paper is sort of trying to sell their system as something that can do that, and I believe it, to a degree. But also: 'Jacob was a registered Mennonite'... okay, now maybe I'm completely wrong; from the sentence 'Jacob was a registered Mennonite in Amsterdam', I might be completely wrong, but Mennonite is a religion, I think, and I'm very, very sure that any of these knowledge graphs, with the schemas that they have, have 'being in a religion' or 'being of a certain faith' in their relations table somewhere. And I'm also pretty sure that Mennonite is large enough that it would actually appear as an entity. Maybe Jacob not, right, maybe Jacob is an unknown Jacob, we don't know who Jacob is. But this seems more like a failure of the entity linker and relation linker than an uncovered new relation or an uncovered new entity. So, yeah, take this stuff with a grain of salt; now, they are very honest about this, but just to say that that's probably what happens most often. So here you can see the graph for Bob Dylan, constructed from the Wikipedia pages that are, they say, kind of around the page of Bob Dylan, so I guess one or two or three hops away, something like this. And you can see, the blue stuff is stuff that we already knew, so what the humans also found when looking at this. Then the yellow stuff, I believe, is either new relations... so whenever things are annotated, they're in the schema: you can see this is an entity in the schema, because it's annotated, and this is a relation in the schema, but the arrow is new; so the humans hadn't yet extracted the fact that Bob Dylan was a member of Artists United Against Apartheid. And the yellow also sometimes means that there is a new thing: so here, 'tour with' is a relation that's extracted that is not in the knowledge graph yet, also this one. And it's pretty cool, right, that you can extract these things automatically.
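And just to make the mapped versus partially unmapped versus completely unmapped distinction concrete, here is a toy stand-in for the mapping stage; the only real ID below is Q392 from earlier, the rest of the tiny schema is invented for the example:

# Toy lookups standing in for the real entity and relation linkers.
ENTITY_IDS = {"Bob Dylan": "Q392"}            # Q392 as mentioned above
RELATION_IDS = {"award received": "P-REL-1"}  # invented placeholder ID

def map_fact(h, r, t):
    mh, mr, mt = ENTITY_IDS.get(h), RELATION_IDS.get(r), ENTITY_IDS.get(t)
    hits = [x is not None for x in (mh, mr, mt)]
    if all(hits):
        return "mapped", (mh, mr, mt)
    if any(hits):
        return "partially unmapped", (h, r, t)
    return "completely unmapped", (h, r, t)

print(map_fact("Bob Dylan", "tour with", "the Grateful Dead")[0])
# 'partially unmapped': only the head is in this toy schema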
There's a lot of yellow stuff here, which means there is a lot of new information that this extracted, and a lot of this new information is actually mapped to the schema, right: Bob Dylan, residence in Duluth (I don't know how to pronounce that, by the way). Yes, so that's fairly cool. They do some of these knowledge-base tasks. In these tasks, what you'd have, I believe, is always: you'd have a head and a relation given; so you have a document, and you are given a head and a relation, and you're asked, what's the tail of this, and then you ask the system, and the system will tell you. So you have these baselines, and these baselines, I believe, are specifically made to extract these knowledge representations; they might even be trained, I don't know that. But you can see that MAMA, even the smallest one here, beats those by quite a bit. Now, you can see that the recall is significantly lower than the precision, which is a direct result of how many constraints on the system there are, and tells you sort of, going forward, what the improvements can be. So they analyze a lot of this. And yeah, so a first recognition is that larger and deeper language models produce knowledge graphs of higher quality; BERT language models outperform GPT-2 language models under similar model sizes, which is interesting. It is scalable to larger corpora, which, again, as we said, you don't need to train it, and larger corpora embed more complete knowledge graphs, which is something we would expect. The other interesting part is the unmapped facts. So the numbers, you can actually compute them only for the mapped facts, right, because that's where you have data: humans produce the knowledge graphs from this, that's what you can compare with. Now, the unmapped facts, they say they analyze: we turn to study the quality of the candidate facts that are not mapped to the above reference knowledge graph schema, but are in the open schema generated by MAMA. We manually judge such unmapped facts generated by our best method from 100 sample documents in Wikidata and TAC KBP, respectively. So they go, as researchers, they look at these things, and they judge them, whether or not they're true, given these documents in Wikipedia. They say the quality of unmapped facts is verified; so the claim is that they've looked at them, and they are good. We find that 35.3% of the unmapped facts are true on Wikidata. We find that 83.2% of those true facts are partially unmapped facts, for example ('Bob Dylan', 'tour with', 'the Grateful Dead'). And yeah, if this really isn't in the schema, right, this is a nice relation that you might think humans would miss, because touring with someone is not the first thing that would come to mind if you had to come up with a bunch of relations between entities, but it is something that is regularly used for musicians. So that is an application where certainly an automated system can even extend the schema, right: the relation is not within the schema of Wikidata, while both head and tail are in the schema. The remaining true facts are completely unmapped facts, for example this 'Jacob was a registered Mennonite'. And they also say accurate entity detection is desired, where they say a lot of the errors are due to spaCy detecting incorrect entities, or due to incorrect or missing entity linking by those systems. The rest of the errors made by MAMA are incorrect relation phrases, such as uninformative relation phrases,
for example ('Bob Dylan', 'made', 'his breakthrough'). Oh, what can you do... what other verb would you put there? Yeah. But okay, we're going to look at a few last things right here. They have a bunch of experiments right here, where they show, you know, the beam size has an influence; this constraint number one and number two that we looked at have an influence, right, so you can tune these things a bit. What is interesting here is that they try to look at either the attention matrix of the last layer or of all the layers, and interestingly, the system performs better if you only look at the attention matrix in the last layer. Now, they reduce that attention layer, because there are multiple heads, using max or mean, and those perform similarly. But it is interesting that only the last... and they argue in the text that we know that the last layers kind of have higher-level features than the lower layers. But I recall there are multiple papers (like, I've done videos about them, 'What Does BERT Learn?' and so on, I think even something in conjunction with lottery tickets and so on) that show that in a transformer, at least, I think it is the middle layers that encode the most kind of semantic knowledge. Because the lower ones, yes, they are for kind of low-level features, but the upper ones, they are again for low-level features, because the task right here, at the end, is to predict an individual word or token, right? So you'd expect that the features in the attention matrix there go back to kind of more grammatical features and so on, and that the highest-level features are actually somewhere in the middle. I don't know if they tested... if they only tested, like, all versus last, in which case, yeah, I believe that; but if they tested each one individually, and it still turned out that last is the best, that would kind of add to my hypothesis that what happens here is more kind of a grammatical effect of extracting this correct candidate verb in between the head and the tail. All right, so that kind of gives more weight to my hypothesis. So, to repeat, my hypothesis is that it's kind of a grammatical thing that's going on here, because the only task of this model is basically to find the correct string span for the relation between head and tail, because it's already given head and tail from the text; their hypothesis is more like, the language models have a lot of knowledge built into them, and we can extract that knowledge; they make it sound like the language model has this semantic knowledge in it. Okay, okay, so let's look at a bunch of mapped facts right here. You can maybe check out a lot of them yourself, but we'll just look at, like, one in each category: blah blah, yada yada yada, 'is in worse shape, however, Klaus told a press conference at the western city of Essen, where the...' yada yada, and it extracts this company, and it maps it to the city of its headquarters; maybe they leave out some text here. What I want to get to is the unmapped facts. Where are the unmapped facts? Just to kind of show you: mapped facts, unmapped facts. Okay, so the unmapped facts. What I feel (and you can judge for yourself, please), what I feel, just to pre-bias you before we look at them, is that a lot of times it simply extracts things that it can't assign, right? It's a failure to assign, it's not a new thing. Because in these schemas, like, you haven't seen the schemas, but
you kind of get a feel... the last one, which is the last table, you kind of get a feel of what's contained in it, so maybe get a feel for what... okay: 'Ernst Haeckel was born 16th of February 1834 in Potsdam'. Okay, so the extracted thing is: 'Haeckel', 'was born on 17th of February 1833 in', 'Potsdam'. Okay, so it maps 'Haeckel' to this (this is in the knowledge base, in the schema), this is in the schema, but 'was born on 17th of February 1833 in' is simply a failure of the relation linker. Okay. 'He was also a pacifist until the First World War', yada yada yada; then 'Ernst Haeckel', and then 'was a' and 'pacifist', are both not in the schema. Now, maybe pacifism isn't in the schema; maybe, maybe... though I would guess pacifism has a Wikipedia page, so it must be in the schema, because it's Wikidata. But 'was a', you know, the relation here, should be something like a political leaning or something like this, which is certainly in the knowledge base, right? Then you have things like: 'Haeckel', 'was awarded', 'the title of Excellency'. So you have, correctly, 'Haeckel' again recognized; 'award received' is in the schema, nice; 'Excellency' as a tail. And 'Excellency'... you know, what do you want? Like, this is not a fact, right? The award, or the title of Excellency, would be kind of the thing; so this is a failure of spaCy. So again, I've seen few facts here that would actually be a genuine addition to the schema, that should be considered. And I absolutely believe that the schema is incomplete, don't get me wrong; like, 100%, the schema is probably less than 1% of what it should be, right, if we did a thorough job. I just don't think that this system here is a good... like, I think that the things that this system comes up with mostly are simply failures of its subsystems, rather than genuinely new entries to the schema. That's different from when it genuinely discovers a new mapping between already established things, for example: 'Pauline Baynes', 'educated at', this college, right? So these are new facts that all fit in the schema, and the system might be very, very nice for that. All right, so that was my kind of estimation of this paper. I hope I didn't rag on it too much; as I said, it's very cool work, actually. Look at this: the appendix is giant. Go look at it, check it out. Please tell me what you think about it in the comments; any feedback is welcome. And I will see you next time. Bye bye!
[ { "start": 0, "end": 5.08, "text": " Hi there. Today we'll look at Language Models are Open Knowledge Graphs by" }, { "start": 5.08, "end": 11.92, "text": " Chenguang Wang, Xiao Liu and Dawn Song. This paper on a high level proposes to" }, { "start": 11.92, "end": 16.76, "text": " construct knowledge graphs which is a structured object that's usually built" }, { "start": 16.76, "end": 23.2, "text": " by humans, by experts, either fully manually or semi-manually with heavy" }, { "start": 23.2, "end": 27.36, "text": " human involvement. It proposes to construct knowledge graphs automatically" }, { "start": 27.36, "end": 33.76, "text": " by simply using a pre-trained language model together with a corpus to extract" }, { "start": 33.76, "end": 38.6, "text": " the knowledge graph from. The cool thing about this paper is that there is no" }, { "start": 38.6, "end": 43.24, "text": " training involved. So there is no model that learns how to construct a knowledge" }, { "start": 43.24, "end": 49.64, "text": " graph. The entire knowledge is simply extracted from running the corpus once." }, { "start": 49.64, "end": 54.8, "text": " So one forward pass through the corpus through the pre-trained language model" }, { "start": 54.8, "end": 59.839999999999996, "text": " and that constructs the knowledge graph. So that's kind of the core message" }, { "start": 59.839999999999996, "end": 64.56, "text": " of this paper. They say this paper shows how to construct knowledge graphs from" }, { "start": 64.56, "end": 69.84, "text": " pre-trained language models without human supervision and it turns out the" }, { "start": 69.84, "end": 74.28, "text": " way they do it, it works pretty well on kind of standard knowledge graph" }, { "start": 74.28, "end": 80.24, "text": " construction benchmarks. So that's the paper in a nutshell. We'll go through" }, { "start": 80.24, "end": 85.88, "text": " all of this including I have a bunch of criticisms but it is a pre-print." }, { "start": 85.88, "end": 92.32, "text": " Remember this. And yeah, so usually I'd say at this point if you like this" }, { "start": 92.32, "end": 96.24, "text": " content don't hesitate to share it out and so on. Today we're gonna try" }, { "start": 96.24, "end": 105.6, "text": " something different in three, two, one... Stop! It's sponsor time! This video is" }, { "start": 105.6, "end": 111.16, "text": " sponsored by TabNine. TabNine uses deep learning to help you write code faster." }, { "start": 111.16, "end": 115.97999999999999, "text": " What could possibly go wrong if you do that? No, I'm joking. I'm joking. Take a" }, { "start": 115.97999999999999, "end": 120.75999999999999, "text": " look at this piece of code here. I was trying to refresh some elastic indices" }, { "start": 120.75999999999999, "end": 125.69999999999999, "text": " and as you can see here all I said was could and TabNine completes it to could" }, { "start": 125.69999999999999, "end": 131.78, "text": " not refresh because above I was trying to call a refresh method. This is" }, { "start": 131.78, "end": 136.52, "text": " something that I haven't seen any other completion engine do yet. Compared to a" }, { "start": 136.52, "end": 141.72, "text": " regular coding engine TabNine is trained on lots of open source projects and it" }, { "start": 141.72, "end": 147.5, "text": " combines this with your code and it predicts what you want to do compared to" }, { "start": 147.5, "end": 152.08, "text": " predicting what's possible which is what a classic engine does. TabNine, it uses a" }
Tab 9 it uses a" }, { "start": 152.08, "end": 158, "text": " GPT based model and it downloads that model onto your machine so the code" }, { "start": 158, "end": 162.8, "text": " never leaves your machine. There is an opt-in feature where you can run that in" }, { "start": 162.8, "end": 166.04, "text": " the cloud and that will just give you a bit of a better beam search and better" }, { "start": 166.04, "end": 171.64, "text": " quality predictions and it saves you a bit of RAM. As you can see I myself use" }, { "start": 171.64, "end": 176.92000000000002, "text": " tab 9. I just have it on by default and I'm pretty happy with it. I use it" }, { "start": 176.92000000000002, "end": 181.64, "text": " through CoC integrated into my NeoVim but you can also get it in Sublime," }, { "start": 181.64, "end": 187.2, "text": " Atom, IntelliJ, VS Code even like Jupyter notebooks and you can use it together" }, { "start": 187.2, "end": 192.11999999999998, "text": " with classic completion engine so you can really get the best of both worlds." }, { "start": 192.11999999999998, "end": 198.88, "text": " So whenever you see me code in a coding video look out for this TN marker next" }, { "start": 198.88, "end": 202.79999999999998, "text": " to the completions that's the completions by tab 9. It doesn't only work" }, { "start": 202.79999999999998, "end": 207.23999999999998, "text": " for Python it actually works for pretty much any programming language that isn't" }, { "start": 207.23999999999998, "end": 212.95999999999998, "text": " completely obscure. If you go to this link within 72 hours of when this video" }, { "start": 212.96, "end": 218, "text": " is released you'll get three months of tab 9 professional for free. The" }, { "start": 218, "end": 222.56, "text": " professional version removes the project size limit of the free version and it" }, { "start": 222.56, "end": 226.76000000000002, "text": " also gives you access to that sweet sweet cloud inference. After the three" }, { "start": 226.76000000000002, "end": 230.56, "text": " months you're automatically kicked out of the pro version there's no auto sign" }, { "start": 230.56, "end": 235.52, "text": " up there's really nothing to lose. I mean the only bad thing here is that tab 9" }, { "start": 235.52, "end": 241.04000000000002, "text": " itself is written in Rust. If that's the worst thing about an offer it's a" }, { "start": 241.04, "end": 245.76, "text": " pretty good deal. Again I use this myself and I'm pretty happy with it. So again if" }, { "start": 245.76, "end": 251.72, "text": " you sign up at tab9.com slash promotion slash yanaculture within 72 hours of" }, { "start": 251.72, "end": 256.76, "text": " when this video is released you'll get a free three months of tab 9 pro no strings" }, { "start": 256.76, "end": 262.03999999999996, "text": " attached and now enjoy the video. Thanks! Alright I hope that was fun let's get" }, { "start": 262.03999999999996, "end": 266.88, "text": " back to the paper let's get into the paper. So first of all what is my first" }, { "start": 266.88, "end": 276.96, "text": " criticism of this paper? This the title. There are some disturbing trends in the" }, { "start": 276.96, "end": 284.32, "text": " last few years in in in machine learning papers and the disturbing trends can be" }, { "start": 284.32, "end": 293.76, "text": " maybe encapsulated with the phrase is all you need. 
So people have sort of since" }, { "start": 293.76, "end": 297.48, "text": " attention is all you need since this paper people have discovered that if" }, { "start": 297.48, "end": 303.71999999999997, "text": " they just append this to whatever their paper is about then the paper will get" }, { "start": 303.71999999999997, "end": 308.92, "text": " much more notoriety. And the same thing I think is a bit at play here with this" }, { "start": 308.92, "end": 315.28, "text": " with the R because in recent times we've kind of seen a bunch of papers that show" }, { "start": 315.28, "end": 322.2, "text": " equivalences between models such as a famous example is that the transformers" }, { "start": 322.2, "end": 329.76, "text": " are Hopfield networks in some kind of in some regard and these papers are pretty" }, { "start": 329.76, "end": 334.15999999999997, "text": " cool right even if the two things are not exactly equal all the time if you" }, { "start": 334.15999999999997, "end": 338.71999999999997, "text": " can say look there is a setting there are you know under these assumptions" }, { "start": 338.71999999999997, "end": 342.84, "text": " under these settings in this situation these two models actually are the same" }, { "start": 342.84, "end": 348.4, "text": " that's a pretty cool recognition a pretty cool thing to show and it's very" }, { "start": 348.4, "end": 355.4, "text": " useful for academia and and practice I believe however I believe that our" }, { "start": 355.4, "end": 360.71999999999997, "text": " keyword that is keyword should be sort of reserved for when two things are" }, { "start": 360.71999999999997, "end": 365.35999999999996, "text": " equivalent whereas here in the very first at least they're honest right in" }, { "start": 365.35999999999996, "end": 369.32, "text": " the very first sentence they show they say well we show how to construct" }, { "start": 369.32, "end": 372.56, "text": " knowledge graphs from pre-trained language models so essentially they're" }, { "start": 372.56, "end": 377.15999999999997, "text": " going to use a language model to approximately construct a knowledge" }, { "start": 377.16, "end": 381.56, "text": " graph and they're also going to use a bunch of other auxiliary models that" }, { "start": 381.56, "end": 387.6, "text": " come all pre-trained but still they do not show an equivalence of language" }, { "start": 387.6, "end": 393.44000000000005, "text": " models and knowledge graphs in this paper not at all so I would sort of I" }, { "start": 393.44000000000005, "end": 400.24, "text": " see that you can get somewhere with these titles but yeah maybe people will" }, { "start": 400.24, "end": 403.64000000000004, "text": " be disappointed kind of if they read the paper which it is actually a cool paper" }, { "start": 403.64, "end": 412.32, "text": " believe me all right so as I said what we have usually is a corpus okay a" }, { "start": 412.32, "end": 417.4, "text": " corpus is simply a bunch of text pieces you can think of maybe just the text in" }, { "start": 417.4, "end": 423.44, "text": " Wikipedia okay here you know the this Wikipedia page about Bob Dylan Bob" }, { "start": 423.44, "end": 427.8, "text": " Dylan is a songwriter was awarded a Nobel Prize signed Alva Grossman these" }, { "start": 427.8, "end": 432.24, "text": " are easy sentences right there there can be sentences are usually larger and" }, { "start": 432.24, "end": 437.48, "text": " longer and so on and what you want to do is you want to extract a knowledge graph" }, { "start": 
437.48, "end": 444.24, "text": " so the knowledge graph has two distinct things it has entities and one entity" }, { "start": 444.24, "end": 448.24, "text": " here would be kind of Bob Dylan songwriter is an entity Nobel Prize in" }, { "start": 448.24, "end": 455, "text": " it is an entity you can sort of think of them as nouns okay and then the second" }, { "start": 455, "end": 460.84000000000003, "text": " part in knowledge graphs are the relations here occupation signed award" }, { "start": 460.84000000000003, "end": 466.28, "text": " received and so on so that the relations connect two entities there is always" }, { "start": 466.28, "end": 471.56, "text": " what's called a head of an end of a of a triple so a head of a fact which in this" }, { "start": 471.56, "end": 477.28, "text": " case is Bob Dylan three times then there is a tail which is sort of like the" }, { "start": 477.28, "end": 481.91999999999996, "text": " object of the verb and then there is the relation which is described by the verb" }, { "start": 481.91999999999996, "end": 487.6, "text": " now here you can see there are two stages of constructing such a knowledge" }, { "start": 487.6, "end": 492.16, "text": " graph any system that does this probably goes through these two stages so first" }, { "start": 492.16, "end": 498.76000000000005, "text": " you extract a set of candidates which it's not the knowledge graph yet because" }, { "start": 498.76000000000005, "end": 503.32000000000005, "text": " these are still strings right you extract a bunch of string triplets as" }, { "start": 503.32000000000005, "end": 508.90000000000003, "text": " you can see here and as we said as the sentences get more complicated it gets" }, { "start": 508.90000000000003, "end": 513.84, "text": " more and more difficult to extract these kind of triples and then the second part" }, { "start": 513.84, "end": 519.48, "text": " is that you need to map it to a to a scheme to a to a schema and these" }, { "start": 519.48, "end": 524.12, "text": " schemas are usually defined by humans so here we're still going to rely on humans" }, { "start": 524.12, "end": 532.2800000000001, "text": " to define the schema so there is one list that says entities and the entities" }, { "start": 532.2800000000001, "end": 538.2800000000001, "text": " there are just the entities are listed okay by the humans and at some point it" }, { "start": 538.28, "end": 544.4399999999999, "text": " says Bob Dylan Bob Dylan and it has a bunch of mentions of Bob Dylan associated" }, { "start": 544.4399999999999, "end": 550.28, "text": " with it and it has a clear ID in this case you see the ID is Q392 in that" }, { "start": 550.28, "end": 555.76, "text": " knowledge graph and the system not only needs to extract these facts but then" }, { "start": 555.76, "end": 562.12, "text": " also map these facts to the correct entities sorry map these facts to the" }, { "start": 562.12, "end": 570.4, "text": " correct schema entries this second stage right here is a a bunch of standard" }, { "start": 570.4, "end": 576.72, "text": " tasks so especially mapping something like the the word Dylan in its context" }, { "start": 576.72, "end": 582.52, "text": " to this entity Bob Dylan which you can think of it as like the Wikipedia page" }, { "start": 582.52, "end": 588.16, "text": " of Bob Dylan right that's how the system usually work that is a task called" }, { "start": 588.16, "end": 595.8, "text": " entity linking okay entity linking and similar tasks exist for relation linking" }, { "start":
595.8, "end": 603.3199999999999, "text": " like the relation awarded mapping this to award received to this so maybe there" }, { "start": 603.3199999999999, "end": 607.12, "text": " is some kind of dictionary entry award received and what it means and a bunch" }, { "start": 607.12, "end": 612.52, "text": " of examples and you're supposed to map this to that these are standard tasks" }, { "start": 612.52, "end": 616.68, "text": " and the system that we are going to look at right here is not not much" }, { "start": 616.68, "end": 620.76, "text": " concerned with these tasks it simply uses pre-existing methods to do these" }, { "start": 620.76, "end": 626.64, "text": " things so the system we're looking at today does this first part right here it" }, { "start": 626.64, "end": 631.5999999999999, "text": " takes text okay this is text and it comes up with these candidate facts" }, { "start": 631.5999999999999, "end": 636.28, "text": " about the text whether how this is then mapped to the schema that is a a" }, { "start": 636.28, "end": 642.28, "text": " different question and it's so there there are pretty cool things in this" }, { "start": 642.28, "end": 646.4399999999999, "text": " paper about this step but we're first going to look at the first step and" }, { "start": 646.44, "end": 652.4000000000001, "text": " then at the second step all right so how does this system do this and how does it" }, { "start": 652.4000000000001, "end": 657.5400000000001, "text": " do it that there have been machine learning models before but being machine" }, { "start": 657.5400000000001, "end": 661.8800000000001, "text": " learning they all have like some sort of a training corpus where you have kind of" }, { "start": 661.8800000000001, "end": 668.5600000000001, "text": " the facts as a training set and then you have a separate set of facts as a test" }, { "start": 668.5600000000001, "end": 673.8800000000001, "text": " set and you try to learn from the conjunction of the text and the training" }, { "start": 673.88, "end": 681.6, "text": " facts how to extract facts not this system this system simply uses a" }, { "start": 681.6, "end": 686.88, "text": " pre-trained language model so what's the reasoning the reasoning is the" }, { "start": 686.88, "end": 693.76, "text": " following we used to think that we could do NLP probably best with having a" }, { "start": 693.76, "end": 698.16, "text": " knowledge graph right with having this set of very structured data we can" }, { "start": 698.16, "end": 705.48, "text": " answer something like what's the what's the age of Barack Obama's wife and then" }, { "start": 705.48, "end": 709, "text": " you could go to the entity of Barack Obama you could look at the relation" }, { "start": 709, "end": 713.66, "text": " spouse you could go to Michelle Obama you could look up her birth date which" }, { "start": 713.66, "end": 717.9, "text": " would all be structured information in this graph so you could sort of answer" }, { "start": 717.9, "end": 722.64, "text": " questions like this and search engines like Google and so on they have this" }, { "start": 722.64, "end": 727.76, "text": " built-in so there is kind of a knowledge graph entry sometimes when you search" }, { "start": 727.76, "end": 734.16, "text": " an entity in Google that pops up and these have been very useful to answer" }, { "start": 734.16, "end": 739.88, "text": " questions like however in recent years language models have become better and" }, { "start": 739.88, "end": 746, "text": " better things like BERT or 
GPT-2 have become better than these expert systems" }, { "start": 746, "end": 751.88, "text": " let's call them, at answering questions by the way if you want to" }, { "start": 751.88, "end": 757.16, "text": " hear a very very cool and solid argument of where these kinds of expert systems," }, { "start": 757.16, "end": 762.64, "text": " where this kind of structured, human-annotated or maybe extracted information" }, { "start": 762.64, "end": 766.52, "text": " can still come in in natural language understanding I would recommend the" }, { "start": 766.52, "end": 772.68, "text": " Machine Learning Street Talk episode we had with Walid Saba, extremely interesting" }, { "start": 772.68, "end": 778.48, "text": " person, and I can just recommend listening to that, this should be out any" }, { "start": 778.48, "end": 785.06, "text": " day now if it is not already so the language models have become better and" }, { "start": 785.06, "end": 788.9599999999999, "text": " better at these tasks without having this structured information so the" }, { "start": 788.9599999999999, "end": 796.3199999999999, "text": " hypothesis is maybe these language models already contain the information" }, { "start": 796.3199999999999, "end": 800.9599999999999, "text": " that's necessary to construct these structured facts because the structured" }, { "start": 800.9599999999999, "end": 805.56, "text": " facts are what we, you know, let's say should use to answer these questions" }, { "start": 805.56, "end": 809.1199999999999, "text": " because we feel that structured information is better than unstructured;" }, { "start": 809.1199999999999, "end": 813.1999999999999, "text": " the language models are pretty good at these tasks so maybe we can get the" }, { "start": 813.2, "end": 819.0400000000001, "text": " structured information out of the language models so that's what they do" }, { "start": 819.0400000000001, "end": 823.5600000000001, "text": " they say the contributions are as follows we show how to construct" }, { "start": 823.5600000000001, "end": 827.08, "text": " knowledge graphs from pre-trained language models the knowledge graphs are" }, { "start": 827.08, "end": 830.1600000000001, "text": " constructed with a single forward pass of the pre-trained language models" }, { "start": 830.1600000000001, "end": 834.6, "text": " without fine-tuning over the textual corpora I think this is kind" }, { "start": 834.6, "end": 839.5, "text": " of a very strong point about this paper and it also shows that if you're some" }, { "start": 839.5, "end": 845.08, "text": " PhD student somewhere and you don't necessarily have the resources to train" }, { "start": 845.08, "end": 852.36, "text": " the next GPT-3 model or fine-tune it there is still research to be done" }, { "start": 852.36, "end": 858.2, "text": " simply if you have enough resources to forward pass your data, which often takes" }, { "start": 858.2, "end": 864.64, "text": " much less than training one, you can still do very cool research I think this" }, { "start": 864.64, "end": 870.24, "text": " paper shows this explicitly okay this helps researchers explicitly understand" }, { "start": 870.24, "end": 874.24, "text": " what the language models learn bridging the deep language model and the" }, { "start": 874.24, "end": 879.68, "text": " knowledge graph communities through enhanced model transparency okay they" }, { "start": 879.68, "end": 884.4399999999999, "text": " say we propose an unsupervised two-stage approach MAMA which stands for" 
}, { "start": 884.4399999999999, "end": 889.92, "text": " match and map to first match the candidate facts in the corpora with the" }, { "start": 889.92, "end": 893.4, "text": " knowledge stored in language models that's the first step we looked at then" }, { "start": 893.4, "end": 898.12, "text": " map the matched candidates facts to both fixed and open schema to produce a" }, { "start": 898.12, "end": 903.3, "text": " knowledge graph and then they say they produce a new type of knowledge graph" }, { "start": 903.3, "end": 908.16, "text": " which simply is the the facts sometimes the facts they extract they can't really" }, { "start": 908.16, "end": 913.4399999999999, "text": " map to a schema entry and we're going to look at that because I think a bit" }, { "start": 913.4399999999999, "end": 917.12, "text": " critically of this they say namely the open knowledge graph consists of mapped" }, { "start": 917.12, "end": 922.84, "text": " facts in the fixed schema of existing knowledge graphs annotated by humans and" }, { "start": 922.84, "end": 927.84, "text": " the unmapped facts in the open schema that are new in the reference knowledge" }, { "start": 927.84, "end": 933.52, "text": " knowledge graph schema so what they claim here is that their system is finds" }, { "start": 933.52, "end": 939.1600000000001, "text": " these new relations that are don't even exist in the schema and is able to" }, { "start": 939.1600000000001, "end": 946.1600000000001, "text": " uncover kind of build new additional schema entries and they call this the" }, { "start": 946.16, "end": 953.1999999999999, "text": " open knowledge graph I'm a bit skeptical of this as we are going to see so the" }, { "start": 953.1999999999999, "end": 959, "text": " first step how do you come up if you have a sentence and this is it this is a" }, { "start": 959, "end": 964.1999999999999, "text": " very poor example I feel honestly to to do this it's I get it must be short but" }, { "start": 964.1999999999999, "end": 968.4, "text": " it's a poor example but stay with me so you have this sentence Dylan is a" }, { "start": 968.4, "end": 975.88, "text": " songwriter and you would like to extract a fact from this the paper is not" }, { "start": 975.88, "end": 982.16, "text": " really written clearly on how I mean it is I could you can parse it out but the" }, { "start": 982.16, "end": 992.28, "text": " description is kind of distributed so step one step one is run spacey run" }, { "start": 992.28, "end": 999, "text": " spacey this is a standard kind of library for NLP to extract noun phrases" }, { "start": 999, "end": 1005.44, "text": " or they call them noun chunks okay so step one is not there's nothing to do" }, { "start": 1005.44, "end": 1010.6800000000001, "text": " with the language model it is simply you want to find the noun phrases in here" }, { "start": 1010.6800000000001, "end": 1017.5200000000001, "text": " the noun phrases are Dylan and songwriter now these noun phrases now" }, { "start": 1017.5200000000001, "end": 1022.6400000000001, "text": " define your head and your tail of the facts so you already have two things" }, { "start": 1022.6400000000001, "end": 1029.8, "text": " right so the the entire task of what of their method they're proposing is so the" }, { "start": 1029.8, "end": 1036.36, "text": " step one is run spacey to find the head and the tail of facts step two is" }, { "start": 1036.36, "end": 1043.1599999999999, "text": " question mark for now step three is going to be use the entity linking system" }, { 
"start": 1043.1599999999999, "end": 1048.76, "text": " and the relation linking system to construct the knowledge graph okay so" }, { "start": 1048.76, "end": 1053.3999999999999, "text": " step one is steel underpants and then step three is profit so what's step two" }, { "start": 1053.4, "end": 1059.96, "text": " step two is obviously step two is where their system comes in step two is here" }, { "start": 1059.96, "end": 1065.6000000000001, "text": " is the head and here is the tail in the text some hot wear in between there" }, { "start": 1065.6000000000001, "end": 1071.64, "text": " might be a relation and we need to figure out where that is right so how" }, { "start": 1071.64, "end": 1079.4, "text": " does this method figure it out so you already see the assumptions here are" }, { "start": 1079.4, "end": 1084.2, "text": " very very restrictive right so you use spacey to extract basically noun phrases" }, { "start": 1084.2, "end": 1088.0800000000002, "text": " which means you probably already going to miss a lot of things that are not" }, { "start": 1088.0800000000002, "end": 1091.88, "text": " recognized as noun phrase and they all they also say that that spacey's" }, { "start": 1091.88, "end": 1095.76, "text": " annotations are sometimes error prone and that's why they miss a lot of things" }, { "start": 1095.76, "end": 1100.88, "text": " and then secondly the assumption that the relation must be in between the two" }, { "start": 1100.88, "end": 1104.68, "text": " things textually now you can run the algorithm forward and backward but still" }, { "start": 1104.68, "end": 1111.1200000000001, "text": " it must be in between and it must sort of be encoded let's say as a semi" }, { "start": 1111.1200000000001, "end": 1117.44, "text": " accurate string in there I guess then that's up to the relation linker but" }, { "start": 1117.44, "end": 1123.88, "text": " already these assumptions are super constraining in the the kind of things" }, { "start": 1123.88, "end": 1128.0800000000002, "text": " you can find and you'll see in the experiments that their biggest flaws" }, { "start": 1128.0800000000002, "end": 1132.8400000000001, "text": " that they have a very very low recall I mean so do all the systems on the task" }, { "start": 1132.84, "end": 1137, "text": " apparently but they still have a very low recall and it's because they" }, { "start": 1137, "end": 1141, "text": " constrain their problems so much I'm going to guess if they wouldn't" }, { "start": 1141, "end": 1145.24, "text": " constrain their problems so much then they would have maybe a better recall" }, { "start": 1145.24, "end": 1151.1599999999999, "text": " but their precision would just plummet because these these things if you let" }, { "start": 1151.1599999999999, "end": 1156.1999999999998, "text": " them run wild they just over extract so basically every every set every verb in" }, { "start": 1156.2, "end": 1163.64, "text": " every sentence is going to be a relation right so like I ate a banana I ate" }, { "start": 1163.64, "end": 1171.56, "text": " banana would be a triple not necessarily a really valuable entry in any knowledge" }, { "start": 1171.56, "end": 1178.0800000000002, "text": " graph though banana has a lot of carbs so I would want to know about that okay" }, { "start": 1178.0800000000002, "end": 1185, "text": " so you see that the task is now reduced from building knowledge graphs to simply" }, { "start": 1185, "end": 1196.56, "text": " given a head head annotation had peace in the string span and a tail span" 
}, { "start": 1196.56, "end": 1201.92, "text": " extract any span in between the head and the tail that describes the relation" }, { "start": 1201.92, "end": 1207.72, "text": " between the head and the tail so the way this algorithm does it that's where it" }, { "start": 1207.72, "end": 1213.8, "text": " uses the language model okay so here it's going to do something that is going" }, { "start": 1213.8, "end": 1219.84, "text": " to be similar to dynamic programming if you've seen kind of the dynamic" }, { "start": 1219.84, "end": 1225.6399999999999, "text": " programming and search algorithms let's say you know string matching algorithms" }, { "start": 1225.6399999999999, "end": 1229.8799999999999, "text": " and so on this is going to be sort of similar in that what we're going to do" }, { "start": 1229.8799999999999, "end": 1235.3999999999999, "text": " we're going to start from here from the head in the string there could be text" }, { "start": 1235.3999999999999, "end": 1239.72, "text": " before it right we're simply going to locate the head Dylan right here and" }, { "start": 1239.72, "end": 1245.48, "text": " going to start then we're going to look at its attention matrix now the" }, { "start": 1245.48, "end": 1250.08, "text": " attention matrix is we're going to cross out here the attention matrix if you I've" }, { "start": 1250.08, "end": 1255.16, "text": " done many many videos on attention the attention matrix basically in a sequence" }, { "start": 1255.16, "end": 1261, "text": " means how much each token attends to each other token right how much" }, { "start": 1261, "end": 1266.96, "text": " information is kind of sent from each other token to this token right here so" }, { "start": 1266.96, "end": 1272.04, "text": " this up here would be be the query and these would be the keys the attention" }, { "start": 1272.04, "end": 1279.1200000000001, "text": " matrix specifies that so since we locate things between the head and the tail" }, { "start": 1279.1200000000001, "end": 1284.32, "text": " what we want to do is we want to cross out we want to disregard everything" }, { "start": 1284.32, "end": 1290.44, "text": " that's kind of behind the query and only look ahead in the sentence okay so" }, { "start": 1290.44, "end": 1294.56, "text": " that's why the sum of the attention matrix here is crossed out as you can" }, { "start": 1294.56, "end": 1300.8, "text": " see these are the X's this is exactly because we only search in one direction" }, { "start": 1300.8, "end": 1309.2, "text": " so from each from the token Dylan we can look at three things we can look at is a" }, { "start": 1309.2, "end": 1313.9199999999998, "text": " or songwriter and this the question is simply where do we go next with this" }, { "start": 1313.9199999999998, "end": 1317.56, "text": " algorithm right there's no interpretation yet it's simply where do" }, { "start": 1317.56, "end": 1323.44, "text": " we go next and the where do we go next is simply answered by just taking the" }, { "start": 1323.44, "end": 1328.3200000000002, "text": " highest scoring thing in that column of the attention matrix I look at the" }, { "start": 1328.3200000000002, "end": 1333.28, "text": " attention column where of the token Dylan I take the highest scoring one" }, { "start": 1333.28, "end": 1339.2, "text": " that's point three here is higher okay then I go to point three and that means" }, { "start": 1339.2, "end": 1350.44, "text": " is gets into my candidate fact okay and once I put ears into my candidate fact I" }, { "start": 
1350.44, "end": 1358.16, "text": " then go to is so the next thing I do is I go to is and then I again look in the" }, { "start": 1358.16, "end": 1363.92, "text": " corresponding attention column and I see what's now the biggest entry here and" }, { "start": 1363.92, "end": 1369.96, "text": " the biggest entry is point four which is songwriter and you can see here now we" }, { "start": 1369.96, "end": 1380.0800000000002, "text": " skip the a that's how we leave out some text okay by skipping it basically so you" }, { "start": 1380.08, "end": 1383.72, "text": " can see that this this can create artifacts right this can create like" }, { "start": 1383.72, "end": 1387.8799999999999, "text": " kind of holes in the middle and so on but we skip a we go directly to the" }, { "start": 1387.8799999999999, "end": 1393.6799999999998, "text": " point four and then we discover up the point for that is our tail so now we put" }, { "start": 1393.6799999999998, "end": 1400.84, "text": " our tail into here and since our tail is the last word we can stop the algorithm" }, { "start": 1400.84, "end": 1407.8799999999999, "text": " I yes so there is no need to to go on even if there were text behind the tail" }, { "start": 1407.88, "end": 1411.8400000000001, "text": " as soon as we are at the tail which we already know right we're given the head" }, { "start": 1411.8400000000001, "end": 1417.44, "text": " and tail we stop all right so the we simply go forward with always the" }, { "start": 1417.44, "end": 1422.0800000000002, "text": " biggest entry in the attention matrix until we reach the tail that's the" }, { "start": 1422.0800000000002, "end": 1431.46, "text": " algorithm this this there it's described here but it's kind of described in this" }, { "start": 1431.46, "end": 1438.8, "text": " in this way where it has these actions like start yield and like this maybe I'm" }, { "start": 1438.8, "end": 1442.8400000000001, "text": " not understanding something but it seems completely unnecessary to kind of" }, { "start": 1442.8400000000001, "end": 1448.3600000000001, "text": " describe these actions and and it basically start the search from the head" }, { "start": 1448.3600000000001, "end": 1452.76, "text": " the head is added as the initial candidate and so on then in yield it" }, { "start": 1452.76, "end": 1457.56, "text": " sometimes says with the largest score from the attention matrix is appended to" }, { "start": 1457.56, "end": 1466.48, "text": " the end to yield the new candidate and so on but still and then stop we stop" }, { "start": 1466.48, "end": 1472.44, "text": " and the algorithm description here it basically just says while we're not done" }, { "start": 1472.44, "end": 1481.8, "text": " if we're if it's not the stop action we continue it's it's sort of it doesn't" }, { "start": 1481.8, "end": 1486.1599999999999, "text": " tell you anything like this is this is a super unclear description of this" }, { "start": 1486.16, "end": 1489.88, "text": " algorithm basically the whole logic that you would want to know about is here in" }, { "start": 1489.88, "end": 1494.48, "text": " this action manager right so the action manager that gives you the action is" }, { "start": 1494.48, "end": 1500.64, "text": " doing the actual logic of figuring out which token you know you should do next" }, { "start": 1500.64, "end": 1504.22, "text": " and where you should go next and so on this is nowhere in the algorithm the" }, { "start": 1504.22, "end": 1509.48, "text": " algorithm just describes beam search so you can 
do this a little yeah the little" }, { "start": 1509.48, "end": 1513.2, "text": " more sophistication that comes in is that you don't do this deterministically" }, { "start": 1513.2, "end": 1518.6000000000001, "text": " but you actually do it via beam search okay but you can you can just" }, { "start": 1518.6000000000001, "end": 1525.0800000000002, "text": " generalize this all right so the description is a bit floppy with the" }, { "start": 1525.0800000000002, "end": 1533.8, "text": " whole actions and action manager and whatnot and not describing the only" }, { "start": 1533.8, "end": 1537.38, "text": " thing they don't describe formally is how actually to select the next token" }, { "start": 1537.38, "end": 1545.68, "text": " which is basically the entire kind of meat of the algorithm in any case you" }, { "start": 1545.68, "end": 1551.8400000000001, "text": " might this is something that confuses me right here so fair enough you know they" }, { "start": 1551.8400000000001, "end": 1557.2800000000002, "text": " say here we take the attention matrix and we cross out these X's all right but" }, { "start": 1557.2800000000002, "end": 1563.6000000000001, "text": " they say they can take things up here right they can take things like Bert and" }, { "start": 1563.6, "end": 1568.1999999999998, "text": " you know as I said fair Bert has a full attention matrix everything attends to" }, { "start": 1568.1999999999998, "end": 1572.36, "text": " everything but they can also take things like GPT-2 now GPT-2 is an" }, { "start": 1572.36, "end": 1578.9599999999998, "text": " autoregressive language model that means that in GPT-2 if you look at it" }, { "start": 1578.9599999999998, "end": 1586.08, "text": " then you produce each token one after another which means that when you" }, { "start": 1586.08, "end": 1594.96, "text": " produce so each token when you train or when you evaluate even each token can" }, { "start": 1594.96, "end": 1602.9199999999998, "text": " only attend to the things in front of it right you see that the problem with what" }, { "start": 1602.9199999999998, "end": 1609.3999999999999, "text": " this thing requires of this is also the same okay let's do that you see the" }, { "start": 1609.3999999999999, "end": 1615.58, "text": " problem with this method this method is the exact opposite each token attention" }, { "start": 1615.58, "end": 1621.6399999999999, "text": " matrix is deleted such that only the entries ahead of it are in the attention" }, { "start": 1621.6399999999999, "end": 1629.6799999999998, "text": " matrix you don't actually get GPT-2 to give you an attention matrix that looks" }, { "start": 1629.6799999999998, "end": 1637, "text": " ahead because it only ever looks behind so maybe maybe what's happening is that" }, { "start": 1637, "end": 1645.28, "text": " the query and key matrices are switched up in some way in that case when we want" }, { "start": 1645.28, "end": 1652.8, "text": " to interpret the algorithm the way they write it down is if I am at a particular" }, { "start": 1652.8, "end": 1660.44, "text": " part of what I think is the relation between the two entities how am I going" }, { "start": 1660.44, "end": 1665.72, "text": " to find whether or not there is more to the relation right there could be a" }, { "start": 1665.72, "end": 1675.52, "text": " it could be a multi-word relation like has a child with or I don't know can't" }, { "start": 1675.52, "end": 1679.96, "text": " think of any multi-word relations or whether we kind of are done with the" }, { 
"start": 1679.96, "end": 1686.4, "text": " relation and go to the to the tail what this thing is saying is that we should" }, { "start": 1686.4, "end": 1692.66, "text": " look at the the language model so if if this is really how it is here and you" }, { "start": 1692.66, "end": 1698.28, "text": " are at the word is what you want to know if this is BERT if this is a BERT" }, { "start": 1698.28, "end": 1704, "text": " language model what you want to know is if I were to cross out is if I were to" }, { "start": 1704, "end": 1711.2, "text": " delete this word which other words in the sentence right here that are ahead" }, { "start": 1711.2, "end": 1719.0400000000002, "text": " of me are very very informative to predict this particular word and that's" }, { "start": 1719.04, "end": 1725.12, "text": " that's kind of the query style and you know if the answer turns out to be" }, { "start": 1725.12, "end": 1729.8799999999999, "text": " songwriter is quite important for that maybe Dylan is too but we only look" }, { "start": 1729.8799999999999, "end": 1735.2, "text": " ahead if it turns out a the word a is not as important as the word songwriter" }, { "start": 1735.2, "end": 1740.48, "text": " right because songwriter yeah it gives an indication that there should be is" }, { "start": 1740.48, "end": 1744.32, "text": " because songwriter is kind of a profession and there's a person in front" }, { "start": 1744.32, "end": 1749.72, "text": " of it we don't look at that but the attention matrix would would have that in" }, { "start": 1749.72, "end": 1757.48, "text": " mind if that's valid right so that's how this this construction is made however" }, { "start": 1757.48, "end": 1763.56, "text": " if this is the key we have to think of the other way around if we are at is we" }, { "start": 1763.56, "end": 1770.2, "text": " look ahead and say if I were to delete the word a could I reconstructed how" }, { "start": 1770.2, "end": 1775.8400000000001, "text": " well could I reconstruct it from this word is or if I delete songwriter how" }, { "start": 1775.8400000000001, "end": 1781.2, "text": " well could I reconstruct that from the word is I think both are you know there" }, { "start": 1781.2, "end": 1787.64, "text": " is interpretations probably for both of these methods but what I want kind of to" }, { "start": 1787.64, "end": 1793.96, "text": " convey is that none of these things are really amenable to constructing a" }, { "start": 1793.96, "end": 1797.88, "text": " knowledge graph it's it's quite interesting that this stuff actually" }, { "start": 1797.88, "end": 1804.0800000000002, "text": " works because all it asks is how well does one word inform about the presence" }, { "start": 1804.0800000000002, "end": 1811.16, "text": " or how well can one word predict another word and from that information we" }, { "start": 1811.16, "end": 1816.2800000000002, "text": " construct this knowledge graph which probably is a testament to the fact that" }, { "start": 1816.2800000000002, "end": 1823.0800000000002, "text": " knowledge graphs maybe aren't so much about knowledge if you extract them from" }, { "start": 1823.0800000000002, "end": 1827.64, "text": " a corpus but more about grammar I would think that's the thing that goes on here" }, { "start": 1827.64, "end": 1832.68, "text": " because these language models are a lot about grammar right a lot about how" }, { "start": 1832.68, "end": 1837.5200000000002, "text": " different words appear together frequently so given that songwriter is" }, { "start": 
1837.5200000000002, "end": 1841.3200000000002, "text": " kind of a mix between grammar and basic word knowledge given that songwriter is" }, { "start": 1841.3200000000002, "end": 1846.76, "text": " kind of an object here the word is being the verb is probably quite important for" }, { "start": 1846.76, "end": 1854.44, "text": " it and that's exactly these these triples they always appear a bit like" }, { "start": 1854.44, "end": 1860.6000000000001, "text": " in of compressed sentences and which which are very grammatically relevant so" }, { "start": 1860.6000000000001, "end": 1866.48, "text": " I'm not buying these hypothesis that there is much knowledge in these" }, { "start": 1866.48, "end": 1870.5800000000002, "text": " language models and that's why this works what I much rather think is that" }, { "start": 1870.5800000000002, "end": 1874.4, "text": " they are really really really good at a kind of grammar and statistical" }, { "start": 1874.4, "end": 1879.76, "text": " association between words across the language and that's why they can extract" }, { "start": 1879.76, "end": 1887.4, "text": " these candidates facts so well okay so that's what I think about the algorithm" }, { "start": 1887.4, "end": 1892.4, "text": " they do constrain it some more as if it doesn't already have enough constraints" }, { "start": 1892.4, "end": 1898.44, "text": " but they all make sense okay so they say the matching degree which is simply the" }, { "start": 1898.44, "end": 1903.04, "text": " sum of all these attention matrix entries that we've encountered during" }, { "start": 1903.04, "end": 1908.76, "text": " our search so all the ones we didn't skip or to count it together or the" }, { "start": 1908.76, "end": 1914.76, "text": " matching degree of this triple the matching degree must be above some" }, { "start": 1914.76, "end": 1920.2, "text": " threshold that's the first constraint because so they give an example right" }, { "start": 1920.2, "end": 1924.84, "text": " here for the sentence rolling stone wrote no other pop song has so far only" }, { "start": 1924.84, "end": 1930.12, "text": " challenged artistic conventions and the extracted candidate fact is rolling" }, { "start": 1930.12, "end": 1937.96, "text": " stone wrote pop song again you can kind of see here it's mostly going in into" }, { "start": 1937.96, "end": 1944.1200000000001, "text": " into grammar ish so spacey extracts rolling stone and pop song and the" }, { "start": 1944.1200000000001, "end": 1952.76, "text": " language model here extracts like the only verb in between wrote so yeah to" }, { "start": 1952.76, "end": 1962.1200000000001, "text": " to limit to kind of limit the the to limit the matching degree to say it must" }, { "start": 1962.12, "end": 1969.1599999999999, "text": " be at minimum kind of some some number it makes a lot of sense because if the" }, { "start": 1969.1599999999999, "end": 1974.32, "text": " matching degree is high that means if we go by this attention matrix it means" }, { "start": 1974.32, "end": 1980.76, "text": " that these words that are in the candidate fact they kind of as themselves" }, { "start": 1980.76, "end": 1985.6399999999999, "text": " they follow from each other so the language model thinks that wrote is a" }, { "start": 1985.64, "end": 1992.0800000000002, "text": " very good follow to rolling stone and pop song is a very good follow for wrote" }, { "start": 1992.0800000000002, "end": 1996.3600000000001, "text": " or the other way around depending on which way the attention matrix is but" 
}, { "start": 1996.3600000000001, "end": 2002.5600000000002, "text": " that's kind of the language model thinks that that these words together make" }, { "start": 2002.5600000000002, "end": 2008.4, "text": " sense in the context of the sentence of course like in the context of this" }, { "start": 2008.4, "end": 2013.6000000000001, "text": " entire sentence so as I said it's sort of can think of it as a bit of a" }, { "start": 2013.6, "end": 2021.1599999999999, "text": " summarization paper but with more constraints constraint number two is" }, { "start": 2021.1599999999999, "end": 2029.6, "text": " that the frequency of R is above a threshold so the relation itself" }, { "start": 2029.6, "end": 2033.84, "text": " shouldn't be too specific it actually should appear a bunch of times in the" }, { "start": 2033.84, "end": 2038.12, "text": " corpus so what you do is you know you go through the corpus once extract all the" }, { "start": 2038.12, "end": 2044.32, "text": " facts my pen just dropped you extract all the facts or the all these candidates" }, { "start": 2044.32, "end": 2049.6, "text": " and then you you kind of count them and go through the candidate facts again and" }, { "start": 2049.6, "end": 2054.3599999999997, "text": " delete all the ones that are below a certain thing that's people usually do" }, { "start": 2054.3599999999997, "end": 2058.7599999999998, "text": " this with things like stop words or rare words and so on it's pretty standard" }, { "start": 2058.7599999999998, "end": 2065.72, "text": " makes a lot of sense and constraint number three relation or is a contiguous" }, { "start": 2065.72, "end": 2071.8799999999997, "text": " sequence in the sentence okay so you have an example here from the same" }, { "start": 2071.8799999999997, "end": 2076.7999999999997, "text": " Rolling Stone wrote challenged conventions which the language model" }, { "start": 2076.7999999999997, "end": 2081.24, "text": " would like to extract because again these in the context of that sentence" }, { "start": 2081.24, "end": 2085.56, "text": " these words sort of you know they jump to each other in the attention matrix" }, { "start": 2085.56, "end": 2091, "text": " because you can predict them from each other very well but they say this must" }, { "start": 2091, "end": 2097.6, "text": " be a contiguous sequence so what I said before I said this could happen with" }, { "start": 2097.6, "end": 2104.4, "text": " this constraint they excluded okay so for the second part where they actually" }, { "start": 2104.4, "end": 2110.8, "text": " have to map a candidate fact to a fact in the schema as I said they use kind of" }, { "start": 2110.8, "end": 2118.12, "text": " pre pre-made solutions entity linking and relation mapping with the schema I" }, { "start": 2118.12, "end": 2126.7599999999998, "text": " won't go into this except to say that whenever they find a match they say that" }, { "start": 2126.7599999999998, "end": 2131.7799999999997, "text": " this is a mapped fact whenever they don't find a match they say oh this is" }, { "start": 2131.7799999999997, "end": 2137.52, "text": " an unmapped fact okay an unmapped candidate means that at least one of H" }, { "start": 2137.52, "end": 2142.68, "text": " RNT is not mapped to the schema there are two types partially unmapped facts" }, { "start": 2142.68, "end": 2149.2, "text": " is where some are mapped and completely unmapped facts indicate that all H RNT" }, { "start": 2149.2, "end": 2155.7599999999998, "text": " are not mapped to the schema okay for 
example Jacob was a registered" }, { "start": 2155.7599999999998, "end": 2165.04, "text": " Mennonite now here they so they they say they have these different facts and you" }, { "start": 2165.04, "end": 2170.3599999999997, "text": " know it's a cool thing if a model like this can actually come up with new fact" }, { "start": 2170.36, "end": 2175.08, "text": " not so not only new mapped facts which is something you would expect right if" }, { "start": 2175.08, "end": 2179.96, "text": " humans provide some kind of a schema then build a knowledge graph this is" }, { "start": 2179.96, "end": 2184.9, "text": " never complete so if you can automatically kind of fill in missing" }, { "start": 2184.9, "end": 2191.08, "text": " facts that's very very cool though I would say humans if you construct" }, { "start": 2191.08, "end": 2194.44, "text": " knowledge graphs humans should probably also build kind of like negative" }, { "start": 2194.44, "end": 2204.36, "text": " connections saying like yes it is conceivable that Elvis was a vegan" }, { "start": 2204.36, "end": 2209.8, "text": " because a lot of texts talk about it but in fact it is explicitly not I don't" }, { "start": 2209.8, "end": 2214.4, "text": " think that's what we have in the knowledge graph so far but it would be" }, { "start": 2214.4, "end": 2221.2400000000002, "text": " cool if this model could fill in new facts yes to the schema it would also be" }, { "start": 2221.24, "end": 2226.4799999999996, "text": " cool if it could uncover completely new relations that haven't they hadn't been" }, { "start": 2226.4799999999996, "end": 2233.12, "text": " considered by the human makers of the knowledge graph like if the knowledge" }, { "start": 2233.12, "end": 2238.72, "text": " graph itself is incomplete the schema is a man you know same argument the schema" }, { "start": 2238.72, "end": 2245.6, "text": " is probably also incomplete this paper is sort of trying to sell their system" }, { "start": 2245.6, "end": 2251.88, "text": " as something that can do that and I believe that to a degree but also also" }, { "start": 2251.88, "end": 2260.8399999999997, "text": " Jacob was a registered Mennonite okay now maybe I'm completely wrong from the" }, { "start": 2260.8399999999997, "end": 2264.72, "text": " sentence Jacob was a registered Mennonite in Amsterdam I might be" }, { "start": 2264.72, "end": 2273.04, "text": " completely wrong but Mennonite is a religion I think and I'm very very sure" }, { "start": 2273.04, "end": 2279.4, "text": " that any of these knowledge graphs with the schemas that they have have being in" }, { "start": 2279.4, "end": 2285.56, "text": " a religion or being of a certain faith in their relations table somewhere and" }, { "start": 2285.56, "end": 2290.24, "text": " I'm also pretty sure that Mennonite large enough that that would actually" }, { "start": 2290.24, "end": 2295.52, "text": " appear as an entity maybe Jacob not right maybe Jacob is an unknown Jacob we" }, { "start": 2295.52, "end": 2302.92, "text": " don't know who Jacob is but this seems more like a failure of the entity linker" }, { "start": 2302.92, "end": 2311, "text": " and relation linker than an uncovered new relation or an uncovered new entity" }, { "start": 2311, "end": 2318.16, "text": " so yeah take this stuff with a grin now they they are very honest about this but" }, { "start": 2318.16, "end": 2324.84, "text": " just to say that that's probably what happens most often so here you can see" }, { "start": 2324.84, "end": 2330.48, "text": " the 
graph for Bob Dylan constructed from the Wikipedia pages that are kind of" }, { "start": 2330.48, "end": 2336.32, "text": " they say around the page of Bob Dylan so I guess one or two or three hops away" }, { "start": 2336.32, "end": 2343, "text": " something like this and you can see the blue stuff is stuff that we already knew" }, { "start": 2343, "end": 2348.92, "text": " so that the human humans also found when looking at this then yellow stuff I" }, { "start": 2348.92, "end": 2354, "text": " believe is either new relations so whenever things are annotated it's a new" }, { "start": 2354, "end": 2358.26, "text": " relation in the schema so you can see this is an entity in the schema because" }, { "start": 2358.26, "end": 2364.28, "text": " it's annotated this is a relation in the schema but the arrow is new so the" }, { "start": 2364.28, "end": 2369.32, "text": " humans hadn't yet extracted the fact that Bob Dylan was or was a member of" }, { "start": 2369.32, "end": 2375.76, "text": " artists united against apartheid then the yellow also sometimes means that" }, { "start": 2375.76, "end": 2381.48, "text": " there is a new thing so here tour with is a relation that's extracted that is" }, { "start": 2381.48, "end": 2388.76, "text": " not in the knowledge graph yet also this one and you can it's pretty it's pretty" }, { "start": 2388.76, "end": 2392.4, "text": " cool right that you can extract these things automatically there's a lot of" }, { "start": 2392.4, "end": 2396.64, "text": " yellow stuff here which means there is not a lot of new information that this" }, { "start": 2396.64, "end": 2400.52, "text": " extracted and a lot of this new information is actually mapped to the" }, { "start": 2400.52, "end": 2405.56, "text": " schema right Bob Dylan residents in Duluth I don't know how to pronounce" }, { "start": 2405.56, "end": 2416.12, "text": " that by the way yes so so that's that's fairly fairly cool they do some of these" }, { "start": 2416.12, "end": 2420.52, "text": " tasks of these knowledge-based tasks in these tasks what you'd have I believe" }, { "start": 2420.52, "end": 2426.92, "text": " what you'd have is always you'd have like a head and a relation given so you" }, { "start": 2426.92, "end": 2433, "text": " have a document and you are given a head and a relation and you're asked what's" }, { "start": 2433, "end": 2438.96, "text": " the tail of this and then you ask the system and the system will tell you so" }, { "start": 2438.96, "end": 2442.44, "text": " you have these baselines and these baselines I believe they are specifically" }, { "start": 2442.44, "end": 2446.4, "text": " made to extract these knowledge representations they might even be" }, { "start": 2446.4, "end": 2451.76, "text": " trained I don't I don't know that but you can see that the MAMA even the even" }, { "start": 2451.76, "end": 2458.76, "text": " the smallest one here beats those by quite a bit now you can see that the" }, { "start": 2458.76, "end": 2464.5600000000004, "text": " recall is significantly lower than the precision which is a direct result of" }, { "start": 2464.5600000000004, "end": 2471.5200000000004, "text": " how many constraints on the system there are and tells you sort of what the going" }, { "start": 2471.5200000000004, "end": 2481.1200000000003, "text": " forward what the improvements can be so they analyze a lot of this and yeah so" }, { "start": 2481.1200000000003, "end": 2484.92, "text": " a first recognition is that larger and deeper language models produce knowledge" }, { 
"start": 2484.92, "end": 2489.7200000000003, "text": " graphs of higher quality BERT language models outperform GPT-2 language" }, { "start": 2489.7200000000003, "end": 2497.64, "text": " models under similar model sizes which is interesting is scalable to larger" }, { "start": 2497.64, "end": 2502.76, "text": " corpora which again as we said you don't need to train it and larger corpora" }, { "start": 2502.76, "end": 2508.12, "text": " embed more complete knowledge graphs which is something we would expect the" }, { "start": 2508.12, "end": 2511.4, "text": " other interesting part is the unmapped fact so the numbers you can actually" }, { "start": 2511.4, "end": 2515.4, "text": " compute only for the mapped facts right because that's where you have data" }, { "start": 2515.4, "end": 2520.7200000000003, "text": " humans produce the knowledge graphs from this that's what you can compare with" }, { "start": 2520.7200000000003, "end": 2527.36, "text": " now the unmapped facts they say they analyze we turn to study the quality of" }, { "start": 2527.36, "end": 2530.96, "text": " the candidate facts that are not mapped to the above reference knowledge graph" }, { "start": 2530.96, "end": 2537.2000000000003, "text": " schema but are in the open schema generated by MAMA that's mama we" }, { "start": 2537.2, "end": 2543.12, "text": " manually judge such unmapped facts generated by our best method from 100" }, { "start": 2543.12, "end": 2548.96, "text": " sample documents in wikidata and TAC KBP respectively so they they go as" }, { "start": 2548.96, "end": 2552.9199999999996, "text": " researchers they look at these things and they judge them whether or not" }, { "start": 2552.9199999999996, "end": 2559.24, "text": " they're true given these documents in Wikipedia they say the quality of" }, { "start": 2559.24, "end": 2564.7599999999998, "text": " unmapped facts is very for that so that the claim is that they've looked at them" }, { "start": 2564.76, "end": 2573.1200000000003, "text": " and they are good we find that 35.3% of the unmapped facts are true on wikidata" }, { "start": 2573.1200000000003, "end": 2580.36, "text": " we find that 83.2% of those true facts are partially unmapped facts for" }, { "start": 2580.36, "end": 2586.0400000000004, "text": " example Bob Dylan tour with the Grateful Dead and yeah here is an if this really" }, { "start": 2586.0400000000004, "end": 2591.1600000000003, "text": " isn't in the schema right this is a nice relation that you might think humans" }, { "start": 2591.16, "end": 2595.2799999999997, "text": " would miss because touring with someone is not the first thing that would come" }, { "start": 2595.2799999999997, "end": 2599.44, "text": " to mind if you had to come up with a bunch of relations between entities but" }, { "start": 2599.44, "end": 2605.8399999999997, "text": " it is something that is regularly useful regularly used for musicians so that is" }, { "start": 2605.8399999999997, "end": 2609.96, "text": " an application where certainly an automated system can even extend the" }, { "start": 2609.96, "end": 2617.3999999999996, "text": " schema right whose relation is not within the scheme of wikidata well both head" }, { "start": 2617.4, "end": 2623.08, "text": " and tail are in the schema the register the remaining true facts are completely" }, { "start": 2623.08, "end": 2629.56, "text": " unmapped facts for example this red Jacob was a registered men and I and they" }, { "start": 2629.56, "end": 2634.88, "text": " also say accurate entity detection 
is desired where they say a lot of the" }, { "start": 2634.88, "end": 2641.92, "text": " errors are due to spacey detecting wrong incorrect entities or due to incorrect" }, { "start": 2641.92, "end": 2650.6, "text": " or missing entity linking by the by that those systems the rest errors made by" }, { "start": 2650.6, "end": 2656.48, "text": " mama are incorrect relation phrases such as uninformative relation phrases for" }, { "start": 2656.48, "end": 2661.64, "text": " example Bob Dylan made and his breakthrough oh what can you do what" }, { "start": 2661.64, "end": 2670.16, "text": " other what other one what other verb would you put there yeah but okay we're" }, { "start": 2670.16, "end": 2677.7999999999997, "text": " going to look at a few last things right here they have a bunch of a bunch of" }, { "start": 2677.7999999999997, "end": 2682.48, "text": " experiments right here which where they show you know the beam size has an" }, { "start": 2682.48, "end": 2687.24, "text": " influence this constraint number one and number two that we looked at has an" }, { "start": 2687.24, "end": 2692.68, "text": " influence right so you can tune these things a bit what is interesting here is" }, { "start": 2692.68, "end": 2699.56, "text": " that they try they try to look at either the attention matrix of the last or of" }, { "start": 2699.56, "end": 2705.2, "text": " all the layers and interestingly the system performs better if you only look" }, { "start": 2705.2, "end": 2709.04, "text": " at the attention matrix in the last layer now they reduce that attention" }, { "start": 2709.04, "end": 2713.32, "text": " layer because there are multiple heads using max or mean and see they perform" }, { "start": 2713.32, "end": 2719.2799999999997, "text": " similarly but it is interesting that only the last and they argue they argue" }, { "start": 2719.2799999999997, "end": 2723.88, "text": " in the text that we know that the last layers kind of have higher level" }, { "start": 2723.88, "end": 2729.56, "text": " features than the lower layers but I recall there are multiple papers like" }, { "start": 2729.56, "end": 2734.52, "text": " I've done videos about them what does Bert learn and so on I think even" }, { "start": 2734.52, "end": 2739.1400000000003, "text": " something in constraint in conjunction with lottery tickets and so on that show" }, { "start": 2739.1400000000003, "end": 2746.96, "text": " that in a transformer at least I think it is the middle layers that encode the" }, { "start": 2746.96, "end": 2752.6800000000003, "text": " most kind of semantic knowledge because the lower ones yes they are for kind of" }, { "start": 2752.68, "end": 2758.2799999999997, "text": " low-level features but the upper ones they are again for low-level features" }, { "start": 2758.2799999999997, "end": 2764.12, "text": " because the task right here at the end is to predict an individual word or" }, { "start": 2764.12, "end": 2768.52, "text": " token right so you'd expect that the features in the attention matrix there" }, { "start": 2768.52, "end": 2772.56, "text": " are go back to kind of sort of more grammatical features and so on and that" }, { "start": 2772.56, "end": 2777.2799999999997, "text": " the highest level features are actually somewhere in the middle I don't know if" }, { "start": 2777.2799999999997, "end": 2781.7599999999998, "text": " they tested if they only tested like all versus last in which case yeah I" }, { "start": 2781.76, "end": 2786.96, "text": " believe that but if they tested each one 
individually and it still turned out" }, { "start": 2791.16, "end": 2795.44, "text": " that last is the best that would kind of add to my hypothesis that what happens" }, { "start": 2795.44, "end": 2801.6400000000003, "text": " here is more kind of a grammatical effect of extracting the correct" }, { "start": 2801.6400000000003, "end": 2809.5600000000004, "text": " candidate verb in between the head and the tail all right so that" }, { "start": 2809.56, "end": 2813.7599999999998, "text": " kind of gives more weight to my hypothesis so to repeat, my" }, { "start": 2813.7599999999998, "end": 2819.68, "text": " hypothesis is that it's kind of a grammatical thing that's going on here" }, { "start": 2819.68, "end": 2824.32, "text": " because the only task of this model is basically to find the correct string" }, { "start": 2824.32, "end": 2833.4, "text": " span for the relation between head and tail because it's already given head and" }, { "start": 2833.4, "end": 2837.32, "text": " tail from the text; their hypothesis is more like the language" }, { "start": 2837.32, "end": 2841.1600000000003, "text": " models have a lot of knowledge built into them and we can extract that" }, { "start": 2841.1600000000003, "end": 2848.76, "text": " knowledge, they make it sound like the language model has this" }, { "start": 2848.76, "end": 2856.6800000000003, "text": " semantic knowledge in it okay so let's look at a bunch of mapped facts" }, { "start": 2856.6800000000003, "end": 2861.92, "text": " right here, you can maybe check out a lot of them yourself but" }, { "start": 2861.92, "end": 2865.8, "text": " we'll just look at like one in each category: blah blah mail yada yada yada" }, { "start": 2865.8, "end": 2874.44, "text": " yada is in worse shape however a Klaus told press conference at the Western" }, { "start": 2874.44, "end": 2879.6800000000003, "text": " city of Essen where the other yada yada, and it extracts this company and it maps" }, { "start": 2879.6800000000003, "end": 2884.8, "text": " it to the city of headquarters, maybe they leave out some text here; what I" }, { "start": 2884.8, "end": 2891.04, "text": " want to get to is the unmapped facts, where are the unmapped facts, to" }, { "start": 2891.04, "end": 2897.12, "text": " just kind of show you: mapped facts, unmapped facts, okay so the unmapped" }, { "start": 2897.12, "end": 2904.08, "text": " facts, what I feel, and you can judge for yourself please, what I feel, just to" }, { "start": 2904.08, "end": 2915.44, "text": " pre-bias you before we look at them, is that a lot of times it simply extracts" }, { "start": 2915.44, "end": 2920.7599999999998, "text": " things that it simply can't" }, { "start": 2920.76, "end": 2924.36, "text": " assign, right, it's a failure to assign, it's not a new thing, because" }, { "start": 2924.36, "end": 2929.0800000000004, "text": " in these schemas, like you haven't seen the schemas but you kind of get a feel," }, { "start": 2929.0800000000004, "end": 2934.6000000000004, "text": " the last, which is the last table, you kind of get a feel of what is contained in" }, { "start": 2934.6000000000004, "end": 2942.88, "text": " it, so maybe get a feel for what, okay:" }, { "start": 2934.6000000000004, "end": 2942.88, "text": " Ernst Haeckel was born 16th of February 1834 in Potsdam, okay, so the extracted" }, { "start": 2942.88, "end": 2950.7200000000003, "text": " thing is Haeckel was born on 17th of February 1833 in Potsdam okay so that" }, { "start": 2950.72, "end": 2956.68, "text": " it maps to, this is in the knowledge base schema, this is in the schema, but 'was" }, { "start": 2956.68, "end": 2963, "text": " born on 17th of February 1833 in' is simply a failure of the relation linker" }, { "start": 2963, "end": 2975, "text": " okay he was also a pacifist until the First World War yada yada yada then" }, { "start": 2975, "end": 2980.76, "text": " Ernst Haeckel, and then 'was a' and 'a pacifist' are both not in the schema now" }, { "start": 2980.76, "end": 2988.2, "text": " maybe pacifism isn't in the schema, though I would guess pacifism has" }, { "start": 2988.2, "end": 2994.88, "text": " a Wikipedia page so it must be in the schema because it's in wikidata, but 'was'," }, { "start": 2994.88, "end": 3001.72, "text": " as you know, the relation here would be something like a political leaning or" }, { "start": 3001.72, "end": 3007.6, "text": " something like this which is certainly in the knowledge base right" }, { "start": 3007.6, "end": 3016.4399999999996, "text": " then you have things like Haeckel was awarded the title of excellency so you" }, { "start": 3016.4399999999996, "end": 3022.3999999999996, "text": " have correctly Haeckel again recognized, award received is in the schema, nice," }, { "start": 3022.3999999999996, "end": 3029.04, "text": " excellency as a tail, and excellency, you know, what do you want, like this" }, { "start": 3029.04, "end": 3038.36, "text": " is not a fact right, the award or the title of excellency" }, { "start": 3038.36, "end": 3044.08, "text": " would be kind of the thing so this is a failure of spaCy so again I've" }, { "start": 3044.08, "end": 3051.7599999999998, "text": " seen few facts here that would actually be a genuine addition" }, { "start": 3051.7599999999998, "end": 3057.2799999999997, "text": " to the schema that should be considered and I absolutely believe that the schema" }, { "start": 3057.28, "end": 3062.76, "text": " is incomplete, don't get me wrong, like 100% the schema is probably less than" }, { "start": 3062.76, "end": 3068.1200000000003, "text": " 1% of what it should be right if we did a thorough job I just don't think that" }, { "start": 3068.1200000000003, "end": 3076.0400000000004, "text": " this system here is good at that, like I think that the things that this system comes" }, { "start": 3076.0400000000004, "end": 3084.1600000000003, "text": " up with mostly are simply failures of its subsystems rather than genuinely new" }, { "start": 3084.16, "end": 3090.08, "text": " entries to the schema; that's different from when it" }, { "start": 3090.08, "end": 3095.7999999999997, "text": " genuinely discovers a new mapping between already established things for example Pauline" }, { "start": 3095.7999999999997, "end": 3103.2999999999997, "text": " Baynes educated at this college right so these are new facts that all fit in the" }, { "start": 3103.2999999999997, "end": 3110.2, "text": " schema and the system might be very very nice for that all right so that was my" }, { "start": 3110.2, "end": 3117.2799999999997, "text": " kind of estimation of this paper I hope I didn't rag on it too much as I said" }, { "start": 3117.2799999999997, "end": 3124.3999999999996, "text": " it's very cool work actually and look at this, the appendix is giant, go look at it," }, { "start": 3124.3999999999996, "end": 3129, "text": " check it out please tell me what 
you think about it in the comments any" }, { "start": 3129, "end": 3140.64, "text": " feedback is welcome and I will see you next time bye bye" } ]
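To make the match step that the transcript above walks through a bit more concrete, here is a minimal sketch of the greedy (beam size 1) attention walk from head to tail, with the matching-degree threshold from constraint number one. This is my own reconstruction for illustration, not the paper's code: the function name, the toy attention values, and the threshold value are assumptions, and which axis of the attention matrix plays query versus key depends on the model, as discussed in the transcript.

```python
import numpy as np

def match_candidate(tokens, attention, head_idx, tail_idx, threshold=0.05):
    # Greedy walk: starting at the head token, repeatedly jump to the
    # highest-attention token strictly ahead of the current position
    # (the "crossed-out" matrix), until the tail is reached. The tokens
    # visited in between form the candidate relation string.
    position = head_idx
    relation = []
    degree = 0.0  # matching degree: sum of the attention scores traversed
    while position < tail_idx:
        scores = attention[position, position + 1:]  # only look ahead
        step = position + 1 + int(np.argmax(scores))
        degree += float(np.max(scores))
        if step >= tail_idx:  # stop once we hit (or pass) the tail
            break
        relation.append(tokens[step])  # skipped tokens simply never get appended
        position = step
    if degree < threshold:  # constraint 1: minimum matching degree
        return None
    return (tokens[head_idx], " ".join(relation), tokens[tail_idx])

# Toy run on "Dylan is a songwriter" with a hand-made attention matrix,
# mirroring the transcript's example (0.3 to "is", then 0.4 to "songwriter").
tokens = ["Dylan", "is", "a", "songwriter"]
attention = np.array([
    [0.0, 0.3, 0.1, 0.2],
    [0.0, 0.0, 0.1, 0.4],
    [0.0, 0.0, 0.0, 0.2],
    [0.0, 0.0, 0.0, 0.0],
])
print(match_candidate(tokens, attention, head_idx=0, tail_idx=3))
# ('Dylan', 'is', 'songwriter') with matching degree 0.7; "a" gets skipped
```

The paper's actual search is a beam search over these jumps rather than a single greedy path, and it additionally filters candidates by relation frequency (constraint two) and by requiring the relation to be a contiguous span (constraint three), as described above.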
nXGHJTtFYRU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Dynamic Routing Between Capsules
[ "Science & Technology" ]
[ "machine learning", "deep learning", "capsules", "capsule networks", "google brain", "hinton", "jeff hinton", "geoff hinton", "routing", "neural networks", "convolution", "convolutional neural networks", "deep neural networks", "cnns", "mnist", "multimnist", "disentanglement", "architecture", "reconstruction", "alternative", "dnn", "ml", "ai", "artificial intelligence", "brain", "visual system", "classifier", "image", "nonlinearity", "entities", "objects", "capsule", "network" ]
Geoff Hinton's next big idea! Capsule Networks are an alternative way of implementing neural networks by dividing each layer into capsules. Each capsule is responsible for detecting the presence and properties of one particular entity in the input sample. This information is then allocated dynamically to higher-level capsules in a novel and unconventional routing scheme. While Capsule Networks are still in their infancy, they are an exciting and promising new direction.  Abstract: A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule. Authors: Sara Sabour, Nicholas Frosst, Geoffrey E Hinton https://arxiv.org/abs/1710.09829 YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Minds: https://www.minds.com/ykilcher BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/
Hi there! Today we're looking at Dynamic Routing Between Capsules by Sara Sabour, Nicholas Frosst and Geoffrey Hinton of Google Brain. This paper is a bit older, but it made quite the impact at the time, so we'll go through it. I find this a pretty hard paper to read and understand, because a lot of things are very implicit and hand-wavy, so we'll go through it and try to get the best out of it: try to explain what capsules are, what they do, and how they stack up against current networks. A capsule network, in essence, is a new type of neural network made of capsules. It says here that a capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity, such as an object or an object part. That's kind of cryptic, so let me try to draw one. In a capsule network you have what are called capsules, which you can imagine as just little blobs of things, and they are also ordered in layers; let's leave away the second layer for now. Each of these capsules will correspond to an entity in the input. Let's say the input is an image, so somewhere here there is an image. Then maybe this capsule here will be responsible for detecting whether there is a wall in the image, this one for detecting whether there is a roof, this one for a door, and this one will be responsible for detecting whether there is a lake in the image. Now, on one hand, each of these capsules can either be high or low. So if you imagine a situation where wall is high, roof is high, door is high and lake is low, the image probably has a house on it. But second of all, a capsule can not only predict whether or not a given entity is present in the image; the individual capsules are also responsible for encoding the exact way or shape or form that this entity takes. So the wall could have different aspects, such as a color, say green; it could have a size, say tall; it could have an orientation, say vertical. The roof could have an angle, say wide, so it's a wide roof rather than a flat roof. These are the kinds of attributes of these things that the capsules would also encode. So ultimately, what these capsules they are proposing will output is a vector: the roof capsule here, for example, would output a vector. Let me draw a coordinate system. The length of the vector, this norm here, will represent the probability that the roof is in the image, that a roof is an element of this input image. That is simply the length, and the individual coordinates will encode the attributes. So this axis here, for example, could be the angle of the roof, and this axis could be the color. Let's say the angle is some degree number that can be positive or negative: a roof can be flat, or it can have a very narrow angle, so you can imagine something like this. And the color could also be parameterized on one dimension; the vector can have more dimensions than two, I just can't draw more. So, depending on where this arrow points: this vector here encodes the same probability that there is a roof in the image as that one, but the color will be different.
The angle will be the same, because they lie at roughly the same angle to this axis here, but the color component will encode a differently colored roof. And if the vector is something like this, a very short vector, it will encode the same angle and color directions. Maybe I shouldn't say the position on the axis; it's more like this angle and this angle that encode the attributes. So the angular components, if you will, encode the attributes, and the length encodes the probability. This small vector has the same direction in terms of color and angle of the roof, but it's much less probable, much less likely. So if the capsule outputs the little blue vector here, it says: well, if there is a roof, it's going to be this color at this angle, but I really don't think there's a roof in this image. Whereas if it outputs the large green one, it says: I'm pretty sure there's a roof, and it's going to be at this angle and in this color. Alright, so that is what each capsule is supposed to do: each capsule takes the input and outputs a vector that encodes, A, whether the entity that the capsule is responsible for is present in the image, and B, what properties this entity has. Then we get to the point where there's the next layer of capsules. Each capsule in the next layer takes information from each capsule in the lower layer, like you're used to from your neural network, and integrates this information; we'll talk about how this works. All of these are vectors now that come from the lower layer, and again each capsule in this next layer is responsible for an entity. These entities in the higher layers are usually composite entities of the lower layers. So this one here could be responsible for house, this one could be responsible for national park, and this one could be responsible for beach, or something like this. Each of these will integrate all of this information from the lower layers and then come up with its own output vector, encoding whether or not a given entity is present in the image. Of course, if there is a door, a roof and a wall in the image, the house capsule will pick up on that, or at least that's how it's meant to work, and then itself output a large vector saying there's probably a house in this image. So each of these capsules by itself is responsible for encoding the presence and attributes of an object, an object part, or an entity or part of an entity in the given input data. And the last layer here will simply be your classification layer: in the last layer you have as many capsules as you have classes in your classification task. So this is mainly for classification tasks, and then you can train the whole system like this. How exactly this happens, we'll see next. Alright, they make some analogies to the visual system and so on. We'll jump over these; everyone that does deep learning is in some way trying to make that analogy. We'll rather go into the specifics of how these capsules work and their specific suggestions for them. Note that they say this is in no way the only implementation of capsules; it's just an example to show how one could do it. Alright, so first of all they present what you might call their non-linearity.
What their non-linearity needs to do is the following. If you look at these capsule networks, the lengths of the output vectors are supposed to represent probabilities, so they need to be constrained. Initially we simply specified that the output is a vector, and in essence these capsules are implemented in much the same way a classic neural network layer would be implemented: each of these capsules is essentially a neural network layer by itself that outputs a vector, and nothing constrains the length of that vector initially. So their non-linearity constrains the vector to have a length of at most 1 and at least 0; that's this non-linearity here, v_j = (||s_j||^2 / (1 + ||s_j||^2)) * (s_j / ||s_j||). Here s is the unscaled output of the capsule, and you can see that if the length of s gets really large, the first factor goes to 1 and the length of the final output v will approach 1. However, if the length of the original output is really small, goes towards 0, then the whole thing goes towards 0 as well. So this is a nice way to always scale these outputs to a length between 0 and 1. Next, and this is what I find the most complicated part, let's jump ahead to how a capsule network is actually implemented. This is the capsule network they implement: an MNIST classifier. You have an MNIST image here, and it first goes through a simple convolutional layer. That's nothing new; this is a classic convolutional layer with 256 channels, 9-by-9 filters and stride 1, so it will output a 20-by-20-by-256 tensor. Each of the outputs here is then sent to each of the capsules. Now these are convolutional capsules, which makes it a bit more complicated, but don't worry primarily about them being convolutional capsules: the analogy is exactly as in a classic neural network, and you could implement these capsules as plain feed-forward capsules or as convolutional capsules, and maybe also as transformer capsules, which I don't think anyone has done yet. There's a paper idea for you. So you'll send the output of this convolutional layer to each capsule, and then you have basically just two layers of capsules. The first layer consists of 32 of what they call primary caps, and each will output an eight-dimensional vector. I'm simplifying here, it's convolutional, but for simplicity say they each output an eight-dimensional vector. These are exactly as we said before: each of them will ultimately be responsible for a given entity, or part of an entity, being there. In MNIST this could be: is there a little curve on the bottom left side? That might indicate the presence of a six or an eight, something like this. Then there are these capsules here, each represented as a row, so each of these rows is a capsule, and we have ten of them: these are simply your final classification capsules. Each capsule is responsible for indicating the presence or absence of one particular class of digit, so one for a one, one for a two, a three, a four and so on, with a zero somewhere as well. So these are ten capsules.
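Before we get to the routing, here is a minimal sketch of that squashing non-linearity from above in PyTorch; the function name, the example tensor and the small epsilon for numerical stability are my own illustrative choices, not something specified in the paper:

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    # s: unscaled capsule outputs; `dim` indexes the capsule vector.
    # v = (|s|^2 / (1 + |s|^2)) * (s / |s|): long vectors end up just
    # below length 1, short vectors get shrunk towards length 0.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)

v = squash(torch.tensor([[3.0, 4.0]]))
print(v.norm())  # ~0.96: same direction as (3, 4), length 25/26
```

The direction, which carries the attributes, is untouched; only the length, which carries the probability, is rescaled.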
The question is how information gets from a capsule here, from the output of a capsule, to any capsule in the next layer. The easy way to do this is simply to say, as in a classical neural network, that the output here simply goes to the input there; you just put it there, basically unchanged. There is a bit of an issue with the dimensions, but you could simply say: well, we put a weight matrix in between to route into the capsules. But the idea of these capsules, and of this paper, is to say: wait, we actually want to make the capsules decide to which capsule in the next layer they will send their output. So the capsules can kind of decide where they want to send their output to. For example, maybe this one detects whether there is a line on the right side of the image, indicating maybe a seven or a one; that is probably most relevant for the one class and for the seven class, so it might decide to route its output there. And the idea of how this routing happens is basically the topic of this paper: the capsules route their output to the appropriate capsules of the next layer. How is this done? It's done via what's called the routing mechanism, which I find quite poorly described here, so I will simply draw it and try to piece it together. So we have capsules as I've drawn them before: one, two, three capsules in the lower layer, and maybe two parent capsules. Each of these capsules will output a vector, as we said, and we'll only do it for this one vector here. So this capsule outputs this vector and needs to decide: do I send this output here, or here? What happens is an iterative procedure with multiple steps, and this is, at least the way I understand it, the important part: if we forward-pass data through this network, it doesn't actually go forward in a straight line. What it actually does is go through a layer and then do multiple steps in between layers until it has decided where it wants to go in the next layer; only then does it go on to the next layer, and if there's another capsule layer, it again does multiple steps before it goes on. That's my take on it. The multiple steps are as follows. First, I'll send my output vector equally to all of the parent capsules, and so will everyone else; everyone sends theirs equally to the parents. Now, this isn't done just by sending the vector as is; it's actually done by modulation with weight matrices. If this is capsule i and this is capsule j, there is a weight matrix W_ij in between that is learned. This is a static weight matrix, and each one of these red arrows you see here has such a weight matrix attached to it. So each line you see here is actually modulated by such a weight matrix, which means there is a quadratic number of these weight matrices flying around. This also allows the dimensions to change: maybe this vector is eight-dimensional, but the input vector here is sixteen-dimensional, as we saw before. So let's see what the input of capsule j receives: it will receive the output of capsule 1, v_1, modulated by W_1j, and it will also receive the output of capsule 2 modulated by the weight matrix for capsule 2, W_2j, and so on.
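As a small sketch of these prediction vectors, continuing in PyTorch: the sizes follow the MNIST setup described above (in the paper the 32 primary-capsule channels each form a 6-by-6 grid, i.e. 1152 eight-dimensional capsules feeding 10 sixteen-dimensional digit capsules), while the variable names and random initialization are mine:

```python
import torch

num_in, dim_in, num_out, dim_out = 1152, 8, 10, 16

# One learned matrix W_ij per (lower capsule i, parent capsule j) pair:
# this is the "quadratic number of weight matrices" mentioned above.
W = torch.randn(num_in, num_out, dim_out, dim_in) * 0.01

u = torch.randn(num_in, dim_in)             # outputs u_i of the lower capsules
u_hat = torch.einsum('ijkl,il->ijk', W, u)  # predictions W_ij @ u_i
print(u_hat.shape)                          # torch.Size([1152, 10, 16])
```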
Now, what the parent capsule does is add all of these up in a softmax-weighted fashion, so it actually computes a weighted average of them. The weights at the beginning are all just one, because each parent gets an equal share of the vector from each lower capsule. This will give you some output; let's put this in green. I don't remember what they call it in the paper, so let's just call it o_j. Then what you do is compare how much each of the individual contributions agrees with o_j: for each of them you compute the inner product, so you compute the inner product of W_1j v_1 with o_j, and the inner product of W_2j v_2 with o_j, and so on. These inner products then become the weighting coefficients for the softmax in the next iteration. This is a bit convoluted, but ultimately what you're saying is: if you're a capsule here, you send your output forward to the other capsule (let's forget about the weight matrix for a moment), and the other capsule will output its own output, computed from the lower layers. Now we do an iteration again. If your output aligns with this, you will send more of it, and these two that I've drawn here actually align pretty well, so you'll send more. And now maybe the next computed output of that parent capsule will be even more in your direction, because you've contributed more; so you'll send more, and in the next iteration, wow, these two really agree, so you say: I'm going to send even more to that one. Whereas another capsule, whose initial output was basically like this, will by itself compute the inner product with the parent's output, realize that these do not align very much, and then send less to the next step. And because it sends less, the parent's output in the next step will probably align even less with its vector, so it will send less and less and less. This is called dynamic routing, and the idea behind it is that you route by agreement: you route to the parent capsules that agree with your output, where by agreement we mean the inner product is high after modulating by this weight matrix. That basically means this weight matrix is responsible for deciding which information is relevant together: whenever you have two vectors that align in the same layer, then in the sense of capsule networks they represent the same kind of information, and they will be routed together to the same capsule.
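Putting the loop together, here is a rough sketch of routing-by-agreement as I've just described it, reusing the `squash` function from the earlier sketch; the three routing iterations are the paper's setting, but the exact tensor layout and starting the logits at zero are my reading of the procedure rather than a line-by-line transcription of their algorithm:

```python
import torch
import torch.nn.functional as F

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: (num_in, num_out, dim_out) predictions W_ij @ u_i from above.
    b = torch.zeros(u_hat.shape[:2])              # routing logits b_ij, start equal
    for _ in range(num_iters):
        c = F.softmax(b, dim=1)                   # each lower capsule distributes over parents
        s = (c.unsqueeze(-1) * u_hat).sum(dim=0)  # weighted sum per parent: (num_out, dim_out)
        v = squash(s)                             # squashed parent outputs o_j
        b = b + (u_hat * v.unsqueeze(0)).sum(-1)  # agreement <W_ij u_i, v_j> updates the logits
    return v
```

Note that only the matrices W are learned; the coupling coefficients c are recomputed from scratch in every forward pass, which is the inner loop discussed below.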
In terms of the examples we made: if a door and a roof are present, then the weight matrices that connect door and roof to the house class will transform a strong door vector and a strong roof vector into aligning vectors for the house class, thereby saying: look, if I look at a door and a roof through the perspective of trying to be a house, then they are in much agreement on the presence of a house. So if I am the house capsule and I look at a door and at a roof from the perspective of being a house, and that is what these weight matrices do, they always take the perspective of the parent capsule, then these two things make a lot of sense together, and thus I will route them to the same place so they can both contribute to there being a house. Whereas from the perspective of a house, if I look at a little beach with a tree on it, that is not the same information as a door or a roof, so I will not route it to the house with the same strength. That is sort of the best way I have of explaining how these capsules work: the lower entities are always routed according to their relevance to the higher entities that are trying to combine them. If that wasn't clear, it's not entirely clear to me either yet, but it's the best shot I can give. The routing is formalized here, and I find it hard to follow. The important thing is that there is an inner loop in all of this, an inner iteration that is computed in every forward pass. Of the routing, i.e. where the information goes in the next layer, only the prior probability is learned; the actual routing coefficients are dynamically computed in every forward pass. So in every forward pass, information goes through a layer, then it takes multiple steps between two layers until it decides exactly what the distribution for the next layer is, then the next layer computes its outputs, and it again takes multiple steps between that layer and the next. That's the basic thing to remember. There's also some normalization involved, and the squash is the non-linearity we discussed. So what do they actually train? At the end here they have these ten capsules, and each capsule is responsible for recognizing the presence of one digit in the MNIST dataset. What they do is take the lengths of the vectors that are output by these capsules (these are feed-forward capsules, as opposed to the convolutional capsules earlier, so they again output a vector, and the length of this vector is taken), and then it's basically trained like you would train a regression problem. The loss is specified up here, L_k = T_k * max(0, m+ - ||v_k||)^2 + lambda * (1 - T_k) * max(0, ||v_k|| - m-)^2. If the training label actually has this digit present, this T here encodes that. So let's say k is 2: if there is a 2 in the image, which we know because it's a training image, then the length of the output of capsule number 2 should be high. This simply encodes that it should be very close to this m+, which I think they set to 0.9; so they say the length should be as close as possible to 0.9. Whereas if the 2 is not present, then T_k will be 0 and the other part will be active (only one of the two parts is ever active), and the length of the vector of capsule number 2 should then be close to this m-, which is 0.1. It's basically a regression problem saying: if the given entity is in the image, please make the length as close as possible to 0.9, and if it's not, make it as close as possible to 0.1. So this is a classic, say, regression loss on the lengths of the output vectors. The lambda is just a factor to dampen the contribution of all the negative classes with respect to the one positive class, per capsule of course.
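As a sketch, that margin loss in PyTorch; m+ = 0.9, m- = 0.1 and lambda = 0.5 are the paper's values, while the batched shapes and function name are my own framing:

```python
import torch
import torch.nn.functional as F

def margin_loss(v, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    # v: (batch, 10, 16) digit-capsule outputs; targets: (batch,) class indices.
    lengths = v.norm(dim=-1)                     # ||v_k||, the presence probabilities
    T = F.one_hot(targets, num_classes=v.size(1)).float()
    L = (T * torch.clamp(m_pos - lengths, min=0) ** 2
         + lam * (1 - T) * torch.clamp(lengths - m_neg, min=0) ** 2)
    return L.sum(dim=1).mean()                   # sum over classes, average over batch
```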
It turns out this is actually not enough. This will be the classification output, but it seems not to be enough; they don't say it's not enough, they simply say "we additionally do the following": they introduce a reconstruction loss. If this model is trained correctly, then these last capsules, say the capsule corresponding to the class of the digit 8, will not only encode whether an 8 is there or not, as the length of the output vector; being a 16-dimensional vector, it will also encode the properties of the 8. It will hopefully encode things like the stroke width, maybe the rotation of the digit, or the tightness of the loops: you can have an 8 with very large loops or an 8 with very tight loops, so it might encode things like this. So technically it should be possible to reconstruct from this description: say the width is high, the rotation is zero and the tightness is low, then maybe I have a widely stroked, not tight 8 that is not rotated. It should be possible to reconstruct this, and they do exactly that. They take the capsule of the class that is the actual training label, which is called the reconstruction target, and feed it to a simple feed-forward neural network whose output, as you see, is exactly the MNIST size, and which will try to reconstruct the image. So if this image of a four goes in, it goes all the way through, you take the capsule for the four, feed it through this network, reshape it to an image again, and hopefully what comes out is again this four. In addition to the classification loss, there will then be an auxiliary loss that tries to reconstruct the original image; I believe it's simply an L2 reconstruction loss that is scaled down so it doesn't dominate. So they also train the network to reconstruct the input, and I believe they do this because the length alone isn't quite enough to make it do what they want; by having this reconstruction, they really enforce that the individual dimensions of the capsules must encode some kind of information about the original image. And since the original images, in the MNIST dataset at least, vary by exactly those things, by stroke width, by rotation, by tightness, this will be reflected in the reconstruction.
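Here is a sketch of that reconstruction head; the layer sizes 512, 1024 and 784 and the 0.0005 scaling of the reconstruction loss are taken from the paper, while the masking helper and its names are my own shorthand:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

decoder = nn.Sequential(
    nn.Linear(16 * 10, 512), nn.ReLU(),
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 784), nn.Sigmoid(),   # 784 = 28 * 28 MNIST pixels
)

def reconstruct(v, classes):
    # v: (batch, 10, 16); zero out every capsule except the chosen class's,
    # then decode its 16 numbers back into an image.
    mask = F.one_hot(classes, num_classes=10).float().unsqueeze(-1)
    return decoder((v * mask).flatten(start_dim=1))

# total loss = margin_loss(v, targets)
#            + 0.0005 * sum-of-squares error between reconstruction and input
```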
So how are they doing? Here you see different examples of inputs and their reconstructed outputs, and this actually seems pretty good: all of these input images are reconstructed fairly well. The numbers on the right are the failure cases. Here, the input image is labeled as a five in the training data, but the network actually classifies it as a three. Then you have two choices for reconstruction on this same sample: either you reconstruct from the capsule that is actually the true capsule, the one that should be activated, or you reconstruct from the capsule that the network classifies it as. So here it mixed up a five for a three. If you take the five capsule and reconstruct, you see it actually looks like the original image, but it looks much more like a five; and if you take the three capsule to reconstruct, which is what the network classified it as, it still looks like the original image, but much more like an actual three. The one is missing the part up here, whereas the other is missing this part here. So the network really seems to learn the different variations of these digits, and in an ambiguous case such as this one it can actually go either way and reconstruct the original input under either interpretation, once as a three and once as a five. It would be interesting to see the actual lengths of the vectors of both of the classes that were mixed up. Here they compare their accuracies. They have a baseline model, which I believe is just a CNN, with a decent error, and the capsule networks get a lower error. You see that as you add the reconstruction loss and as you add more routing, the error drops: one step of routing simply means you send your output equally to each parent, as in the classical neural network case, but if you introduce three steps of routing, your error drops even lower. So they're roughly on par with baseline CNNs on MNIST. They also explore what their capsules learn. As I said, the individual dimensions of the capsules should encode properties of the variations of the class samples, and here they explore this in the different capsules: they change some dimensions and run the result through their reconstruction network, and indeed they discover that there is something like a scale-and-thickness dimension, a stroke-thickness dimension, a skew dimension, and so on, width and translation. This is pretty remarkable: if you train these networks in this way, they really seem to learn about the entities and about the properties of the entities, and everything here stays well within the class that the capsule is assigned to. There's also a robustness-to-affine-transformations experiment where they improve over the baseline; it's kind of an auxiliary experiment. The next interesting experiment is what they call the MultiMNIST experiment. MultiMNIST is made by taking two different MNIST digits, shifting them slightly and overlapping them, in some cases, as you see here, quite heavily, and the task of the network is to figure out which two overlapping digits are in the image. The capsule network is very good at this, better than the baselines, because it simply encodes the presence and properties of a particular instance in the image: if you take the two capsules with the largest lengths and reconstruct from each of them independently, you can basically segment the image, and you see this here. The different colorations come from the two different reconstructions from two different capsules, so green is from one capsule and red from the other. The network correctly identifies that it's a 6 and a 0, and it correctly identifies not only which pixels belong to the 6 and which to the 0, but also pixels that belong to both; that's not a problem if you use capsule networks as they are. It is notable how they train this: they train the actual reconstruction by only reconstructing one digit at a time, so the premise of the dataset is that you actually have access to the underlying individual digits while training, the images of the individual digits, not only the combined label. But that's a detail.
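That segmentation trick, sketched with the `reconstruct` helper from the previous snippet (the stand-in capsule outputs are random here, just to keep the snippet self-contained):

```python
import torch

v = torch.randn(1, 10, 16)              # stand-in digit-capsule outputs
lengths = v.norm(dim=-1)                # capsule lengths = presence probabilities
top2 = lengths.topk(2, dim=1).indices   # the two most probable digits
img_a = reconstruct(v, top2[:, 0])      # one digit's reconstruction...
img_b = reconstruct(v, top2[:, 1])      # ...and the other's; overlay to "segment"
```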
Here are some failure cases, where it misclassified, or where you mis-specify the capsules, and it's unable to properly assign the pixels of the misclassified digit. It's quite interesting to look at the failure cases, but I find it more interesting to look at the success cases, and at the ease with which the capsule networks can do this, simply by how they're structured. Alright, lastly they also experiment on CIFAR-10, and interestingly, the CIFAR-10 experiments show that the capsule networks don't perform as well there. As you know, CIFAR-10 is a dataset about the same size as MNIST, but first of all it's color, and second of all it's natural images, so there is quite a bit of clutter. It's not white digits on a black background; there's a sky in an image, there's lots of things going on, there's maybe a tree, and there's stuff here and stuff there. And the capsule networks like to account for things in the image: they like to have a capsule corresponding to everything that's going on, here and here and here. If the whole background is black, that is not a problem, you can account for it simply as background; but if there are lots of things going on, these capsule networks become a bit over-explanatory. They want to explain everything, and that degrades the performance. Now, this paper basically says you can have something like a none-of-the-above category, and they found it helped to introduce that. In my opinion, the solution will lie more in the direction of a better loss function, such that you don't need to explain the entire image, rather than explaining it away by saying it's none of the above; but that's incredibly hard to balance, in my opinion. Alright, that is basically the end of this. They have a discussion where they compare capsules against other related work, but I hope you got an overview of how this works now, as much as possible. And with that, that was it for me. Thanks for watching, bye bye!
[ { "start": 0, "end": 6, "text": " Hi there! Today we're looking at dynamic routing between capsules by Sara Sabour," }, { "start": 6, "end": 11.96, "text": " Nicholas Frost and Jeffrey Hinton of Google Brain. This paper is a bit older" }, { "start": 11.96, "end": 18.8, "text": " but it's made quite the impact at the time and so we'll go through it. I find" }, { "start": 18.8, "end": 22.92, "text": " this pretty hard paper to read and kind of understand because a lot of things" }, { "start": 22.92, "end": 31.400000000000002, "text": " are very implicit and hand wavy. So we'll kind of go through it and try to get the" }, { "start": 31.400000000000002, "end": 35.96, "text": " best out of it, try to explain what capsules are and what they do and how" }, { "start": 35.96, "end": 41.44, "text": " they stack against current networks. So capsule network in essence is a" }, { "start": 41.44, "end": 46.32000000000001, "text": " new type of neural network made of capsules. And here it says a capsule is a" }, { "start": 46.32000000000001, "end": 50.120000000000005, "text": " group of neurons whose activity vector represents the instantiation" }, { "start": 50.12, "end": 56.12, "text": " parameters of a specific type of entity such as an object or an object part. Kind" }, { "start": 56.12, "end": 63.12, "text": " of cryptic but so what they're saying is that in a capsule network, let me try to" }, { "start": 63.12, "end": 68.52, "text": " draw one here actually, in a capsule network you have what's called capsules." }, { "start": 68.52, "end": 75.36, "text": " Capsules you can imagine as just little blobs of things right? And they're also" }, { "start": 75.36, "end": 81.4, "text": " ordered in layers in this case. Let's actually leave away the second layer. And" }, { "start": 81.4, "end": 89.8, "text": " each of these of these capsules will correspond to an entity in the input." }, { "start": 89.8, "end": 94.52, "text": " Let's say the input is an image. So somewhere here there is an image right?" }, { "start": 94.52, "end": 101.44, "text": " Then maybe this capsule here will be responsible for detecting is there a" }, { "start": 101.44, "end": 108.24, "text": " wall in the image. And this one will be responsible for detecting is there a" }, { "start": 108.24, "end": 117.16, "text": " roof. This one will be is there a door. And this one will be responsible for" }, { "start": 117.16, "end": 125.56, "text": " detecting is there a lake in the image right? So now each of these each of these" }, { "start": 125.56, "end": 133.2, "text": " capsules can for on one hand can either be high or low. So if you if you imagine" }, { "start": 133.2, "end": 142.32, "text": " now a situation where wall high, roof high, door high, lake low. It means" }, { "start": 142.32, "end": 150.56, "text": " probably the image has a house on it right? But second of all not only can it" }, { "start": 150.56, "end": 156.76, "text": " predict whether or not a given entity is present in an image but the individual" }, { "start": 156.76, "end": 162.68, "text": " capsules are also responsible for encoding the exact way or shape or form" }, { "start": 162.68, "end": 169.24, "text": " that this entity takes. So the wall could have different aspects such as color" }, { "start": 169.24, "end": 181.72, "text": " color green. It could have size tall. It could have orientation. orientation is" }, { "start": 181.72, "end": 191.96, "text": " like I don't know vertical. Cool. Then roof could have angle right? Angle wide." 
}, { "start": 191.96, "end": 196.64000000000001, "text": " So it's a wide roof or a flat roof right? These are these are kind of attributes" }, { "start": 196.64, "end": 203.23999999999998, "text": " of these things that also the capsules would encode. So ultimately what these" }, { "start": 203.23999999999998, "end": 209.6, "text": " capsules that they are proposing will output is the roof capsule here for" }, { "start": 209.6, "end": 215.56, "text": " example would output a vector. So the output of the roof capsule is a let me" }, { "start": 215.56, "end": 223.76, "text": " draw a coordinate system is a vector. Now the length of the vector will" }, { "start": 223.76, "end": 231.23999999999998, "text": " represent so that the length draw this norm here will represent the probability" }, { "start": 231.23999999999998, "end": 238.72, "text": " that the roof is in the image. That there is a roof in an image right? The roof is" }, { "start": 238.72, "end": 245.16, "text": " element of this input image. This is simply the length and the individual" }, { "start": 245.16, "end": 250.32, "text": " coordinates will encode these attributes. So this here for example this axis could" }, { "start": 250.32, "end": 257.44, "text": " be the angle of the roof and this axis could be the color. Let's say just that" }, { "start": 257.44, "end": 262.12, "text": " the angle is like some degree number that can be positive or negative. Maybe a" }, { "start": 262.12, "end": 268.84, "text": " roof can be like this. Right this so this is but in essence this is a flat roof" }, { "start": 268.84, "end": 273.68, "text": " and this is a very narrow angle roof. So you can imagine something like this and" }, { "start": 273.68, "end": 277.8, "text": " then the color could also be maybe parameterized on a one-dimensional. It" }, { "start": 277.8, "end": 282.36, "text": " can have more dimensions than two I just can't draw more. So the depending on" }, { "start": 282.36, "end": 289.8, "text": " where this where this arrow now points the for example this vector here has the" }, { "start": 289.8, "end": 294.92, "text": " same probability that there is a roof in the image like if the output is this but" }, { "start": 294.92, "end": 298.6, "text": " the color will be different. The angle will be the same because they're roughly" }, { "start": 298.6, "end": 303, "text": " on the same this axis here but the color of this will encode a different" }, { "start": 303, "end": 310.32, "text": " different colored roof. And then if the vector is something like this a very" }, { "start": 310.32, "end": 320.64, "text": " short vector it will encode the same the same angle and color directions. So maybe" }, { "start": 320.64, "end": 325.8, "text": " I shouldn't say the position on the axis it's more like this angle and this this" }, { "start": 325.8, "end": 330.4, "text": " angle that encode the attributes. So the kind of the angular components if you" }, { "start": 330.4, "end": 334.12, "text": " will encode the attributes and the length encodes the probability. So this" }, { "start": 334.12, "end": 339.59999999999997, "text": " small vector has the same direction in terms of color and angle of the roof but" }, { "start": 339.59999999999997, "end": 345.08, "text": " it's much less probable much less likely. 
So this if the capsule outputs the" }, { "start": 345.08, "end": 350.59999999999997, "text": " little blue vector here it says well if there is a roof it's going to be this" }, { "start": 350.59999999999997, "end": 354.52, "text": " color in this angle but I'm really that really don't think there's a roof in" }, { "start": 354.52, "end": 360.35999999999996, "text": " this image. Whereas if it outputs the large green one then it says I'm pretty" }, { "start": 360.36, "end": 365.2, "text": " sure that there's a roof and it's going to be this angle and this this this" }, { "start": 365.2, "end": 370.76, "text": " angle and this color. Alright so that's that is what each capsule is supposed to" }, { "start": 370.76, "end": 378.2, "text": " do. Each capsule takes the input and outputs a vector that encodes if the" }, { "start": 378.2, "end": 383.04, "text": " entity that the capsule is responsible for is present in the image A and B" }, { "start": 383.04, "end": 389.76, "text": " what properties this entity has. And then we get to the point where there's the" }, { "start": 389.76, "end": 394.92, "text": " next layer of capsules. So the next layer of capsules takes information that each" }, { "start": 394.92, "end": 402.4, "text": " capsule here takes information from each capsule in the lower layer like like" }, { "start": 402.4, "end": 407.36, "text": " you're used to from your neural network and integrates this information and" }, { "start": 407.36, "end": 411.4, "text": " we'll talk about how this works. It integrates all of this information right" }, { "start": 411.4, "end": 415.88, "text": " all of these are vectors now that come from the lower integrates all of this" }, { "start": 415.88, "end": 422.4, "text": " information and again each capsule in this next layer is responsible for a" }, { "start": 422.4, "end": 427.84, "text": " entity. Now these entities in the higher layers are usually composite entities of" }, { "start": 427.84, "end": 436.36, "text": " the lower layers. So this one here could be responsible for house, this one could" }, { "start": 436.36, "end": 444.4, "text": " be responsible for national park, national park and this one could be" }, { "start": 444.4, "end": 451.08, "text": " responsible for beach or something like this right. And then each of these will" }, { "start": 451.08, "end": 456.23999999999995, "text": " integrate all of this information from the lower layers and then come up with" }, { "start": 456.23999999999995, "end": 461.4, "text": " their own output vector encoding whether or not a given entity is present in the" }, { "start": 461.4, "end": 469, "text": " in the image. Of course the house class will pick up if there is a door a roof" }, { "start": 469, "end": 473.35999999999996, "text": " and a wall in the image the house classes will pick up on that or that's" }, { "start": 473.36, "end": 477.40000000000003, "text": " how it's meant to work house class is meant to pick up on that and then itself" }, { "start": 477.40000000000003, "end": 483, "text": " output a large vector saying there's probably a house in this in this image." }, { "start": 483, "end": 488.56, "text": " So each of these capsules in by itself is responsible for encoding the presence" }, { "start": 488.56, "end": 494.96000000000004, "text": " and attributes of a object or object part or entity or part of entity in the" }, { "start": 494.96000000000004, "end": 500.04, "text": " given input data. 
And of course the last layer here it will simply be your" }, { "start": 500.04, "end": 505.32, "text": " classification layer. So in the last layer you have as many capsules as you" }, { "start": 505.32, "end": 511.08000000000004, "text": " have classes in your classification task. So this is mainly for a" }, { "start": 511.08000000000004, "end": 517.84, "text": " classification task and then you can classify and you can kind of train the" }, { "start": 517.84, "end": 525.48, "text": " whole system like this. So how exactly this happens we'll see next." }, { "start": 525.48, "end": 533.96, "text": " Alright so they make kind of analogies to the visual system and so on." }, { "start": 533.96, "end": 541.6, "text": " We'll jump these you can everyone that does deep learning in some way is trying" }, { "start": 541.6, "end": 547.64, "text": " to to make that. We're rather going to the specifics of how these capsules work" }, { "start": 547.64, "end": 553.8000000000001, "text": " and how their specific suggestions for them. Note that they say this is in no" }, { "start": 553.8, "end": 558.92, "text": " way the only implementation of capsules. It's just kind of an example to show how" }, { "start": 558.92, "end": 565.56, "text": " one could do it. Alright so first of all they present their what you might call" }, { "start": 565.56, "end": 570.68, "text": " non-linearity. So their non-linearity what it needs to do is if you look at" }, { "start": 570.68, "end": 575.04, "text": " these capsule networks the outputs here the length of the outputs of these" }, { "start": 575.04, "end": 580.3199999999999, "text": " vectors right they're supposed to represent probabilities and as such they" }, { "start": 580.32, "end": 587, "text": " they need to be so here it roof this door maybe a vector like this wall maybe" }, { "start": 587, "end": 592.2, "text": " a vector like that. So initially we simply specify the output is a vector" }, { "start": 592.2, "end": 597, "text": " and in essence these capsules are implemented in much the same way like" }, { "start": 597, "end": 604.6800000000001, "text": " your classic neural network layer would be implemented. So each of these" }, { "start": 604.68, "end": 613.28, "text": " capsules will be in essence a neural network layer by itself that outputs a" }, { "start": 613.28, "end": 619.3599999999999, "text": " vector. There's nothing constraining the length of the vector initially so" }, { "start": 619.3599999999999, "end": 626.8, "text": " their non-linearity does constrain the vector to be of maximum length 1 and of" }, { "start": 626.8, "end": 631.3599999999999, "text": " minimum length 0. That's this non-linearity here. So S here is the" }, { "start": 631.36, "end": 638.6800000000001, "text": " unscaled output of the capsule and you can see here if the length of S gets" }, { "start": 638.6800000000001, "end": 646.2, "text": " close to 1 or sorry gets really large then this here becomes irrelevant." }, { "start": 646.2, "end": 653.8000000000001, "text": " This whole term will be 1 and then the length of the final output of V here" }, { "start": 653.8000000000001, "end": 661.12, "text": " will be 1. 
Right so if this is very large then the the length of the scaled" }, { "start": 661.12, "end": 666.92, "text": " output will be 1 however if the if the length is really small of the original" }, { "start": 666.92, "end": 672.92, "text": " output so if this goes towards 0 then this becomes irrelevant this becomes" }, { "start": 672.92, "end": 680, "text": " irrelevant this will go towards 0 and the entire length will go towards 0." }, { "start": 680, "end": 689.2, "text": " So this is kind of a nice way to scale these outputs always to be between length 0" }, { "start": 689.2, "end": 702.5200000000001, "text": " and 1. Then next thing is so how this I find I find the the most complicated" }, { "start": 702.5200000000001, "end": 710.48, "text": " part right so we'll jump ahead actually to how a capsule's network is implemented" }, { "start": 710.48, "end": 716.76, "text": " and this is the the capsule network they implement so first it's an MNIST" }, { "start": 716.76, "end": 721.84, "text": " classifier you have an MNIST image here and it first goes through a simple" }, { "start": 721.84, "end": 726.4, "text": " convolutional layer that's that's nothing new this is a classic" }, { "start": 726.4, "end": 734.84, "text": " convolutional layer is there's 256 channels it has a 9 by 9 filters and" }, { "start": 734.84, "end": 747.1600000000001, "text": " stride 1 so it will output a 20 by 20 time by 256 tensor then each of these" }, { "start": 747.1600000000001, "end": 752.6, "text": " so each of the outputs here is sent to each of these capsules and now they're" }, { "start": 752.6, "end": 758.2800000000001, "text": " convolutional capsules so that makes it a bit more complicated but don't you" }, { "start": 758.2800000000001, "end": 762.1600000000001, "text": " know don't worry primarily about them being convolutional capsules the" }, { "start": 762.16, "end": 765.28, "text": " analogy is exactly as in a classic neural network you can implement these" }, { "start": 765.28, "end": 772.4399999999999, "text": " capsules as void-feed-forward capsules or as convolutional capsules and maybe also" }, { "start": 772.4399999999999, "end": 777.3199999999999, "text": " as transformer capsules I don't think anyone's done that all right there's a" }, { "start": 777.3199999999999, "end": 785, "text": " paper for you the so you'll send you'll send the output of this convolution" }, { "start": 785, "end": 790.04, "text": " layer to each capsule and then you have basically just two layer of capsules" }, { "start": 790.04, "end": 797.64, "text": " here the first layer consists of 32 what they call primary caps sorry the these" }, { "start": 797.64, "end": 805.24, "text": " 32 capsules each will output an eight dimensional vector and I'm simplifying" }, { "start": 805.24, "end": 809.48, "text": " here it's it's convolutional but they will just for simplest they will each" }, { "start": 809.48, "end": 816.68, "text": " output an eight dimensional vector right and these are exactly as we said before" }, { "start": 816.68, "end": 821.8, "text": " so each of these will be responsible ultimately for a given entity or part of" }, { "start": 821.8, "end": 828.06, "text": " entity being there like in MNIST this could be is there a little curve on the" }, { "start": 828.06, "end": 831.64, "text": " bottom left side right this might indicate the presence of a six or an" }, { "start": 831.64, "end": 838.8399999999999, "text": " eight something like this and then the these capsules here each is they" }, { "start": 
838.8399999999999, "end": 844.1999999999999, "text": " represented as a row so each of these rows here is a capsule and we have ten" }, { "start": 844.2, "end": 848.88, "text": " of these and these are your simply your final classification capsules so each" }, { "start": 848.88, "end": 854.76, "text": " capsule is responsible for indicating the presence or absence of one particular" }, { "start": 854.76, "end": 859.5600000000001, "text": " class of digits so this will be of a one of a two of a three of a four and so on" }, { "start": 859.5600000000001, "end": 865.9200000000001, "text": " of a zero I guess somewhere as well so these are ten capsules and the question" }, { "start": 865.9200000000001, "end": 871.5200000000001, "text": " is how does information go from a capsule here from the output of a" }, { "start": 871.52, "end": 877, "text": " capsule or to any of capsule here and the easy way to do this is simply to say" }, { "start": 877, "end": 884.12, "text": " as in a classical neural network the output here simply goes to the input" }, { "start": 884.12, "end": 891.92, "text": " here just you just put it there basically on on unchanged now there is a" }, { "start": 891.92, "end": 897.4, "text": " bit of an issue here with the dimensions but you can simply say well we simply" }, { "start": 897.4, "end": 903.88, "text": " put a weight matrix in to route into the capsules but the idea of these capsules" }, { "start": 903.88, "end": 912.28, "text": " and this paper is to say wait wait these capsules actually we want to make them" }, { "start": 912.28, "end": 920.84, "text": " decide to which capsule in the next layer will they send their input right" }, { "start": 920.84, "end": 926.84, "text": " so the capsules can kind of decide where they want to send their output to like" }, { "start": 926.84, "end": 932.48, "text": " where is this where is the capsule that detects the maybe this one detects is" }, { "start": 932.48, "end": 937.08, "text": " there a line in the right side of the image right indicating maybe a seven or" }, { "start": 937.08, "end": 945.4, "text": " a one this is probably most relevant for the one class and for the seven class so" }, { "start": 945.4, "end": 951.52, "text": " it might decide to route its output there and the idea of how this routing" }, { "start": 951.52, "end": 959.4399999999999, "text": " happens is basically the topic of this paper so the the capsules route their" }, { "start": 959.4399999999999, "end": 967, "text": " output to the appropriate next layers capsules how is this done all right this" }, { "start": 967, "end": 972.1999999999999, "text": " is done via the what's called the routing mechanism that I find it quite" }, { "start": 972.1999999999999, "end": 981.12, "text": " poorly described here so I will simply draw it I will simply try to make it up" }, { "start": 981.12, "end": 990.88, "text": " all right so we have capsules and as I've drawn them before right we have one" }, { "start": 990.88, "end": 1000.32, "text": " two three capsules and we maybe have two parent capsules each of these capsules" }, { "start": 1000.32, "end": 1006.16, "text": " here will output a vector as we said and we'll only do it for this this one sorry" }, { "start": 1006.16, "end": 1012.92, "text": " vector here so this will output this vector and needs to decide where to here" }, { "start": 1012.92, "end": 1020.04, "text": " or to here do I send to this output now what it does is there is an iterative" }, { "start": 1020.04, "end": 1027.68, "text": " procedure 
that has multiple steps and this is I think this is at least the way" }, { "start": 1027.68, "end": 1032.52, "text": " I understand I think the important part to understand is that if we forward pass" }, { "start": 1032.52, "end": 1037.24, "text": " data through this network it actually doesn't go forward in a straight line" }, { "start": 1037.24, "end": 1042.04, "text": " what it actually does is it goes through a layer and then it does multiple steps" }, { "start": 1042.04, "end": 1047.76, "text": " in between layers until it has decided where it wants to go in the next layer" }, { "start": 1047.76, "end": 1051.96, "text": " and then it goes on to the next layer and if there's another capsule layers it" }, { "start": 1051.96, "end": 1058.32, "text": " does again multiple steps before it goes on so that's that's my take on it and" }, { "start": 1058.32, "end": 1064.6, "text": " the multiple steps are as follows first I'll send my output vector to to all of" }, { "start": 1064.6, "end": 1070.12, "text": " the all of the layers like equally all of the parent capsules and so will will" }, { "start": 1070.12, "end": 1078.08, "text": " everyone else right everyone will send theirs equally to the parent now this" }, { "start": 1078.08, "end": 1082.8999999999999, "text": " isn't just done and this may be here this isn't just done just by sending it" }, { "start": 1082.8999999999999, "end": 1087.32, "text": " but this is actually done by modulation of weight matrices so each thing here if" }, { "start": 1087.32, "end": 1093.3999999999999, "text": " this is capsule I and this is capsule J there is a weight matrix in between W I J" }, { "start": 1093.3999999999999, "end": 1098.1599999999999, "text": " that is learned right this is a static weight matrix and each one of these red" }, { "start": 1098.1599999999999, "end": 1104.36, "text": " red arrows you see here has such a weight matrix attached to it so each" }, { "start": 1104.36, "end": 1108.76, "text": " each line you see here is actually modulated by such a weight matrix so" }, { "start": 1108.76, "end": 1113.9199999999998, "text": " there is an a quadratic number of these weight matrices flying around and this" }, { "start": 1113.92, "end": 1118.24, "text": " will also then allow you that maybe this vector is eight dimensional but the" }, { "start": 1118.24, "end": 1124.16, "text": " input vector here is 16 dimensional what we saw before all right so the out the" }, { "start": 1124.16, "end": 1129.48, "text": " input of capsule J here it will receive let's see what it receives it will" }, { "start": 1129.48, "end": 1140.5600000000002, "text": " receive the output of capsule will the output of capsule 1 V 1 modulated by the" }, { "start": 1140.56, "end": 1148.8, "text": " let's let's call this yeah let's call this J modulated by 1 J W 1 J and it" }, { "start": 1148.8, "end": 1155.6, "text": " will also receive this is a set the output of capsule 2 modulated by the" }, { "start": 1155.6, "end": 1162.8799999999999, "text": " weight matrix for sorry weight matrix for capsule 2 and so on now what it does" }, { "start": 1162.88, "end": 1174.4, "text": " is it adds this these all up into a soft max so sorry let's write this so soft it" }, { "start": 1174.4, "end": 1180.24, "text": " will add those all up in a soft max weighted fashion so it will actually" }, { "start": 1180.24, "end": 1188.5600000000002, "text": " compute a a weighted average of those now the weights at the beginning are are" }, { "start": 1188.56, "end": 1195.56, "text": " just one 
because it gets each from each lower capsule it gets equal amount of" }, { "start": 1195.56, "end": 1200.6, "text": " this vector but then this will give you an output so this will give you some" }, { "start": 1200.6, "end": 1207.84, "text": " output let's put this in green this will give you an output that's I don't know" }, { "start": 1207.84, "end": 1215.72, "text": " how they call it in the paper let's just call it O J right and then what you do" }, { "start": 1215.72, "end": 1224.08, "text": " is all right you compare how much do each of the individual contributions" }, { "start": 1224.08, "end": 1230.68, "text": " agree with OJ so you actually compute for each of these you would compute the" }, { "start": 1230.68, "end": 1239.48, "text": " inner product so you would compute the inner product of W 1 J V 1 with OJ and" }, { "start": 1239.48, "end": 1249.24, "text": " you would compute the inner product of W 2 J V 2 with OJ all right the inner" }, { "start": 1249.24, "end": 1254.76, "text": " product and then these inner products here will become the weighting" }, { "start": 1254.76, "end": 1261.2, "text": " coefficients for the soft max in the next iteration all right so this I mean" }, { "start": 1261.2, "end": 1265.44, "text": " this this is a bit convoluted but ultimately what you're saying is if" }, { "start": 1265.44, "end": 1273.0800000000002, "text": " you're a capsule here you'll send your output forward you have an output you" }, { "start": 1273.0800000000002, "end": 1280.0800000000002, "text": " send it forward right to the other capsule and the other capsule will so" }, { "start": 1280.0800000000002, "end": 1283.56, "text": " this is this is your output and we'll forget about this weight matrix 6 for" }, { "start": 1283.56, "end": 1290.24, "text": " now this is your up the other capsule will output its own its own output" }, { "start": 1290.24, "end": 1297.88, "text": " computed from the lower layers now we do an iteration again if your output now" }, { "start": 1297.88, "end": 1305, "text": " aligns with this you will send more of it and these these two that I've drawn" }, { "start": 1305, "end": 1309.36, "text": " here actually align pretty well right so you'll send more of it is more more" }, { "start": 1309.36, "end": 1316.04, "text": " more right and now maybe the output that next computed output of the same capsule" }, { "start": 1316.04, "end": 1319.4, "text": " will be even more in that direction because you've contributed more right" }, { "start": 1319.4, "end": 1323.3200000000002, "text": " you'll send more and then you're like in the next iteration wow these two are" }, { "start": 1323.3200000000002, "end": 1328.5600000000002, "text": " really equal sorry this should be red here your ears just keeps being the same" }, { "start": 1328.5600000000002, "end": 1333.0400000000002, "text": " and then you say well I'm gonna send even more to that one right whereas" }, { "start": 1333.0400000000002, "end": 1340.76, "text": " another capsule that it's whose initial output was basically whose initial" }, { "start": 1340.76, "end": 1348.6000000000001, "text": " output was basically like this it will by itself compute the inner product with" }, { "start": 1348.6, "end": 1353.36, "text": " the original this original it will send it here right it will compute the inner" }, { "start": 1353.36, "end": 1358.48, "text": " product with the original output and it will realize well these do not align" }, { "start": 1358.48, "end": 1363.48, "text": " very much and then it will send less 
right it will send less to the next step" }, { "start": 1363.48, "end": 1369.08, "text": " and because it sends less in the next step of course the output will then" }, { "start": 1369.08, "end": 1374.4399999999998, "text": " probably align even less with that vector and then it will send less and" }, { "start": 1374.44, "end": 1380.2, "text": " less and less so this is called dynamic routing the the idea behind it is kind" }, { "start": 1380.2, "end": 1388.24, "text": " of that you route by agreement so you will route to the parent capsules that" }, { "start": 1388.24, "end": 1393.8400000000001, "text": " agree with your output and by agreement we mean kind of the inner product is" }, { "start": 1393.8400000000001, "end": 1400.3200000000002, "text": " high after modulating by this weight matrix and that sort of so that" }, { "start": 1400.32, "end": 1405.6799999999998, "text": " basically means this weight matrix is responsible for deciding which" }, { "start": 1405.6799999999998, "end": 1411.08, "text": " information is relevant together whenever you have two vectors that align" }, { "start": 1411.08, "end": 1417.48, "text": " in the same layer then the in the sense of the capsule networks those represent" }, { "start": 1417.48, "end": 1423.8799999999999, "text": " the same kind of information and those will be routed together to the same" }, { "start": 1423.8799999999999, "end": 1429.8, "text": " capsule in terms of the examples we made maybe if a door and a roof is" }, { "start": 1429.8, "end": 1436.9199999999998, "text": " present then these these these weight matrices that connect door and roof to" }, { "start": 1436.9199999999998, "end": 1442.84, "text": " the house class they will transform a high vector in door and roof into" }, { "start": 1442.84, "end": 1449.76, "text": " aligning vectors for the house class and thereby saying look these two if I look" }, { "start": 1449.76, "end": 1457.28, "text": " at them through if I look at a door and a roof through the perspective of trying" }, { "start": 1457.28, "end": 1464.6, "text": " to be a house right then they are in much agreement on the presence of a" }, { "start": 1464.6, "end": 1476.12, "text": " house so if I am a house right I am a house and I look at a door and I look at" }, { "start": 1476.12, "end": 1482.72, "text": " a roof through the kind of from the perspective of being a house right this" }, { "start": 1482.72, "end": 1486.6399999999999, "text": " is this is what these weight matrices do they always have a perspective of the" }, { "start": 1486.64, "end": 1492.72, "text": " parent capsule then these two things they make a lot of sense together and" }, { "start": 1492.72, "end": 1500.16, "text": " thus I will route them to the same place so they can both contribute to their" }, { "start": 1500.16, "end": 1506.1200000000001, "text": " being a house now from the perspective of a house if I look at a little beach" }, { "start": 1506.1200000000001, "end": 1512.8000000000002, "text": " with a tree on it right then that does not that is not the same that does not" }, { "start": 1512.8, "end": 1521.36, "text": " really is not the same information as a door or a roof so I will not route this" }, { "start": 1521.36, "end": 1530.6399999999999, "text": " to the house in the in the same strength that is sort of the best way I have of" }, { "start": 1530.6399999999999, "end": 1535.48, "text": " explaining it how these capsules work basically the lower entities will always" }, { "start": 1535.48, "end": 1543.08, "text": " be routed 
for the relevance of the higher entities that are trying to are" }, { "start": 1543.08, "end": 1549.84, "text": " trying to combine the lower entities if that wasn't it's not entirely clear to" }, { "start": 1549.84, "end": 1557, "text": " me either yet but it's the best shot I I can give and the routing is here" }, { "start": 1557, "end": 1563.84, "text": " formalized I find it hard to follow the important thing is that there is an" }, { "start": 1563.84, "end": 1570.8799999999999, "text": " inner loop in all of this so there is an like kind of an an inner iteration and" }, { "start": 1570.8799999999999, "end": 1578.04, "text": " this inner iteration is computed in every forward pass and so these routing" }, { "start": 1578.04, "end": 1584.48, "text": " where the information goes in the next layer that is only the prior probability" }, { "start": 1584.48, "end": 1591.1599999999999, "text": " for that is learned but the actual routing coefficients those are" }, { "start": 1591.16, "end": 1597.88, "text": " dynamically computed in every forward pass so every forward pass goes it goes" }, { "start": 1597.88, "end": 1602.28, "text": " information goes through a layer then it goes multiple steps between two layers" }, { "start": 1602.28, "end": 1606.1200000000001, "text": " until it decides exactly what the distribution for the next layer is and" }, { "start": 1606.1200000000001, "end": 1610.64, "text": " then the next layer computes its outputs and that goes again multiple steps" }, { "start": 1610.64, "end": 1616.48, "text": " between these layers and the next layer so that's the the basic thing to" }, { "start": 1616.48, "end": 1621.76, "text": " remember there's also some normalization involved the squash is the non-linearity" }, { "start": 1621.76, "end": 1629.1200000000001, "text": " we discussed so what do they actually train now at the end here they have a" }, { "start": 1629.1200000000001, "end": 1634.56, "text": " they have these ten capsules and each capsule will be responsible for" }, { "start": 1634.56, "end": 1640.4, "text": " recognizing one the presence of one digit in the MNIST data set of course" }, { "start": 1640.4, "end": 1646.04, "text": " and so what they do is they take the length of these vectors that are output" }, { "start": 1646.04, "end": 1650, "text": " by these capsules these capsules are feed-forward capsules as opposed to the" }, { "start": 1650, "end": 1655.3999999999999, "text": " convolutional capsules here so the feed-forward capsules output again a" }, { "start": 1655.3999999999999, "end": 1661.1599999999999, "text": " vector the length of this vector is taken and then it's basically trained" }, { "start": 1661.1599999999999, "end": 1666.52, "text": " like you would train a regression problem and the loss here is specified" }, { "start": 1666.52, "end": 1673.52, "text": " up here so if the if the image actually does contain this if the training label" }, { "start": 1673.52, "end": 1683.28, "text": " actually has this digit present this T here encodes that so if if K let's say K" }, { "start": 1683.28, "end": 1691.92, "text": " is 2 right so if K 2 if there is a 2 in the image when we know that because it's" }, { "start": 1691.92, "end": 1698.24, "text": " a training image then the length of the output of capsule number 2 should be" }, { "start": 1698.24, "end": 1705.56, "text": " high and this simply encodes that it should be very close to this M plus an" }, { "start": 1705.56, "end": 1710.52, "text": " M plus here is that I think they said it to 0.9 so 
they say you should be the" }, { "start": 1710.52, "end": 1717.04, "text": " length should be as close as possible to 0.9 whereas if the 2 is not present then" }, { "start": 1717.04, "end": 1723.44, "text": " TK will be 0 then this part will be active so it's only one of these two" }, { "start": 1723.44, "end": 1730.04, "text": " parts will be active then the length of the vector so of capsule number 2 should" }, { "start": 1730.04, "end": 1735.48, "text": " be close to this M negative which is 0.1 it's basically a regression problem" }, { "start": 1735.48, "end": 1742.44, "text": " saying if if there if the given entity is in the image then please make the" }, { "start": 1742.44, "end": 1746.3600000000001, "text": " length as close as possible to 0.9 and if it's not make it as close as possible" }, { "start": 1746.36, "end": 1755.04, "text": " to 0.1 so this this is a classic say regression loss on the length of the" }, { "start": 1755.04, "end": 1761.7199999999998, "text": " output vectors the the lambda is just a factor to to dampen the contribution for" }, { "start": 1761.7199999999998, "end": 1768.1599999999999, "text": " all the negative classes with respect to the one positive class of course per" }, { "start": 1768.16, "end": 1776.76, "text": " capsule it turns out this is actually not enough so this will be the" }, { "start": 1776.76, "end": 1781.44, "text": " classification output but it's it seems not enough they don't say it's not" }, { "start": 1781.44, "end": 1786.0400000000002, "text": " enough but they simply say we additionally do the following so they" }, { "start": 1786.0400000000002, "end": 1791.8000000000002, "text": " also do is they introduce a reconstruction loss now if this model is" }, { "start": 1791.8000000000002, "end": 1796.68, "text": " trained correctly then these capsules here these last capsules especially" }, { "start": 1796.68, "end": 1800, "text": " this one maybe that's the capsule corresponding to the class of the digit" }, { "start": 1800, "end": 1808.02, "text": " 8 will not only encode if an 8 is there or not as in the length of the vector" }, { "start": 1808.02, "end": 1812.72, "text": " output but it will also encode the properties of eights it is a 16" }, { "start": 1812.72, "end": 1818.8400000000001, "text": " dimensional vector so it will encode hopefully things like the stroke width" }, { "start": 1818.84, "end": 1829.3999999999999, "text": " so then it might encode the maybe the rotation of the digit then it might" }, { "start": 1829.3999999999999, "end": 1836.28, "text": " control the tightness of the of the loop so you can have an 8 with very" }, { "start": 1836.28, "end": 1841.08, "text": " large loops or it can have an 8 sorry this is a smaller 8 I can have an 8" }, { "start": 1841.08, "end": 1846.8, "text": " with very tight loops so it might you know encode things like this so" }, { "start": 1846.8, "end": 1853.48, "text": " technically it is it will be possible to reconstruct from this description" }, { "start": 1853.48, "end": 1859.44, "text": " reconstruct say the width is high the rotation is zero and the tightness is" }, { "start": 1859.44, "end": 1870.3999999999999, "text": " low then maybe I have a wide widely stroked not tight 8 that is not rotated" }, { "start": 1870.3999999999999, "end": 1875.12, "text": " right so it should be possible to reconstruct this and they they do exactly" }, { "start": 1875.12, "end": 1880.9599999999998, "text": " that so they take this last capsule of the class that is the actual training"
}, { "start": 1880.9599999999998, "end": 1888.08, "text": " label that's called the reconstruction target and they feed this to a simple" }, { "start": 1888.08, "end": 1893.1599999999999, "text": " feed-forward neural network that at the end you see this is exactly the MNIST" }, { "start": 1893.1599999999999, "end": 1899.84, "text": " size will try to reconstruct the the image so if the image here this image" }, { "start": 1899.84, "end": 1907.24, "text": " goes in then it goes all through here it will take the class for here feed it" }, { "start": 1907.24, "end": 1912.56, "text": " through this network reshape it to an image again and hopefully what will come" }, { "start": 1912.56, "end": 1920.56, "text": " out is again this for here and it will then have an auxiliary auxiliary loss in" }, { "start": 1920.56, "end": 1926.36, "text": " addition to the loss of this of this classification loss here will auxiliary" }, { "start": 1926.36, "end": 1932.8799999999999, "text": " loss that tries to reconstruct the original image right and that's simply a" }, { "start": 1932.8799999999999, "end": 1941.52, "text": " I believe it's just an L2 reconstruction loss that is that is scaled down that it" }, { "start": 1941.52, "end": 1947.1999999999998, "text": " doesn't dominate so they also train the network basically to reconstruct this" }, { "start": 1947.1999999999998, "end": 1952.28, "text": " and I believe they do this because the length isn't quite enough to make it do" }, { "start": 1952.28, "end": 1959.12, "text": " what they want it to do thus they by having this reconstruction here they" }, { "start": 1959.12, "end": 1964.36, "text": " really kind of enforce that the individual capsules the individual" }, { "start": 1964.36, "end": 1971.52, "text": " dimensions must encode some kind of information about the original image" }, { "start": 1971.52, "end": 1976.44, "text": " and since the original images in the MNIST data set at least vary by those" }, { "start": 1976.44, "end": 1983.2, "text": " things by stroke width by rotation by tightness that by this loss will be" }, { "start": 1983.2, "end": 1996.16, "text": " reflected in the in the reconstruction all right so how are they doing here you" }, { "start": 1996.16, "end": 2003.1200000000001, "text": " see different examples of inputs and then reconstructed outputs and this you" }, { "start": 2003.12, "end": 2009.1999999999998, "text": " know seems pretty good actually so you see here all of these the input image is" }, { "start": 2009.1999999999998, "end": 2016.9599999999998, "text": " reconstructed fairly well so the numbers up here in the fall so the right are the" }, { "start": 2016.9599999999998, "end": 2023, "text": " failure cases here it the input image is a five labeled in the training data but" }, { "start": 2023, "end": 2029.32, "text": " the network actually classifies it as a three but then if you now you have two" }, { "start": 2029.32, "end": 2032.6399999999999, "text": " choices right this this is the same sample I have two choices for" }, { "start": 2032.64, "end": 2038.8000000000002, "text": " reconstruction either you reconstruct the capsule that is actually the is that" }, { "start": 2038.8000000000002, "end": 2042.76, "text": " you know is the true capsule that should be activated and you reconstruct from" }, { "start": 2042.76, "end": 2049.2000000000003, "text": " that or you reconstruct from the capsule that the network says the it classifies" }, { "start": 2049.2000000000003, "end": 2054, "text": " it as so here it mixed 
up a five four three if you still take the five the" }, { "start": 2054, "end": 2058.96, "text": " capsule and reconstructed you see it actually looks like the original image" }, { "start": 2058.96, "end": 2064.32, "text": " but it looks much more like a five and if you take the three capsule to" }, { "start": 2064.32, "end": 2068.2400000000002, "text": " reconstruct which is what the network classified this as it's still it looks" }, { "start": 2068.2400000000002, "end": 2073.28, "text": " like the original image but it looks much more like an actual three right it's" }, { "start": 2073.28, "end": 2078.68, "text": " it's missing the the part up here whereas over here it's it's missing this" }, { "start": 2078.68, "end": 2083.76, "text": " part here so that the network really seems to kind of learn the different" }, { "start": 2083.76, "end": 2089.92, "text": " variations of these digits and in an ambiguous case such as this one it you" }, { "start": 2089.92, "end": 2094.48, "text": " know it can it can actually go either way and it can actually reconstruct the" }, { "start": 2094.48, "end": 2101, "text": " original output in either interpretations once as a three and once" }, { "start": 2101, "end": 2105.44, "text": " as a five it will be interesting to see what the actual lengths of the vector of" }, { "start": 2105.44, "end": 2112.6400000000003, "text": " both of these classes were that were mixed up and here they compare their" }, { "start": 2112.64, "end": 2118.48, "text": " accuracies so they have a baseline model which I believe is just a CNN" }, { "start": 2118.48, "end": 2125.92, "text": " where they get a decent kind of error and then the capsule networks they get a" }, { "start": 2125.92, "end": 2130.72, "text": " lower error and here you see as you add the reconstruction loss and as you add" }, { "start": 2130.72, "end": 2135.64, "text": " routing more so one step of routing simply means the first step is where you" }, { "start": 2135.64, "end": 2142.44, "text": " send your output equally to each parent that is as in the classical neural" }, { "start": 2142.44, "end": 2148.88, "text": " network case but if you introduce three steps of routing then your error drops" }, { "start": 2148.88, "end": 2159.96, "text": " even lower so they they kind of are on par with baseline CNNs on MNIST here" }, { "start": 2162.2400000000002, "end": 2167.04, "text": " they also explore what their capsules learn so as I said the individual capsules" }, { "start": 2167.04, "end": 2174.32, "text": " the dimensions should encode kind of properties of the variations of the of" }, { "start": 2174.32, "end": 2180.4, "text": " the class class samples and here they explore this in the different capsules so" }, { "start": 2180.4, "end": 2184.32, "text": " they change some dimensions and they run it through their reconstruction networks" }, { "start": 2184.32, "end": 2189.96, "text": " and indeed they discover that there is like a scale and thickness dimension" }, { "start": 2189.96, "end": 2196.04, "text": " stroke thickness dimension there's a skew dimension and so on width and" }, { "start": 2196.04, "end": 2204.44, "text": " translation so that this is pretty remarkable these networks really if you" }, { "start": 2204.44, "end": 2209.2, "text": " train them in this way they really seem to learn about the entities and about" }, { "start": 2209.2, "end": 2214.72, "text": " the properties of the entities and that seems to be quite interesting you see" }, { "start": 2214.72, "end": 2219.96, "text": " that 
there's everything here stays well within the class that the capsule is" }, { "start": 2219.96, "end": 2227.92, "text": " assigned to they also yeah this robustness to affine transformations" }, { "start": 2227.92, "end": 2232.92, "text": " where they improve over the baseline it's kind of an auxiliary experiment the" }, { "start": 2232.92, "end": 2238.44, "text": " next interesting experiment is what they call the multi MNIST experiment the" }, { "start": 2238.44, "end": 2245.44, "text": " multi MNIST experiment is done by taking two different MNIST digits and basically" }, { "start": 2245.44, "end": 2251.32, "text": " just overlapping them so that they have you know shift them slightly but as you" }, { "start": 2251.32, "end": 2257.8, "text": " see here or here they are overlapped heavily and the task of the network is" }, { "start": 2257.8, "end": 2265.12, "text": " to figure out which two overlapping digits are in the image and the the" }, { "start": 2265.12, "end": 2272.56, "text": " network is very very good at doing this the capsule network that is and better" }, { "start": 2272.56, "end": 2276.96, "text": " than the the baselines because the capsule network simply encodes the" }, { "start": 2276.96, "end": 2282.92, "text": " presence and properties of a particular instance in the image if you simply take" }, { "start": 2282.92, "end": 2288.7999999999997, "text": " the top two length capsules and then reconstruct those independently then" }, { "start": 2288.7999999999997, "end": 2296.6, "text": " you're you can you can you can basically segment the image and you see this here" }, { "start": 2296.6, "end": 2302.12, "text": " so the different colorations come from two different reconstructions of the" }, { "start": 2302.12, "end": 2306.7999999999997, "text": " image from two different capsules so green is from one capsule and red from" }, { "start": 2306.7999999999997, "end": 2311, "text": " the other capsule so the network correctly identifies that it's a 6 and" }, { "start": 2311, "end": 2316.04, "text": " the zero right and it also correctly identifies not only which pixels belong" }, { "start": 2316.04, "end": 2321.24, "text": " to the 6 and which belong to 0 but also pixels that belong to both so that's not" }, { "start": 2321.24, "end": 2325.2799999999997, "text": " a not a problem if you use capsule networks as they are" }, { "start": 2325.2799999999997, "end": 2330.2, "text": " are notable to say here they the way they train is is they train the actual" }, { "start": 2330.2, "end": 2336.2799999999997, "text": " reconstruction by only reconstructing one at a time so the kind of the premise" }, { "start": 2336.2799999999997, "end": 2340.12, "text": " of the data set is that you actually have access to the underlying individual" }, { "start": 2340.12, "end": 2345.9199999999996, "text": " digits while training so like the images of the individual digits you don't" }, { "start": 2345.9199999999996, "end": 2352.8799999999997, "text": " only have this label here but that's a detail here are some kind of failure" }, { "start": 2352.8799999999997, "end": 2359.68, "text": " cases where it it misclassified or you miss specify the capsules and it's kind" }, { "start": 2359.68, "end": 2367.8799999999997, "text": " of unable use here you see to to assign the digits of the misclassified or the" }, { "start": 2367.8799999999997, "end": 2372.8799999999997, "text": " pixels of the misclassified thing it's quite interesting to look at the failure" }, { "start": 2372.8799999999997, "end": 
2378.3999999999996, "text": " cases but I find it more interesting to look actually the success cases and the" }, { "start": 2378.3999999999996, "end": 2384.8199999999997, "text": " kind of ease at which the at which the capsule networks can do this simply by" }, { "start": 2384.82, "end": 2392.04, "text": " how they're structured alright so then lastly they also experiment on CIFAR-10" }, { "start": 2392.04, "end": 2397.4, "text": " and interestingly the CIFAR-10 experiments show that the capsule" }, { "start": 2397.4, "end": 2404, "text": " networks don't perform as well there and as you know CIFAR-10 is a data set that" }, { "start": 2404, "end": 2407.44, "text": " is about the same size as MNIST but it's first of all color and second of all is" }, { "start": 2407.44, "end": 2413.32, "text": " natural images and so they have quite a bit of clutter it's not black and white" }, { "start": 2413.32, "end": 2418.8, "text": " black background white digits it's actually there's a sky like on an" }, { "start": 2418.8, "end": 2425.2400000000002, "text": " image there's lots of things going on and right there might be a tree and there's" }, { "start": 2425.2400000000002, "end": 2429.76, "text": " stuff here and there's stuff here and the the capsule networks they like to" }, { "start": 2429.76, "end": 2434.96, "text": " account for things in the image so they like to have a capsule corresponding to" }, { "start": 2434.96, "end": 2438.84, "text": " everything that's going on here and here and here and here and here if the whole" }, { "start": 2438.84, "end": 2442.52, "text": " background is black that is not a problem you can account for simply the" }, { "start": 2442.52, "end": 2447, "text": " background but if there's lots of things going on then these capsule networks" }, { "start": 2447, "end": 2455, "text": " get they get they get a bit over explanatory they want to explain" }, { "start": 2455, "end": 2459.6, "text": " everything and that degrades the performance now this paper basically" }, { "start": 2459.6, "end": 2465.12, "text": " says yeah you can have a something like a none of the above category and they" }, { "start": 2465.12, "end": 2473.92, "text": " found that it helped to introduce that in my opinion though I think the" }, { "start": 2473.92, "end": 2478.88, "text": " solution will be more towards introduction of a better loss function" }, { "start": 2478.88, "end": 2486.24, "text": " for this such that you don't need kind of to explain the entire" }, { "start": 2486.24, "end": 2490.8199999999997, "text": " thing rather than here where what you do is you simply explain it by" }, { "start": 2490.8199999999997, "end": 2494.4, "text": " saying it's none of the above but it's incredibly hard to balance that in my" }, { "start": 2494.4, "end": 2504.48, "text": " opinion yeah all right so that is basically the end of this they say they" }, { "start": 2504.48, "end": 2510.32, "text": " have a discussion here where they compare capsules against other related" }, { "start": 2510.32, "end": 2519.84, "text": " work but I hope that you kind of got an overview of how this works now and as" }, { "start": 2519.84, "end": 2525.48, "text": " much as possible and with that that was it for me and thanks for watching bye" }, { "start": 2525.48, "end": 2551.48, "text": " bye" } ]
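To make the routing-by-agreement procedure described in the transcript above concrete, here is a minimal NumPy sketch of the inner iteration between two capsule layers. This is an illustration of the procedure as explained, not the authors' code: the array shapes, variable names, and the default of three iterations are assumptions.

```python
import numpy as np

def squash(s, eps=1e-8):
    # Squashing non-linearity: keeps the vector's direction,
    # maps its length into [0, 1).
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat[i, j] = W_ij @ v_i: the prediction of lower capsule i
    # for parent capsule j, already modulated by the learned weight
    # matrix W_ij. Shape: (num_lower, num_parent, dim_parent).
    b = np.zeros(u_hat.shape[:2])  # routing logits, start uniform
    for _ in range(num_iters):
        # Softmax over parents: each lower capsule distributes its
        # output across the parent capsules.
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum per parent
        o = squash(s)                           # parent outputs o_j
        # Agreement: inner product of each prediction with the parent
        # output; predictions that align with the consensus get routed
        # more strongly in the next iteration.
        b = b + (u_hat * o[None]).sum(axis=-1)
    return o
```

The classification loss on the capsule lengths described above can then be written as L_k = T_k * max(0, 0.9 - ||v_k||)^2 + lambda * (1 - T_k) * max(0, ||v_k|| - 0.1)^2, where T_k indicates whether class k is present and lambda dampens the contribution of the negative classes, matching the m+ = 0.9 and m- = 0.1 targets mentioned in the transcript.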
dPsXxLyqpfs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
World Models
[ "Science & Technology" ]
[ "deep learning", "reinforcement learning", "deep reinforcement learning", "deep rl", "schmidhuber", "environment model", "imagination", "vae", "rnn", "lstm" ]
Authors: David Ha, Jürgen Schmidhuber Abstract: We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. https://arxiv.org/abs/1803.10122
Hi, today we're looking at World Models by David Ha and Jürgen Schmidhuber. This is a paper that's concerned with reinforcement learning, and especially with the problem that you have an environment you interact with and need to learn to act in, but it can be very expensive to always query the environment. Say you have a robot and it needs to do something in the world: having the robot execute something and then observing the result is quite expensive, costs electricity and so on, so you would like to minimize how many times this happens. They're concerned with problems like this one here: this is a race car simulator, and there's an OpenAI Gym environment for it. The other one that they use is a Doom experiment (VizDoom), where there are a couple of monsters shooting fireballs at you, and the task is simply to avoid the fireballs. The entire point of the paper is that you don't actually need to interact with the environment in order to learn it. You can learn a model of the environment and then learn using that model. So basically, I can learn how the environment works and then simply use my imagination of the environment, my model, in order to learn from that, so I don't have to interact with the real environment anymore. So how do they do this? They do it in multiple stages. The first thing they do is collect a bunch of samples from the environment: they go to the environment, run a random policy, and collect samples; the process is outlined further down, namely to collect 10,000 rollouts from a random policy. Next, they train a VAE to learn a compressed representation of the environment. This is all done in stages, not end-to-end. The VAE is simply a model that takes, in this case, a video frame and sends it through an encoder neural network to obtain what's called a latent representation, which is a much lower-dimensional representation. So if the image is 64 by 64 pixels, then the latent code could be as little as 100- or even 10-dimensional; you see that there's quite a bit of compression going on. This is a variational autoencoder. It's not really important here that it's variational; the difference is that the variational autoencoder is a stochastic process, whereas the regular autoencoder isn't, and they introduce stochasticity later again anyway. Being a variational autoencoder, it produces a latent representation that defines a distribution over outputs. They sample from this latent distribution and feed the sample to the decoder. The decoder gives back what it thinks the encoder encoded: it tries to reconstruct, as closely as possible, the original frame that was given to the encoder. Of course it can't do so exactly, because we've compressed the frame so much into this lower-dimensional representation, so it does its best effort. What you hope to achieve with this is that the decoder learns, for example, that the ceiling right here is always gray, so you shouldn't actually need to encode this in your z; if it's always gray, the decoder should learn this by itself.
So your hope is that the z, the latent representation, will end up containing just the information that differs between the individual frames, which here would be the fireballs coming at you and your position relative to them; that's what's changing in this environment. Your hope is that the latent representation captures only that, whereas all the static parts that are irrelevant or never change are captured by the encoder and decoder architecture itself. It's important to note that the encoder and decoder are always the same for all the frames, whereas there is one z per frame, so each frame will give you a different z; you can imagine how that's going to be useful. They train this on a randomly collected sample of the environment until they're confident they have a good model of the environment. What they do next is use this in order to train an RNN. So again, they have their compression model of the environment. What they do now is take these z states that they get from it and train how these latent representations evolve over time, with an RNN that goes over time. The RNN will always predict what the next state of the environment is going to be. But importantly, compared to environment models that we've discussed before, for example in the Imagination-Augmented Agents paper, where one tries to directly predict the future pixels of the future frame, here the environment model is over the latent representation. Of course, this means it operates in a much smaller space, so if your compression model is good, this should be much easier to learn than, say, a full end-to-end environment model. So this model learns how your latent states evolve over time, given your actions. You can imagine the z being an abstract representation of your state; this, together with your action, goes into the RNN, and the RNN will predict the next latent representation. And there is what's called a temperature parameter to control the stochasticity; as I've said, there is stochasticity built into this. The RNN will output some vector representing what it thinks the next state is going to be, but they don't use this directly as the next step. Instead, they parameterize a mixture of Gaussian distributions, coupled with a decoder, in order to give a distribution over the next state, and they control the amount of randomness with the temperature parameter. They argue that this comes in handy later. All right, so what do we have? We have a system that can compress the environment into its essential part: from every frame we extract what's important in that frame. Next, we have a model that can predict, given a state and an action, what the next state, the next latent state, is going to be. So technically we now have an environment model: given a state and a policy, we can simply use this model to roll forward. The last component is the actual policy. And the actual policy here, as you can see, is in their case simply a linear model. The linear model takes the z, which is the latent representation of the current state, and the h, which is the current state of the RNN that models the environment over time.
It is simply a linear function of the two, and it gives you the action probabilities, or rather the logits of the actions. So it's a really simple controller over these things. They do this in order to show that the main part of the work is being done by the environment model; given the environment model, you only need very few parameters to then learn a policy. Here is what I said in a diagram. The observation goes into the compression of the VAE; the latent representation of that goes into the RNN together with the hidden state from the last step. This will output a new hidden state, which goes into the controller, and we also directly take this z into the controller. Then from these two, we perform an action, and now we have a choice. The action could go to the environment, which gives you the next observation; at the same time, since you need to update your RNN, it also goes to the RNN, because the RNN will need to predict the next hidden state. The thing is, we can also now leave away the path to the real environment, which means we can simply take our RNN, imagine the next latent representation, put it through the decoder part of the VAE, and use that as an observation. I hope this makes sense. It's rather intuitive: you have a model of the environment, so you can simply use this instead of the real environment. There's a bit of pseudocode here, and they do a bunch of experiments. They show here that their compression works: this is the real frame, and this is the reconstructed frame, which captures the essence of what's going on. And I actually want to go down here, to the VizDoom experiment. So, what they do in the car racing experiment is learn this entire thing, and then they learn a policy in the real world, in the environment, using this model up here, this procedure where they always go to the environment, and here is the exact experiment setup. First they collect, again, rollouts from a random policy, they train the VAE, they train the RNN, and then they learn the controller using the entire model, but in the real world. So they always interact with the environment, but because they also have their latent representation of the observation, and not directly the observation, they get a higher score. And also, the policy that they use in the real environment transfers to the environment model: the policy they learn in the true environment transfers to the imagined one, so if they use the imagined model as an environment, it also performs well. In the next experiment, they try to do this the other way around: they try to learn only using their model of the environment, and then see whether or not the policy transfers to the true environment. So that's what they do here. They collect, again, a sample from the environment, they train the VAE, they train the RNN, and then they simply use this virtual environment, as they call it, in order to learn a policy, and at the end, they try to transfer, that is, use the learned policy on the actual environment. And given the results you see here, the best it does, I would say, is about here, where the actual score, as you can see in this and also in this setting, is higher than the previous best algorithm in the OpenAI Gym, when you go from virtual to actual.
So what this means is that you can train using this imagined model, and it will actually transfer, but there's a crucial thing, and that is this temperature parameter. You can see that a lot of the time they actually don't manage to reach a good score if this parameter is wrong. What does this parameter do? It controls, as we discussed, the stochasticity of the model. Basically, the environment model doesn't directly imagine a future state; it imagines a distribution over future states. And the higher this parameter, the more stochastic this distribution is: basically the more uniform, the more entropy you have in these future states. We've seen this temperature parameter before. It is important, and they go to some length explaining why, on an entire page that we skipped, titled "Cheating the World Model". Basically they say: if you have a model of the environment that's wrong, and you train a policy on it, it's probably going to find a policy that exploits the wrongness of this model. So you might be able to walk through walls or fly or ignore the fireballs, or find that if you stand next to a wall, in your imagination, you'll never get hit. Something like this, which isn't true in the real world, and the policy will exploit that. To counter this, they simply turn up this temperature parameter, giving them a more stochastic procedure, meaning they imagine a lot of different futures and train their policy on all of them, or in expectation over a sample of them. Which means that if the environment model is wrong... I want to say that if it's wrong, this corrects for it; it doesn't. But if it's wrong, you still sample different futures, so if it has one wrong future, you still have the other ones to punish the policy if it tries to exploit this one mistake. At least that's the reasoning behind it. So that's how they do this. You can interact with their trained environment models online. They also sketch what they would like to have: instead of collecting the data for the environment model from random rollouts, they would train it, then use it again to collect more data, train a better environment model, then use the better environment model to train the policy further, and so on in a stepwise fashion. But they don't actually do this; they simply describe it. The rest of the paper is a bit of related work and discussion. It's written in a very prose-like style, kind of different from what you're used to if you read a lot of these papers. But yeah, I hope you now know what's going on, and see you next time.
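To make the setup above concrete, here is a minimal sketch of the linear controller and of a rollout inside the learned world model (the "dream"). The linear controller matches the description in the transcript; `mdn_rnn.step` is a hypothetical helper standing in for sampling the next latent from the RNN's mixture-of-Gaussians output, and the tanh squashing of actions is an assumption, not necessarily the paper's exact choice.

```python
import numpy as np

def controller_action(z, h, W_c, b_c):
    # The paper's controller is just a linear map of the VAE latent z
    # and the RNN hidden state h to an action; tanh keeps actions bounded.
    return np.tanh(W_c @ np.concatenate([z, h]) + b_c)

def dream_rollout(mdn_rnn, W_c, b_c, z0, h0, steps, tau=1.0):
    # Roll the learned world model forward without touching the real
    # environment: the MDN-RNN stands in for the simulator.
    z, h, total_reward = z0, h0, 0.0
    for _ in range(steps):
        a = controller_action(z, h, W_c, b_c)
        # Hypothetical helper: samples the next latent (plus a reward
        # and done estimate) at temperature tau.
        z, reward, done, h = mdn_rnn.step(z, a, h, temperature=tau)
        total_reward += reward
        if done:
            break
    return total_reward
```

With tau greater than 1 the imagined futures get more diverse, which is exactly the mechanism the video credits for preventing the policy from exploiting flaws of the model.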
[ { "start": 0, "end": 6, "text": " Hi, today we're looking at World Models by David Ha and Jürgen Schmidhuber." }, { "start": 6, "end": 13, "text": " This is a paper that's concerned with reinforcement learning and especially with the problem of," }, { "start": 13, "end": 20, "text": " say, you have an environment that you interact with and you kind of need to learn to act in it," }, { "start": 20, "end": 26, "text": " but it could be, for example, very expensive to always query the environment." }, { "start": 26, "end": 33, "text": " So let's say you have a robot and it needs to do something in the world," }, { "start": 33, "end": 44, "text": " and you kind of, to have a robot execute something and then observe it, is quite expensive, costs electricity and so on." }, { "start": 44, "end": 50, "text": " So you would like to sort of minimize how many times this happens." }, { "start": 50, "end": 59, "text": " So here, searching for a good picture, they're concerned with problems, for example, like this." }, { "start": 59, "end": 66, "text": " This is a race car simulator. There's an OpenAI gym environment for that." }, { "start": 66, "end": 76, "text": " The other one that they use is a so-called like a doom experiment where, as you look at this," }, { "start": 76, "end": 83, "text": " there's a couple of monsters and they're shooting fireballs at you and the task is just to kind of avoid the fireballs." }, { "start": 83, "end": 91, "text": " So the entire point of the paper is that I don't actually need to interact with the environment in order to learn it." }, { "start": 91, "end": 98, "text": " I can simply kind of learn a model of the environment and then learn using that model." }, { "start": 98, "end": 105, "text": " So basically, I can learn how the environment works and then simply use my imagination of the environment," }, { "start": 105, "end": 114, "text": " my model, in order to learn from that so I don't have to interact with the real environment anymore." }, { "start": 114, "end": 119, "text": " So how do they do this? They do it in multiple stages." }, { "start": 119, "end": 128, "text": " Here, first thing they do is they collect a bunch of samples from the environment." }, { "start": 128, "end": 136, "text": " So they go to the environment, they simply do a random policy and then they collect a bunch of samples." }, { "start": 136, "end": 143, "text": " I think the process is outlined down here somewhere. We saw it before." }, { "start": 143, "end": 155, "text": " Here, collect 10,000 rollouts from a random policy. Next, they train a VAE here to kind of learn the environment." }, { "start": 155, "end": 161, "text": " So that's where that comes in. This is all done in stages, not end-to-end." }, { "start": 161, "end": 169, "text": " The VAE is simply a model that takes, in this case, a video frame here." }, { "start": 169, "end": 174, "text": " It sends it through an encoder neural network to obtain what's called a latent representation," }, { "start": 174, "end": 177, "text": " which is a much smaller dimensional representation." }, { "start": 177, "end": 189, "text": " So if the image is 64 by 64 pixels, then the latent code could be as little as 100 or even 10 dimensional." }, { "start": 189, "end": 193, "text": " So you see that there's quite a bit of compression going on." }, { "start": 193, "end": 202, "text": " This is a variational autoencoder. 
It's not really important here that it's variational since the difference is" }, { "start": 202, "end": 209, "text": " the variational autoencoder is kind of a stochastic process, whereas the regular autoencoder isn't." }, { "start": 209, "end": 216, "text": " But they introduce stochasticity later again. So it's not particularly important." }, { "start": 216, "end": 225, "text": " So it's a variational autoencoder, which means they obtain a latent representation that defines distribution over outputs." }, { "start": 225, "end": 235, "text": " So they send this sample from this latent distribution that they obtain, and then they feed this to the decoder." }, { "start": 235, "end": 243, "text": " And the decoder kind of gives back what it thinks the encoder encoded." }, { "start": 243, "end": 252, "text": " So the decoder tries to reconstruct as close as possible this original frame that was given to the encoder." }, { "start": 252, "end": 259, "text": " But of course it can't because we've compressed it so much to this lower dimensional representation here." }, { "start": 259, "end": 261, "text": " So it kind of does its best effort." }, { "start": 261, "end": 268, "text": " So what you hope to achieve with this is that kind of the decoder learns, for example, there's always here." }, { "start": 268, "end": 272, "text": " This is the ceiling right here. It's always gray." }, { "start": 272, "end": 278, "text": " So basically, you shouldn't actually need to encode this in your Z." }, { "start": 278, "end": 283, "text": " If it's always gray, the decoder should learn this by itself." }, { "start": 283, "end": 296, "text": " So your hope is that the Z, the latent representation, will simply end up containing just the information that's kind of different or between the individual frames," }, { "start": 296, "end": 305, "text": " which here I guess would be kind of the fireballs coming and your position relative to them." }, { "start": 305, "end": 308, "text": " That's what's changing if you think about this environment." }, { "start": 308, "end": 312, "text": " So your hope is that the latent representation captures only that," }, { "start": 312, "end": 323, "text": " whereas all the static parts that are irrelevant or never change are kind of captured by the encoder and the decoder architecture by itself." }, { "start": 323, "end": 329, "text": " So yeah, it's important to note the encoder and decoder are obviously always the same for all the frames," }, { "start": 329, "end": 336, "text": " whereas the Z representation, of course, is there is one per frame, so each frame will give you a different Z." }, { "start": 336, "end": 343, "text": " And that's so you can imagine how that works or how that's going to be useful." }, { "start": 343, "end": 355, "text": " So they train this on like a randomly collected sample of the environment until they're confident they now have a good model of the environment." }, { "start": 355, "end": 363, "text": " And then what they do next is they use this in order to train an RNN." }, { "start": 363, "end": 373, "text": " So again, they kind of have their compression model of the environment." }, { "start": 373, "end": 381, "text": " What they do now is they use these Z states you see here, here, here, here that they get from that." }, { "start": 381, "end": 386, "text": " And they train how these latent representations evolve over time." }, { "start": 386, "end": 390, "text": " So with an RNN here goes over time." 
}, { "start": 390, "end": 401, "text": " So the RNN will always kind of predict what's the next state of the environment going to be." }, { "start": 401, "end": 407, "text": " But importantly, maybe compared to environment models that we've discussed before in the, for example," }, { "start": 407, "end": 419, "text": " imagination augmented agent paper, there we always try to directly predict the future pixels, so to say, of the future frame." }, { "start": 419, "end": 424, "text": " Here, the environment model is over the latent representation." }, { "start": 424, "end": 429, "text": " Of course, this means that the this is a much smaller space." }, { "start": 429, "end": 440, "text": " So if your compression model is good, then this should be much easier to learn than, say, like a full end to end environment model." }, { "start": 440, "end": 449, "text": " So this model learns how your latent states evolve over time, given your actions." }, { "start": 449, "end": 455, "text": " So you can imagine the Z being an abstract representation of your state and then your action." }, { "start": 455, "end": 462, "text": " And then this goes into the RNN and the RNN will predict what's the next latent representation." }, { "start": 462, "end": 468, "text": " And there is what's called a temperature parameter to control the stochasticity." }, { "start": 468, "end": 476, "text": " I've already told you this, there is a stochasticity built into this." }, { "start": 476, "end": 484, "text": " So the RNN will simply output like some vector, what it thinks is the next thing going to be." }, { "start": 484, "end": 492, "text": " And they don't use this directly as the next step, but they parameterize a kind of a mixture of Gaussian distributions" }, { "start": 492, "end": 499, "text": " coupled with a decoder here in order to give a random distribution over the next state." }, { "start": 499, "end": 503, "text": " And they control the amount of randomness with the temperature parameter." }, { "start": 503, "end": 506, "text": " They argue that this comes in handy later." }, { "start": 506, "end": 508, "text": " So all right, so what do we have?" }, { "start": 508, "end": 517, "text": " We have a system that can compress the environment into what we would call an essential part." }, { "start": 517, "end": 521, "text": " Every frame we extract what's important in that frame." }, { "start": 521, "end": 535, "text": " Then next we have a model that can predict, given a state and an action, what's the next state going to be, the next latent state." }, { "start": 535, "end": 539, "text": " So technically we now have an environment model, right, given a state." }, { "start": 539, "end": 548, "text": " We can simply, given a state and a policy, we can simply use this model to roll forward." }, { "start": 548, "end": 552, "text": " So the last component is the actual policy." }, { "start": 552, "end": 560, "text": " And the actual policy here, as you can see, is in their case simply a linear model." }, { "start": 560, "end": 568, "text": " The linear model will take the z, which is the latent representation of the current state," }, { "start": 568, "end": 578, "text": " and the h, which is the current state of the RNN that models the environment over time." }, { "start": 578, "end": 589, "text": " And it simply is a linear function of the two, gives you the action probabilities, or I guess the log-its of the actions." 
}, { "start": 589, "end": 593, "text": " So it's a really, really simple controller over these things." }, { "start": 593, "end": 601, "text": " And they do this in order to show that the main part of the work is being done by this environment model." }, { "start": 601, "end": 608, "text": " And given the environment model, you only need very few parameters basically to then learn a policy." }, { "start": 608, "end": 613, "text": " Here is what I said in a diagram." }, { "start": 613, "end": 618, "text": " So the observation goes into the compression of the VAE," }, { "start": 618, "end": 625, "text": " the latent representation of that goes into the RNN together with the hidden state from the last step." }, { "start": 625, "end": 632, "text": " And this will output a new hidden state, which goes here into the controller," }, { "start": 632, "end": 636, "text": " and we also directly take this z into the controller." }, { "start": 636, "end": 643, "text": " And then from these two, we perform an action, which now we have a choice." }, { "start": 643, "end": 649, "text": " It could go to the environment, right, give you the next observation, but also," }, { "start": 649, "end": 656, "text": " or at the same time, since you kind of need to update your RNN, it can go here" }, { "start": 656, "end": 663, "text": " and update your RNN because it will need to predict the next hidden state." }, { "start": 663, "end": 667, "text": " The thing is, we can also now leave away this path," }, { "start": 667, "end": 679, "text": " which means we can simply take our RNN and our kind of imagine the next latent representation," }, { "start": 679, "end": 686, "text": " put it through the decoder part of the VAE and use that as an observation." }, { "start": 686, "end": 691, "text": " I hope this makes sense. It's rather intuitive, right? You have a model of the environment." }, { "start": 691, "end": 695, "text": " You can simply use this instead of the real environment." }, { "start": 695, "end": 702, "text": " So, there's a bit of pseudo code here, and they do a bunch of experiments, right?" }, { "start": 702, "end": 710, "text": " So, we're primarily interested, so they say, they see here, okay, our compression works," }, { "start": 710, "end": 715, "text": " and this is the real frame, and this is the reconstructed frame, kind of looks, you know," }, { "start": 715, "end": 719, "text": " captures the essence of what's going on." }, { "start": 719, "end": 729, "text": " And I actually want to go down here, the Visdome experiment." }, { "start": 729, "end": 737, "text": " So, what they do here in the car racing experiment is they kind of learn this entire thing, right?" }, { "start": 737, "end": 746, "text": " And then they learn a policy in the real world, in the environment, using this model up here," }, { "start": 746, "end": 752, "text": " this procedure where they always go to the environment, and here is the exact experiment set up." }, { "start": 752, "end": 761, "text": " So, first they collect, again, rollouts for a random policy, they train the VAE, they train the RNN," }, { "start": 761, "end": 775, "text": " and then they learn the controller using the entire model, but in kind of the real world." }, { "start": 775, "end": 782, "text": " So, they always interact with the environment, but because they also have their kind of latent representation" }, { "start": 782, "end": 788, "text": " of the observation, and not directly the observation, they get a higher score." 
}, { "start": 788, "end": 798, "text": " And also, the policy that they use in the real environment transfers to the environment model." }, { "start": 798, "end": 804, "text": " So, the policy they learn in the true environment, it transfers to the imagined," }, { "start": 804, "end": 809, "text": " so if they use the imagined model as an environment, it also performs well." }, { "start": 809, "end": 813, "text": " In the next experiment, they're going to try to do this the other way around." }, { "start": 813, "end": 819, "text": " They're going to try to learn only using their model of the environment," }, { "start": 819, "end": 825, "text": " and then see whether or not the policy transfers to the true environment." }, { "start": 825, "end": 832, "text": " So, that's what they do here. They collect, again, a sample from the environment," }, { "start": 832, "end": 843, "text": " they train the VAE, they train the RNN, and then they simply use this virtual environment," }, { "start": 843, "end": 849, "text": " what they call it, in order to learn a policy, and at the end, they try to transfer," }, { "start": 849, "end": 852, "text": " use the learn policy on the actual environment." }, { "start": 852, "end": 865, "text": " And given the results, you see here, there we go." }, { "start": 865, "end": 877, "text": " So, you see the kind of best it does, I would say, is about here," }, { "start": 877, "end": 884, "text": " where the actual score is, you can see in this, and also in this setting," }, { "start": 884, "end": 892, "text": " is higher than the kind of previous best algorithm in the OpenAI GIMP," }, { "start": 892, "end": 898, "text": " when you go from virtual to actual." }, { "start": 898, "end": 905, "text": " So, what this means is kind of, yeah, you can train using this imagined model," }, { "start": 905, "end": 910, "text": " and then it will actually transfer, but there's a crucial thing," }, { "start": 910, "end": 913, "text": " and that is this kind of temperature thing here." }, { "start": 913, "end": 919, "text": " You can see a lot of times they actually don't manage to reach a good score," }, { "start": 919, "end": 922, "text": " if this parameter is wrong. What does this parameter do?" }, { "start": 922, "end": 927, "text": " This parameter controls, as we discussed, the stochasticity of the model." }, { "start": 927, "end": 935, "text": " So, basically, the environment model doesn't directly imagine a future state," }, { "start": 935, "end": 939, "text": " but it imagines a distribution over future states." }, { "start": 939, "end": 944, "text": " And the higher this parameter, the more stochastic this distribution is," }, { "start": 944, "end": 951, "text": " basically the more uniform, I guess, the more entropy you have in these future states." }, { "start": 951, "end": 955, "text": " We've seen this temperature parameter here." }, { "start": 955, "end": 966, "text": " Which is important, because they go into length explaining why in this entire page here that we skipped." }, { "start": 966, "end": 971, "text": " Here you see just text, there." }, { "start": 971, "end": 975, "text": " Cheating the world model, which basically they say, okay, if you have a wrong model," }, { "start": 975, "end": 980, "text": " if you have a model that's wrong of the environment, and you train a policy on it, necessarily," }, { "start": 980, "end": 987, "text": " it's going to probably find a policy that exploits the wrongness of this model." 
}, { "start": 987, "end": 995, "text": " So you might be able to walk through walls or fly or ignore the fireballs." }, { "start": 995, "end": 1003, "text": " Or basically, find that if you stand next to a wall, in your imagination, you'll never get hit." }, { "start": 1003, "end": 1006, "text": " Something like this, which isn't true in the real world." }, { "start": 1006, "end": 1011, "text": " So the policy will exploit that." }, { "start": 1011, "end": 1016, "text": " And to counter this, they simply basically turn up this temperature parameter," }, { "start": 1016, "end": 1020, "text": " giving them a more stochastic procedure." }, { "start": 1020, "end": 1024, "text": " Meaning they imagine a lot of kind of different futures," }, { "start": 1024, "end": 1029, "text": " and they train their policy on all of them, or in expectation over a sample of them." }, { "start": 1029, "end": 1038, "text": " Which means that if the environment model is wrong, this kind of..." }, { "start": 1038, "end": 1042, "text": " I want to say if it's wrong, this corrects for it. It doesn't." }, { "start": 1042, "end": 1049, "text": " But if it's wrong, you still sample different futures." }, { "start": 1049, "end": 1056, "text": " So if it has one wrong future, you still have the other ones to kind of punish the policy," }, { "start": 1056, "end": 1063, "text": " if it tries to exploit this one mistake. At least that's the reasoning behind it." }, { "start": 1063, "end": 1067, "text": " So that's how they do this." }, { "start": 1067, "end": 1071, "text": " You can interact with their trained environment models online somehow." }, { "start": 1071, "end": 1076, "text": " They also give a kind of a look at what they would like to have." }, { "start": 1076, "end": 1082, "text": " Instead of collecting the environment model from random rollout," }, { "start": 1082, "end": 1086, "text": " they would try to train it, then to use it again to collect more data," }, { "start": 1086, "end": 1089, "text": " to train more environment model, then use the environment," }, { "start": 1089, "end": 1094, "text": " better environment model to train more the policy, and so on in a stepwise fashion." }, { "start": 1094, "end": 1100, "text": " But they don't actually do it, they simply describe it." }, { "start": 1100, "end": 1105, "text": " And the rest of the paper is a bit of related work and discussion." }, { "start": 1105, "end": 1115, "text": " It's very prosaically written, kind of different from what you're used to if you read a lot of these papers." }, { "start": 1115, "end": 1136, "text": " But yeah, I hope you can now you know what's going on and see you next time." } ]
TrLrBL1U8z0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] GitHub Copilot - Copyright, GPL, Patents & more | Brickit LEGO app | Distill goes on break
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "copilot", "github copilot", "github copilot copyright", "github gpl", "github copilot gpl", "copilot copyright", "copilot gpl", "openai gpl", "openai copilot", "openai codex", "github copilot codex", "github automatic code", "copilot public data", "copilot code generation", "distill pub", "ml news", "machine learning news", "deep learning news", "github copilot news", "brickit", "lego brickit", "brickit app" ]
#copilot #copyright #gpl GitHub and OpenAI release Copilot, an AI-powered code autocomplete system that can generate entire functions, classes, and modules from mere definitions and docstrings. Copilot was trained on all public GitHub repositories, and this has a lot of people upset about questions on copyright, code licenses, social obligations, and how much you can profit from other people's work. I give my opinions on the issue in relation to copyright law, the GPL license, and terms of service. Further, we discuss the Brickit app to organize your LEGOs, Distill going on a break, and much more. OUTLINE: 0:00 - Intro 0:20 - GitHub Copilot 6:55 - My opinion on Copilot & Copyright 17:25 - Facebook AI image similarity challenge 18:00 - Brickit app scans your LEGOs and suggests builds 18:40 - Distill journal goes on break 19:50 - Amazon uses algorithms to hire & fire Flex drivers 23:20 - Helpful Libraries: TF Decision Forests, Habitat, Falken, Brax 24:20 - AI-generated papers give science a hard time References: GitHub Copilot: AI pair programmer https://twitter.com/gdb/status/1409890354132750336 https://twitter.com/rickhanlonii/status/1410020702028193798 https://copilot.github.com/ https://docs.github.com/en/github/copilot/research-recitation https://docs.github.com/en/github/site-policy/github-terms-of-service#d-user-generated-content https://tldrlegal.com/license/gnu-general-public-license-v3-(gpl-3)#fulltext https://www.gnu.org/licenses/gpl-faq.en.html#CanIUseGPLToolsForNF https://www.legalzoom.com/knowledge/copyright/topic/copyright-protection-scope https://en.wikipedia.org/wiki/Derivative_work https://twitter.com/giffmana/status/1410320795222654981 https://twitter.com/search?q=copilot&src=typed_query&f=image Facebook AI launches image similarity challenge https://www.drivendata.org/competitions/79/competition-image-similarity-1-dev/ Brickit app sorts your LEGOs https://brickit.app/?ref=producthunt&s=09 https://petapixel.com/2021/07/01/brickits-ai-camera-scans-your-lego-to-suggest-things-you-can-build/ Distill goes on break https://distill.pub/2021/distill-hiatus/ Amazon uses Algorithms to fire Flex drivers https://www.engadget.com/amazon-algorithms-fire-flex-delivery-drivers-055959081.html?guccounter=1 TensorFlow decision forests https://blog.tensorflow.org/2021/05/introducing-tensorflow-decision-forests.html Facebook AI habitat 2.0 https://ai.facebook.com/blog/habitat-20-training-home-assistant-robots-with-faster-simulation-and-new-benchmarks/ Google Falken trains game-playing agents https://ai.googleblog.com/2021/06/quickly-training-game-playing-agents.html https://github.com/google-research/falken Google Brax: differentiable physics simulator https://github.com/google/brax https://arxiv.org/pdf/2106.13281.pdf Fake science is getting faker https://thenextweb.com/news/fake-science-faker-thanks-ai-syndication Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: 
https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
An open door. An open window. An open bottle. OpenAI and GitHub invent Copilot, and everyone freaks out about copyright. Welcome to ML News. Greg Brockman writes: an AI pair programmer in your editor. It's powered by OpenAI Codex, a new AI system which can convert from natural language to code with increasing reliability. He's talking about GitHub Copilot. So Copilot is this system that's developed by OpenAI and GitHub to be a super duper autocomplete. Basically, what you do is you write the name of a function or some kind of a class or actually anything you want, maybe along with a little bit of a docstring, and the system will complete code for you. Now, unlike classical autocomplete systems, which are rule-based and basically suggest to you what's possible, which variables fit here, which ones are in scope, this system goes much beyond this. It will try to guess what you're trying to do, and it will write this code for you, or it will at least suggest it. So they have a bunch of examples here. For example, this parse_expenses example: the user writes the function name and then a few examples in the docstring, as you would write if you were to program it, and then Copilot implements the function itself. Now, I've been using TabNine for a while, and I'm pretty happy with its suggestions, especially if you pair it up with a classic autocomplete: you get the classic autocomplete, which tells you what you are allowed to do, essentially, and you get the AI autocomplete, which is trying to guess what you want to do. This enables things like: if I catch an error that's called PasswordError, it will already provide a log message for me that says password wrong. And there are many more examples where it just kind of infers what you want to do. And that's super helpful at times. Copilot by GitHub is this on steroids: it will implement entire functions, entire classes, from a description or even just from the name of a function. Now, it's not going to be perfect, of course. Whether it actually helps or hurts, and who does it help? Does it help the experienced programmer, because they can write faster and just have to check for errors, because there definitely are errors? If you see right here, in this expense function, the money is held as a floating point number, which is a big no-no when you handle currency. On the other hand, does it help novice programmers, because they see the implementations of functions they wouldn't know how to implement? However, they're probably not going to catch the mistakes that are there. There's a lot of debate around this, but I'm pretty excited to see this, honestly. Now the issue comes when you talk about the following. They say it's trained on billions of lines of public code. GitHub Copilot puts the knowledge you need at your fingertips, saving you yada yada, marketing. However, trained on billions of lines of public code: that means they essentially went to all of GitHub, or at least the public repos, and trained a giant language model on it. It's nothing more than this. It's essentially something like GPT-3 on code, probably augmented by a bit of syntax handling and whatnot, but it's not much more. It's just that lots of data and lots of compute give you a model of what people usually do when prompted with some sort of string.
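As a concrete aside on the parse_expenses example and the floating-point-for-currency mistake mentioned above, here is a minimal hand-written sketch of such a function. This is a hypothetical version, not Copilot's actual output, and it uses Decimal precisely to avoid the float pitfall:

```python
# Hypothetical parse_expenses in the docstring-prompt style discussed above;
# NOT Copilot's actual output. Decimal avoids binary-float rounding on money.
from datetime import date
from decimal import Decimal

def parse_expenses(expenses_string):
    """Parse the list of expenses and return a list of triples.

    Each line looks like: 2021-01-06 -10.99 EUR
    Returns (date, value, currency) tuples.
    """
    expenses = []
    for line in expenses_string.splitlines():
        if not line.strip():
            continue  # skip blank lines
        date_str, value, currency = line.split()
        # Decimal("10.99") is exact; float 10.99 is not, which matters as
        # soon as you start summing or comparing amounts of money.
        expenses.append((date.fromisoformat(date_str), Decimal(value), currency))
    return expenses

print(parse_expenses("2021-01-06 -10.99 EUR\n2021-01-07 20.00 USD"))
```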
So safe to say this won't replace programmers exactly anytime soon, as you can maybe see from this is_even function implemented to extreme precision. Of course, actually, I don't know if that's even real or a fake, because people have definitely been making fakes about Copilot. This is not going to happen anytime soon. What's more worrisome is, for example, OpenAI Copilot emitting personal information, such as this OpenSSH private key, which someone left in their repository, and now Copilot is just regurgitating it. In fact, on the FAQ page, GitHub Copilot says: yes, they sometimes output personal data, not because they do anything wrong, but because people left that personal data in their repositories. And the system is trained on those repositories. And sometimes it will decide that the most likely output is that training sample. And that gets us into an interesting topic. So the topic is: does GitHub Copilot recite code from the training set? Now we've been having this discussion for a long time. Do these large language models actually understand what they're doing? Or are they simply kind of reproducing the training set? And if they reproduce the training set, to what degree do they integrate maybe multiple training set samples, combine them, or do they just take one and kind of reformulate it a little bit? Who knows? GitHub did an extensive study in which they found that only about 0.1% of the outputs are in some way reproductions from the training set. However, there is a big dispute about what exactly counts as a copy, as a recitation, and how different is different enough. And that gets us into the biggest issue, which is copyright. So the issue here is that GitHub and OpenAI essentially take all of this code, train their system with it, and they don't give you Copilot for free. Of course not. I mean, how are you going to live up to that name, OpenAI? They're of course going to sell this. Now fair enough, they did something cool, they want to make money. However, the code they used in order to train the system isn't always freely available. At least that's what people think. Now, how would you feel if you wrote some code, you are the legal owner of the copyright to that code, and GitHub simply trains a model on your code and then sells that model for other people to produce their code, and they don't have to give you anything for it? Also, there is the issue of GPL-licensed code, which requires that any modifications to it again become GPL licensed. The question is: if the model outputs code that was a result of training on GPL code, does the output of the system also become GPL licensed or not? And there is even more of an issue when it comes to patents on code. Patents are yet another category of intellectual property protection. And we've seen examples of Copilot reciting patent-protected code. With all of this, I've been reading into software copyright and whatnot a little bit. And I want to give the disclaimer: I'm not a lawyer, this is not legal advice. This is for entertainment purposes only. If you want some actual opinion, go to an actual lawyer and pay them. But also, what one can say is what Lucas Beyer here says: with everybody hypothesizing about Copilot and GPL licenses, let me add another perspective. Nobody knows, and nothing whatsoever will happen until someone sues someone. Now I'm not going to hold my breath. Which is true. Ultimately, a judge is going to have to decide, case law has to be established, and we'll take it from there.
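Before the opinion part, a quick technical aside: the recitation question above ("how different is different enough") is often made concrete by measuring how long a run of tokens an output shares verbatim with the training text. The following toy sketch illustrates that idea only; it is not GitHub's actual methodology, and the snippets are made up:

```python
# Toy verbatim-overlap check: length of the longest run of consecutive
# tokens that an output shares with a training corpus. A long run suggests
# recitation; a short one suggests recombination. Illustrative only.
def longest_shared_run(output_tokens, train_tokens):
    best, n = 0, 1
    while n <= len(output_tokens) and n <= len(train_tokens):
        train_grams = {tuple(train_tokens[i:i + n])
                       for i in range(len(train_tokens) - n + 1)}
        if any(tuple(output_tokens[i:i + n]) in train_grams
               for i in range(len(output_tokens) - n + 1)):
            best = n      # some n-gram of the output appears verbatim
            n += 1
        else:
            break
    return best

out = "def is_even ( n ) : return n % 2 == 0".split()
train = "# utils\ndef is_even ( n ) : return n % 2 == 0".split()
print(longest_shared_run(out, train))  # 12: the whole output is recited
```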
So what follows is my personal opinion on the matter, trying to analyze this a little bit. Here's a bit of a diagram of what's happening currently in this system. You have the Copilot system as a piece of software that contains maybe a neural network that has been trained on some stuff. How did this Copilot come to be? Copilot is built upon libraries such as PyTorch, which are usually fairly openly licensed, like an MIT license or something like this. So there's no problem there. Then Copilot of course needs copilot.py, the thing that you actually run to do the training and the inference, which also is authored by the Copilot authors and therefore not an issue in our case. But then one of the inputs to Copilot is of course the giant data set. Before we even get into licensing of that data, we have to talk about copyright itself. Everybody's talking about GPL licenses and whatnot, but GPL, being a copyleft license, only kicks in if copyright law even applies. So first we have to see: does copyright law even say anything about using this code in this way? Copyright law works differently in different countries, but in general, it protects creative outputs of people. So if you do something, if you express yourself in some creative way, you automatically obtain copyright on that artistic expression. So if I write a song, then I am the owner of copyright for that song; I don't have to register it anywhere. I have it by default. Now, as an owner of copyright, I get certain benefits. For example, I can decide whether or how my work is reproduced, which derivative works can be made and how they are treated, how it is distributed to the public, how it is performed, and so on. I have certain rights to the dissemination, reproduction, and modification of my work. Now notice what's not on this list: enjoying the work, reading the book, reading the code. So as a copyright owner, once I've decided to display my work publicly, I can't actually prevent anyone from looking at it in the public space that I chose to display it in. So one place we actually have to go is the Terms of Service of GitHub. Under user-generated content, GitHub says you own content you create, but you allow us certain rights to it. And at some point, they say: we need the legal right to do things like host your content, publish it, and share it. This license includes the right to do things like copy it to our database, make backups, show it to you and other users, parse it into a search index, or otherwise analyze it. Now, you can debate whether or not "otherwise analyze it" means they can run a machine learning model on top of it, given that they say this is in order to fulfill their service. But certainly you allow GitHub to display your code, and anyone can go on GitHub, and you cannot prevent them from reading your code. You cannot prevent them from actually downloading your code to a private hard drive. In fact, the ideas and algorithms behind code are not copyrightable. What's copyrightable is only your expression of those ideas. So I can't copy your code, but I can look at your code, learn from it, and then express the same idea in my own code. If you want to protect an idea, that's the domain of patents. And that's a whole other game: you actually have to register for a patent, whereas copyright you obtain automatically.
So if I can look at your code, learn from it, and then reproduce it in my own way, why shouldn't a machine be able to? And that brings us to the second important point right here, which is the right to prepare derivative works based upon the work. Now, according to Wikipedia, a derivative work is an expressive creation that includes major copyrightable elements of an original, previously created first work. Now, the article here is mainly concerned with what copyright exists on the derivative work. But for our purposes, if something is a derivative work of something else, it is potentially in violation of the copyright of that first work. And when is something a derivative work? When it contains major copyrightable elements of that original. Now, is this all a bit fuzzy? Yes, absolutely. And there is a giant gray area, of course. So if I look at an algorithm and I implement that in my own code, what counts as containing major copyrightable elements of the original? If I use the same kind of indentation, if I use the same variable names, if I use the same structure? This isn't really an exact science. It is for judges to decide. But safe to say, there is a way where I can learn from other people's code, no matter the copyright situation, and I can then write something based upon that, and it is not a copyright violation. There are also many situations where the exact same thing is a copyright violation. And that all depends on how much of the copyrightable elements, so not the ideas but the expression of the original work, is contained in the derivative work. And that of course brings us all the way back to the discussion: do large language models simply recite the training data and change it a tiny bit, or do they integrate the training data, learn from the training data, learn the patterns behind the training data, and then come up with their own way of expressing those patterns? The truth is probably somewhere in between: they're not exactly copying the training data, but it's also not the case that they understand what's behind the training data. But safe to say, there is a way where copyright might not even apply, and then there is actually no problem right here. But let's assume for a moment that copyright does apply and things are actually in the realm of derivative works. Well, then there are still multiple questions right here. For example, here you see that there are multiple elements in the system; one is Copilot itself as a piece of software. Now, if you argue that somehow the copyrightable elements of the input data end up in the weights of the neural network, and therefore the neural networks are essentially a derivative work of the input data, then Copilot itself might be in violation of copyright law. But even if Copilot isn't a violation of copyright law, still the output of Copilot might be in violation of copyright law. And that's probably going to have to be decided on a case-by-case basis. And it might even be that OpenAI is not responsible for this, but rather the person actually using the Copilot tool to generate output. It's all a bit of a messy situation. Notice what we haven't talked about so far: GPL. Because GPL, as I said, only applies when copyright applies. Now let's assume copyright applies. So here is where we get into licenses of code. In general, the training data contains broad categories of how code is licensed, and I've listed four of them here. There is the boring code, which is so boring that copyright doesn't apply: literally, it's no expression of creativity.
It's just formulaic code writing, maybe even auto-generated code: not copyrightable, not a problem there. There is also the open category, which is so openly licensed that it's usable in almost any form, like an MIT license. As long as you keep the disclaimers there, you're fine. Then there is the bunch of code that does not have a license at all. If there is no license, that essentially means that the copyright owner simply gives GitHub the right to publish, but retains all other copyright, and everything we said so far applies. So either Copilot, or the output Copilot generates, or actually both, might be a violation of the copyright of the unlicensed code. And then there is GPL code. So the GPL, the GNU General Public License, in this case version three, but they're all kind of similar: these are generally known as copyleft licenses, because if a piece of code is licensed under the GPL, it means that if you were to modify this code, then your modifications also have to be licensed under the GPL. And being licensed under the GPL means things like: if someone obtains a copy of the software, then you also have to provide a copy of the source code with that software. So the GPL is a bit like a virus: if it initially applies to a piece of software, and someone else uses that software, maybe modifies it a little bit or includes it into their system, the whole system has to be under the GPL, or they are in violation of the license. Of course, if Copilot is found to be a derivative work of GPL-licensed data, that would mean Copilot itself falls under the GPL, and therefore OpenAI would have to give us its source. Now, what source code is is a bit of a tricky business in the legal scene, but the GPL defines it as the preferred form of the work for making modifications to it. Now, what is that exactly for OpenAI's Copilot? Maybe it's not the weights of the neural network itself, because, like, how can I modify them? Maybe it's the training set plus copilot.py. Maybe it's not even the training set, but actually the scraper for the training set as well as the training code, who knows? Now, GitHub and OpenAI can save themselves from having to release the source code of Copilot if they only make it available over the network, in which case you don't have to give out the source code; that requirement would only apply in the case of the AGPL. Regardless of that, the bigger question is: what if the output of Copilot is a derivative work of GPL-licensed code? In that case, the output of Copilot, on a case-by-case basis, would also have to be GPL licensed. And who's responsible for that? Probably you, as a user of Copilot. If you ask Copilot for code, you get an output; I don't think it matters whether or not you know that it's a derivative work of some GPL-licensed code. If you then use that code and build upon it, and maybe sell software based on it, that software technically is under the GPL. So this was my little take on the copyright situation around OpenAI's Copilot. I think it's a great tool, but you can also see it brings a lot of difficulties with it, not necessarily technical difficulties, but difficulties from the human environment. So let me know in the comments what you think about the situation around copyright, and whether I completely butchered some of the things. Thanks. Next news, speaking of copyright: Facebook AI launches an image similarity challenge where they want you to figure out where all the memes came from.
So the challenge is essentially figuring out if someone took some photo and modified it in some way. And of course, the reason behind all of this is going to be to find the original creator of every meme, so we can give them the proper credit and glory they deserve. Nothing else, no one else. Image matching, very limited applications. Don't even worry about it. Next news: Brickit is a new app that scans your Legos and tells you what you can build from them. PetaPixel has a good article about it and shows this demo video. The app will scan your collection of Legos and then tell you what you can do with it. So you can see it gives you a bunch of suggestions of what to do. Pretty neat. Now this is a really, really cool app, though I wonder: the things it proposes are often made out of maybe 20 parts, and this pile has at least 500 or so. In any case, if you do have an iOS device, which I don't, give it a try. It looks like a lot of fun. Next, in more sad news, the Distill.pub website is going on a break. So you might know Distill as an online journal which publishes in a non-traditional way: they want very interactive articles, they want very visual articles explaining something. They also publish commentaries, threads, but also peer-reviewed science. The frequency of publication hasn't been too high from them, but the things they have published generally were super well received. So one reason they cite is volunteer burnout, which, given the high quality standards that they have, I can totally believe. This is an enormous effort, to keep this going and to keep the quality high, and, you know, respect for doing it this long. The article makes another point, namely that self-publication seems like the future in most cases, and I think the field generally agrees. Today, scientific progress is made more through sharing arXiv publications and discussing them on social media than it is through the peer review system of conferences. So even though it's sad that Distill will take a break, what they're advocating for is a better future for science, and that's a great thing. Okay, next news: Engadget writes, Amazon is reportedly using algorithms to fire Flex delivery drivers. Amazon being Amazon, they have this huge fleet of drivers that they don't necessarily hire; it's kind of like an Uber model, where the driver has an app and they get essentially subcontracted for driving stuff somewhere. And these aren't few drivers; there are apparently millions of drivers doing this. Now, keeping up some sort of HR department or some sort of human contact with millions of people is a challenge, so Amazon opted to just not do it. Instead, they use algorithms to track the performance of their drivers, and if the performance sinks too low, they fire the drivers algorithmically. So the article states the frustration of some of these drivers, saying the system can often fire workers seemingly without good cause. According to the report, one worker said her rating fell after she was forced to halt deliveries due to a nail in her tire. She succeeded in boosting it to Great over the next several weeks, but her account was eventually terminated for violating Amazon's terms of service. She contested the firing, but the company wouldn't reinstate her. Another driver was unable to deliver packages to an apartment complex because it was closed, with the gate locked, and the residents wouldn't answer their phones. In another building, an Amazon locker failed to open. So their own system failed, and they punished their drivers for it.
His rating also dropped, and he spent six weeks trying to raise it, only to be fired for falling below a prescribed level. If a driver feels they're wrongly terminated, some feel there's not much recourse either. Drivers must spend $200 to dispute any termination, and many have said it's not worth the effort. Whenever there's an issue, there is no support, said Koch, who is 29. It's you against the machine, so you don't even try. Now here you could try to make a nuanced point: that these people aren't employees, that it's simply not a practical solution to manage these as employees, that overall the system might be better off, that a lot of drivers are having good experiences, that this is just a necessity of managing so many people. But, but, see, not so long ago I wanted to get some Amazon gift cards for my Discord admins. They're doing a good job, I wanted to give them some thanks, so I tried to buy some gift cards, and Amazon locked me out of my account, security reasons. So I verified my identity. All good. Tried to buy the gift cards again. They locked me out again. Verified my identity. Tried a third time. Now they locked me out permanently. So I'm trying to contact support. Guess what you have to do to contact support. Log in. Oh great. Guess what you have to do to get a support contact number. Log in. Oh great. Tried emailing them. Nothing happened. Tried calling them. They say they'll fix it. They haven't fixed it. For months now. They said I should make a new account. Great. Verified the phone number of the new account. Your phone is already associated with an account. My old account has all my collection of audiobooks and ebooks on it, and this is just splendid. So I definitely feel with these drivers, if it's you against the machine. Amazon ranks just about second to PayPal when it comes to actual customer support. So I'm not going to make the nuanced point here. Screw you, Amazon. Screw you. You deserve every bit of negative press that you're getting here. At least, when there's an issue, have some support for your drivers who get a nail stuck in their tire. Yes, I'm using a journalistic medium to settle a personal dispute. What are you going to do about it? Get me my account back. Okay, next we're going to look at some helpful libraries. We should make this a segment. Helpful libraries. Helpful libraries. Okay. TensorFlow introduces Decision Forests. New algorithm. Never heard of it before. Give it a try. Decision forests in TensorFlow. Facebook Habitat: a 3D environment to train your autonomous robot to get you something from the fridge when you're just too lazy. Have fun with your diabetes. Try it out. Google Research's Falken trains your game-playing agent: you give it a little bit of a demonstration, it learns how to play your game, and it tests it for you and finds bugs. So now you don't even have to play your game while you don't walk to the fridge. Good job. And lastly, did you ever want to figure out what the gradient is of your face smashing against the wall? Well, now you can with Google AI's Brax: you can simulate physics in a differentiable way on a TPU, really fast. And in our last news, TNW writes: fake science is getting faker, thanks, AI. Journals are retracting more and more papers because they're not by the authors they claim to be. Now of course, you always know it's a serious article when there is a very futuristic robot in the picture at the front.
But the article is actually a good article, talking about the rise of AI-generated papers and how there is a massive upsurge in retractions among scientific publications. But besides that, I like the intro. They say: of course, sometimes papers get retracted because the authors made an honest mistake in the research. In more than half the cases, however, it's because of academic misconduct or fraud. Up until a decade ago, this sort of behavior was more or less limited to researchers falsifying experimental data or skewing results to favor their theory. The more sophisticated technology has become, however, the more complicated things have gotten. So the rest of the article talks about how people add big names to their papers, how people generate fake authors, even how people generate entirely fake papers, and so on. You know, that's a whole big problem, but I still think that people being shady with the results of their research is still the biggest problem. There are just not too many retractions of it in machine learning, because you can never reproduce someone else's paper. If you didn't get my numbers, you just did it wrong. So what is the real solution against fake science? It's probably hard to know, but I guess an approach to a solution would be to have some sort of a distributed checking mechanism, where you can aggregate opinions from all around the world about a given topic and then sort of look at everything and evaluate for yourself, rather than relying on a centralized committee to do it for you. Be that for fake news or fake science or fake anything, I think that's the only way forward, because any centralized institution will eventually get either corrupted or gamed, because they have some sort of scoring system. But I'm interested in what you have to say. All of this is a problem. It's not exactly clear how we go about making this better. Can we even make it better, or can we just find better ways to ignore the fake things? All right, that was it from me for this week's ML News. I hope you had fun. I hope you don't get replaced by a machine anytime soon, and most of all, I hope I don't get replaced by a machine anytime soon. So I wish you a happy day, and goodbye.
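Circling back to the helpful-libraries segment above, here is a hedged sketch of the TensorFlow Decision Forests library, following its documented Keras-style usage; the tiny dataset and column names are made up for illustration:

```python
# Hedged sketch of TensorFlow Decision Forests (tfdf), per its documented
# Keras-style API. The toy dataframe and column names are invented.
import pandas as pd
import tensorflow_decision_forests as tfdf

df = pd.DataFrame({
    "sepal_len": [5.1, 6.2, 4.8, 7.0],
    "sepal_wid": [3.5, 2.9, 3.0, 3.2],
    "species":   [0, 1, 0, 1],          # label column
})
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="species")

model = tfdf.keras.RandomForestModel()  # works without feature preprocessing
model.fit(train_ds)
model.summary()                          # per-feature importances, tree stats
```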
[ { "start": 0, "end": 2, "text": " An open door." }, { "start": 2, "end": 6, "text": " An open window." }, { "start": 6, "end": 10, "text": " An open bottle." }, { "start": 10, "end": 15, "text": " Open AI and GitHub invent Copilot and everyone freaks out about copyright." }, { "start": 15, "end": 17, "text": " Welcome to ML News." }, { "start": 21, "end": 25, "text": " Greg Brockman writes an AI pair programmer in your editor." }, { "start": 25, "end": 33, "text": " It's powered by OpenAI Codex, a new AI system which can convert from natural language to code with increasing reliability." }, { "start": 33, "end": 35, "text": " He's talking about GitHub Copilot." }, { "start": 35, "end": 43, "text": " So Copilot is this system that's developed by Open AI and GitHub to be a super duper autocomplete." }, { "start": 43, "end": 54, "text": " Basically, what you do is you write the name of a function or some kind of a class or actually anything you want, maybe along with a little bit of a doc string, and the system will complete code for you." }, { "start": 54, "end": 66, "text": " Now, other than classical autocomplete systems, which are rule based and basically suggest to you what's possible, which variables fit here, which ones are in scope, this system goes much beyond this." }, { "start": 66, "end": 74, "text": " It will try to guess what you're trying to do, and it will write this code for you or it will at least suggest it." }, { "start": 74, "end": 76, "text": " So they have a bunch of examples here." }, { "start": 76, "end": 88, "text": " For example, this parse expenses statement, the user writes the function name and then a few examples in the doc string as you would write if you were to program it and then Copilot implements the function itself." }, { "start": 88, "end": 105, "text": " Now I've been using tab nine for a while, and I'm pretty happy with its suggestions, especially if you pair it up with a classic autocomplete, you get the classic autocomplete, which tells you what you are allowed to do essentially, and you get the AI autocomplete, which is trying to guess what you want to do." }, { "start": 105, "end": 114, "text": " This enables things like if I catch an error that's called password error, it will already provide a log message for me that says password wrong." }, { "start": 114, "end": 118, "text": " And there are many more examples where it just kind of infers what you want to do." }, { "start": 118, "end": 129, "text": " And that's super helpful at times. Copilot by GitHub is this on steroid, it will implement entire functions, entire classes from a description or even just from a name of a function." }, { "start": 129, "end": 142, "text": " Now it's not going to be perfect, of course, whether it actually helps or hurts and who does it help? Does it help the experienced programmer because they can write faster and just have to check for errors because there definitely are errors." }, { "start": 142, "end": 150, "text": " If you see right here, in this expense function, the money is held as a floating point number, which is a big no no when you handle currency." }, { "start": 150, "end": 161, "text": " On the other hand, does it help novice programmers because they see the implementations of functions they wouldn't know how to implement. However, they're probably going to not catch the mistakes there are." }, { "start": 161, "end": 166, "text": " There's a lot of debate around this, but I'm pretty excited to see this honestly." 
}, { "start": 166, "end": 173, "text": " Now the issue comes when you talk about the following. They say it's trained on billions of lines of public code." }, { "start": 173, "end": 178, "text": " GitHub Copilot puts the knowledge you need at your fingertips saving you yada yada marketing." }, { "start": 178, "end": 188, "text": " However, trained on billions of lines of public code. That means they essentially went to all of GitHub or the public repo and trained a giant language model on it." }, { "start": 188, "end": 195, "text": " It's nothing more than this. It's essentially something like GPT-3 on code, probably augmented by a bit of syntaxing and whatnot." }, { "start": 195, "end": 203, "text": " But it's not much more. It's just lots of data, lots of compute gives you a model of what people usually do when prompted with some sort of strings." }, { "start": 203, "end": 220, "text": " So safe to say this won't replace programmers exactly anytime soon, as you can maybe see from this is even function implemented to extreme precision, of course, actually, I don't know if that's even real or a fake, because people have definitely been making fakes about Copilot." }, { "start": 220, "end": 232, "text": " This is not going to happen anytime soon. What's more worrisome is, for example, OpenAI Copilot emitting personal information such as this open SSH private key, which someone left in their repository" }, { "start": 232, "end": 247, "text": " and now Copilot is just regurgitating it. In fact, on the FAQ page, GitHub Copilot says, yes, they sometimes output personal data, not because they do anything wrong, but because people left that personal data in their repositories." }, { "start": 247, "end": 255, "text": " And the system is trained on those repositories. And sometimes it will decide that the most likely output is that training sample." }, { "start": 255, "end": 264, "text": " And that gets us into an interesting topic. So the topic is does GitHub Copilot recite code from the training set? Now we've been having this discussion for a long time." }, { "start": 264, "end": 281, "text": " Do these large language models actually understand what they're doing? Or are they simply kind of reproducing the training set? And if they reproduce the training set, by which degree do they integrate maybe multiple training set samples, combine them or do they just take one and kind of reformulate it a little bit?" }, { "start": 281, "end": 291, "text": " Who knows GitHub did an extensive study in which they found that only about 0.1% of the outputs are in some way reproductions from the training set." }, { "start": 291, "end": 302, "text": " However, there is a big dispute about what exactly counts as a copy as a recitation and how different is different enough. And that gets us into the biggest issue, which is copyright." }, { "start": 302, "end": 314, "text": " So the issue here is that GitHub and OpenAI essentially take all of this code, train their system with it, and they don't give you the copilot for free. Of course not. I mean, how are you going to live up to that name OpenAI?" }, { "start": 314, "end": 329, "text": " They're of course going to sell this. Now fair enough, they did something cool, they want to make money. However, the code they used in order to train the system isn't always freely available. At least that's what people think." 
}, { "start": 329, "end": 344, "text": " Now, how would you feel if you wrote some code, you are the legal owner of the copyright to that code and GitHub simply trains a model on your code and then sells that model for other people to produce their code and they don't have to give you anything for it." }, { "start": 344, "end": 362, "text": " Also, there is the issue of GPL license code, which requires that any modifications to it again become GPL license. The question is, if the model outputs code that was a result of training on GPL code, does the output of the system also become GPL licensed or not?" }, { "start": 362, "end": 379, "text": " And there is even more of an issue when it comes to patents on code. Patents are yet another category of intellectual property protection. And we've seen example of copilot reciting patent protected code. With all of this, I've been reading into software copyright and whatnot a little bit." }, { "start": 379, "end": 399, "text": " And I want to give the disclaimer, I'm not a lawyer, this is not legal advice. This is entertainment purposes only if you want some actual opinion, go to an actual lawyer and pay them. But also what one can say is what Lucas Byer here says, with everybody hypothesizing about copilot and GPL license, let me add another perspective." }, { "start": 399, "end": 412, "text": " Nobody knows and nothing whatsoever will happen until someone sues someone. Now I'm not going to hold my breath, which is true. Ultimately, a judge is going to have to decide case law has to be established and we'll take it from there." }, { "start": 412, "end": 422, "text": " So what follows is my personal opinion on the matter trying to analyze this a little bit. Here's a bit of a diagram of what's happening currently in this system." }, { "start": 422, "end": 441, "text": " You have the copilot system as a piece of software that contains maybe a neural network that has been trained on some stuff. How did this copilot come to be the copilot is built upon library such as pytorch, which are usually fairly openly licensed like an MIT license or something like this." }, { "start": 441, "end": 455, "text": " So there's no problem there, then copilot of course needs copilot.py, the thing that you actually run to do the training and the inference, which also is authored by the copilot authors and therefore not an issue in our case." }, { "start": 455, "end": 468, "text": " But then one of the inputs to copilot is of course the giant data set. Before we even get into licensing of that data, we have to talk about copyright itself. Everybody's talking about GPL license and whatnot." }, { "start": 468, "end": 487, "text": " But GPL being a copy left license only pulls if copyright law even applies. So first we have to see does copyright law even say anything about using this code in this way. Copyright law works differently in different countries, but in general, it protects creative outputs of people." }, { "start": 487, "end": 502, "text": " So if you do something, if you express yourself in some creative way, you obtain automatically copyright on that artistic expression. So if I write a song, then I am the owner of copyright for that song, I don't have to register it anywhere." }, { "start": 502, "end": 518, "text": " I have it by default. Now as an owner of copyright, I get certain benefits. 
For example, I can decide whether or how my work is reproduced, which derivative works can be made and how they are treated, how it is distributed to the public, how it is performed, and so on." }, { "start": 518, "end": 528, "text": " I have certain rights to the dissemination, reproduction and modification of my work. Now notice what's not on this list, enjoying the work, reading the book, reading the code." }, { "start": 528, "end": 543, "text": " So as a copyright owner, once I've decided to display my work publicly, I can't actually prevent anyone from looking at it in the public space that I chose to display it. So one place we actually have to go is the Terms of Service of GitHub." }, { "start": 543, "end": 567, "text": " So under user generated content, GitHub says you own content you create, but you allow us certain rights to it. And at some point, they say we need the legal right to do things like host your content, publish it and share it. This license includes the right to do things like copy it to our database, make backups, show it to you and other users, parse it into search index or otherwise analyze it." }, { "start": 567, "end": 585, "text": " Now you can debate whether or not otherwise analyze it means they can run machine learning model on top of it given that they say this is in order to fulfill their service. But certainly you allow GitHub to display your code and anyone can go on GitHub and you cannot prevent them from reading your code." }, { "start": 585, "end": 605, "text": " You cannot prevent them from actually downloading your code to a private hard drive. In fact, the ideas and algorithms behind code are not copyrightable. What's copyrightable is only your expression of those ideas. So I can't copy your code, but I can look at your code, learn from it and then express the same idea in my own code." }, { "start": 605, "end": 629, "text": " If you want to protect an idea, that's the terms of patents. And that's a whole other game, you actually have to register for a patent, whereas copyright you obtain automatically. So if I can look at your code, learn from it and then reproduce it in my own way, why shouldn't machine be able to and that brings us to the second important point right here, which is the right to prepare derivative works based upon the work." }, { "start": 629, "end": 653, "text": " Now, according to Wikipedia, a derivative work is an expressive creation that includes major copyrightable elements of an original previously created first work. Now the article here is mainly concerned with what copyright exists on the derivative work. But for our purposes, if something is a derivative work of something else, it is potentially in violation of the copyright of that first work." }, { "start": 653, "end": 682, "text": " And when is something a derivative work if it contains major copyrightable elements of that original? Now, is this all a bit fuzzy? Yes, absolutely. And there is a giant gray area, of course. So if I look at an algorithm, and I implement that in my own code, what counts as containing major copyrightable elements of the original, if I use the same kind of indentations, if I use the same variable names, if I use the same structure, this isn't really an exact science." }, { "start": 682, "end": 711, "text": " It is for judges to decide. But safe to say, there is a way where I can learn from other people's code, no matter the copyright situation, and I can then write something based upon that. And it is not a copyright violation. 
There is also many situations where the exact same thing is a copyright violation. And that all depends on how much of the copyrightable elements so not the ideas but the expression of the original work is contained in the derivative work." }, { "start": 711, "end": 730, "text": " And that of course brings us all the way back to the discussion, do large language models simply recite the training data and change it a tiny bit or do they integrate the training data, learn from the training data, learn the patterns behind the training data and then come up with their own way of expressing those patterns." }, { "start": 730, "end": 748, "text": " The truth is probably somewhere in between they're not exactly copying the training data, but it's also not the fact that they understand what's behind the training data. But safe to say there is a way where copyright might not even apply and then there is actually no problem right here." }, { "start": 748, "end": 766, "text": " But let's assume for a moment that copyright does apply and things are actually in the realm of derivative works. Well, then there are still multiple questions right here. For example, here you see that there are multiple elements in the system, one is co pilot itself as a software." }, { "start": 766, "end": 784, "text": " Now if you argue that somehow the copyright elements of the input data end up in the weights of the neural network and therefore the neural networks are essentially a derivative work of the input data, then co pilot itself might be in violation of copyright law." }, { "start": 784, "end": 807, "text": " But even if co pilot isn't a violation of copyright law, still the output of co pilot might be in violation of copyright law. And that's going to probably have to be decided on a case by case basis. And it might even be that open AI might not be responsible for this, but the person actually using the co pilot tool to generate output, it's all a bit of a messy situation." }, { "start": 807, "end": 836, "text": " Notice what we haven't talked about so far, GPL, because GPL, as I said, only applies when copyright applies. Now let's assume copyright applies. So here is where we get into licenses of code. In general, the training data contains broad categories of how code is licensed. And I've listed four of them here, there is the boring code, which is so boring that copyright doesn't apply literally, it's no expression of creativity. It's just formulaic code writing, maybe even auto generative code." }, { "start": 836, "end": 865, "text": " Maybe even auto generated, not copyrightable, not a problem there. There is also the open category, which is so openly licensed that it's usable in any format like an MIT license. As long as you keep the disclaimers there, you're fine. Then there is the bunch of code that does not have a license at all. If there is no license, that essentially means that copyright owner simply gives GitHub the right to publish but retains all other copyright and everything we said so far." }, { "start": 865, "end": 886, "text": " So either copilot or the output copilot generates or actually both might be a violation of the copyright of the unlicensed code. And then there is GPL code. So the GPL, the GNU general public license, in this case version three, but they're all kind of similar. 
I know an OT" }, { "start": 886, "end": 915, "text": " authorization, they are generally known as copy left licenses, because if a piece of code is licensed under the GPL, it means that if you were to modify this code, then your modifications also have to be licensed under the GPL. And being licensed under the GPL means things like if someone obtains a copy of the software, then also you have to provide a copy of the source code with that software. So the GPL is a bit like a virus that if it initially applies to a" }, { "start": 915, "end": 945, "text": " piece of software, someone else uses that software, maybe modifies it a little bit or includes it into their system, the whole system has to be under the GPL or they are in violation of the license. Of course, if copilot is found to be a derivative work of GPL licensed data, that will mean copilot itself would fall under the GPL and therefore OpenAI would have to give us its source. Now what source code is is a bit of a tricky business in the legal scene, but GPL defines it as the preferred" }, { "start": 945, "end": 974, "text": " form of the work for making modifications to it. Now, what is that exactly for OpenAI pilot, maybe it's not the weights of the neural network itself, because like, how can I modify them? Maybe it's the training set plus copilot.pi. Maybe it's even not even the training set, but it's actually the scraper for the training set as well as the training code, who knows? Now, GitHub and OpenAI can save themselves from having to release the source code of copilot if they only make it available over the network." }, { "start": 974, "end": 995, "text": " In which case, you don't have to give out the source code license that would only be in the case of the A GPL. Regardless of that, the bigger question is what if the output of copilot is a derivative work of GPL licensed code? In that case, the output of copilot in a case by case basis would also have to be GPL licensed." }, { "start": 995, "end": 1018, "text": " And who's responsible for that? Probably you as a user of copilot, if you ask copilot for code, you get an output, I don't think it matters whether or not you know that it's a derivative work of some GPL licensed code, if you then use that code and build upon it and maybe sell software based on it, that software technically is under the GPL." }, { "start": 1018, "end": 1045, "text": " So this was my little take on the copyright situation around OpenAI copilot. I think it's a great tool, but you can also see it brings a lot of difficulties with it, not necessarily technical difficulties, but difficulties from the human environment. So let me know in the comments what you think about the situation about copyright and whether I completely butchered some of the things. Thanks." }, { "start": 1045, "end": 1073, "text": " Next news, speaking of copyright, Facebook AI launches a image similarity challenge where they want you to figure out where all the memes came from. So the challenge is essentially figuring out if someone took some photo and modified it in some way. And of course, the reason behind all of this is going to be to find the original creator of every meme so we can give them proper credit and glory they deserve." }, { "start": 1073, "end": 1079, "text": " Nothing else, no one else. Image matching, very limited applications. Don't even worry about it." }, { "start": 1079, "end": 1099, "text": " Next news, Brickit is a new app that scans your Legos and tells what you can build from them. 
Peter pixel has a good article about it and shows this demo video. The app will scan your collection of Legos and then tell you what you can do with it. So you can see it gives you a bunch of suggestions of what to do. Pretty neat." }, { "start": 1099, "end": 1116, "text": " Now this is a really really cool app, though I wonder the things it proposes are often made out of maybe 20 parts and this pile has at least 500 or so. In any case, if you do have an iOS device, which I don't give it a try. It looks like a lot of fun." }, { "start": 1116, "end": 1138, "text": " Next news in a more sad news, the Distill Pub website is going on a break. So you might know Distill as an online journal which publishes in a non traditional way they want very interactive articles, they want very visual articles explaining something they also publish" }, { "start": 1138, "end": 1164, "text": " commentaries threads, but also peer reviewed science, the frequency of publication hasn't been too high from them. But the things they have published generally were super well received. So one reason they cite is volunteer burnout, which given the high quality standards that they have, I can totally believe this is an enormous effort to keep this going to keep the quality high and you know, respect for doing it this long." }, { "start": 1164, "end": 1181, "text": " The article makes another point, namely that self publication seems like the future in most cases, and I think the field generally agrees today scientific progress is more made through sharing archive publications and discussing them on social media, than it is through the peer review system" }, { "start": 1181, "end": 1192, "text": " of conferences. So even though it's sad to still will take a break what they're advocating for is a better future for science and that's a great thing." }, { "start": 1192, "end": 1208, "text": " Okay, next news and gadget rights, Amazon is reportedly using algorithms to fire flex delivery drivers to Amazon being Amazon has this huge fleet of drivers that they don't necessarily hire it's kind of like an Uber model where the driver has an app and they get" }, { "start": 1208, "end": 1224, "text": " essentially subcontracted for driving stuff somewhere and these aren't few drivers, they are apparently millions of drivers doing this. Now keeping up some sort of HR department on some sort of human contact with millions of people is a challenge." }, { "start": 1224, "end": 1239, "text": " So Amazon opted to just not do it. Instead, they use algorithms to track the performance of their drivers and if the performance sinks too low, they fire the drivers algorithmically. So the article states the frustration of some of these drivers saying the system can often" }, { "start": 1239, "end": 1255, "text": " fire workers seemingly without good cause according to the report, one worker said her rating fell after she was forced to halt deliveries due to a nail in her tire. She succeeded in boosting it to great over the next several weeks, but her account was eventually terminated for violating Amazon's terms of service." }, { "start": 1255, "end": 1268, "text": " She contested the firing but the company wouldn't reinstate her. Another driver was unable to deliver packages to an apartment complex because it was closed with the gate locked and the residents wouldn't answer their phones. In another building an Amazon locker failed to open." 
}, { "start": 1268, "end": 1282, "text": " So their own system failed and they punished their drivers for it. His rating also dropped and he spent six weeks trying to raise it only to be fired for falling below a prescribed level. If a driver feels they're wrongly terminated, some feel there's not much recourse either." }, { "start": 1282, "end": 1294, "text": " Driver must spend $200 to dispute any termination and many have said it's not worth the effort. Whenever there's an issue, there is no support said Koch who is 29. It's you against the machine so you don't even try." }, { "start": 1294, "end": 1315, "text": " Now here you could try to make a nuanced point that these people aren't employees, that it's simply not a practical solution to manage these as employees, that overall the system might be better off, that a lot of drivers are having good experiences, that this is just a necessity of managing so many people." }, { "start": 1315, "end": 1331, "text": " But, but, see, not so long ago I wanted to get some Amazon gift cards for my Discord admins. They're doing a good job. I wanted to give them some thanks so I tried to buy some gift cards and Amazon locked me out of my account security reasons." }, { "start": 1331, "end": 1340, "text": " So I verified my identity. All good. Tried to buy the gift cards again. They locked me out again. Verified my identity. Tried a third time. Now they locked me out permanently." }, { "start": 1340, "end": 1355, "text": " So I'm trying to contact support. Guess what you have to do to contact support. Log in. Oh great. Guess what you have to do to get a support contact number. Log in. Oh great. Tried emailing them. Nothing happened. Tried calling them. They say they'll fix it. They haven't fixed it." }, { "start": 1355, "end": 1372, "text": " For months now. They said I should make a new account. Great. Verified phone number of the new account. Your phone is already associated with an account. My old account has all my collection of audiobooks and ebooks on it and this is just splendid. So I definitely feel with these drivers if it's you against the machine." }, { "start": 1372, "end": 1390, "text": " Amazon ranks just about second to PayPal when it comes to actual customer support. So I'm not going to make the nuance point here. Screw you Amazon. Screw you. You deserve every bit of negative press that you're getting here. At least when there's an issue have some support for your drivers who get a nail stuck in their tire." }, { "start": 1390, "end": 1400, "text": " Yes I'm using a journalistic medium to settle a personal dispute. What are you going to do about it? Get me my account back." }, { "start": 1400, "end": 1417, "text": " Okay next we're going to look at some helpful libraries. We should make this a segment. Helpful libraries. Helpful libraries. Okay. TensorFlow introduces decision forests. New algorithm. Never heard of it before. Give it a try. Decision forests in TensorFlow." }, { "start": 1417, "end": 1438, "text": " Facebook. Habitat. 3D environment to train your autonomous robot to get you something from the fridge when you're just too lazy. Have fun with your diabetes. Try it out. Google research falcon trains your game playing agent. You give it a little bit of a demonstration. It learns how to play your game and test it for you and find bugs." }, { "start": 1438, "end": 1458, "text": " So now you don't even have to play your game while you don't walk to the fridge. Good job. 
And lastly did you ever want to figure out what the gradient is of your face smashing against the wall. Well now you can with Google AIs, BRACs, you can simulate physics in a differentiable way on a TPU really fast." }, { "start": 1458, "end": 1476, "text": " And in our last news, TNW writes fake science is getting faker thanks AI. Journals are retracting more and more papers because they're not by the authors they claim to be. Now of course you always know it's a serious article when there is a very futuristic robot on the picture in the front." }, { "start": 1476, "end": 1496, "text": " But the article is actually a good article talking about the rise of AI generated papers and how there is a massive upsurge in retractions among scientific publications. But besides that I like the intro they say. They say of course sometimes papers get retracted because of the authors made an honest mistake in the research." }, { "start": 1496, "end": 1510, "text": " In more than half the cases however it's because of academic misconduct or fraud. Up until a decade ago this sort of behavior was more or less limited to researchers falsifying experimental data or skewing results to favor their theory." }, { "start": 1510, "end": 1526, "text": " The more sophisticated technology has become however the more things have gotten a lot more complicated. So the rest of the article talks about how people add big names to their papers, how people generate fake authors even how people generate even fake papers and so on." }, { "start": 1526, "end": 1540, "text": " You know that's a whole big problem but I still think that people being shady with the results of their research is still the biggest problem. There's just not too many retractions of it in machine learning because you can never reproduce someone else's paper." }, { "start": 1540, "end": 1555, "text": " If you didn't get my numbers you just did it wrong. So what is the real solution against fake science? It's probably hard to know but I guess an approach to a solution would be to have some sort of a distributed checking mechanism where you can aggregate opinions from" }, { "start": 1555, "end": 1580, "text": " all around the world about a given topic and then sort of look at everything and evaluate for yourself rather than relying on a centralized committee to do it for you. Be that for fake news or fake science or fake anything I think that's the only way forward because any centralized institutions will eventually get either corrupted or gained because they have some sort of scoring system." }, { "start": 1580, "end": 1593, "text": " But I'm interested in what you have to say. All of this is a problem. It's not exactly clear how we go about making this better. Can we even make it better or can we just find better ways to ignore the fake things?" }, { "start": 1593, "end": 1621, "text": " All right that was it from me for this week's ML news. I hope you had fun. I hope you don't get replaced by a machine anytime soon and most of all I hope I don't get replaced by a machine anytime soon. So wish you a happy day and goodbye." } ]
DiNzQP7kK-s
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Descending through a Crowded Valley -- Benchmarking Deep Learning Optimizers (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "optimization", "polyak", "nesterov", "benchmark", "cnn", "cifar", "mnist", "adam", "adagrad", "adadelta", "momentum", "sgd", "gradient", "learning rate", "tuning", "budget", "default parameters", "comparison", "grid search", "random search", "random seed", "vae", "learning rate schedule", "cosine decay", "trapezoid", "improvement", "best optimizer", "best optimizer for deep learning", "stochastic gradient descent" ]
#ai #research #optimization Deep Learning famously gives rise to very complex, non-linear optimization problems that cannot be solved analytically. Therefore, the choice of a suitable optimization algorithm can often make or break the training of a Deep Neural Network. Yet, the literature is full with hundreds of different algorithms, each claiming to be superior and selecting one of them is mostly done based on popular opinion or anecdotes. This paper investigates 14 of the most popular optimizers in a standardized benchmark and even though there is no clear winner, it can give some recommendations as a result. OUTLINE: 0:00 - Introduction & Overview 2:15 - The Overwhelming Amount of Optimizers 5:50 - Compared Optimizers 6:50 - Default Parameters & Tuning Distribution 13:10 - Deep Learning Problems Considered 16:45 - Tuning on Single Seeds 23:15 - Results & Interpretation 34:00 - Learning Rate Schedules & Noise 36:10 - Conclusions & Comments Paper: https://arxiv.org/abs/2007.01547 Raw Results: https://github.com/SirRob1997/Crowded-Valley---Results Abstract: Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one. The growing literature now lists hundreds of optimization methods. In the absence of clear theoretical guidance and conclusive empirical evidence, the decision is often made based on anecdotes. In this work, we aim to replace these anecdotes, if not with a conclusive ranking, then at least with evidence-backed heuristics. To do so, we perform an extensive, standardized benchmark of more than a dozen particularly popular deep learning optimizers while giving a concise overview of the wide range of possible choices. Analyzing almost 35,000 individual runs, we contribute the following three points: (i) Optimizer performance varies greatly across tasks. (ii) We observe that evaluating multiple optimizers with default parameters works approximately as well as tuning the hyperparameters of a single, fixed optimizer. (iii) While we can not discern an optimization method clearly dominating across all tested tasks, we identify a significantly reduced subset of specific algorithms and parameter choices that generally lead to competitive results in our experiments. This subset includes popular favorites and some lesser-known contenders. We have open-sourced all our experimental results, making them directly available as challenging and well-tuned baselines. This allows for more meaningful comparisons when evaluating novel optimization methods without requiring any further computational efforts. Authors: Robin M. 
Schmidt, Frank Schneider, Philipp Hennig Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at Descending Through a Crowded Valley, Benchmarking Deep Learning Optimizers by Robin M. Schmidt, Frank Schneider and Philipp Hennig of the University of Tübingen. So this paper is an empirical investigation, a benchmark into optimization algorithms for deep learning. The short story of the paper is: use Adam, it's fine. The long story is a bit more complicated, and the resulting answer is basically that we still don't know, even after this paper, if there is a single good recipe for optimizing deep learning, and if so, which one it is and where it works and where it doesn't work. A lot of things are still unclear, and I think the biggest lesson from this paper is that probably the best thing you can do is pick Adam or SGD with momentum, tune it a little bit, and whatever comes out of that is probably doing okay. So let's dive into the abstract here, but first, as always, if you like content like this, don't hesitate to share it out and also tell me what you think in the comments. With this paper we're going to see that there is big room for interpretation here. So you're going to see experimental results, and experimental results can always be interpreted in the light of different hypotheses about what's going on, and very often you have to pay careful attention that you obey something like Occam's razor. Sometimes people try to read a lot into their experimental results when a much simpler explanation would actually be sufficient. Not that much with this paper, but you're going to see a lot of results that can be interpreted in a lot of ways, so yeah, tell me what you think in the comments, happy to have a discussion about this and hear your thoughts. So they say choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it's not an easy one. The growing literature now lists hundreds of optimization methods; in the absence of clear theoretical guidance and conclusive empirical evidence, the decision is often made based on anecdotes. So I'm just going to show you: they actually have a list in the appendix where they are tracking these optimization algorithms, and you already see this is massive, right? So you have things in here like, you know, Nesterov and Polyak, which are very, very senior in the field, but as you can see, a lot of algorithms popping up in 2016, 2018, 2019, 2020, things like PolyAdam and PowerSGD, and all of them have their respective paper. SGD, look at that, going strong for 70 years. So you can see that this is almost an impossible list of things to consider when you choose your optimization algorithm, and it seems like it's just getting worse. They have this graph over here where they count how many times each of the major optimization algorithms has been cited. 2020 is shorter because the year is not over yet. I was kind of surprised as well, like, wait a minute, it can't be that our field is shrinking, this will never happen, surely, but it's just because I think the year isn't over yet, or wasn't at the point where this paper was written. But you can see the popular optimization algorithms are mentioned more and more, and also the non-popular optimization algorithms seem to multiply over the years, as we've seen from the list. So choosing one is hard. What this paper does is it doesn't compare all of them; they choose a list of 14 different optimization algorithms. Oh, they also track these learning rate schedules, which is also ridiculous.
Things like, oh no, we don't do a constant-factor decay, we do multi-step decay, and all of this makes all the difference. Remember that each of these papers, okay, sometimes it's just been suggested in a paper, but especially for the optimization methods, most of these papers are about the optimization methods. They are saying this is a new optimization method, it's good for either all of deep learning or a particular subset, particular algorithms or settings, and it's better than everything that came before, either it's faster or uses less memory or something like this. So all of these are papers that suggest some kind of new algorithm and show that it's better. In their paper you'll always find that their algorithm is better, and having read, and tried to re-implement, and so on, a bunch of these papers, I can tell you that in their own experiments, of course, each of them is better, but that's not a recipe for taking the optimizer and applying it to other problems. It always looks good in the papers, and that's why independent benchmarks like this are valuable. You see the decay rates for the learning rate, or the learning rate schedule, it's not always decaying. So here are the things that they actually consider. These are what they consider the popular algorithms. So you have things like Adadelta, Adagrad, Adam. You have things like Lookahead, momentum, which is SGD plus momentum. You have RMSProp, just plain SGD, and so on. You can see each of those comes with its set of hyperparameters. So for example, in pretty much all the methods you have a learning rate, which here they call alpha, and in momentum you additionally have the momentum term, which is here called, what's that, rho. Of course, in other methods like Lookahead, you have a slew of hyperparameters that you can all tune. All these hyperparameters come with their default setting, and the authors here additionally define a tuning distribution over which they search. So I'm going to criticize this work here quite a bit. Remember, most of what I say in the criticism is actually acknowledged by the paper itself in their limitations, which is much to their credit. So just because I criticize it: it's very easy to criticize empirical studies, investigations, especially benchmarks, especially comparisons. Most of it is addressed by the paper, which is very, very good. It's very nice for a paper to be honest about its shortcomings, just keep that in mind. So the first criticism I have: what they're going to do is, for each of those things, they're going to compare three settings. So the first setting, wow, that's a big pen, the first setting is one shot. They just say, we are going to take the optimizer, let's say Adam, and we're just going to plug in the default parameters for it, and we just let it run and see how well that does. And the second is with tuning a little. So they call this, I think, the small budget, and then the third one is tuning with the large budget. And the difference is simply that you try more things in the large budget, and you take the best one according to your validation metric, and then you evaluate it on the test metric. We'll get to that in a second. But my point here is that there's two things. So first of all, they do a lot of experiments in this setting one, and they make a lot of claims about it.
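To make the three settings concrete, here is a minimal Python sketch of the protocol as I understand it; this is not the benchmark's actual code, and the names are hypothetical: train_and_validate stands in for training with a given optimizer configuration and returning a validation loss, and sample_config for drawing one configuration from the tuning distributions.

def select_config(default_config, sample_config, train_and_validate, budget):
    # budget == 0 is the one-shot setting: just use the author-given defaults.
    if budget == 0:
        return default_config
    # Otherwise do random search: draw `budget` configurations and keep the
    # one with the lowest validation loss.
    candidates = [sample_config() for _ in range(budget)]
    return min(candidates, key=train_and_validate)

# One-shot, small budget, and large budget then differ only in the number of
# random-search trials (these sizes are illustrative, not the paper's):
# one_shot = select_config(defaults, sampler, train_and_validate, budget=0)
# small    = select_config(defaults, sampler, train_and_validate, budget=25)
# large    = select_config(defaults, sampler, train_and_validate, budget=100)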
And this setting one is entirely dependent on the default parameters given either by the authors or by, let's say, popular frameworks, which often take them from the authors. Which is okay, like, most people are going to use it with the default parameters. But I would argue investigating the default parameters in this kind of setting, where you compare optimizers, is kind of useless. What I would expect from a benchmark like this is to determine its own default parameters, like to determine, okay, what parameters are the best. Maybe, as you're going to see, they do a benchmark over different deep learning problems: you take half of them, you determine what single set of parameters works best on that half, and then you evaluate, saying those are the default parameters, on the other half, or something like this. Comparing just out-of-the-box default parameters might just mean that the authors haven't really spent time worrying about the default parameters and simply released a bunch of code, and that by simply changing the default parameters you can improve it; you're going to see that. The second one is here, over the tuning ranges. So for each of these, the authors define tuning ranges, so ranges that these tuning algorithms are going to search over; they are going to do random search. And here, for example, this is a log-uniform distribution, the LU, so it's going to search from 10 to the negative four to one, which of course is 10 to the zero in log space. So it means it kind of samples the exponent on a uniform scale and then plugs that in, which is, you know, good, that's how we do it in research. However, compare, for example: you have something like Adam, where the default learning rate is 10 to the negative three, and you have something like momentum, where the default learning rate is 10 to the negative two, yet the range here is the same. And they make this clear, they say when the authors don't give a range to search over, we simply take over the range from what is commonly done for that parameter, or from a different method. But you can see that 10 to the negative two is exactly in the middle of this log-uniform range; however, 10 to the negative three isn't. So when you already make the case that you use the default parameters, you really, I think, have to make sure that the range you search over has the default parameter kind of in the middle of that range. Otherwise, your range is kind of not according to, you know, the default parameter. So those are kind of already slight criticisms of this paper, and you can already see, I'm not telling you this to trash the paper, I'm telling you this because this is extremely hard. Like, to benchmark optimization algorithms with different hyperparameters, with different amounts of hyperparameters, is super duper hard. Okay, like, everything influences the results here: what the default parameters are, what the ranges are, how big the ranges are, right? If you make them too big, your search is going to spend a lot of time in regions where nothing is happening. How often you search in them matters too.
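Here is a small sketch of that log-uniform tuning distribution: sample the exponent uniformly, then exponentiate. Note that the geometric middle of the range [1e-4, 1] is 1e-2, which is exactly the point being made about momentum's default sitting in the center while Adam's 1e-3 does not. The helper and the Adam configuration below are illustrative, with assumed default values, not the paper's code.

import math
import random

def log_uniform(low=1e-4, high=1.0):
    # Draw the exponent uniformly in log10 space, then exponentiate.
    return 10 ** random.uniform(math.log10(low), math.log10(high))

def sample_adam_config():
    # Common practice sketched here: search only the learning rate and keep
    # the other Adam hyperparameters at their usual defaults (assumed values).
    return {"lr": log_uniform(), "betas": (0.9, 0.999), "eps": 1e-8}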
So, what a lot of people do in Adam is they keep these constant, but they just tune the learning rate a lot. How much you tune each parameter is important, and how many parameters there are is important. All of these things matter: if you have to search over four parameters, the results are going to be much noisier than if you just have to search over two parameters, and so on. So this already, as you can see, is a hard, hard task. And this says nothing yet about the learning rate schedules that they also try. Where is it? They try four different learning rate schedules, which, again, can be tuned, though I think they don't tune them here. And they do so on 14, no, sorry, on eight different problems. So there are eight different problems; they are listed right here. So you have what they call small models over here. These are like artificial data, a quadratic, a noisy quadratic, a small MNIST VAE, small conv nets, as I understand it. And then you have what they call large problems, which is a CIFAR-100 CNN, an SVHN character RNN, and so on. You might already notice, also in the problems department that they search over, that these are very particular kinds of problems, and they acknowledge this as well. There's like no reinforcement learning, no GANs, and so on. And they are not that big, even the large ones, they are kind of small. And of course, they are doing grid search; you know how much compute they spend doing this benchmarking stuff, you can't benchmark models like GPT-3. On the other hand, we know for a fact that there are effects of scale, that there is a qualitative difference between large models and small models and ever larger models; you can't simply extrapolate from small models because they have very different properties. It's also about how big your data is in relation to your model. So my kind of criticism here is that we are searching, oh, here are the problems. Yeah, you see that there are eight problems, the bottom ones they call large, the top ones they call small. We are searching over a very small subset of deep learning problems. Namely, and this is something I pointed out already, I think, a few videos ago: let's consider all of these things small models compared to something like an ImageNet model or a big, big translation model or something like this. If I have a small model, I can do grid search, no problem, I can tune, I can try out all my optimizers. If I have a large problem, I can't. Yet these studies only tell me something about small models, and we already know it's very difficult to extrapolate from small models to large models. We know that there are effects in batch sizes: new transformer models on TPUs train with batch sizes of 4,000 or something like this. The epochs: we know that, for example, self-supervised pre-training trains with much, much higher epoch counts than classic supervised learning, and so on. So this tells you something about a very tiny subset of problems, about a tiny subset of optimizers, on these particular problems, and it is highly dependent on how exactly you set up these experiments. So we finally go to how they combine this. We've seen what optimizers they choose, and we've seen what problems they apply them to. So here, how do you select an optimizer? Now, where was the thing that I was going to?
Yeah, so when they tune: the one-shot setting is they just take the default parameters, which, as I already said, I criticize; you should determine good default parameters over all problems and have those be the default parameters. But I guess they go after what people do, and people just plug it in, and the first thing they try is the default parameters. So what they do is, when they tune, they tune over these ranges that we've seen. They say we only use a single seed for tuning. Okay. So they set the random seed of an experiment to a particular point, and then they tune, for example, the learning rate, always starting with the same random seed, and they look at the validation loss for that random seed. And then once they have the best learning rate, they repeat the best setting 10 times using different seeds. So tuning is done on a single seed, but testing is done using different seeds. Okay. They say right here that progressing this way has the feature that our tuning process can sometimes pick lucky seeds, which do not perform as well when averaging over multiple runs. So this is arguably a good reflection of reality, which is true, right? But the inherent problem here is, what's the danger? The danger is that you have a loss landscape, whatever, and you start maybe here, okay, that's your random seed where you start, and you tune the different learning rates, like going down, down more, down, that's too much, and so on. Okay. So when you start there, one algorithm might look very good, an algorithm that is suited to starting at the edge of, like, a cliff, but only there. Like, that algorithm might perform very poorly anywhere else in the landscape. So this is your tuning seed, and you tune that, and you determine a learning rate and algorithm performing fairly well. And then you take that same setting, that learning rate you determined, and you start from different places, right, from here, from here, from here, from here, and all of a sudden, this performs very, very crappy. However, a different learning rate might have done, or a different algorithm might have done, very, very well. So maybe for the red one, you determined a small learning rate is actually pretty good, because I'm right at this edge of a cliff, and the small learning rate, you know, prevents me from going there, and this small learning rate looks pretty good in the validation loss. But then you start from here, from here, from here, and the small learning rate does nothing from there, it just blows up. And so, you get what I mean, you can get very unlucky in this tuning seed. And while it's true that this is happening in the real world, this is not suitable for a benchmark, right? So keep in mind that in these benchmark results, the entirety of a test outcome for a given algorithm could just be due to the fact that the tuning seed was crap, because even though the test runs are averaged, the tuning is done on one particular seed. Okay? They say, yes, if we used all 10 random seeds for tuning as well, it would drastically increase cost, not only for this benchmark, rendering it practically infeasible, but also as an approach for the practical user. Look, I agree, I agree. But still, it's really necessary in something like this to use different random seeds, because what you want to show in the benchmark is how this algorithm is doing on average, right? Because the benchmark is supposed to inform future users.
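Sketched as code, the criticized protocol looks roughly like this; this is a hedged reconstruction, not the benchmark's actual implementation, and run is a hypothetical function that trains with a configuration and a seed and returns a (validation_loss, test_loss) pair.

def tune_then_test(configs, run, tuning_seed=0, test_seeds=range(10)):
    # Tuning: every candidate configuration sees the SAME single seed, so a
    # lucky or unlucky seed can decide which configuration wins.
    best = min(configs, key=lambda c: run(c, seed=tuning_seed)[0])
    # Testing: only the winning configuration is re-run on fresh seeds and
    # averaged; the fragility introduced by the tuning seed is never
    # averaged out.
    test_losses = [run(best, seed=s)[1] for s in test_seeds]
    return best, sum(test_losses) / len(test_losses)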
However, right now, the benchmark is like a single user that can be lucky or unlucky, right? It's not informative. And I see the point, what they're saying is that it would make this benchmark infeasible. However, it doesn't change the fact that it's necessary in the benchmark. Any experiment that you do is like a fraction. Okay, the denominator down here is cost, and it's like dollars spent or time spent or whatever, and the numerator is going to be maybe something like information, the information that you gain from an experiment. Now, not all experiments are the same, right? You can't just say, well, we use as much cost in our experiments as the people who invented ResNets. Maybe you do that, maybe it's actually true, maybe they actually use more because they do this giant grid search, like, our experiments cost more than ResNet's did, so therefore they should be respected even more than the experiments that figured out ResNets. Which is not true, because you have to pay attention to the numerator right here, which is the information that you gain from an experiment. And if you do it like this, yes, your cost is lower, but your information goes towards zero, in my opinion. Not to zero, it's not zero, but it is very small, because you have this one seed per algorithm that you bind everything to. So the entire benchmark can just get lucky or unlucky with a particular algorithm. Okay, so that is kind of my biggest criticism with the tuning right here. So let's go into the results, I think that's enough of me babbling about the setup. They have these deep learning problems, they have these 14 algorithms; the learning rate schedules come in later, but they're not really prominent in the benchmark. What they do is they compare the algorithms with the default parameters, with a small amount of tuning, and with a large amount of tuning, and this is one of the main results right here. Let's actually look at this particular thing here a bit more. So the way you read this is: these numbers represent algorithms, you can see it beside them, but you know, you can't see it down here, but they represent the same algorithm. So one here is AMSBound, which is also one here. On the left, on the y-axis, you have the one-shot performing algorithms, and on the x-axis, you have the same algorithms if they are given a small budget to tune. So let's analyze one of those, for example numbers four and five. So four is Adadelta and five is Adagrad. What we can say if we look at, for example, this number right here: we see that number five, Adagrad, is 44% better than Adadelta when it is given a small budget to tune. So when Adagrad is given a small budget to tune itself, it is 44% better than Adadelta when Adadelta is not given a budget to tune itself. All right, so we compare having a tuning budget to not having a tuning budget. And this is the absolute test set performance improvement after switching from any untuned optimizer to any tuned optimizer. So the y-axis is the untuned and the x-axis is the tuned, and you already see a lot of kind of different effects right here.
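As a hedged sketch of how such a matrix could be assembled, assuming perf is a hypothetical mapping from (optimizer, budget) to test performance; how the paper normalizes these differences into the percentages shown is an assumption on my part, so a plain difference is used here.

def improvement_matrix(optimizers, perf):
    # Entry (i, j): test-set improvement when switching from untuned
    # optimizer i to tuned optimizer j. Diagonal entries (i == j) show how
    # much an optimizer gains just from tuning itself.
    return {
        (i, j): perf[(j, "tuned")] - perf[(i, "untuned")]
        for i in optimizers
        for j in optimizers
    }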
So you see that sometimes, which is interesting, in the red right here, these are negative numbers. So sometimes an algorithm, even given a small budget to tune, is actually worse than a different algorithm with its default parameters. And this is on one of these small CIFAR-10 problems. Okay, so that's one interesting thing, but I would argue it's actually not that meaningful, for reasons I'll get to in a second. The most prominent thing you'll probably see is that there are rows that are colored very uniformly. So you have, for example, this row, which is solid green, and then you have other rows which are, you know, very light or even red, and so on. So what's going on here? What does a solid green row mean? Especially look at these high numbers, like 45, 43, 43, 44. So this is performance improvement. It means that Adadelta, when not tuned, is this much worse than any of the algorithms given a small budget. Its default parameters suck badly, okay? That's the message right here. If you see a solid green row, the default parameters of this method suck badly. Now, as I said, maybe this is actually the most valuable thing that comes out of this benchmark, honestly, because everything else is so noisy, right? In theory, I would say this is the least valuable thing, because let's just, you know, get good default parameters for all this stuff, and then we're done. But apparently, this is not done yet. So Adadelta's default parameters, at least as given in the paper, apparently suck. So does momentum, though. Did Polyak, or Nesterov, whoever invented it, give default parameters for momentum? Maybe, maybe those were different times; they certainly didn't give default parameters for deep learning. But you see, again, the default parameters suck. What is also interesting is to look at the diagonal, okay? The diagonal shows you how much the same algorithm improves if given a budget. Again, you can make an inference about the default parameters when you say, okay, Adadelta improves over itself by 40% if just given a little bit of budget to tune, while Adagrad is only improving 2.3%. There are situations in other graphs where there are actually negative values. You can see, for example, right here, there is a negative value in a different problem, in the CIFAR-100, and they show in the appendix that this is due to not enough tuning. So basically, the tuning is just a random search, and the random search is so bad that it doesn't even hit any sort of setting where the default parameters are present. So all its search space is basically bad parameters, which, again, you can say means that the algorithm is not really robust to parameter change, but you can also say that this is entirely due to the choice of search space to search over. So you can see that the algorithms five, seven, eight, and 13 are particularly bad at this. Here we see that's Adagrad, Lookahead, and 13, RMSProp. Yeah. And if you look at other problems, you see that it's different algorithms; okay, the number seven here is also kind of shady. So Lookahead seems to be kind of shady in general. But this also switches from problem to problem, which is something I already introduced; there's a lot of noise here, a lot of noise. And therefore, what is a bit harder to parse out is how the algorithms compare to each other.
So in order to determine that, what you have to do is look at relative performance. So, for example, take any column, for example this column right here: you see that no matter how high the number is, it's always a bit smaller than the rest of the row. So in every row, this entry is smaller than the rest of the row, which means that number four, Adadelta, when you tune Adadelta, compares less favorably to all the other algorithms than when you tune other algorithms. So in order to really compare optimizers to each other in this graph, you have to kind of do this relative math in your head, and that's why I'm saying the negative numbers aren't even that important, as long as they're not on the diagonal, right? If they're on the diagonal, they mean that if you tune the same algorithm, it's worse than when you just run the default parameters, which just means that your search sucked, or your random seed is somehow lucky or unlucky, what do I know? But the negative numbers off the diagonal don't mean anything by the fact that they're negative, because what you would expect is that the small budget always improves, at least in expectation, over the one shot. The question is then, how much would you expect it to improve? So even though a number like 0.3 here is a positive number, which means that the small-budget number two improves over the one-shot number 11, this could still be a bad thing, because you'd say, well, if I give you a small budget, I expect any algorithm to improve, like, 2% or 3% or 5%, something like this. That's why you have to look at the relatives with respect to the other algorithms. We can't really look at the absolute numbers right here. So even the negative numbers don't mean anything, because zero has no meaning here, except on the diagonal, because even on the diagonal, you always expect some kind of improvement from tuning, and we need to know this average expected improvement before we can make judgments about the numbers in here. What you can see is that some algorithms clearly underperform with respect to the others, at least in this particular problem. Again, this is highly problem dependent. So Adadelta, pretty bad. Then what's this right here, this is 5, 6, 7? Again, Lookahead with momentum, pretty bad. And you can find others, and this again varies from problem to problem, though numbers four and seven are pretty bad here; numbers four and seven, here also five. Yeah, so you kind of see that you can make some conclusions about these problems. But here, look at that. So here they now include the schedules, and here you start out one shot with a constant schedule; if you add some of these schedules, it goes up a little bit. This is the median, right? And this orange stuff is, what is it, the 25th to 75th percentile. Look at the amount of noise right here. So when you see these plots, I feel it's quite helpless, okay? So again, when you look at these plots: what they give you right here is the red bars, whatever Adam does when it's tuned. So when you tune Adam and then let it run over these 10 different test seeds, this is the range it gets. And the other lines are simply the mean across the other optimizers when you tune them. You can see, just from the spread of Adam, that the order in which these lines appear means almost nothing, except here, when they, like, crash horribly.
It probably just means that some optimizers just aren't made for some problems. But other than that, the order here is kind of useless. And you see, the downward-facing triangle is always untuned Adam, which in most cases performs fairly well compared to the others, and compared to the noise you have over the different tuning outcomes. So that's why I said at the beginning: use Adam, it's probably fine, tune it a little bit. If you realize it doesn't work at all, then switch to something like SGD with momentum, or the other way around, right? Use SGD with momentum, and if you realize it just screws up, maybe try Adam. And that's actually a thing they say as well. So one of their conclusions is that, instead of tuning a single optimizer, trying other optimizers helps about as much as tuning, and they repeat this point throughout the paper. Instead of trying different settings for a single optimizer, you can get the same kind of outcome by simply trying a bunch of different optimizers in their default settings and then picking the best one of those (there's a small code sketch of this at the end). The entire literature seems to point to: whatever you do, it's probably fine if you take one of these generic algorithms and use whatever reasonable procedure to select a good one. Let's assume for a minute that all of these algorithms are the same, and you simply change the algorithm instead of tuning the learning rate. Well, these algorithms come with different default learning rates, right? All these algorithms come with different default learning rates, and the learning rate goes into each algorithm in a different way. So the effective learning rate, even if I put in the same number, is going to be different for each algorithm. So maybe the effect here, when they say it's the same whether you tune the parameters or simply pick a different default-parameterized optimization algorithm, is that maybe what you're doing is the same thing; maybe all these algorithms are actually kind of the same. For a particular problem it's different, but overall they're kind of the same, and when you pick a different algorithm, you simply pick a different learning rate for the same algorithm in disguise, because the default learning rate for that algorithm goes into its formula a bit differently, and ultimately you're simply tuning as well. So the benchmark is extensive. Again, I don't want to rag on this paper, the benchmark is super extensive; they also do rerun stability and so on. But this paper shows that it is possible to do an extensive benchmark that is still largely useless. And I don't want to say that just because they didn't determine a clear winner, it's therefore useless; that's not what I'm saying. I'm saying the information content that I can get out of these experiments, especially for situations where it would help me, like where I can't do grid search, is close to zero. I think the two big things that the community can learn from this paper are: one, the default settings for some of these things are crap in the papers, and maybe in our frameworks, so maybe we'll go over those once more.
And two: at least on these small kinds of problems, it seems not that important which algorithm you pick. Pick one that you like, tune it a little bit, and you're probably good to go. If it doesn't work, pick another one. So that was it for this paper. Again, tell me what you think, what worked for you, if you have horror stories with optimization algorithms; they used to be much more prevalent. I think also our advances in architectures have made it easier for optimization algorithms. So something like ResNet, giving you really nice gradient flow, has made it much easier to optimize the network as a whole, and therefore the optimization algorithms aren't as important. And the last comment I want to make here is that a lot of these papers, as I said, deal with specific situations, like, oh, if you have low memory, or if you have this or that, or they say our algorithm is really good, but only if you add, like, a bit of Gaussian noise on the input, or only if you use this very exotic learning rate scheduler, or something like this, which this paper, of course, hasn't done. This is still a very small subset. So yeah, these are common criticisms for benchmarks. I think we'll take from it what it is. It is a cool paper, it is extensive, they are very critical of themselves, and that was it for me. Thank you very much for your time.
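And here is the promised sketch of that main takeaway, as code: two budget-matched selection strategies, trying many optimizers at their defaults versus tuning a single one. Again, validate is a hypothetical train-and-validate call returning a validation loss; nothing here is the paper's actual code.

def pick_by_switching(default_configs, validate):
    # Spend one run per optimizer, each at its default hyperparameters.
    return min(default_configs, key=validate)

def pick_by_tuning(sample_config, validate, k):
    # Spend the same number of runs on random search over one optimizer.
    return min((sample_config() for _ in range(k)), key=validate)

# The paper's finding, roughly: with k equal to the number of optimizers
# tried, both strategies tend to give comparably good results on these
# benchmark problems.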
[ { "start": 0, "end": 5.18, "text": " Hi there, today we'll look at Descending Through a Crowded Valley, Benchmarking Deep Learning" }, { "start": 5.18, "end": 11.76, "text": " Optimizers by Robin M. Schmidt, Frank Schneider and Philipp Henning of the University of Tübingen." }, { "start": 11.76, "end": 17.64, "text": " So this paper is an empirical investigation, a benchmark into optimization algorithms for" }, { "start": 17.64, "end": 19.2, "text": " deep learning." }, { "start": 19.2, "end": 25.28, "text": " The short story of the paper is use Adam, it's fine." }, { "start": 25.28, "end": 31.840000000000003, "text": " The long story is a bit more complicated and the resulting answer is basically we still" }, { "start": 31.840000000000003, "end": 37.480000000000004, "text": " don't know even after this paper if there is a single good recipe for optimizing deep" }, { "start": 37.480000000000004, "end": 43.16, "text": " learning and if so which one it is and where it works and where it doesn't work." }, { "start": 43.16, "end": 49.24, "text": " A lot of things are still unclear and I think the biggest lesson from this paper is that" }, { "start": 49.24, "end": 55.800000000000004, "text": " probably the best thing you can do is pick Adam or SGD with momentum, tune it a little" }, { "start": 55.800000000000004, "end": 62.800000000000004, "text": " bit and whatever comes out of that is probably doing okay." }, { "start": 62.800000000000004, "end": 70.52000000000001, "text": " So let's dive into the abstract here but first as always if you like content like this don't" }, { "start": 70.52000000000001, "end": 75.96000000000001, "text": " hesitate to share it out and also tell me what you think in the comments." }, { "start": 75.96, "end": 82.72, "text": " With this paper we're going to see that there is a big room for interpretation here." }, { "start": 82.72, "end": 88.02, "text": " So you're going to see experimental results and the experimental results they can always" }, { "start": 88.02, "end": 96.75999999999999, "text": " be interpreted in the light of different hypotheses that you have what's going on and very often" }, { "start": 96.75999999999999, "end": 102, "text": " you have to pay careful attention that something like Occam's razor, that you obey something" }, { "start": 102, "end": 103.52, "text": " like Occam's razor." }, { "start": 103.52, "end": 110.28, "text": " Sometimes people try to read a lot into their experimental results when a much simpler explanation" }, { "start": 110.28, "end": 113.28, "text": " would actually be sufficient." }, { "start": 113.28, "end": 117.66, "text": " Not that much with this paper but you're going to see a lot of results they can be interpreted" }, { "start": 117.66, "end": 123.16, "text": " in a lot of ways so yeah tell me what you think in the comments happy to have a discussion" }, { "start": 123.16, "end": 125.67999999999999, "text": " about this and hear your thoughts." }, { "start": 125.67999999999999, "end": 130.76, "text": " So they say choosing the optimizer is considered to be among the most crucial design decisions" }, { "start": 130.76, "end": 134.5, "text": " in deep learning and it's not an easy one." 
}, { "start": 134.5, "end": 139.48, "text": " The growing literature now lists hundreds of optimization methods in the absence of" }, { "start": 139.48, "end": 144.72, "text": " clear theoretical guidelines, guidance and conclusive empirical evidence the decision" }, { "start": 144.72, "end": 146.67999999999998, "text": " is often made based on anecdotes." }, { "start": 146.67999999999998, "end": 154.01999999999998, "text": " So I'm just going to show you they have actually a list in the appendix they are tracking this" }, { "start": 154.01999999999998, "end": 159.44, "text": " optimization algorithm you already see this is massive right so you have things in here" }, { "start": 159.44, "end": 167.52, "text": " like you know Nesterov and Polyak which are very very senior in the field but as you can" }, { "start": 167.52, "end": 176.2, "text": " see a lot of algorithms popping up in 2016, 2018, 2019, 2020 and it's Polyatom Power" }, { "start": 176.2, "end": 188, "text": " SGD and all of them have their respective paper SGD look at that going strong 70 years." }, { "start": 188, "end": 195.92, "text": " So you can see that this is almost an impossible list of things to consider when you choose" }, { "start": 195.92, "end": 204.12, "text": " when you choose your optimization algorithm and it seems like it's just getting worse." }, { "start": 204.12, "end": 211.92000000000002, "text": " They have this graph over here where they count how many times each of the major optimization" }, { "start": 211.92000000000002, "end": 213.4, "text": " algorithms has been cited." }, { "start": 213.4, "end": 218.44, "text": " 2020 is shorter because the year is not over yet I was kind of surprised as well like wait" }, { "start": 218.44, "end": 225.56, "text": " a minute it can't be that our field is shrinking this will never happen surely but it's just" }, { "start": 225.56, "end": 232.28, "text": " because I think the year isn't over yet or wasn't at the point where this paper was written." }, { "start": 232.28, "end": 239.88, "text": " But you can see the popular optimization algorithms are mentioned more and more and more and also" }, { "start": 239.88, "end": 245.9, "text": " the non-popular optimization algorithms they seem to multiply over the years as we've seen" }, { "start": 245.9, "end": 247.07999999999998, "text": " from the list." }, { "start": 247.07999999999998, "end": 249.96, "text": " So choosing one is hard." }, { "start": 249.96, "end": 256.36, "text": " What this paper does is it doesn't compare all of them so they choose a list of 14 different" }, { "start": 256.36, "end": 257.64, "text": " optimization algorithms." }, { "start": 257.64, "end": 262.88, "text": " Oh they also attract these learning rate schedules which is also ridiculous." }, { "start": 262.88, "end": 270.48, "text": " Things like oh no but we don't do a constant factor decay we do multi-step decay and all" }, { "start": 270.48, "end": 272.54, "text": " of this makes all the difference." }, { "start": 272.54, "end": 278.76, "text": " Remember that each of these papers that okay sometimes it's just been suggested in a paper" }, { "start": 278.76, "end": 284.76, "text": " but especially for the optimization methods most of these papers are about the optimization" }, { "start": 284.76, "end": 285.76, "text": " methods." 
}, { "start": 285.76, "end": 291.78, "text": " They are saying this is a new optimization method it's good for either all of deep learning" }, { "start": 291.78, "end": 298.76, "text": " or a particular subset, particular algorithms or settings and it's better than everything" }, { "start": 298.76, "end": 304.47999999999996, "text": " that came before either it's faster or uses less memory or something like this." }, { "start": 304.47999999999996, "end": 315.44, "text": " So all of these are papers that suggest some kind of new algorithm and show that it's better." }, { "start": 315.44, "end": 322.56, "text": " In their paper you'll always find that their algorithm is better and having read and tried" }, { "start": 322.56, "end": 328.32, "text": " to re-implement and so on a bunch of these papers I can tell you that not a lot of the" }, { "start": 328.32, "end": 334.04, "text": " papers are let's say all of them in their experiments is of course better but that's" }, { "start": 334.04, "end": 339.7, "text": " not a recipe for taking the optimizer and applying it to other problems." }, { "start": 339.7, "end": 344.98, "text": " It always looks good in the papers and that's why independent benchmarks like this are valuable." }, { "start": 344.98, "end": 350.68, "text": " You see the decay rates for the learning rate or learning rate schedule it's not always" }, { "start": 350.68, "end": 351.68, "text": " decaying." }, { "start": 351.68, "end": 355.64000000000004, "text": " So here is the things that they actually consider." }, { "start": 355.64000000000004, "end": 359.12, "text": " These are what they consider the popular algorithms." }, { "start": 359.12, "end": 363.72, "text": " So you have things like add a delta, add a grad, add them." }, { "start": 363.72, "end": 369.78000000000003, "text": " You have things like look ahead, momentum which is SGD plus momentum." }, { "start": 369.78000000000003, "end": 374.24, "text": " You have RMS prop just plain SGD and so on." }, { "start": 374.24, "end": 378.2, "text": " You can see each of those comes with its set of hyperparameters." }, { "start": 378.2, "end": 382.96000000000004, "text": " So for example in pretty much all the methods you have a learning rate which here they call" }, { "start": 382.96000000000004, "end": 389.04, "text": " alpha and in the momentum you additionally have the momentum term which is here called" }, { "start": 389.04, "end": 392.36, "text": " what's that row." }, { "start": 392.36, "end": 397.82, "text": " Of course in other methods like in look ahead, you have a slew of hyperparameters that you" }, { "start": 397.82, "end": 398.82, "text": " can all tune." }, { "start": 398.82, "end": 405.92, "text": " All these hyperparameters come with their default setting and the authors here additionally" }, { "start": 405.92, "end": 411.6, "text": " define a tuning distribution over which they search." }, { "start": 411.6, "end": 415.88, "text": " So I'm going to criticize this work here quite a bit." }, { "start": 415.88, "end": 420.96, "text": " Remember most of what I say in the criticism is actually acknowledged by the paper itself" }, { "start": 420.96, "end": 424.76, "text": " in their limitations which is much to their credit." }, { "start": 424.76, "end": 431.15999999999997, "text": " So just because I criticize it, it's very easy to criticize empirical studies, investigations," }, { "start": 431.15999999999997, "end": 435.96, "text": " especially benchmarks, especially comparisons." 
}, { "start": 435.96, "end": 439.7, "text": " Most of it is addressed by the paper which is very very good." }, { "start": 439.7, "end": 446.64, "text": " It's very nice for a paper to be honest about its shortcomings and just keep that in mind." }, { "start": 446.64, "end": 452.28, "text": " So the first criticism I have is what they're going to do is for each of those things they're" }, { "start": 452.28, "end": 456.44, "text": " going to compare three settings." }, { "start": 456.44, "end": 463.28, "text": " So in the first setting, wow that's a big pen, in the first setting it's one shot." }, { "start": 463.28, "end": 470.03999999999996, "text": " So they just say we are going to take the optimizer, let's say atom, and we're just" }, { "start": 470.03999999999996, "end": 475.2, "text": " going to plug in the default parameters for it and we just let it run and see how well" }, { "start": 475.2, "end": 477.28, "text": " that does." }, { "start": 477.28, "end": 483, "text": " And the second is with tuning a little." }, { "start": 483, "end": 488.03999999999996, "text": " So they call this I think the small budget, tuning small budget and then the third one" }, { "start": 488.03999999999996, "end": 490.29999999999995, "text": " is the tuning with the large budget." }, { "start": 490.29999999999995, "end": 499.44, "text": " And the difference is simply that you try more things in the large budget and you take" }, { "start": 499.44, "end": 503.67999999999995, "text": " the best one according to your validation metric and then you let it evaluate it on" }, { "start": 503.67999999999995, "end": 504.67999999999995, "text": " the test metric." }, { "start": 504.67999999999995, "end": 505.67999999999995, "text": " We'll get to that in a second." }, { "start": 505.68, "end": 508.88, "text": " But my point here is that there's two things." }, { "start": 508.88, "end": 514, "text": " So first of all, they do a lot of experiments with in this setting one and they make a lot" }, { "start": 514, "end": 515.7, "text": " of claims about it." }, { "start": 515.7, "end": 521.2, "text": " And this setting one is entirely dependent on the default parameters given either by" }, { "start": 521.2, "end": 529.44, "text": " the authors or by let's say popular frameworks, which often take them from the authors, which" }, { "start": 529.44, "end": 535.14, "text": " it's okay, like most people are going to use it and put some like use the default parameters." }, { "start": 535.14, "end": 539.12, "text": " But I would argue investigating the default parameters in this kind of setting where you" }, { "start": 539.12, "end": 544.08, "text": " compare optimizers is kind of useless." }, { "start": 544.08, "end": 549.52, "text": " What I would expect from a benchmark like this is to determine its own default parameters," }, { "start": 549.52, "end": 556.4, "text": " like to determine, okay, what are what parameters are the best, maybe you take you have your" }, { "start": 556.4, "end": 560.88, "text": " what you're going to see is they do a benchmark over different deep learning problems, you" }, { "start": 560.88, "end": 566.56, "text": " take half of them, and you determine what single set of parameters works best on half" }, { "start": 566.56, "end": 567.56, "text": " of them." 
}, { "start": 567.56, "end": 571.26, "text": " And then you evaluate, say, that's the default parameters for the other half or something" }, { "start": 571.26, "end": 576.4399999999999, "text": " like this comparing just out of the box default parameters, it might just mean that the default" }, { "start": 576.4399999999999, "end": 581.88, "text": " parameters the authors haven't really spent time worrying about it and simply released" }, { "start": 581.88, "end": 583.52, "text": " a bunch of code." }, { "start": 583.52, "end": 587.92, "text": " And by simple simply changing the default parameters, you can improve it, you're going" }, { "start": 587.92, "end": 589.32, "text": " to see that." }, { "start": 589.32, "end": 592.5600000000001, "text": " The second one is here over the tuning ranges." }, { "start": 592.5600000000001, "end": 599.82, "text": " So for each of these, the authors define tuning ranges, so ranges where these tuning algorithms" }, { "start": 599.82, "end": 604.2600000000001, "text": " are going to search over, they are going to do random search." }, { "start": 604.2600000000001, "end": 611.32, "text": " And here, for example, this is a log uniform distribution, the L U, so it's going to search" }, { "start": 611.32, "end": 616.96, "text": " from 10 to the negative four to one, which of course is 10 to the zero in log space." }, { "start": 616.96, "end": 623.76, "text": " So it means it samples, it kind of samples the exponent on a uniform scale, and then" }, { "start": 623.76, "end": 628.2, "text": " it plugs that in, which is, you know, good." }, { "start": 628.2, "end": 629.88, "text": " That's how we do it in research." }, { "start": 629.88, "end": 637.44, "text": " However, look at compare, for example, you have something like Adam, where the default" }, { "start": 637.44, "end": 640.32, "text": " parameters tend to the negative three." }, { "start": 640.32, "end": 644.52, "text": " And you have something like momentum where the default learning rate is 10 to the negative" }, { "start": 644.52, "end": 648.76, "text": " two, yet the range here is the same." }, { "start": 648.76, "end": 652.8, "text": " And that's they make this clear, they say when the authors don't give a range to search" }, { "start": 652.8, "end": 659, "text": " over, we simply take over the range from a different from what is commonly done for that" }, { "start": 659, "end": 663.28, "text": " parameter or from a different method, which you can see that 10 to the negative two is" }, { "start": 663.28, "end": 671.64, "text": " exactly in the middle of this log uniform range, however, 10 to the negative three isn't." }, { "start": 671.64, "end": 678.48, "text": " So when you already make the case that you use the default parameters, you really, I" }, { "start": 678.48, "end": 684.28, "text": " think, have to make sure that the range you search over the default parameter is kind" }, { "start": 684.28, "end": 686.48, "text": " of in the middle of that range." }, { "start": 686.48, "end": 694.4, "text": " Otherwise, your range is kind of kind of not according to, you know, the default parameter." }, { "start": 694.4, "end": 700.48, "text": " So that's, that's kind of already slight criticisms of this paper." }, { "start": 700.48, "end": 705.2, "text": " And you can already see I'm not telling you that to trash the paper, I'm telling you this" }, { "start": 705.2, "end": 706.2, "text": " too." 
}, { "start": 706.2, "end": 712.1800000000001, "text": " This is extremely hard, like to benchmark optimization algorithms with hyper parameters" }, { "start": 712.1800000000001, "end": 718.6, "text": " with different hyper parameters with different amounts of hyper parameters is super duper," }, { "start": 718.6, "end": 720.76, "text": " duper duper hard." }, { "start": 720.76, "end": 726.6800000000001, "text": " Okay, like everything influences the results here, what the default parameters are, what" }, { "start": 726.6800000000001, "end": 729.6, "text": " the ranges here are, how big the ranges are, right?" }, { "start": 729.6, "end": 735.36, "text": " If you make them too big, your search is going to spend a lot of time in in regions where" }, { "start": 735.36, "end": 737.16, "text": " nothing is happening." }, { "start": 737.16, "end": 739.9200000000001, "text": " How how often you search in them." }, { "start": 739.9200000000001, "end": 745.64, "text": " So let's say what you what a lot of people do in Adam is they keep these constant, but" }, { "start": 745.64, "end": 752.28, "text": " they just tune the learning rate a lot to how how much you tune each parameter is important," }, { "start": 752.28, "end": 757.88, "text": " how many parameters are there are is important, all of these things like if you have to search" }, { "start": 757.88, "end": 764, "text": " over four parameters, it's going to be much noisier results than if you just have to search" }, { "start": 764, "end": 766.8, "text": " over two parameters and so on." }, { "start": 766.8, "end": 774.32, "text": " So this already, as you can see, is a is a hard, hard, hard task." }, { "start": 774.32, "end": 780.16, "text": " And this says nothing yet about the learning rate schedules that they also try." }, { "start": 780.16, "end": 781.16, "text": " Where is it?" }, { "start": 781.16, "end": 788.16, "text": " They they try four different learning rate schedules, which, again, can be tuned, though" }, { "start": 788.16, "end": 790.8, "text": " I think they don't tune them here." }, { "start": 790.8, "end": 793.6, "text": " And they do so on 14." }, { "start": 793.6, "end": 798.8, "text": " No, sorry on eight different on eight different problems." }, { "start": 798.8, "end": 803.52, "text": " So there are eight different problems." }, { "start": 803.52, "end": 807.4399999999999, "text": " Where are they listed right here, there are eight different problems." }, { "start": 807.44, "end": 811.5200000000001, "text": " So you have what they call small models over here." }, { "start": 811.5200000000001, "end": 819.4000000000001, "text": " These are like artificial data quadratic noisy quadratic, a small MNIST VAE, small conv nets," }, { "start": 819.4000000000001, "end": 820.7600000000001, "text": " as I understand it." }, { "start": 820.7600000000001, "end": 829.5600000000001, "text": " And then you have what they call large problems, which is a CIFAR 100 CNN SVHN character RNN" }, { "start": 829.5600000000001, "end": 830.5600000000001, "text": " and so on." }, { "start": 830.5600000000001, "end": 835.12, "text": " You might already notice that also in this department in the problems department that" }, { "start": 835.12, "end": 842.4, "text": " they search over, these are very particular kinds of problem." }, { "start": 842.4, "end": 843.96, "text": " And that they acknowledge this as well." }, { "start": 843.96, "end": 847.64, "text": " There's like no reinforcement learning, no GANs and so on." 
}, { "start": 847.64, "end": 851.5, "text": " And they are not that big, even the even the large ones." }, { "start": 851.5, "end": 853.88, "text": " They are kind of small." }, { "start": 853.88, "end": 858.32, "text": " And of course, they are doing grid search, you know, how much compute they spend doing" }, { "start": 858.32, "end": 863.96, "text": " this benchmarking stuff, you can't benchmark models like GPT three." }, { "start": 863.96, "end": 870.44, "text": " On the other hand, we know we know for a fact that there are effects of scale that quality" }, { "start": 870.44, "end": 877.64, "text": " make there is a qualitative difference between large models and small models and ever larger" }, { "start": 877.64, "end": 884.1600000000001, "text": " models, you can't simply extrapolate from small models because they have very different" }, { "start": 884.1600000000001, "end": 885.1600000000001, "text": " properties." }, { "start": 885.1600000000001, "end": 888.76, "text": " It's also a relation to how big your data is in relation to your model." }, { "start": 888.76, "end": 898.28, "text": " So my kind of criticism here is that we are searching Oh, here are the problems." }, { "start": 898.28, "end": 901.12, "text": " Yeah, you see that there are eight problems." }, { "start": 901.12, "end": 906.4399999999999, "text": " The bottom ones they call large, the top ones they call small." }, { "start": 906.4399999999999, "end": 912.96, "text": " We are searching over a very small set subset of deep learning problems, namely, and this" }, { "start": 912.96, "end": 920.36, "text": " is something I pointed out already, I think, a few videos ago, if like, let's consider" }, { "start": 920.36, "end": 928.2800000000001, "text": " all of these things small models compared to something like ImageNet model or a big," }, { "start": 928.2800000000001, "end": 932.2, "text": " big translation model or something like this." }, { "start": 932.2, "end": 934.2800000000001, "text": " Let's consider these small." }, { "start": 934.2800000000001, "end": 940.32, "text": " If I have a small model, I can do grid search, no problem, I can tune, I can try out all" }, { "start": 940.32, "end": 942.0400000000001, "text": " my optimizers." }, { "start": 942.04, "end": 946.76, "text": " If I have a sorry, if I have a large problem, I can't." }, { "start": 946.76, "end": 951.04, "text": " Yet these studies, they only tell me something about small models." }, { "start": 951.04, "end": 956.38, "text": " And we already know it's very difficult to extrapolate from small models to large models." }, { "start": 956.38, "end": 961.1999999999999, "text": " We know that there are effects in batch sizes, new transformer models on TPUs train with" }, { "start": 961.1999999999999, "end": 966, "text": " batch sizes of 4000 or something like this." }, { "start": 966, "end": 971.16, "text": " The epochs we know that, for example, self supervised pre training train with much, much," }, { "start": 971.16, "end": 976.0799999999999, "text": " much higher epoch counts than classic supervised learning and so on." }, { "start": 976.0799999999999, "end": 983.0799999999999, "text": " This is so this tells you something about a very tiny subset of problems about a tiny" }, { "start": 983.0799999999999, "end": 988.38, "text": " subset of optimizers on these particular problems." }, { "start": 988.38, "end": 994.3199999999999, "text": " And it is highly dependent on how you exactly set up these experiments." 
}, { "start": 994.3199999999999, "end": 999.56, "text": " So we finally go to how they combine this, we've seen what optimizers they choose, and" }, { "start": 999.56, "end": 1002.92, "text": " we've seen what problems they apply them to." }, { "start": 1002.92, "end": 1009.16, "text": " So they here, how do you select an optimizer?" }, { "start": 1009.16, "end": 1013.5999999999999, "text": " Now, where was the thing that I was going to?" }, { "start": 1013.5999999999999, "end": 1019.28, "text": " Yeah, so when they when they tune after so the one shot setting is they just take the" }, { "start": 1019.28, "end": 1024.76, "text": " default parameters, which I already said I criticize, you should determine good default" }, { "start": 1024.76, "end": 1031.72, "text": " parameters overall problem and that be the default parameters and then yeah, but I guess" }, { "start": 1031.72, "end": 1035.64, "text": " they they go after what people do, people just plug it in." }, { "start": 1035.64, "end": 1038.3799999999999, "text": " And first thing they try is the default parameters." }, { "start": 1038.3799999999999, "end": 1048.28, "text": " So what they do is they when they tune, they tune over these ranges that we've seen, they" }, { "start": 1048.28, "end": 1052.24, "text": " say we only use a single seed for tuning." }, { "start": 1052.24, "end": 1053.24, "text": " Okay." }, { "start": 1053.24, "end": 1059.24, "text": " So they set the random seed of an experiment to a particular point." }, { "start": 1059.24, "end": 1065.44, "text": " And then they tune, for example, the learning rate, always starting with the same random" }, { "start": 1065.44, "end": 1066.8, "text": " seed." }, { "start": 1066.8, "end": 1070.08, "text": " And they look at the validation loss for that random seed." }, { "start": 1070.08, "end": 1075.86, "text": " And then once they have the best learning rate, they repeat the best setting 10 times" }, { "start": 1075.86, "end": 1077.94, "text": " using different seeds." }, { "start": 1077.94, "end": 1086.56, "text": " Now they train they tune tuning is done in a single seed, but testing is done." }, { "start": 1086.56, "end": 1090.48, "text": " Testing is done using different seeds." }, { "start": 1090.48, "end": 1091.92, "text": " Okay." }, { "start": 1091.92, "end": 1098.3600000000001, "text": " They say right here that progressing this way has the feature that our tuning process" }, { "start": 1098.3600000000001, "end": 1104.56, "text": " can sometimes pick lucky seeds, which do not perform as well when averaging over multiple" }, { "start": 1104.56, "end": 1105.56, "text": " runs." }, { "start": 1105.56, "end": 1110, "text": " So this is arguably a good reflection of reality, which is true, right." }, { "start": 1110, "end": 1115.24, "text": " But the inherent problem here is that so what's the danger?" }, { "start": 1115.24, "end": 1121.28, "text": " The danger is that you have a lost landscape, whatever, and you start maybe here, okay," }, { "start": 1121.28, "end": 1125.12, "text": " that's your random seed where you start, and you tune the different learning rates like" }, { "start": 1125.12, "end": 1129.9199999999998, "text": " going down, down more down, that's too much, and so on." }, { "start": 1129.9199999999998, "end": 1130.9199999999998, "text": " Okay." 
}, { "start": 1130.92, "end": 1138.96, "text": " So when you start there, one algorithm might look very good and algorithm that is suited" }, { "start": 1138.96, "end": 1144.3600000000001, "text": " to starting at the edge of like a cliff, but only there, like that algorithm might perform" }, { "start": 1144.3600000000001, "end": 1147.4, "text": " very poorly anywhere else in the landscape." }, { "start": 1147.4, "end": 1152.5600000000002, "text": " So this is your tuning seed, and you tune that and the learning rate and algorithm you" }, { "start": 1152.5600000000002, "end": 1156.24, "text": " determine performing fairly well." }, { "start": 1156.24, "end": 1161.6, "text": " And then you take that same setting that learning rate you determined, and you started from" }, { "start": 1161.6, "end": 1166.4, "text": " different places right from here, from here, from here, from here, and all of a sudden," }, { "start": 1166.4, "end": 1168.72, "text": " this performs very, very crappy." }, { "start": 1168.72, "end": 1174.96, "text": " However, a different learning rate might have done or a different algorithm might have done" }, { "start": 1174.96, "end": 1177.34, "text": " very, very well." }, { "start": 1177.34, "end": 1181.72, "text": " So maybe for the red one, you determined a small learning rate is actually pretty good" }, { "start": 1181.72, "end": 1186.38, "text": " because I'm right at this edge of a cliff, and the small learning rate, you know, prevents" }, { "start": 1186.38, "end": 1191.84, "text": " me from going there and this small learning rate looks pretty good in the validation loss," }, { "start": 1191.84, "end": 1197.3600000000001, "text": " but then you start from here, from here, from here, and the small learning rate, it does" }, { "start": 1197.3600000000001, "end": 1200.48, "text": " nothing from here." }, { "start": 1200.48, "end": 1208.48, "text": " It just blows and so you get what I mean, you can get very unlucky in this tuning seed." }, { "start": 1208.48, "end": 1213.76, "text": " And while it's true that this is correct, this is happening in the real world, this" }, { "start": 1213.76, "end": 1216.64, "text": " is not suitable for a benchmark, right?" }, { "start": 1216.64, "end": 1224.72, "text": " So keep in mind that these benchmark results, it could just be the entirety of a test outcome" }, { "start": 1224.72, "end": 1230.38, "text": " for a given algorithm could just be due to the fact that the tuning seed was crap." }, { "start": 1230.38, "end": 1236.24, "text": " Because even though the test runs are averaged, the tuning is done on one particular seed." }, { "start": 1236.24, "end": 1243.72, "text": " Okay, I would argue they say yes, if we used all 10 random seeds for tuning as well would" }, { "start": 1243.72, "end": 1249.06, "text": " drastically increase cost not only for this benchmark rendering practically infeasible," }, { "start": 1249.06, "end": 1252.08, "text": " but also as an approach for the practical user." }, { "start": 1252.08, "end": 1255, "text": " Look, I agree, I agree." }, { "start": 1255, "end": 1261.2, "text": " But this is not like it's really necessary in something like this to use different random" }, { "start": 1261.2, "end": 1268.1200000000001, "text": " seeds, because what you want to show in the benchmark is how this algorithm is doing on" }, { "start": 1268.1200000000001, "end": 1270.38, "text": " average, right?" 
}, { "start": 1270.38, "end": 1274.3600000000001, "text": " Because the benchmark is supposed to inform future users." }, { "start": 1274.3600000000001, "end": 1280.6000000000001, "text": " However, right now, the benchmark is like a single user that can be lucky or unlucky," }, { "start": 1280.6000000000001, "end": 1281.6000000000001, "text": " right?" }, { "start": 1281.6000000000001, "end": 1282.8600000000001, "text": " It's not informative." }, { "start": 1282.8600000000001, "end": 1287.72, "text": " And I see the point what they're saying is that it would make this benchmark invisible." }, { "start": 1287.72, "end": 1292, "text": " However, it doesn't change the fact that it's necessary in the benchmark, any experiment" }, { "start": 1292, "end": 1294.48, "text": " that you do is like a fraction." }, { "start": 1294.48, "end": 1299.08, "text": " Okay, the fraction down here is cost." }, { "start": 1299.08, "end": 1303.08, "text": " And it's like dollars spent or time spent or whatever." }, { "start": 1303.08, "end": 1311.8, "text": " And the fraction and the and indeed the numerator is going to be maybe something like information." }, { "start": 1311.8, "end": 1317.1200000000001, "text": " Information the information that you gain from an experiment." }, { "start": 1317.12, "end": 1321.6399999999999, "text": " Now what they're are it not all experiments are the same, right?" }, { "start": 1321.6399999999999, "end": 1331, "text": " You can't you can't just say, well, we use as much we use as much cost in our experiments" }, { "start": 1331, "end": 1334.2399999999998, "text": " as the people who invented resnets, right?" }, { "start": 1334.2399999999998, "end": 1335.28, "text": " Maybe maybe you do that." }, { "start": 1335.28, "end": 1336.28, "text": " Maybe it's actually true." }, { "start": 1336.28, "end": 1339.36, "text": " Maybe they actually use more because they do this giant grid search, like our experiments" }, { "start": 1339.36, "end": 1342.1999999999998, "text": " cost more than who resonates." }, { "start": 1342.2, "end": 1348.6000000000001, "text": " So therefore, they should be respected even more than the experiments who figured out" }, { "start": 1348.6000000000001, "end": 1357.04, "text": " resnets, which is not true, because you have to pay attention to the numerator right here," }, { "start": 1357.04, "end": 1359.3600000000001, "text": " which is the information that you gain from an experiment." }, { "start": 1359.3600000000001, "end": 1365.2, "text": " And if you do it like this, yes, your cost is lower, but your information, like goes" }, { "start": 1365.2, "end": 1372, "text": " to towards zero, in my opinion, not to it's not zero, but it is very small." }, { "start": 1372, "end": 1379.72, "text": " Small, because you have this one seed per algorithm that you bind everything to." }, { "start": 1379.72, "end": 1384.92, "text": " So the entire benchmark can just get lucky or unlucky with a particular algorithm." }, { "start": 1384.92, "end": 1396.14, "text": " Okay, so that is that is kind of my biggest criticism with the tuning right here." }, { "start": 1396.14, "end": 1397.4, "text": " So let's go into the results." }, { "start": 1397.4, "end": 1401.56, "text": " I think enough me babbling about the setup right here." 
}, { "start": 1401.56, "end": 1406.32, "text": " They have these deep learning problems, they have these 14 algorithms, the learning rate" }, { "start": 1406.32, "end": 1412.1799999999998, "text": " schedules, they come in later, but they're not really prominent in the benchmark." }, { "start": 1412.1799999999998, "end": 1417.2, "text": " What they do is they compare the algorithms with the default parameters with a small amount" }, { "start": 1417.2, "end": 1421.24, "text": " of tuning, and with a large amount of tuning." }, { "start": 1421.24, "end": 1424.28, "text": " And this is one of the main results right here." }, { "start": 1424.28, "end": 1430, "text": " Let's actually look at this particular thing here a bit more." }, { "start": 1430, "end": 1436.44, "text": " So what you see as the read the way you read this is these numbers represent algorithms," }, { "start": 1436.44, "end": 1438.36, "text": " you can see it beside them." }, { "start": 1438.36, "end": 1442.1, "text": " But you know, you can't see it down here, but they represent the same algorithm." }, { "start": 1442.1, "end": 1448.82, "text": " So one here is ams bound is also one here." }, { "start": 1448.82, "end": 1455.76, "text": " On the left on the y axis, you have the one shot performing algorithms." }, { "start": 1455.76, "end": 1462, "text": " And on the x axis, you have the same algorithms if they are given a small budget to tune." }, { "start": 1462, "end": 1470.52, "text": " So if we analyze one of those, for example, number, let's call let's go numbers." }, { "start": 1470.52, "end": 1472.44, "text": " Number four and five." }, { "start": 1472.44, "end": 1476.2, "text": " So number four and five, number four and five." }, { "start": 1476.2, "end": 1480.16, "text": " So four is added delta and five is added grad." }, { "start": 1480.16, "end": 1487.48, "text": " What we can say if we look at for example, let's look at this number right here." }, { "start": 1487.48, "end": 1498.38, "text": " We see that what's this five number five, so add a grad, add a grad is 40% better than" }, { "start": 1498.38, "end": 1505.16, "text": " added delta when it is allowed when it is given a small budget to tune." }, { "start": 1505.16, "end": 1515.92, "text": " So when add a grad is given a small budget to tune itself, it is 40% 44% better than" }, { "start": 1515.92, "end": 1520.3600000000001, "text": " added delta when it is not given a budget to tune itself." }, { "start": 1520.3600000000001, "end": 1526.68, "text": " All right, I hope that that kind of so we compare having tuning budget to not having" }, { "start": 1526.68, "end": 1529.44, "text": " tuning budget." }, { "start": 1529.44, "end": 1536.3600000000001, "text": " And this is the absolute test set performance improvement after switching from any untuned" }, { "start": 1536.3600000000001, "end": 1541.56, "text": " or sorry, you don't see that from any untuned optimizer to any tuned optimizer." }, { "start": 1541.56, "end": 1547.0800000000002, "text": " So the y axis are the untuned and the x axis are the tuned and you already see a lot of" }, { "start": 1547.0800000000002, "end": 1549.8, "text": " kind of different effects right here." }, { "start": 1549.8, "end": 1558.92, "text": " So you see that sometimes which is interesting in in the red right here, these are negative" }, { "start": 1558.92, "end": 1559.92, "text": " numbers." 
}, { "start": 1559.92, "end": 1565.24, "text": " So sometimes an algorithm, even given a small budget to tune is actually worse than a different" }, { "start": 1565.24, "end": 1569.64, "text": " algorithm when doing the default parameters." }, { "start": 1569.64, "end": 1575.96, "text": " And this is on one of these small problems on one of these small C for 10 problems." }, { "start": 1575.96, "end": 1581.48, "text": " Okay, you so that's one interesting thing, but I would argue it's it's actually not that" }, { "start": 1581.48, "end": 1588.28, "text": " meaningful for reasons for which I'll get to in a second." }, { "start": 1588.28, "end": 1596.92, "text": " The most prominent thing probably you'll see is that there are rows that are kind of colored" }, { "start": 1596.92, "end": 1598.28, "text": " very uniformly." }, { "start": 1598.28, "end": 1603.34, "text": " So you have, for example, this row, which is solid green, and then you have other rows" }, { "start": 1603.34, "end": 1608.8, "text": " which are, you know, very either light or even red, and so on." }, { "start": 1608.8, "end": 1610.92, "text": " So what's going on here?" }, { "start": 1610.92, "end": 1618.04, "text": " What does a solid green row mean, especially look at these high numbers like 45434344." }, { "start": 1618.04, "end": 1621.1599999999999, "text": " So there, this is performance improvement." }, { "start": 1621.1599999999999, "end": 1630.46, "text": " It means that add delta is when not tuned, is this much worse than any of the algorithms" }, { "start": 1630.46, "end": 1632.8799999999999, "text": " with a given a small budget." }, { "start": 1632.8799999999999, "end": 1637.08, "text": " So it's default parameters suck, suck badly." }, { "start": 1637.08, "end": 1639.62, "text": " Okay, that's, that's the message right here." }, { "start": 1639.62, "end": 1646.6, "text": " If you see like a solid green row, the default parameters of this method suck badly." }, { "start": 1646.6, "end": 1654.9199999999998, "text": " Now I'm, as I said, what the value of this is, it actually maybe this is the most valuable" }, { "start": 1654.9199999999998, "end": 1659.1599999999999, "text": " thing that comes out of this comes out of this benchmark, honestly, because everything" }, { "start": 1659.1599999999999, "end": 1661.04, "text": " else is so noisy, right?" }, { "start": 1661.04, "end": 1666.24, "text": " In theory, I would say this is the least valuable thing, because let's just, you know, get good" }, { "start": 1666.24, "end": 1670.76, "text": " default parameters for all this stuff, and then we're done." }, { "start": 1670.76, "end": 1674.12, "text": " But apparently, this is not done yet." }, { "start": 1674.12, "end": 1680.3999999999999, "text": " So the deltas default parameters at least given in the paper, apparently, they suck." }, { "start": 1680.3999999999999, "end": 1689.4399999999998, "text": " So does momentum though, does polyac give or Nesterov, whoever invented it, give momentum" }, { "start": 1689.4399999999998, "end": 1694.6399999999999, "text": " default parameters, maybe, maybe those were different times, certainly didn't give default" }, { "start": 1694.6399999999999, "end": 1696.4599999999998, "text": " parameters for deep learning." }, { "start": 1696.4599999999998, "end": 1701, "text": " But you see, again, they like the default parameters suck." 
}, { "start": 1701, "end": 1705.92, "text": " What is also interesting is to look at the diagonal, okay, so the diagonal shows you" }, { "start": 1705.92, "end": 1710.44, "text": " how much the same algorithm improves if given a budget." }, { "start": 1710.44, "end": 1715.36, "text": " Again, you can make an inference about the default parameters when you say, okay, add" }, { "start": 1715.36, "end": 1722.96, "text": " a delta improves over itself by 40%, if just given a little bit of budget to tune, while" }, { "start": 1722.96, "end": 1726.6, "text": " add a grad is only improving 2.3%." }, { "start": 1726.6, "end": 1735.08, "text": " There are situations in other graphs where there's actually negative values." }, { "start": 1735.08, "end": 1739.6, "text": " You can see, for example, right here, there is a negative value in a different problem" }, { "start": 1739.6, "end": 1741.6, "text": " in the CIFAR 100." }, { "start": 1741.6, "end": 1746.12, "text": " And they can show in the appendix that this is due to not enough tuning." }, { "start": 1746.12, "end": 1749.6599999999999, "text": " So basically, the tuning is just a random search." }, { "start": 1749.66, "end": 1757.72, "text": " And the random search is, again, this is the random search is so bad that it doesn't even" }, { "start": 1757.72, "end": 1767.94, "text": " hit the the the any any sort of setting where the default parameters are present." }, { "start": 1767.94, "end": 1774.8400000000001, "text": " So all its search space is basically bad parameters, which, again, is you can say that the algorithm" }, { "start": 1774.8400000000001, "end": 1776.96, "text": " is not really robust to parameter change." }, { "start": 1776.96, "end": 1782.32, "text": " But you can also say that this is entirely due to the choice of search space to search" }, { "start": 1782.32, "end": 1783.32, "text": " over." }, { "start": 1783.32, "end": 1794.32, "text": " So you can see that the algorithms five, seven, eight, and 13 are particularly bad at this." }, { "start": 1794.32, "end": 1801.96, "text": " Here we see that's add a grad, la 13." }, { "start": 1801.96, "end": 1803.1200000000001, "text": " RMS prop." }, { "start": 1803.1200000000001, "end": 1804.1200000000001, "text": " Yeah." }, { "start": 1804.12, "end": 1808.76, "text": " And if you look at other problems, you see that different algorithms, okay, the number" }, { "start": 1808.76, "end": 1813.04, "text": " seven here is also kinda, kinda shady." }, { "start": 1813.04, "end": 1817.6, "text": " So look ahead seems to be kinda shady in general." }, { "start": 1817.6, "end": 1825.84, "text": " But this also switches from problem to problem, which is something I already introduced, there's" }, { "start": 1825.84, "end": 1829.36, "text": " a lot of noise here, a lot of noise." }, { "start": 1829.36, "end": 1835.84, "text": " And therefore, yeah, what is a bit harder to parse out is how the algorithms compared" }, { "start": 1835.84, "end": 1836.8799999999999, "text": " to each other." }, { "start": 1836.8799999999999, "end": 1842.24, "text": " So in order to determine that what you have to do is you just have to look at relative" }, { "start": 1842.24, "end": 1843.6799999999998, "text": " performance." 
}, { "start": 1843.6799999999998, "end": 1852.24, "text": " So for example, take a any column, any column, for example, this column right here, you see" }, { "start": 1852.24, "end": 1857.8799999999999, "text": " that no matter how high the number is, it's always a bit smaller than the rest of the" }, { "start": 1857.88, "end": 1859.88, "text": " row." }, { "start": 1859.88, "end": 1866.0400000000002, "text": " So in every row, this is smaller than the rest of the row, which means that number four," }, { "start": 1866.0400000000002, "end": 1875, "text": " what's number four, add a delta, when you tune at a delta, it compares less favorably" }, { "start": 1875, "end": 1879.68, "text": " to all the other algorithms than when you tune other algorithms." }, { "start": 1879.68, "end": 1884.0400000000002, "text": " So in order to really compare optimizers to each other in this graph, you have to kind" }, { "start": 1884.0400000000002, "end": 1886.0800000000002, "text": " of do this relative math in your head." }, { "start": 1886.08, "end": 1890.96, "text": " And that's why I'm saying the red the negative numbers aren't even that important as long" }, { "start": 1890.96, "end": 1892.72, "text": " as they're not on the diagonal, right?" }, { "start": 1892.72, "end": 1897.6399999999999, "text": " If they're on the diagonal, they mean if you tune the same algorithm, it's worse than when" }, { "start": 1897.6399999999999, "end": 1904.56, "text": " you just run the default parameters, which is just means that your search sucked." }, { "start": 1904.56, "end": 1908.8999999999999, "text": " Or your random seed is is is somehow lucky or unlucky." }, { "start": 1908.8999999999999, "end": 1911, "text": " What do I know?" }, { "start": 1911, "end": 1918.5, "text": " But the negative numbers off diagonal don't mean anything that the fact that they're negative," }, { "start": 1918.5, "end": 1925.22, "text": " because what you would expect is that the small budget always increases at least in" }, { "start": 1925.22, "end": 1928.2, "text": " expectation over the one shot." }, { "start": 1928.2, "end": 1933.38, "text": " The question is then how much would you expect it to increase?" }, { "start": 1933.38, "end": 1940.46, "text": " So even though a number like 0.3, here is a positive number, which means that the small" }, { "start": 1940.46, "end": 1946.44, "text": " budget number two improves over the one shot number 11." }, { "start": 1946.44, "end": 1951.32, "text": " This could still be a bad thing, because you'd say, well, if I give you a small budget, I" }, { "start": 1951.32, "end": 1959.32, "text": " expect any algorithm to improve like 2% or 3% or 5%, something like this." }, { "start": 1959.32, "end": 1968.32, "text": " That's why you have to look at the at the relatives with respect to the other algorithms." }, { "start": 1968.32, "end": 1970.6399999999999, "text": " We can't really look at the absolute numbers right here." }, { "start": 1970.6399999999999, "end": 1977.08, "text": " So even the negative numbers don't mean anything, because zero has no meaning here, except on" }, { "start": 1977.08, "end": 1984.32, "text": " the diagonal, because you would always even like even on the diagonal, you always expect" }, { "start": 1984.32, "end": 1987.36, "text": " some kind of improvement from tuning." 
}, { "start": 1987.36, "end": 1994.2, "text": " And we need to know kind of this average expected improvement before we can make judgments about" }, { "start": 1994.2, "end": 1996, "text": " the numbers in here." }, { "start": 1996, "end": 2001.28, "text": " What you can see is that some algorithms clearly underperform with respect to the others, at" }, { "start": 2001.28, "end": 2002.96, "text": " least in this particular problem." }, { "start": 2002.96, "end": 2004.72, "text": " Again, this is highly problem dependent." }, { "start": 2004.72, "end": 2008.08, "text": " So I'll add a delta, pretty bad." }, { "start": 2008.08, "end": 2010.04, "text": " Then what's this right here?" }, { "start": 2010.04, "end": 2012.08, "text": " This is 5, 6, 7." }, { "start": 2012.08, "end": 2017, "text": " Again, look ahead with momentum, look ahead momentum, pretty bad." }, { "start": 2017, "end": 2019.24, "text": " And you can find others." }, { "start": 2019.24, "end": 2025.44, "text": " And this again varies from problem to problem, though numbers four and seven are pretty bad" }, { "start": 2025.44, "end": 2027.28, "text": " here." }, { "start": 2027.28, "end": 2033.8, "text": " Numbers four and seven, here also five." }, { "start": 2033.8, "end": 2039.68, "text": " Yeah, so you kind of see that you can make some conclusions about these problems." }, { "start": 2039.68, "end": 2041.48, "text": " But here, look at that." }, { "start": 2041.48, "end": 2048.8, "text": " So here they now include the they now include the schedules." }, { "start": 2048.8, "end": 2052.68, "text": " And here you start out one shot with a constant schedule." }, { "start": 2052.68, "end": 2056.98, "text": " If you add some of these schedules, it goes up a little bit." }, { "start": 2056.98, "end": 2058.8999999999996, "text": " This is the median, right?" }, { "start": 2058.8999999999996, "end": 2068.62, "text": " And this orange stuff is the what is it the 25th to 75th percentile, look at the amount" }, { "start": 2068.62, "end": 2069.7999999999997, "text": " of noise right here." }, { "start": 2069.7999999999997, "end": 2076.96, "text": " So when you see these plots, it's just, I feel it's quite, quite helpless." }, { "start": 2076.96, "end": 2077.96, "text": " Okay?" }, { "start": 2077.96, "end": 2083.88, "text": " So again, when you look at these plots, so what they give you right here is the red bars" }, { "start": 2083.88, "end": 2086.92, "text": " or whatever Adam does when it's tuned." }, { "start": 2086.92, "end": 2093.6, "text": " So when you tune Adam, and then let it run over these 10 different test seeds, this is" }, { "start": 2093.6, "end": 2097.6, "text": " the range it gets." }, { "start": 2097.6, "end": 2107.44, "text": " And this the other lines are simply the mean across the other optimizers when you tune" }, { "start": 2107.44, "end": 2113.44, "text": " them, you can see just from the spread of Adam, that the order in which these lines" }, { "start": 2113.44, "end": 2119.54, "text": " appear mean almost nothing except here when they like crash horribly." }, { "start": 2119.54, "end": 2124, "text": " It just probably means that these optimizers, some optimizers just aren't made for some" }, { "start": 2124, "end": 2125.96, "text": " problems." }, { "start": 2125.96, "end": 2130.42, "text": " But other than that, the order here is kind of useless." 
}, { "start": 2130.42, "end": 2137.4, "text": " And you see the downward facing triangle is always untuned Adam, which in most cases perform" }, { "start": 2137.4, "end": 2144.48, "text": " fairly, fairly well compared to the others and compared to the noise you have over the" }, { "start": 2144.48, "end": 2149.04, "text": " different over the different tuning outcomes." }, { "start": 2149.04, "end": 2154.84, "text": " So that's why I said at the beginning, use Adam, it's probably fine, tune it a little" }, { "start": 2154.84, "end": 2155.84, "text": " bit." }, { "start": 2155.84, "end": 2162.12, "text": " If you realize it doesn't work at all, then switch to something like SGD with momentum," }, { "start": 2162.12, "end": 2163.6, "text": " or the other way around, right?" }, { "start": 2163.6, "end": 2164.94, "text": " Use SGD with momentum." }, { "start": 2164.94, "end": 2168.32, "text": " If you realize it just screws up, maybe try Adam." }, { "start": 2168.32, "end": 2170.84, "text": " And that's actually a thing they say as well." }, { "start": 2170.84, "end": 2181.76, "text": " So one of their conclusions is one of their conclusions is that instead of tuning a single" }, { "start": 2181.76, "end": 2189.56, "text": " optimizer tuning helps about as much as trying other optimizers." }, { "start": 2189.56, "end": 2191.92, "text": " And they repeat this point throughout the paper." }, { "start": 2191.92, "end": 2197.6800000000003, "text": " It's instead of trying a different settings for a single optimizer, it you can get the" }, { "start": 2197.6800000000003, "end": 2203.4, "text": " same kind of outcome by simply trying a bunch of different optimizers in their default settings," }, { "start": 2203.4, "end": 2210.14, "text": " and then picking the best one of those which it's, you know, the entire literature seems" }, { "start": 2210.14, "end": 2216.66, "text": " to point to whatever you do, it's probably fine if you take one of these generic algorithms" }, { "start": 2216.66, "end": 2223.2, "text": " and kind of do whatever it whatever to select a good thing." }, { "start": 2223.2, "end": 2227.08, "text": " Let's assume for a minute that all of these algorithms are the same." }, { "start": 2227.08, "end": 2231.2, "text": " And you simply change the algorithm instead of tuning the learning rate." }, { "start": 2231.2, "end": 2236.22, "text": " Well, these algorithms come with different default learning rates, right?" }, { "start": 2236.22, "end": 2239.42, "text": " All these algorithms come with different default learning rates." }, { "start": 2239.42, "end": 2243.68, "text": " And the learning rate goes into the algorithm in a different way." }, { "start": 2243.68, "end": 2247.7799999999997, "text": " So the effective learning rate, even if I put in the same number, the effective learning" }, { "start": 2247.7799999999997, "end": 2250.62, "text": " rate is going to be different for each algorithm." }, { "start": 2250.62, "end": 2258.3999999999996, "text": " So maybe what their their effect here, when they say it's the same when you tune the parameters," }, { "start": 2258.3999999999996, "end": 2265.1, "text": " or when you simply pick a different default parameterized optimization algorithm, maybe" }, { "start": 2265.1, "end": 2269.44, "text": " what you're doing is the same thing, maybe all these algorithms are actually kind of" }, { "start": 2269.44, "end": 2271.16, "text": " the same." 
}, { "start": 2271.16, "end": 2275.3399999999997, "text": " And overall, right for a particular problem, it's different, but overall, they're kind" }, { "start": 2275.3399999999997, "end": 2276.58, "text": " of the same." }, { "start": 2276.58, "end": 2281, "text": " And when you pick a different algorithm, you simply pick a different learning rate for" }, { "start": 2281, "end": 2286.56, "text": " the same algorithm in disguise, because the learning rate, the default learning rate for" }, { "start": 2286.56, "end": 2290.74, "text": " that algorithm goes into its formula a bit different." }, { "start": 2290.74, "end": 2294.92, "text": " And ultimately, you're simply tuning as well." }, { "start": 2294.92, "end": 2298.56, "text": " So the the benchmark is extensive." }, { "start": 2298.56, "end": 2300.6, "text": " Again, I don't want to rag on this paper." }, { "start": 2300.6, "end": 2307.22, "text": " The benchmark is super extensive, they also do rerun stability, and so on." }, { "start": 2307.22, "end": 2316.98, "text": " But it this paper shows that it is possible to do an extensive, extensive search, extensive" }, { "start": 2316.98, "end": 2320.36, "text": " benchmark that is still largely useless." }, { "start": 2320.36, "end": 2328.06, "text": " And I don't I don't want to say that, because they, because they, what I don't want to say" }, { "start": 2328.06, "end": 2332.96, "text": " is they didn't determine a clear winner, therefore, it's useless." }, { "start": 2332.96, "end": 2334.04, "text": " That's not what I'm saying." }, { "start": 2334.04, "end": 2340.2799999999997, "text": " I'm saying the information content that I can get out of these experiments, especially" }, { "start": 2340.2799999999997, "end": 2349.4, "text": " for situations where it would help me, like for where I can't do grid search is close," }, { "start": 2349.4, "end": 2350.62, "text": " close to zero." }, { "start": 2350.62, "end": 2358.04, "text": " I think the two big things that the community can learn from these papers is one, the default" }, { "start": 2358.04, "end": 2364, "text": " settings for some of these things are crap in the papers, and maybe maybe in our frameworks." }, { "start": 2364, "end": 2367.16, "text": " So maybe we'll go over that once more." }, { "start": 2367.16, "end": 2374.48, "text": " And two, is like, at least on these small kind of problems, it seems not that important" }, { "start": 2374.48, "end": 2381.88, "text": " which algorithm you pick, pick one that you like, tune it a little bit, and you're probably" }, { "start": 2381.88, "end": 2382.88, "text": " good to go." }, { "start": 2382.88, "end": 2385.56, "text": " If it doesn't work, pick another one." }, { "start": 2385.56, "end": 2388.08, "text": " So that was it for this paper." }, { "start": 2388.08, "end": 2392.12, "text": " Again, tell me what you think." }, { "start": 2392.12, "end": 2393.24, "text": " What worked for you." }, { "start": 2393.24, "end": 2397.44, "text": " If you have horror stories with optimization algorithm, they used to be much more, much" }, { "start": 2397.44, "end": 2398.44, "text": " more prevalent." }, { "start": 2398.44, "end": 2405.4, "text": " I think also our advances in architectures have made it easier for optimization algorithms." 
}, { "start": 2405.4, "end": 2411.18, "text": " So like something like ResNet, giving you really nice gradient flow has made it much" }, { "start": 2411.18, "end": 2416.3799999999997, "text": " more easy to optimize the network as a whole, and therefore the optimization algorithms" }, { "start": 2416.3799999999997, "end": 2418.62, "text": " aren't as important." }, { "start": 2418.62, "end": 2424.24, "text": " And the other the last comment I want to make here is that a lot of a lot of these papers," }, { "start": 2424.24, "end": 2428.2799999999997, "text": " as I said, they deal with specific situations like, oh, if you have low memory or if you" }, { "start": 2428.2799999999997, "end": 2435.66, "text": " have that or they say, our algorithm is really good, but only only if you add like a bit" }, { "start": 2435.66, "end": 2441.08, "text": " of Gaussian noise on the input or only if you use this very exotic learning rate scheduler" }, { "start": 2441.08, "end": 2444.04, "text": " or something like this, which this paper, of course, hasn't done." }, { "start": 2444.04, "end": 2447.04, "text": " This is still a very small subset." }, { "start": 2447.04, "end": 2451.72, "text": " So yeah, these are these are common criticisms for benchmarks." }, { "start": 2451.72, "end": 2453.68, "text": " I think we'll take from it what it is." }, { "start": 2453.68, "end": 2454.68, "text": " It is a cool paper." }, { "start": 2454.68, "end": 2455.68, "text": " It is extensive." }, { "start": 2455.68, "end": 2457.46, "text": " They are very critical of themselves." }, { "start": 2457.46, "end": 2458.46, "text": " And that was it for me." }, { "start": 2458.46, "end": 2471.7400000000002, "text": " Thank you very much for your time." } ]
H3Bhlan0mE0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Online Education - How I Make My Videos
[ "Science & Technology" ]
[ "deep learning", "machine learning", "online video", "university", "online", "create", "lecture" ]
Just a short overview of tools I use to make my videos. OneNote - https://www.onenote.com iSpring Free Cam - https://www.ispringsolutions.com/ispring-cam Shotcut - https://shotcut.org Slack - https://slack.com RocketChat - https://rocket.chat Zoom - https://zoom.us Jitsi - https://jitsi.org GDocs - https://www.google.com/docs/about Piazza - https://piazza.com CMT - https://cmt3.research.microsoft.com/About Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! So a lot of people have been asking me how I make these videos. And this is of course relevant now that everyone's working from home and all the schools are converted into online schools. All of a sudden a lot of people have to make this online education happen. And I think this style of video lends itself to online education. So I'll quickly go over the process of how to do this and maybe also how to run a university class online. Alright, so the process is pretty simple of how I make my videos. This might not work for everyone, but it works for me. I use the Microsoft OneNote in order to scribble on papers basically. So the thing is, in OneNote you have this insert thing here and you can print out a PDF onto your notebook here. So the way this looks then is you'll get the PDF in your notebook and you can scribble on it with this using this draw tab here. You can choose a pen and scribble on it. You can highlight things and so on. And I do this while I record the screen, so that's pretty much all there is to it. You can then print out again this notebook and you can distribute those annotated PDFs if you want. Now I'm pretty sure this is here inserted as some sort of an image. So I don't know about the copy paste ability of the resulting product. But here you see this is a paper I actually made a video about. And that's basically all there is to it. It's OneNote. It's a free program from Microsoft. In order to do the annotating I use a last generation Microsoft Surface tablet that I got for cheap. At some point it comes with a nice pen and touch screen so you can basically zoom around and zip around while you do these things. In order to record the screen I use this iSpring FreeCam software. It might not be the best but it does work for me well and they have a cool Pro edition if you need more features. But it works really well for recording your screen. You can record parts of your screen or the full screen. You can record with sound. So I use a microphone and then I just record the sound from that with the same tool. And at the end you get a video file that you can upload to YouTube. Easy as that. If I need to do some editing, which is rarely because I am lazy, I use either iMovie from Apple which comes with an Apple operating system. So I have a MacBook that I run iMovie on. iMovie is really easy to edit movies on. I don't know if there's anything on Windows where it's that easy that comes pre-packaged. But if I need to do more complicated things I use Shotcut which is an open source editor. I believe that's available for all the platforms. You can do fairly complicated things with Shotcut if you ever need to do that. But if I just need to stitch like two or three things together I use iMovie. And that's pretty much it for making and recording videos, I believe. One note is that in order to do a class online not all people will just be able to record a video and then upload. Some of the things you need to do are actually live. A lot of people right now use Zoom for live teleconferencing. But you can also do this sort of presenter mode where you present and people can do questions. Of course you can do this via YouTube streaming as well. But then it's of course it's kind of public on YouTube or link accessible with Zoom. I believe you have more control. But of course Zoom is a proprietary solution and with the free account you can only get so far. So they limit your meetings in length if you have more than I believe three or four people. 
An alternative is Jitsi which is open source video conferencing. And the cool thing here is you can actually run your own server such that you can truly have control over everything. In order to communicate with lots of people, of course people use Slack. But again Slack is a proprietary service and an alternative to that would be Rocket Chat. Again where you can run your own server and it is fairly similar to Slack. In order to collaborate or share just general notes, of course Google's suite of docs and sheets and so on is excellent. And for classes especially, Piazza is a good place. You can sign up as a class. You can have TAs sign up as TAs. You can have your students sign up as students and then the students can ask questions and then other students or the TAs can answer those questions. Basically a bit of a forum. But you can also announce things there for your classes. It's pretty cool and it's really geared towards online classes and it's free. So I know a lot of universities are using that right now. So if you're looking for some sort of announcement or discussion board for your class, Piazza is definitely a good place to be. And lastly we sometimes have classes where students have to submit projects. And we actually use CMT for this because it's really neat where you can set deadlines and everything. Students can upload and then you can assign reviewers to those projects, which in our case are us, the TAs. And you know you can have meta reviews and so on. So CMT is actually very good. Maybe a bit of an overkill if you just run a single class. But it has lots and lots of features. And of course the big conferences also use CMT. So it's definitely stress tested. Alright, so that was it for my videos. Or at least how I make them. I just print out the PDF, sit down for half an hour and rant about it. And that's pretty much it. And then you throw it on YouTube or distribute the file however you want. And with that I hope I answered a little bit of these questions. And I wish you all a healthy rest of the Corona season. Bye.
[ { "start": 0, "end": 5, "text": " Hi there! So a lot of people have been asking me how I make these videos." }, { "start": 5, "end": 13, "text": " And this is of course relevant now that everyone's work from home and all the schools are converted into online schools." }, { "start": 13, "end": 19, "text": " All of a sudden a lot of people have to make these online education happen." }, { "start": 19, "end": 24, "text": " And I think this style of video lends itself to online education." }, { "start": 24, "end": 31, "text": " So I'll quickly go over the process of how to do this and maybe also how to run a university class online." }, { "start": 31, "end": 35, "text": " Alright, so the process is pretty simple of how I make my videos." }, { "start": 35, "end": 38, "text": " This might not work for everyone, but it works for me." }, { "start": 38, "end": 44, "text": " I use the Microsoft OneNote in order to scribble on papers basically." }, { "start": 44, "end": 56, "text": " So the thing is, in OneNote you have this insert thing here and you can print out a PDF onto your notebook here." }, { "start": 56, "end": 67, "text": " So the way this looks then is you'll get the PDF in your notebook and you can scribble on it with this using this draw tab here." }, { "start": 67, "end": 72, "text": " You can choose a pen and scribble on it. You can highlight things and so on." }, { "start": 72, "end": 77, "text": " And I do this while I record the screen, so that's pretty much all there is to it." }, { "start": 77, "end": 86, "text": " You can then print out again this notebook and you can distribute those annotated PDF if you want." }, { "start": 86, "end": 93, "text": " Now I'm pretty sure this is here inserted as some sort of an image." }, { "start": 93, "end": 97, "text": " So I don't know about the copy paste ability of the resulting product." }, { "start": 97, "end": 102, "text": " But here you see this is a paper I actually made a video about." }, { "start": 102, "end": 108, "text": " And that's basically all there is to it. It's OneNote. It's a free program from Microsoft." }, { "start": 108, "end": 119, "text": " In order to do the annotating I use a last generation Microsoft Surface tablet that I got for cheap." }, { "start": 119, "end": 128, "text": " At some point it comes with a nice pen and touch screen so you can basically zoom around and zip around while you do these things." }, { "start": 128, "end": 136, "text": " In order to record the screen I use this iSpring FreeCam software." }, { "start": 136, "end": 144, "text": " It might not be the best but it does work for me well and they have a cool Pro edition if you need more features." }, { "start": 144, "end": 151, "text": " But it works really well for recording your screen. You can record parts of your screen or the full screen." }, { "start": 151, "end": 159, "text": " You can record with sound. So I use a microphone and then I just record the sound from that with the same tool." }, { "start": 159, "end": 164, "text": " And at the end you get a video file that you can upload to YouTube. Easy as that." }, { "start": 164, "end": 178, "text": " If I need to do some editing, which is rarely because I am lazy, I use either iMovie from Apple which comes with an Apple operating system." }, { "start": 178, "end": 189, "text": " So I have a MacBook that I run iMovie on. iMovie is really easy to edit movies on." 
}, { "start": 189, "end": 194, "text": " I don't know if there's anything on Windows where it's that easy that comes pre-packaged." }, { "start": 194, "end": 201, "text": " But if I need to do more complicated things I use Shotcut which is an open source editor." }, { "start": 201, "end": 205, "text": " I believe that's available for all the platforms." }, { "start": 205, "end": 211, "text": " You can do fairly complicated things with Shotcut if you ever need to do that." }, { "start": 211, "end": 217, "text": " But if I just need to stitch like two or three things together I use iMovie." }, { "start": 217, "end": 226, "text": " And that's pretty much it for making and recording videos, I believe." }, { "start": 226, "end": 240, "text": " One note is that in order to do a class online not all people will just be able to record a video and then upload." }, { "start": 240, "end": 244, "text": " Some of the things you need to do are actually live." }, { "start": 244, "end": 249, "text": " A lot of people right now use Zoom for live teleconferencing." }, { "start": 249, "end": 255, "text": " But you can also do this sort of presenter mode where you present and people can do questions." }, { "start": 255, "end": 259, "text": " Of course you can do this via YouTube streaming as well." }, { "start": 259, "end": 266, "text": " But then it's of course it's kind of public on YouTube or link accessible with Zoom." }, { "start": 266, "end": 269, "text": " I believe you have more control." }, { "start": 269, "end": 276, "text": " But of course Zoom is a proprietary solution and with the free account you can only get so far." }, { "start": 276, "end": 281, "text": " So they limit your meetings in length if you have more than I believe three or four people." }, { "start": 281, "end": 287, "text": " An alternative is Jitsi which is open source video conferencing." }, { "start": 287, "end": 296, "text": " And the cool thing here is you can actually run your own server such that you can truly have control over everything." }, { "start": 296, "end": 303, "text": " In order to communicate with lots of people, of course people use Slack." }, { "start": 303, "end": 310, "text": " But again Slack is a proprietary service and an alternative to that would be Rocket Chat." }, { "start": 310, "end": 318, "text": " Again where you can run your own server and it is fairly similar to Slack." }, { "start": 318, "end": 331, "text": " In order to collaborate or share just general notes, of course Google's suite of docs and sheets and so on is excellent." }, { "start": 331, "end": 337, "text": " And for classes especially, Piazza is a good place." }, { "start": 337, "end": 342, "text": " You can sign up as a class. You can have TAs sign up as TAs." }, { "start": 342, "end": 351, "text": " You can have your students sign up as students and then the students can ask questions and then other students or the TAs can answer those questions." }, { "start": 351, "end": 356, "text": " Basically a bit of a forum. But you can also announce things there for your classes." }, { "start": 356, "end": 361, "text": " It's pretty cool and it's really geared towards online classes and it's free." }, { "start": 361, "end": 366, "text": " So I know a lot of universities are using that right now." }, { "start": 366, "end": 375, "text": " So if you're looking for some sort of announcement or discussion board for your class, Piazza is definitely a good place to be." 
}, { "start": 375, "end": 382, "text": " And lastly we sometimes have classes where students have to submit projects." }, { "start": 382, "end": 389, "text": " And we actually use CMT for this because it's really neat where you can set deadlines and everything." }, { "start": 389, "end": 396, "text": " Students can upload and then you can assign reviewers to those projects, which in our case are us, the TAs." }, { "start": 396, "end": 400, "text": " And you know you can have meta reviews and so on." }, { "start": 400, "end": 408, "text": " So CMT is actually very good. Maybe a bit of an overkill if you just run a single class." }, { "start": 408, "end": 412, "text": " But it has lots and lots of features." }, { "start": 412, "end": 416, "text": " And of course the big conferences also use CMT." }, { "start": 416, "end": 419, "text": " So it's definitely stress tested." }, { "start": 419, "end": 423, "text": " Alright, so that was it for my videos." }, { "start": 423, "end": 430, "text": " Or at least how I make them. I just print out the PDF, sit down for half an hour and rant about it." }, { "start": 430, "end": 435, "text": " And that's pretty much it. And then you throw it on YouTube or distribute the file however you want." }, { "start": 435, "end": 442, "text": " And with that I hope I answered a little bit of these questions." }, { "start": 442, "end": 451, "text": " And I all wish you a healthy rest of the Corona season. Bye." } ]
Ok44otx90D4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Feature Visualization & The OpenAI microscope
[ "Science & Technology" ]
[ "deep learning", "machine learning", "imagenet", "visualization", "features", "intermediate", "hidden layers", "activations", "patterns", "openai", "google", "interactive", "explanation" ]
A closer look at the OpenAI microscope, a database of visualizations of the inner workings of ImageNet classifiers, along with an explanation of how to obtain these visualizations. https://distill.pub/2017/feature-visualization/ https://microscope.openai.com/models https://github.com/tensorflow/lucid Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're going to take a look at the OpenAI microscope and this article on Distill called Feature Visualization. So the Feature Visualization article is by Chris Olah, Alexander Mordvintsev and Ludwig Schubert of the Google Brain team, while the OpenAI microscope is by OpenAI. So keep that in mind. These tools are tools for visualizing what neural networks learn, and specifically here we're dealing with image classifiers, and even more specifically ImageNet classifiers. So you know, ImageNet is this big data set with a thousand classes of images, the images are somewhat like 200 by 200 pixels, and you're just supposed to put them into one of these 1000 classes, and networks have become really good at these kinds of things. So our question is: what do these networks learn? And there are a number of very cool works that have started to investigate what these networks learn. So this started with work like Deep Dream or even before that, but this is a very good summary and also an overview of this. So in this article they first showcase these patterns here, where you can see that as you go through the network, in this layer, this is layer conv2d0, which is a very low layer in the network, you have things that look like pattern detectors, just pattern detectors. So these are the things that the network is excited by, and we'll get in a second to how you create these things. Just be sure: these are the things that these particular layers in the network are most excited by. So this layer is super excited by these textures. But as you go higher, the network gets excited by these kinds of textures, then it gets excited by more complex things. So the pattern is, the higher you go in the layers, and people have been seeing and claiming this and measuring this since RBF networks and whatnot, the more complex the features that they build. So the hypothesis is that they build very complex features from the lower layers, and the lower layers have less complex features, until, at the very bottom layer, they simply extract edges and patterns of texture like this, whereas in the top layers all of these are hierarchically assembled to give you very, very intricate features. So these usually look pretty funky, and that's why I like to investigate. So this article focuses on how this is done, and the answer is: by optimization. Now what do we mean? I actually have the article here somewhat printed out, but the graphics don't really print very well to the notebook here. So imagine this here. So what you want to do is you want to see how much activation is in the network for a particular input, and you can do this in many, many different ways. The easiest form, let's actually start over here: if you have a neural network here, and this is the softmax classifier, and these are the classes, in this case let's say this is dog, this is cat and this is car, house. Let's go with house. You can think of: okay, I want to know, when the network sees a cat, what does the network think of cats? So what I would do is I would take an image that is just noise, just random noise like this one on the left here, and I would start to optimize it using backpropagation. I would start to optimize this image. Now you usually know backpropagation as the thing that will optimize these weights if we are given an image and a label.
But right now you have to rethink: what we are given is the label and the weights. We keep them constant, but we optimize the input image to maximize this label as much as possible. So we ask of x: please, please update yourself such that the output is as much cat as possible. And then we just optimize for that. So we hope that this picture would turn out to be as much cat as we like. And usually here, I don't know exactly which class they have here, but you won't get a cat. Sorry, over here. You won't get a cat, or here if you optimize the logits it's the same, you just do the same thing before the softmax. What you will get is some weird trippy thing. So in our classifier you might get something like: there's a cat here, but there's also one here, right, because two cats are more cat than one cat, and there's like a giant cat head here, and inside the cat head there is another cat head, and there's like a cat tail somewhere here, and again there's a cat eye right here. So you get something super trippy that is as much cat as possible. Alright, now, this is somewhat interesting, because you can find out what the network thinks is the most cat-like thing there is and the most dog-like thing there is. But what you can also do is you can see what the intermediate layers of the network get excited by. So what you can do is, for example, you can take an individual neuron in one of the layers. So this here might be a convolutional layer with its convolutional filters, and these are the different channels of the convolutional layer, and then this thing here is just a single neuron there. And what you can do is you can say: okay, again we input the image x, and we optimize the image x such that, with the weights given, this particular neuron, let's call it n, and n is maybe this neuron right here, is as much activated as possible. So we no longer optimize for a label, but we optimize for a particular neuron to be activated as much as possible, and then we can sort of see what an image looks like that activates this neuron as much as possible, in this case that thing. You can actually do the same thing with an entire channel, if you simply ask when the channel as a whole is activated as much as possible. And something like Deep Dream did the same thing, but with an entire layer in the neural network: what is this layer activated by? You can imagine there's not only one image that activates it the most; you'll probably get different results depending on where you start here. We'll go into that later as well. So these are the kinds of things you can do to investigate these neural networks and see what they pay attention to in each of the layers. So let's go on here. So they say: okay, if we visualize by optimization, you get something like this here on the bottom row. Looks pretty funky, right? But what you could also do is you can visualize by what are called data set examples, and with data set examples you don't do this thing, you don't do the optimization procedure, but you go into your database, into your data set, and you find the images that activate, so the x_i from your data set that activate a particular neuron the most. So you simply sort all the images in your database, and you just pick the ten or so that activate that particular neuron the most, and that would also
give you an understanding. It's actually a valid thing, and the OpenAI microscope combines both of them. So on the bottom you see what particular neurons are most excited by, and at the top you see the data set examples they're most excited by. So you can see that there's a healthy diversity in the data set examples, but they also all kind of map to what it's most excited by. It's pretty cool. The last point they make in this article is that of diversity. With the data set examples, you do naturally get sort of a diverse set of images. Of course, you could also look at activations that are only slightly positive, or you can even give negative examples and ask: okay, what is this neuron not excited by at all? But you won't get kind of a spectrum of what the neuron, or the unit, or the layer is excited by. And what they're doing is simply adding a diversity term; they say that works best. So what does that mean? It means that here, if we optimize x, let's go up here, if we optimize x and try to maximize the activation of n, we don't want to do x by itself. What we want to do is an entire set of x_i, so we feed an entire mini batch in there, and we want to maximize n, but also maximize a diversity term, let's call that D, between x_1 to x_B. So you want to maximize the activation of the neuron, but at the same time, in the loss function that you optimize, you also have this diversity term, where you say the images that you produce should be far apart from each other, or kind of apart from each other, and thus you do get diverse samples. Okay, the printing again doesn't work, but here you see that if you just simply optimize, you get the thing on the left, but if you have a batch of things and you optimize them to be diverse from each other, but also to activate the layer or the unit, you get a variety of high activations. And you can see here that this is some sort of curve; this could be a beak of a bird, but on the right here this could also be kind of a snout of a monkey or something. Okay, so it's just curvy things that it is activated by. Here they give another example: you can clearly see that there is like some sort of eye in the picture, but then if you optimize with diversity, you can see that some of them do not have this eye thing, and in fact also some of the data set examples do not have this eye thing. So it might be interesting to optimize here with diversity, and they say in higher layers it gets even more diverse, as with this ball detector. They also say they research interactions between neurons, where they can interpolate between them. So you can here have two different units; let's say this top left one here is the thing that we've just seen with the curvature activator, and then on the right we can select this thing here that appears to be activated by these bird-like things. And if you optimize an image that activates both of them, you get the thing on the bottom left, and this is very good for understanding how neural networks work, because what a neural network will do is exactly this: it will take the thing on the top left and the top right from lower layers, and it will combine them to form features of higher layers. So while the top right thing looks like generic birds, the bottom left thing looks much more like birds with, let's say, long necks and then kind of
curved necks, so more stork-ish birds, right, because we've added in this curvature thing. So this is very, very cool to play around with. You can also here interpolate between two neurons like you would interpolate in a GAN. And yeah, so they do make a point of regularization. I don't want to go into that particularly, but you have to be careful: you can't just apply the optimization procedure as I said it right now; you actually have to do some regularization to get rid of what are essentially adversarial examples in this process, because if you just straight-up optimize, you will get pretty high-frequency, crappy results. I actually want to jump over now to this OpenAI microscope. So this is a tool that lets you explore these visualizations. At the beginning you can pick one of these models, and I'll pick Inception v1, just because some of the other ones don't have everything done quite yet, all the visualizations. So on the right here you can actually see the architecture of the network; if you know what an Inception network is, this is what it looks like, and you would be able, from here, to select one of these units straight away. But, sorry, we're gonna go to the left here. So you have Deep Dream activated, which means the entire layer is optimized for. So per layer, on the right side, you have an image here, and you can already see that if we go from the bottom, as we saw before, we get patterns that become more and more complex as you go up the layers, more and more, until you finally have what the network appears to be most activated by, which is mostly dogs, which is okay, because ImageNet is dominated by dogs. So you can click on any of these right here, like this one, and now you'll be able to inspect the individual nodes in this layer. So before, we had the whole layer, right, the whole layer was activated by something, but these layers have different channels and also different neurons within the channels. So you can select this here, you can go neuron activation or channel activation, and these are the images that these channels are excited by the most. You see, you get pretty funky patterns. If we select an interesting one, maybe this one right here: you can see on the left, this is the channel optimization, on the right, this is the neuron optimization, and here you get the data set examples that mostly activate this particular channel or neuron, so, sorry, this particular channel. So you can see this is pretty similar to the thing I drew, except for it being a cat, it's some sort of a fox-dog thing. And you can explore the neural network in this fashion. So you can go through the layer here and look at the units. This seems to be a whiskers classifier, and lo and behold, things with whiskers will activate the neuron. And as you go up the layers, and this is the cool thing, so right now we're here in this layer 4, as you go up you will see more and more intricate patterns of activations. I could play around with this for a very, very long time, but I won't waste your time too much. There is, sorry, there is a Slack workspace where people discuss interesting patterns. What is this? Okay, yes, this is some sort of temple constructor. Very cool. There is a Slack workspace where people discuss interesting things. For example, they discuss how the car detector that you see right here, which is one of the units, there are literally endless units to look at, that detects cars, can be clearly seen to be
built from lower-level features, such as this wheel detector. You see, this wheel detector here is unit 337 in the mixed4 layer, and this car hood detector right here is unit 237, also in one of the layer-4 blocks. So these are both from layer 4, and then the car detector, I haven't looked this up, ah, is in layer 4 as well, but let's check it out. This is in layer 4b, this is in layer 4b, and this is in layer 4c. Ah, so you see, this was a risk. The car detector is built from lower-level features of car hood and car wheel: the car wheel detector right here detects wheels, the car hood detector detects hoods, and then the car detector detects cars. So I really invite you to go look at it, check out what people find, and explore these models. All of this is based on this Lucid library right here; I also invite you to check that out, where you can perform such optimizations yourself. I'll link to that, and with that, bye bye.
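To make the procedure described in this video concrete, here is a minimal PyTorch sketch of activation maximization. This is not the Lucid library's actual API; the model, the hooked layer (inception4c) and the channel index are illustrative assumptions, and a real implementation would add the regularization mentioned above. It assumes a recent torchvision.

import torch
import torchvision.models as models

# Load a pretrained Inception-v1-style network and freeze its weights;
# only the input image will be optimized.
model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture an intermediate activation with a forward hook (layer choice is illustrative).
acts = {}
model.inception4c.register_forward_hook(lambda m, i, o: acts.update(feat=o))

# Start from random noise and do gradient ascent on one channel's mean activation.
x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
channel = 42  # illustrative channel index
for _ in range(256):
    opt.zero_grad()
    model(x)
    loss = -acts["feat"][0, channel].mean()  # negative, so stepping descends -activation
    loss.backward()
    opt.step()

A channel objective like this averages over all spatial positions; for a single-neuron objective you would index one spatial position instead, and for the diversity variant discussed in the video you would optimize a batch of images and add a term to the loss that rewards the batch members for being different from each other.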
[ { "start": 0, "end": 6.8, "text": " Hi there! Today we're going to take a look at the OpenAI microscope and this" }, { "start": 6.8, "end": 11.46, "text": " article on Distill called Feature Visualization. So the Feature" }, { "start": 11.46, "end": 17.72, "text": " Visualization article is by Chris Ola, Alexander Mortwintsev and Ludwig" }, { "start": 17.72, "end": 25.32, "text": " Schubert of the Google Brain team, while the OpenAI microscope is by OpenAI." }, { "start": 25.32, "end": 31.64, "text": " So keep that in mind. These tools are tools for visualizing what neural" }, { "start": 31.64, "end": 37.56, "text": " networks learn and specifically here we're dealing with image classifiers and" }, { "start": 37.56, "end": 41.879999999999995, "text": " even more specifically ImageNet classifiers. So you know ImageNet is this big" }, { "start": 41.879999999999995, "end": 47.2, "text": " data set with a thousand classes of images and the images are somewhat like" }, { "start": 47.2, "end": 51.84, "text": " 200 by 200 pixels and you're just supposed to put them into one of these" }, { "start": 51.84, "end": 59.120000000000005, "text": " 1000 classes and networks have become really good at these kinds of things." }, { "start": 59.120000000000005, "end": 64.64, "text": " So our question is what do these networks learn? And there are a number of very cool" }, { "start": 64.64, "end": 71.12, "text": " works that have started to investigate what these networks learn. So this" }, { "start": 71.12, "end": 77.52000000000001, "text": " started with work like Deep Dream or even before that but this is a very good" }, { "start": 77.52, "end": 83.52, "text": " summary and also an overview of this. So in this article they first" }, { "start": 83.52, "end": 89.75999999999999, "text": " showcase these patterns here where you can see that as you go through the" }, { "start": 89.75999999999999, "end": 96.64, "text": " network in the layer, this is layer conf2d0, which is a very low layer in" }, { "start": 96.64, "end": 102.75999999999999, "text": " the network, you have things what looks like pattern, just pattern detectors. So" }, { "start": 102.75999999999999, "end": 106.56, "text": " these are the things that the network is excited by and we'll just get, we'll get" }, { "start": 106.56, "end": 113, "text": " in a second at how you create these things. Just be sure these are the things" }, { "start": 113, "end": 120.36, "text": " that these particular layers in the networks are most excited by. So this" }, { "start": 120.36, "end": 126.96000000000001, "text": " layer is super excited by these textures. But as you go higher the" }, { "start": 126.96000000000001, "end": 132.88, "text": " network gets excited by these kind of textures, then it gets excited by more" }, { "start": 132.88, "end": 138.96, "text": " complex things. So the pattern is the higher you go in the layers and" }, { "start": 138.96, "end": 143.72, "text": " these people have been seeing and claiming this and measuring this since" }, { "start": 143.72, "end": 152.24, "text": " RBF networks and whatnot, that the higher you go in these networks the" }, { "start": 152.24, "end": 159.07999999999998, "text": " more complex features that they build. 
So they always build hypothesis is they" }, { "start": 159.08, "end": 164.20000000000002, "text": " build very complex features from the lower layers and the lower layers they" }, { "start": 164.20000000000002, "end": 168.96, "text": " have less less complex features and until you're the very bottom layer they" }, { "start": 168.96, "end": 174.64000000000001, "text": " simply extract edges and patterns of texture like this. Whereas in the top" }, { "start": 174.64000000000001, "end": 180.60000000000002, "text": " layers all of these are hierarchically assembled to give you very very intricate" }, { "start": 180.60000000000002, "end": 188.08, "text": " features. So these usually look pretty funky that's why I like to investigate." }, { "start": 188.08, "end": 194.84, "text": " So this article focuses on how is this done and the answer is by optimization." }, { "start": 194.84, "end": 199.4, "text": " Now what do we mean? I actually have the article here somewhat printed out but" }, { "start": 199.4, "end": 206.84, "text": " the graphics they don't really print very well to the notebook here. So" }, { "start": 206.84, "end": 217.52, "text": " imagine this here. So what you want to do is you want to see how much" }, { "start": 217.52, "end": 222.32000000000002, "text": " activation is in the network for a particular input and you can do this in" }, { "start": 222.32000000000002, "end": 229.8, "text": " many many different ways. The easiest form, let's actually start over here, if" }, { "start": 229.8, "end": 235.76000000000002, "text": " you have a neural network here and this is the softmax classifier and these are" }, { "start": 235.76000000000002, "end": 242.96, "text": " the classes in this case let's say this is dog, this is cat and this is car, house." }, { "start": 242.96, "end": 253.64000000000001, "text": " Let's go with house. You can think of okay I want to know when" }, { "start": 253.64000000000001, "end": 259, "text": " the network sees a cat. What does the network think of cats? So what I would do" }, { "start": 259, "end": 267.2, "text": " is I would take an image that is just noise, just random noise like this" }, { "start": 267.2, "end": 274.28, "text": " one on the left here and I would start to optimize using backpropagation. I would" }, { "start": 274.28, "end": 279.12, "text": " start to optimize this image. Now you usually know backpropagation as the" }, { "start": 279.12, "end": 284.4, "text": " thing that will optimize these weights if we have given an image and a label." }, { "start": 284.4, "end": 291.2, "text": " But right now you have to, so and we optimize the weights here, sorry I" }, { "start": 291.2, "end": 297.92, "text": " should say that, but right now you have to rethink what we have given is the" }, { "start": 297.92, "end": 303.03999999999996, "text": " label and the weights. We keep them constant but we optimize the input image" }, { "start": 303.03999999999996, "end": 309.84, "text": " to maximize this label as much as possible. So we ask of xx please please" }, { "start": 309.84, "end": 316.71999999999997, "text": " update yourself such that the output is as much cat as possible. And then" }, { "start": 316.72, "end": 325.72, "text": " we just optimize for that. So we hope that this picture would turn out to" }, { "start": 325.72, "end": 333.32000000000005, "text": " be as much cat as we like. And usually here, I don't know exactly which class" }, { "start": 333.32000000000005, "end": 340, "text": " they have here, but you won't get a cat. 
Sorry over here. You won't get a cat or" }, { "start": 340, "end": 343.76000000000005, "text": " here if you optimize the logits it's the same, just you do the same thing before" }, { "start": 343.76, "end": 350.4, "text": " the softmax. What you will get is some weird trippy thing. So in our classifier" }, { "start": 350.4, "end": 358.03999999999996, "text": " you might get something like there's a cat here but there's also one here right" }, { "start": 358.03999999999996, "end": 362.64, "text": " because two cats are more cat than one cat and there's like a giant cat head" }, { "start": 362.64, "end": 368.52, "text": " here and inside the cat head there is another cat head and there's like a cat" }, { "start": 368.52, "end": 373.96, "text": " tail somewhere here and again there's a cat eye right here. So you get like" }, { "start": 373.96, "end": 378.71999999999997, "text": " something super trippy that is as much cat as possible." }, { "start": 378.71999999999997, "end": 385.32, "text": " Alright now so this is somewhat interesting because you can find" }, { "start": 385.32, "end": 389.2, "text": " out what does the network think is the most cat-like thing there is and" }, { "start": 389.2, "end": 394.59999999999997, "text": " the most dog-like thing there is. But what you can also do is you can see what" }, { "start": 394.6, "end": 400.8, "text": " do the intermediate layers of the networks, what do they get excited by. So" }, { "start": 400.8, "end": 405.12, "text": " what you can do is for example you can take an individual neuron in one of the" }, { "start": 405.12, "end": 410.88, "text": " layers right so this here might be a convolutional layer with its" }, { "start": 410.88, "end": 414.96000000000004, "text": " convolutional filters right and these are the different channels of the" }, { "start": 414.96000000000004, "end": 421.12, "text": " convolutional layer and then this thing here is just a single neuron there and" }, { "start": 421.12, "end": 433.72, "text": " what you can do is you can say okay X we have now again we input the" }, { "start": 433.72, "end": 441.64, "text": " image X and we optimize the image X such that with a given weight this" }, { "start": 441.64, "end": 447.92, "text": " particular neuron let's call it N and N is maybe this neuron right" }, { "start": 447.92, "end": 454.68, "text": " here right this particular neuron is as much activated as possible right so we" }, { "start": 454.68, "end": 459.08000000000004, "text": " no longer optimize for a label but we optimize for a particular neuron to be" }, { "start": 459.08000000000004, "end": 466.16, "text": " activated as much as possible and then we can sort of see what is an image that" }, { "start": 466.16, "end": 473.24, "text": " activates this neuron as much as possible in this case that thing you can" }, { "start": 473.24, "end": 476.88, "text": " actually do the same thing with an entire channel if you simply say when is" }, { "start": 476.88, "end": 482.08, "text": " the channel as a whole activated as much as possible and something like deep" }, { "start": 482.08, "end": 486.24, "text": " dream did the same thing but with an entire layer in the neural network like" }, { "start": 486.24, "end": 492.84, "text": " what is this layer activated by they can imagine there's not only one image that" }, { "start": 492.84, "end": 499.28, "text": " activates it the most but probably depending on your start here it's it's" }, { "start": 499.28, "end": 505.96, "text": " it's a you'll get different different 
results for depending on where you start" }, { "start": 505.96, "end": 511.79999999999995, "text": " we'll go into that later as well so these are the kinds of things you can do" }, { "start": 511.79999999999995, "end": 517, "text": " to investigate these neural networks and see what they what they pay attention to" }, { "start": 517, "end": 528.64, "text": " in each of the layers right so let's go on here and so they say they say okay if" }, { "start": 528.64, "end": 535.16, "text": " we optimize by optimization you get something like this here on the bottom" }, { "start": 535.16, "end": 542.52, "text": " row looks pretty funky right but what you could also do is you can visualize by" }, { "start": 542.52, "end": 547.52, "text": " what there's called data set examples and date with data set examples you" }, { "start": 547.52, "end": 552.64, "text": " don't do this thing you don't do the optimization procedure but you go into" }, { "start": 552.64, "end": 560.6, "text": " your into your database into your data set and you find the images that" }, { "start": 560.6, "end": 566.2, "text": " activate so the XI from your data set that activate a particular neuron the" }, { "start": 566.2, "end": 570, "text": " most so you simply sort all the images in your database and you just pick the" }, { "start": 570, "end": 574.9200000000001, "text": " ten or so that activate that particular neuron the most and that would also" }, { "start": 574.9200000000001, "end": 580.72, "text": " give you an understanding it's actually a valid thing and the AI microscope" }, { "start": 580.72, "end": 585.4, "text": " combines both of them so on the bottom you see what particular neurons are most" }, { "start": 585.4, "end": 592.72, "text": " excited by and at the top you see that that the data set examples they're most" }, { "start": 592.72, "end": 597.56, "text": " excited by so you can see that there is a there's a healthy diversity in the" }, { "start": 597.56, "end": 602.4, "text": " data set examples but they also all kind of map to the to the what it's most" }, { "start": 602.4, "end": 609.92, "text": " excited by it's pretty cool the last point they make in this article is that" }, { "start": 609.92, "end": 618.04, "text": " of diversity with the data set examples you do get naturally sort of a diverse" }, { "start": 618.04, "end": 622.36, "text": " set of images of course where you can guess you can also say okay whether the" }, { "start": 622.36, "end": 626.56, "text": " maximum activations and only slightly positive you can even give negative" }, { "start": 626.56, "end": 633.68, "text": " examples but with the positive you can only either maximize fully or you can" }, { "start": 633.68, "end": 638, "text": " you know take the negative and you can say okay what is this neuron not excited" }, { "start": 638, "end": 645.12, "text": " by at all but you you won't get kind of a spectrum of what the neuron is" }, { "start": 645.12, "end": 650.8, "text": " excited by or the unit or the layer and what they're doing is simply they add a" }, { "start": 650.8, "end": 657.52, "text": " diversity term they say that works best so what does that mean it means that" }, { "start": 657.52, "end": 666.32, "text": " here if we optimize X and Y right let's go up here if we optimize X in that" }, { "start": 666.32, "end": 675, "text": " right and try to maximize the activation of n we don't want to do X by itself but" }, { "start": 675, "end": 683.12, "text": " we want to do is we want to do an entire set of X I right so we feed 
an entire" }, { "start": 683.12, "end": 692.12, "text": " mini batch in there and we want to maximize n but also maximize a diversity" }, { "start": 692.12, "end": 703.64, "text": " term let's call that D between X 1 to X be right so you want to maximize the" }, { "start": 703.64, "end": 707.32, "text": " activation of the neuron but at the same time in your loss function that you" }, { "start": 707.32, "end": 713.12, "text": " optimize you also have this diversity term where you say the images that you" }, { "start": 713.12, "end": 717.32, "text": " produce should be far apart from each other or kind of apart from each other" }, { "start": 717.32, "end": 725.5600000000001, "text": " and thus you do get diverse samples okay the printing again doesn't work so here" }, { "start": 725.5600000000001, "end": 730.96, "text": " you see that if you just simply optimize you get the thing on the left but if you" }, { "start": 730.96, "end": 736.08, "text": " have a batch of things and you optimize them to be diverse from each other but" }, { "start": 736.08, "end": 742.7600000000001, "text": " also activate the layer or the unit you get a variety of high activations and" }, { "start": 742.76, "end": 747.76, "text": " you can see here that this is some sort of curve this could be a beak of a bird" }, { "start": 747.76, "end": 752.76, "text": " but on the right here this could also be kind of a snout of a monkey or something" }, { "start": 752.76, "end": 760.24, "text": " okay so it's just curvy curvy things that is activated by here they give" }, { "start": 760.24, "end": 764.8, "text": " another example you can clearly see that there is like some sort of eye in the" }, { "start": 764.8, "end": 769.4, "text": " picture but then if you optimize with diversity you can see that some of them" }, { "start": 769.4, "end": 776.88, "text": " do not have this eye thing and in fact also some of the data set examples do" }, { "start": 776.88, "end": 782.64, "text": " not have this eye thing so it might be interesting to to to optimize here with" }, { "start": 782.64, "end": 790.88, "text": " diversity and even in they say in higher layers it gets even more more diverse" }, { "start": 790.88, "end": 800.12, "text": " with what you achieve with this ball detector they also say they research" }, { "start": 800.12, "end": 805.28, "text": " interactions between neurons where they can interpolate between them right so" }, { "start": 805.28, "end": 811.76, "text": " you can here have two different new units let's say this top left one here" }, { "start": 811.76, "end": 816.44, "text": " is the thing that we've just seen with the curvature activator and then on the" }, { "start": 816.44, "end": 821.6, "text": " right we can select this thing here that is appears to be activated by these bird" }, { "start": 821.6, "end": 826.0400000000001, "text": " like bird like things and if you optimize an image that activates both of" }, { "start": 826.0400000000001, "end": 830.4000000000001, "text": " them you get the thing on the bottom left and this is very good for" }, { "start": 830.4000000000001, "end": 834.4000000000001, "text": " understanding how neural networks work because what a neural network will do is" }, { "start": 834.4000000000001, "end": 839.5600000000001, "text": " exactly it will take the thing on the top left and the top right from lower" }, { "start": 839.5600000000001, "end": 844.48, "text": " layers and it will combine them to form features of higher layers so while the" }, { "start": 844.48, "end": 850.52, 
"text": " top right thing looks like generic birds the bottom left thing looks much more" }, { "start": 850.52, "end": 854.6, "text": " like birds with let's say long necks and then kind of curved necks so more" }, { "start": 854.6, "end": 861.9200000000001, "text": " stork ish birds right because we've added in this curvature thing so this is" }, { "start": 861.9200000000001, "end": 867.02, "text": " very very cool to play around you can also here interpolate between into" }, { "start": 867.02, "end": 875.92, "text": " neurons like you would interpolate in a in a GAN right and yeah so they do make" }, { "start": 875.92, "end": 880.16, "text": " a point of regularization I don't want to go into that particularly but you" }, { "start": 880.16, "end": 885, "text": " have to be careful you can't just apply the optimization procedure as I said" }, { "start": 885, "end": 891.52, "text": " right now you have to actually have to do some regularization and to get to" }, { "start": 891.52, "end": 895.36, "text": " get rid of what are essentially adversarial examples in this process" }, { "start": 895.36, "end": 901.48, "text": " because if you just straight-up optimize you will get pretty high frequency" }, { "start": 901.48, "end": 909.84, "text": " crappy results I actually want to jump over now to this OpenAI microscope so" }, { "start": 909.84, "end": 914.6800000000001, "text": " this is a tool that lets you explore these visualizations so at the beginning" }, { "start": 914.6800000000001, "end": 919.4, "text": " you can pick one of these models and I'll pick inception v1 just because some" }, { "start": 919.4, "end": 925.8, "text": " of the other ones they don't have everything done quite yet the all the" }, { "start": 925.8, "end": 931.84, "text": " all the all the visualization so on the right here you can actually see the" }, { "start": 931.84, "end": 935.88, "text": " architecture of the network if you know what an inception network is this what" }, { "start": 935.88, "end": 941.28, "text": " this looks like and you would be able from here to select one of these units" }, { "start": 941.28, "end": 948.48, "text": " straight away but I'm gonna sorry we're gonna go to the left here so you have" }, { "start": 948.48, "end": 958.24, "text": " deep dream activated which means the entire layer is optimized for so per" }, { "start": 958.24, "end": 964.44, "text": " layer on the right side you have an image here and you can already see that" }, { "start": 964.44, "end": 970.08, "text": " if we go from the bottom what we saw before we get patterns that become more" }, { "start": 970.08, "end": 976.5600000000001, "text": " and more complex as you go up the layers and then more and more until you finally" }, { "start": 976.56, "end": 982.4, "text": " have what the network appears to be most activated by is mostly dogs which is" }, { "start": 982.4, "end": 989.5999999999999, "text": " okay because image net is dominated by dogs so you can click on any of these" }, { "start": 989.5999999999999, "end": 998.9599999999999, "text": " right here like this one and now you'll be able to inspect the individual nodes" }, { "start": 998.9599999999999, "end": 1002.4, "text": " in this layer so before we had the whole layer right the whole layer was" }, { "start": 1002.4, "end": 1007.9599999999999, "text": " activated by something but these layers they have different channels and also" }, { "start": 1007.9599999999999, "end": 1013.12, "text": " different neurons within the channel so you can select this here you 
can go" }, { "start": 1013.12, "end": 1021.52, "text": " neuron activation or channel activation and these are the images that these" }, { "start": 1021.52, "end": 1027.08, "text": " channels are excited by the most you see you get pretty funky pattern if we" }, { "start": 1027.08, "end": 1034.6, "text": " select one interesting one maybe this this one right here you can see on the" }, { "start": 1034.6, "end": 1038.32, "text": " left this is the channel optimizing optimization on the right this is the" }, { "start": 1038.32, "end": 1044.48, "text": " neuron optimization and here you get the data set examples that are most" }, { "start": 1044.48, "end": 1053.32, "text": " activated that mostly activate this particular channel or neuron so sorry" }, { "start": 1053.32, "end": 1060.24, "text": " this particular yeah channel so you can see this is pretty similar to the thing" }, { "start": 1060.24, "end": 1068, "text": " I drew where except for it being a cat it's some sort of a fox dog thing right" }, { "start": 1068, "end": 1074.8799999999999, "text": " and and you can explore the neural network in this fashion so you can go" }, { "start": 1074.88, "end": 1085.2800000000002, "text": " through the layer here and look at that good units this seems to be whiskers" }, { "start": 1085.2800000000002, "end": 1094.1200000000001, "text": " classifier and lo and behold things with whiskers will activate the the neuron" }, { "start": 1094.1200000000001, "end": 1098.24, "text": " and as you go up the layer and this is the the cool thing right so we're right" }, { "start": 1098.24, "end": 1103.2800000000002, "text": " now we're here in this layer for as you go up you will see more and more" }, { "start": 1103.28, "end": 1112.52, "text": " intricate patterns of activations I I could play around this for very very" }, { "start": 1112.52, "end": 1119.76, "text": " long time but I won't I won't waste your time too much they there is a dist sorry" }, { "start": 1119.76, "end": 1128.16, "text": " there is a slack workspace where people discuss interesting patterns what is" }, { "start": 1128.16, "end": 1141.52, "text": " this okay yes this is a some sort of temple temple constructor very cool" }, { "start": 1141.52, "end": 1146.92, "text": " there is a slack workspace where people discuss interesting things for example" }, { "start": 1146.92, "end": 1153.6000000000001, "text": " they discuss how the car detector that you see right here is one of the units" }, { "start": 1153.6, "end": 1159.6, "text": " there is literally endless units to look at that detects cars can be clearly seen" }, { "start": 1159.6, "end": 1165.12, "text": " to be built from lower level features such as this wheel detector you see this" }, { "start": 1165.12, "end": 1175.24, "text": " wheel detector here is the unit three three seven in the mixed four layer and" }, { "start": 1175.24, "end": 1183.52, "text": " this car hood detector right here is unit two three seven also in the in one" }, { "start": 1183.52, "end": 1188.32, "text": " of the layer fours right so these are both from layer four and then the car" }, { "start": 1188.32, "end": 1194.88, "text": " detector I haven't looked this up ah isn't layer four as well but let's check" }, { "start": 1194.88, "end": 1203.08, "text": " it out this isn't layer 4b this isn't layer 4b and this isn't layer 4c ah so" }, { "start": 1203.08, "end": 1211.8, "text": " you see I this was a risk the car detector is built from lower level" }, { "start": 1211.8, "end": 1218.36, "text": " features of 
car hood and car wheel right the car wheel right here detects wheels" }, { "start": 1218.36, "end": 1226.4399999999998, "text": " and the car hood detector detects hoods and then the car detector detects cars so" }, { "start": 1226.4399999999998, "end": 1232.24, "text": " there are very like I really invite you to go look at it check out what people" }, { "start": 1232.24, "end": 1239.36, "text": " find and explore these models all of this is based on this lucid library" }, { "start": 1239.36, "end": 1244.48, "text": " right here also invite you to check that out where you can perform such" }, { "start": 1244.48, "end": 1262.68, "text": " optimizations yourself I'll link to that and with that bye bye" } ]
_EDr3ryrT_Y
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Typical Decoding for Natural Language Generation (Get more human-like outputs from language models!)
[ "Science & Technology" ]
[]
#deeplearning #nlp #sampling Modern language models like T5 or GPT-3 achieve remarkably low perplexities on both training and validation data, yet when sampling from their output distributions, the generated text often seems dull and uninteresting. Various workarounds have been proposed, such as top-k sampling and nucleus sampling, but while these manage to somewhat improve the generated samples, they are hacky and unfounded. This paper introduces typical sampling, a new decoding method that is principled, effective, and can be implemented efficiently. Typical sampling turns away from sampling purely based on likelihood and explicitly finds a trade-off between generating high-probability samples and generating high-information samples. The paper connects typical sampling to psycholinguistic theories on human speech generation, and shows experimentally that typical sampling achieves much more diverse and interesting results than any of the current methods. Sponsor: Fully Connected by Weights & Biases https://wandb.ai/fully-connected OUTLINE: 0:00 - Intro 1:50 - Sponsor: Fully Connected by Weights & Biases 4:10 - Paper Overview 7:40 - What's the problem with sampling? 11:45 - Beam Search: The good and the bad 14:10 - Top-k and Nucleus Sampling 16:20 - Why the most likely things might not be the best 21:30 - The expected information content of the next word 25:00 - How to trade off information and likelihood 31:25 - Connections to information theory and psycholinguistics 36:40 - Introducing Typical Sampling 43:00 - Experimental Evaluation 44:40 - My thoughts on this paper Paper: https://arxiv.org/abs/2202.00666 Code: https://github.com/cimeister/typical-sampling/blob/3e676cfd88fa2e6a24f2bdc6f9f07fddb87827c2/src/transformers/generation_logits_process.py#L242-L272 Abstract: Despite achieving incredibly low perplexities on myriad natural language corpora, today's language models still often underperform when used to generate text. This dichotomy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language as a communication channel (à la Shannon, 1948) can provide new insights into the behaviors of probabilistic language generators, e.g., why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, and do so in a simultaneously efficient and error-minimizing manner; they choose each word in a string with this (perhaps subconscious) goal in mind. We propose that generation from probabilistic models should mimic this behavior. Rather than always choosing words from the high-probability region of the distribution--which have a low Shannon information content--we sample from the set of words with information content close to the conditional entropy of our model, i.e., close to the expected information content. This decision criterion can be realized through a simple and efficient implementation, which we call typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, typical sampling offers competitive performance in terms of quality while consistently reducing the number of degenerate repetitions. 
Authors: Clara Meister, Tiago Pimentel, Gian Wiher, Ryan Cotterell Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Pay special attention to this paper. It is not a paper by Google or DeepMind or Meta or anything like this, yet I believe it is a really important paper. It discusses typical sampling, which is a new decoding strategy, of how we sample from language models. We usually train language models with a maximum likelihood objective that puts a lot of weight on very likely words. And when we use these models to produce language, we either explicitly or implicitly make these models sample very highly likely strings, which are boring and not human-like; it's not what we do. I don't say things that are just highly likely, because I actually want to say something interesting. And that means that every now and then, I should utter something that's less likely, I should speak a word or a sentence that you didn't expect, because that's what transmits information. Typical sampling does exactly that and does it in a principled fashion. This video right here is a description, a review of the paper. And the next video is going to be an interview with Clara Meister, the first author of the paper. Both videos, but especially the interview, are super duper interesting. I would definitely invite you to check them both out. And I would definitely invite you to try out typical sampling. It is in Hugging Face. And whenever your objective is to sample something that is very high quality, but also diverse and interesting, and not just bland high-likelihood text, then that is the method for you. I believe that we do need new sampling strategies, and this one is very promising. Check it out, leave a like and see ya. Hi, let me quickly tell you about Fully Connected, which is a curated space for the applied ML community. It features articles, project reports, news, events, and anything you could want. Especially the projects page acts as a little bit of a Product Hunt for ML, so feel free to add your own project right here. It's curated by Weights and Biases, but I know what you're thinking: yeah, another company blog, whatever, about their products. But this is not at all about Weights and Biases. It features some of their stuff, of course, but it is generally a really good resource to get good information on what's currently happening in deep learning. They have great articles and tutorials, like there's one on solving Wordle with reinforcement learning, which is pretty cool. There's one explaining group normalization in PyTorch. And there's one that explains to you how to run YOLOv5 object detection on Windows. So as you can see, they have all kinds of stuff, and the list of already existing articles is long. If you still don't believe me that it's not all Weights and Biases: in fact, you can submit a post there, you can click the button, write a post, it will be reviewed by them and then published. So one of the coolest ML startups currently is going to push your content. How great is that? Now, if you are just a lurker like me, then you know, head over there and subscribe, because it's user submitted but curated, so you get the best of both worlds. Besides articles, they also have events, which usually means webinars about various topics; you can look at old webinars, but you can also subscribe to get updates on new ones. They also host their podcast, Gradient Dissent, and the current episode is actually with Jensen Huang, the CEO of Nvidia. So, pretty big hitter.
And lastly, it includes the good old Weights and Biases community forums, where you can get all kinds of help on Weights and Biases products and, beyond Weights and Biases, on all kinds of things machine learning related. So again, Fully Connected just got a major redesign. Please check it out. Go over there, subscribe for awesome articles and news. There's new stuff all the time. Thank you so much to Weights and Biases for sponsoring this video. They've been a great sponsor, so please check them out. That's wandb.ai/fully-connected. Now let's get into the video. See ya. Hello there, today we'll look at Typical Decoding for Natural Language Generation by Clara Meister, Tiago Pimentel, Gian Wiher and Ryan Cotterell. This paper suggests a new way of decoding, of producing text, from a large language model or a small language model. It doesn't matter, we don't discriminate here. In any case, you might have heard of things like beam search, you might have heard of things like nucleus sampling and top-k sampling. These things are all right. And interestingly enough, the stochastic methods like nucleus and top-k sampling are better than the methods that try to find the most likely things, such as beam search or greedy decoding. However, it's still not satisfactory: large language and small language models often produce text that is boring, just kind of bland, when you actually use them, even though they have amazing perplexities on text. This paper tackles this. It proposes that when humans generate text, they don't just produce the most likely text; they will actually trade off likelihood with information content, or the transmission of information to another human. And that trade-off can be captured in the framework of information theory, and we can propose a decoding scheme, which they call typical decoding, typical sampling, which exactly encapsulates that notion of balancing interestingness, or information, with likelihood. And when they test it, that actually leads to better results. This could be really crucial, because it doesn't require any change to how we train language models. In fact, we can take off-the-shelf trained language models and simply use this new decoding strategy out of the box, and it applies across domains. Now, I have long said that our decoding methods, our sampling methods, may be inadequate, depending on what we do with those language models. For example, AlphaCode samples a whole bunch of programs in order to solve a problem. Again, there is value in diversity if you sample a whole bunch and then, after that, use a filter to narrow it down. So I think, depending on what you want to do, maximum likelihood sampling is very appropriate. This paper, for example, mentions machine translation, because in machine translation you really want kind of the best translation for a given input. However, in other frameworks, such as AlphaCode, but also such as storytelling, and this paper mentions summarization maybe as well, you want to trade off some of this maximum likelihood for some more diversity, or for some more interestingness, or for some more information content. And that's what this paper does. So we'll dive into it. If you like content like this, as always, leave a like, and don't be shy to let me know in the comments what you think. I'm not entirely sold on what this paper does.
I do agree we need better, or at least different, decoding strategies, but I do have my, you know, reservations about this exact one. So let's dive into the paper. The paper first complains about the exact thing I complain about, namely saying that language models currently have extremely low perplexities on corpora from many domains, yet when used to generate text, their performance is far from perfect. And by that they mean, yeah, they produce text that is undesirable, e.g. generic or degenerate. Wait, yes: so either generic or degenerate, or just, as we said, boring, bland, you know. And that comes from the fact that a lot of these things try to find the maximal probability string. So they think: I'm going to sample from the probability distribution, and I want to sample what is the most likely, because that's how we train these models, right? So let's do a short excursion. If you are unaware of how language models are trained: you have a sentence like "the cat is in [something] the house", and it goes on. So what you do is you input a part of the text, and then you let the model predict the next token, and then you input that part, and you let the model predict the next token. Now, in training, this is all good and fine. But at inference time, what you do is you provide a prefix, for example, "the cat". And then you have to decode here, you have to decode a word, what's next, and then you feed whatever you decoded into the language model, and you decode the next word. And I think that's where part of the problem comes from. Because during training, naturally, what is here is given by the data set. So at every new step that you take, if there is something unlikely, if there is a certain diversity to the input, that's captured by the training data. However, in decoding, you sort of make your own data as you go along here, and if you just always focus on finding very likely next tokens, you'll never get into kind of a less likely environment, which could also be correct, right? So that is one of the problems. However, obviously, the way these language models work is, for example, you input all of this into a big model, there is some sort of a model, which usually is a transformer nowadays, and out comes a probability distribution. And the probability distribution is over your vocabulary. For example, there is the vocabulary: cat, dog. I don't know another word. What's another word? House, something like this. And it will give you a distribution of probabilities over these words, and you can now choose what to do. Either you can take the maximum one, which often runs into these problems of being boring or even repetitive, or you can sample from this distribution, which is also not super appropriate, because, and the paper touches on this a little bit, of what's called the long tail here: there are many, many words, of course, and they all have some probability, and you don't want to get into these super low probability words, because they might just be artifacts of the model. The model doesn't represent these low probabilities really well. It's really good at the sort of high probability words, because, well, it's essentially trained as a classifier, and the classifier is trained to give you the correct label as the highest class, and it doesn't really care about the rest of the words, especially not the ones that have really low probability.
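To make this decoding loop concrete, here is a rough sketch with GPT-2 from the Hugging Face transformers library. It is purely illustrative; the prompt, the generation length and the model choice are arbitrary assumptions.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The cat", return_tensors="pt").input_ids
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)   # distribution over the vocabulary
    # greedy decoding would be: next_id = probs.argmax().view(1, 1)
    next_id = torch.multinomial(probs, 1).view(1, 1)  # plain (ancestral) sampling
    ids = torch.cat([ids, next_id], dim=1)  # feed our own output back in
print(tok.decode(ids[0]))

Greedy decoding (the argmax line) tends to produce the bland, repetitive text discussed here, while plain sampling occasionally wanders into the unreliable long tail, which is exactly the tension the strategies below try to resolve.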
So people came up with, first of all, beam search. What beam search does is consider multiple futures. So if it's here at "that cat", it considers multiple futures and looks a few steps ahead, and it keeps a list of partial continuations that are possible. For example, in the beginning it goes down all three routes, and it keeps those in mind, along with the probabilities you accumulate along that tree. Then you go ahead, and maybe the buffer is five large. So now we can still fit everything, because there are one, two, three, four, five paths currently, but as soon as we expand the sixth one, this one here, we have to drop one. So we consider all the paths, and we keep only the ones with the highest likelihood so far, which we can compute simply by multiplying the probabilities of consecutive decoding steps. We keep the most likely five paths, say, and delete the rest. Say this one here is really low probability; then once we add this one and this one, we have to drop another few, say these two here, and so on. We only continue the paths whose probabilities are high enough to be among the best. That's beam search. And the reason why people do it is that there might be a very high-likelihood sentence you could produce whose next word just happens to be low in probability right now. Maybe here, "house" leads to a sentence that down the road is very likely, has a very good score, but just this word, for this particular prefix, is low probability, because the immediate best word would be "dog". So beam search is even more extreme than greedy decoding in the sense that it really finds the high-probability stuff; it just looks ahead to do so even more accurately.

If you go to the opposite end of the spectrum, you can ask: okay, can we sample, but fix the sampling issues that arise from this tail? That's why people do two things: there's top-k sampling, and there is nucleus sampling, and they work pretty much the same. Top-k sampling says: you have, again, your probability distribution; can we only consider the k largest entries in that distribution and just sample from those? So let's say k equals three; then we only consider the three largest entries here, forget about the rest, and sample only from them. We have to renormalize, but that's fine. Nucleus sampling is very much the same, except it says: I'm going to afford myself a cumulative probability of, let's say, 0.7. What does that mean? This distribution right now has a cumulative probability of one. I am simply going to take the largest entries, this one, and this one, and this one, until the cumulative probability reaches my threshold; maybe I'm going to take this one as well. You can see the advantage: you don't always pick the same number of entries, but you always pick the top entries that make up, in this case, 70% of the mass. And that is useful because you have to consider multiple scenarios.
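Before we get to those scenarios, here is a minimal sketch of both cutoffs side by side. This is a generic rendering of the standard filtering step, not code from the paper; `logits` is a 1-D tensor of next-token scores:

```python
import torch

def filter_logits(logits, k=None, p=None):
    """Top-k (fixed number of candidates) or nucleus/top-p (smallest set
    whose cumulative probability reaches p), then renormalize. Sketch only."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    if k is not None:
        cutoff = k                                    # top-k: fixed cutoff
    else:
        cum = torch.cumsum(sorted_probs, dim=-1)
        cutoff = int((cum < p).sum().item()) + 1      # nucleus: adaptive cutoff
    keep = torch.zeros_like(probs, dtype=torch.bool)
    keep[sorted_idx[:cutoff]] = True
    filtered = torch.where(keep, probs, torch.zeros_like(probs))
    return filtered / filtered.sum()                  # renormalize before sampling

# Sample the next token, e.g.:
# next_id = torch.multinomial(filter_logits(logits, p=0.7), 1)
```

The difference between the fixed k and the adaptive mass cutoff is exactly what the following two scenarios illustrate.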
One scenario is where the distribution is very peaky: there, you only want to consider very few entries, because everything else is just really unlikely. However, if you think of a distribution that is more spread out, like this one, then you want to consider more entries, because all of them are kind of likely. Nucleus sampling affords you that, whereas top-k sampling would just disregard the shape of the distribution and pick the top ones. Right, so these are the decoding strategies, but still, you can see they always go for the top, the most likely things. And this paper says: well, that's kind of dumb. And it frames this as an information-theoretic problem.

We already said that humans probably want to trade off the likelihood of a string, how likely it is to appear, meaning essentially how much it is expected, because if I just say things that other humans expect, then I'm essentially not transmitting much information at all. So we can say that every string has an information content. Actually, I'm going to skip to the theory section directly, and forgive me, I've pretty much explained all of what's highlighted already. What we can say is: y is the message you want to pass, let's say a sentence. Its information content can be quantified as its negative log probability. Essentially, and you can see here that it's the negative log probability, the less likely a message is, the more information it carries. Think of it exactly as I said: if I say something that's very likely, the other person could have expected it, because it's so likely. It's like the stereotypical boring person in a movie: they will always say exactly what you'd expect them to say. However, if you communicate with someone and they all of a sudden say something you really didn't expect, now that's a lot of information right there. In fact, by simple application of the chain rule, you can also define an information content for every single word in the sentence, and that is just the conditional log probability of that word given the prefix, the previous words in the sentence. So akin to the information in a sentence, a word carries a lot of information if you really didn't expect to see it as the next word in the sentence that you or your conversation partner has begun to say.

So we carry this through. And the assumption here is that the goal of an agent is to transmit information efficiently, while also minimizing the risk of miscommunication. That's the fundamental trade-off humans make when they communicate, at least that's the hypothesis. If you transmit a lot of information, you're going to have to utter some words that are quite unlikely, because that's what transmits a lot of information. However, if you overdo it, if you, for example, don't follow the rules of grammar anymore and just send around high-information, low-likelihood messages, your receiver will be confused, because they don't know what to make of it; they really didn't expect to see something like this. And therefore, there is a chance of miscommunication.
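In symbols, and this is standard information theory rendered here for reference, with $\mathbf{y}$ a message and $y_t$ its $t$-th word:

```latex
I(\mathbf{y}) = -\log p(\mathbf{y}),
\qquad
I(y_t) = -\log p\left(y_t \mid \mathbf{y}_{<t}\right)
```

The first is the information content of the whole message; the second, via the chain rule, the information content of a single word given its prefix.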
You can also imagine that if you want to transmit a message to someone, if you want to explain something to someone, you always have to adjust to what they already know. If I want to explain the chain rule to someone and I expect them to already know a little bit of math, I'm going to adjust my message to that. And if I assume too much of what they already know, I'll just end up saying something like: oh yeah, if you derive f of g of x with respect to x, you just derive g and then multiply by the derivative of f, and it's all good, right? Sorry for this butchering of the chain rule. But you can imagine that for someone with little grasp of math in the first place, this would be very, very hard, because I only utter words that carry so much information, that are so unlikely in their framework, that there's a chance of miscommunication. I don't know if that example captures it best, maybe there's a better one, but that's sort of how I think of it.

What they do define, and now we get into the decoding strategy, is the expected information that a specific symbol in the message will contain. This formula right here you might recognize as the conditional entropy of a given word in the sentence. And I think the notation here is a bit out of place: I think this should be something like the expectation of the information content of just the t-th word, not necessarily y of t, because we sum over y of t right here. So we ask ourselves: if we have already produced the sentence up to time step t, and we consider the distribution of words conditioned on this sentence, so we ask our language model, what's the distribution of words that could come next, then for each of these, what's its information content? And since the information content is the negative log probability, that's this term, and here is the minus sign, we ask: what is the expected information content of the next word, whatever it is, if we were to sample from this probability distribution? And this here is the formula: we simply multiply whatever we're interested in, which is the information content, with the probability, and we sum that up across the set we're interested in. That is just the definition of the expected value. And by happenstance, it is also the definition of the conditional entropy. So the expected information content of any given position in a sentence is the conditional entropy of the distribution at that point.

What does that mean? If my distribution is very peaked, if it's very likely that one of these three words is uttered next, so if I read a text and the sentence up to here was something after which only about three words could possibly follow, that's a very peaked distribution, which essentially means the entropy is very, very low. And therefore the information content of whatever word comes next is probably going to be very low, because all these words are super likely. However, if the distribution is very flat, or very broad, then the entropy is high.
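Written out, again a standard identity, with $\mathcal{V}$ the vocabulary:

```latex
\mathbb{E}\left[I(Y_t)\right]
= -\sum_{y \in \mathcal{V}} p\left(y \mid \mathbf{y}_{<t}\right)\,
  \log p\left(y \mid \mathbf{y}_{<t}\right)
= H\left(Y_t \mid \mathbf{Y}_{<t} = \mathbf{y}_{<t}\right)
```

So the expected information content at a position is exactly the conditional entropy of the next-word distribution there.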
And you can also see why: for the words that could come next, first of all, there are many more to be considered, and all of them have a smaller likelihood, so the negative log probability will be higher. Any of those words will have more information content, and so the expectation over those words, the expected information content, will be higher as well. So that is just the definition of the expected information content.

Now, here's the hypothesis of this paper, and they base this on some psycholinguistic theories. The hypothesis: any given word should have an information content close to the expected information content, i.e. the conditional entropy given prior context. In other words, we expect the difference between the expected information content and the true information content to be small in human-like text. So the hypothesis is that the way humans balance this trade-off between interestingness and likelihood, between information transmission and not being misunderstood, is that they implicitly calculate the expected information content of the next word, and then try to choose the next word so that it is as close as possible to that expected information content. When I talk, I model the transmission channel to my receiver, I figure out what the expected information content of the next word would be right now, and then I try to match that as closely as possible. That gives me a way of navigating this trade-off. Again, this is a hypothesis, backed up by a few theories from linguistics. It is also known in information theory as typicality: a typical message is one whose information content is close to the expected information content. But we'll investigate.

They say figure one shows, for human-generated text, the distribution of this epsilon. This epsilon is the distance between the two quantities: the expectation and the actual thing that's uttered. Remember, the expectation considers all possible next words and calculates their expected information content, while the other term is just the information content of the next word that is actually uttered or written. So what do we see if we analyze human-generated text? These are obviously language models estimating the probabilities of these words, but they are evaluated on human-generated text, not on language-model-generated text, because remember, the question is what humans do when they generate text. So let's take a look at what humans do. You can see the distribution is very peaked. This isn't the distribution of words; it's the distribution of this epsilon, so this difference right here is very peaky, and it peaks around a very small value: the peak is at a value quite close to zero. Now, it's not exactly zero, but this is empirical data. So this paper says this is evidence that humans do, as much as they can, try to match the information content to the expected information content. Now, it'd be interesting to see what happens if, let's say, humans just sampled from the distribution itself, right?
What kind of distance between the entropy and the information content would you expect to see then? Maybe a Gaussian or a log-Gaussian? I'm not entirely sure. Also, what is "peaky"? How do you characterize peaky? I can see it's peaky, but it's almost proof by picture. And then we see a very interesting imbalance: there seems to be more mass on the left side of the peak than on the right, a bit of a longer tail on the right, but heavier mass on the left. What does that mean? Well, I can't really make sense of it, because the epsilon is defined with an absolute value, whereas this plot is not symmetric around zero, so I'm going to guess they left out the absolute value. And then I don't know the sign convention of this "deviation of information content from the conditional entropy per token": I do not know whether they plot h minus i or i minus h, and that determines how we interpret these plots. So I'd rather not interpret them in the wrong way.

They further say: the peaked nature of the distribution reveals that humans indeed tend to form language with per-word information content quite close to their expected information content, and the centering of these distributions around a value close to zero reveals that our probabilistic language generators are learning what this rate is. Well, I'm not sure I agree with that statement, because you need both claims to be true at the same time. If you assume that the language models are really good at what they do, then you can claim that humans peak around zero and therefore match the expected information content. If you assume that humans match the expected information content, then you can conclude that language models are really good at what they do, because the peak is around zero. But you can't draw both conclusions at the same time from this plot, because each is needed to justify the other. In any case, this is a minor point.

What is interesting is that here they go into information theory, as I said, this notion of typicality, which is exactly what we're describing. Typical messages are the ones we would expect from the probability distribution: their average per-symbol information content is close to the entropy rate of their source distribution. And the interesting observation is that this definition implies that the highest-probability message is often not a member of this set: its average information content is too low. So if we take any distribution, compute the expected information content, and only consider messages close to that expected information content, those are going to be messages somewhere in the middle of the likelihood range. They're not super unlikely, because the expected information content, being an expectation over all these messages, is not going to be super high, which rules out the very unlikely messages, the ones prone to misunderstanding. But it also rules out the very likely messages, because those are prone to being boring and not transmitting any information at all. And that is something interesting.
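For reference, this is the classical typical set from information theory, a minimal rendering, with $n$ the message length and $H$ the entropy rate of the source:

```latex
\mathcal{A}_{\varepsilon}^{(n)}
= \left\{ \mathbf{y} :
  \left| -\tfrac{1}{n} \log p(\mathbf{y}) - H \right| \le \varepsilon
  \right\}
```

Note that the all-argmax sequence generally falls outside this set, because its average information content sits below $H$.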
That is exactly the property we want in a new decoding method: leave away the really low-likelihood stuff, and leave away the really high-likelihood stuff, because that's boring. So typicality is the property. They then go into why we have to use a local notion of typicality, whereas information theory usually defines it as a property of the entire message; I don't want to go into that here.

In the next chapter, they try to justify this with psycholinguistic concepts. There are two they consider. There's the uniform information density hypothesis, which proposes that speakers construct their utterances such that information is distributed uniformly across them, and that speakers choose words such that their information rate stays close to a target channel capacity, which is essentially what we're doing right here. Then there's the rational speech act, which casts the speaker's behavior as the maximization of a utility function, and the utility function is a sentence's usefulness to its listener. The way it constructs this, and again, this is sort of a hypothesis, is by imagining a literal speaker: a hypothetical speaker that just samples from the probability distribution and utters the words as they come out, which means, with the typical problems, it's going to utter low-information stuff a lot of the time. Then there is the pragmatic speaker, and that's what humans would be: the pragmatic speaker produces sentences to maximize the utility function, as opposed to following the expected literal behavior. If you define the utility function to be this thing right here, then the hypothesis matches this rational speech act. However, I find this a little bit shady, because if I have a different decoding method in mind, I can apply the same argument: I can simply say, well, my utility function is now my new decoding method. So I'm not super convinced by this. Still, it's interesting to see people think in this way: there is this imaginary literal agent that just speaks according to the distribution, and then there is an upgraded version of that, the pragmatic speaker, which uses the distribution but changes something about it, and humans are probably a form of that upgraded version. And that's exactly what we do.

So how do we do it? We've already alluded to most of it. We introduce this typical sampling. Much like nucleus sampling, we define a threshold, in this case called tau, of probability mass that we're going to allow in our subset of words. So again, maybe we have a distribution over a couple of words with different likelihoods under our language model output, and we assume our language model models these probabilities well, especially the non-negligible ones. Then we're going to calculate the expected information content, which is the expected negative log probability, which is also the conditional entropy. We estimate this property by simply calculating it. We can do this.
This is simply p of x given y times log p of x given y, summed up. The log probability is usually already output by our model in the form of logits; we just need to normalize, and if we apply a softmax operation, we get p of x given y. So then we have the conditional entropy, and we simply choose the words that are closest to it. Let's say these are the log probabilities right here, and the expected one is here: we choose, in order, the words that are closest to that value. So it would be this one right here, this is really close, then this one, then, tough choice, maybe this one, and then maybe this one, and we do that until we reach our target probability mass.

Again, if the distribution is very peaked, that means the expected information content is going to be low, which means the words with low information content, the high-probability ones, are going to be chosen, and there are also going to be fewer of them. That gives us our original case back, where we simply put the highest-likelihood words into our bucket to sample from; the method regresses to the old behavior when the distribution is very peaky. However, if the distribution is flatter, with broader support, then the expected information content is going to be higher, which means the highest-likelihood words are probably not going to be in the set, and we opt for more interesting ones that are still likely, but not as likely. So this kicks in mostly when there are a lot of possibilities. In machine translation, it is often very clear: there are only a few possibilities for how to translate something. In storytelling, however, there are lots of possibilities for how things could continue, the distributions are much flatter, and this method would exploit that by saying: well, I'm just not going to consider the most likely things right here.

The computational complexity is the same as nucleus or top-k sampling: we also have to determine the set we're going to consider by computing across the distribution, aggregating, renormalizing, and sampling from it, and I guess we always have to sort, right? Here we additionally have to calculate the conditional entropy part; it's the same in complexity, but it does add a multiplicative constant factor of overhead to the whole thing. The last thing I want to go into here is the choice of hyperparameters. They say: we found k = 30 and 0.9 to perform best; those are the parameters for top-k and nucleus sampling respectively in their experiments. For typical sampling, they found tau = 0.2 and tau = 0.95 to provide the best results for story generation and abstractive summarization respectively. So while they allow a single parameter for each of the baselines, they go with a separate parameter per task for their method, which is a bit shady.
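Before we judge those hyperparameters, let me make the procedure concrete with a minimal sketch of one typical-sampling step, as I understand it from the paper. This is not the authors' code; `logits` is again a 1-D tensor of next-token scores:

```python
import torch

def typical_filter(logits, tau=0.95):
    """Typical sampling step: keep the tokens whose information content is
    closest to the conditional entropy, until cumulative mass reaches tau,
    then renormalize. Sketch only."""
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum()          # expected information content
    deviation = (-log_probs - entropy).abs()      # |info content - entropy| per token
    order = torch.argsort(deviation)              # most "typical" tokens first
    cum = torch.cumsum(probs[order], dim=-1)
    cutoff = int((cum < tau).sum().item()) + 1    # smallest typical set with mass >= tau
    keep = order[:cutoff]
    filtered = torch.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

# Sample the next token, e.g.:
# next_id = torch.multinomial(typical_filter(logits, tau=0.95), 1)
```

Now, about those separate per-task thresholds, there are two possibilities.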
First possibility: they stifled the baselines by not exploring their hyperparameters thoroughly enough. Or, and this is what I think most likely happened, the baselines' single parameter simply performs pretty well across the different tasks, which is a good property in itself. Here we consider 20% of the probability mass, and here we consider 95% of the probability mass; that's a huge difference in how the set looks. And that by itself makes this, in my opinion, a bit of a weaker choice as a decoding method, because for everything I want to achieve, I need to tune this parameter, whereas with top-k sampling I could just leave it be. So it'd be interesting, because I'm a fan of this technique in principle, if in the future we can find a more adaptive way of determining this parameter, much like nucleus sampling is an adaptive version of top-k sampling.

I don't want to go too much into the evaluation. There is a difference between the methods; sometimes it's stark, sometimes it's not. Depending on the regime you're in, the methods are sometimes really different, sometimes quite close, and sometimes they switch places. We can maybe discuss the results in the interview. But qualitatively: for the summarization task, they say typical sampling provides a comprehensive and coherent summary of the article under consideration. In comparison, nucleus sampling leads to hallucinated facts, for example "getting drugs from under...", okay, I haven't read the article, but nucleus sampling hallucinates facts, which makes sense as a failure mode: if you sample only from high-likelihood things, you're just going to continue with what is likely in the language itself, rather than transmitting the necessary information. Top-k sampling, meanwhile, misses some of the important information in the article, e.g. the charges of burglary and arson. And that might be because top-k sampling simply has this fixed bucket of words to consider, and as soon as a word is not in that bucket, the model is forbidden from uttering it, even if the distribution is flat and that word is reasonably likely.

So I want to stop here and just give a few thoughts on this. In my opinion, and I already said it, we do need different decoding strategies to achieve different tasks, and this one seems really interesting: it is a way to neither consider the most likely things nor the least likely things. However, I'm not sure if the notion of matching the expected information content is appropriate. I can't tell; it's a good hypothesis. I also don't know if this quantity, the absolute distance, is the right quantity. Why would it be the absolute distance? And the other issue I have, though this might be my ignorance of information theory, is this: if I assume humans talk like this, choosing their words according to the expected information content, and I use this particular construction, then whatever comes out of it will have a different expected information content than the original language.
If I wanted to actually keep the expectation matched, I probably couldn't do it with just an absolute-difference cutoff: that's going to change the expected information content, let alone the distribution itself, and just the expectation changing is already a problem. Now, if you're telling me that humans do it like this, and our language models are trained on text written and uttered by humans, wouldn't that text already have this property, and therefore sampling from the model would reproduce the original distribution? In other words, if humans produce text like this, shouldn't I get the same distribution out that my language model predicts, because my language model is trained on human text, and your claim is that humans sample text like this? Why would typical sampling be any different from sampling from the language model itself? And in particular, shouldn't the expected information content remain constant under this sampling technique, just out of principle? Because by definition, if it doesn't, then it doesn't match human-generated text, and that's already the input, that's the training data. All right, but maybe I'm somewhat ignorant of information theory here. My other concerns are with the hyperparameter choice, and I'd be interested to dive a little deeper: what would we expect to see under different sampling methods, or under different hypotheses? That is also really interesting, but I'm going to leave it at that. All I can say is that we should probably try this out. Maybe for certain tasks, where diversity and actually transmitting information matter more than uttering the most likely thing, this might really be a cool application. And maybe we'll figure out an automatic way to adjust the hyperparameters. Let me know what you think. Maybe you've already tried it out; you can give a little bit of a report on how that went.
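If you want to try it yourself: as far as I know, typical sampling has landed in the Hugging Face transformers generate API as the typical_p argument. Treat this as a sketch that assumes a recent enough transformers version:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Once upon a time", return_tensors="pt")
# do_sample=True turns on stochastic decoding; typical_p is the tau mass threshold
out = model.generate(**inputs, do_sample=True, typical_p=0.95, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```

And I'll see you next time. Bye bye.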
[ { "start": 0, "end": 7.04, "text": " Pay special attention to this paper. It is not a paper by Google or DeepMind or Meta or anything" }, { "start": 7.04, "end": 12.88, "text": " like this, yet I believe it is a really important paper. It discusses typical sampling, which is a" }, { "start": 12.88, "end": 19.04, "text": " new decoding strategy of how we sample from language models. We usually train language models" }, { "start": 19.04, "end": 26.96, "text": " with a maximum likelihood objective that put a lot of weight on very likely words. And when we use" }, { "start": 26.96, "end": 33.44, "text": " these models to produce language, we either explicitly or implicitly reproduce that we make" }, { "start": 33.44, "end": 41.2, "text": " these models sample very highly likely strings, which are boring and not human like, it's not" }, { "start": 41.2, "end": 46.72, "text": " what we do. I don't say things that are just highly likely, because I actually want to say" }, { "start": 46.72, "end": 53.28, "text": " something interesting. And that means that every now and then, I should utter something that's less" }, { "start": 53.28, "end": 59.44, "text": " likely, I should speak a word or a sentence that you didn't expect, because that's what transmits" }, { "start": 59.44, "end": 65.76, "text": " information. Typical sampling does exactly that and does it in a principled fashion. This video" }, { "start": 65.76, "end": 72.32, "text": " right here is a description, a review of the paper. And the next video is going to be an interview" }, { "start": 72.32, "end": 78.56, "text": " with Clara Meister, the first author of the paper. Both videos, but especially the interview, are" }, { "start": 78.56, "end": 84.16, "text": " super duper interesting. I would definitely invite you to check them both out. And I would definitely" }, { "start": 84.16, "end": 90.24000000000001, "text": " invite you to try out typical sampling. It is in hogging phase. And whenever your objective is" }, { "start": 90.24000000000001, "end": 98.4, "text": " to sample something that is very high quality, but also diverse and interesting, and not just" }, { "start": 98.4, "end": 105.36, "text": " bland high likelihood text, then that is your method for you. I believe that we do need new" }, { "start": 105.36, "end": 111.52, "text": " sampling strategies. And this one is very promising. Check it out, leave a like and see ya." }, { "start": 111.52, "end": 118.24, "text": " Hi, let me quickly tell you about Fully Connected, which is curated space for the Applied ML community." }, { "start": 118.24, "end": 125.03999999999999, "text": " It features articles, project reports, news events, and anything you could want, especially the" }, { "start": 125.03999999999999, "end": 130.48, "text": " projects page acts as a little bit of a product hunt for ML. So feel free to add your own project" }, { "start": 130.48, "end": 136.48, "text": " right here. It's curated by Weights and Biases, but I know what you're thinking. Yeah, another company," }, { "start": 136.48, "end": 143.2, "text": " blog, whatever about their products. But this is not at all about Weights and Biases. It features" }, { "start": 143.2, "end": 150.32, "text": " some of their stuff, of course, but it is generally a really good resource to get good information on" }, { "start": 150.32, "end": 154.95999999999998, "text": " what's currently happening in deep learning. 
They have great articles and tutorials, like there's" }, { "start": 154.95999999999998, "end": 160.23999999999998, "text": " one on solving Wordle with reinforcement learning, which is pretty cool. There's one explaining" }, { "start": 160.24, "end": 166, "text": " group normalization in PyTorch. And there's one that explains to you how to run YOLOv5 object" }, { "start": 166, "end": 171.44, "text": " detection on Windows. So as you can see, they have all kinds of stuff. And the list of already" }, { "start": 171.44, "end": 176.16, "text": " existing articles is long. If you still don't believe me that it's not all Weights and Biases," }, { "start": 176.16, "end": 182.48000000000002, "text": " in fact, you can submit a post there, you can click the button, write a post, it will be reviewed by" }, { "start": 182.48000000000002, "end": 188.56, "text": " them and then published. So one of the coolest ML startups currently is going to push your content." }, { "start": 188.56, "end": 193.2, "text": " How great is that? Now, if you are just a lurker like me, then you know, head over there and" }, { "start": 193.2, "end": 198.64000000000001, "text": " subscribe because it's user submitted but curated so you get the best of both worlds. Besides" }, { "start": 198.64000000000001, "end": 204.48, "text": " articles, they also have events, which usually means their webinars about various topics," }, { "start": 204.48, "end": 209.6, "text": " you can look at old webinars, but you can also subscribe to get updates on new ones. They also" }, { "start": 209.6, "end": 214.88, "text": " host their podcast, their gradient descent. And the current episode is actually with Jensen Huang," }, { "start": 214.88, "end": 220.48, "text": " the CEO of Nvidia. So pretty big hitter. And lastly, it includes the good old Weights and" }, { "start": 220.48, "end": 225.51999999999998, "text": " Biases community forums where you can get all kinds of help on Weights and Biases products" }, { "start": 225.51999999999998, "end": 230.16, "text": " and beyond Weights and Biases to all kinds of things machine learning related. So again," }, { "start": 230.16, "end": 235.92, "text": " fully connected, it just got a major redesign. Please check it out. Go over there, subscribe" }, { "start": 235.92, "end": 240.32, "text": " for awesome articles and news. There's new stuff all the time. Thank you so much to Weights and" }, { "start": 240.32, "end": 244.4, "text": " Biases for sponsoring this video. They've been a great sponsor. So please check them out. That's" }, { "start": 244.4, "end": 250.08, "text": " one db.ai slash fully dash connected. Now let's get into the video. See ya." }, { "start": 254.8, "end": 259.2, "text": " Hello there today we'll look at typical decoding for natural language generation" }, { "start": 259.2, "end": 266, "text": " by Clara Meister, Tiago Pimentel, john Weaver and Ryan Cotterall. This paper suggests a new" }, { "start": 266, "end": 272.96000000000004, "text": " way of decoding of producing text from a large language model or a small language model. It" }, { "start": 272.96, "end": 278.71999999999997, "text": " doesn't matter. We don't discriminate here. In any case, usually currently you might have heard of" }, { "start": 278.71999999999997, "end": 283.91999999999996, "text": " things like beam search, you might have heard of things like nuclear sampling and top case sampling." }, { "start": 283.91999999999996, "end": 290.4, "text": " These things are all right. 
And interestingly enough, the stochastic methods like nucleus and" }, { "start": 290.4, "end": 296.47999999999996, "text": " top case sampling are better than the methods that try to find the most likely things such as" }, { "start": 296.48, "end": 303.84000000000003, "text": " beam search or greedy decoding. However, it's still not satisfactory large language and small" }, { "start": 303.84000000000003, "end": 310.40000000000003, "text": " language models. They often produce text that is boring, just kind of bland when you actually use" }, { "start": 310.40000000000003, "end": 317.20000000000005, "text": " them, even though they have amazing perplexities on text. This paper tackles this. It proposes that" }, { "start": 317.20000000000005, "end": 323.68, "text": " when humans generate text, they don't just produce the most likely text, they will actually trade off" }, { "start": 323.68, "end": 330.32, "text": " likelihood with information content or the transmission of information to another human." }, { "start": 330.32, "end": 336.24, "text": " And that trade off can be captured in the frameworks of information theory. And we can" }, { "start": 336.96000000000004, "end": 344.48, "text": " generate or we can suppose a decoding scheme, which they call typical decoding, typical sampling," }, { "start": 345.36, "end": 351.84000000000003, "text": " which exactly encapsulates that notion of balancing interestingness or information" }, { "start": 351.84, "end": 358.56, "text": " with likelihood. And when they test it, that actually results in better results. This could be" }, { "start": 358.56, "end": 364.15999999999997, "text": " really crucial because it doesn't require any change to how we train language models. In fact," }, { "start": 364.15999999999997, "end": 370.47999999999996, "text": " we can take off the shelf trained language models and simply use this new decoding strategy out of" }, { "start": 370.47999999999996, "end": 377.28, "text": " the box. And it applies across, you know, across domains. Now I have long said that we need that" }, { "start": 377.28, "end": 383.28, "text": " probably our decoding methods, our sampling methods may be inadequate depending on what we do with" }, { "start": 383.28, "end": 390.15999999999997, "text": " those language models. For example, alpha code samples a whole bunch of programs in order to solve" }, { "start": 390.15999999999997, "end": 397.52, "text": " a problem. Now we, again, we don't like there is value in diversity if you sample a whole bunch," }, { "start": 397.52, "end": 404.55999999999995, "text": " and then after that use like a filter to narrow it down. So I think depending on what you want to do," }, { "start": 404.56, "end": 410.32, "text": " maximum likelihood sampling is very appropriate. This paper, for example, mentions natural or" }, { "start": 410.32, "end": 416.32, "text": " machine translation, because in machine translation, you really want kind of the best translation for" }, { "start": 416.32, "end": 422.88, "text": " a given input. However, in other frameworks, such as alpha code, but also such as storytelling," }, { "start": 422.88, "end": 430.8, "text": " this paper mentions summarization maybe as well, you want to, we want to trade off some of this" }, { "start": 430.8, "end": 436.64, "text": " maximum likelihood for some more diversity or for some more interestingness or for some more" }, { "start": 436.64, "end": 442.08, "text": " information content. And that's what this paper does. So we'll dive into it. 
If you like content" }, { "start": 442.08, "end": 448.40000000000003, "text": " like this, as always, leave a like, and don't be shy to let me know in the comments what you think." }, { "start": 448.40000000000003, "end": 455.28000000000003, "text": " I'm not exactly I'm not entirely sold on what this paper does. I do agree we need better or we need" }, { "start": 455.28, "end": 462.64, "text": " a different decoding strategies. But I do have my, you know, reservations about this exact one. So" }, { "start": 463.59999999999997, "end": 469.03999999999996, "text": " let's dive into the paper. The paper first complains about the exact thing I complain about," }, { "start": 469.03999999999996, "end": 475.67999999999995, "text": " namely saying that language models currently they have extremely low perplexities on on corpora" }, { "start": 475.67999999999995, "end": 482.4, "text": " for many domains, yet when used to generate text, their performance is far from perfect. And by that" }, { "start": 482.4, "end": 491.03999999999996, "text": " they mean, yeah, they they produce text that is undesirable, eg, generic or degenerate weight." }, { "start": 491.52, "end": 501.35999999999996, "text": " Yes. So either generic or degenerate, or just as we said, boring, bland, you know, and that comes" }, { "start": 501.35999999999996, "end": 507.91999999999996, "text": " from the fact that a lot of these things, they try to find the maximal probability string. So" }, { "start": 507.92, "end": 513.04, "text": " they think, you know, I'm going to sample from the probability distribution, and I want to sample" }, { "start": 513.04, "end": 518.64, "text": " what is the most likely because that's how we train these models, right? So let's do a short" }, { "start": 518.64, "end": 524.32, "text": " excursion. If you are unaware of how language models are trained, they're usually trained." }, { "start": 524.96, "end": 535.52, "text": " You have a sentence like the cat is in something the house. And it goes on. So what you can do is" }, { "start": 535.52, "end": 541.68, "text": " you input a part of the text, and then you let the model predict the next token, and then you" }, { "start": 541.68, "end": 548.24, "text": " input that part, and you let the model predict the next token. Now, in training, this is all good and" }, { "start": 548.24, "end": 555.84, "text": " fine. But at inference time, what you do is you provide a prefix, for example, the cat. And then" }, { "start": 555.84, "end": 562.56, "text": " you have to decode here, you have to decode a word, what's next, and then you feed that whatever" }, { "start": 562.56, "end": 569.76, "text": " you decoded into the language model, and you decode the next word. And I think that's where" }, { "start": 569.76, "end": 575.52, "text": " part of the problem comes from. Because during training, naturally, what is here is given by" }, { "start": 575.52, "end": 581.1999999999999, "text": " the data set. So every new step that you take, if there is something unlikely, if there is a certain" }, { "start": 581.1999999999999, "end": 588.7199999999999, "text": " diversity to the input, that's captured by the training data. However, in decoding, you sort of" }, { "start": 588.72, "end": 595.9200000000001, "text": " make your own data as you go along here. 
And if you just always focus on finding very likely next" }, { "start": 595.9200000000001, "end": 601.76, "text": " tokens, you'll never get into kind of a less likely environment, which could also be correct," }, { "start": 601.76, "end": 610.48, "text": " right? So that is one of the problems. However, obviously, in these language models, the way" }, { "start": 610.48, "end": 617.2, "text": " they work is, for example, you input all of this into a big model, there is a little bit of a" }, { "start": 617.2, "end": 625.84, "text": " big model, there is some sort of a model, which usually is a transformer nowadays, and out comes" }, { "start": 625.84, "end": 631.2, "text": " a probability distribution. And the probability distribution is over your vocabulary. For example," }, { "start": 631.2, "end": 639.9200000000001, "text": " there is the vocabulary, this cat, dog. I don't know another word. What's another word? House," }, { "start": 639.92, "end": 646, "text": " something like this. And it will give you a distribution of probabilities over these words." }, { "start": 646, "end": 653.52, "text": " And you can now choose what to do. Either you can take the maximum one, which often runs into these" }, { "start": 653.52, "end": 658.4799999999999, "text": " problems of being boring or even repetitive, you can take you can sample from this distribution," }, { "start": 658.4799999999999, "end": 665.92, "text": " which is also not super appropriate, because, and the paper touches on this a little bit," }, { "start": 665.92, "end": 671.4399999999999, "text": " because sometimes the long, what's called the long tail here, there are many, many words, of course," }, { "start": 671.4399999999999, "end": 678, "text": " and they all have their some probability. And you don't want to get into these super low probability" }, { "start": 678, "end": 684.24, "text": " words, because they might just be artifacts of the model. The model doesn't represent these low" }, { "start": 684.24, "end": 690.4, "text": " probabilities really well. It's really good at the sort of high probability words, because, well," }, { "start": 690.4, "end": 698.4, "text": " it's essentially trained as a classifier. And the classifier is trained to give you the correct" }, { "start": 698.4, "end": 705.68, "text": " label as the highest class. And it doesn't really care about the rest of the words, especially not" }, { "start": 705.68, "end": 713.1999999999999, "text": " the ones that have really low probability. So what people do is they came up with, first of all," }, { "start": 713.2, "end": 720.48, "text": " Beam search, what beam search does is it considers multiple futures. So if it's here, that cat," }, { "start": 720.48, "end": 730.1600000000001, "text": " like that cat, it considers multiple futures, and it looks a few steps ahead. So it looks a few steps" }, { "start": 730.1600000000001, "end": 738, "text": " ahead, and it keeps a list of things that are possible to complete. So for example, in the" }, { "start": 738, "end": 742.88, "text": " beginning, it goes all these three routes, and it keeps those in mind, along with the probabilities" }, { "start": 742.88, "end": 750.24, "text": " that you go along that tree. And then, you know, you go ahead, and maybe the buffer is five large," }, { "start": 750.24, "end": 756.48, "text": " right? 
So now we can still fit it because there's one, two, three, four, five paths currently, but" }, { "start": 756.48, "end": 762.88, "text": " as soon as we expand the sixth one, this one here, we have to drop someone. So we consider all the" }, { "start": 762.88, "end": 768.32, "text": " paths, and we consider only the ones with the highest likelihood so far, this we can simply do" }, { "start": 768.32, "end": 776.32, "text": " by multiplying the probabilities of consecutive decoding steps, we consider the most likely five," }, { "start": 776.32, "end": 783.76, "text": " let's say, paths so far, and we delete some of them. Let's say that this one here is really low" }, { "start": 783.76, "end": 790.32, "text": " probability. And then once we add this one here, and this one, we have to drop another few," }, { "start": 790.32, "end": 796.1600000000001, "text": " so let's say this one, these two here are really low probability, and so on. And we only continue" }, { "start": 796.1600000000001, "end": 802.88, "text": " the paths that have good probabilities, or high enough probabilities to be the highest possible." }, { "start": 802.88, "end": 809.6, "text": " That's beam search. And the reason why people do it is because there might, so there might be a very" }, { "start": 809.6, "end": 816.1600000000001, "text": " high likelihood sentence that you could produce, but the next word just happens to be low in" }, { "start": 816.16, "end": 823.12, "text": " probability, right? Maybe here, house will lead to a sentence that down the road is very likely," }, { "start": 823.12, "end": 831.1999999999999, "text": " has a very good score, but just this word right now, in this case, is low probability, because" }, { "start": 831.1999999999999, "end": 837.8399999999999, "text": " the immediate best word would be dog for all the possible continuations, or for this particular" }, { "start": 837.8399999999999, "end": 844.8, "text": " prefix, for all the possible expected continuations. So beam search is a very, very, very, very" }, { "start": 844.8, "end": 851.8399999999999, "text": " high probability. So beam search is even worse than greedy decoding in the sense that it" }, { "start": 851.8399999999999, "end": 859.76, "text": " really finds the high probability stuff. It doesn't, and it looks ahead to be even more accurate." }, { "start": 859.76, "end": 866.16, "text": " If you go to the opposite end of the spectrum, you can say, okay, can we sample, but can we fix" }, { "start": 866.16, "end": 871.5999999999999, "text": " the sampling issues that arise from this tail? And that's why people do two things. So there's top K," }, { "start": 871.6, "end": 877.28, "text": " sampling, and there is nuclear sampling, and they both work pretty much the same. So top K, sampling," }, { "start": 877.28, "end": 884.16, "text": " what it does is you have, again, your probability distribution, and top K sampling simply says," }, { "start": 884.16, "end": 890.32, "text": " well, can we only consider the K largest entries in that distribution, and then just sample from" }, { "start": 890.32, "end": 897.28, "text": " that? So let's say K equals three, then we only consider the three largest entries here, and we" }, { "start": 897.28, "end": 902.4, "text": " just forget about the rest, and we only sample from that. We have to renormalize, but that's fine." 
}, { "start": 902.4, "end": 910, "text": " And then nucleus sampling is very much the same, except it says, well, I'm going to afford myself" }, { "start": 910, "end": 919.6, "text": " a probability, a cumulative probability of, let's say, 0.7. What does it mean? It means that this" }, { "start": 919.6, "end": 925.76, "text": " distribution right now has a cumulative probability of one. I am simply going to take the largest ones," }, { "start": 925.76, "end": 933.28, "text": " like, okay, this one, and this one, and this one, until the cumulative probability reaches my maximum" }, { "start": 933.28, "end": 938.3199999999999, "text": " threshold. Maybe I'm going to take this one as well. You can see the advantage here is that you" }, { "start": 938.3199999999999, "end": 945.6, "text": " don't always pick the same amount, but you always pick sort of the top entries that make up, let's" }, { "start": 945.6, "end": 951.92, "text": " say, in this case, 70% of the mass. And that is useful because you have to consider multiple" }, { "start": 951.92, "end": 959.8399999999999, "text": " scenarios. One scenario is where the distribution is very peaky, like, there, you only want to" }, { "start": 959.8399999999999, "end": 965.4399999999999, "text": " consider very few entries. So you only want to consider few entries because everything else is" }, { "start": 965.4399999999999, "end": 973.28, "text": " just really unlikely. However, if you think of a distribution that is more spread out, like this one," }, { "start": 974, "end": 980.0799999999999, "text": " and then you want to consider more entries, because all of them are kind of likely," }, { "start": 980.08, "end": 985.6800000000001, "text": " and nucleus sampling affords you that, whereas top case sampling would just disregard the shape of" }, { "start": 985.6800000000001, "end": 990.4000000000001, "text": " the distribution and pick the top ones. Right, so these are the decoding strategies, but still," }, { "start": 990.4000000000001, "end": 998.24, "text": " you can see they always go to the top or the most likely things. And this paper says, well," }, { "start": 998.24, "end": 1005.84, "text": " that's kind of dumb. And it shapes this as a information theoretic problem. We already said" }, { "start": 1005.84, "end": 1013.84, "text": " that humans probably want to trade off the likelihood of a string. So like how likely it" }, { "start": 1013.84, "end": 1021.2, "text": " is to appear, meaning essentially how much it is expected, because if I just say things that other" }, { "start": 1021.2, "end": 1030.16, "text": " humans expect, right, then I'm essentially not transmitting much information at all. So we can" }, { "start": 1030.16, "end": 1036.0800000000002, "text": " say that every string has a form or a content of information. Actually, I'm going to skip here," }, { "start": 1036.72, "end": 1042.4, "text": " skip here to the theory section directly. And forgive me, I've pretty much explained all of" }, { "start": 1042.4, "end": 1051.6000000000001, "text": " what's highlighted already. So what we can say is that a why, why is the message that you want to" }, { "start": 1051.6000000000001, "end": 1057.44, "text": " pass? So let's say it's a sentence, the information content can be quantified as its negative log" }, { "start": 1057.44, "end": 1066.16, "text": " probability. 
Essentially, the less likely a given message is, you can see here that's negative," }, { "start": 1066.16, "end": 1072.0800000000002, "text": " negative log probability, the less likely a message is, the more information it carries." }, { "start": 1072.0800000000002, "end": 1077.76, "text": " You have to think of it like exactly as I said, if I say something that's very likely, the other" }, { "start": 1077.76, "end": 1086.4, "text": " person could have expected it because it's so likely. It's like if you meet the stereotypical" }, { "start": 1086.4, "end": 1092, "text": " boring person, or if you see a movie where it's like a really stereotype of a boring person," }, { "start": 1092, "end": 1099.92, "text": " they will always say exactly what you know what you'd expect them to say. However, if you say," }, { "start": 1099.92, "end": 1106.24, "text": " let's say you communicate with someone, and they all of a sudden say something that you really" }, { "start": 1106.24, "end": 1113.6000000000001, "text": " didn't expect. Now that's a lot of information right there. In fact, you can buy simple application" }, { "start": 1113.6, "end": 1120.32, "text": " of the chain rule, you can see you can also define a information content for every single word in" }, { "start": 1120.32, "end": 1126.7199999999998, "text": " the sentence. And that is going to be just the conditional log probability, the log conditional" }, { "start": 1126.7199999999998, "end": 1132.1599999999999, "text": " probability of that word, given the prefix, and that's the prefix, those are the previous words" }, { "start": 1132.1599999999999, "end": 1138.6399999999999, "text": " in the sentence. So akin to the information in a sentence, a word carries a lot of information," }, { "start": 1138.64, "end": 1144.88, "text": " if you really didn't expect to see that word as the next word in the current sentence that you" }, { "start": 1144.88, "end": 1153.2800000000002, "text": " begun or that your conversation partner has begun to say. So we carry this through. And the" }, { "start": 1153.2800000000002, "end": 1159.2800000000002, "text": " assumption here is that the goal of an agent is to transmit information efficiently, while also" }, { "start": 1159.2800000000002, "end": 1166.72, "text": " minimizing the risk of miscommunication. So that's the fundamental trade off that humans do when" }, { "start": 1166.72, "end": 1172.8, "text": " they communicate, at least that's the hypothesis. If you transmit a lot of information, you're going" }, { "start": 1172.8, "end": 1179.84, "text": " to have to utter some words that are very not likely, because that transmits a lot of information." }, { "start": 1179.84, "end": 1187.3600000000001, "text": " However, if you overdo that, and if you, for example, don't follow the rules of grammar anymore," }, { "start": 1187.3600000000001, "end": 1194.72, "text": " and just send around low information messages or high information, low likely messages, your" }, { "start": 1194.72, "end": 1200.96, "text": " receiver will be confused, because they don't know what to make of it, because they really didn't" }, { "start": 1200.96, "end": 1207.44, "text": " expect to see something like this. And therefore, there is a chance of miscommunication. 
You can also," }, { "start": 1208.16, "end": 1216.8, "text": " you can imagine that if you want to transmit a message to someone, right, if you want to" }, { "start": 1216.8, "end": 1224.64, "text": " explain something to someone, you always have to adjust to what they already know. Like if I want" }, { "start": 1224.64, "end": 1233.44, "text": " to explain the chain rule to someone, and I expect them to already know a little bit of math, I'm going" }, { "start": 1233.44, "end": 1243.1200000000001, "text": " to transmit a lot, I'm going to have to adjust my message to that. And if I assume too much of what" }, { "start": 1243.1200000000001, "end": 1249.44, "text": " they already know, and then I'll just end up saying something like, oh, yeah, if you derive f of, you" }, { "start": 1249.44, "end": 1258, "text": " know, of g of x, with respect to x, then you have to, you know, you just derive g and then you kind" }, { "start": 1258, "end": 1265.68, "text": " of multiply by the derivation of f. And it's all good, right? It's all good. So sorry for this" }, { "start": 1265.68, "end": 1270.96, "text": " butchering of the chain rule. But you can imagine that someone who has little grasp of math in the" }, { "start": 1270.96, "end": 1280.8, "text": " first place would be very, very hard. Because I only utter the words that carry so much information" }, { "start": 1280.8, "end": 1289.1200000000001, "text": " that are so not likely in their framework, that there's a chance of miscommunication." }, { "start": 1290.16, "end": 1294.16, "text": " I don't know if actually that captures it the best, maybe there's a better example." }, { "start": 1294.16, "end": 1301.52, "text": " That's sort of how I think of it. What they do define, and now we get into the decoding strategy" }, { "start": 1301.52, "end": 1308.72, "text": " is the expected information, the expected information that a specific symbol in the" }, { "start": 1308.72, "end": 1315.2, "text": " message will contain. So this formula right here, you might recognize as the conditional entropy" }, { "start": 1315.2, "end": 1322.64, "text": " of a given word in the sentence, namely, and this, I think the notation here is a bit" }, { "start": 1322.64, "end": 1329.8400000000001, "text": " out of place. I think this should be something like the expectation of the information content" }, { "start": 1329.8400000000001, "end": 1338.88, "text": " of just that t-th word, not necessarily y of t, because y of t, we sum over y of t right here." }, { "start": 1338.88, "end": 1348.24, "text": " So yeah, but so we ask ourselves, if we have already produced the sentence up to time step t," }, { "start": 1348.24, "end": 1354.88, "text": " and we consider the distribution of words conditioned on this sentence, so we ask our" }, { "start": 1354.88, "end": 1361.92, "text": " language model, what's the distribution of words that could come next? And we ask ourselves for" }, { "start": 1361.92, "end": 1368.88, "text": " each of these one, what's the information content? 
{ "start": 1369.84, "end": 1374.64, "text": " negative log probability, that's this, and here is the minus sign, we ask ourselves, so what is the" }, { "start": 1374.64, "end": 1379.76, "text": " expected information content of the next word, you know, whatever the next word is, what's the" }, { "start": 1379.76, "end": 1385.68, "text": " expectation of its information content, if we were to just sample from this probability distribution," }, { "start": 1386.5600000000002, "end": 1391.6000000000001, "text": " and then this here is the formula, right, we simply multiply whatever we're interested in," }, { "start": 1391.6000000000001, "end": 1397.0400000000002, "text": " which is the information content, with the probability, and we sum that up across the set" }, { "start": 1397.0400000000002, "end": 1402.3200000000002, "text": " that we're interested in. That is, it's just the definition of the expected value. And by" }, { "start": 1402.32, "end": 1409.28, "text": " happenstance, it is also the definition of the entropy, or the conditional entropy. So the" }, { "start": 1409.28, "end": 1417.84, "text": " expected information content of any given position in a sentence is the conditional" }, { "start": 1417.84, "end": 1425.12, "text": " entropy of the distribution at that point. So what does that mean? That means if my distribution is" }, { "start": 1425.12, "end": 1433.36, "text": " very peaked, so if it's very likely that one of these three words here is uttered next, so if" }, { "start": 1433.36, "end": 1438.56, "text": " I find a text somewhere, right, and the sentence up to here was something, and then there's only" }, { "start": 1438.56, "end": 1444.4799999999998, "text": " like three words that could potentially be there, none else, it's a very peaked distribution, that" }, { "start": 1444.4799999999998, "end": 1452, "text": " essentially means the entropy is very, very low. And therefore, the information content of" }, { "start": 1452, "end": 1458.4, "text": " whatever word comes next is probably going to be very low, because all these words are super likely." }, { "start": 1459.12, "end": 1468.48, "text": " However, if the distribution is very shallow, or very broad, then the entropy is high. And you can" }, { "start": 1468.48, "end": 1474.72, "text": " also see, since any of the words that could come next, first of all, there are many more that could" }, { "start": 1474.72, "end": 1484.8, "text": " be considered, and all of them have less of a likelihood. Therefore, the negative log probability" }, { "start": 1484.8, "end": 1491.44, "text": " will be higher. So any of those words will have more information content, and especially the" }, { "start": 1491.44, "end": 1498.88, "text": " expectation over those words, the expected information content, will be higher. So that is" }, { "start": 1498.88, "end": 1504.5600000000002, "text": " just the definition of the expected information content. Now, here's the hypothesis of this paper," }, { "start": 1504.5600000000002, "end": 1512.16, "text": " and they base this on some psychology theories, or linguistic theories." }, { "start": 1512.16, "end": 1518.96, "text": " But here's the hypothesis. Any given word should have an information content close to the expected" }, { "start": 1518.96, "end": 1525.7600000000002, "text": " information content, i.e. the conditional entropy given prior context." },
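[Editor's illustration, not from the video: the quantity just described, the expected information content, is the conditional entropy of the model's next-token distribution and can be computed directly from the logits. A minimal PyTorch sketch; the function name is mine.]

import torch
import torch.nn.functional as F

def expected_information_content(logits):
    # Conditional entropy of the next-token distribution, i.e. the
    # expectation of -log p(x | context) under that same distribution.
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)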
{ "start": 1525.76, "end": 1533.04, "text": " In other words, we expect the difference between the expected information content and the true information content to be" }, { "start": 1533.04, "end": 1543.68, "text": " small in human-like text. So the hypothesis here is that the way humans balance this trade-off" }, { "start": 1543.68, "end": 1550.32, "text": " between interestingness and likelihood, that is, between information transmission and not being" }, { "start": 1550.32, "end": 1558.56, "text": " misunderstood, is that they implicitly calculate the expected information content of the next word," }, { "start": 1558.56, "end": 1565.2, "text": " and then they try to choose the next word in accordance, so that it is as close as possible" }, { "start": 1565.2, "end": 1574.24, "text": " to that expected information content. So when I talk, I model sort of the transmission channel" }, { "start": 1574.24, "end": 1580.16, "text": " to my receiver, and I figure out, okay, in the language right now, what would be the expected" }, { "start": 1580.16, "end": 1585.2, "text": " information content of the next word, and then I try to match that as closely as possible." }, { "start": 1585.2, "end": 1592.16, "text": " And that gives me a way of determining this trade-off. Again, this is a hypothesis. It's" }, { "start": 1592.16, "end": 1600.72, "text": " backed up by a few theories from linguistics. This is also known in information theory as" }, { "start": 1600.72, "end": 1608.64, "text": " typicality. So a typical message is one that has an information content that is close to" }, { "start": 1608.64, "end": 1619.3600000000001, "text": " the expected information content, but we'll investigate. So they say figure one shows, for" }, { "start": 1619.3600000000001, "end": 1624.88, "text": " human-generated text, the distribution of this epsilon. So this epsilon is the distance between" }, { "start": 1624.88, "end": 1630.48, "text": " these two quantities, the expectation and the actual thing that's uttered. Remember, the" }, { "start": 1630.48, "end": 1636.8000000000002, "text": " expectation considers all possible next words and calculates the expected information content of" }, { "start": 1636.8000000000002, "end": 1645.7600000000002, "text": " them. And then this thing right here, this thing is just the information content of the next word" }, { "start": 1645.7600000000002, "end": 1653.2, "text": " that is actually uttered or actually written. So what do we actually do? We" }, { "start": 1653.2, "end": 1663.04, "text": " analyze the human-generated text. So what do we see if we analyze" }, { "start": 1663.04, "end": 1670.48, "text": " human-generated text? And these here, these are obviously language models that estimate" }, { "start": 1670.48, "end": 1675.8400000000001, "text": " the probabilities of these words, but these are evaluated on human-generated text, so not on" }, { "start": 1675.8400000000001, "end": 1681.1200000000001, "text": " language model-generated text, because remember, this paper is all about how this looks in" }, { "start": 1681.12, "end": 1686.8799999999999, "text": " human-generated text. So let's take a look at what humans do, and you can see the distribution" }, { "start": 1686.8799999999999, "end": 1693.12, "text": " is very peaked. Now, this isn't the distribution of words, this is the distribution of this" }, { "start": 1693.12, "end": 1702.8799999999999, "text": " epsilon. So that essentially means this distance, this difference right here is very, very peaky," },
{ "start": 1702.8799999999999, "end": 1710.7199999999998, "text": " and it's peaky around a very small value. You can see here the scale goes from whatever," }, { "start": 1710.72, "end": 1716.4, "text": " and the peak is at a value that's quite close to zero. Now, it's not exactly zero, but this is" }, { "start": 1716.4, "end": 1724.16, "text": " empirical data. So this paper says this is evidence for the fact that humans do, as much as they can," }, { "start": 1724.16, "end": 1729.84, "text": " try to match the information content to the expected information content. Now, it'd be" }, { "start": 1729.84, "end": 1734.72, "text": " interesting to see what you would get if, let's say, humans would just sample from the" }, { "start": 1734.72, "end": 1740.24, "text": " distribution itself, right? What kind of distance between the entropy and the information content" }, { "start": 1740.24, "end": 1748.08, "text": " would you expect to see? Maybe a Gaussian or a log Gaussian? I'm not entirely sure. Also," }, { "start": 1749.44, "end": 1759.44, "text": " what is peaky? How do you characterize peaky? I can see peaky, but it's proof by picture, almost." }, { "start": 1759.44, "end": 1765.76, "text": " And then we see a very interesting imbalance, namely, there seems to be sort of a mass going" }, { "start": 1765.76, "end": 1773.12, "text": " higher up, always on the left side of this, rather than on the right side. There seems to be a bit of" }, { "start": 1773.12, "end": 1779.92, "text": " a longer tail on the right side, but a bit more heavy mass on the left side. Now, what does that" }, { "start": 1779.92, "end": 1790, "text": " mean? This is, well, I can't really make sense of it, because the epsilon is characterized as an" }, { "start": 1790, "end": 1800.72, "text": " absolute value, whereas this right here is not an absolute value. And so I'm going to guess they" }, { "start": 1800.72, "end": 1808.16, "text": " left out the absolute value. Therefore, I don't know the distribution of the" }, { "start": 1808.16, "end": 1816.32, "text": " deviation of the information content from the conditional entropy per token. Okay. Again," }, { "start": 1816.32, "end": 1825.2, "text": " I do not know which comes first, whether they do h minus i, or i minus h. And that determines" }, { "start": 1825.2, "end": 1831.2, "text": " how we interpret these plots. So I'd rather not interpret them in the wrong way right here." }, { "start": 1833.04, "end": 1838.1599999999999, "text": " They further, so that's what they say, the peaked nature of the distribution reveals that humans" }, { "start": 1838.1599999999999, "end": 1842.6399999999999, "text": " indeed tend to form language with per word information content quite close to their expected" }, { "start": 1842.64, "end": 1846.8000000000002, "text": " information content. And the centering of these distributions around the value close to zero" }, { "start": 1846.8000000000002, "end": 1851.5200000000002, "text": " reveals that our probabilistic language generators are learning what this rate is." }, { "start": 1855.6000000000001, "end": 1865.44, "text": " Well, I'm not sure I agree with that statement, because being peaked alone doesn't show that;" }, { "start": 1866.16, "end": 1871.92, "text": " you need both to be true at the same time. If you assume that the language models are really" },
{ "start": 1871.92, "end": 1877.68, "text": " good at what they do, then you can claim that humans peak around zero and therefore they match" }, { "start": 1877.68, "end": 1884.96, "text": " the expected information content. If you assume that humans match the expected information content," }, { "start": 1885.6000000000001, "end": 1890.24, "text": " then you can conclude that language models are really good at what they do, because the peak" }, { "start": 1890.24, "end": 1896.4, "text": " seems to be rather around zero. But you can't draw both conclusions at the same time from this plot," }, { "start": 1896.4, "end": 1905.1200000000001, "text": " because you need one to justify the other. In any case, this is a minor point. What is interesting" }, { "start": 1905.68, "end": 1912.48, "text": " is that here, they go into information theory, as I said, this notion of typicality, which is exactly" }, { "start": 1912.48, "end": 1917.68, "text": " what we're describing right here. They say typical messages are the ones that we would" }, { "start": 1917.68, "end": 1923.52, "text": " expect given the source's probability distribution. Their average per symbol information content is close" }, { "start": 1923.52, "end": 1929.28, "text": " to the entropy rate of their source distribution. Now, the interesting observation right here is" }, { "start": 1929.28, "end": 1936.4, "text": " that the definition implies that the highest probability message is often not a member of" }, { "start": 1936.4, "end": 1946.96, "text": " this set. Its average information content is too low. So if we consider any distribution and" }, { "start": 1946.96, "end": 1954.88, "text": " we consider what's the expected information content, which is the way we defined it," }, { "start": 1955.76, "end": 1962.72, "text": " and we only consider messages, let's say these are the messages, we only consider messages that are" }, { "start": 1963.44, "end": 1967.6000000000001, "text": " close to that expected information content. But those are going to be messages that are" }, { "start": 1967.6000000000001, "end": 1973.04, "text": " kind of somewhere in the middle of the likelihood range. So they're not super duper unlikely," }, { "start": 1973.04, "end": 1978.6399999999999, "text": " because the expected information content is again the expectation over all of these messages," }, { "start": 1979.36, "end": 1986.56, "text": " which is going to be not super duper high, which rules out these unlikely messages. These are prone" }, { "start": 1986.56, "end": 1993.04, "text": " to misunderstanding, but it also rules out the very likely messages, because those are going to be" }, { "start": 1993.84, "end": 1999.68, "text": " prone to being boring and not transmitting any information at all. And that is something" }, { "start": 1999.68, "end": 2004.96, "text": " interesting. That is exactly the property we want in a new decoding method: leave out the really" }, { "start": 2004.96, "end": 2010.5600000000002, "text": " low likelihood stuff, and leave out the really high likelihood stuff, because that's boring." }, { "start": 2013.2, "end": 2019.68, "text": " Yeah, typicality is a property." },
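[Editor's illustration, not from the video: a tiny numeric check of the claim that the highest probability token is often not typical. The toy distribution is made up.]

import numpy as np

# One dominant token plus a tail (assumed values, summing to 1).
p = np.array([0.50, 0.10, 0.10, 0.10, 0.10, 0.05, 0.05])
info = -np.log(p)              # information content of each token
entropy = (p * info).sum()     # expected information content

print(round(entropy, 2))       # ~1.57 nats
print(round(info[0], 2))       # ~0.69 nats: the most likely token sits far
                               # below the expectation, so it is not typical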
{ "start": 2019.68, "end": 2026.48, "text": " Okay, now they go into why we have to go for a local notion of typicality, whereas information theory usually defines it as a property" }, { "start": 2026.48, "end": 2033.6, "text": " of the entire sentence or of the entire message; I don't necessarily want to go into that." }, { "start": 2034.16, "end": 2039.1200000000001, "text": " The next chapter, they try to justify this with psycholinguistic concepts. There are two they" }, { "start": 2039.1200000000001, "end": 2046.96, "text": " consider. There's the uniform information density hypothesis, which proposes that speakers construct" }, { "start": 2046.96, "end": 2052.8, "text": " their utterances such that information is distributed uniformly across them, and" }, { "start": 2052.8, "end": 2059.36, "text": " the speakers choose words such that their information rate is" }, { "start": 2059.36, "end": 2064.6400000000003, "text": " closer to a target channel capacity, which is essentially what we're doing right here." }, { "start": 2065.6800000000003, "end": 2073.44, "text": " Then there's the rational speech act, which, sort of, casts the" }, { "start": 2073.44, "end": 2079.6000000000004, "text": " speaker's behavior as the maximization of a utility function. And the utility function is a" }, { "start": 2079.6, "end": 2086.4, "text": " sentence's usefulness to its listener. So the way it constructs this, again, this is sort of a hypothesis:" }, { "start": 2086.4, "end": 2093.52, "text": " it imagines this literal speaker. So this is a hypothetical speaker that just samples from the" }, { "start": 2093.52, "end": 2098.08, "text": " probability distribution, it just looks at the probability distribution, and just samples from" }, { "start": 2098.08, "end": 2103.36, "text": " that. And it just orders the words as they come out. And that means, you know, it comes with" }, { "start": 2103.36, "end": 2108.88, "text": " the typical problems: it's going to use the words, it's going to" }, { "start": 2108.88, "end": 2118.96, "text": " utter kind of low information stuff a lot of the time. Then it says, well, a smarter, pragmatic" }, { "start": 2118.96, "end": 2126.08, "text": " speaker, and that's what the humans would be, produces sentences to maximize" }, { "start": 2126.08, "end": 2133.44, "text": " the utility function, as opposed to following its expected literal behavior. If you define the" }, { "start": 2133.44, "end": 2140.32, "text": " utility function to be this thing right here, then the hypothesis" }, { "start": 2140.32, "end": 2148.56, "text": " matches this rational speech act. However, I find this also to be a little bit shady, because" }, { "start": 2148.56, "end": 2154.8, "text": " if I have a different decoding method in mind, I can apply the same argument, I can simply say, well," }, { "start": 2154.8, "end": 2163.04, "text": " my utility function is now my new decoding method. So, yeah, I'm not super convinced by this." }, { "start": 2163.04, "end": 2171.6800000000003, "text": " However, it's interesting to see that people think in this way, that they say, well, there is going" }, { "start": 2171.6800000000003, "end": 2178.0800000000004, "text": " to be this literal imaginary agent that just speaks according to the distribution. And then" },
And then" }, { "start": 2178.0800000000004, "end": 2183.44, "text": " there is the upgraded version of that. And probably the humans are a form of an upgraded version," }, { "start": 2183.44, "end": 2189.28, "text": " this pragmatic speaker that changes something that sort of uses this distribution, but changes" }, { "start": 2189.28, "end": 2198.16, "text": " something about it. And that's exactly what we do. So how do we do it? And we've already alluded to" }, { "start": 2198.16, "end": 2208.4, "text": " most of it. So what we do is we introduce this typical sampling. Much like nucleus sampling," }, { "start": 2208.4, "end": 2215.84, "text": " we define a threshold, in this case, this is called tau of probability mass that we're going to allow" }, { "start": 2215.84, "end": 2224, "text": " in our in our subset of words. So again, maybe we have a distribution of a couple of words, and they" }, { "start": 2224, "end": 2230.08, "text": " have different likelihoods under our language model output. And we assume our language model output" }, { "start": 2230.08, "end": 2237.92, "text": " models these probabilities, especially the non negligible ones. Well, then what we're going to" }, { "start": 2237.92, "end": 2242.8, "text": " do is we're going to calculate the expected information content, which is the expected" }, { "start": 2243.6800000000003, "end": 2248.96, "text": " negative log probability, which is also the conditional entropy. So we're going to estimate" }, { "start": 2248.96, "end": 2257.2000000000003, "text": " this property by simply calculating it. We can do this. This is simply again, this is p of x given y" }, { "start": 2257.92, "end": 2267.28, "text": " times log p of x given y. The log probability is usually already output by our model in the form" }, { "start": 2267.28, "end": 2273.76, "text": " of logits. We just need to normalize it. And if we apply some sort of a softmax operation," }, { "start": 2273.76, "end": 2280.88, "text": " we get the p of x given y. So then we have the conditional entropy, and then we simply" }, { "start": 2281.76, "end": 2290.7200000000003, "text": " choose the words that are most close to this. So maybe the expected the entropy, let's say this is" }, { "start": 2290.72, "end": 2297.9199999999996, "text": " the let's say these are the log probabilities right here. Let's say the expected one is here," }, { "start": 2298.56, "end": 2304.72, "text": " we simply choose in order the words that are most close to that one. So it would be this one right" }, { "start": 2304.72, "end": 2311.12, "text": " here. This is really close. Then this one is really close. Then what's a tough choice, maybe this one's" }, { "start": 2311.12, "end": 2318.8799999999997, "text": " really close. And then maybe this one's really close. And that we do that until again, we reach" }, { "start": 2318.88, "end": 2325.44, "text": " our target probability mass. Again, if the distribution is very peaked, so if the distribution" }, { "start": 2325.44, "end": 2333.76, "text": " is very peaked, that means the the typical information content is going to be lower," }, { "start": 2333.76, "end": 2339.84, "text": " which means the words that have low information are going to be chosen more, which and these are" }, { "start": 2339.84, "end": 2346.48, "text": " also going to be less words. And that gives us our original case back where we're simply going" }, { "start": 2346.48, "end": 2355.28, "text": " to choose the highest likelihood words into our bucket to sample from. Yeah. 
{ "start": 2355.28, "end": 2361.92, "text": " And that sort of regresses to the old case, if the distribution is very peaky. However, if the distribution is flatter" }, { "start": 2361.92, "end": 2369.68, "text": " or has more broad support, then the expected information content is going to be higher," }, { "start": 2369.68, "end": 2374.7999999999997, "text": " which means that probably these highest likelihood ones are not going to be in it," }, { "start": 2374.7999999999997, "end": 2382.3999999999996, "text": " and we opt for more interesting ones that are also likely, but not as likely. So this kicks in" }, { "start": 2382.3999999999996, "end": 2389.8399999999997, "text": " mostly when there are a lot of possibilities, which you can see in, let's say, machine translation:" }, { "start": 2389.8399999999997, "end": 2396.48, "text": " in machine translation it is often very clear, or there are only a few" }, { "start": 2396.48, "end": 2403.04, "text": " possibilities for how to translate something. However, in storytelling," }, { "start": 2403.04, "end": 2408.96, "text": " there are lots of possibilities for how things could continue, and the distributions are much" }, { "start": 2408.96, "end": 2414.4, "text": " more shallow. And this method would exploit that by saying, well, I'm just not going to consider" }, { "start": 2414.4, "end": 2421.68, "text": " the most likely things right here. The computational complexity is the same as" }, { "start": 2421.68, "end": 2427.8399999999997, "text": " nucleus or top-k sampling: we also have to determine the set we're going to consider" }, { "start": 2428.7999999999997, "end": 2434.16, "text": " by somehow calculating across it, we have to aggregate it, we have to renormalize it," }, { "start": 2434.16, "end": 2439.2799999999997, "text": " and we have to sample from it; except here, well, I guess we always have to sort, right?" }, { "start": 2440.7999999999997, "end": 2446.56, "text": " Yeah, here we also have to calculate this conditional entropy part. It's the same in" }, { "start": 2446.56, "end": 2453.36, "text": " complexity, but it does add a constant overhead, or like a multiplicative constant factor overhead," }, { "start": 2454.08, "end": 2461.2799999999997, "text": " to the whole thing. So the last thing I want to go into here is the choice of hyperparameters" }, { "start": 2461.84, "end": 2470.32, "text": " in this one. They say we found k equals 30 and p equals 0.9 to perform best. So these" }, { "start": 2470.32, "end": 2477.6000000000004, "text": " parameters perform best for top-k and nucleus sampling respectively. So this is for their" }, { "start": 2477.6000000000004, "end": 2484.48, "text": " experiments. So one is for top-k sampling, and one is for nucleus sampling. For typical sampling," }, { "start": 2484.48, "end": 2492, "text": " we found that tau equals 0.2 and tau equals 0.95 provide the best results for" }, { "start": 2492, "end": 2498.32, "text": " story generation and abstractive summarization respectively. So while they allow for a single" }, { "start": 2498.32, "end": 2508, "text": " parameter for each of the baselines, they go with a separate parameter for different tasks for their" }, { "start": 2508, "end": 2514.0800000000004, "text": " method, which is a bit shady. Now, there are two possibilities. The first possibility is that they sort of" },
{ "start": 2514.0800000000004, "end": 2523.76, "text": " stifled the baselines by not exploring the possibilities well enough, or, what I" }, { "start": 2523.76, "end": 2529.6800000000003, "text": " think happened most likely, is that the same parameter performs pretty well for all the" }, { "start": 2529.6800000000003, "end": 2535.1200000000003, "text": " different tasks, which is a good property in itself right here. Here we consider 20% of the" }, { "start": 2535.1200000000003, "end": 2541.6000000000004, "text": " probability mass, and here we consider 95% of the probability mass. Now that's a huge difference" }, { "start": 2541.6000000000004, "end": 2549.36, "text": " in how our set looks. And that by itself makes it, in my opinion, a bit of a weaker choice for" }, { "start": 2549.36, "end": 2555.1200000000003, "text": " using this as a decoding method, because for everything that I want to achieve, I need to essentially" }, { "start": 2555.1200000000003, "end": 2560.1600000000003, "text": " tune this parameter, whereas with top-k sampling, I could just leave it be. So it'd be interesting" }, { "start": 2560.1600000000003, "end": 2567.2000000000003, "text": " to see, because I'm a fan of this technique in principle, whether in" }, { "start": 2567.2000000000003, "end": 2572.8, "text": " the future we can find more of an adaptive way: much like nucleus sampling is an adaptive version of" }, { "start": 2572.8, "end": 2581.28, "text": " top-k sampling, maybe we can come up with an adaptive way of determining the number here, or" }, { "start": 2581.28, "end": 2590.2400000000002, "text": " the parameter of how many things to consider. So I don't want to go too much into the evaluation." }, { "start": 2592, "end": 2596.88, "text": " There is a difference. Sometimes it's stark, sometimes it's not as stark. It is different" }, { "start": 2596.88, "end": 2605.6, "text": " in different regimes. You can see that depending on the regime that you are in, sometimes" }, { "start": 2605.6, "end": 2610.7200000000003, "text": " the different methods are really different. Sometimes they're quite close, sometimes they" }, { "start": 2610.7200000000003, "end": 2617.92, "text": " switch places. Yeah, I don't want to go too much into the results because we can maybe" }, { "start": 2617.92, "end": 2625.44, "text": " discuss them in an interview. But qualitatively, say for example, for the summarization task," }, { "start": 2625.44, "end": 2630.16, "text": " we see that typical sampling provides a comprehensive and coherent summary of the" }, { "start": 2630.16, "end": 2635.76, "text": " article under consideration. In comparison, nucleus sampling leads to hallucinated facts," }, { "start": 2635.76, "end": 2641.84, "text": " for example, getting drugs from under, okay, I haven't read the article, but nucleus sampling" }, { "start": 2641.84, "end": 2649.84, "text": " hallucinates facts, which is one property: if you sample only from high likelihood things," }, { "start": 2649.84, "end": 2656, "text": " right, you're just going to continue with things that are very likely in the language itself," }, { "start": 2656, "end": 2661.2000000000003, "text": " rather than transmitting the necessary information. While top-k sampling misses some of the" }, { "start": 2661.2000000000003, "end": 2666.7200000000003, "text": " important information in the article, e.g. the charges of burglary and arson. And that might be" },
And that might be" }, { "start": 2666.7200000000003, "end": 2673.04, "text": " because top case sampling simply has this fixed bucket of words to consider. And as soon as one" }, { "start": 2673.04, "end": 2679.04, "text": " word is not in that bucket, it simply is forbidden from uttering it, even if the distribution is" }, { "start": 2679.04, "end": 2686.96, "text": " shallow and that word is kind of likely. So I want to stop here and just give a few thoughts" }, { "start": 2687.92, "end": 2695.52, "text": " on this. In my opinion, I already said it is quite needed that we have different decoding strategies" }, { "start": 2695.52, "end": 2701.44, "text": " to achieve different tasks. This one right here, it seems really interesting. It is a way to trade" }, { "start": 2701.44, "end": 2708.16, "text": " off sort of not considering the most likely things, but also not considering the least likely things." }, { "start": 2708.16, "end": 2714.24, "text": " However, I'm not sure if the notion of the matching the expected information content" }, { "start": 2714.24, "end": 2722.24, "text": " is appropriate. I can't tell. It's a good hypothesis. I don't know if this quantity here," }, { "start": 2722.24, "end": 2729.52, "text": " the absolute distance is a good quantity. Like, why would it be the absolute distance? And the" }, { "start": 2729.52, "end": 2736.72, "text": " other issue I have right here, but this might be my ignorance of information theory is. So if I" }, { "start": 2736.72, "end": 2744.7999999999997, "text": " change, let's if I assume the humans talk like this, they choose their words according to the" }, { "start": 2744.7999999999997, "end": 2751.8399999999997, "text": " expected information content, right? And I use this particular construction right here. That" }, { "start": 2752.3999999999996, "end": 2758.72, "text": " is going to everything that comes out of this. Whatever comes out of this will have a different" }, { "start": 2758.72, "end": 2766.7999999999997, "text": " expected information content than the original language. If I wanted to actually match," }, { "start": 2767.3599999999997, "end": 2772.08, "text": " like if I wanted to keep the expectation, I probably couldn't do this just in absolute" }, { "start": 2772.08, "end": 2778.3199999999997, "text": " difference. That's probably going to change the expected information content, let alone the" }, { "start": 2778.3199999999997, "end": 2783.52, "text": " distribution of it itself. But just the expectation is going to change. Now, if you're telling me that" }, { "start": 2783.52, "end": 2790.32, "text": " humans do it like this, and that our language models are trained on text that is written and" }, { "start": 2790.32, "end": 2799.7599999999998, "text": " uttered by humans, like wouldn't that text already have that property and therefore sampling from it" }, { "start": 2799.76, "end": 2814.4, "text": " would be the original distribution? Or in other words, if I produce text like this, like," }, { "start": 2814.4, "end": 2820.96, "text": " shouldn't I get the same, shouldn't I get the same distribution out that my language model predicts," }, { "start": 2820.96, "end": 2826.1600000000003, "text": " because my language model is trained on human text, and your claim is that humans sample text" }, { "start": 2826.16, "end": 2832.48, "text": " like this. So why would that be any different from sampling from the language model itself?" 
}, { "start": 2832.48, "end": 2842.16, "text": " And especially, shouldn't it be that the expected information content remains constant if I apply" }, { "start": 2842.16, "end": 2851.52, "text": " this sampling technique? Just out of principle, because by definition, if it doesn't, then it" }, { "start": 2851.52, "end": 2860.64, "text": " doesn't match human generated text, because that's already the input. That's the training data." }, { "start": 2860.64, "end": 2867.36, "text": " All right, but maybe I'm sort of ignorant of information theory right here. Yeah, my other" }, { "start": 2867.36, "end": 2875.44, "text": " concerns are with the hyperparameter choice. And yeah, I'd be interested to dive a little bit more" }, { "start": 2875.44, "end": 2879.84, "text": " into this, like what would we expect to see with the different" }, { "start": 2879.84, "end": 2884.56, "text": " sampling methods or with different hypotheses? This is also really interesting, but I'm going" }, { "start": 2884.56, "end": 2891.76, "text": " to leave it at that. All I can say is that we should probably try this out. And maybe, you know," }, { "start": 2891.76, "end": 2898.96, "text": " for certain tasks where diversity and actually transmitting information is more important than" }, { "start": 2898.96, "end": 2906.96, "text": " being, you know, uttering the most likely thing, this might really be a cool application. And maybe" }, { "start": 2906.96, "end": 2912.8, "text": " we'll figure out an automatic way to adjust the hyperparameters. Let me know what you think. Maybe" }, { "start": 2912.8, "end": 2918.88, "text": " you've already tried it out. You can give a little bit of a report on how that went. And I'll see you" }, { "start": 2918.88, "end": 2940.88, "text": " next time. Bye bye." } ]
zWFkUGXjbdo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Rant] Can AI read your emotions? (No, but ...)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ai face recognition", "face recognition", "face recognition emotion detection", "can ai read your mind", "can ai read your emotions", "ai emotion analysis", "ai analyzes emotion", "government ai face detection", "ai emotion recognition", "ai emotion detection" ]
#facerecognition #emotiondetection #mindreading Face recognition has a bad rep in the ML community. While the technology continuously advances, so does the resistance against its applications, with good reasons: AI emotion analysis hints at a dystopian future where our lives are completely governed by algorithms. However, we must be realistic about what is and isn't possible with AI, and while current systems are not the most accurate, denying the link between your facial expression and your emotions is not productive either. https://twitter.com/jblefevre60/status/1395617615964475392 Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
We need to talk about your face, or face recognition in general. A tweet has been making the rounds saying facial recognition is able to analyze, in real time, the emotions and feelings. Just that. And it showed a video of an apparent real-time system looking at people's faces and determining what their emotions are. Now there is a predictable reaction of machine learning Twitter with respect to anything to do with facial recognition, and that reaction is NO! The biggest reaction is NO! This is impossible. AI will never be able to infer your emotions by looking at your face. The data is not there. Anything like this. I just think that is really, really, really surprising, honestly. Now look, facial recognition technology isn't exactly the most popular subject. It's not going to win any Nobel Peace prizes anytime soon. Is this technology dystopian looking? Yes. Is it dangerous in the wrong hands? Yes. Does it work as advertised? Very probably no. Is it easy to trick? Absolutely yes. However, saying that it is impossible for an AI to look at your face and infer your emotional state... that is surprising to me. You do this every day. You look at people's faces and then you infer something about their internal state. People are splitting hairs here about the word analyze, to analyze the emotions and feelings. Well, if you want to split words, I would say inferring is a lot heavier than analyzing. Your face has literally evolved to convey your internal state. Other people have trouble with this, saying, well, you can fake your face. Not all facial expressions can be faked. A lot of what you tell with your face is involuntary, and there is in principle no reason why a machine cannot pick up on these cues. Now this is not to say that this particular system works well. It probably does not. It is extremely hard to do this. To look at a face and get how that person is feeling, through all the deception that might be there, is an extremely hard task. But there's nothing supernatural about it. We do this. We're a machine. Ergo, a machine can in principle do this. The most common criticism I see right here is that, well, the machine only analyzes facial expressions, and they have nothing to do with your emotions and feelings. What is that? Of course this has something to do with your emotions and feelings. Have you ever thought to yourself, huh, that person looks kind of sad today? Have you ever gone to someone and said, you know, you look a little bit down, is everything okay? No, never, never, and you certainly didn't infer this from their face. Hey doctor, I have a problem. Well, what's your problem? Well, I banged my foot and now it hurts and it has a dent in it and it bleeds and it's swollen and everything is bad about my foot because I hit it and it might be broken. Well, don't say it's broken, because the external symptoms will never tell us anything about the internal state of a system. I'm sorry, have you ever heard that an AI can diagnose lung cancer by looking at a chest x-ray? Well, no, we can say it's just that the AI detects a little bit of a spot, and there is no correlation at all. This is no indication of the internal state of the cancer. Shut up! Twitter makes it such that everyone immediately is extreme on the one side and extreme on the other side. Instead of saying the data to train this system is very hard to get, the systems themselves aren't as good, they don't understand the context that this happens in, or nuances. That's very different from saying that no, this is impossible.
The most ridiculous thing is when people come out and compare this to phrenology, or literally call it phrenology. You know phrenology, the science of how bumps on your head supposedly mean something about your personality or intelligence. Like, my face has literally evolved to tell you something about my internal emotions. None of the bumps on my head have evolved to communicate about my intelligence. It is a predictable reaction for some reason. Anywhere facial recognition technology is used, there is a crowd of people coming out saying phrenology! Faces are a real thing, emotions are a real thing, there is a real connection between your facial expression and your emotions. It is more complicated than these machines right now can assess. It might require more context, more data, better algorithms, and even things we don't have yet, but this definitely exists. It is not a pseudoscience. Not everything that has to do with face recognition is a pseudoscience. It might be dangerous, yet it's real. So in conclusion, I guess my message here is that yes, this is probably an overpromise of what AI can do, and it could easily be used for bad purposes. On the other hand, this is not a pseudoscience, this is not impossible, and research in this direction might actually lead to something good. Imagine an AI that is better than a human at recognizing emotions from someone's face, assuming that is possible. We could avoid a lot of conflict, maybe do a lot of good work in suicide prevention, and ultimately communicate with the AIs as we would with other humans. Apart from all the bad things that we can do with facial recognition technology, ultimately it's technology: it can be used for good and for bad and for evil. I'll end with the holy trifecta of broader impact statements. Technology good, technology bad, technology biased. Peace out.
[ { "start": 0, "end": 10.16, "text": " We need to talk about your face or face recognition in general. A tweet has been" }, { "start": 10.16, "end": 15.120000000000001, "text": " making the rounds saying facial recognition is able to analyze in real" }, { "start": 15.120000000000001, "end": 25.96, "text": " time the emotions and feelings. Just that. And it showed a video of an apparent" }, { "start": 25.96, "end": 31.28, "text": " real-time system looking at people's faces and determining what their" }, { "start": 31.28, "end": 38.08, "text": " emotions are. Now there is a predictable reaction of machine learning Twitter" }, { "start": 38.08, "end": 42.8, "text": " with respect to anything to do with facial recognition and that reaction is" }, { "start": 42.8, "end": 51.24, "text": " NO! The biggest reaction is NO! This is impossible. AI will never be able to" }, { "start": 51.24, "end": 56.52, "text": " infer your emotions by looking at your face. This is the data is not there." }, { "start": 56.52, "end": 62, "text": " Anything like this. I just think that is really really really surprising." }, { "start": 62, "end": 67.16, "text": " Honestly. Now look facial recognition technology isn't exactly the most" }, { "start": 67.16, "end": 72.36, "text": " popular subject. It's not going to win any Nobel Peace prizes anytime soon. Is" }, { "start": 72.36, "end": 78.6, "text": " this technology dystopian looking? Yes. Is it dangerous in the wrong hands? Yes." }, { "start": 78.6, "end": 84.32, "text": " Does it work as advertised? Very probably no. Is it easy to be tricked?" }, { "start": 84.32, "end": 91.19999999999999, "text": " Absolutely yes. However saying that it is impossible for an AI to look at your" }, { "start": 91.19999999999999, "end": 99.88, "text": " face and infer your emotional state. That is... Wondering me. You do this every day." }, { "start": 99.88, "end": 105.84, "text": " You look at people's faces and then you infer something about their internal" }, { "start": 105.84, "end": 111.76, "text": " state. People splitting hairs here about the word analyze to analyze the emotions" }, { "start": 111.76, "end": 116.08, "text": " and feelings. Well if you want to split words I would say inferring is a lot" }, { "start": 116.08, "end": 122.88, "text": " heavier than analyzing. Your face has literally evolved to convey your internal" }, { "start": 122.88, "end": 128.64000000000001, "text": " state. Other people have a trouble with saying well you can fake your face. Not" }, { "start": 128.64000000000001, "end": 133.8, "text": " all facial expressions can be faked. A lot of what you tell with your face is" }, { "start": 133.8, "end": 140.56, "text": " involuntary and there is in principle not a reason why a machine cannot pick" }, { "start": 140.56, "end": 145.96, "text": " up on these cues. Now this is not to say that this particular system works well." }, { "start": 145.96, "end": 151.52, "text": " It probably does not. It is extremely hard to do this. To look at a face and" }, { "start": 151.52, "end": 157.28, "text": " get how that person is feeling through all the deception that might be there is" }, { "start": 157.28, "end": 162.72000000000003, "text": " an extremely hard task. But there's nothing supernatural about it. We do this." }, { "start": 162.72, "end": 169.16, "text": " We're a machine. Ergo a machine can in principle do this. The most criticism I" }, { "start": 169.16, "end": 174.48, "text": " see right here is that well the machine only analyzes facial expressions. 
They" }, { "start": 174.48, "end": 181.28, "text": " have nothing to do with your emotions and feelings. What is that? Of course this" }, { "start": 181.28, "end": 185.12, "text": " has something to do with your emotions and feelings. Have you ever thought" }, { "start": 185.12, "end": 188.92, "text": " to yourself huh that person looks kind of sad today? Have you ever gone to" }, { "start": 188.92, "end": 193.39999999999998, "text": " someone and said you know you look a little bit down is everything okay? No" }, { "start": 193.39999999999998, "end": 199.44, "text": " never never and you certainly didn't infer this from their face. Hey doctor I" }, { "start": 199.44, "end": 203.72, "text": " have a problem. Well what's your problem? Well I banged my foot and now it hurts" }, { "start": 203.72, "end": 209.11999999999998, "text": " and it has a dent in it and it bleeds and it's swollen and everything is bad about" }, { "start": 209.11999999999998, "end": 215.38, "text": " my foot because I hit it and it might be broken. Well don't say it's broken because" }, { "start": 215.38, "end": 220.88, "text": " the external symptoms will never tell us anything about the internal state of a" }, { "start": 220.88, "end": 225.28, "text": " system. I'm sorry have you ever heard that an AI can diagnose lung cancer by" }, { "start": 225.28, "end": 230.56, "text": " looking at a chest x-ray? Well no well we can say it's just that the AI detects a" }, { "start": 230.56, "end": 235.16, "text": " little bit of a spot and there is no correlation at all. This is no" }, { "start": 235.16, "end": 243.56, "text": " indication of the internal state of the cancer. Shut up! Twitter makes it such that" }, { "start": 243.56, "end": 249, "text": " everyone immediately is extreme on the one side and extreme on the other side." }, { "start": 249, "end": 255.56, "text": " Instead of saying the data to train this system is very hard to get, the systems" }, { "start": 255.56, "end": 260.64, "text": " itself aren't as good, they don't understand context that this happens in" }, { "start": 260.64, "end": 266.16, "text": " or nuances. That's very different from saying that no this is impossible. The" }, { "start": 266.16, "end": 271.8, "text": " most ridiculous is when people come out and compare this to phrenology or" }, { "start": 271.8, "end": 277.16, "text": " literally call it phrenology. You know phrenology, the science of what bump on" }, { "start": 277.16, "end": 282.88, "text": " your head means something about your personality or intelligence. Like my face" }, { "start": 282.88, "end": 287.96000000000004, "text": " has literally evolved to tell you something about my internal emotions." }, { "start": 287.96000000000004, "end": 292.48, "text": " None of the bumps on my head have evolved to communicate about my" }, { "start": 292.48, "end": 297.40000000000003, "text": " intelligence. It is a predictable reaction for some reason. Anywhere where" }, { "start": 297.4, "end": 302.14, "text": " facial recognition technology is used there is a crowd of people coming out" }, { "start": 302.14, "end": 308.56, "text": " saying phrenology! Faces are a real thing, emotions are a real thing, there is a" }, { "start": 308.56, "end": 313.59999999999997, "text": " real connection between your facial expression and your emotions. It is more" }, { "start": 313.59999999999997, "end": 319.35999999999996, "text": " complicated than these machines right now can assess. 
It might require more" }, { "start": 319.35999999999996, "end": 325.2, "text": " context, more data, better algorithms and even things we don't have yet but this" }, { "start": 325.2, "end": 329.96, "text": " definitely exists. It is not a pseudoscience. Not everything that has to" }, { "start": 329.96, "end": 334.84, "text": " do with face recognition is a pseudoscience. It might be dangerous yet" }, { "start": 334.84, "end": 341.64, "text": " it's real. So in conclusion I guess my message here is that yes this is" }, { "start": 341.64, "end": 348.68, "text": " probably an over promise of what AI can do and it could easily be used for bad" }, { "start": 348.68, "end": 354.56, "text": " purposes. On the other hand this is not a pseudoscience, this is not impossible and" }, { "start": 354.56, "end": 360.52, "text": " research in this direction might actually lead to something good. Imagine" }, { "start": 360.52, "end": 367.72, "text": " an AI that is better than a human at recognizing emotions from someone's face" }, { "start": 367.72, "end": 373.64, "text": " assuming that is possible. We could avoid a lot of conflict, maybe do a lot of good" }, { "start": 373.64, "end": 379.54, "text": " work in suicide prevention and ultimately communicate with the AIs as we" }, { "start": 379.54, "end": 384.32, "text": " would with other humans. Apart from all the bad thing that we can do with facial" }, { "start": 384.32, "end": 390.08, "text": " recognition technology, ultimately its technology can be used for good and for" }, { "start": 390.08, "end": 395.2, "text": " bad and for evil. I'll end with the holy trifecta of broader impact statements." }, { "start": 395.2, "end": 414.64, "text": " Technology good, technology bad, technology biased. Peace out." } ]
FC-R4MlIqrc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] Cedille French Language Model | YOU Search Engine | AI Finds Profitable MEME TOKENS
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "mlnews", "machine learning news", "ai news", "ml news", "cedille", "french language model", "gpt-j", "gpt j", "eleuther ai", "you", "you search", "you search engine", "richard socher", "meme tokens", "dogecoin", "finu", "finu token", "wmt", "facebook wmt", "multilingual wmt", "multilingual machine translation", "machin translation", "deepmind arnheim", "arnheim", "yann lecun", "alibaba damo", "acessibe", "eyebobs", "lawsuit" ]
#mlnews #cedille #wmt Only the greatest of news from the world of Machine Learning. OUTLINE: 0:00 - Sponsor: Weights & Biases 1:50 - Cedille - French Language Model 3:55 - Facebook AI Multilingual model wins WMT 5:50 - YOU private search engine 10:35 - DeepMind's Open-Source Arnheim 12:10 - Company sued for using AI to make website more accessible 18:05 - Alibaba DAMO Academy creates 10 Trillion M6 model 21:15 - AMD MI200 Family 22:30 - State of AI report 2021 24:15 - Andrew Ng's Landing AI raises 57M 25:40 - Cerebras raises 250M 26:45 - Microsoft's Varuna: Scalable Training of Huge Models 28:15 - Laura Ruis reproduces Extrapolation Paper 29:05 - Ian Charnas' Real-Life Punchout 30:00 - Helpful Things 33:10 - AI finds profitable Meme-Tokens 34:55 - This Sneaker Does Not Exist Sponsor: Weights & Biases https://wandb.com References: Cedille - French Language Model https://en.cedille.ai/ https://github.com/coteries/cedille-ai https://app.cedille.ai/ https://en.wikipedia.org/wiki/Cedilla Facebook AI Multilingual model wins WMT https://ai.facebook.com/blog/the-first-ever-multilingual-model-to-win-wmt-beating-out-bilingual-models/ YOU private search engine https://you.com/ https://youdotcom.notion.site/FAQ-8c871d6c99d84e02955fda772a1da8d4 DeepMind's Open-Source Arnheim https://deepmind.com/research/open-source/open-source-arnheim-a-learnable-visual-grammar-for-generating-paintings https://twitter.com/OriolVinyalsML/status/1459231774068854785 https://github.com/deepmind/arnheim https://colab.research.google.com/github/deepmind/arnheim/blob/master/arnheim_2.ipynb Company sued for using AI to make website more accessible https://www.wired.com/story/company-tapped-ai-website-landed-court/ https://archive.ph/kdvOM Alibaba DAMO Academy creates 10 Trillion M6 model https://pandaily.com/alibaba-damo-academy-creates-worlds-largest-ai-pre-training-model-with-parameters-far-exceeding-google-and-microsoft/ https://www.infoq.cn/article/xIX9lekuuLcXewc5iphF AMD MI200 Family https://www.anandtech.com/show/17054/amd-announces-instinct-mi200-accelerator-family-cdna2-exacale-servers?utm_source=pocket_mylist State of AI report 2021 https://www.stateof.ai/?utm_source=pocket_mylist Andrew Ng's Landing AI raises 57M https://techcrunch.com/2021/11/08/landing-ai-machine-learning-operations-tools/ https://www.forbes.com/sites/bernardmarr/2021/11/09/landing-ai-unlocking-the-power-of-data-centric-artificial-intelligence/ https://landing.ai/platform/ Cerebras raises 250M https://cerebras.net/news/cerebras-systems-raises-250m-in-funding-for-over-4b-valuation-to-advance-the-future-of-artificial-intelligence-compute/ https://cerebras.net/news/cerebras-systems-announces-worlds-first-brain-scale-artificial-intelligence-solution/ Microsoft's Varuna: Scalable Training of Huge Models https://syncedreview.com/2021/11/10/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-142/ Laura Ruis reproduces Extrapolation Paper https://lauraruis.github.io/2021/11/06/extra.html?utm_source=pocket_mylist https://github.com/LauraRuis Ian Charnas' Real-Life Punchout https://www.reddit.com/r/MachineLearning/comments/qpenkt/project_google_movenet_realtime_pose_estimation/ https://www.youtube.com/watch?v=07JibJJVNp8 Helpful Things https://www.marktechpost.com/2021/11/05/google-ai-introduces-goemotions-an-nlp-dataset-for-fine-grained-emotion-classification/ https://pair-code.github.io/lit/demos/ https://github.com/pair-code/lit 
https://www.reddit.com/r/MachineLearning/comments/qsrdyk/p_texttoimage_rudalle_kandinsky_xxl_12_billion/ https://twitter.com/yeemachine/status/1457779633449934848?utm_source=pocket_mylist https://github.com/yeemachine/kalidokit AI finds profitable Meme-Tokens https://finance.yahoo.com/news/artificial-intelligence-now-makes-possible-104800931.html https://finu.co/ This Sneaker Does Not Exist https://thissneakerdoesnotexist.com/ Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hold on, this video is sponsored by Weights & Biases. Weights & Biases is your one-stop shop for all your machine learning needs. It will track your experiments with a single line of code, will upload automatically all your logs, all your configurations, everything to your cloud. It will automatically grab all the output, all the metrics, all the configurations of your experiments and store that in one neat location. So you can see your experiments, you can track them wherever they run, you can compare among the experiments, but you can go further: you can then tune your hyperparameters according to the results of those experiments. And all of this is done automatically in a distributed way. You can literally sit on your toilet on your smartphone and tune your hyperparameters and start new experiments. But it's not only experiment tracking and hyperparameter tuning. Weights & Biases has tools for the entire pipeline of machine learning research, from the initial idea up until the deployment and beyond that, when you actually want to track what you've deployed. Weights & Biases has cool methods to track all of your datasets and their dependencies to each other, as well as your models and all kinds of other artifacts that you might produce, plus very powerful visualizations for all the inputs and outputs of your pipelines, as well as the models themselves. All of this runs in the cloud, but if you're concerned about privacy, there are options to self-host. The system is free for personal use and for academics, and they have great plans for enterprises, small teams, large teams, doesn't matter. So thank you very much, Weights & Biases, for sponsoring this video. If you don't know them yet, absolutely check them out. It's free, it'll make your life a whole lot easier.
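To give you an idea of what that "single line of code" looks like in practice, here is a minimal sketch of experiment tracking with the wandb Python package. The project name, config values, and metrics are made up for illustration; check the official docs for the full API.

```python
# Minimal sketch of experiment tracking with Weights & Biases.
# Project name, config values, and metrics here are purely illustrative.
import wandb

# One call sets up tracking and uploads the config to the cloud.
run = wandb.init(project="my-demo-project", config={"lr": 3e-4, "epochs": 5})

for epoch in range(run.config.epochs):
    train_loss = 1.0 / (epoch + 1)  # stand-in for a real training loop
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()
```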
Now let's get into the video. Welcome, welcome to ML News. Let's dive into our first story. A group of researchers based in Switzerland have trained Cedille, which is a French language model. This is a model based on GPT-J, the 6 billion parameter model, and it is a language model in French. The headline is "Write French without speaking French", which is pretty much a recipe for how I passed high school. So the cool thing about this is that it can do the tasks that you're used to from things like GPT-3, but with a special focus on French. It achieves a better perplexity on French text than GPT-3, apparently lower toxicity, whatever that means, is better at translating from and to French, and it's better at various other NLP tasks out of the box. And if you don't know what a cédille is, it's this little hook that French people put at the bottom of some of their letters, also some other languages, as I am being told, but it's just quite annoying because you never know where on the keyboard it is. So "quite annoying" seems like a great name for a French language model. The cool thing is not only is the model open source, you can download a checkpoint, the code is open source, but you can also play with it directly in the browser. There's a little app, and there are a bunch of prompts that are already built in, for example classification of some stuff: what is FedEx? FedEx is a logistics company, that is correct. Amazon is an e-commerce and technology company, that is all correct. Now, my French is limited, to be honest: « J'ai oublié ma baguette, je suis désolé », which I think means "I lost my baguette and I'm very sad". The model says « Même si je n'ai pas d'explication logique », I don't have a logical explanation for why I lost my baguette. Is it maybe "I forgot my baguette"? I don't know. Well, in any case, it's a French language model, you get it. What is interesting is that among the parameters, it says that a German one is coming soon. So keep an eye out for that.
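By the way, if you'd rather script the model than use the browser app, here is a rough sketch of loading the released checkpoint with Hugging Face transformers. The hub id "Cedille/fr-boris" is my assumption of where the checkpoint lives, so verify it on the model hub, and note that a 6 billion parameter model needs a lot of memory.

```python
# Hedged sketch: loading a GPT-J-based French model with transformers.
# The hub id "Cedille/fr-boris" is an assumption; check the model hub first.
# A 6B-parameter model needs ~24 GB of RAM in fp32.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Cedille/fr-boris")
model = AutoModelForCausalLM.from_pretrained("Cedille/fr-boris")

prompt = "J'ai oublié ma baguette, je suis désolé."
inputs = tokenizer(prompt, return_tensors="pt")
# Sample a short French continuation.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```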
Facebook AI on their blog says: the first-ever multilingual model to win WMT, beating out bilingual models. So WMT is this yearly competition, essentially, to do machine translation. There is a corpus of datasets, but then also every year the competition hosts human expert translators that rate the translations of the machine translation systems. So the machines aren't able to hyper-optimize on the datasets, but really have to please the humans. Now, first thing: why is this in the AR/VR category? I don't know. In any case, it's quite remarkable, because one would think that, given that all the tasks are bilingual, bilingual models that can be tailored to one specific language pair would be ahead right here. But as Facebook AI shows, multilingual models can ingest essentially much more data, so the French-English translations are also informed by the German data that comes in. And because the model is able to make use of so much more data, it can in the end outperform models that have been trained for particular language pairs. Now, multilinguality is not the only thing that's good about this model. The machine translation community has over the years accrued various tricks, such as back-translation to make use of monolingual data, ensembling, and so on. So this is really an engineering effort. But it's cool to see this tipping point where, for the first time ever, a single multilingual model is better than many, many bilingual models. And that's excellent, not only because it's higher performing, but also because it provides us easier access to work with languages that have very low resources, that maybe are only spoken by a very small number of people, or that have no written form at all, like Swiss German, for example. So, excellent development. There is a paper, the code is available, and if you want to learn all the tricks, give it a read. You.com is a new search engine that has been launched by Richard Socher, previously the head of AI at Salesforce. This is supposed to be a direct competitor to the Google search engine. You advertises itself as the private search engine that summarizes the web for you. So there are two promises here: privacy, and summarization in whatever form. They say it helps you get things done, get news, check GitHub, compose a tweet, all from your search engine. For whatever reason you'd want to compose a tweet from your search engine, but there you go. There's a big emphasis on privacy: you can choose between a personalized or a truly private experience, you.com never sells your data to advertisers, and they also promise no ad targeting. Now, actually, when you sign up, the first thing they want you to do is install an extension, and if I click this button, it leads me straight into the Chrome Web Store. So I'm going to take this with a grain of salt right here. Someone promises me privacy, no targeting, and so on? No. Unless this is provably the case, I'm not going to trust any of those promises. The second big selling point is this "summarize the web". And I was intrigued by that: how is this search engine going to summarize the web for me? This sounds really cool. So I tried out a bunch of things. Okay, they said I could check news, for example. All right, news. Let me zoom out a little bit here. So the interface that You gives you is this kind of grouped interface: there are web results on top right here, there is a section for news, and then there are various of these subcategories right here. But honestly, I don't see any summarization, like any "summarize the web for me". So let me search for something I would like to have summarized: Abraham Lincoln and the Civil War. No, it just gives me the Wikipedia page, a bunch of web results, a bunch of Reddit results, and a bunch of these quick facts right here. Now, one thing seems to be these shortcuts, these apps right here. There are various apps, for example the quick facts app, which we have down here, or I guess the Wikipedia app, which is up here. So the search engine seems to be built such that other developers can come in and write apps for it. You can install apps in your search engine, and those will take up one of these bars. As you can see, there are apps for arXiv, Walmart, all kinds of things. There's also one for GitHub. But I haven't seen yet this "summarize the web". What was Lincoln's role in the Civil War? Again, I just get a bunch of search results. I don't see exactly how "summarize the web" should be anything like this. So I was also exploring a few different features right here, for example, compose a tweet. I tried this previously, and it actually told me to sign into Twitter. So apparently you can write tweets from here. How to sort a list in Python? Now this gets into slightly more interesting things: they have plugins for Stack Overflow and also W3Schools. So they show the results from these sites in quite nice cards with snippets and so on. For Stack Overflow, there's also a sidebar, which for some reason doesn't show up right now. There's also this code completion engine right here. I entered "how to sort a list of strings in Python", and it gives me a bunch of code completions that are apparently generated by some sort of code model. I mean, that's fine. So I've tried a bunch of things with this search engine, but I really haven't seen this "summarize the web for you" in any particular way. This seems to be a search engine where other people can write apps for it, and then it'll probably send your search query to those apps, and the apps can give you useful results. Now, honestly, it seems like a big benefit for sort of the big websites right here. For example, W3Schools is integrated prominently, as you can see; Tutorialspoint is integrated prominently; Coursera; Stack Overflow. This is specifically for code, but if you look at the other apps that exist, it's essentially all the big websites. So I'm not sure if I actually want this in a search engine. I generally want the most relevant things, and I don't necessarily want the relevant things from the biggest sites. While I see the potential of integrating all of these things into my search engine, it's not that useful, honestly. How many heads does a Hydra have? I quite like this shortcut right here: this little G brings you to a website that you might have heard of, which is also a pretty good search engine, and it generally gives me the stuff I'm looking for. That being said, You is public now and it is in beta, so, you know, give it a little slack until it's really fleshed out. And maybe this concept of having many apps, provided by other people and not all by the same company, integrate into your searches will be something for the future. Who knows?
DeepMind releases open-source Arnheim, a learnable visual grammar for generating paintings. So, bouncing off of the success of people experimenting with CLIP models, such as VQGAN+CLIP or CLIP-guided diffusion, or any of these models that generate stunning images by using CLIP, DeepMind has done something a little bit different. Namely, instead of using a GAN or a diffusion model, they are using what they call a visual grammar. So you're able to give some primitives to the model of how it can compose an image, and it will use that in order to please CLIP, in order to do CLIP-guided image generation. One application of this is, for example, here: you give the model a grammar of brush strokes. You tell it that it can do brush strokes in various ways, various colors, various thicknesses, and so on, you give a bunch of optimization parameters, and it can generate pictures from textual descriptions. It looks pretty cool, I have to say, and it has some nice controllable parameters. Here you can see the evolution of such a picture as it develops over time: the model refines how exactly it lays its brush strokes until it reaches a final conclusion. Photorealistic chicken. Yeah. So the code is available, along with two Colabs where you can try it out for yourself. Oriol Vinyals has tweeted out this picture right here of Yann LeCun made up entirely of MNIST digits. So the model here hasn't gotten brush strokes as an option to perform drawings, but just MNIST digits in various colors. And you know, it looks pretty sweet. So check out paper and code and blog post, and give it a try.
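For a rough idea of the general recipe behind all of these CLIP-guided generators, here is a minimal sketch using OpenAI's clip package: optimize some differentiable image parameters so that CLIP's image embedding matches a text embedding. A raw pixel tensor stands in for Arnheim's brush-stroke grammar here; this is the general idea, not DeepMind's actual code, and it skips CLIP's usual input preprocessing for brevity.

```python
# Hedged sketch of CLIP-guided image generation: optimize image parameters
# so the CLIP image embedding matches a text embedding. A raw pixel tensor
# stands in for Arnheim's differentiable brush-stroke renderer.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # avoid fp16/fp32 mismatch when optimizing raw pixels

# CLIP ViT-B/32 expects 224x224 inputs; we skip its normalization for brevity.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
text = clip.tokenize(["a photorealistic chicken"]).to(device)

optimizer = torch.optim.Adam([image], lr=0.05)
with torch.no_grad():
    text_feat = model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

for step in range(200):
    img_feat = model.encode_image(image.clamp(0, 1))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()  # maximize cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Arnheim's trick is that the thing being optimized is not raw pixels but the parameters of a structured grammar (stroke positions, colors, thicknesses), which is why the outputs look like paintings rather than adversarial noise.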
Wired writes: this company tapped AI for its website and landed in court. So this is an article about a company that is being sued because its website does not conform to the accessibility standards of the W3C consortium. The company in question is called Eyebobs, and it used this other company, called AccessiBe, to make its site more accessible. Now, if you make a website, you can do that with various frameworks, but in order to make it accessible to, for example, visually impaired people, you need to annotate the various parts of your website with their meaning: you give alt text to images, you define an order of focus, for example in forms, everything should be navigable by keyboard, by using the tab key, for example, autocomplete should work, and so on and so on. Now, there are already many tools to help you with that, but it's still a very, very high workload for developers to ship out websites that are also accessible to all the people that want to use them. So this company, AccessiBe, says that it can simplify the work of making websites accessible to people with impaired vision or other challenges by replacing a costly manual process with an automated, state-of-the-art AI technology. However, this technology doesn't seem to be working all that well in all cases, which is something you could expect, right? So this article doesn't only detail this case, it says it's a growing trend: in recent years, companies use these AI tools to make their websites more accessible, the tools don't work really well, that makes the websites worse for visually impaired people compared to when manual labor is used to do the same thing, and so on. Noteworthy: the guidelines that you have to comply with are more than 100 pages when printed. They include such things as alt text for images and video, clear use of contrast and color, ensuring that features like forms and menus are navigable using only a keyboard, without the use of a mouse or finger, and so on. Now, safe to say, this is a difficult problem, right? Of course, AI solutions are going to be largely subpar when it comes to this, compared to really dedicated humans doing the work. However, they're probably going to be better than just the developers doing it on the side as they're coding the website under time pressure, and they're certainly going to be better than nothing at all. Like, I get it, the web sucks for visually impaired people. Interacting with a medium that is this visual when your visuals don't work is bad, it's a bad experience, and it widens the divide between people who have good vision and people who have poor vision. I get this, and I also get that we want to make an effort as a society to include visually impaired people more, to make websites more accessible, and so on. But I don't see when the standard became that unless a solution works 100% of the time, a lawsuit should be filed. Surely having a crappily AI-annotated website for visually impaired people is better than not having an annotated website at all. On the other hand, you can absolutely see that if we as a society decide, well, just use the AI tool for this, then companies are going to opt for that and actually avoid putting in the work of making websites really accessible. So it is a hard problem, and I don't have the clear answer for it. But I would certainly say that AI technology can help. It's better than nothing. It gives you sort of a lower bound on accessibility for a website, even if there are some mistakes, because humans make mistakes too. But here is what I find funny. There is apparently a document, a sort of petition, where researchers and companies and so on can put their name to ask other companies not to use these AI tools. It says: signers include contributors to W3C guidelines and employees at Microsoft, Apple and Google. "Automated detection and repair of accessibility problems is not reliable enough to bring a site into compliance," the document says, accusing some vendors of deceptive marketing. And here it comes. The site was started by Karl Groves, founder of the accessibility consultancy Tenon.io, who provided a withering 35-page analysis of AccessiBe's software for Murphy's lawsuit against Eyebobs. So Eyebobs, the company being sued, used AccessiBe's software, and now Karl Groves of Tenon.io has written a 35-page analysis of that software. Groves said he surveyed a total of about 1,000 pages from 50 websites using the startup's accessibility technology and found a median of 2,300 violations of W3C guidelines for each site. Here it comes: Groves says that this is a significant undercount, because most of the guidelines can only be checked by expert manual analysis. So wait, did I understand this correctly? You analyzed 1,000 websites, and you, either automatically or with non-expert humans, figured out a lower bound on the number of violations of the standards, so not actually the full standards, just a lower bound, and therefore better than nothing at all? Really, you did that? And you provide that as evidence in a lawsuit? Hypocrite, hypocrite, hypocrite, hypocrite, hypocrite, hypocrite. In his report for the lawsuit, Groves cited an image of a model wearing a white dress for sale on an e-commerce site. The alternative text provided, apparently generated by AccessiBe's technology, was "grass nature and summer". Oh no, an anecdote. Wow. And there you have it. The true story here is that complaining is easier than doing, and we'll always be able to write articles about AI systems that don't work 100% yet. As I said, I don't have the definite solution to this problem. It is a hard problem, it's a balance between pushing technology and making it accessible to all the people there are. But how funny. That's all I'm going to say.
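As an aside, the "AI-generated alt text" these tools produce is essentially image captioning. Here is a hedged sketch of how you could generate alt text candidates yourself with an off-the-shelf captioning model from the Hugging Face hub; the checkpoint name is my assumption, the image file is hypothetical, and, as the anecdote above shows, the output still needs human review before shipping.

```python
# Hedged sketch: generating alt-text candidates with an image-captioning model.
# The checkpoint name is an assumption; any image-to-text model would do,
# and the captions should be reviewed by a human before going live.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
result = captioner("product_photo.jpg")  # hypothetical local image file
alt_text = result[0]["generated_text"]
print(f'<img src="product_photo.jpg" alt="{alt_text}">')
```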
PanDaily reports: Alibaba DAMO Academy creates world's largest AI pre-training model, with parameters far exceeding Google and Microsoft. Right, so this is about a model called M6 by Alibaba DAMO Academy, and the parameter count in these models is one trillion to ten trillion, far exceeding the trillion-level models previously released by Google and Microsoft, becoming the world's largest AI pre-training model. I found another article by InfoQ right here, which I had to translate from Chinese. So M6 stands for MultiModality-to-MultiModality Multitask Mega-Transformer. That's why it's called M6. And the whole article is like an homage to Chinese research. The real thing that's hailed here as a breakthrough is the efficiency with which people can train these models. But the parameter count is a little bit tricky, because this model uses a mixture-of-experts architecture, which we can assume to be sparse, and a sparse model with a trillion parameters is not necessarily better than a dense model with 900 billion parameters, given that the network is only activated sparsely. At this point, we don't know exactly. What we know is that the model is multimodal, which means it processes images, it processes text, and so on. One of the inventions highlighted by the article is what they call grouped mixture of experts, or expert prototyping. They say it's so that different groups of mixtures of experts can increase the expression space of the model without changing the parameter scale. No idea what that means. They tout that it can create higher-resolution pictures, like DALL-E, can create fashion, as you see here, can create textual descriptions, find similar images, and so on. Alibaba achieved efficient training of the trillion-parameter M6 model with only 480 V100 cards, reducing energy consumption by more than 80%, and the efficiency is increased by nearly 11 times. Right, so this seems to be the real achievement right here: the investigation into efficient model training. As I said, we don't exactly have better data right now, at least I wasn't able to find any. What is a bit deceptive is that the title says the model has ten times the number of neurons as humans: apparently it has a trillion parameters, and the human brain has 86 billion neurons. But of course, the number of neurons is not equal to the number of parameters. For that you'd need the synapses in the brain, of which there are more than 125 trillion. So no, your parameter count is not larger than the human parameter count quite yet. And even if we get there, the model is probably not going to perform as well as humans just because it has that many parameters. If you people figure out any more about this model, link it down below in the comments. "The scale and design of these models are amazing. This looks like a manifesto to the gradual growth of many Chinese AI research organizations." Yeah, they kick your butt if you don't write this, InfoQ. It's like there's a guy in the corner being like, this is great, isn't it? Isn't it? Excellent journalism, everyone.
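To see why "a trillion parameters" means something different in a mixture-of-experts model, here is a minimal sketch of sparse top-1 gating in PyTorch: every token is routed through only one expert, so only a fraction of the total parameters is active per input. This is the general idea behind sparse MoE, not M6's actual architecture.

```python
# Hedged sketch of a sparse mixture-of-experts layer with top-1 gating.
# Total parameters grow with num_experts, but each token activates only
# one expert, so compute per token stays roughly constant.
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    def __init__(self, dim: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Route each token to exactly one expert.
        scores = self.gate(x)               # (tokens, num_experts)
        expert_idx = scores.argmax(dim=-1)  # (tokens,)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out

layer = Top1MoE(dim=64, num_experts=8)  # 8x the FFN parameters, ~1x the compute
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```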
AnandTech writes: AMD announces the Instinct MI200 accelerator family. So this is AMD's newest incursion into the GPU space. They say they can connect whatever they learn from building CPUs and GPUs together. And I honestly don't understand many of the things that are said right here, or what's supposed to be special. As far as I can understand it, one thing that's special is that their machines have one unified memory for the CPUs and the GPUs, which eliminates the need for shipping data back and forth, which is one of the main bottlenecks in applications when using GPUs. Maybe I'm wrong. Another thing is that the individual parts that you can put together into bigger parts, into bigger servers, are connected using super-duper-fast whatever connections instead of PCIe connections, which makes things yet even faster. So for their biggest configuration, they quote 95.7 teraflops of FP32 matrix operations, and if you go to FP16, 383 teraflops. I'm being told that's a really good thing. I have no idea. But if you're interested in this, if you maybe want to buy one, get in touch with AMD. Please sponsor me. The State of AI Report 2021 is out. This is produced by AI investors Nathan Benaich and Ian Hogarth. Actually, it came out October 12, so this thing has been out for a while, but forgive me for only reporting on it right now. As it says, these two people are investors, so they naturally have a distinct view of the field, which is interesting. The report is divided into various sections, like research trends; it does quite a good job of summarizing what's currently going on in research, where talent is, in which countries, at which universities, and so on. Notably, China seems to be rising quite a bit in pumping out AI graduates, as you can see right here. Now, it's quite a lengthy presentation, but what's really interesting are their predictions for the next 12 months. For example: transformers replace recurrent networks to learn world models with which RL agents surpass human performance in large and rich game environments. That's quite a specific prediction, but it could actually be true, right? Small transformer and CNN hybrid models match the current state of the art on ImageNet top-1 accuracy with 10 times fewer parameters. A new AGI-focused research company is formed with significant backing and a roadmap that's focused on a sector vertical, e.g. developer tools for life science. Well, I guess them being investors, they can just make that happen and then claim their prediction was correct. But it's pretty cool. I'm excited to follow which ones will actually work out and where they are completely wrong. Probably they're under-betting most of these things quite a bit, but you know, that's just my opinion. If you're interested in the more general report, as I said, it's quite interesting, it carries together a lot of data into a neat little package. TechCrunch writes: Landing AI brings in 57 million US dollars for its machine learning operations tools. So Landing AI is a company started by Andrew Ng that has just raised $57 million to build, essentially, an MLOps platform. They're doing what they're calling data-centric AI.
And the whole idea is that things like convolutional neural networks, or machine learning models in general, are as easy to build as downloading a bit of code from GitHub and running it on your dataset. So the real challenge nowadays is really to get the dataset to a quality where you can actually train a good model on it. Their product is essentially this data manager and data labeler tool that helps professionals really label the data. This is all geared towards manufacturing, so here you'd label cracks or dents or whatnot in newly manufactured phones, then you train your model on very little data, and that's then supposed to give you a nice detector for classifying further manufacturing defects. So their idea isn't necessarily to build one big model that solves all the problems, but to provide the different industry players in manufacturing with the tools to build their own models from very little but very high-quality data, so they can essentially get their expertise into these models. I guess that's not a dumb idea. If you're a manufacturer, maybe you want to try Landing Lens. Another startup that has raised a lot of money is Cerebras, raising 250 million US dollars at an over 4 billion US dollar valuation. Cerebras builds these really big chips that are geared specifically towards AI computation. Now, as I said before, I have no clue what's going on in these chip manufacturing processes and what's important and what's not, but these are apparently really, really big chips, everything's connected to everything, memory is super fast and sits with the compute, and yada yada yada. What you need to know is that there are indeed other players than NVIDIA or AMD in the space of providing compute solutions for AI, and that's a good thing. And maybe at some point Cerebras will move away from their giant chips and actually also make consumer products. Who knows? If that happens, it's going to be good for all of us. And if they stay in the big-chip server world, I think it's still good for us, because all of the cloud compute might get cheaper, since there's just more competition. Speaking of cheap, Synced writes: Microsoft India proposes Varuna, a system for scalable and low-cost training of massive deep learning models. This is essentially an engineering paper that details how you can train big models on cheap and unreliable hardware. The system uses both data parallelism and model pipelining: you split up your data batches across different machines, and you also split up your model across different machines. And if you do that in a smart way, you can achieve actual big throughput. Usually, big models have to be trained on what they call hyper-clusters, which means clusters that have very fast interconnects, because in order to do something like an all-reduce, if you have to do layer normalization or batch normalization, I don't remember which one it is, sometimes you need to send data around, sometimes you need to send gradients around, and that costs a lot of compute and bandwidth and so on. So it's very interesting to see that these researchers are able to compete with these big hyper-cluster training procedures and essentially bring that down to heterogeneous clusters of spot instances that can die at any time. It's cool to see that AI training of these big models becomes something like a Kubernetes cluster, where you can just add machines and the system will reconfigure itself to make optimal use of the machines, however fast they may be connected and however long they might be up. So if you're looking for a cheap way to train a 200 billion parameter model, then this might be the way to go.
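For context on why the interconnect matters so much, here is a minimal sketch of the gradient all-reduce at the heart of data parallelism, using torch.distributed. Every worker ships its full gradient every step, which is exactly the bandwidth cost that systems like Varuna try to tame; this is a generic illustration, not Varuna's actual code.

```python
# Hedged sketch of data-parallel gradient averaging with an all-reduce.
# Each of the N workers computes gradients on its own batch shard, then
# all-reduces them; the bandwidth cost of this step is why slow
# interconnects hurt so much. Generic illustration, not Varuna itself.
import torch
import torch.distributed as dist

def average_gradients(model: torch.nn.Module):
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum gradients across all workers, then divide by world size.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

# Inside each worker's training loop (after dist.init_process_group):
#   loss.backward()
#   average_gradients(model)
#   optimizer.step()
```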
Okay, here is a shout-out to a few places. The first shout-out goes to Laura Ruis's website, where she replicates a bunch of things from Yann LeCun's and others' paper called "Learning in High Dimension Always Amounts to Extrapolation". It's a very technical paper, and Laura does a great job here, not only replicating the experiments, but providing really nice background and reasoning, and also the code that she uses to do everything. I just thought this was really neat: interleaving plots, code, math, and so on, really going through all of this, and in the end actually being able to reproduce the plots of the paper. Yippee, there it is, so beautiful. Very reproduce, much similar. If you want to follow Laura, definitely check out her website or her GitHub. Absolutely beautiful work, Laura. Good job. Right, another cool project is Real-Life Punch-Out by Ian Charnas. This is a really well-made video about using body-tracking models and pairing them up with Punch-Out, the N64 game. You can actually play this in the browser: it tracks your arms, and you can punch using various boxing moves and play Punch-Out. Not only that, but Ian actually went ahead and bought many cartridges of the game, as you can see in the background right here, and if you play it in the browser, it will actually use one of those cartridges, because using just a ROM downloaded from the internet would violate the licensing agreements. So every game you play corresponds to a real-life cartridge. As I said, the video is done extremely well, and it's a fun video to watch. Or, if you simply want to try it out, you can go to Ian's website and just play it yourself. Nothing to install, runs in the browser. Excellent. All right, this is the section where I provide some helpful things. First helpful thing: MarkTechPost writes, Google AI introduces GoEmotions, an NLP dataset for fine-grained emotion classification. I've actually shown this in last week's Weights & Biases ad, if you have been following the Weights & Biases ads. This is a dataset where Reddit comments are annotated with, I believe, 28 different emotions contained in the comments. It's not only one emotion per comment; technically, any emotion could or could not appear in any comment. In total, there are 58,000 Reddit comments classified into 27 emotion categories, 12 positive, 11 negative, 4 ambiguous, plus one neutral category, and with that it adds up to 28. I was right. The dataset creation process is detailed here: how they went about it, how they balanced the data, paying attention to the fact that Reddit isn't exactly a good replica of the entire world, and so on. If you're interested, you can give this article a read, you can also look at the paper that goes along with the dataset, and you can use the dataset if you want to try your hand at emotion detection. I have to say, it's gotten a bit tiring to see NLP tutorials always doing sentiment classification where it's just positive or negative, and this might just provide a little bit of a more challenging task.
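If you want to play with it, the dataset appears to be available on the Hugging Face hub; the dataset id "go_emotions" is my assumption, so double-check it there. A minimal multi-label loading sketch:

```python
# Hedged sketch: loading GoEmotions for multi-label emotion classification.
# The hub id "go_emotions" is an assumption; verify it on the dataset hub.
from datasets import load_dataset

ds = load_dataset("go_emotions")  # splits: train / validation / test
example = ds["train"][0]
label_names = ds["train"].features["labels"].feature.names

print(example["text"])
# Each comment can carry several emotions at once, hence a list of labels.
print([label_names[i] for i in example["labels"]])
```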
Next, there is this Language Interpretability Tool. It's open source, and it's for visualizing and understanding NLP models. It provides various things: you can look at embedding spaces of NLP tasks, it can analyze things like classification and regression, you can look at attention heads, analyze which parts of the input are important for which outputs, and so on. All in all, it's quite a rich tool, and I encourage you to check it out if you're into language interpretability, or if you just want to see how your models do the things they're doing. Code is available, the tool is available. Okay, last week we reported on ruDALL-E, the Russian DALL-E model, and now, apparently, the large model is available for download, as one Reddit comment says, or much rather, the edit of the comment says that it will be available on December 1. So expect that soon. yeemachine on Twitter says: after a year in dev, I'm happy to release the core of my VTuber apps. Now, VTubers are a special sort of thing that I have never really touched on, but this seems to be a large community that transforms their body movements onto digital anime avatars, as you can see right here. So this also uses body pose tracking, and apparently also face tracking, in order to make your avatar do what you're doing. Code is available, and it's not only for the face and upper body: you can also track your entire body movements and map them onto characters, as you can see right here. It can do facial landmark tracking such that it really replicates your facial expressions. So there's never been a better time to become a VTuber. Check out Kalidokit on GitHub if you're interested. There's an article by Newsfile Corporation on Yahoo Finance that writes that artificial intelligence now makes it possible for investors to find promising new hidden-gem meme tokens automatically. This isn't necessarily what you think. You think, ah, there's a company that tells me which meme tokens are good so I can buy them? No, no, no, no, no. See, this is an actual token itself. You put money into the token, and then the token selects projects in which the money is to be invested. These projects, it says, are automatically selected using a special AI-based sniper bot. So the AI will look at all the meme tokens, the Doge and the Shiba Inu and the Squid Game tokens, and it will predict which ones will go up, and then it will take all the money that is invested into the FINU token, put it into those tokens, and then pay out the winnings to the holders of the FINU token. I mean, look at this. For an enhanced version of this graphic, please... yes, I want an enhanced version. Oh, wow, that's enhanced. That is, that is so enhanced. Absolutely. Currently there is a website for this, and it says: vote for FINU, help the price pump, and in the back there is a Doge. Okay, people who want to make a quick buck using meme tokens that have absolutely no value whatsoever are encouraged to buy a meme token. Excellent. Now, I'm not saying this can't be done. Meme tokens are essentially like fashion: there's no reason why this or that particular fashion should be in or out next year, and yet it still happens, and there might be ways to predict it. But still, whether or not this is the way to go, I can't tell. So, I've mentioned This Shoe Does Not Exist last week.
But there's also This Sneaker Does Not Exist. Look at that. And this is pretty cool: it's a grid of AI-generated sneakers. You can click on one, and then you can apparently edit that sneaker. You can go from normal to futuristic, you can go high creativity, that's very creative, you can change up the colors a little bit. Very cool, very functional. Look at that one. Yeah, futuristic, creative, light color. I mean, it's not super futuristic, but yeah. So shout-out to thissneakerdoesnotexist.com. Check it out. And that was already it for this week's ML News. I hope you had fun. Hit subscribe if you liked it. We're only 105,900,000 subscribers behind PewDiePie. We can totally catch them: if we really do our jobs, tell three people, they're going to tell three people, it's going to be fine. See you next Monday. Bye bye.
[ { "start": 0, "end": 9.68, "text": " Hold on, this video is sponsored by weights and biases. Weights and biases is your one" }, { "start": 9.68, "end": 14.96, "text": " stop shop for all your machine learning needs. It will track your experiments with a single" }, { "start": 14.96, "end": 20.52, "text": " line of code will upload automatically all your logs, all your configurations, everything" }, { "start": 20.52, "end": 26.44, "text": " to your cloud, it will automatically grab all the output, all the metrics, all the configurations" }, { "start": 26.44, "end": 32.44, "text": " of your experiments and store that in one neat location. So you can see your experiments," }, { "start": 32.44, "end": 36.88, "text": " you can track them wherever they run, you can compare among the experiments, but you" }, { "start": 36.88, "end": 41.28, "text": " can go further, you can then tune your hyper parameters according to the results of those" }, { "start": 41.28, "end": 46.84, "text": " experiments. And all of this is done automatically in a distributed way, you can literally sit" }, { "start": 46.84, "end": 52.36, "text": " on your toilet on your smartphone and tune your hyper parameters and start new experiments." }, { "start": 52.36, "end": 57.08, "text": " But it's not only experiment tracking and hyper parameter tuning weights and biases" }, { "start": 57.08, "end": 62.64, "text": " has tools for the entire pipeline of machine learning research from the initial idea up" }, { "start": 62.64, "end": 67.38, "text": " until the deployment and beyond that when you actually want to track what you've deployed" }, { "start": 67.38, "end": 72.24, "text": " weights and biases has cool methods to track all of your data set and their dependencies" }, { "start": 72.24, "end": 76.24, "text": " to each other, as well as your models and all kinds of other artifacts that you might" }, { "start": 76.24, "end": 82.24, "text": " produce a very powerful visualizations for all the inputs and outputs of your pipelines," }, { "start": 82.24, "end": 86.61999999999999, "text": " as well as the models themselves. All of this runs in the cloud. But if you're concerned" }, { "start": 86.61999999999999, "end": 91.83999999999999, "text": " about privacy, there are options to self host the system is free for personal use and for" }, { "start": 91.83999999999999, "end": 97.88, "text": " academics and they have great plans for enterprises, small teams, large teams doesn't matter. So" }, { "start": 97.88, "end": 101.91999999999999, "text": " thank you very much weights and biases for sponsoring this video. If you don't know them" }, { "start": 101.91999999999999, "end": 106.89999999999999, "text": " yet, absolutely check them out. It's free, it'll make your life a whole lot easier. Now" }, { "start": 106.9, "end": 115.80000000000001, "text": " let's get into the video. Welcome, welcome to ml news. Let's dive into our first story" }, { "start": 115.80000000000001, "end": 120.84, "text": " group of researchers based in Switzerland have trained city which is a French language" }, { "start": 120.84, "end": 126.96000000000001, "text": " model. This is a model based on GPTJ to 6 billion parameter model that is a language model in" }, { "start": 126.96000000000001, "end": 131.24, "text": " French. The headline is write French without speaking French, which is pretty much a recipe" }, { "start": 131.24, "end": 136.72, "text": " of how I passed high school. 
So the cool thing about this is that it can do the tasks that" }, { "start": 136.72, "end": 142.12, "text": " you're used to from things like GPT three, but with a special focus on French. So it" }, { "start": 142.12, "end": 147.68, "text": " achieves a better perplexity on French text than GPT three, apparently lower toxicity," }, { "start": 147.68, "end": 153.57999999999998, "text": " whatever that means is better at translating from and to French, and it's better at various" }, { "start": 153.57999999999998, "end": 159.32, "text": " other NLP tasks out of the box. And if you don't know what city means, city is this little" }, { "start": 159.32, "end": 164.16, "text": " thing that French people put at the bottom of some of their letters, also some other" }, { "start": 164.16, "end": 168.84, "text": " languages as I am being told, but just quite annoying because you never know where on the" }, { "start": 168.84, "end": 173.74, "text": " keyboard it is. So being quite annoying seems like a great name for a French language model." }, { "start": 173.74, "end": 177.68, "text": " The cool thing is not only is the model open source, you can download a checkpoint, the" }, { "start": 177.68, "end": 182.56, "text": " code is open source, but also you can play with it directly in the browser, there's a" }, { "start": 182.56, "end": 187.12, "text": " little app and there are a bunch of prompts that are already built in, for example, classification" }, { "start": 187.12, "end": 192.92, "text": " of some stuff like what is FedEx FedEx is logistics company that is correct, Amazon" }, { "start": 192.92, "end": 197.92, "text": " is an e commerce and technology company that is all correct. Now my French is limited to" }, { "start": 197.92, "end": 210.54, "text": " be honest, Jay, who Bly a more baguette, Joe sweet, the Zola, I think it means I lost my" }, { "start": 210.54, "end": 217.79999999999998, "text": " baguette and I'm very sad. The model says meme see, ne pas d'explication logi. I don't" }, { "start": 217.8, "end": 224.48000000000002, "text": " have a logical explanation for why I lost my baguette. Is it maybe I forgot my baguette?" }, { "start": 224.48000000000002, "end": 230.88000000000002, "text": " I don't know. Well, in any case, it's a French language model, you get it. What is interesting" }, { "start": 230.88000000000002, "end": 236.56, "text": " is that among the parameters, it says that a German one is coming soon. So keep an eye" }, { "start": 236.56, "end": 243.8, "text": " out for that. Facebook AI on their blog says the first ever multi lingual model to win" }, { "start": 243.8, "end": 249.72, "text": " a WMT beating out bilingual models. So WMT is this yearly competition essentially to" }, { "start": 249.72, "end": 256.22, "text": " do machine translation. This is a corpus of data sets, but then also every year the competition" }, { "start": 256.22, "end": 261.96000000000004, "text": " hosts human expert translators that rate the translations of the machine translation systems." }, { "start": 261.96000000000004, "end": 266.16, "text": " So the machines aren't able to hyper optimize on the data sets, but really have to please" }, { "start": 266.16, "end": 271.3, "text": " the humans. Now first thing, why is this in the AR VR category? I don't know. 
In any case," }, { "start": 271.3, "end": 276.16, "text": " it's quite remarkable because one would think that given that all the tasks are bilingual," }, { "start": 276.16, "end": 280.92, "text": " that bilingual models that can be tailored to one specific language pair would be ahead" }, { "start": 280.92, "end": 286.40000000000003, "text": " right here. But as Facebook AI shows, because multi lingual models can ingest essentially" }, { "start": 286.40000000000003, "end": 291.76, "text": " much more data into them. So the French English translations are also informed by the German" }, { "start": 291.76, "end": 296.40000000000003, "text": " data that comes in. And because it's able to make use of so much more data, it can in" }, { "start": 296.4, "end": 302.38, "text": " the end outperform models that have been trained for particular language pairs. Now multi lingual" }, { "start": 302.38, "end": 308.12, "text": " ity is not the only thing that's good about this model. The machine translation community" }, { "start": 308.12, "end": 313.32, "text": " has over the years accrued various tricks such as back translation to make use of monolingual" }, { "start": 313.32, "end": 319.2, "text": " data, ensembling, and so on. So this is really an engineering effort. But it's cool to see" }, { "start": 319.2, "end": 324.4, "text": " this overlap point where for the first time ever a single multi lingual model is better" }, { "start": 324.4, "end": 330.44, "text": " than many, many bilingual models. And that's excellent, not only because it's higher performing," }, { "start": 330.44, "end": 335.47999999999996, "text": " but it also means that it provides us easier access to work with languages that have very" }, { "start": 335.47999999999996, "end": 340.23999999999995, "text": " low resources that maybe are only spoken by a very small amount of people or that have" }, { "start": 340.23999999999995, "end": 345.12, "text": " no written form at all, like Swiss German, for example. So excellent development, there" }, { "start": 345.12, "end": 348.78, "text": " is a paper, the code is available. And if you want to learn all the tricks, give it" }, { "start": 348.78, "end": 349.78, "text": " a read." }, { "start": 349.78, "end": 356.71999999999997, "text": " You is a new search engine that has been launched by Richard soccer previously the head of AI" }, { "start": 356.71999999999997, "end": 362.05999999999995, "text": " at Salesforce. And this is supposed to be a direct competitor to the Google search engine," }, { "start": 362.05999999999995, "end": 367.03999999999996, "text": " you advertise itself as the private search engine that summarizes the web for you. So" }, { "start": 367.03999999999996, "end": 373.03999999999996, "text": " there's two promises here, privacy, and summarization in whatever form, they say it helps you get" }, { "start": 373.03999999999996, "end": 379.05999999999995, "text": " things done, get news, check GitHub, compose a tweet all from your search engine, for whatever" }, { "start": 379.06, "end": 383.4, "text": " reason you want to compose a tweet from your search engine. But there you go. There's a" }, { "start": 383.4, "end": 389.4, "text": " big emphasis on privacy, you can choose between a personalized or a truly private experience," }, { "start": 389.4, "end": 394.88, "text": " you.com never sells your data to advertisers. And also they promise no ad targeting. 
Now" }, { "start": 394.88, "end": 399.16, "text": " actually, when you sign up, the first thing that they want to make you do is install an" }, { "start": 399.16, "end": 403.96, "text": " extension. If I click this button, it leads me straight into the Chrome Web Store. So" }, { "start": 403.96, "end": 410.23999999999995, "text": " I'm gonna take this with a grain of salt right here, someone promises me privacy, no targeting," }, { "start": 410.23999999999995, "end": 416.59999999999997, "text": " and so on. No, unless this is provably the case, I'm not going to trust any of those" }, { "start": 416.59999999999997, "end": 421.47999999999996, "text": " promises. So the second big selling point is this summarize the web. And I was intrigued" }, { "start": 421.47999999999996, "end": 426.2, "text": " by that, like how is this search engine gonna summarize the web for me, this sounds really" }, { "start": 426.2, "end": 431.2, "text": " cool. So I tried out a bunch of things like, okay, they said I could check news, for example." }, { "start": 431.2, "end": 436.09999999999997, "text": " All right, news, let me zoom out a little bit here. So the interface that you gives" }, { "start": 436.09999999999997, "end": 442, "text": " you is this kind of grouped interface. So there are web results on top right here, there" }, { "start": 442, "end": 448.58, "text": " is a section for news. And then there are various of these subcategories right here." }, { "start": 448.58, "end": 453.86, "text": " But honestly, I don't see any summarization like any summarize the web for me. So let" }, { "start": 453.86, "end": 459.32, "text": " me search for something I would like to have summarized Abraham Lincoln and the Civil War." }, { "start": 459.32, "end": 463.8, "text": " No, it just gives me the Wikipedia page and a bunch of web results and a bunch of Reddit" }, { "start": 463.8, "end": 469.44, "text": " results and a bunch of these quick facts right here. Now one thing seems to be these shortcuts" }, { "start": 469.44, "end": 474.18, "text": " these apps right here. So there are various apps, for example, the quick facts app, which" }, { "start": 474.18, "end": 478.84, "text": " we have down here, or I guess the Wikipedia app, which is up here. So the search engine" }, { "start": 478.84, "end": 483.12, "text": " seems to be such that other developers can come in and write apps for it. So you can" }, { "start": 483.12, "end": 488.6, "text": " install apps in your search engine. And those will take up one of these bars. As you can" }, { "start": 488.6, "end": 493.56, "text": " see, there are apps for archive, Walmart, all kinds of things. There's also one for" }, { "start": 493.56, "end": 500.96000000000004, "text": " GitHub. But I haven't seen yet this summarize what was Lincoln's role in the Civil War." }, { "start": 500.96000000000004, "end": 505.04, "text": " Again, I just get a bunch of search results. I don't see exactly how summarize the web" }, { "start": 505.04, "end": 508.88, "text": " should be anything like this. So I was also exploring a bit of different features right" }, { "start": 508.88, "end": 513.96, "text": " here. For example, compose a tweet. So I tried this previously, it actually told me to sign" }, { "start": 513.96, "end": 519.6800000000001, "text": " into Twitter. So apparently, you can write tweets from here how to sort a list in Python." 
}, { "start": 519.6800000000001, "end": 524.5400000000001, "text": " Now this gets into a little bit more interesting things, they have plugins for Stack Overflow," }, { "start": 524.5400000000001, "end": 530.4000000000001, "text": " and also W three schools. So they show the results from these sites in quite nice cards" }, { "start": 530.4000000000001, "end": 535.72, "text": " with snippets and so on. For Stack Overflow, there's also a sidebar, which for some reason" }, { "start": 535.72, "end": 540.74, "text": " doesn't show up right now. There's also this code completion engine right here. So I entered" }, { "start": 540.74, "end": 546.12, "text": " how to sort a list of strings in Python. And it gives me a bunch of code completion that" }, { "start": 546.12, "end": 551.12, "text": " are apparently generated by some sort of code model. I mean, that's fine. So I've tried" }, { "start": 551.12, "end": 555.38, "text": " a bunch of things with this search engine, but I really haven't seen this summarize the" }, { "start": 555.38, "end": 560.34, "text": " web for you in any particular way. This seems to be a search engine where other people can" }, { "start": 560.34, "end": 565.8, "text": " write apps for it. And then it'll probably send your search query to those apps. And" }, { "start": 565.8, "end": 570.8, "text": " the apps can give you useful results. Now honestly, it seems like a big benefit for" }, { "start": 570.8, "end": 575.06, "text": " sort of like the big websites right here. For example, W three schools is integrated" }, { "start": 575.06, "end": 580.8399999999999, "text": " prominently as you can see, tutorials point is integrated prominently Coursera Stack Overflow," }, { "start": 580.8399999999999, "end": 585.3, "text": " this is specifically for code. But if you look at the other apps that exists, it's essentially" }, { "start": 585.3, "end": 590.6999999999999, "text": " all the big websites. So I'm not sure if I actually want this in a search engine, I generally" }, { "start": 590.6999999999999, "end": 595.4, "text": " want the most relevant things and I don't necessarily want the relevant things from" }, { "start": 595.4, "end": 599.9399999999999, "text": " the biggest sites while I see the potential of integrating all of these things into my" }, { "start": 599.9399999999999, "end": 605.5799999999999, "text": " search engine is not that useful. Honestly, how many heads does a Hydra have? I quite" }, { "start": 605.5799999999999, "end": 611.0799999999999, "text": " like this shortcut right here. So this little G, it brings you to this website that you" }, { "start": 611.0799999999999, "end": 615.28, "text": " might have heard of. But this is also a pretty good search engine. And it generally gives" }, { "start": 615.28, "end": 619.88, "text": " me the stuff I'm looking for. That being said, you is public now and it is in beta. So you" }, { "start": 619.88, "end": 624.52, "text": " know, give it a little slack until it really full out. And maybe this concept of having" }, { "start": 624.52, "end": 630.96, "text": " many apps integrate into your searches provided by other people and not all by the same company" }, { "start": 630.96, "end": 637.76, "text": " will be something for the future. Who knows? DeepMind releases open source Arnheim, a learnable" }, { "start": 637.76, "end": 643.5, "text": " visual grammar for generating paintings. 
So bouncing off of the success of people experimenting" }, { "start": 643.5, "end": 649.14, "text": " with clip models such as VQ GAN plus clip or clip guided diffusion, or any of these models" }, { "start": 649.14, "end": 654.5, "text": " that generate stunning images by using clip, DeepMind has done something a little bit different," }, { "start": 654.5, "end": 660.46, "text": " namely, instead of using a GAN or a diffusion, they are using a what they call a visual grammar." }, { "start": 660.46, "end": 664.92, "text": " So you're able to give some primitives to the model of how it can compose an image." }, { "start": 664.92, "end": 671.16, "text": " And then we'll use that in order to please clip in order to do clip guided image generation." }, { "start": 671.16, "end": 676.16, "text": " So one application of this is, for example, here, you give the model a grammar of brush" }, { "start": 676.16, "end": 681.36, "text": " strokes. So you tell it that it can do some brush strokes in some various ways, various" }, { "start": 681.36, "end": 685.88, "text": " colors, various thicknesses, and so on. You give a bunch of optimization parameters, and" }, { "start": 685.88, "end": 691.76, "text": " it can generate pictures from textual descriptions. It looks pretty cool, I have to say, and it" }, { "start": 691.76, "end": 696.04, "text": " has some nice controllable parameters. Here, you can see the evolution of such a picture" }, { "start": 696.04, "end": 700.58, "text": " as it develops over time, you can see that the model refines on how exactly it lays its" }, { "start": 700.58, "end": 707.1800000000001, "text": " brush strokes until it reaches a final conclusion. photo realistic chicken. Yeah. So the code" }, { "start": 707.18, "end": 713.3599999999999, "text": " is available along with two colabs where you can try it out for yourself. Oriole vinyls" }, { "start": 713.3599999999999, "end": 719.9399999999999, "text": " has tweeted out this picture right here of young LeCun made up entirely of MNIST digits." }, { "start": 719.9399999999999, "end": 724.7199999999999, "text": " So the model here hasn't gotten brushstrokes as option to perform drawings, but just MNIST" }, { "start": 724.7199999999999, "end": 729.52, "text": " digits in various colors. And you know, it looks pretty sweet. So check out paper and" }, { "start": 729.52, "end": 737, "text": " code and blog post and give it a try. Wired writes this company tapped AI for its website" }, { "start": 737, "end": 742.98, "text": " and landed in court. So this is an article about a company that is being sued because" }, { "start": 742.98, "end": 748.28, "text": " its website does not conform to the accessibility standards of the W three C consortium. The" }, { "start": 748.28, "end": 753.56, "text": " company in question is called IBOBS. And it used this other company called accessibility" }, { "start": 753.56, "end": 759.72, "text": " to make its site more accessible. Now, if you make a website, you can do that with various" }, { "start": 759.72, "end": 764.2, "text": " frameworks. 
But in order to make it accessible to for example, visually impaired people," }, { "start": 764.2, "end": 768.44, "text": " you need to annotate the various parts of your website with their meaning you give alt" }, { "start": 768.44, "end": 773.2800000000001, "text": " text to images, you define an order of focus, for example, in forms, they should all be" }, { "start": 773.2800000000001, "end": 777.6, "text": " navigatable by your keyboard by using the tab key, for example, auto complete should" }, { "start": 777.6, "end": 782, "text": " work and so on and so on. Now there are already many tools to help you with that. But it's" }, { "start": 782, "end": 788.72, "text": " still a very, very high workload for developers to ship out websites that are also accessible" }, { "start": 788.72, "end": 794.12, "text": " to all the people that want to use them. So this company accessibility says that it can" }, { "start": 794.12, "end": 798.96, "text": " simplify the work of making websites accessible to people with impaired vision or other challenges" }, { "start": 798.96, "end": 804, "text": " are replacing a costly manual process with an automated state of the art AI technology." }, { "start": 804, "end": 809.2, "text": " However, this technology doesn't seem to be working all that well in all cases, which" }, { "start": 809.2, "end": 814.36, "text": " is something you could expect, right? So this whole article doesn't only detail this case," }, { "start": 814.36, "end": 818.8000000000001, "text": " but it says it's a growing trend in recent years, companies use these AI softwares to" }, { "start": 818.8000000000001, "end": 823.38, "text": " make their websites more accessible, these don't work really well, that makes the websites" }, { "start": 823.38, "end": 828, "text": " worse for visually impaired people compared to when manual labor is used to do the same" }, { "start": 828, "end": 833.34, "text": " thing and so on. Noteworthy the guidelines that you have to comply with is more than" }, { "start": 833.34, "end": 839.2, "text": " 100 pages when printed, it includes such things as alt text for images and video, clear use" }, { "start": 839.2, "end": 843.28, "text": " of contrast and color, ensuring that features like forms and menus are navigatable using" }, { "start": 843.28, "end": 847.76, "text": " only keyboard without the use of a mouse or finger and so on. Now safe to say this is" }, { "start": 847.76, "end": 853.4, "text": " a difficult problem, right? Of course, AI solutions are going to be largely subpar when" }, { "start": 853.4, "end": 857.88, "text": " it comes to this compared to really dedicated humans doing this. However, they're probably" }, { "start": 857.88, "end": 863.16, "text": " going to be better than just the developers doing it on the side as they're coding the" }, { "start": 863.16, "end": 868.12, "text": " website under time pressure. And they're certainly going to be better than nothing at all. Like" }, { "start": 868.12, "end": 872.92, "text": " I get it, the web sucks for visually impaired people interacting with a medium that is this" }, { "start": 872.92, "end": 878.36, "text": " visual when your visuals don't work is bad, it's it's a bad experience. 
And it widens" }, { "start": 878.36, "end": 883.3199999999999, "text": " the divide between people who have good vision and people who have poor vision. I get this," }, { "start": 883.3199999999999, "end": 887.4799999999999, "text": " and I also get that we want to make an effort as a society to include visually impaired" }, { "start": 887.4799999999999, "end": 891.88, "text": " people more, to make websites more accessible, and so on. But I don't see when the standard" }, { "start": 891.88, "end": 897.92, "text": " has become that unless a solution works 100% of the time, a lawsuit should be filed. Like," }, { "start": 897.92, "end": 903.4, "text": " surely having a crappy AI-annotated website for visually impaired people is better than" }, { "start": 903.4, "end": 907.5999999999999, "text": " not having an annotated website at all. On the other hand, you can absolutely see" }, { "start": 907.5999999999999, "end": 912.68, "text": " that if we as a society decide, well, just use the AI tool for this, then companies are" }, { "start": 912.68, "end": 918.4, "text": " going to opt for that and actually avoid putting in the work of making websites really accessible." }, { "start": 918.4, "end": 923.4, "text": " So it is a hard problem, and I don't have the clear answer for this. But I would certainly" }, { "start": 923.4, "end": 928.8199999999999, "text": " say that AI technology can help; it's better than nothing. It gives you sort of a lower" }, { "start": 928.8199999999999, "end": 933.88, "text": " bound on accessibility on a website, even if there are some mistakes, because humans" }, { "start": 933.88, "end": 939.6, "text": " make mistakes too. But here is what I find funny. There is apparently a document, a sort" }, { "start": 939.6, "end": 945.34, "text": " of petition, where researchers and companies and so on can put their name down to ask" }, { "start": 945.34, "end": 951.28, "text": " other companies not to use these AI tools. It says signers include contributors" }, { "start": 951.28, "end": 957.12, "text": " to W3C guidelines and employees at Microsoft, Apple and Google. Automated detection and" }, { "start": 957.12, "end": 961.72, "text": " repair of accessibility problems is not reliable enough to bring a site into compliance, the" }, { "start": 961.72, "end": 966.4399999999999, "text": " document says, accusing some vendors of deceptive marketing. And here it comes. The site was" }, { "start": 966.4399999999999, "end": 973.04, "text": " started by Karl Groves, founder of the accessibility consultancy Tenon.io, who provided a withering" }, { "start": 973.04, "end": 980.76, "text": " 35-page analysis of accessiBe's software for Murphy's lawsuit against Eyebobs. So Eyebobs," }, { "start": 980.76, "end": 986.92, "text": " being sued, used accessiBe's software, and now Tenon.io's Karl Groves has written" }, { "start": 986.92, "end": 993.56, "text": " a 35-page analysis of this software. Groves said he surveyed a total of about 1000 pages" }, { "start": 993.56, "end": 999.36, "text": " from 50 websites using the startup accessiBe's technology and found a median" }, { "start": 999.36, "end": 1006.64, "text": " of 2300 violations of W3C guidelines for each site. Here it comes. Groves says that this" }, { "start": 1006.64, "end": 1012.72, "text": " is a significant undercount because most of the guidelines can only be checked by an expert" }, { "start": 1012.72, "end": 1021.26, "text": " manual analysis. So wait, did I understand this correctly? 
Did you analyze 1000 websites" }, { "start": 1021.26, "end": 1028.34, "text": " and, either automatically or by non-expert humans, figured out a lower bound on the number" }, { "start": 1028.34, "end": 1032.92, "text": " of violations of the standards? And that's not actually the full standards, but it's a lower" }, { "start": 1032.92, "end": 1038.6000000000001, "text": " bound, and therefore it's better than nothing at all? Really, you did that? And you provide" }, { "start": 1038.6000000000001, "end": 1044.76, "text": " that as evidence in a lawsuit? Hypocrite, hypocrite, hypocrite, hypocrite, hypocrite," }, { "start": 1044.76, "end": 1049.68, "text": " hypocrite. In his report on accessiBe, Groves cited an image of a model wearing a white" }, { "start": 1049.68, "end": 1055.26, "text": " dress for sale on an e-commerce site. The alternative text provided, apparently generated by accessiBe's" }, { "start": 1055.26, "end": 1063, "text": " technology, was grass, nature and summer. Oh no, an anecdote. Wow. And there you have it." }, { "start": 1063, "end": 1068.64, "text": " The true story here is that complaining is easier than doing, and we'll always be able" }, { "start": 1068.64, "end": 1074.04, "text": " to write articles about AI systems that don't work 100% yet. As I said, I don't have the" }, { "start": 1074.04, "end": 1078.3799999999999, "text": " definite solution to this problem. It is a hard problem; it's a balance between pushing" }, { "start": 1078.3799999999999, "end": 1083.48, "text": " technology and making it accessible to all the people there are. But how funny. That's" }, { "start": 1083.48, "end": 1091.2, "text": " all I'm gonna say. Pandaily reports: Alibaba DAMO Academy creates world's largest AI pre-training" }, { "start": 1091.2, "end": 1096.88, "text": " model, with parameters far exceeding Google and Microsoft. Right, so this is about a model" }, { "start": 1096.88, "end": 1104.08, "text": " called M6 by Alibaba DAMO Academy. And the parameter count in these models is one trillion" }, { "start": 1104.08, "end": 1108.68, "text": " to 10 trillion, far exceeding the trillion-level models previously released by Google" }, { "start": 1108.68, "end": 1113.92, "text": " and Microsoft, becoming the world's largest AI pre-training model. I found another article" }, { "start": 1113.92, "end": 1119.72, "text": " by InfoQ right here, which I had to translate from Chinese. So M6 stands for MultiModality" }, { "start": 1119.72, "end": 1126.5600000000002, "text": " to MultiModality Multitask Mega-transformer: six M's, that's why it's called M6. And the whole" }, { "start": 1126.5600000000002, "end": 1132.24, "text": " article is like an homage to Chinese research. The real thing that's hailed here as a breakthrough" }, { "start": 1132.24, "end": 1137.0800000000002, "text": " is the efficiency with which people can train these models. But the parameter count is a" }, { "start": 1137.08, "end": 1142.08, "text": " little bit tricky, because this model uses a mixture-of-experts architecture, which we" }, { "start": 1142.08, "end": 1147.24, "text": " can assume maybe to be sparse. And therefore a sparse model with a trillion parameters" }, { "start": 1147.24, "end": 1152.82, "text": " is not necessarily better than a dense model with 900 billion parameters, given that the" }, { "start": 1152.82, "end": 1156.96, "text": " network is only activated sparsely. 
At this point, we don't exactly know. What we know" }, { "start": 1156.96, "end": 1162.56, "text": " is that the model is multimodal, which means it processes images, it processes text, and" }, { "start": 1162.56, "end": 1167.12, "text": " so on. One of the inventions highlighted by the article is what they call grouped mixture" }, { "start": 1167.12, "end": 1172.6799999999998, "text": " of experts, or what they call expert prototyping. They say it's so that different groups of" }, { "start": 1172.6799999999998, "end": 1178.2, "text": " mixtures of experts can increase the expression space of the model without changing the parameter" }, { "start": 1178.2, "end": 1184.08, "text": " scale. No idea what that means. So they tout that it can create more high-resolution pictures" }, { "start": 1184.08, "end": 1189.36, "text": " like DALL-E, can create fashion, as you see here, can create textual descriptions, find" }, { "start": 1189.36, "end": 1194.8, "text": " similar images, and so on. Alibaba achieved efficient training of the trillion-parameter M6 model" }, { "start": 1194.8, "end": 1201.2199999999998, "text": " with only 480 V100 cards, reducing energy consumption by more than 80%. And the efficiency" }, { "start": 1201.2199999999998, "end": 1205.6799999999998, "text": " is increased by nearly 11 times. Right. So this seems to be the real achievement right" }, { "start": 1205.6799999999998, "end": 1211.7199999999998, "text": " here: the investigation into efficient model training. As I said, we don't exactly have" }, { "start": 1211.7199999999998, "end": 1216.04, "text": " better data right now; at least I wasn't able to find it. What is a bit deceptive is that" }, { "start": 1216.04, "end": 1222.04, "text": " the title says that the model has 10 times the number of neurons as humans. So apparently" }, { "start": 1222.04, "end": 1229.52, "text": " it has, what, a trillion parameters, and the human brain has 86 billion neurons. Yet, of course," }, { "start": 1229.52, "end": 1233.3999999999999, "text": " the number of neurons is not equal to the number of parameters; for that you need the" }, { "start": 1233.3999999999999, "end": 1238.32, "text": " synapses in the brain, which are more than 125 trillion. So no, your parameter count" }, { "start": 1238.32, "end": 1243.2, "text": " is not larger than the human parameter count quite yet. And even if we get there, it's probably" }, { "start": 1243.2, "end": 1247.92, "text": " not going to perform as well as humans just because you have that many parameters. If" }, { "start": 1247.92, "end": 1252.8, "text": " you people figure out any more about this model, link it down below in the comments." }, { "start": 1252.8, "end": 1258.6000000000001, "text": " Let me know. 'The scale and design of these models are amazing. This looks like a manifesto to" }, { "start": 1258.6000000000001, "end": 1264.46, "text": " the gradual growth of many Chinese AI research organizations.' Yeah, they kick your butt if" }, { "start": 1264.46, "end": 1270.68, "text": " you don't write this, InfoQ. This is like there's a guy in the corner being like, this" }, { "start": 1270.68, "end": 1277.24, "text": " is great, isn't it? Isn't it? Excellent journalism, everyone." }, { "start": 1277.24, "end": 1283.4, "text": " AnandTech writes: AMD announces the Instinct MI200 accelerator family. 
So this is AMD's" }, { "start": 1283.4, "end": 1289.46, "text": " newest incursion into the GPU space; they say they can connect whatever they learned from" }, { "start": 1289.46, "end": 1296.0800000000002, "text": " building CPUs and GPUs together. And I honestly don't understand many of the things that are" }, { "start": 1296.08, "end": 1300.8, "text": " said right here, or what's supposed to be special. So as far as I can understand it, one thing" }, { "start": 1300.8, "end": 1306.8799999999999, "text": " that's special is that their machines have like one memory for the CPUs and the GPUs," }, { "start": 1306.8799999999999, "end": 1311.8799999999999, "text": " which eliminates the need of shipping data back and forth, which is one of the main bottlenecks" }, { "start": 1311.8799999999999, "end": 1317.6799999999998, "text": " in applications when using GPUs. Maybe I'm wrong. Another thing is that the individual" }, { "start": 1317.6799999999998, "end": 1322.48, "text": " parts that you can put together into bigger parts, into bigger servers, are connected" }, { "start": 1322.48, "end": 1327.84, "text": " using super duper fast whatever connections instead of PCIe connections, which makes things" }, { "start": 1327.84, "end": 1334.16, "text": " yet even faster. So for their biggest servers, they have 95.7 teraflops of floating point" }, { "start": 1334.16, "end": 1341.32, "text": " 32 matrix operations. And if you go to FP16, they have 383 teraflops. I'm being told" }, { "start": 1341.32, "end": 1346, "text": " that's a really good thing. I have no idea. But if you're interested in this, if you maybe" }, { "start": 1346, "end": 1351.24, "text": " want to buy one, get in touch with AMD. Please sponsor me." }, { "start": 1351.24, "end": 1357.76, "text": " The State of AI Report 2021 is out. This is produced by AI investors Nathan Benaich and" }, { "start": 1357.76, "end": 1362.86, "text": " Ian Hogarth. So actually, it's from October 12, so this thing has been out for a while, but" }, { "start": 1362.86, "end": 1368.92, "text": " forgive me for only reporting on this right now. So as it says, these two people are investors," }, { "start": 1368.92, "end": 1374.04, "text": " so they naturally have a distinct view on the field, which is interesting, right? So" }, { "start": 1374.04, "end": 1379.44, "text": " it's divided into various sections like research trends. It does quite a good job of summarizing" }, { "start": 1379.44, "end": 1385.0800000000002, "text": " sort of what's going on currently in research, where talent is, in which countries, at which" }, { "start": 1385.0800000000002, "end": 1391.76, "text": " universities, and so on. Notably, China seems to be rising quite a bit in pumping out AI" }, { "start": 1391.76, "end": 1396.28, "text": " graduates, as you can see right here. Now, it's quite a lengthy presentation. But what's" }, { "start": 1396.28, "end": 1401.4, "text": " really interesting is their predictions for the next 12 months. For example: transformers" }, { "start": 1401.4, "end": 1408.0800000000002, "text": " replace recurrent networks to learn world models with which RL agents surpass human performance" }, { "start": 1408.08, "end": 1412.1599999999999, "text": " in large and rich game environments. That's quite a specific prediction, but could actually" }, { "start": 1412.1599999999999, "end": 1416.9399999999998, "text": " be true, right? 
Small transformers and CNN hybrid models match current state of the art" }, { "start": 1416.9399999999998, "end": 1422.6799999999998, "text": " on ImageNet top-1 accuracy with 10 times fewer parameters. A new AGI-focused research" }, { "start": 1422.6799999999998, "end": 1427.8999999999999, "text": " company is formed with significant backing and a roadmap that's focused on a sector vertical," }, { "start": 1427.8999999999999, "end": 1432.08, "text": " e.g. developer tools for life science. Well, I guess, them being investors, they can just" }, { "start": 1432.08, "end": 1437.06, "text": " make that happen and then claim their prediction was correct. But it's pretty cool. I'm excited" }, { "start": 1437.06, "end": 1441.8, "text": " to follow which ones will actually work out and where they are completely wrong. Probably" }, { "start": 1441.8, "end": 1445.96, "text": " they're under-betting most of these things quite a bit. But you know, that's just my" }, { "start": 1445.96, "end": 1450.44, "text": " opinion. If you're interested in the more general report, as I said, it's quite interesting;" }, { "start": 1450.44, "end": 1456.96, "text": " it brings together a lot of data into a neat little package. TechCrunch writes: Landing" }, { "start": 1456.96, "end": 1462.72, "text": " AI brings in 57 million US dollars for its machine learning operations tools. So Landing" }, { "start": 1462.72, "end": 1469.68, "text": " AI is a company started by Andrew Ng and has just raised $57 million to build essentially" }, { "start": 1469.68, "end": 1475.82, "text": " an MLOps platform. They're doing what they call data-centric AI. And the whole idea" }, { "start": 1475.82, "end": 1481, "text": " is that things like convolutional neural networks, or machine learning models in general, are" }, { "start": 1481, "end": 1485.56, "text": " as easy to build as downloading a bit of code from GitHub and running it on your data set." }, { "start": 1485.56, "end": 1491.1000000000001, "text": " So the real challenge nowadays is really to get the data set to a quality where you can" }, { "start": 1491.1, "end": 1496.8, "text": " actually train some good model on it. So their product is essentially this data manager and" }, { "start": 1496.8, "end": 1502.24, "text": " data labeling tool that helps professionals really label the data. This is all geared" }, { "start": 1502.24, "end": 1508.54, "text": " towards manufacturing. So here you'd label cracks or dents or whatnot in newly manufactured" }, { "start": 1508.54, "end": 1513.4399999999998, "text": " phones, and then you train your model on very little data. And that's then supposed to give" }, { "start": 1513.4399999999998, "end": 1518.78, "text": " you a nice detector for classifying further manufacturing defects. So their idea isn't" }, { "start": 1518.78, "end": 1523, "text": " necessarily to build one big model that's going to solve all the problems, but to provide" }, { "start": 1523, "end": 1528.2, "text": " the different industry players in manufacturing with the tools to build their own models from" }, { "start": 1528.2, "end": 1533.04, "text": " very little but very high-quality data, so they can essentially get their expertise into" }, { "start": 1533.04, "end": 1537.12, "text": " these models. I guess that's not a dumb idea. If you're a manufacturer, maybe you want to" }, { "start": 1537.12, "end": 1543.84, "text": " try LandingLens. 
Another startup that has raised a lot of money is Cerebras, raising" }, { "start": 1543.84, "end": 1550.72, "text": " 250 million US dollars at an over 4 billion US dollar valuation. So Cerebras builds these" }, { "start": 1550.72, "end": 1557.6799999999998, "text": " really big chips that are geared specifically towards AI computation. Now, as I said before," }, { "start": 1557.6799999999998, "end": 1562.4399999999998, "text": " I have no clue what's going on in these chip manufacturing processes and what's important" }, { "start": 1562.4399999999998, "end": 1567.28, "text": " and whatnot. But these are apparently really, really big chips, and everything's connected" }, { "start": 1567.28, "end": 1573.48, "text": " to everything, the memory is super fast and sits with the compute, and yada yada yada. What" }, { "start": 1573.48, "end": 1579.24, "text": " you need to know is that there are indeed other players than Nvidia or AMD in the space" }, { "start": 1579.24, "end": 1585.56, "text": " of providing compute solutions for AI. And that's a good thing. And maybe at some point," }, { "start": 1585.56, "end": 1590.64, "text": " Cerebras will come away from their giant chips and actually also make consumer products." }, { "start": 1590.64, "end": 1595.04, "text": " Who knows? If that happens, it's going to be good for all of us. And if they stay in" }, { "start": 1595.04, "end": 1599.84, "text": " the big chip server world, I think it's still good for us, because all of the cloud compute" }, { "start": 1599.84, "end": 1606.56, "text": " might get cheaper, because there's just more competition. Speaking of cheap: Synced writes," }, { "start": 1606.56, "end": 1612.28, "text": " Microsoft India proposes Varuna, a system for scalable and low-cost training of massive deep learning" }, { "start": 1612.28, "end": 1619.24, "text": " models. So this is essentially an engineering paper that details how you can train big models" }, { "start": 1619.24, "end": 1625.28, "text": " on cheap and unreliable hardware. So the system uses both data parallelism as well as model" }, { "start": 1625.28, "end": 1629.76, "text": " pipelining: you split up your data batches across different machines, and you also split" }, { "start": 1629.76, "end": 1634.36, "text": " up your models across different machines. And if you do that in a smart way, you can" }, { "start": 1634.36, "end": 1638.62, "text": " actually achieve big throughput. So usually big models have to be trained on what they" }, { "start": 1638.62, "end": 1643.16, "text": " call hyper clusters, which means clusters that have very fast interconnects, because in" }, { "start": 1643.16, "end": 1647.8799999999999, "text": " order to do something like an all-reduce, if you have to do layer normalization or batch" }, { "start": 1647.8799999999999, "end": 1652.28, "text": " normalization, I don't remember which one it is, sometimes you need to send data around," }, { "start": 1652.28, "end": 1657.48, "text": " sometimes you need to send gradients around, and that costs a lot of compute and bandwidth" }, { "start": 1657.48, "end": 1661.84, "text": " and so on. So it's very interesting to see that these researchers are able to compete" }, { "start": 1661.84, "end": 1667.48, "text": " with these big hyper cluster training procedures and essentially bring that down to heterogeneous" }, { "start": 1667.48, "end": 1672.56, "text": " clusters of spot instances that can die at any time. 
It's cool to see that AI training" }, { "start": 1672.56, "end": 1677.42, "text": " of these big models becomes something like a Kubernetes cluster, where you can just add" }, { "start": 1677.42, "end": 1682.5600000000002, "text": " machines and the system will reconfigure itself to make optimal use of the machines, however" }, { "start": 1682.5600000000002, "end": 1687, "text": " fast they may be connected and however long they might be up. So if you're looking for" }, { "start": 1687, "end": 1694.6000000000001, "text": " a cheap way to train a 200 billion parameter model, then this might be the way to go. Okay," }, { "start": 1694.6000000000001, "end": 1698.76, "text": " here is a shout out to a few places. So the first shout out is to Laura Ruis's website," }, { "start": 1698.76, "end": 1704.8400000000001, "text": " where she replicates a bunch of things from Yann LeCun's and others' paper called Learning" }, { "start": 1704.84, "end": 1710.6, "text": " in High Dimension Always Amounts to Extrapolation. It's a very technical paper, and Laura does" }, { "start": 1710.6, "end": 1715.6799999999998, "text": " a great job here, not only replicating the experiments in it, but providing really" }, { "start": 1715.6799999999998, "end": 1721.6399999999999, "text": " nice background and reasons, and also the code that she uses to do everything. So I just" }, { "start": 1721.6399999999999, "end": 1727.24, "text": " thought this was really neat: interleaving plots, code, math, and so on, and really going" }, { "start": 1727.24, "end": 1732.8799999999999, "text": " through all of this, and in the end actually being able to reproduce the plots of the paper." }, { "start": 1732.88, "end": 1737.44, "text": " Yippee, there it is, so beautiful, very reproduce, much similar. If you want to follow Laura," }, { "start": 1737.44, "end": 1743.0800000000002, "text": " definitely check out her website or GitHub. This is absolutely beautiful. Bravo, Laura." }, { "start": 1743.0800000000002, "end": 1751.16, "text": " Good job. Right, another cool project is Real-Life Punch-Out by Ian Charnas. This is a really" }, { "start": 1751.16, "end": 1756.8000000000002, "text": " well-made video about using body tracking models and pairing them up with Punch-Out," }, { "start": 1756.8000000000002, "end": 1762.16, "text": " the N64 game. So you can actually play this in the browser: it tracks your arms, and you" }, { "start": 1762.16, "end": 1767.76, "text": " can punch using various boxing moves and play Punch-Out. Not only that, but Ian actually" }, { "start": 1767.76, "end": 1771.8000000000002, "text": " went ahead and bought many cartridges of the game, as you can see in the background right" }, { "start": 1771.8000000000002, "end": 1778.0400000000002, "text": " here. And if you play it in the browser, it will actually use one of those cartridges," }, { "start": 1778.0400000000002, "end": 1783.44, "text": " because using just a ROM downloaded from the internet would violate the licensing agreements." }, { "start": 1783.44, "end": 1789.1200000000001, "text": " So every game you play corresponds to a real-life cartridge. As I said, the video" }, { "start": 1789.12, "end": 1794.54, "text": " is done extremely well. It's a fun video to watch. Or if you simply want to try it out," }, { "start": 1794.54, "end": 1799.1999999999998, "text": " you can go to Ian's website and just play it by yourself. Nothing to install, it runs in" }, { "start": 1799.1999999999998, "end": 1806.4399999999998, "text": " the browser. 
Excellent. Alright, so this is the section where I provide some helpful things." }, { "start": 1806.4399999999998, "end": 1811.9599999999998, "text": " First helpful thing: MarkTechPost writes, Google AI introduces GoEmotions, an NLP data" }, { "start": 1811.9599999999998, "end": 1817.04, "text": " set for fine-grained emotion classification. I've actually shown this in last week's Weights" }, { "start": 1817.04, "end": 1822.8799999999999, "text": " & Biases ad, if you have followed the Weights & Biases ads. But this is a data set where" }, { "start": 1822.8799999999999, "end": 1829.8799999999999, "text": " Reddit comments are annotated with one of, I believe, 28 different emotions contained in" }, { "start": 1829.8799999999999, "end": 1834.3999999999999, "text": " the comments. It's not only one emotion per comment; technically, any emotion could" }, { "start": 1834.3999999999999, "end": 1840.12, "text": " or could not appear in any comment. In total, there are 58,000 Reddit comments classified" }, { "start": 1840.12, "end": 1847.4799999999998, "text": " into 27 emotion categories: 12 positive, 11 negative, 4 ambiguous, plus one neutral category;" }, { "start": 1847.4799999999998, "end": 1853.32, "text": " with that, it adds up to 28. I was right. So the data set creation process detailed here is" }, { "start": 1853.32, "end": 1857.9599999999998, "text": " showing how they went about it, how they went about balancing the data, paying attention" }, { "start": 1857.9599999999998, "end": 1863.36, "text": " to the fact that Reddit isn't exactly a good replica of the entire world, and so on. If" }, { "start": 1863.36, "end": 1867.3, "text": " you're interested, you can give this article a read; you can also look at the paper that" }, { "start": 1867.3, "end": 1872.56, "text": " goes along with the data set, and you can use the data set if you want to try your hand" }, { "start": 1872.56, "end": 1878, "text": " at emotion detection. I have to say, it's gotten a bit tiring to see NLP tutorials always doing" }, { "start": 1878, "end": 1882.2, "text": " sort of sentiment classification, where it's just positive or negative, and this might just" }, { "start": 1882.2, "end": 1887.12, "text": " provide a little bit of a more challenging task. Google also has this Language Interpretability" }, { "start": 1887.12, "end": 1892.22, "text": " Tool; it's open source, and it's for visualizing and understanding NLP models. This provides" }, { "start": 1892.22, "end": 1898.92, "text": " various things: you can look at embedding spaces of NLP tasks, it can analyze things like classification," }, { "start": 1898.92, "end": 1904.32, "text": " regression, looking at attention heads, analyzing parts of the input, which parts are important" }, { "start": 1904.32, "end": 1909.1000000000001, "text": " for which things, and so on. All in all, it's quite a rich tool, and I encourage you to check" }, { "start": 1909.1000000000001, "end": 1914.52, "text": " it out if you're into language interpretability, or if you just want to check out how your" }, { "start": 1914.52, "end": 1919.08, "text": " models do the things they're doing. Code is available, the tool is available. Okay, last week," }, { "start": 1919.08, "end": 1925.04, "text": " we've reported on ruDALL-E, the Russian DALL-E model. 
And now apparently the large model" }, { "start": 1925.04, "end": 1930.8799999999999, "text": " is available for download, as one Reddit comment says, or much rather the edit of the comment" }, { "start": 1930.8799999999999, "end": 1938.04, "text": " says that the availability is on December 1. So expect that soon. Yeemachine on Twitter" }, { "start": 1938.04, "end": 1944.76, "text": " says: after a year in dev, I'm happy to release the core of my VTuber apps. Now, VTubers are" }, { "start": 1944.76, "end": 1950.92, "text": " a special sort of thing that I have never really touched on. But this seems to be a large community" }, { "start": 1950.92, "end": 1956.28, "text": " that maps their body movements onto digital anime avatars, as you can see right" }, { "start": 1956.28, "end": 1961.98, "text": " here. So this also uses body pose tracking, and apparently also face tracking, in order" }, { "start": 1961.98, "end": 1968.04, "text": " to make your avatar do as you're doing. Code is available. And it's not only sort of for" }, { "start": 1968.04, "end": 1973.36, "text": " face and upper body: you can also track your entire body movements and map them onto" }, { "start": 1973.36, "end": 1978.6, "text": " characters, as you can see right here. It can do facial point tracking, such that it really" }, { "start": 1978.6, "end": 1985.6799999999998, "text": " replicates your facial expressions. So there's never been a better time to become a VTuber." }, { "start": 1985.6799999999998, "end": 1991.84, "text": " Check out Kalidokit on GitHub if you're interested. There's an article by Newsfile" }, { "start": 1991.84, "end": 1996.8799999999999, "text": " Corporation on Yahoo Finance that writes that artificial intelligence now makes it possible" }, { "start": 1996.88, "end": 2004.8000000000002, "text": " for investors to find promising new hidden gem meme tokens automatically. This isn't" }, { "start": 2004.8000000000002, "end": 2008.7600000000002, "text": " necessarily what you think. You think, well, there's a company that tells me which meme" }, { "start": 2008.7600000000002, "end": 2014.7600000000002, "text": " tokens are good so I can buy them. No, no, no, no, no, no, no. See, this is an actual token" }, { "start": 2014.7600000000002, "end": 2020.96, "text": " itself. So you put money into the token, and then the token selects projects in which the" }, { "start": 2020.96, "end": 2026.4, "text": " money is to be invested. These projects, it says, are automatically selected using a special" }, { "start": 2026.4, "end": 2032.48, "text": " AI-based sniper bot. So the AI will look at all the meme tokens, the Doge and the Shiba" }, { "start": 2032.48, "end": 2038, "text": " Inu and the Squid Game tokens, and it will predict which ones will go up, and then it" }, { "start": 2038, "end": 2043.8400000000001, "text": " will take all the money that is invested into the Finu token, put it into those tokens, and" }, { "start": 2043.8400000000001, "end": 2048.52, "text": " then pay out the winnings to the holders of the Finu token. I mean, look at this. Can I get an" }, { "start": 2048.52, "end": 2054.2000000000003, "text": " enhanced version of this graphic, please? Yes, I want an enhanced version. Oh, wow," }, { "start": 2054.2, "end": 2060.16, "text": " that's enhanced. That is so enhanced. Absolutely. Currently, there is a website" }, { "start": 2060.16, "end": 2069.2799999999997, "text": " for this, and it says: vote for Finu, help the price pump. And in the back there is a Doge." 
}, { "start": 2069.2799999999997, "end": 2073.46, "text": " Okay people who want to make a quick buck using meme tokens that have absolutely no" }, { "start": 2073.46, "end": 2079.9199999999996, "text": " value whatsoever, are encouraged to buy a meme token. Excellent. Now I'm not saying" }, { "start": 2079.92, "end": 2084.28, "text": " this can't be done. Mean tokens are essentially like fashion that there's no reason why this" }, { "start": 2084.28, "end": 2089.52, "text": " particular that particular fashion should be in or out next year and yet it still happens" }, { "start": 2089.52, "end": 2095.88, "text": " and there might be ways to predict it. But still, whether or not this is the way to go" }, { "start": 2095.88, "end": 2101.6, "text": " can't tell. So I've mentioned this shoe does not exist last week. But there's also this" }, { "start": 2101.6, "end": 2105.88, "text": " sneaker does not exist. Look at that. And this is pretty cool. So this is a grid of" }, { "start": 2105.88, "end": 2111.7200000000003, "text": " AI generated sneakers, you can click on one, right, and then you can apparently edit that" }, { "start": 2111.7200000000003, "end": 2119.2400000000002, "text": " sneaker. So you can go normal to futuristic, you can go high creativity, that's very creative." }, { "start": 2119.2400000000002, "end": 2124.56, "text": " You can change up the colors a little bit. Very cool, very functional. Look at that one." }, { "start": 2124.56, "end": 2132.3, "text": " Yeah, futuristic, creative, light color. I mean, it's not super futuristic. But yeah," }, { "start": 2132.3, "end": 2137.1600000000003, "text": " so shout out to this sneaker does not exist.com. Check it out. And that was already it for" }, { "start": 2137.1600000000003, "end": 2145.1600000000003, "text": " this week's ML news. I hope you had fun hit subscribe if you liked it. We're only 105,900,000" }, { "start": 2145.1600000000003, "end": 2151.0600000000004, "text": " subscribers behind PewDiePie. We can totally catch them. If we really do our jobs, tell" }, { "start": 2151.0600000000004, "end": 2155.2000000000003, "text": " three people they're going to tell three people is going to be fine. See you next Monday." }, { "start": 2155.2, "end": 2164.2, "text": " Bye bye." } ]
OUCwujwE7bA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents (+Author)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "natural language processing", "training data", "deep learning tutorial", "nlp", "gpt3", "gpt 3", "codex", "openai codex", "large language models", "gpt 3 planning", "zero-shot planning", "zero shot learning", "virtualhome", "virtual home", "bert", "bert model", "bert translation", "bert embedding", "pieter abbeel", "reinforcement learning", "human language learning" ]
#gpt3 #embodied #planning In this video: Paper explanation, followed by first author interview with Wenlong Huang. Large language models contain extraordinary amounts of world knowledge that can be queried in various ways. But their output format is largely uncontrollable. This paper investigates the VirtualHome environment, which expects a particular set of actions, objects, and verbs to be used. Turns out, with proper techniques and only using pre-trained models (no fine-tuning), one can translate unstructured language model outputs into the structured grammar of the environment. This is potentially very useful anywhere where the models' world knowledge needs to be provided in a particular structured format. OUTLINE: 0:00 - Intro & Overview 2:45 - The VirtualHome environment 6:25 - The problem of plan evaluation 8:40 - Contributions of this paper 16:40 - Start of interview 24:00 - How to use language models with environments? 34:00 - What does model size matter? 40:00 - How to fix the large models' outputs? 55:00 - Possible improvements to the translation procedure 59:00 - Why does Codex perform so well? 1:02:15 - Diving into experimental results 1:14:15 - Future outlook Paper: https://arxiv.org/abs/2201.07207 Website: https://wenlong.page/language-planner/ Code: https://github.com/huangwl18/language-planner Wenlong's Twitter: https://twitter.com/wenlong_huang Abstract: Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (e.g. "make breakfast"), to a chosen set of actionable steps (e.g. "open fridge"). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models. 
Website at https://wenlong.page/language-planner/ Authors: Wenlong Huang, Pieter Abbeel, Deepak Pathak, Igor Mordatch Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there, today we're looking at Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents, and I'm going to interview the first author, Wenlong Huang, in a few minutes. So first there's an explanation of the paper, 10-15 minutes or so, I'm going to try to keep to it, and then we jump into the interview where we can discuss this paper at length. On a high level, this paper asks: can we use the knowledge that is inherent in large language models like GPT-3, or, surprisingly, OpenAI's Codex, in order to do planning in what they call embodied agents? Ultimately, it's going to be this environment right here, the VirtualHome environment. It's about a virtual home: you have to fulfill some task like brush your teeth, and then the model has to come up with a sequence of steps that are admissible by the environment. So there's a notion of admissibility of actions, predefined actions that are admissible, and the model has to come up with these actions in order to fulfill the task. The model is then rated based on the executability and correctness of its plans. And it turns out that the larger the models get, as you can see right here, the less executable the plans become, which means that the actions they generate aren't admissible by the environment, probably because the models are more, let's say, powerful: they can express themselves in more ways, they have different ideas of how to reach goals. However, the correctness (this is human evaluated) of these models rises as they grow larger. So this gives you an indication that the large models seem to have quite a lot of knowledge. And we have to say these are not trained: the entire paper, except for one baseline evaluation, works with pre-trained models; they're not fine-tuned at all on this environment right here. So what this paper does is it says: well, given that the larger the models get, the more correct their plans are, can we do something to fix the issue with the executability? To that end, they develop this translation procedure right here, three specific improvements they make to the models in order to get their executability up. You can see they sacrifice a little bit of the correctness, but they do make the plans largely executable in the environment, and therefore procedures like this could be applied in many different ways. It's not only about the VirtualHome environment; it's essentially anywhere where you bring together the knowledge that is inherent in large language models with some sort of domain-specific language or grammar or anything like this, anywhere you have to transfer that knowledge into a new domain but you don't want to train a model to do so. So we're going to see how they do it really briefly. First of all, the environment itself, as I already said, is this; now, this is visualized, although they never actually work in 3D. Just a small correction here, because I messed this up: there are actually two versions of the VirtualHome environment. One is a Python version that focuses on the textual interaction with the environment. The other one is implemented in Unity and actually does work in 3D. The developers of the environment mostly focus on the Unity environment because it's more real, but as of yet, it has a subset of the actions available that the Python environment has. And the authors of the paper use the Python environment and the data set that comes along with that.
We're going to go into this more in the interview, stay tuned. They simply grab the data set of possible tasks. Some tasks you can see right here: a task could be throw away paper, another task could be brush teeth, and then there would be a sequence of steps. This environment is made by humans, so the tasks are made by humans, and then other humans have to come up with the steps, the admissible actions in this environment. There are a number of objects that are predefined, for example living room, television, sofa, and so on. And there are a number of verbs, so walk, find, switch on, and so on. Not every verb-object combination is possible, some verbs have two objects, and so on, but essentially you combine the predefined verbs and the predefined objects, and then the state of the world changes. So the world keeps track of states, and there are certain preconditions. For example, you can probably only sit on the sofa if you are in the vicinity of it, so you need to first find the sofa. You can only switch on the television, similarly, if you have first found the television or walked to the television or something like this; if the television is in the living room, you first need to go to the living room, and so on. So there's a hidden kind of state, but all of this is constructed, and we talk about this in the interview: what's the appropriate granularity of actions like this, and isn't this a major issue? But it is all made with humans in the loop, so the data set is supposed to be kind of the most natural expression of these tasks, split into steps that a human would come up with. So this is the grammar of the environment, and the language models don't know about this grammar, they're just language models. So what they do is they take something like GPT-3, and they make a prompt. Now, as you might know, in GPT-3 you have to give a prompt. The prompt could just be like: here's the task, you know, brush your teeth, what's step one? And then GPT-3 will probably even generate step two and three and four, but it will probably not be according to these actions and these templates. You can help this a little bit by putting a prompt up front. The prompt they use is, I believe, one specific plan: they have a task up here, some task, and then some number of steps, so that the model kind of knows what is expected. We also talk about this in the interview, and this could potentially be improved by multiple prompts and so on, but in the baseline they have one particular prompt, and then one of the improvements is actually to select a more optimal prompt. This is the basic setup: you have a goal in this environment with a fixed grammar, you input this right here to your language model, and the language model will spit out the plan. Now what do you do with the plan? The plan, you score: how good is the plan? And they have two different scorings available. One is executability, and executability is essentially parsability by the environment. So for executability, you ask yourself: can it be correctly parsed, meaning is the syntax according to the syntax of the environment? And they do have a little heuristic translation procedure for the baseline in place, so that the language model doesn't have to get it exactly right.
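To make this setup concrete, here is a minimal Python sketch of what such an action grammar, the parsing half of the executability check, and the one-example prompt could look like. To be clear, the verbs, objects, compatibility table, and task names below are illustrative stand-ins of mine, not the actual VirtualHome definitions.

```python
# Minimal sketch of a VirtualHome-style action grammar and prompt builder.
# All verbs, objects, and the compatibility table are invented for illustration.

VERBS = ["walk to", "find", "open", "grab", "switch on", "sit on"]
OBJECTS = ["living room", "kitchen", "fridge", "milk", "television", "sofa"]

# Not every verb-object pair is admissible; a table rules out nonsense
# like "switch on milk".
COMPATIBLE = {
    "walk to": {"living room", "kitchen", "fridge", "television", "sofa"},
    "find": {"fridge", "milk", "television", "sofa"},
    "open": {"fridge"},
    "grab": {"milk"},
    "switch on": {"television"},
    "sit on": {"sofa"},
}

# The admissible action set is the filtered cross product of verbs and objects.
ADMISSIBLE_ACTIONS = [
    f"{verb} {obj}" for verb in VERBS for obj in OBJECTS if obj in COMPATIBLE[verb]
]

def parses(step: str) -> bool:
    """The parsing half of executability: does the step match the grammar?
    (The paper's full metric also checks common sense preconditions.)"""
    return step.strip().lower() in ADMISSIBLE_ACTIONS

def build_prompt(example_task: str, example_steps: list, query_task: str) -> str:
    """One worked example plan from the data set, then the query task."""
    lines = [f"Task: {example_task}"]
    lines += [f"Step {i}: {s}" for i, s in enumerate(example_steps, start=1)]
    lines += ["", f"Task: {query_task}", "Step 1:"]
    return "\n".join(lines)

steps = ["walk to living room", "find television", "switch on television"]
print(build_prompt("watch TV", steps, "brush teeth"))
```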
Even in the baseline, they do sort of translate to the closest action there, and one of the improvements is related to this. And then also: does it satisfy the common sense constraints of the environment? These are programmed in; for example, you can only pour yourself a glass of milk if you first open the fridge and grab the milk. This can be measured directly. What cannot be measured that well is correctness. These models would come up with plans, and independent of whether they're executable or not, they could be correct, right? And that's where they ask humans. So they conduct human evaluations in order to score the correctness of whatever these models output. They give it to a human and ask: does this look like a sensible plan in order to brush your teeth? And the human would either say yes or no. When they do ablations and so on, they also use longest common subsequences between two programs and similar measures, in order to not spend ginormous amounts of money on humans, but essentially the correctness metric is a human metric. It's also interesting, because you'd think you could just execute the plan in the environment and that would tell you whether it succeeds or not, but they say, correctly, that for a task like make breakfast, there's not really a defined end condition that you could program into the environment to give a reward. So it's more accurate to ask humans whether a plan is correct. As you might have guessed, this environment is very human-centric: it's made by humans, with humans in the loop, and so on. It's supposed to really be sort of a representation of human tasks and human plans for human tasks. All right, so now we're going into the improvements. There are three distinct improvements they make. If they just do what we've described so far, then the graph up here results, excluding the two models on the right: you can see the larger the models get, the higher their correctness, but the worse their executability. So now the thought is: can we change that? Can we raise the executability? And so this is the baseline right here, zero-shot planning via a causal large language model: you put in a task as a prompt, along with the format you expect, which is this one right here, some other task from the data set; then you use the pre-trained language model like GPT-3 or something, and that will give you a plan. And that's it. So the next thing they do is what they call a translation model. They introduce a second model, which is also pre-trained, and it's not trained on translation, it's just trained on masked language modeling. So think of this as just BERT; in fact, I believe they use Sentence-BERT, just pre-trained on English language. And what they do is they make a big vocabulary of all the admissible actions. The admissible actions would just be any combination between any verb and any object that actually goes with it, that is admissible to this verb. From this, they make a giant list of all of the admissible actions, and then they embed that giant list: they put it into some embedding space using the pre-trained Sentence-BERT model. And then, whenever the large language model outputs something, they don't put it into the plan directly. They first embed whatever the model outputs; let's say that becomes this point right here.
Then they see what's the nearest neighbor among the admissible actions to this thing, and they simply replace whatever the model output with that nearest neighbor. They call that translation. So essentially, it translates from general natural language space into the space of the admissible actions, the grammar of the environment. Now, this has some problems on its own. For example, the model may output compound actions. If it says, for example, squeeze out a glob of lotion and put it on your skin, that is still just one line, so one action; it just contains an 'and'. Now which admissible action would be the closest right here? There's going to be one somewhere like squeeze out a bit of lotion, and another one like put the lotion on your skin, yet you only have one line. The 'and' might be easy to recognize, but there are going to be other compound actions, and this is going to be a problem here, because you just map one output to one admissible action. But in any case, doing this already helps a lot. To alleviate the rest of the problems, they have two more improvements. The first improvement they make is to say: well, if there is a compound action, we can still alleviate that a little bit. In the original method, they simply took this through the language model and got out just a list of steps, right? Here is step one, here is step two, here is step three, and so on. That is just a list of steps, and they would translate each of them to an admissible action, even when they use the translation model. Well, now you have no idea whether that sequence of admissible actions even makes sense, right? For example, one could be a compound action, and it just gets translated to one of its two parts, and then the next action doesn't have its precondition. So what they do is they interleave the two steps: they interleave this translation with the generation. They only generate one step at a time, like step one, then they translate it, and then they use the translated version and put it back into the language model to get step two. That way, the language model is always conditioned on admissible actions, instead of being free-form and then translating after the fact. So this is autoregressive generation. The last improvement they make, which is, I guess, more of a minor improvement (that's why it's not in this diagram), is that instead of having a generic prompt, they take the task, embed it using the same Sentence-BERT embedding, and compare it to embeddings of all of the tasks that they have in the data set. And they just pick the closest task in the data set to act as a prompt, which could still transfer some in-context knowledge for the current task. So that is essentially the method. They have an algorithm right here. I formulated it in a rather easy way, but they do not only consider the closest action: in the translation, they consider a weighting between how close the output is to an admissible action and how likely that action is under the language model; a rough sketch of this scoring follows below.
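Here is a minimal sketch of that translation-plus-scoring step, assuming the sentence-transformers library. The checkpoint name, the tiny action list, the scoring weight beta, and the exact form of the combination (cosine similarity plus a scaled mean token log-probability) are my guesses at the idea, not the paper's exact implementation.

```python
# Sketch of the translation step: among several sampled steps from the language
# model, pick the one whose nearest admissible action gives the best trade-off
# between semantic closeness and the LM's own likelihood for that sample.
import numpy as np
from sentence_transformers import SentenceTransformer

ADMISSIBLE_ACTIONS = ["walk to kitchen", "open fridge", "grab milk", "close fridge"]

encoder = SentenceTransformer("stsb-roberta-large")  # some Sentence-BERT variant
action_emb = encoder.encode(ADMISSIBLE_ACTIONS, normalize_embeddings=True)

def translate(samples, mean_log_probs, beta=0.3, threshold=0.0):
    """samples: free-form steps from the LM; mean_log_probs: their mean token
    log-probabilities. Returns (admissible_action, score), or None to stop early."""
    best = (None, -np.inf)
    for text, lp in zip(samples, mean_log_probs):
        emb = encoder.encode([text], normalize_embeddings=True)[0]
        sims = action_emb @ emb            # cosine similarity to every action
        idx = int(np.argmax(sims))
        score = sims[idx] + beta * lp      # closeness plus LM confidence
        if score > best[1]:
            best = (ADMISSIBLE_ACTIONS[idx], score)
    return None if best[1] < threshold else best

# In the interleaved scheme, the returned action is appended to the prompt
# before generating the next step, so generation stays conditioned on
# admissible actions; returning None acts as the early-stopping signal.
```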
So they would not only generate one action and then translate it; they would actually generate a bunch of variants, and they consider each one of them: how close is it to an admissible action, and also how likely is it? And then they take the best combination of the two. That is obviously modulated by a hyperparameter. They have early stopping and all of this kind of stuff, and this results in a neat algorithm. We're going to talk about these things in a bit, and also about the results right here. I want to highlight that, for example, vanilla GPT-3 has a really low executability, but it does have a high correctness. However, if you look at the translated version, which is after their improvements, you can see the executability has risen dramatically, while the correctness is a bit lower. You get a bit lower in correctness because of the whole translation procedure and so on; you're mucking with the outputs, and humans may not like them as much. This is all stuff we're going to touch on in the interview. Just interestingly highlighting that Codex, the Codex model, seems to score quite well on these tasks. The translated Codex is much smaller, yet it scores really high. So parameter for parameter, the Codex model is actually pretty, pretty good at this, which was a surprise to me. So I think this is an exciting paper. Except, as I said, for a fine-tuning baseline, it turns out to work completely without any training; it's just evaluation, so to say. And I liked it, and I think this does have applications: getting the knowledge out of these large language models is something we should, you know, be getting better at doing, otherwise I don't think we make full use of them. All right, so now I want to jump into the interview with Wenlong. I hope you enjoy that as well. Tell me how you like these videos, with the interviews, without the interviews, anything you want, in the comments. I'll see you. Bye bye. Welcome everyone. Today with me here is Wenlong Huang, who is the first author of the paper about language models as zero-shot planners, and I am very, very happy to have you here. Welcome, Wenlong. Thank you, Yannic. Yeah, super, super happy to be here. As I've already told you, this paper is different, and I like different papers. And it's different in a way that maybe wasn't expected; it seems like every day we find a new application for these large language models, and this is yet another thing that they can do. When I saw this, I was reminded of a friend of mine who had similar ideas, but it never really materialized. I tried some of this stuff as well, combining large language models with planning, with telling me what to do in the real world. I even made a video where GPT-3 told me a recipe and then I cooked the rest; like, me and my friend, we cooked the recipe and so on. But it always seemed a bit out of place, a bit off, when it had to give detailed instructions. And when I saw a paper that was really trying to make this work in a real environment, I was very happy to see that. And yeah, that is this paper. Also, it has to be said, you have a stellar board of co-collaborators right here. How did this come about? Like, how did you even get to the idea, hey, I could use these language models to do planning? Did it immediately come to you? Did it sort of build up from some basic idea, or what was the process?
So yeah, thanks for the briefing. So I think that actually came out to be really surprising to us as well. So first, when we were just playing around with the largest language models on many of the web interfaces, we found that there is actually something there, like you said. If you ask it for a recipe, or, we actually originally studied whether it can output the steps for making coffee, et cetera. So we found that when the models get large enough, there's actually something there. And this was the sign of life, I think, for us to kind of go on and investigate how we can make that actually useful for agents. So we kind of just started from there, and it actually came out to be pretty surprising. Originally we thought maybe we'd need some training data sets, maybe to train something, a translator or something, to actually make it useful. But we were really trying to constrain ourselves in the meantime, because we didn't want it to be tailored to a specific environment. So we just wanted to see, just the language model itself, how well it can do, how far it can go. So this is what got us in the end. We just explored for like two months and then found you can actually do this without any training. And yeah, it was truly surprising, and actually a really fun project for me as well. It sounds like fun. Yeah, just trying to see whether you can output something really realistic and really fun. Yeah. So you came across this environment right here, this VirtualHome environment. Was this always the plan, or why did you choose it? Like, there are a million environments, OpenAI Gym and these MuJoCo kind of robot simulations. Why was this one particularly useful? Did you immediately think of this one, or how did this come about? Thanks. Yeah. So actually I wasn't doing too much research in this embodied agents area, especially for these really high-level tasks. And then I actually went to Google Scholar and searched for appropriate environments for this, and we found this VirtualHome environment. We really liked it because it can actually model any task that we can express in terms of this textual language plan, just like a textual plan. And actually there are many other environments as well, but some of them are limited. I think a lot of people also use the ALFRED environment; that's a really good environment too, and I think it's a bit more structured there, but the tasks often come from a template, so it's usually like pick something, place something. There are actually a lot of challenges there, I think it's a different set of challenges. And we found that what VirtualHome tackles is exactly what we were looking for, because it can model any task expressed in free-form language, especially those really challenging tasks that people actually do every day, like make breakfast, make tea, make coffee. And it particularly cares about the common sense constraints in them. So specifically, this environment has a set of preconditions and postconditions for each action. For example, if you want to grab a glass of milk from the fridge, you can't just say go to the fridge and grab a glass of milk, because you have to open the fridge first, and preferably you want to close the fridge afterwards. So it's really these constraints, I think, that are really useful and really interesting to study, whether the language models can handle this.
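As an aside, to make the kind of precondition and postcondition bookkeeping Wenlong describes concrete, here is a tiny Python sketch. The condition tables and the two actions are invented for illustration; VirtualHome's actual definitions are richer.

```python
# Illustrative pre/postconditions in the spirit of VirtualHome: grabbing the
# milk requires the fridge to be open; opening and closing toggle its state.
PRE = {
    ("open", "fridge"): [("fridge", "closed")],
    ("close", "fridge"): [("fridge", "open")],
    ("grab", "milk"): [("fridge", "open")],
}
POST = {
    ("open", "fridge"): [("fridge", "open")],
    ("close", "fridge"): [("fridge", "closed")],
}

def step(state: dict, verb: str, obj: str) -> dict:
    """Check all preconditions against the state, then apply postconditions."""
    for thing, required in PRE.get((verb, obj), []):
        if state.get(thing) != required:
            raise ValueError(f"'{verb} {obj}' fails: {thing} must be {required}")
    new_state = dict(state)
    for thing, value in POST.get((verb, obj), []):
        new_state[thing] = value
    return new_state

state = {"fridge": "closed"}
state = step(state, "open", "fridge")   # ok
state = step(state, "grab", "milk")     # ok, the fridge is open now
# step({"fridge": "closed"}, "grab", "milk") would raise: fridge must be open.
```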
And you've investigated several different language models. Just to be clear, this environment has a defined syntax, with very defined things you can do, and somewhere I think you say it's about 50,000 actions that are ultimately possible. It's a combination of a bunch of verbs, like grab, open, go to, and lift, and a bunch of objects like kitchen, fridge, and so on. So any plan would consist of a sequence of verb-object pairs, like here: walk to kitchen, open fridge, grab milk. Any planner in this environment would have to output this syntax directly. Now you had a plan of not training anything, right? You didn't want to train anything, you simply wanted to investigate what knowledge is already there in the language models, and you came up with a way to translate that. Do you want to elaborate on how you query these language models and how you make them conform to the syntax here? Of course. Yeah. So the way that virtual home expresses these actions is via this specific format where you put square brackets for the atomic action, like grab, put, open, and then, I think, parentheses for the arguments. But the problem is we can't just expect language models to handle this, because even if we put an example in front, maybe they can do it, but it's definitely not the way humans usually produce language, and after all, these language models are trained on human text. So we decided maybe it's not the right way to query these models. Did you ever try letting them output the syntax directly, or was it just like, yeah, it's not going to work anyway? I tried briefly, but it's definitely not thoroughly investigated, and intuition-wise I think it's definitely better to use natural language. So we opted for the most basic approach we could think of, which is just to define a straight-up template for each atomic action. Because these atomic actions are simple enough, just walk, grab, and those things, the templates we came up with are, I think, just the natural way people say things: turn off something, put something somewhere, with some words in between like in, on, on top of, et cetera. Yeah. And then you just query these models, and you have multiple ways of evaluating this, right? You care about two things: correctness and executability. And you also make use of humans. What was your thinking behind designing the evaluation? Yeah. So actually, it came out to be really challenging to evaluate these things. Like I said, these tasks are expressed in free-form language, so they're really open-ended. It might be deterministic for something like grabbing a glass of milk: you just check at the end whether you have a glass of milk. But if you really think about it, if we don't want to constrain anything in the task, like making breakfast, then what is the correct way to make breakfast? Everyone has different preferences. So it's hard for us. Actually, I think it's still a challenge in this sort of task to really determine the correctness, or rather the success rate, for each task. You can't really tell whether a task was successful, depending on how open-ended it is.
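A minimal sketch of such straight-up templates; the verb names and template strings here are illustrative assumptions, not necessarily the paper's exact choices:

```python
# Map each atomic action verb to a natural-language template.
templates = {
    "walk":      "walk to {obj}",
    "grab":      "grab {obj}",
    "open":      "open {obj}",
    "switch on": "turn on {obj}",
    "put":       "put {obj} on top of {target}",
}

def to_natural_language(verb: str, obj: str, target: str = "") -> str:
    # '[WALK] (kitchen)' style atomic actions become plain English
    return templates[verb].format(obj=obj, target=target)

print(to_natural_language("walk", "kitchen"))       # -> "walk to kitchen"
print(to_natural_language("put", "milk", "table"))  # -> "put milk on top of table"
```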
So we decided that, okay, if it's hard to computationally produce a success-rate metric, as humans we can definitely tell whether a plan is semantically meaningful. So we use human evaluations for that part. But we didn't want to rely entirely on humans, because the action plans these large language models generate are so realistic that they can even fool many humans. So we also use this metric, executability, which was also used in past papers that use virtual home. This metric basically determines whether the plan satisfies the common sense constraints of the environment, namely whether you make sure to open the fridge before grabbing something from it. It's interesting, because when the humans rate it, the humans would also skip a bunch of steps. If you tell a human, go to the fridge and grab a glass of milk, the human will go like, oh yeah, of course. Which brings me to, and maybe this is jumping ahead a little bit, one of the questions I had most when I read this: there is a level of specificity that is required right here, which is kind of ambiguous. You have a high-level description, which is like make breakfast, and then you have a bunch of steps which you need to follow. And sure, these steps correspond to actions in the environment, so they're kind of given by that, but the language model doesn't know that. The language model just knows it needs to produce a plan. So why do we expect the language model to figure out that it needs to say open the fridge before you get a glass, but, for example, it doesn't need to say put one foot in front of the other foot in order to walk? Did you have any insights or concerns there? There seems to be a very specific level of specificity to these plans. Yeah, so that's a really good question. This granularity actually comes from the dataset, or the virtual home environment itself, because we essentially follow the format of the virtual home environment and of the dataset they collected from humans on how to do these really human activity tasks. The way they built this environment is they first asked many humans to come up with a set of tasks they do in an everyday household, and then they asked a different group of humans to come up with detailed plans that could drive a robot to perform these tasks. And it's after that that they built the environment, based on the verbs used by those humans. So you can think of this environment as really built on top of what humans say. It's not that the developers just said, okay, we want this granularity, we want walk, grab, and so on. They actually asked these humans to give those verbs and then built the actions according to those verbs. And they did make sure to develop a set of common sense constraints for each of the verbs, which completely makes sense, and I think they're actually reasonably exhaustive for those actions. So if you want to grab something, you definitely need to make sure the thing you grab is not inside a closed container, for example. In this case, the fridge is a container, and it has this attribute of being open or closed. So the environment internally keeps track of the attributes of each object, to make sure that if you do something like this, you don't violate the common sense constraints.
So to answer your question, this granularity really depends on the humans. And I think this is where language models really shine, because essentially language models are trained on human-produced text. My hypothesis, although this is definitely not something we thoroughly tested, is that because they're trained on human-produced text, and humans after all produced these action plans, if you do it carefully enough and use some techniques to properly translate the output, you can essentially get back something similar to what humans produced in the beginning. Yeah, I mean, you would imagine that the human-ness of how the environment was built would also be present a little bit in these language models, which makes sense. I don't have a better idea of how to build an environment like this, so it seems pretty reasonable. Yeah, it actually turned out to be really interesting to me, because if I were to develop this environment, how would you even animate all of these really human tasks, even just in a household setting? It's super difficult, and I think they did a really good job here. And I think this is also what makes language models particularly useful for this task, because these are basically just human tasks, and language models are really good at mimicking humans. Yeah. So on the left here, we see a bunch of models that you've evaluated. So again, executability is whether a plan matches the syntax of the environment, whether I can map it to that, and also, I guess, whether it violates any of these common sense constraints. So, how executable is the plan in the environment, no matter whether it does the right thing; that comes in a second. And correctness is rated by human annotators: they look at the plan that was produced and, from their own intuition, say, well, is this a good plan to make breakfast, yes or no. And we clearly see this downward trend. If we exclude the models on the right, there is this trend line where the larger models seem to produce more correct plans, which means plans that the humans like more, but they are less executable. Whereas the smaller models are less correct, which I would have expected, but more executable. And you noticed in the paper that very often they just produce plans that have nothing to do with the task description. They produce a plan that merely follows the syntax of the examples you give in the prompt. But how can you explain that? Even at the top here, the large models are better than humans at correctness: humans rating other humans think that GPT-3 produces more correct plans. Why is it so bad at executability? Yeah. So there are actually two questions that I think you raised. One is why these smaller models, and when I say smaller, they're actually still pretty large, like the largest GPT-2 model, produce more executable plans. And the second question is why the largest GPT-3 model is actually better than human. To answer the first question, I think that's because we did find some failure modes for smaller models. The two most prominent ones are: first, the model frequently tries to repeat the given example. For example, you give it how to browse the internet: go to the computer, type on the keyboard, et cetera. And then you ask it to brush teeth.
It still goes to the computer and types on the keyboard. So there's nothing sensible there at all. And the second source of error is that sometimes it just outputs really short plans. If you give it the task go to sleep, it's just go to the bathroom, and it stops. So that's this right here, brush teeth; it's just go to bathroom. Yeah, exactly. So when these plans are short enough, they can still be executed. If you just say walk to the bathroom, just one single action, there aren't many common sense constraints on walk, so you can totally imagine it's super executable. But if you present them to humans, of course humans will spot this and say, okay, this is not correct. When we do human evaluations, we try to keep them simple so that the error here is not too big, because we can't ask hundreds of humans to evaluate this. We only got to ask ten evaluators in this case. So that's why these smaller models are really good at executability. And the second question you asked is why these larger models are actually better than humans. Actually, this is not a completely fair comparison if you just look at one axis. All the results here we look at from the two axes that we care about. One is semantic correctness, which is evaluated by humans, and the second is executability. The human plans that we use are from the dataset that the virtual home developers crowdsourced from Amazon Turkers, and they made sure these are executable plans. Which means the humans would be at a hundred here; they'd be over here on the right. Yeah, but we didn't want to put a dot right there on the right, because it's hard to see. Humans are a reference here; it's not a baseline that we're trying to beat. Of course, GPT-3 is not there yet in terms of outputting semantically correct action plans and at the same time being able to really ground them in the environment. But using these two axes, we can really see which axis is the place that we, as a community, may want to work more on to get to human levels. And with this paper, we found this result actually a bit interesting: for these larger models, in terms of semantic correctness, you don't need to worry too much about it. It's kind of already there if you extract it properly. But the real question is, how do we make the plans executable for the agents we care about? And that's exactly what you do in the meat of the paper. The results are these translated models right here that, notably, drop a little bit in terms of their correctness as rated by humans, but gain massively in executability. And this is the result of a bunch of different ingredients, three main ingredients, as far as I could tell. Do you want to quickly tell us what the ingredients are for turning whatever these models output into something that... I mean, virtual home is maybe a test bed, right? I don't see this paper as being about virtual home. It's more like: here is a model that outputs something, yet I need the output in some other form. This is a very general problem with many applications, and if we could bridge that gap, that technically is a big gain. That's exactly what you do. So how did you go about this? Yeah. So actually, I just want to make sure that this paper presents a really preliminary step. I don't think it solves anything in particular.
I mean, it does... Sure, but it's a big step, I believe. I mean, the executability it achieves is pretty high. I didn't want to oversell you, but also not undersell you, certainly. Yeah. But to answer the question, we actually found there are three ingredients, but central to this is one really simple technique that we found most useful, which is action translation. In this virtual home environment, the actions it supports are a limited set. I mean, it's not small, but it's something we can definitely enumerate with our computational hardware, and in a really quick manner, like one-tenth of a second or something like that. So if we can enumerate all the actions supported by the environment, the question now becomes: how do we translate these really sensible, but not really executable, action plans generated by language models into the actions supported by the environment? Or, if you want to deploy something in the real world, let's say your robot only supports ten actions, how do you map those tasks into the ten actions the robot supports? So what we found is that you first enumerate all the actions, and then you can again leverage the world knowledge in these language models by using another language model. Here we use RoBERTa, which is really similar to BERT. It's a different kind of language model because it's essentially a masked language model, so it's really good at outputting a useful embedding that captures the semantic meaning of a sentence. So what we do is take the sentence output by GPT-3 or Codex, compare it against all the admissible actions allowed by the environment, and find the most similar one in terms of distance in the embedding space. We actually just use cosine distance and found that to work decently well. Yeah, so there's an entire space somewhere, and you just place all the actions in it. I guess you can even pre-compute those: you can pre-compute the embeddings of all possible actions. And once my language model outputs anything at all, all I need to do is ship it through the RoBERTa model, get its embedding, put it in that space, and get the nearest neighbor. That's my translated action. So here we have an example that would translate squeeze out a glob of lotion into pour lotion into right hand. So pour would be the verb, lotion the object, and right hand also one of the objects; maybe there are two arguments to pour. It seems very simple, but I was at a talk by the people who made the first version of the quick replies in Gmail, where you always have these three quick options to respond with. I'm not sure how it's done now, but with the first version of this, we were like, wow, this is cool, it actually takes into account the email message that was there. We always thought it was some kind of generative language model somewhere. So I went to a talk, and they were just like, no, we just have a big list of responses and we classify: we take your message, put it through a model, and classify it into this big bucket of possible answers. So even though this is simple, it's a very powerful method. And that being said, you don't even train this.
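A minimal sketch of this nearest-neighbor translation, assuming the sentence-transformers library; the model name and the tiny action list are stand-ins, not the authors' exact setup:

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-roberta-large-v1")

# Enumerate every admissible "verb object" action the environment supports
admissible_actions = [
    "walk to kitchen", "open fridge", "grab milk",
    "close fridge", "pour lotion into right hand",
]
# Pre-compute the action embeddings once; only the query changes per step
action_embeddings = embedder.encode(admissible_actions, convert_to_tensor=True)

def translate(free_form_step: str) -> str:
    # Map a free-form LM step to its nearest admissible action
    query = embedder.encode(free_form_step, convert_to_tensor=True)
    similarities = util.cos_sim(query, action_embeddings)[0]
    return admissible_actions[int(similarities.argmax())]

print(translate("squeeze out a glob of lotion"))  # -> "pour lotion into right hand"
```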
You take an off-the-shelf embedding model, you compute nearest neighbors, and it turns out quite well. You do, however, and you talk about this in the paper, have a bunch of problems. One of the problems I see is whenever a step contains multiple steps. Have you found this to be a big problem? Because this just maps one action to one other action, but if it's, you know, open the fridge and take a glass of milk, then I have essentially no way of translating that into an admissible sequence. Yeah, that's a good question, and I think that's one of the main sources of error. The RoBERTa model we use is actually a Sentence-RoBERTa model, because it's trained with a different objective such that you can actually calculate cosine distances between the embeddings it generates. And we found it's pretty difficult to map a compound action, like you said, two actions in one sentence, into one admissible action. But this is partly mitigated by how you tune the temperature, the sampling parameter, for the GPT-3 or Codex models. We found that if you increase the temperature, the model tends to output more verbally expressive answers for each step, which means they're harder to translate. We tried different settings, and in the end we found that you usually want a lower temperature than what people mostly use for language generation, so that each action is small and succinct enough to be easier for the RoBERTa model to translate. And something I forgot to mention: after we get this translated action, we found it's still useful to put it back into the original prompt, the translated action instead of the original output, so that the GPT-3 or Codex model can reason about what to do next based on the actions already performed. So yeah, like you pointed out, this is the third subfigure here. Instead of generating the entire plan at once, we just generate one action, then translate it, and then substitute whatever GPT-3 output with the translated version, and based on that, generate the next action. It makes sense, because it's almost like a guardrail for the language model. If you were to let it generate everything at once and then translate each action individually, the actions would almost lose their connection to each other, right? So this might mitigate some of that: if I have a compound action, like go to the fridge and grab a glass, and the closest admissible sentence is go to fridge, the language model might still recover and recognize, aha, I haven't grabbed a glass yet. So these are improvements one and two. And then the third thing you found really helps is the prompt up here, the priming. In GPT-3 it's very common to have these priming prompts to tell the model what kind of output you expect. I was surprised to see that you only have one priming prompt, whereas in general people put more than one, usually like three or something like this.
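A sketch of this interleaved generate-then-translate loop; `generate_next_step` is a hypothetical stand-in for one GPT-3/Codex completion call, and `translate` is the nearest-neighbor mapping from the earlier sketch:

```python
def generate_next_step(prompt: str, temperature: float):
    """Stand-in for a single LM completion call with a newline stop token;
    assumed to return None once the model signals the plan is finished."""
    raise NotImplementedError  # replace with a real API call

def plan(task: str, example_plan: str, max_steps: int = 20) -> list:
    prompt = f"{example_plan}\n\nTask: {task}\nStep 1:"
    steps = []
    for i in range(1, max_steps + 1):
        # a low temperature keeps each step short and easy to translate
        raw_step = generate_next_step(prompt, temperature=0.3)
        if raw_step is None:
            break
        admissible = translate(raw_step)
        steps.append(admissible)
        # feed the *translated* action back, so the model conditions on
        # actions that were actually taken, not on its raw free-form output
        prompt += f" {admissible}\nStep {i + 1}:"
    return steps
```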
Is there a particular reason why you used just one? There is actually no particular reason. In the beginning, we knew we had this dataset, and originally we actually tried to train something to achieve this, but in the end we found we didn't even need to train anything. So the question became: can you still leverage this dataset to some extent to make it useful? Of course, this is something additional; I mean, it would definitely work without any of this. But if you have this dataset, you can actually find the most similar example to the query task. For example, here the query is apply lotion, and the task shave is determined to be most similar, again judged by this RoBERTa model using the same technique. So I think that's the main motivation for using this, but we didn't thoroughly investigate how you structure the prompts, whether you add multiple examples there, or whether you change the template here, because I just defined this template from day one, like Task: something, Step 1: something, Step 2: something. Maybe there is a better template; maybe you want to add some instruction there to make it better. This is definitely possible, and we didn't investigate it here because we didn't just want to squeeze the best performance out of this. We wanted to show people that this is possible, and it's really interesting to us. So that's why we ended up just using the simplest technique here. And to answer your question about why we don't put multiple examples there: I think one important reason is that these example plans we put in front are produced by humans. Due to space constraints, I'm using an oversimplified version in this figure specifically, but in practice these plans are actually pretty long, and they already take up a lot of space in the prompt. So if you put more than one, sometimes it gets too long. Maybe that's something larger models can handle, but we just opted for the simplest case. And I actually read a recent paper investigating why in-context learning works; they frame it as an implicit Bayesian inference problem, and they came to the conclusion that, if I remember correctly, the longer the prompt, the more it helps the model. So in this way, you kind of trade off the number of examples you put in against the length of each example. In the cases where, as you mentioned, people put many examples before the query, those are usually cases where the tasks are smaller. For example, if you want to ask where Einstein was born, that's just a sentence, so you probably want to put more than one example there. But our case is an extensive action plan, so it's already pretty lengthy, and we didn't want to go too crazy here. Sorry, the recording has stopped on the screen side, but we can still see it. Okay. Yeah. So I was quite interested in the prompt structuring, because I know that can also make a big difference. But I also like the approach of not having too many moving parts in one single thing, because it makes things complicated.
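A sketch of this dynamic example selection, reusing `embedder` and `util` from the translation sketch above; the example plans and the template are illustrative:

```python
# A few human-written plans keyed by their task names (abbreviated here)
example_plans = {
    "shave": "Task: Shave\nStep 1: Walk to bathroom\nStep 2: Grab razor",
    "browse internet": "Task: Browse internet\nStep 1: Walk to home office",
}
example_tasks = list(example_plans)
task_embeddings = embedder.encode(example_tasks, convert_to_tensor=True)

def pick_example(query_task: str) -> str:
    # return the human-written plan whose task name is closest to the query
    query = embedder.encode(query_task, convert_to_tensor=True)
    best = int(util.cos_sim(query, task_embeddings)[0].argmax())
    return example_plans[example_tasks[best]]

prompt_example = pick_example("apply lotion")  # plausibly selects the "shave" plan
```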
And for many papers, it makes you wonder what exactly was the thing that gave the improvement. Now, you do very good ablations of all of these different improvements, which I really liked, and you showed that the translation is the main part right here, although the other things certainly also help. It reminds me a bit of this RETRO model, these language models that retrieve from the internet as they produce text, in that you go and retrieve the closest samples in the dataset as you produce the text. Yeah, I think this combination of retrieval and generation is picking up steam, and it looks pretty interesting. My question is a little bit: since you now rely on this translation procedure to produce the correct actions, have you tried any way to let the model know what the possible actions are? Something like: I ask the model first, and then I get maybe the five or ten closest actions in embedding space, and then I somehow put these into the prompt, like, what am I going to do next, is it this, or this, or this? And then maybe I could prime the model to output one of them. Did you try any way of telling the model more about what's even possible in the environment? Because right now you're essentially relying on just the language model itself. Yeah, that's a really good question too. We actually didn't try the specific thing you talk about, generating a bunch of possible actions and then asking the model again which of these is best, but we did try something similar, which is beam search. In beam search, you look ahead to see which outcomes end up with the highest likelihood, and we did try to constrain the vocabulary that can be used in the beam search. But this was only conducted on smaller models, because obviously the GPT-3 and Codex models are not fully open to the public, so we don't really have full access to features like restricting the vocabulary dynamically. So I only did this on relatively smaller models, like GPT-Neo, and I think I might have tried GPT-J as well, which is a 6 billion parameter model. And it actually turns out that they don't do really well if you just constrain the vocabulary that way, specifically constraining the vocabulary the beam search can generate from. My hypothesis, and this is not thoroughly tested because it wasn't investigated on larger models, my intuition for why it doesn't work so well, is that these language models are really trained on human text. They're really used to how humans speak a certain language, in this case English. People don't say things in this way: step one, something; step two, something; step three, something. So if you really constrain the models this way, a lot of the world knowledge encoded in them is lost. Basically, and this is just a personal opinion, I don't think these models are doing super intelligent reasoning here. They're basically just retrieving what they were trained on, retrieving from this large-scale text.
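A rough sketch of the constrained-vocabulary decoding idea with Hugging Face transformers; the whitelist logic is deliberately simplified, and the action list is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Whitelist: only tokens that occur in some admissible action may be produced
allowed_token_ids = sorted({
    token_id
    for action in ["walk to kitchen", "open fridge", "grab milk"]
    for token_id in tokenizer(action)["input_ids"]
})

def restrict_vocab(batch_id, input_ids):
    return allowed_token_ids  # the same whitelist at every decoding step

inputs = tokenizer("Task: Brush teeth\nStep 1:", return_tensors="pt")
output = model.generate(
    **inputs,
    num_beams=5,
    max_new_tokens=20,
    prefix_allowed_tokens_fn=restrict_vocab,
)
print(tokenizer.decode(output[0]))
```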
So if you want to retrieve better, you'd better adopt the same way that humans speak the language. If you don't constrain the vocabulary, you can get the most out of a language model. And you can really tell if you adjust the temperature: at different temperatures they tell you things at different levels, and they can be really realistic. But if you really constrain it, a lot of this knowledge is lost, and it can't really do much common sense reasoning anymore. You mentioned this a bunch of times, and I was surprised to find Codex as a model. So you have these sort of vanilla models, and then you have the translated ones where all your improvements are in there: the action translation, the sampling according to probability and executability, the retrieval of the closest prompt, and so on. And these translated models perform really well. What I was surprised by, also in the results, is Codex. I mean, that it's even in here, it's a code model, but also that it holds up comparably. It's not as good as the GPT-3 model, but it's also very much smaller. So parameter for parameter, Codex is outshining GPT on this task. How did you even come to consider using Codex, and how can you explain that this model is doing so well? Yeah. This actually came out to be pretty surprising to us as well. We did find that these Codex models are really good at generating these plans, and from my own experience playing with these models, I found that Codex thinks this is part of some docstring. It's imagining that people are writing the docstring here, but instead of letting it keep generating the code, we just stop there. Once we have the docstring, for us, that's enough. So it's actually generating this docstring thing. And the reason I think the smaller Codex models are actually better than same-size GPT-3 models is that Codex is trained on more structured data, namely code, and specifically, many of the code examples in the training dataset consist of a docstring and the code. So it can not only handle code really well, it can also generate really realistic docstrings. And in docstrings, people don't write... Yeah, they don't write a novel. Right, they write something really step by step, with more structure in it. So that's my intuition for why it does really well on this task: it can process this sequential, logical reasoning better than a same-size GPT-3 model. But of course, if you use a larger model, that could potentially be more helpful. Yeah. Or, as you said, there are still a lot of open questions about how exactly you structure the prompts. Maybe this step one, step two, step three isn't ideal for these language models. Maybe you need to let them write like a Reddit post or something about how they went and got a glass of milk yesterday, and then translate that somehow. But yeah, it's pretty cool. So one thing that just came to my attention right here is this top row, which I found hilarious. The task is complete Amazon Turk surveys, and the four steps apparently are: walk to home office, sit on chair, switch on computer, look at computer. Like, is this the description of completing Amazon Turk surveys?
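One way to picture this docstring intuition; the function wrapper below is purely illustrative, since the paper prompts with a plain step list rather than an actual Python function:

```python
# Hypothetical framing: to Codex, a step-list prompt plausibly looks like
# the start of a docstring it should complete before writing any code.
codex_prompt = '''def brush_teeth():
    """
    Task: Brush teeth
    Step 1: Walk to bathroom
    Step 2:'''
# Codex would plausibly continue the docstring ("Grab toothbrush", ...);
# since only the plan is wanted, generation can stop before any code appears.
```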
It's maybe a pretty accurate description of what Amazon Turk workers do. So, like I said, these tasks are crowdsourced from humans, and the humans here happen to be Amazon Turkers. So one of them decided that, okay, if you want me to generate some tasks, I'd say just complete surveys on Amazon Turk. So they decided to put one of these in, and we found it here. Like I said, these language models can't handle just anything you want them to generate, and we did put the example in front. I think in this case the example happens to be something related to the computer, so you can't really tell whether the model actually reasons or whether it could just be repeating the example. But judging from other tasks, it doesn't seem like that's the case. It does seem to come to the reasoning that this task might be something related to the computer too, and puts those steps there. Yeah, yeah. I mean, it has something melancholic, and it also has something a bit rebellious, as you said: I'm here doing my Amazon Turk work, I'm just going to put my Easter egg in this dataset, or show you. But it also shows something, I think, about the interaction with this environment. Because if you ask me what I did today, I could tell you: I programmed this, I reviewed a pull request, I sent some emails, and so on. But in the action space of this environment, this would all just be characterized as go to desk, sit on chair, switch on computer, look at computer. So it is really maybe also a constraint of the environment itself. And as I said, I think the challenge is going to be: there's so much knowledge in these language models, and we somehow need to get it out into the domain that we care about. And yeah, I guess many opportunities are still there. And in this particular environment, the way I see it, we have this 3D environment, but for your studies you never actually had to execute anything in the environment. Is that correct, or do I see something wrong here? When you say execute, do you mean run it in the environment? Yeah, run the 3D environment, actually give the plan to the environment. Because executability you can evaluate with a parser, right, to see whether it matches the actions and constraints, and correctness you evaluate with the humans. My question was also a little bit: why can't I just run it and see if, at the end, there's breakfast? But you already said that the tasks are so open, how would you even detect that there's breakfast? So, a bit of background here on the virtual home environment: it comes in two versions. One is, I think they call it, the evolving graph version, which is a pure state machine, like you said, written in Python. It just goes in and checks whether the actions can be parsed and whether they satisfy the common sense constraints. The other version they implement is this visualized version, where they actually only implement a subset of the total actions supported in the environment. So in the evolving graph version, the Python version, there are 42 actions, and in the visualized version, there are only 10 actions.
So it's limited. The plans we can generate that we can really visualize are limited. That's also part of the reason we don't show the visualized version to humans and ask, can you tell us whether this is successful or not? So yeah, that's indeed something we can't do right now. And I think, as a community, as we go on to this next step, with more complex tasks that humans do every day instead of just lower-level tasks, more effort can be put here into developing better simulators, and maybe even beyond household environments. Just as a story here: I did play around with the Codex and GPT-3 models to have them generate something outside the household domain, and it seems like they do have a lot of knowledge for those as well. You can ask it, how do I pay the bill at a restaurant? How do I work out at the gym? And I think on Twitter, after the posting of this paper, someone also tried to ask the GPT-3 model, how do I start a company? So yeah, they do have a lot of knowledge for this. And as long as you can provide a set of actions that are necessary to complete these tasks, no matter what the granularity is, though ideally it should be at the same granularity as humans', this model should be able to generate something sensible and reasonable. But yeah, right now it's definitely not something you can trust to put on a robot, of course. Yeah. I've always seen people, when they think of GPT-3 and, for example, of video games, imagine that we can have our NPCs' dialogue generated by GPT-3, so the dialogue is more realistic. But I think this shows that it can go further. If we are able to map GPT-3's knowledge into a structured domain that we choose, we could potentially also let these models generate the action sequences of characters, let's say in video games. Because that's a common complaint: the guards always walk up, and then down, and then left, and then right, and then up, and then down, and right again. Even if the dialogue gets really good, their behavior is still kind of lame, either that or they cheat and know where you are at all times. But with models like this, I feel we can almost take this common sense knowledge and maybe have the hope of transferring it to various domains and infusing a lot of areas with common sense. And I find that to be pretty cool in itself. That would be a really exciting and interesting application. Yeah. So there's a lot to be gained. What I did, I was specifically intrigued by CLIP. I don't know if you've been thinking about this or not. What I tried to do is take a frame of Pac-Man: there are walls here and here and here, Pac-Man is here facing a wall, there's a ghost behind Pac-Man, and there are these little dots over here to eat, so it's super clear what you have to do. I tried to feed that to CLIP, and, you know, you can make CLIP classify things by just evaluating a bunch of different strings with it.
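A rough sketch of this Pac-Man/CLIP scoring idea, assuming OpenAI's clip package; the frame path and the action strings are hypothetical:

```python
import clip
import torch
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")

frame = preprocess(Image.open("pacman_frame.png")).unsqueeze(0)
actions = ["Pac-Man should go left", "Pac-Man should go up",
           "Pac-Man should go right", "Pac-Man should go down"]
tokens = clip.tokenize(actions)

with torch.no_grad():
    # CLIP scores how well each action string matches the frame
    logits_per_image, _ = model(frame, tokens)
    probs = logits_per_image.softmax(dim=-1)[0]

print(actions[int(probs.argmax())])  # the action CLIP deems most likely
```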
So I tried to evaluate strings like go left, go up, go right, go down, or Pac-Man should go left, Pac-Man should go up, but it never worked out. If you could get something like this running, that would be amazing. Maybe with your knowledge... Maybe Pac-Man isn't the right environment, because CLIP was trained on whatever pictures were scraped from Instagram. But I think just this type of thinking, beyond just strings in terms of language, towards I have some structured environment and I want to leverage the knowledge of these models, is super cool. Yeah, that would be a super interesting application. I think using CLIP here could be really interesting, because it fuses in another modality, which is images. I think it kind of addresses one of the major limitations of this paper, namely that we currently generate plans regardless of the environment state, so we don't condition on the environment state. Potentially, using CLIP, you can encode something there, because you can also take an image as input, and an image can serve as state for the environment. I think that would be really cool. And just to be clear to the listeners, the basic idea for this I have from a PhD student who was partially in our lab, Giambattista Parascandolo. The credit for this whole idea fully goes to him. It just got me thinking so much about how we can extract this knowledge into other modalities, and that's pretty cool. Is there anything you want to maybe say about the experiments? Anything that was very surprising to you, something you didn't expect, or something you particularly want to highlight? Actually, I think we covered most things, but I might say something about the baseline here. As you can probably see, except for the human references, we also fine-tuned a GPT-3 version. And we did find that fine-tuning can be a really strong baseline here, because, as you can probably tell, one of the measures here, LCS, which is the longest common subsequence, is much higher than the others. This measure basically calculates how much overlap there is between your generated plans and those plans written by humans, kind of an IoU score. So we did find this to be a strong baseline, and I think it actually makes sense for it to be a strong baseline, because it's trained on exactly such data. This is kind of to illustrate that if you do have domain data, it's still really helpful to train your models, to fine-tune them this way. But if you don't have something like this, you can potentially just leverage the knowledge already in these language models. Cool. Yeah. So where does your future lie? Are you going more into this direction, or was this sort of a one-off thing? What are the interesting questions that you're asking now, maybe as a follow-up to this? Yeah. So personally, I haven't decided yet, because I'm at a stage where I'm applying to PhD programs and also other positions. But as a follow-up, I think it would be really interesting.
As I mentioned, one major limitation of this work is that we haven't found a clear way to condition on the environment state. If you really place an agent in the household, for example, and you want to make coffee but there isn't an automatic coffee machine, how would you make coffee with maybe some similar devices? The agent can't really reason about that the way we set it up, because it doesn't condition on the environment state. So I think it would be really interesting to investigate how you can also condition on the current environment and then reason from there. But this might require some training data, and I think that's part of the reason why we didn't go full length here to investigate it. This paper is just for us to tell people, here is an interesting finding, and we may be able to leverage something here. But I think that would be really exciting and interesting future work. Cool. Excellent. Wenlong, thank you very much for being here. This was awesome. It's always so great to hear from the people who made the stuff. So yeah, thanks a lot. Yeah, thank you so much. And I also want to point out that this is a group effort, and a lot of thanks goes to three of my advisors: Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Excellent. All right. Thank you, and I hope to see you again. Yeah, it's always an honor to be here. Excellent. All right. Bye bye. See you.
[ { "start": 0, "end": 5.5200000000000005, "text": " Hello there, today we're looking at language models as zero-shot planners, extracting actionable" }, { "start": 5.5200000000000005, "end": 11.28, "text": " knowledge for embodied agents. And I'm going to interview the first author Wenlong Huang" }, { "start": 11.28, "end": 17.28, "text": " in a few minutes. So first there's an explanation of the paper, 10-15 minutes or so, I'm going to" }, { "start": 17.28, "end": 22.240000000000002, "text": " try to keep to it. And then we jump into the interview where we can discuss this paper at" }, { "start": 22.240000000000002, "end": 27.76, "text": " length. On a high level, this paper asks, can we use the knowledge that is inherent in large" }, { "start": 27.76, "end": 35.92, "text": " language models like GPT-3, or surprisingly, OpenAI's codecs in order to do planning in what" }, { "start": 35.92, "end": 40.32, "text": " they call embodied agents. Ultimately, it's going to be this environment right here. The," }, { "start": 41.28, "end": 45.84, "text": " I don't even know what it's the virtual home environment. And it's about a virtual home," }, { "start": 45.84, "end": 51.28, "text": " you have to fulfill some tasks like brushed your teeth, then the model has to come up with a" }, { "start": 51.28, "end": 56.24, "text": " sequence of steps that are admissible by the environment. So there's a level of admissibility" }, { "start": 56.24, "end": 61.04, "text": " of action, predefined actions that are admissible, the model has to come up with these actions in" }, { "start": 61.04, "end": 66.8, "text": " order to fulfill the task. The model is then rated based on executability and correctness" }, { "start": 66.8, "end": 73.44, "text": " of their plans. And it turns out that the larger the models get, as you can see right here, the" }, { "start": 73.44, "end": 79.76, "text": " less executable the plans become, which means that the actions they generate aren't admissible" }, { "start": 79.76, "end": 84.56, "text": " by the environment, probably because the models are more, let's say powerful, they can express" }, { "start": 84.56, "end": 90.88, "text": " themselves in more ways, they have different ideas of how to reach goals. However, the correctness," }, { "start": 90.88, "end": 96.96000000000001, "text": " this is human evaluated of these models rise as they grow larger. So this gives you an indication" }, { "start": 96.96000000000001, "end": 101.36, "text": " that the large models seem to have quite a lot of knowledge. And we have to say these are not" }, { "start": 101.36, "end": 108.16, "text": " trained, the entire paper just works except for one baseline evaluation, just works with pre-trained" }, { "start": 108.16, "end": 113.28, "text": " models, they're not fine tuned at all on this environment right here. So what this paper does" }, { "start": 113.28, "end": 118.72, "text": " is it says, well, given that the larger the models get, the more correct their plans are," }, { "start": 118.72, "end": 124.64, "text": " can we do something to fix the issue with the executability? To that, they develop this" }, { "start": 124.64, "end": 129.52, "text": " translation procedure right here. These are three specific improvements they do to the models." 
}, { "start": 129.52, "end": 134.96, "text": " In order to get their executability up, you can see they sacrifice like a little bit of the" }, { "start": 134.96, "end": 140.88, "text": " correctness, but they do make the plans largely executable in the environment. And therefore," }, { "start": 140.88, "end": 145.28, "text": " procedures like this could be applied in many different ways. It's not only about the virtual" }, { "start": 145.28, "end": 150.24, "text": " home environment and so on. It's essentially anywhere where you bring together the knowledge" }, { "start": 150.24, "end": 155.51999999999998, "text": " that is inherent in large language models with some sort of a domain specific language or a" }, { "start": 155.51999999999998, "end": 161.2, "text": " grammar or any anything like this, like where you have to transfer that knowledge into a new domain," }, { "start": 161.2, "end": 166.56, "text": " but you don't want to train a model to do so. So we're going to see how they do it really briefly." }, { "start": 166.56, "end": 172.4, "text": " First of all, the environment itself, as I already said, is this now this is visualized, although" }, { "start": 172.4, "end": 177.92000000000002, "text": " they never work, you know, actually in 3D, just a small correction here, because I messed this up." }, { "start": 177.92000000000002, "end": 182.08, "text": " There are actually two versions of the virtual home environment. One is a Python version that" }, { "start": 182.08, "end": 186.96, "text": " focuses on the textual interaction with the environment. The other one is implemented in" }, { "start": 186.96, "end": 193.2, "text": " Unity and actually does work in 3D. The developers of the environment mostly focus on the Unity" }, { "start": 193.2, "end": 198.16, "text": " environment because it's more real. But as of yet, that has a subset of the actions available that" }, { "start": 198.16, "end": 203.92, "text": " the Python environment has. And the authors of the paper use the Python environment and the data set" }, { "start": 203.92, "end": 208.64, "text": " that comes along with that. We're going to go into this more in the interview. Stay tuned." }, { "start": 208.64, "end": 214, "text": " They simply grab the data set of possible tasks, some tasks you can see right here, a task could be" }, { "start": 214, "end": 220, "text": " throw away paper, another task could be brush teeth, and there there'd be a sequence of steps." }, { "start": 220, "end": 225.12, "text": " This environment is made by humans. So the tasks are made by humans. And then other humans have" }, { "start": 225.12, "end": 231.12, "text": " to come up with the steps that are admissible, admissible actions in this environment. There are," }, { "start": 231.12, "end": 237.28, "text": " I believe, a number of objects that are defined, they're predefined. Yeah, so there are a number of" }, { "start": 237.28, "end": 243.68, "text": " objects, for example, living room, television, sofa, and so on. And there are a number of verbs." }, { "start": 243.68, "end": 251.44, "text": " So walk, find, switch on, and so on. And not every verb object combination is possible. Some verbs" }, { "start": 251.44, "end": 256.88, "text": " have two objects and so on. But essentially, you combine the predefined verbs and the predefined" }, { "start": 256.88, "end": 262.8, "text": " objects, and then the state of the world changes. 
So the world keeps track of states, there are" }, { "start": 262.8, "end": 268, "text": " certain preconditions. For example, you can probably only sit on the sofa if you are in the" }, { "start": 268, "end": 274.16, "text": " vicinity of it. So you need to first find the sofa, you can only switch on the television." }, { "start": 274.16, "end": 279.28, "text": " Similarly, if you have first found the television or walked to the television or something like" }, { "start": 279.28, "end": 284.32, "text": " this, if the television is in the living room, you first need to go to the living room, and so on." }, { "start": 284.32, "end": 289.68, "text": " So there's a hidden kind of a state. But all of this is constructed. And we talked about this in" }, { "start": 289.68, "end": 294.48, "text": " the interview, like, what's the appropriate granularity of actions like this? And isn't" }, { "start": 294.48, "end": 300.32, "text": " this a major issue? But it is made all with the humans in the loop. So the data set is supposed" }, { "start": 300.32, "end": 307.36, "text": " to be kind of the most natural expression of these tasks, as split into steps that a human would come" }, { "start": 307.36, "end": 313.20000000000005, "text": " up with. So this is the grammar of the environment. And the language models, they don't know about" }, { "start": 313.20000000000005, "end": 319.28000000000003, "text": " this grammar. They're just language models. So what they do is they take something like GPT-3," }, { "start": 319.28, "end": 326, "text": " and they make a prompt. Now the prompt, as you might know, in GPT-3, you have to give a prompt." }, { "start": 326, "end": 331.03999999999996, "text": " So the prompt could just be like, here's the task, you know, blah, blah, blah, brush your teeth," }, { "start": 331.03999999999996, "end": 337.52, "text": " then what's step one, right? And then GPT-3 will probably it will probably even generate step two" }, { "start": 337.52, "end": 342.88, "text": " and three and four. But it will probably not be according to the these actions in these templates," }, { "start": 342.88, "end": 348.55999999999995, "text": " you can help this a little bit by putting a prompt up here. So the prompt they use is one," }, { "start": 348.56, "end": 355.52, "text": " I believe one specific plan. So they have already like task up here, some task, and then some number" }, { "start": 355.52, "end": 361.04, "text": " of steps, so that the model kind of knows what is expected. We also talked about this in the interview," }, { "start": 361.04, "end": 368, "text": " and this could potentially be improved by multiple, multiple prompts and so on. But in the baseline," }, { "start": 368, "end": 372.88, "text": " they have one particular prompt. And then one of the improvements is actually to select a more" }, { "start": 372.88, "end": 378.4, "text": " optimal prompt. This is the basic setup. You have a goal in this environment with a fixed grammar," }, { "start": 379.04, "end": 386, "text": " and you task, you input this right here to your language model, and the language model will spit" }, { "start": 386, "end": 392.15999999999997, "text": " out the plan. Now what do you do with the plan? The plan, you score, like how good is the plan?" }, { "start": 392.15999999999997, "end": 399.68, "text": " And they have two different scoring available. One is executability. And executability is just like," }, { "start": 399.68, "end": 406.24, "text": " it's essentially parsability by the environment. 
So in executability, you ask yourself, can it be" }, { "start": 406.24, "end": 410.72, "text": " correctly parsed, which means that is the syntax according to the syntax of the environment. And" }, { "start": 410.72, "end": 416.16, "text": " they do have a little translation procedure, like a little heuristic translation procedure for the" }, { "start": 416.16, "end": 422.88, "text": " baseline in place, so that the language model probably can't get it exactly right. But they do" }, { "start": 422.88, "end": 428.48, "text": " sort of translate to the closest action there. But also one of the improvements is related to this." }, { "start": 428.48, "end": 433.28000000000003, "text": " And then also does it satisfy the common sense constraints of the environment. And these would" }, { "start": 433.28000000000003, "end": 438.88, "text": " be programmed in like, for example, you can only pour yourself a glass of milk if you first open" }, { "start": 438.88, "end": 445.6, "text": " the fridge and grab the milk, this can be measured directly, what cannot be measured that well is" }, { "start": 445.6, "end": 450, "text": " correctness. So these models, they would come up with plans and independent of whether they're" }, { "start": 450, "end": 455.76, "text": " executable or not, they could be correct, right. And that's where they ask humans. So they use" }, { "start": 455.76, "end": 463.03999999999996, "text": " human evaluations, they conduct human evaluations in order to score the correctness of whatever" }, { "start": 463.03999999999996, "end": 468.71999999999997, "text": " these models output. So they give it to a human, ask the human, does this look like a sensible plan" }, { "start": 468.71999999999997, "end": 473.84, "text": " in order to brush your teeth, and the human would either say yes or no, when they do like ablations," }, { "start": 473.84, "end": 479.2, "text": " and so on. They also use like longest common sub sequences between two programs and so on in" }, { "start": 479.2, "end": 484.08, "text": " order to not spend ginormous amounts of money on humans. But essentially, the correctness metric" }, { "start": 484.08, "end": 489.44, "text": " is a human metric. It's also interesting because you thought you could just execute like the plan" }, { "start": 489.44, "end": 495.28, "text": " in the environment and that give you like, does it succeed or not, but they say correctly that for a" }, { "start": 495.28, "end": 500.32, "text": " task like make breakfast, there's not really a defined end condition that you could program" }, { "start": 500.32, "end": 505.44, "text": " into the environment to give a reward. So it's more accurate to ask humans whether a plan is correct." }, { "start": 505.44, "end": 512.0799999999999, "text": " As you might have guessed, this environment is very human centric, it's made by humans with humans in" }, { "start": 512.08, "end": 518.48, "text": " the loop and so on. It's supposed to really be sort of a representation of human tasks and human" }, { "start": 518.48, "end": 523.5200000000001, "text": " plans to human tasks. All right, so now we're going into the improvements. There are three" }, { "start": 523.5200000000001, "end": 529.12, "text": " distinct improvements they make. 
So if they just do this, if they just do what we've described so" }, { "start": 529.12, "end": 535.5200000000001, "text": " far, then the graph up here results, excluding the two models on the right, you can see the larger" }, { "start": 535.5200000000001, "end": 541.6, "text": " the models get, the higher their correctness, but the worse their executability. So now the thought" }, { "start": 541.6, "end": 548.96, "text": " is, can we change that? Can we raise the executability? And so this is the baseline right" }, { "start": 548.96, "end": 557.0400000000001, "text": " here, zero-shot planning via a causal large language model: you put in a task as a prompt, along" }, { "start": 557.0400000000001, "end": 561.52, "text": " with like the format you expect, which is this one right here, which is some other task from the" }, { "start": 561.52, "end": 567.76, "text": " data set, then you use the pre-trained language model like GPT-3 or something, and that will give" }, { "start": 567.76, "end": 575.92, "text": " you a plan. And that's it. So the next thing they do is they do what they call a translation model." }, { "start": 575.92, "end": 581.12, "text": " So they introduce a second model, which is also pre-trained. And it's not trained on" }, { "start": 581.12, "end": 586.3199999999999, "text": " translation. It's just trained on masked language modeling. So think of this like," }, { "start": 586.88, "end": 592.88, "text": " this is just BERT. In fact, I believe they use Sentence-BERT, just pre-trained on English" }, { "start": 592.88, "end": 600.32, "text": " language. And what they do is they make a big vocabulary of all the admissible actions. So all" }, { "start": 600.32, "end": 605.84, "text": " the admissible actions would just be like any combination between any verb and any object that" }, { "start": 605.84, "end": 611.84, "text": " would actually go with that, that is admissible to this verb. So from this, they make like a giant" }, { "start": 611.84, "end": 620.24, "text": " list of all of the admissible actions. And then they embed that giant list. So they put this into" }, { "start": 620.24, "end": 627.6, "text": " some embedding space using the pre-trained Sentence-BERT model, right. And then whenever the large" }, { "start": 627.6, "end": 632.24, "text": " language model outputs something, they don't insert it into the plan directly. They first" }, { "start": 632.8, "end": 640.08, "text": " embed whatever the model outputs. Let's put this over here, they embed it, let's say that becomes" }, { "start": 640.08, "end": 647.6, "text": " this right here. Then they see what's the nearest neighbor of my admissible actions to this thing." }, { "start": 647.6, "end": 653.52, "text": " And then they simply replace whatever the model outputs with the nearest neighbor. And they call" }, { "start": 653.52, "end": 660, "text": " that translation. So essentially, it translates from general natural language space into the" }, { "start": 660, "end": 667.36, "text": " space of the admissible actions, or the grammar of the environment. Now this has some problems on its own." }, { "start": 667.36, "end": 674.96, "text": " For example, if the model outputs compound actions. So if it says, for example, squeeze out" }, { "start": 674.96, "end": 682.8000000000001, "text": " the glob of lotion and put it in your mouth, or so, or on your face, I guess, then, well, it's apply" }, { "start": 682.8000000000001, "end": 688.32, "text": " lotion, it's anywhere.
Squeeze out the glob of lotion and put it on your skin. That would be" }, { "start": 688.32, "end": 693.12, "text": " still one action. Now which one would be the closest right here, there's going to be somewhere" }, { "start": 693.12, "end": 699.84, "text": " like squeeze out a bit of lotion and the other one is going to be like, put the lotion on your skin." }, { "start": 699.84, "end": 705.36, "text": " Yet you only have one action, like it's one line. So one action, it just contains like an" }, { "start": 705.36, "end": 710.24, "text": " 'and'. Now the 'and' might be easy to recognize, but there are going to be other" }, { "start": 710.24, "end": 717.2, "text": " compound actions. And this is going to be a problem here, because you just map one action to one" }, { "start": 717.2, "end": 723.2, "text": " admissible action. But in any case, doing this already helps a lot, even though there are still" }, { "start": 723.2, "end": 727.76, "text": " some problems. To alleviate the rest of the problems, they have two more improvements." }, { "start": 727.76, "end": 734.16, "text": " The first improvement they do is they say, well, if there is a compound action, we can still kind of" }, { "start": 734.16, "end": 740.4, "text": " alleviate that a little bit. So in the original method, what they did is they simply took this" }, { "start": 740.4, "end": 745.28, "text": " through the language model, and they got out just a list of steps, right? Here is step" }, { "start": 745.28, "end": 751.12, "text": " one, here is step two, here is step three, and so on. That is just a list of steps. And they would" }, { "start": 751.12, "end": 756, "text": " translate, even when they use the translation model, they would translate each of them to" }, { "start": 756, "end": 761.6, "text": " an admissible action, translate this one to an admissible action. Well, now you have no idea" }, { "start": 761.6, "end": 766.56, "text": " of whether that sequence of admissible actions even makes sense, right? For example, one could" }, { "start": 766.56, "end": 772, "text": " be a compound action, and it just gets translated to one of the two actions. And then the next action" }, { "start": 772, "end": 778.08, "text": " doesn't have its precondition fulfilled. So what they do is they interleave the two steps, right? They interleave" }, { "start": 778.08, "end": 784.56, "text": " this translation with the generation. So they would only generate one step at a time, like step one," }, { "start": 784.56, "end": 789.52, "text": " then they would translate it, and then they would use the translated version and put it back into" }, { "start": 789.52, "end": 795.92, "text": " the language model to get step two. That way, the language model is always conditioned on admissible" }, { "start": 795.92, "end": 800.56, "text": " actions instead of just being free form and then translating after the fact. So this is" }, { "start": 800.56, "end": 806.64, "text": " autoregressive generation. The last improvement they make, which is, I guess, more of a minor" }, { "start": 806.64, "end": 811.76, "text": " improvement. That's why it's not in this diagram. However, what they do is instead of having a" }, { "start": 811.76, "end": 819.52, "text": " generic prompt, what they do is they take the task, they embed it using the same Sentence-BERT" }, { "start": 819.52, "end": 828, "text": " embedding, and they compare it to embeddings of all of the tasks that they have in the data set."
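The action translation just described is, at its core, an embedding nearest-neighbor lookup. A minimal sketch using the sentence-transformers library; the model name and the tiny action list are illustrative assumptions, not the paper's exact setup:

```python
# Snap a free-form LM output onto the nearest admissible action by cosine
# similarity in a Sentence-BERT-style embedding space.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the paper's encoder

# Enumerate the admissible actions once and embed them all.
admissible = ["walk to bathroom", "open fridge", "grab milk", "pour lotion into right hand"]
action_embs = encoder.encode(admissible, convert_to_tensor=True)

def translate(step: str) -> str:
    step_emb = encoder.encode(step, convert_to_tensor=True)
    scores = util.cos_sim(step_emb, action_embs)[0]  # similarity to every admissible action
    return admissible[int(scores.argmax())]

print(translate("squeeze out a glob of lotion"))  # likely "pour lotion into right hand"
```

In the interleaved variant described above, `translate` would be called after every generated step, and the translated action appended to the prompt before the next step is generated.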
}, { "start": 828, "end": 834.64, "text": " And they just pick the closest task in the data set to act as a prompt, which could still transfer" }, { "start": 834.64, "end": 843.36, "text": " some in-context knowledge for the current task. So that is essentially the method. They investigate" }, { "start": 843.36, "end": 853.28, "text": " this, they have an algorithm right here. I formulated it in a rather easy way, but they" }, { "start": 853.28, "end": 858.56, "text": " do not only consider the closest action, they consider actually a waiting of, so in the" }, { "start": 858.56, "end": 865.76, "text": " translation, they consider a waiting between how close is it to an admissible action and how" }, { "start": 865.76, "end": 872.9599999999999, "text": " likely is that action that they output. So they would generate not only one action and then" }, { "start": 872.9599999999999, "end": 876.9599999999999, "text": " translate it, they would actually generate a bunch of variants and they consider each one of them," }, { "start": 876.9599999999999, "end": 881.8399999999999, "text": " like how close is it to an admissible action and also how likely is it. And then they take" }, { "start": 881.84, "end": 889.2800000000001, "text": " the best combination of the two. That is obviously modulated by a hyperparameter." }, { "start": 889.2800000000001, "end": 897.6800000000001, "text": " They have early stopping and all of this kind of stuff. And this results in a neat algorithm." }, { "start": 898.5600000000001, "end": 906.08, "text": " And we're going to talk about these things in a bit and also the results right here. I want to" }, { "start": 906.08, "end": 912.5600000000001, "text": " highlight that if you look at, for example, vanilla GPT-3 has a really low executability," }, { "start": 912.5600000000001, "end": 919.0400000000001, "text": " it does have a high correctness. However, if you look at the translated version, which is" }, { "start": 919.0400000000001, "end": 923.12, "text": " after their improvements, you can see the executability has risen dramatically while" }, { "start": 923.12, "end": 928.8000000000001, "text": " the correctness is a bit lower. Like you get a bit lower in correctness because of the whole" }, { "start": 928.8000000000001, "end": 934.08, "text": " translation procedure and so on. You're mocking with the outputs, humans may not like it as much." }, { "start": 934.08, "end": 938.96, "text": " This is all stuff we're going to touch on in the interview. Just interestingly highlighting that" }, { "start": 939.5200000000001, "end": 946.32, "text": " codecs, like the codecs model seems to be scoring quite well on these tasks. So also the translated" }, { "start": 946.32, "end": 952.96, "text": " codecs is much smaller. However, it scores high, really high. So parameter for parameter," }, { "start": 952.96, "end": 958.88, "text": " the codecs model is actually pretty, pretty good at this, which was a surprise to me. So I think" }, { "start": 958.88, "end": 966.24, "text": " this is an exciting paper. It except as I said, for a fine tuning baseline, it turns out to work" }, { "start": 966.24, "end": 972.96, "text": " completely without any training. It's just evaluation, so to say. And I liked it. And I" }, { "start": 972.96, "end": 977.6, "text": " think this does have applications like getting the knowledge out of these large language models is" }, { "start": 977.6, "end": 984.88, "text": " something we should, you know, be getting better at doing. 
Otherwise, I don't think we make full" }, { "start": 984.88, "end": 989.68, "text": " use of them. All right, so now I want to jump into the interview with Wenlong. I hope you enjoy that" }, { "start": 989.68, "end": 994.88, "text": " as well. Tell me how you like these videos, with the interviews, without the interviews," }, { "start": 994.88, "end": 997.76, "text": " anything you want, in the comments. I'll see you. Bye bye." }, { "start": 1003.28, "end": 1010.24, "text": " Welcome everyone. Today with me here is Wenlong Huang, who is the first author of the paper about" }, { "start": 1010.24, "end": 1016.32, "text": " language models as zero-shot planners, and very, very happy to have you here. Welcome Wenlong." }, { "start": 1016.88, "end": 1020, "text": " Thank you, Yannic. Yeah, super, super happy to be here." }, { "start": 1021.04, "end": 1027.1200000000001, "text": " As I've already told you, this paper is different and I like different papers. And" }, { "start": 1028.08, "end": 1036.56, "text": " it's, it's different in a way that maybe wasn't expected. It seems like every day," }, { "start": 1036.56, "end": 1042.08, "text": " we find new applications for these large language models, and yet another thing that they" }, { "start": 1042.08, "end": 1049.52, "text": " can do here. And when I, when I saw this, I was reminded of a friend of mine who had like" }, { "start": 1049.52, "end": 1055.6, "text": " similar ideas, but it never really materialized. I tried some of this stuff as well, combining" }, { "start": 1055.6, "end": 1060.6399999999999, "text": " large language models with planning, with telling me what to do in the real world. I even made a" }, { "start": 1060.64, "end": 1066.88, "text": " video where GPT-3 told me a recipe and then I cooked the rest, like me and my friend, we cooked" }, { "start": 1066.88, "end": 1074.64, "text": " the recipe and so on. But it seemed like always a bit, a bit out of place, a bit, a bit off, just" }, { "start": 1074.64, "end": 1081.92, "text": " to give you detailed instructions. And when I saw a paper that was really trying to make this work" }, { "start": 1081.92, "end": 1089.68, "text": " in a real environment, I was, I was very happy to see that. And yeah, that's, that is, that is this" }, { "start": 1089.68, "end": 1096.24, "text": " paper. And also, to be said, you have a, you have a stellar board of collaborators right here." }, { "start": 1097.28, "end": 1104.0800000000002, "text": " How, how did this come about? Like, how did you even get to the idea, hey, I could use" }, { "start": 1104.0800000000002, "end": 1110.24, "text": " these language models to do planning. Was it like, did it immediately come to you? Did it sort of" }, { "start": 1110.24, "end": 1117.8400000000001, "text": " build up from some basic idea, or what was the process? So yeah, thanks for the briefing. So I" }, { "start": 1117.84, "end": 1124.1599999999999, "text": " think that actually came out to be really surprising to us as well.
So first we were just" }, { "start": 1124.1599999999999, "end": 1131.4399999999998, "text": " having, when we were just playing around with the large language models on many of the web" }, { "start": 1132, "end": 1138.48, "text": " interfaces, we found that like, actually there is something there, like you said, if you ask it for" }, { "start": 1138.48, "end": 1146.24, "text": " a recipe. Or we actually originally studied, like, whether it can output the steps for making coffee," }, { "start": 1146.24, "end": 1150.96, "text": " et cetera. So we found that like, when the models get large enough, there's actually something there." }, { "start": 1150.96, "end": 1158.56, "text": " And this is the sign of life, I think, for us to kind of go on and investigate how we can make that" }, { "start": 1158.56, "end": 1167.52, "text": " actually useful for, for agents. So we kind of just started from there and actually it came out to be" }, { "start": 1167.52, "end": 1174.32, "text": " pretty surprising. Originally we thought like, maybe we need some training data sets to maybe like" }, { "start": 1174.32, "end": 1180.8, "text": " train something, a translator or something, to actually make it useful. But it turns out like," }, { "start": 1180.8, "end": 1186.08, "text": " but we were really trying to constrain ourselves in the meantime, because we don't want it to be" }, { "start": 1186.08, "end": 1192.3999999999999, "text": " tailored to a specific environment. So we would just want to see, like, just the language model" }, { "start": 1192.3999999999999, "end": 1199.6799999999998, "text": " itself, like how well it can do, how far it can go. So this is what got us in the end." }, { "start": 1199.68, "end": 1205.92, "text": " We just like explored for like two months and then found like you can actually do this without" }, { "start": 1205.92, "end": 1214.4, "text": " any training. And yeah, it's actually truly surprising and actually a really fun project for me as well." }, { "start": 1214.4, "end": 1216.72, "text": " It sounds like fun." }, { "start": 1216.72, "end": 1223.3600000000001, "text": " Yeah, just trying to see whether you can output something like really realistic and really fun." }, { "start": 1223.36, "end": 1230.7199999999998, "text": " Yeah. So you came across this environment right here, this virtual home environment. Was this" }, { "start": 1230.7199999999998, "end": 1236.32, "text": " always the plan, or why did you choose it? Like, there are a million environments, OpenAI" }, { "start": 1236.32, "end": 1246.32, "text": " Gym and these MuJoCo kind of robot simulations. Why was this one particularly useful? Did you" }, { "start": 1246.32, "end": 1250, "text": " immediately think of this one, or how did this come about?" }, { "start": 1250, "end": 1257.6, "text": " Thanks. Yeah. So actually I wasn't doing too much research in this embodied agents area," }, { "start": 1257.6, "end": 1266.4, "text": " especially for these like really high-level tasks. And then I actually went to like Google" }, { "start": 1266.4, "end": 1271.28, "text": " Scholar and then searched for appropriate environments for this. And we found this virtual" }, { "start": 1271.28, "end": 1279.12, "text": " home environment and we really liked it because it actually can model any tasks that we" }, { "start": 1279.12, "end": 1291.6, "text": " can express in terms of this like textual language plan. Like just like a textual plan."
}, { "start": 1291.6, "end": 1297.04, "text": " So and actually there are many other environments as well, but some of them are limited by," }, { "start": 1298.1599999999999, "end": 1303.1999999999998, "text": " I think a lot of people also use Alfred environment. That's a really good environment" }, { "start": 1303.2, "end": 1309.8400000000001, "text": " too. And I think it's a bit more structured there, but the tasks are often come from" }, { "start": 1310.8, "end": 1316.8, "text": " like a template. So it's usually like pick something, pull something. But actually there" }, { "start": 1316.8, "end": 1321.3600000000001, "text": " are a lot of challenges there. I think it's a different set of challenges. And we found like" }, { "start": 1321.3600000000001, "end": 1330.32, "text": " what the virtual home tackles is exactly what we look for because it can model like any task" }, { "start": 1330.32, "end": 1336.8799999999999, "text": " expressed in free form language, especially those like really challenging tasks like people do" }, { "start": 1336.8799999999999, "end": 1345.04, "text": " actually every day, like make breakfast, make tea, make coffee. And then it particularly cares about" }, { "start": 1345.04, "end": 1351.9199999999998, "text": " the common sense constraints in them. So specifically this environment has a set of like" }, { "start": 1352.8, "end": 1359.04, "text": " preconditions and post conditions for each action. So for example, if you want to grab a glass of" }, { "start": 1359.04, "end": 1365.68, "text": " milk from the fridge, you can't just like say go to the fridge and grab glass of milk because you" }, { "start": 1365.68, "end": 1372.08, "text": " first got to open the fridge first and then like preferably you want to close the fridge afterwards." }, { "start": 1372.08, "end": 1378.96, "text": " So it's really this like these constraints I think are really useful and really interesting" }, { "start": 1378.96, "end": 1387.76, "text": " to study whether the language models can handle this. And you've investigated several different" }, { "start": 1387.76, "end": 1392.72, "text": " language models. And just to be clear, this environment, it has this kind of syntax, it has" }, { "start": 1392.72, "end": 1400.24, "text": " very defined things you can do. And somewhere I think you say it's about 50,000 actions that" }, { "start": 1400.24, "end": 1407.04, "text": " are ultimately possible. It's kind of a combination of a bunch of verbs, which are grab, open, go to," }, { "start": 1407.04, "end": 1413.68, "text": " and lift or things like this, and a bunch of objects like kitchen, fridge, and so on. So" }, { "start": 1413.68, "end": 1421.52, "text": " any plan would consist of a sequence of verb object, verb object, like here, walk to kitchen," }, { "start": 1421.52, "end": 1430.72, "text": " open fridge, grab milk. So any planner in this environment would have to output this syntax" }, { "start": 1430.72, "end": 1438.4, "text": " directly. Now you had a plan of not training anything, right? You didn't want to train anything," }, { "start": 1438.4, "end": 1445.2, "text": " you simply wanted to investigate what knowledge is already there in the language models. And you" }, { "start": 1445.2, "end": 1452.88, "text": " came up with kind of a way to translate that. You want to maybe elaborate how do you query these" }, { "start": 1452.88, "end": 1460.96, "text": " language models and how do you make them actually conform to the syntax here?" 
}, { "start": 1460.96, "end": 1468.96, "text": " Of course. Yeah. So the way that Virtual Home expresses these actions are via this" }, { "start": 1468.96, "end": 1478.08, "text": " specific format where you put a square bracket for the action, atomic action, like grab, put open," }, { "start": 1478.08, "end": 1487.76, "text": " and then you put, I think it's a parenthesis or something for the arguments. But the problem is" }, { "start": 1487.76, "end": 1496.16, "text": " we can't just expect language models to handle this because even if we put an example in front," }, { "start": 1496.16, "end": 1502.16, "text": " maybe they can do it, but it's definitely not the way that usually humans produce language." }, { "start": 1502.16, "end": 1509.28, "text": " And after all, these language models are trained on human text. So we decide maybe it's not the" }, { "start": 1509.28, "end": 1516.24, "text": " right way to query these models. Have you ever tried letting them output directly the syntax," }, { "start": 1516.24, "end": 1519.1200000000001, "text": " or was it just like, yeah, it's not going to work anyway?" }, { "start": 1519.1200000000001, "end": 1526.64, "text": " I tried briefly, but it's definitely not thoroughly investigated. And intuition-wise," }, { "start": 1526.64, "end": 1536.08, "text": " I think it's definitely to use natural language. But we did adopt for the most basic approach that" }, { "start": 1536.08, "end": 1544.16, "text": " we can think of, which is just define a straight up template for each atomic action. And actually," }, { "start": 1544.16, "end": 1549.8400000000001, "text": " because these atomic actions are simple enough, just walk, grab, and those things. So" }, { "start": 1550.48, "end": 1557.0400000000002, "text": " this atomic action, I mean, the templates we actually came up with are, I think, actually," }, { "start": 1558, "end": 1563.76, "text": " just in a natural way, people say things. So turn off something, turn off something," }, { "start": 1564.5600000000002, "end": 1571.0400000000002, "text": " and then add some words in between, like in, on, on top of, et cetera." }, { "start": 1571.04, "end": 1580, "text": " Yeah. And then you just query these models, and you have multiple ways of evaluating this," }, { "start": 1580, "end": 1585.52, "text": " right? You care about two things, you care about correctness, and you care about executability." }, { "start": 1586.08, "end": 1595.04, "text": " And in at least, so you also make use of humans. How did you design, like what was your thinking" }, { "start": 1595.04, "end": 1600.1599999999999, "text": " behind designing the evaluation? Yeah. So actually, it came out to be really" }, { "start": 1600.16, "end": 1606.64, "text": " challenging to evaluate these things. Like I said, so like this task art, because they're" }, { "start": 1606.64, "end": 1612, "text": " expressed in free form language. So that means they're really open-ended. So it might be" }, { "start": 1612, "end": 1617.28, "text": " deterministic, whether like if you want to grab a glass of milk, you just want to look in the end," }, { "start": 1617.28, "end": 1622.64, "text": " whether you have a glass of milk. But if you really think about it, if we don't want to constrain" }, { "start": 1623.3600000000001, "end": 1629.52, "text": " anything in the task that we want to do, like making breakfast, like what is the correct way" }, { "start": 1629.52, "end": 1635.36, "text": " to make breakfast? Everyone has different preferences. 
So it's hard for us. Actually," }, { "start": 1636.24, "end": 1643.44, "text": " I think it's still a challenge in this sort of task to really determine the correctness," }, { "start": 1643.44, "end": 1650.24, "text": " I'm sorry, the success rate for each task. So you can't really tell if a task is really" }, { "start": 1650.24, "end": 1658.6399999999999, "text": " successful, depending on how open-ended it is. So we decided that, okay, it's hard to" }, { "start": 1658.64, "end": 1666.24, "text": " computationally produce a metric for a success rate, but as humans, we can definitely tell" }, { "start": 1666.24, "end": 1673.92, "text": " if it's making something semantically meaningful. So we'll just use, in part, human evaluations" }, { "start": 1673.92, "end": 1679.92, "text": " to do this. But we don't want to entirely rely on humans, because as you can tell, for the" }, { "start": 1680.96, "end": 1686.4, "text": " tasks that like, for the action plans that these language models generate, they're so realistic" }, { "start": 1686.4, "end": 1694.88, "text": " that they can even fool many humans. So you can't just entirely rely on" }, { "start": 1694.88, "end": 1703.6000000000001, "text": " humans to say if it's successful. So we also use this metric, executability, which is also used in" }, { "start": 1704.24, "end": 1715.0400000000002, "text": " past papers that use Virtual Home. So we just use this metric as well to basically determine" }, { "start": 1715.04, "end": 1721.28, "text": " whether the plan satisfies the common sense constraints in this environment, namely just" }, { "start": 1722.1599999999999, "end": 1726.8799999999999, "text": " whether you make sure to open the fridge before grabbing something from it." }, { "start": 1728.08, "end": 1733.36, "text": " It's interesting, because when the humans rated it, the humans would also skip a bunch of steps." }, { "start": 1734.1599999999999, "end": 1738.72, "text": " If you tell a human, go to the fridge and grab a glass of milk, the human will go like, oh yeah," }, { "start": 1738.72, "end": 1746, "text": " of course. Which is one of my, maybe this is jumping ahead a little bit, but one of the" }, { "start": 1746, "end": 1752.96, "text": " questions I had most when I read this was, there is a level of specificity that is required" }, { "start": 1752.96, "end": 1758.24, "text": " right here, which is kind of ambiguous. You have a high-level description, which is like make" }, { "start": 1758.24, "end": 1763.52, "text": " breakfast, and then you have a bunch of steps which you need to follow. And sure, these steps" }, { "start": 1763.52, "end": 1768.6399999999999, "text": " correspond to actions in the environment, so they're kind of given by that, but the language" }, { "start": 1768.6399999999999, "end": 1773.92, "text": " model doesn't know that. The language model just knows, I need to produce a plan. So how is the" }, { "start": 1773.92, "end": 1784.56, "text": " language model, why do we expect the language model to figure out that it needs to say open" }, { "start": 1784.56, "end": 1790.56, "text": " the fridge before you get a glass, but for example it doesn't need to say put one foot in front of" }, { "start": 1790.56, "end": 1798.8, "text": " the other foot in order to walk? So did you have any insights or concerns with, like, there seems" }, { "start": 1798.8, "end": 1804.72, "text": " to be like a very specific level of specificity of these plans?
Yeah, so that's a really good" }, { "start": 1804.72, "end": 1811.12, "text": " question. Actually this granularity actually comes from the dataset, or the virtual home" }, { "start": 1811.12, "end": 1818.8, "text": " environment itself, because we essentially follow the format of the virtual home environment," }, { "start": 1818.8, "end": 1827.9199999999998, "text": " and also this dataset they collected from humans on how to do these really human activity tasks." }, { "start": 1827.9199999999998, "end": 1837.6, "text": " So the way they collect, they build this environment, is they first ask many humans to come up with a" }, { "start": 1837.6, "end": 1843.9199999999998, "text": " set of tasks that they do in an everyday household, and then they ask a different group of humans" }, { "start": 1843.92, "end": 1854.64, "text": " to come up with a detailed plan that can drive a robot to perform these tasks. And it's after that" }, { "start": 1854.64, "end": 1860.8000000000002, "text": " that they build this environment based on the verbs used by those humans. So you can think of" }, { "start": 1860.8000000000002, "end": 1869.6000000000001, "text": " this environment as really built on top of what humans say. It's not the developers just saying," }, { "start": 1869.6, "end": 1876.3999999999999, "text": " okay, we want this granularity, we want this like walk, grab, and those, etc. They actually ask" }, { "start": 1876.3999999999999, "end": 1884.48, "text": " these humans to give those verbs and then build those actions according to those verbs. And" }, { "start": 1884.48, "end": 1891.76, "text": " they did make sure, for each of the verbs, to develop a set of common sense constraints, which" }, { "start": 1891.76, "end": 1900.08, "text": " completely makes sense. And I think they're actually like reasonably exhaustive for those" }, { "start": 1900.08, "end": 1906, "text": " actions. So if you want to grab something, you definitely need to make sure the thing you grab" }, { "start": 1906, "end": 1912.96, "text": " is not within a closed container, for example. So in this case, the fridge is a container and" }, { "start": 1912.96, "end": 1919.6, "text": " it has this attribute of being open or being closed. So they internally keep track of the" }, { "start": 1919.6, "end": 1927.52, "text": " attributes for each of the objects, and then make sure that if you do something like this," }, { "start": 1927.52, "end": 1936.1599999999999, "text": " you don't violate the common sense constraints. So to answer your question, this granularity" }, { "start": 1936.1599999999999, "end": 1942.8799999999999, "text": " really depends on the humans. And I think this is where language models really shine, because" }, { "start": 1942.88, "end": 1949.44, "text": " essentially language models are trained on human-produced text. So my hypothesis, although this" }, { "start": 1949.44, "end": 1954.72, "text": " is definitely not something we thoroughly tested, my hypothesis is that because it's trained on" }, { "start": 1954.72, "end": 1962.5600000000002, "text": " human-produced text, and humans after all produce these actions, if you do it carefully enough," }, { "start": 1962.5600000000002, "end": 1970.8000000000002, "text": " and then use some techniques to properly translate them, or doing something else, you can essentially
}, { "start": 1976.3999999999999, "end": 1983.9199999999998, "text": " Yeah, I mean, you would imagine that sort of the human-ness of how the environment was built" }, { "start": 1983.9199999999998, "end": 1989.36, "text": " would also be present a little bit in these language models, which makes sense. I don't have" }, { "start": 1989.36, "end": 1994.96, "text": " a better idea of how to build an environment like this. So it seems pretty reasonable." }, { "start": 1994.96, "end": 2004.24, "text": " Yeah, it's actually not to be really interesting to me because it's super hard for me if I were" }, { "start": 2004.24, "end": 2012.32, "text": " to develop this environment, how would you even animate all of these really human tasks" }, { "start": 2013.68, "end": 2019.8400000000001, "text": " even just in a household setting? It's super difficult. And I think they did a really good job" }, { "start": 2019.84, "end": 2026.48, "text": " here. And then I think this is also what makes language models particularly useful for this" }, { "start": 2026.48, "end": 2031.36, "text": " task because these are basically just human tasks and language models are really good at" }, { "start": 2032.48, "end": 2039.04, "text": " mimicking humans. Yeah. Yeah. So on the left here, we see a bunch of models that you've evaluated" }, { "start": 2039.04, "end": 2046.24, "text": " right here. So again, executability is sort of how, if it matches the syntax of the environment," }, { "start": 2046.24, "end": 2053.2, "text": " if I can map it to that, and also, I guess, if it violates any of these common sense constraints." }, { "start": 2054.4, "end": 2059.76, "text": " So just like how executable is the plan in the environment, no matter whether it's the wrong" }, { "start": 2059.76, "end": 2065.76, "text": " thing, right? And that comes in a second. And correctness is a thing that is rated by human" }, { "start": 2065.76, "end": 2070.96, "text": " annotators. They look at the plan that was produced and they just, from their own intuition, are like," }, { "start": 2070.96, "end": 2077.68, "text": " well, is this a good plan to make breakfast? Yes or no. And we clearly see there's this downward" }, { "start": 2077.68, "end": 2083.28, "text": " trend. If we exclude the models on the right, there is this trend line here where the larger" }, { "start": 2083.28, "end": 2088.56, "text": " models, they seem to produce more correct plans, which means plans that the humans like more," }, { "start": 2088.56, "end": 2097.68, "text": " but they are less executable. Whereas the smaller models, they are less correct, which we can," }, { "start": 2097.68, "end": 2103.2, "text": " that's correct. I would have expected that, but they're more executable. And you've noticed in" }, { "start": 2103.2, "end": 2108.7999999999997, "text": " the paper that very often they just produce plans that have nothing to do with the task description." }, { "start": 2108.7999999999997, "end": 2114.56, "text": " They would just produce like a plan that is according to the syntax of the examples that" }, { "start": 2114.56, "end": 2120.7999999999997, "text": " you give in the prompt, right? But how can you explain that? Like even on the top here, like" }, { "start": 2120.8, "end": 2128.88, "text": " the large models, it's even better than humans at correctness. So humans rating other humans" }, { "start": 2128.88, "end": 2135.84, "text": " think that GPT-3 produces more correct plans. Why is it so bad at executability?" 
}, { "start": 2135.84, "end": 2144.5600000000004, "text": " Yeah. So there are actually two questions that I think you raised. One is why this smaller models," }, { "start": 2144.56, "end": 2152.4, "text": " like when I say smaller, it's actually still pretty large, the largest GPT-2 model. So why" }, { "start": 2152.4, "end": 2159.2, "text": " do they produce more executable plans? And the second question is why the GPT-3," }, { "start": 2159.2, "end": 2164.08, "text": " the largest GPT-3 model is actually better than human. So to answer the first question," }, { "start": 2166, "end": 2173.36, "text": " I think that's because we did find some failure modes here for smaller models. I think the two" }, { "start": 2173.36, "end": 2182.6400000000003, "text": " most prominent ones are first, it frequently tries to like repeat the given example. For example," }, { "start": 2182.6400000000003, "end": 2188.88, "text": " you give it like how to browse internet. You said like go out to the computer and type on the" }, { "start": 2188.88, "end": 2196, "text": " keyboard, et cetera. And then you ask it to brush teeth. It still goes to the computer and then type" }, { "start": 2196, "end": 2202.4, "text": " out on the keyboard. So it's totally nothing like sensible here. And then the second source of error" }, { "start": 2202.4, "end": 2209.52, "text": " is sometimes it just outputs really short plans. If you say like sleep task, go to sleep, it's just" }, { "start": 2209.52, "end": 2219.28, "text": " like go to the bathroom and just stop. So that's this right here, brush teeth. It's just like" }, { "start": 2219.28, "end": 2227.36, "text": " go to bathroom. Yeah, exactly. So when these plans are short enough, even though it can be" }, { "start": 2227.36, "end": 2232.6400000000003, "text": " executed, if you just say like walk to bathroom, walk to the bathroom, just one single action," }, { "start": 2233.6, "end": 2240.6400000000003, "text": " for walk, there is not much common sense constraints there. So you can totally imagine" }, { "start": 2240.6400000000003, "end": 2247.6, "text": " it's super executable. But if you present them to humans, of course, humans will spot this and then" }, { "start": 2247.6, "end": 2253.1200000000003, "text": " say, okay, this is not correct. Because when we do human evaluations, we're trying to make it simple" }, { "start": 2253.12, "end": 2261.3599999999997, "text": " so that the error here is not too big, because we don't ask hundreds of humans to evaluate this." }, { "start": 2261.3599999999997, "end": 2271.8399999999997, "text": " We only got to ask 10 evaluators in this case. So that's why this smaller models are now really" }, { "start": 2271.8399999999997, "end": 2280.24, "text": " good at escalability. And the second question that you ask is why these larger models are actually" }, { "start": 2280.24, "end": 2286.72, "text": " better than humans. So actually, this is now the completely fair comparison if you just look at" }, { "start": 2286.72, "end": 2293.3599999999997, "text": " one axis. So all the results here, we look at from two axes that we care about. So one is the" }, { "start": 2294.24, "end": 2299.68, "text": " semantic correctness, which is evaluated by humans. And the second is the executability." }, { "start": 2299.68, "end": 2306.08, "text": " So this human plans that we use are from this data set that virtual home developers" }, { "start": 2306.08, "end": 2314.96, "text": " cross source from Amazon Turkers. 
So these plans, they make sure that these are executable plans." }, { "start": 2314.96, "end": 2323.44, "text": " So which means that they have one hand here. They'd be over here." }, { "start": 2323.44, "end": 2329.92, "text": " Yeah, but we don't want to put a spot right there on the right, because it's hard to see," }, { "start": 2329.92, "end": 2336.96, "text": " because humans are a big baseline and reference here. It's not a baseline that we're trying to" }, { "start": 2336.96, "end": 2344.16, "text": " beat. Of course, GPT-3 is not there yet in terms of at the same time outputting correct action plans" }, { "start": 2344.16, "end": 2350.64, "text": " and semantically correct action plans, and also being able to really ground them in the environment." }, { "start": 2350.64, "end": 2360.16, "text": " But using these two axes, we can really see, for example, which axis is the place that," }, { "start": 2360.16, "end": 2366.64, "text": " as a community, that we may want to work more on to get it better to get the human levels." }, { "start": 2366.64, "end": 2373.2799999999997, "text": " And with this paper, we find this result actually a bit interesting to us." }, { "start": 2373.92, "end": 2380, "text": " Is that for these larger models, in terms of semantic correctness, you don't need to worry" }, { "start": 2380, "end": 2388.48, "text": " too much about it. It's kind of already there if you do it, extract them. But the real question is," }, { "start": 2388.48, "end": 2393.2, "text": " how do we make them executable for agents that we care about?" }, { "start": 2393.2, "end": 2399.92, "text": " And that's exactly what you do in the meat of the paper. And the result are these translated" }, { "start": 2399.92, "end": 2406.72, "text": " models right here that, notably, they do drop a little bit in terms of their correctness as" }, { "start": 2406.72, "end": 2414.3199999999997, "text": " rated by humans, but they gain massively in executability. And this is the result of a bunch" }, { "start": 2414.3199999999997, "end": 2419.4399999999996, "text": " of different ingredients, like three main ingredients, as far as I could tell. You quickly" }, { "start": 2419.4399999999996, "end": 2428.48, "text": " want to go tell what the ingredients are to make whatever these models output into something that..." }, { "start": 2428.48, "end": 2434.3199999999997, "text": " I mean, the virtual home is maybe a test bed, right? I don't see this paper being about" }, { "start": 2434.32, "end": 2442, "text": " virtual home. It's more like, here is a model that outputs something, yet I need the output in some" }, { "start": 2442, "end": 2449.6000000000004, "text": " other form, right? And this is a very general problem, as many applications. And if we could" }, { "start": 2449.6000000000004, "end": 2456.7200000000003, "text": " solve that bridge, that technically is a big gain. That's exactly what you do. So how did you go" }, { "start": 2456.7200000000003, "end": 2463.44, "text": " about this? Yeah. So actually, I just want to make sure that actually this paper just presents" }, { "start": 2463.44, "end": 2470.8, "text": " a really preliminary step. I don't think it solves anything particularly. I mean, it does," }, { "start": 2470.8, "end": 2477.44, "text": " like if this problem... Sure, but it's a big step, I believe. I mean, the executability I have raises" }, { "start": 2478.64, "end": 2484.8, "text": " pretty high. I didn't want to oversell you, but also not undersell you, certainly." 
}, { "start": 2484.8, "end": 2494.96, "text": " Yeah. But to answer the question, so we actually found there are three ingredients, but" }, { "start": 2494.96, "end": 2502.7200000000003, "text": " central to this is one really simple technique that we found that's the most useful, which is" }, { "start": 2502.7200000000003, "end": 2510.32, "text": " action translation. So because in this virtual home environment, the actions that it supports are" }, { "start": 2510.32, "end": 2516.48, "text": " a limited set. I mean, it's not small, but it's something that we can definitely enumerate with" }, { "start": 2516.48, "end": 2525.52, "text": " our computational hardware and in a really quick manner. So like just one-tenth of a second or" }, { "start": 2525.52, "end": 2531.36, "text": " something like that. So let's say if we can enumerate all the actions that are supported" }, { "start": 2531.36, "end": 2538.6400000000003, "text": " by the environment, then the question now becomes, how do we translate this really" }, { "start": 2538.64, "end": 2544.16, "text": " sensible action plans generated by language models, but not really executable plans?" }, { "start": 2544.7999999999997, "end": 2550.8799999999997, "text": " How can we translate that into those actions supported by environment? Or if you want to" }, { "start": 2550.8799999999997, "end": 2557.2799999999997, "text": " deploy something in the real world, let's say your robot only supports 10 actions. How do you" }, { "start": 2558, "end": 2564.24, "text": " map those tasks into the 10 actions that the robot supports? So what we found is that you first need" }, { "start": 2564.24, "end": 2571.04, "text": " to enumerate all the actions. And then we found that you can again leverage the world knowledge" }, { "start": 2571.04, "end": 2578.9599999999996, "text": " in this language models by using another language model, namely here we use Roberta, which is a" }, { "start": 2578.9599999999996, "end": 2585.3599999999997, "text": " language model really similar to BERT. And it's a different language model because it essentially" }, { "start": 2585.3599999999997, "end": 2592.3999999999996, "text": " is a mass language model. So it's really good at outputting a useful embedding. It's" }, { "start": 2592.4, "end": 2600.64, "text": " really good in terms of about the semantic meaning for that sentence. So what we do is that we" }, { "start": 2600.64, "end": 2608, "text": " take the sentence output by GPT-3 or codecs, and then we just compare that against all the possible" }, { "start": 2609.04, "end": 2613.28, "text": " admissible actions, allowed actions by the environments. And then we found the" }, { "start": 2613.28, "end": 2620.48, "text": " most similar one in terms of this distance in the embedding space. We actually use just" }, { "start": 2620.48, "end": 2628.8, "text": " cosine distance and found that to work decently well. Yeah, there's an entire space somewhere," }, { "start": 2628.8, "end": 2633.68, "text": " and you just place all the actions. I guess you can even pre-compute those. You can pre-compute" }, { "start": 2633.68, "end": 2639.76, "text": " the embedding of all possible actions there. And once my language model outputs anything at all," }, { "start": 2639.76, "end": 2644.96, "text": " all I need to do is ship it through the Roberta model, get its embedding, put it somewhere," }, { "start": 2644.96, "end": 2651.68, "text": " get the nearest neighbor. And that's my translated action. 
So here we have an example that would" }, { "start": 2651.68, "end": 2660.7200000000003, "text": " translate, like, squeeze out a glob of lotion into pour lotion into right hand. So it would map" }, { "start": 2661.76, "end": 2669.52, "text": " the action into pour, which would be the verb, lotion, the object, and right hand, also one of the objects." }, { "start": 2669.52, "end": 2679.36, "text": " So maybe there are two arguments to pour. It seems very simple, but I was at a talk" }, { "start": 2679.36, "end": 2687.04, "text": " by the people who made the first version of the... In Gmail, you always have these three options to" }, { "start": 2687.04, "end": 2695.2, "text": " respond, like the quick options to respond. And I think the first, I'm not sure how it is done now," }, { "start": 2695.2, "end": 2702.3999999999996, "text": " but the first version of this, we were like, wow, this is cool. It actually takes into account the" }, { "start": 2702.3999999999996, "end": 2708, "text": " email message that was there. We always thought it was kind of like a language model, a generative" }, { "start": 2708, "end": 2713.4399999999996, "text": " model somewhere. So I went to a talk and they were just like, no, we just have a big list of responses." }, { "start": 2713.4399999999996, "end": 2719.2799999999997, "text": " We just classify, right? Whatever. We just take your message, right? And we just put it through" }, { "start": 2719.28, "end": 2725.52, "text": " a model and then we just classify it into this big, big bucket of possible answers. So I mean, this," }, { "start": 2725.52, "end": 2734.1600000000003, "text": " even though it is simple, it's a very powerful method. And that being said, you don't even" }, { "start": 2734.1600000000003, "end": 2739.52, "text": " train this. You take an off-the-shelf embedding model and you compute nearest neighbors, and it" }, { "start": 2739.52, "end": 2744.96, "text": " does turn out quite well. You do, however, you talk about this in the paper, there are a bunch of" }, { "start": 2744.96, "end": 2752.32, "text": " problems. And one of the problems I see is whenever a step contains like multiple steps, right? Is that," }, { "start": 2753.44, "end": 2758.48, "text": " like, is that a big, have you found this to be a big problem? Because this just maps one action to" }, { "start": 2758.48, "end": 2765.44, "text": " one other action. But if it's like, you know, open the fridge and take a glass of milk, then I have" }, { "start": 2765.44, "end": 2771.6, "text": " essentially no way of translating that into an admissible sequence. Yeah, that's a, that's a good" }, { "start": 2771.6, "end": 2778.88, "text": " question. And I think that's one of the main errors there. Like, this RoBERTa model that we use," }, { "start": 2778.88, "end": 2785.04, "text": " it's actually a Sentence-RoBERTa model, because it's trained with a different objective, such that" }, { "start": 2785.04, "end": 2792.24, "text": " you can actually calculate cosine distance between the embeddings they generate." }, { "start": 2792.24, "end": 2801.68, "text": " So, like, we found it's pretty difficult to map a compounded action, like you said, like" }, { "start": 2801.68, "end": 2809.9199999999996, "text": " two actions in one sentence, into one admissible action. But this is partly mitigated by how you" }, { "start": 2809.9199999999996, "end": 2818.3999999999996, "text": " tune the temperature, the sampling parameter, just the temperature, for the GPT-3 or Codex models."
}, { "start": 2818.4, "end": 2825.6, "text": " Because we found that if you do increase the temperature, then it tends to output something" }, { "start": 2825.6, "end": 2835.12, "text": " more verbally expressive answers for each step. So that means it's harder to translate. And we," }, { "start": 2835.84, "end": 2842.32, "text": " if you, if you try like all this, like different settings, we did, in the end, we found like," }, { "start": 2842.32, "end": 2849.04, "text": " usually you want to use like a lower temperature than what people mostly use for language generation," }, { "start": 2849.04, "end": 2856.7200000000003, "text": " for example. So that like each action is like small enough and succinct enough. And then," }, { "start": 2856.7200000000003, "end": 2862.6400000000003, "text": " and then after we translate this action, so that it's easier for this bird model," }, { "start": 2862.6400000000003, "end": 2868.8, "text": " Roberta model to translate. And yeah, something I forgot to mention, like after we got this" }, { "start": 2868.8, "end": 2874.8, "text": " translated action, we found that it's still useful to put that back to the original prompt," }, { "start": 2874.8, "end": 2880.8, "text": " put the translated action back instead of like the original action so that you can add the GPT-3 and" }, { "start": 2880.8, "end": 2889.44, "text": " codex model to reason, like how am I going to do based on this like action already performed?" }, { "start": 2890.6400000000003, "end": 2895.1200000000003, "text": " So yeah, like you said, like you pointed, this is the third sub figure here." }, { "start": 2895.12, "end": 2900.7999999999997, "text": " So we would take instead of instead of generating the entire plan at once, we just generate" }, { "start": 2900.7999999999997, "end": 2907.3599999999997, "text": " one action, then we translate it. And then we substitute essentially whatever GPT-3 output" }, { "start": 2907.3599999999997, "end": 2913.92, "text": " with whatever the translated thing is. And then based on that, create the next action. It makes" }, { "start": 2913.92, "end": 2921.3599999999997, "text": " sense because you it's like almost like a guiding, like a bit of a guardrail for, for the language" }, { "start": 2921.36, "end": 2928.1600000000003, "text": " model. Instead, if you were to let it generate all at once, and then you translate each action" }, { "start": 2928.1600000000003, "end": 2934.32, "text": " individually, they almost like lose connection to each other, right? So this, this here might mitigate" }, { "start": 2934.32, "end": 2939.28, "text": " some of this, this stuff ready, if I have a compound action, like go to the fridge and grab a glass," }, { "start": 2939.28, "end": 2946.7200000000003, "text": " and the closest, I hope that the closest sentence is to go to fridge, right? The language model might" }, { "start": 2946.72, "end": 2953.3599999999997, "text": " still recover and recognize, aha, I haven't, you know, grabbed, haven't grabbed a glass yet. So that" }, { "start": 2953.3599999999997, "end": 2958.8799999999997, "text": " is, so these are improvements one and two. And then the third, the third thing you found that really" }, { "start": 2958.8799999999997, "end": 2966.3999999999996, "text": " helps is the prompt up here. So the priming, which I think in GPT-3, it's very common to have these" }, { "start": 2966.3999999999996, "end": 2973.52, "text": " priming prompts to tell the model what kind of stuff you, you expect as an output. 
I was surprised" }, { "start": 2973.52, "end": 2981.52, "text": " to see that you only have one priming prompt. Whereas in general, people put more than one," }, { "start": 2981.52, "end": 2986.64, "text": " usually people put like three or something like this. Is there a particular reason why you used" }, { "start": 2986.64, "end": 2994.16, "text": " just one? There is actually not a particular reason. I actually found like, I mean, in the beginning," }, { "start": 2994.16, "end": 3001.12, "text": " we were, we know that we have this data set, right? And then we, we found, originally, we actually" }, { "start": 3001.12, "end": 3005.68, "text": " tried to train something to achieve this, but in the end, we found that like, we don't even need" }, { "start": 3005.68, "end": 3013.04, "text": " to train something. And like, now the question becomes like, like, can you even leverage this" }, { "start": 3013.04, "end": 3019.7599999999998, "text": " data set to some extent to make it useful? Of course, this is something like additional, I mean," }, { "start": 3020.4, "end": 3026.48, "text": " it would definitely be better without any, any of this. But if you have this data set, you can" }, { "start": 3026.48, "end": 3034.08, "text": " actually found like this most similar example to the query task here. For example, like this is" }, { "start": 3034.08, "end": 3041.36, "text": " apply lotion. So like, shape, task shape is determined to be most similar. Again, judged by" }, { "start": 3041.36, "end": 3048.48, "text": " this Roberto model using the same technique. Yeah. So I think that that's the, that's the main" }, { "start": 3048.48, "end": 3053.52, "text": " motivation for using this, but we didn't thoroughly investigate it, like how you structure the" }, { "start": 3053.52, "end": 3059.84, "text": " prompts, whether you add like multiple things there and then, or you change the template here," }, { "start": 3059.84, "end": 3065.6, "text": " because I just defined this template from day one, like task something, step one, something," }, { "start": 3065.6, "end": 3069.52, "text": " two something, maybe there is a better template. Maybe you want to add some instruction there to" }, { "start": 3069.52, "end": 3076.16, "text": " make it better. And so I like, I mean, this is definitely possible and we don't investigate them" }, { "start": 3076.16, "end": 3082.08, "text": " here because we don't just want to get the best performance out of this. We want to get the best" }, { "start": 3082.08, "end": 3087.2799999999997, "text": " performance out of this. We want to show people like, this is something possible and it's really" }, { "start": 3087.2799999999997, "end": 3096.48, "text": " interesting to us. So that's why we ended up like, like just using the most simple technique here." }, { "start": 3096.48, "end": 3102.16, "text": " Yeah. And to answer your question, why we don't put multiple things there, I think one important" }, { "start": 3102.16, "end": 3111.04, "text": " reason is like, because this example plans that we put in front are produced by humans. And this is" }, { "start": 3111.04, "end": 3119.2799999999997, "text": " because due to space constraint, I'm using an oversimplified version in this figure specifically," }, { "start": 3119.2799999999997, "end": 3128.32, "text": " but in practice, these plans are actually pretty long. So, and they actually already take up a lot" }, { "start": 3128.32, "end": 3136.4, "text": " of space in the prompt. 
So if you put more than one, sometimes it gets too long. And I mean," }, { "start": 3136.4, "end": 3142.56, "text": " it's maybe something handleable by larger models, but we just opt for the" }, { "start": 3142.56, "end": 3147.36, "text": " most simple case. And I actually read this, like, there's a recent paper investigating why" }, { "start": 3148.2400000000002, "end": 3155.6800000000003, "text": " in-context learning works; they frame this as an implicit Bayesian inference problem. And they did" }, { "start": 3155.6800000000003, "end": 3163.76, "text": " come to a conclusion that the longer the prompt, if I remember correctly, like, it helps the model." }, { "start": 3163.76, "end": 3170.88, "text": " So, in this way, you kind of like trade off the number of examples you put and the length of each" }, { "start": 3170.88, "end": 3178.7200000000003, "text": " example. So in those cases, I think you mentioned, many people put many examples before the query." }, { "start": 3179.44, "end": 3188.32, "text": " Those are usually the cases where the tasks they care about are like smaller. So for example, like" }, { "start": 3188.32, "end": 3195.52, "text": " you want to ask where Einstein was born, then like this is just a sentence. So you probably want" }, { "start": 3195.52, "end": 3201.92, "text": " to put like more than one sentence there. But in this case, our case, it's an extensive" }, { "start": 3201.92, "end": 3208.0800000000004, "text": " action plan. So it's already pretty lengthy and we don't want to go too crazy over here." }, { "start": 3209.84, "end": 3216.88, "text": " I mean, it's, yeah. Sorry, the recording has stopped on the screen side, but we can still see it." }, { "start": 3216.88, "end": 3225.92, "text": " Okay. Yeah. So yeah, I was quite interested in the sense of the prompt structuring," }, { "start": 3225.92, "end": 3232, "text": " because I know that can also make a big difference. But I also like the sort of approach of not having" }, { "start": 3232, "end": 3241.28, "text": " too many moving parts in one single thing, because it makes things complicated. And for many papers," }, { "start": 3241.28, "end": 3248.88, "text": " it makes you wonder, like, what was exactly the thing that gave the improvement here. Now you" }, { "start": 3248.88, "end": 3255.6000000000004, "text": " do very good ablations of all of these different improvements, which I really liked. And you showed" }, { "start": 3255.6000000000004, "end": 3261.6800000000003, "text": " that kind of the translation is the main part right here, although the other things certainly" }, { "start": 3261.6800000000003, "end": 3267.1200000000003, "text": " also help. Have you ever, so it reminds me a bit of this, you know, this RETRO model," }, { "start": 3267.12, "end": 3272.16, "text": " these language models that retrieve from the internet as they produce text. It reminds a" }, { "start": 3272.16, "end": 3281.7599999999998, "text": " little bit of this, right, in that you produce, you go and retrieve the closest samples in the" }, { "start": 3281.7599999999998, "end": 3290.08, "text": " data set as you produce the text. Yeah, I think this combination of retrieval and generation" }, { "start": 3290.08, "end": 3297.2, "text": " is picking up steam. And it looks pretty interesting.
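The example-selection step discussed above is itself a small retrieval system. A sketch, again with the sentence-transformers library; the model name and the toy dataset are assumptions:

```python
# Retrieve the dataset task whose embedding is closest to the query task, and
# use its human-written plan as the in-context example.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

dataset = {  # task name -> human-written example plan (toy stand-ins)
    "Shave": ["Walk to bathroom", "Grab razor", "Apply shaving cream"],
    "Brush teeth": ["Walk to bathroom", "Grab toothbrush", "Put toothbrush in mouth"],
}
task_names = list(dataset)
task_embs = encoder.encode(task_names, convert_to_tensor=True)

def closest_example(query_task: str):
    q = encoder.encode(query_task, convert_to_tensor=True)
    best = int(util.cos_sim(q, task_embs)[0].argmax())
    return task_names[best], dataset[task_names[best]]

print(closest_example("Apply lotion"))  # plausibly retrieves the "Shave" example
```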
My question is a little bit," }, { "start": 3297.2, "end": 3304.72, "text": " have you tried also, because essentially, you now rely on this translation procedure to produce" }, { "start": 3304.72, "end": 3312.24, "text": " the correct actions. Have you tried any way to like let the model know what the possible actions" }, { "start": 3312.24, "end": 3320.16, "text": " are? Like something like, you know, I can imagine maybe I, you know, I ask the model first, and then" }, { "start": 3320.16, "end": 3326.4799999999996, "text": " I get maybe the five closest actions or the 10 closest actions in embedding space. And then I" }, { "start": 3326.4799999999996, "end": 3332, "text": " somehow put these in the prompt here, like, you know, in between, you know, what am I going to" }, { "start": 3332, "end": 3338.8799999999997, "text": " do next? Is it this or this or this or this, right? And then the model could, maybe I could prime the" }, { "start": 3338.88, "end": 3348, "text": " model to output one of them. And, you know, is there, did you try any way of telling the model" }, { "start": 3348, "end": 3352.7200000000003, "text": " more what's even possible in the environment? Because right now you're essentially relying on" }, { "start": 3352.7200000000003, "end": 3358.48, "text": " on just the language model itself. Yeah, that's a really good question, too. So like, we actually" }, { "start": 3358.48, "end": 3364, "text": " didn't try the specific thing that you talk about, like generate a bunch of possible actions and then" }, { "start": 3364, "end": 3371.68, "text": " ask the model again, which of these are the best. But we did try something similar, which is" }, { "start": 3372.72, "end": 3379.36, "text": " like Beam search. So essentially in Beam search, you look ahead to see like what the outcomes are," }, { "start": 3380.16, "end": 3389.6, "text": " are like having in the end get the highest likelihood. So we did try to constrain the" }, { "start": 3389.6, "end": 3397.2, "text": " strain the vocabulary that can be used in the Beam search. But this is only conducted on smaller" }, { "start": 3397.2, "end": 3404.7999999999997, "text": " models, because obviously the GBT-3 and codex models are now open to fully open to public. So" }, { "start": 3404.7999999999997, "end": 3409.68, "text": " we can't, we don't really have full access to different features. Like," }, { "start": 3410.7999999999997, "end": 3416.96, "text": " you can't restrict the vocabulary dynamically. Yes. So I've only done this on smaller mode," }, { "start": 3416.96, "end": 3424, "text": " relatively smaller models like the GBT-Neo. And then I think I might have tried on GBT-J as well," }, { "start": 3424, "end": 3429.52, "text": " which is a 6 billion parameter model. And it actually turns out that they don't do really" }, { "start": 3429.52, "end": 3434.48, "text": " well with if you really just constrain the vocabulary that way. And yeah, specifically" }, { "start": 3434.48, "end": 3441.36, "text": " just the Beam search constraining the vocabulary can generate. But so my hypothesis, this is now" }, { "start": 3441.36, "end": 3447.52, "text": " thoroughly tested because it's now invested on larger models as well. But my intuition why it" }, { "start": 3447.52, "end": 3454.6400000000003, "text": " doesn't work so well is that this language models are really trained on human text. So it really," }, { "start": 3456.32, "end": 3463.92, "text": " they're really used to how humans speak a certain language in this case English. 
So like people" }, { "start": 3463.92, "end": 3470.32, "text": " don't speak things in this way, step one, something, two, something, step three, something. So that's why" }, { "start": 3470.32, "end": 3477.76, "text": " if you really constrain the models this way, a lot of the world knowledge encoded in these models are" }, { "start": 3478.6400000000003, "end": 3485.52, "text": " lost. So basically, and personally, just a personal opinion, I don't think these models are doing" }, { "start": 3486.8, "end": 3492.88, "text": " super intelligent reasoning here. It's basically just doing kind of retrieving what's" }, { "start": 3492.88, "end": 3501.6800000000003, "text": " what is trained on. So, retrieving this large scale text. So if you want to retrieve better," }, { "start": 3501.6800000000003, "end": 3509.04, "text": " you better adopt the same way that humans speak a language. So like if you don't constrain the" }, { "start": 3509.04, "end": 3514.96, "text": " vocabulary, you can get the most out of a language model. And you can really tell if you adjust the" }, { "start": 3514.96, "end": 3522, "text": " temperature. Like if you go different temperature, they can tell you like different levels of things" }, { "start": 3522, "end": 3527.44, "text": " and they can be really realistic. But if you really constrain it, a lot of this knowledge is lost. And" }, { "start": 3528.16, "end": 3531.92, "text": " it can really do too much like common sense reasoning here." }, { "start": 3533.76, "end": 3540.96, "text": " I was, you mentioned this a bunch of times, I was surprised to find codecs as a model. And so you" }, { "start": 3540.96, "end": 3547.28, "text": " have, these are sort of vanilla models. And then you have the translated ones where all your" }, { "start": 3547.28, "end": 3554.2400000000002, "text": " all your improvements are in there. So there is the action translation, there is the sampling," }, { "start": 3554.2400000000002, "end": 3561.6800000000003, "text": " even according to the probability and executability, there is the retrieval of the" }, { "start": 3561.6800000000003, "end": 3567.1200000000003, "text": " closest prompt and so on. And these translated models, they perform really well. What I was" }, { "start": 3567.1200000000003, "end": 3572.7200000000003, "text": " surprised by and also by the results is that codecs, I mean, that it's even in here, it's like a code" }, { "start": 3572.72, "end": 3579.8399999999997, "text": " model, but also that comparably, it holds up, right? It's not as good as the GPT-3 model, but" }, { "start": 3579.8399999999997, "end": 3588.8799999999997, "text": " it's also very, very much smaller. So parameter by parameter codecs is outshining GPT on this task" }, { "start": 3588.8799999999997, "end": 3596.56, "text": " very well. How did you even consider using codecs? And how can you explain that this model is" }, { "start": 3596.56, "end": 3603.2, "text": " doing so well? Yeah. So one intuition why, this actually came out to be pretty surprising to us" }, { "start": 3603.2, "end": 3610.4, "text": " as well. So we did find like this codecs models are really good at generating these plans. And" }, { "start": 3610.4, "end": 3617.92, "text": " actually from my own experience playing with this models, I did find like codecs thinks that this is" }, { "start": 3617.92, "end": 3625.36, "text": " part of some doc stream. 
So it's actually imagining like people just like asking the doc stream here," }, { "start": 3625.36, "end": 3631.28, "text": " but instead of letting keep generating the code, we kind of just stop here. So, okay." }, { "start": 3631.28, "end": 3637.76, "text": " Yeah. When it's the doc stream for us, that's enough. So yeah, so it's actually doing some of" }, { "start": 3637.76, "end": 3644, "text": " this kind of doc stream. It generates this doc stream thing. And the reason I think the smaller" }, { "start": 3644, "end": 3652.7200000000003, "text": " codecs model are actually better than the same size GPT-3 model is that because it's trained on" }, { "start": 3652.72, "end": 3661.68, "text": " a more structured data. So like code and specifically many of this code examples" }, { "start": 3662.3999999999996, "end": 3671.6, "text": " in the training data set consists of doc stream and the code. So it not only can handle code really" }, { "start": 3671.6, "end": 3677.7599999999998, "text": " well, it can also generate really realistic doc streams. So, and people in doc stream, they don't" }, { "start": 3677.76, "end": 3685.2000000000003, "text": " write in like... Yeah, they don't write a novel. Yeah. So they write something really step by step" }, { "start": 3685.2000000000003, "end": 3691.36, "text": " and have more structure in it. So that's my intuition why it actually does really well with" }, { "start": 3691.36, "end": 3699.84, "text": " this task. So you can really process this sequential like logical reasoning better than the same" }, { "start": 3700.48, "end": 3707.1200000000003, "text": " size GPT-3 model. But of course, if you use a larger model, that potentially be more helpful." }, { "start": 3707.12, "end": 3714.08, "text": " Yeah. Or I mean, there is, as you said, there is still a lot of open questions about how exactly" }, { "start": 3714.08, "end": 3719.52, "text": " you structure the prompts. Like maybe this step one, step two, step three isn't ideal for these" }, { "start": 3719.52, "end": 3726.16, "text": " language models. Maybe you need to more let them write like a Reddit post or something about" }, { "start": 3726.16, "end": 3733.44, "text": " how they went and got a glass of milk yesterday and then translate that somehow. But yeah," }, { "start": 3733.44, "end": 3741.36, "text": " it's pretty cool. So one thing that just came to my attention right here is this top row right here," }, { "start": 3741.36, "end": 3749.68, "text": " which I found hilarious. So the task is complete Amazon Turk surveys. So the four steps apparently" }, { "start": 3749.68, "end": 3758.96, "text": " that you need to do is walk to home office, sit on chair, switch on computer, look at computer." }, { "start": 3758.96, "end": 3764.88, "text": " Like, is this the description of complete Amazon Turk? It's a pretty accurate description maybe of" }, { "start": 3764.88, "end": 3772.8, "text": " what Amazon Turk workers do. So like I said, these tasks are generated by crowdsource from humans." }, { "start": 3772.8, "end": 3779.84, "text": " And the humans here happen to be Amazon Turkers. So one of them decided that, okay, if you want me" }, { "start": 3779.84, "end": 3785.2, "text": " to generate some tasks, I would say like just complete surveys on Amazon Turkers. Yeah," }, { "start": 3785.2, "end": 3792.56, "text": " so they decided to put one of this here and we found this here, there is two. 
So like I said," }, { "start": 3792.56, "end": 3797.9199999999996, "text": " so this language model, so they can't really handle anything that you wanted to" }, { "start": 3798.96, "end": 3807.6, "text": " generate. So because we did put the example in the front. So I think in this case, the example" }, { "start": 3807.6, "end": 3815.12, "text": " happens to be something related to computer and the example is that you can't really see" }, { "start": 3815.12, "end": 3821.3599999999997, "text": " the models actually happen to reason or potentially you could just repeat the example." }, { "start": 3821.3599999999997, "end": 3827.04, "text": " But depending on other tasks, it doesn't seem like that's the case, but it does come to the" }, { "start": 3827.04, "end": 3832.3199999999997, "text": " reasoning that like this might be something related to computer too. And I'm going to put" }, { "start": 3832.3199999999997, "end": 3838.88, "text": " like this steps here. Yeah, yeah. I mean, this is, I mean, it has something like melancholic" }, { "start": 3838.88, "end": 3844.96, "text": " and it also has something a bit, as you said, rebellious of like, you know, I'm here doing my" }, { "start": 3844.96, "end": 3850.56, "text": " Amazon Turk work, I'm gonna, you know, I'm just gonna put my Easter egg in there in this data" }, { "start": 3850.56, "end": 3857.44, "text": " set or like show you, but it also shows something I think about the interaction with this environment" }, { "start": 3857.44, "end": 3863.36, "text": " because, you know, if you ask me, you know, what did you do today, I could tell you, you know," }, { "start": 3863.36, "end": 3869.92, "text": " I programmed this, I viewed a poll request, I sent some email and so on. But in the action space of" }, { "start": 3869.92, "end": 3877.36, "text": " this environment, this would all just be characterized as go to desk, sit on chair, switch on computer," }, { "start": 3877.36, "end": 3885.6, "text": " look at computer. And yeah, so it is really, maybe also a constraint of the environment itself. And" }, { "start": 3887.52, "end": 3892.8, "text": " as I said, I think the challenge is going to be there's so much knowledge in these language" }, { "start": 3892.8, "end": 3899.36, "text": " models, and we somehow need to get it out into the domain that we care about. And yeah, I guess," }, { "start": 3899.36, "end": 3906.96, "text": " I guess many opportunities are still there. And in this particular environment, is it so the way I" }, { "start": 3906.96, "end": 3912.6400000000003, "text": " see it, we have this environment, it's a 3d environment, but you never actually for your" }, { "start": 3912.6400000000003, "end": 3918.2400000000002, "text": " studies, you never actually had to actually execute anything in the environment. Is that" }, { "start": 3918.24, "end": 3925.2, "text": " correct? Or do I see something wrong here? I think those when you say execute do you mean like," }, { "start": 3926.08, "end": 3933.4399999999996, "text": " like run in the environment? Yeah, like run the 3d environment, like actually give it to the" }, { "start": 3933.4399999999996, "end": 3938.56, "text": " environment, because you evaluate executability, you can do with a parser, right, to see whether" }, { "start": 3938.56, "end": 3943.6, "text": " it matches the actions and constraints. 
And the correctness you evaluate with the humans," }, { "start": 3943.6, "end": 3948.16, "text": " because my question was also a little bit like, why can't I just run it and see if, you know," }, { "start": 3948.16, "end": 3953.8399999999997, "text": " at the end, there's breakfast, but you already, you already said that the tasks are so, so open," }, { "start": 3953.8399999999997, "end": 3960.7999999999997, "text": " like, how would you how would you detect there's breakfast, right? So, so, in terms of so a bit" }, { "start": 3960.7999999999997, "end": 3967.12, "text": " background here for the virtual environment. So it comes in two versions. One is the, I think" }, { "start": 3967.12, "end": 3974.88, "text": " that they call the evolving graph version, which is a pure, like you said, a state machine, a Python," }, { "start": 3974.88, "end": 3982.48, "text": " like reading in Python. So it just goes in and then checks which whether the actions can be parsed," }, { "start": 3982.48, "end": 3988.64, "text": " and then we satisfy the common sense constraint. And the other version they implement is this," }, { "start": 3989.3599999999997, "end": 3996.24, "text": " is this visualized version, where they actually only implement a subset of" }, { "start": 3996.24, "end": 4001.9199999999996, "text": " the act the total action supported in the environment. So I think they, so in the" }, { "start": 4001.9199999999996, "end": 4008.3199999999997, "text": " evolving graph version, the Python version, there are 42 actions. And in the visualized version," }, { "start": 4008.3199999999997, "end": 4015.6, "text": " there are only 10 actions. So it's limited. Like the plans we can generate, we can really" }, { "start": 4015.6, "end": 4021.3599999999997, "text": " visualize are limited. So that's also part of the reason we don't show the visualized version to" }, { "start": 4021.36, "end": 4028.1600000000003, "text": " humans. Like, can you tell us whether this is successful or not? So, yeah, that's, that's a," }, { "start": 4028.88, "end": 4036.2400000000002, "text": " that's indeed something we can do right now. And I think that's like as a community, as we go," }, { "start": 4036.2400000000002, "end": 4042.56, "text": " go on, like, to this next step with more complex tasks that humans do every day, instead of just" }, { "start": 4042.56, "end": 4048.4, "text": " like, lower level tasks. As a community, I think more efforts can be can be put here and" }, { "start": 4048.4, "end": 4055.6800000000003, "text": " to develop better simulator and also maybe beyond even household environment. So just as a," }, { "start": 4056.56, "end": 4062.8, "text": " as a story here, I did play around with the codecs and then GPT-3 models to have it generate" }, { "start": 4062.8, "end": 4068.4, "text": " something out of the household domain. And seems like they do have some, a lot of knowledge for" }, { "start": 4068.4, "end": 4075.12, "text": " those as well. So if you can ask it, how do, how do I pay bills at a restaurant? And how do I" }, { "start": 4075.12, "end": 4081.8399999999997, "text": " work out at the gym? And I think in, on Twitter, there's also someone tries to, after the posting" }, { "start": 4081.8399999999997, "end": 4088.88, "text": " of this paper, they try to ask the GPT-3 model, how do I start a company? So yeah, they do have" }, { "start": 4088.88, "end": 4095.8399999999997, "text": " a lot of knowledge for this. 
And as long as you can provide a set of actions that are necessary" }, { "start": 4095.8399999999997, "end": 4102.4, "text": " to complete these tasks, I think no matter what, what the granularity is, ideally it should be" }, { "start": 4102.4, "end": 4109.759999999999, "text": " at the same granularity as of humans. So ideally it should be, this model should be able to" }, { "start": 4110.32, "end": 4115.679999999999, "text": " generate something, something sensible and reasonable. But yeah, right now is something" }, { "start": 4115.679999999999, "end": 4122.4, "text": " that you definitely can't trust to put on a robot, of course. Yeah. Yeah. I mean, it's," }, { "start": 4122.4, "end": 4128.799999999999, "text": " I've always, I've always seen people thinking when they think GPT-3 or so they, they, and they think," }, { "start": 4128.8, "end": 4134.72, "text": " for example, of video games, they always imagine, you know, we can have our NPC, our characters," }, { "start": 4135.4400000000005, "end": 4141.4400000000005, "text": " the dialogue be generated by GPT-3. So it, the dialogue is more realistic, but I think" }, { "start": 4141.4400000000005, "end": 4148.88, "text": " this shows that it can go further if we are able to map sort of GPT-3's knowledge into a sort of" }, { "start": 4148.88, "end": 4155.2, "text": " structured domain that we choose, we could potentially also let these models generate the" }, { "start": 4155.2, "end": 4161.679999999999, "text": " action sequences of like, of characters, for example, let's say in video games, because that's" }, { "start": 4161.679999999999, "end": 4166.96, "text": " like a common complaint that, you know, the guards, they always walk up and then down and then left" }, { "start": 4166.96, "end": 4170.8, "text": " and then right and then up and then down and right. They have these, even if the dialogue" }, { "start": 4170.8, "end": 4177.599999999999, "text": " gets really good, their behavior is still kind of lame, either that or they cheat, they know where" }, { "start": 4177.6, "end": 4185.4400000000005, "text": " you are at all times. But with, I feel with models like this, we can almost like take this common sense" }, { "start": 4185.4400000000005, "end": 4193.6, "text": " knowledge and maybe have the hopes of transferring that to various domains and infuse a lot of areas" }, { "start": 4193.6, "end": 4198.8, "text": " with common sense. And that I find that to be, I find that to be pretty cool in itself." }, { "start": 4198.8, "end": 4202.08, "text": " That would be really exciting and interesting application. Yeah." }, { "start": 4202.08, "end": 4210.24, "text": " Yeah. Yeah. So I mean, there's a lot of things to be gained. So what I did, I was specifically" }, { "start": 4210.24, "end": 4216.08, "text": " intrigued about clip. I don't know if you are thinking about this or not. But what I tried to" }, { "start": 4216.08, "end": 4222.4, "text": " do is I tried to take like a frame of Pac-Man, like, and you know, there's like walls here and" }, { "start": 4222.4, "end": 4230, "text": " here and here. And I had Pac-Man be like, you know, here facing a wall. And then there's like" }, { "start": 4230, "end": 4238.16, "text": " a ghost behind Pac-Man, right? And then there's like these little dots over here to eat. And so" }, { "start": 4238.16, "end": 4243.52, "text": " it was like super clear what you have to do. So I tried to feed that to clip. 
And you know, you can" }, { "start": 4243.52, "end": 4248.88, "text": " make clip classify things by just evaluating a bunch of different strings with it. So I like try" }, { "start": 4248.88, "end": 4256.16, "text": " to, I try to evaluate the strings, go left, go up, go right, go down, or like Pac-Man should go left," }, { "start": 4256.16, "end": 4261.84, "text": " Pac-Man should go up, but it never worked out. So if you can, if you could get something like" }, { "start": 4261.84, "end": 4268.48, "text": " this running, this would be amazing. Maybe with your knowledge, maybe Pac-Man isn't the right" }, { "start": 4268.48, "end": 4274.96, "text": " environment because clip was trained on whatever picture scraped from Instagram. But I think just" }, { "start": 4274.96, "end": 4281.5199999999995, "text": " this this type of, you know, thinking beyond just the strings in terms of language, but thinking in" }, { "start": 4281.52, "end": 4286.72, "text": " terms of I have some structured environment and I want to leverage this, this knowledge of these" }, { "start": 4287.4400000000005, "end": 4293.4400000000005, "text": " models is super cool. Yeah, that would be a super interesting application. I think using clip here," }, { "start": 4294.56, "end": 4301.120000000001, "text": " like, because it feels in another modality, which is image could be really interesting. So I think" }, { "start": 4301.120000000001, "end": 4308.72, "text": " it kind of solves one of the major limitations of this paper, namely just the, because currently" }, { "start": 4308.72, "end": 4314.56, "text": " we generate plans regardless of the environment state. So it doesn't condition on environment" }, { "start": 4314.56, "end": 4320.4800000000005, "text": " state and potentially using clip, you can encode something there because you can also take image" }, { "start": 4320.4800000000005, "end": 4328.72, "text": " as input to, to an image can serve, can serve as state for, for, for the environment. I think" }, { "start": 4328.72, "end": 4338.320000000001, "text": " that would be really cool. And yeah, so yeah. So just to be, to be clear to the listeners," }, { "start": 4338.32, "end": 4344.639999999999, "text": " the basic idea for this I have from, from a PhD student that was partially in our lab called" }, { "start": 4344.639999999999, "end": 4352.16, "text": " John Battista Parascandolo. So the, the credit fully goes to him of, of this whole idea. I didn't" }, { "start": 4352.16, "end": 4357.84, "text": " want to, but I just, it got me thinking so much about, you know, we can extract this knowledge" }, { "start": 4357.84, "end": 4363.599999999999, "text": " into, into other modalities. And that's, that's pretty cool. Is there anything you want to maybe" }, { "start": 4363.6, "end": 4370.56, "text": " say about the experiments? Is there anything that was very surprising to you or, you know," }, { "start": 4370.56, "end": 4374.160000000001, "text": " something you didn't expect or something you particularly want to highlight?" }, { "start": 4376.56, "end": 4382.240000000001, "text": " Actually, I think we covered most things, but I think I might say something about the, the," }, { "start": 4382.240000000001, "end": 4388.88, "text": " the baseline here. I see, you can probably see, except for the human references, we also got to" }, { "start": 4388.88, "end": 4395.4400000000005, "text": " got to fine tune a GPT-3 version. 
And we did find that fine tuning can, can be a really strong" }, { "start": 4395.4400000000005, "end": 4402.16, "text": " baseline here, because as you can probably tell the, one of the measures here, LCS, which is the" }, { "start": 4402.16, "end": 4409.52, "text": " longest common subsequence. This measure here is much higher than the others. So this measure" }, { "start": 4409.52, "end": 4418.16, "text": " basically calculates how much overlapping there is in your generative plants against the" }, { "start": 4418.16, "end": 4427.92, "text": " those plants written by humans. So it's kind of calculating this IOU score. So we did find that," }, { "start": 4427.92, "end": 4434.08, "text": " find this to be a strong baseline. And I think it still actually makes sense to, to be a strong" }, { "start": 4434.08, "end": 4440.639999999999, "text": " baseline because this is trained on such data. And so this is kind of to illustrate that, like" }, { "start": 4441.44, "end": 4447.12, "text": " if you do have domain data, it's still really helpful to, to train your models, fine tune your" }, { "start": 4447.12, "end": 4453.599999999999, "text": " models this way. But if you don't have something like this, you can potentially just leverage the" }, { "start": 4453.599999999999, "end": 4462.72, "text": " knowledge already in this language models. Cool. Yeah. So where, where does your future lie? What" }, { "start": 4462.72, "end": 4469.5199999999995, "text": " are you, I, I, are you going to, are you going more into this direction? Or was this sort of like a" }, { "start": 4469.5199999999995, "end": 4476, "text": " one-off thing? Or do you have, I mean, what are the interesting questions that, that you are asking" }, { "start": 4476, "end": 4482.24, "text": " now maybe as a follow-up to this? Yeah. So I personally, I haven't decided because I," }, { "start": 4482.24, "end": 4489.84, "text": " I'm in a stage where like I'm applying to PhD programs and, and, and also other positions." }, { "start": 4490.8, "end": 4497.12, "text": " So like, but, but as a follow-up, I think it would be really interesting. As I mentioned," }, { "start": 4497.12, "end": 4504.56, "text": " one limitation, major limitation of, of this work is that we haven't found a clear way to" }, { "start": 4504.56, "end": 4511.200000000001, "text": " condition on the environment state. So that like, if you really place an agent in, in the household," }, { "start": 4511.200000000001, "end": 4517.4400000000005, "text": " for example, there is no, if you want to make coffee, but there is no cough, but there, there's no," }, { "start": 4518.56, "end": 4524.240000000001, "text": " there isn't a automatic coffee machine. How would you make a coffee with some, maybe a similar" }, { "start": 4524.240000000001, "end": 4531.52, "text": " devices. So the agent can really reason if you just put it this way, because it doesn't condition" }, { "start": 4531.52, "end": 4538.72, "text": " on the environment state. So I think it would be really interesting to like investigate how you can" }, { "start": 4539.200000000001, "end": 4545.120000000001, "text": " also condition on the current environments and then, and then reason from there. But this might" }, { "start": 4545.120000000001, "end": 4550.72, "text": " require some training data. 
And I think that's part of the reason why we don't like go full length" }, { "start": 4550.72, "end": 4558.160000000001, "text": " here to investigate this, because this is something just for us to tell people, like this is an" }, { "start": 4558.16, "end": 4564.32, "text": " interesting finding and we may be able to leverage something here. But I think this will be really" }, { "start": 4564.32, "end": 4572.24, "text": " exciting and like interesting future work. Cool. Excellent. Wenlong, thank you very much for being" }, { "start": 4572.24, "end": 4577.84, "text": " here. This was awesome. So great to hear from, you know, from always from the people who made the" }, { "start": 4577.84, "end": 4583.44, "text": " stuff. So yeah, thanks a lot. Yeah, thank you so much. Yeah. And yeah, I think I also want to" }, { "start": 4583.44, "end": 4590.639999999999, "text": " also want to like point that like, this is a group effort and really a lot of thanks goes to" }, { "start": 4590.639999999999, "end": 4599.36, "text": " three of my advisors, Peter Bill, Deepak Pathak and Igor Mordac. Excellent. All right. Thank you." }, { "start": 4599.36, "end": 4607.919999999999, "text": " And I hope to see you again. Yeah, I'm like, it would be an honor to always to be here. Yeah." }, { "start": 4607.92, "end": 4624.32, "text": " Excellent. All right. Bye bye. Yeah. See you." } ]
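The CLIP idea floated near the end of the interview, scoring a handful of candidate action strings against a game frame, is easy to try. Here is a rough sketch using OpenAI's CLIP package; the file name and the action strings are made up for illustration, and, as discussed in the interview, there is no guarantee this works well on game imagery:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")
image = preprocess(Image.open("pacman_frame.png")).unsqueeze(0)  # hypothetical frame
actions = ["Pac-Man should go left", "Pac-Man should go right",
           "Pac-Man should go up", "Pac-Man should go down"]
text = clip.tokenize(actions)
with torch.no_grad():
    logits_per_image, _ = model(image, text)  # similarity of the frame to each string
    probs = logits_per_image.softmax(dim=-1)  # pick the best-matching action string
```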
hv3UO3G0Ofo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "cnn", "resnet", "big bird", "bigbird", "attention", "attention mechanism", "attention for images", "transformer for images", "transformer", "bert", "convolutions", "window", "neighbors", "axial attention", "position embeddings", "positional encodings", "quadratic", "memory", "panoptic segmentation", "coco", "imagenet", "cityscapes", "softmax", "routing" ]
#ai #machinelearning #attention Convolutional Neural Networks have dominated image processing for the last decade, but transformers are quickly replacing traditional models. This paper proposes a fully attentional model for images by combining learned Positional Embeddings with Axial Attention. This new model can compete with CNNs on image classification and achieve state-of-the-art in various image segmentation tasks. OUTLINE: 0:00 - Intro & Overview 4:10 - This Paper's Contributions 6:20 - From Convolution to Self-Attention for Images 16:30 - Learned Positional Embeddings 24:20 - Propagating Positional Embeddings through Layers 27:00 - Traditional vs Position-Augmented Attention 31:10 - Axial Attention 44:25 - Replacing Convolutions in ResNet 46:10 - Experimental Results & Examples Paper: https://arxiv.org/abs/2003.07853 Code: https://github.com/csrhddlam/axial-deeplab My Video on BigBird: https://youtu.be/WVPE62Gk3EM My Video on ResNet: https://youtu.be/GWt6Fu05voI My Video on Attention: https://youtu.be/iDulhoQ2pro Abstract: Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. Authors: Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Transformers are quickly coming for your favorite models. Yesterday they replaced LSTMs in NLP. They used to be good at NLP, but we now have transformers. Think again. Today we're going to see that maybe, in the near future, transformers will replace convolutions in image processing. This paper is a step towards this direction. You just wonder, what is it going to be tomorrow? Maybe linear regression is going to be replaced just by giant transformers trained on 5,000 TPUs. Who knows? We'll see. In any case, we're looking at Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation by Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille and Liang-Chieh Chen of Johns Hopkins University and Google Research. So this paper combines a bunch of techniques that have been introduced recently to deal with attention in problems where you would traditionally use a convolution. In this particular case, they deal with the problem of panoptic segmentation: you get an image with a bunch of stuff on it, like a cat here and a house right here, and you're supposed to color the pixels of the same object the same. So all these pixels here are house, all these pixels right here are cat, and so on, and then there's also the background, so all these pixels right here are background. For this problem it's kind of important, first of all, that you're very precise, so you can look at pixels or clusters of pixels, and also that you take long-range dependencies into account: if you, for example, recognize that this is a house, and you recognize that here's a wall right here, you might be able to much better classify what is wall over here and what isn't. So these kinds of long-range dependencies play a role in these problems across images, and usually attention mechanisms are pretty good for long-range dependencies, but they're also expensive, and that's what this paper deals with. They use this axial attention, which has been introduced for exactly resolving this problem in types of data like images or higher-order tensors, and they also combine it with learned positional encodings, which we've seen time and time again throughout the transformer and attention literature. The combination of axial attention and these learned positional embeddings allows them to replace the ResNet backbone that is usually found in panoptic segmentation models with standalone attention. So they build models that partially replace the convolutions with attention modules, or replace them entirely, so the entire model is just an attention model, no more convolutions in it, and they perform pretty well in classic tasks: they test on ImageNet classification and perform pretty well, and they achieve state-of-the-art on some of these segmentation tasks. We'll go through the model right here. This is a very, very extensive paper in terms of experimental evaluation; what I want to get into is mainly how the method works, and to show you what their model looks like. So we'll go through it, and as always, let me know what you think in the comments, and tell me if you liked it or not. Share it out if you did. Alright, so they go over a very long list of prior work, which is pretty cool, and here they state their contributions. Their contributions are fourfold.
First of all, the proposed method is the first attempt to build standalone attention models with a large or global receptive field, and we'll see what that means. Second, they propose a position-sensitive attention layer that makes better use of positional information without adding much computational cost. Third, they show that axial attention works well not only as a standalone model on image classification, but also as a backbone on panoptic segmentation, instance segmentation and semantic segmentation. Maybe what I described before was instance or semantic segmentation and not panoptic segmentation; excuse me if that's the case. As you can see, it can be used for various image tasks. Lastly, their Axial-DeepLab improves significantly over the bottom-up state-of-the-art on COCO, achieving performance comparable to two-stage methods, and they also surpass previous state-of-the-art methods on Mapillary Vistas and Cityscapes. So these are various tasks, as I said, and what they don't mention here is that they also perform fairly well on ImageNet. In fact, in the abstract they formulate this as: in particular, our model outperforms all existing standalone self-attention models on ImageNet. That's a way to phrase it: you just exclude all of the other models until you're the best. Outperforms all existing standalone self-attention models on ImageNet. That's good. There's something to be said for comparing apples to apples, but you can also go overboard if you want to make your work look as good as possible. Of course everyone does that, and there's no particular shame in it. So we're going to build up their model right here, and the basic element of this model is going to be this self-attention mechanism. Quickly, because I know you all know what it is, but very quickly: you want to perform this action right here over a region right here. There is always a query, and now the subscripts here are going to be important in this paper. The query is at a given position, position o, and you can see that's the o right here; I'm going to call it the output, I guess that's what they said as well, so the output position. You want to go over all of the input positions, and you want to aggregate data from all of the input positions; that's right here. How do you aggregate data? By this softmax operator right here. You can see the key also has a p right here, and the softmax is over the axis of p. In the particular case of images, what does that mean? If you have an image right here, it's made into pixels, so you have pixels. Now a transformer, or generally these attention models, what you can imagine is that they always transform a data point into a data point of the same dimensions. This doesn't actually have to be so, and I think one of the developments that is going to come in the coming years or months or weeks, maybe someone's already doing it, is in fact to play more with this arbitrary constraint that we're imposing on ourselves, because it's not really clear that this is the best thing. But for now, an attention layer always transforms a data point, here a 4x4 image, into a data point of the same size, also a 4x4 image right here. Now this is, as I said, quite simplified, but it is true in NLP, where we always transform our 512-token sequence into a 512-token sequence, and it is true here. Now the output is going to be here on the right, and the question always is: okay, I'll go over these pixels right here, and for every pixel, let's say for this pixel, I'm going to ask, what data goes there?
What's the output of the layer at that particular pixel? And the output of the layer is going to be somehow dependent on the input right here. Now, if you know classic convolutional models, what the classic convolutional model says is that the output of this is going to be dependent on this region right here, if it's like a 3x3 filter. So you have this convolutional filter, and that means the blue dot on the right is going to pay attention to its own location in the input, plus everything around it. And then every single data point here is going to do that; so for example, this green data point is going to pay attention to this region right here. Now there's a border, so there's maybe some padding, but the question is always: where does the information come from, and how is it aggregated? What happens in a convolution layer? In a convolution layer, you simply have your filter, and the filter has numbers in it, like three and five and eight and so on. What you're going to do is take this region right here, this blue region of the lower layer, which is also filled with numbers, like seven, and what's a good number? Zero. Zero is a nice number. You're going to multiply those, then sum them up, and then put that where the blue dot is. So where does the information come from in the convolution? From around the output location, but in the input. You go to the input at the same location as where you want the output to be, you take the neighborhood, and there is a fixed scheme of aggregating the neighborhood: you multiply and you sum across it. In contrast to this, in a fully attentional model, where does the information come from? Let's again look at the blue dot, and let's consider it fully attentional. Where does the information come from? Everywhere, anywhere, anywhere at all. Now how do I know how to aggregate the information? It's no longer a neighborhood, so how do I know how to aggregate it? That's also different; two things are different. In a convolution, I would have another 4x4 grid here that's pre-specified, but in the attention model, this here is basically all filled with question marks. Question mark, question mark: what number goes here? In the end, I also do this multiply, and I sum it up, and I put it right here. But how do these numbers come to be? Well, these numbers are dynamically computed, also from the input. It's a bit special, but this is how attention works: every pixel gets to decide where information comes from and how it is aggregated. It comes from anywhere, and how it is aggregated is dynamic, depending on the pixel. If you still don't understand it, it maybe pays to watch a video on attention itself; I happen to have made one, but you can watch any one. When you understand that, you will understand the extension here to images: it's the exact same thing as with the sequence, except the pixels are basically one long sequence in the image. So this would be a fully attentional model down here. Now what's the problem here? The problem is that pictures are pretty large. Even something like MNIST, which is 28 by 28, is 784 pixels. And our big transformers now, so BERT, a very famous transformer, takes inputs that are like 512 in length, and you already need pretty decent hardware to run this.
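To make that concrete, here is a minimal sketch of full self-attention over an image, treating the pixels as one long sequence. This is my own illustration, not the paper's code; the function and weight names are made up, and it's a single head without any of the usual extras:

```python
import torch

def full_self_attention(x, wq, wk, wv):
    # x: (N, C) with N = H*W, the image flattened into one long pixel sequence
    # wq, wk, wv: (C, D) projection matrices for queries, keys and values
    q, k, v = x @ wq, x @ wk, x @ wv
    # every pixel attends to every other pixel: attn is (N, N), quadratic in N
    attn = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v  # dynamic, input-dependent aggregation over all positions
```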
And the requirements on memory and compute scale quadratically with the input length. So already with MNIST you're in pretty shady territory, and if you go up to something like ImageNet, which is 224 by 224, that's bad. That's not good. So you have to come up with something else. People have been playing around, and the reason why I introduced it this way is that people have been playing around with a compromise between the two. The compromise that this paper focuses on is the following. You remember when I asked where the information for a given pixel comes from, and we said, okay, it can come from anywhere in the attention framework, and that's good, because that allows us to make super long-range connections: any pixel can aggregate information from any other pixel, and not even in a fixed way, but in a dynamic way, so depending on the pixel value itself and the other values, it can decide how it wants to aggregate information. That turns out to be expensive: every pixel together with every pixel, well, that's quadratic. Okay, so what do we do? We make a third method that's going to be a compromise, and the compromise is the following: we still do the dynamic aggregation, which means that we still do the attention thing; however, we restrict it back to this neighborhood region of the convolution. So in this model, where does information for the blue dot come from? It again comes from this neighborhood right here, and the size here is going to be called m. So it still comes from that m by m neighborhood, so a pixel can only aggregate information from its neighbors, but contrary to a convolution, how it aggregates the information, what in a convolution would be the kernel, is made dynamically by the attention module, on a case-by-case basis. So we restrict it to a neighborhood, multiply, sum it up, and then put it into the output, and we do that for every pixel. Now it resembles a convolution much more: it's simply a convolution with this dynamic matrix right here. And that's the starting point for this paper. So this paper does two things to this. It says, okay, we can augment this by so-called positional embeddings. A positional embedding you might know from the sequence transformers. If I have a sequence, "my cat is tall" (I don't even know what that means for a cat), what is a positional encoding? If you use a transformer, you transform this, as we said, into a sequence of equal length, and the transformer, being basically information routing, simply sees the lower-layer sequence as a set, not as a sequence. It has no notion of what's neighboring to what, of what comes from where. So it pays to tell the transformer: by the way, this is word one, this is word two, this is word three, this is word four. There are various ways to do it; transformers usually have fairly complicated, sine-wave-based positional encodings that bring many advantages with them. In this case, they say, well, it might pay off to learn where these things actually are in this neighborhood. So they experiment with relative positional encodings, which means they annotate this neighborhood with something like: here in the middle it's (0, 0), here it's (0, 1), here it's (0, -1), here (-1, 0), and so on. So they annotate it with these positional encodings.
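A rough sketch of this compromise, restricting attention to an m by m window and adding a relative-position term, might look as follows. Again, this is my own single-head illustration under simplifying assumptions, not the authors' implementation; r here is one embedding per relative offset (the next part explains that these end up being learned rather than hand-coded), and the full layer in the paper adds more position terms and multiple heads:

```python
import torch
import torch.nn.functional as F

def neighborhood_attention(x, wq, wk, wv, r, m=3):
    # x: (C, H, W); wq, wk, wv: (C, D); r: relative position embeddings, (D, m*m)
    C, H, W = x.shape
    q = torch.einsum('chw,cd->dhw', x, wq).reshape(-1, H * W)      # (D, H*W)
    k = torch.einsum('chw,cd->dhw', x, wk)
    v = torch.einsum('chw,cd->dhw', x, wv)
    # unfold gathers the m*m neighborhood of every pixel: (D, m*m, H*W)
    k = F.unfold(k[None], m, padding=m // 2)[0].reshape(-1, m * m, H * W)
    v = F.unfold(v[None], m, padding=m // 2)[0].reshape(-1, m * m, H * W)
    logits = torch.einsum('dp,dnp->np', q, k)           # content term, q . k
    logits = logits + torch.einsum('dp,dn->np', q, r)   # position term, q . r
    attn = torch.softmax(logits, dim=0)                 # softmax over the m*m neighbors
    return torch.einsum('np,dnp->dp', attn, v).reshape(-1, H, W)
```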
Now, this would be the easy way. What they actually do is they simply give the model a matrix like this, and they learn that matrix by heart, let's say. So the positional encodings are relative positional encodings, and they are learned. You can do that, you can learn positional encodings: if you don't want to do the one, two, three, four right here, you simply say, well, here is a vector, here is a vector, here is a vector, and here is also a vector. Now, model, you're already learning all the weights to make this thing here happen, and you're already learning your output weights up here, using backpropagation; why don't you learn yourself what you would like for position one, like what kind of information you would like to have there, using backpropagation? So you always provide the same vector: this is the same vector for position one across all of the data points, and you have a different vector for position two, and a different vector for position three. Across all of the data points these vectors are going to be the same, so the model must somehow learn, independent of the data point, what it means to be in position one; the model must learn how it wants to fill that vector. That's called learned positional embeddings. We've seen this in many models so far; it usually works pretty well, and I guess it works especially well here with these relative positional encodings. So this thing here is not going to be an actual matrix filled with these numbers; it's going to be a learned, trainable matrix that the network is allowed to fill with numbers, like three, five, eight. And you might notice that we've seen this before: ultimately, the information in this blue thing right here is going to depend on this dynamically created aggregation of information through the neighborhood, and on this statically learned aggregation of information throughout the neighborhood, which is sort of a convolution, because in the convolution, you've already seen, this is a statically learned map of how to aggregate information from the neighborhood of a pixel. So even though there are slight differences (they, for example, say these are the same across attention heads, and so on), I suspect that you can think of these learned positional embeddings as kind of like what you learn in a convolution. Not exactly, though; I think I made a mistake there, and we'll see it in the formula. Okay, so here they introduce these positional embeddings. You see that previously we had the softmax over this and this: this is the lower layer, this is the information that comes into the layer, and it is transformed into values by a linear matrix. For each of the output locations, you want to know: how should I aggregate information from that lower layer? And you do this by this thing here, the dynamically constructed attention matrix, using the softmax. This comes from the query at the output position and the keys at the input position, and now you add to that this thing right here, which is again an inner product, between the query and the positional encodings. So the positional encodings are going to be learned, but hard-coded in the sense that they don't depend on the input.
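That learned matrix boils down to a small trainable table with one vector per relative offset, indexed by the difference between query and key positions. A 1D sketch of the standard trick (sizes arbitrary, my own illustration):

```python
import torch
import torch.nn as nn

span, dim = 7, 16                       # attention span along one axis; feature size
# one trainable vector per relative offset, from -(span-1) to +(span-1)
rel_emb = nn.Parameter(torch.randn(2 * span - 1, dim))
# offset between every query position i and key position j, shifted to start at 0
idx = torch.arange(span)[:, None] - torch.arange(span)[None, :] + span - 1
r = rel_emb[idx]   # (span, span, dim): the same table is reused for every input
```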
They are still modified by the queries, though, so the query can still pay attention; the difference is that the keys depend on the input, while the positional encodings do not. So the queries can decide: I want to gather this and this and this type of information, that would be via the keys; or they can decide: I would like very much to look at pixels that are somehow on the bottom right of the pixel that I am now, that would be via the positional encodings. And that's the mistake I made when I said it's equivalent to a convolution: it is not, because the aggregation is still modulated by that query vector; otherwise, you would have this as a standalone quantity multiplied by the input right here. But it sort of pays off to think of it like what you do in a convolution: in the convolution, you learn how to aggregate information based on the position relative to the position that you want to output, and here you do a similar thing, you learn static position embeddings that you can then attend to with your queries. Alright, so these are the position embeddings, and they make use of those position embeddings as follows: in this work, we enable the output to retrieve relative positions, besides the content, based on query-key affinities. Formally, the problem up here is that you have these position embeddings, and here are the outputs, but if you do this in multiple layers (let's go with 1D sequences), and here you annotate the positions, let's just go one, two, three, four, then this layer can make use of that, we gather stuff from here; but then, when the next layer gathers information from here, where the information in the layer below came from is somehow getting lost, so it cannot really pull that information through to here, or at least it's very complicated. This model extends the positional embeddings in order to pull that information through. As you can see, there are two new things right here. The biggest, most important new thing is right here: this is how we aggregate information, and here is the information that we aggregate over. Previously, this was just the value vector, and now it is extended with positional embeddings, learned positional embeddings. With this, you're able to route the positional embeddings to the output. And also here you can see the attention gets fairly complex: you have query-key attention, which is classic attention; the queries can attend to positional encodings; but also the keys can attend to positional encodings. So not only can the node on top say, I would like to attend to position three; position three can also say, well, together with me, positions two and four are fairly important. I guess that's what that is (maybe I'm mistaken here), but you can see right here there is an interaction between the keys and the positional encodings. Now, these positional encodings are different for the queries, keys and values, but ultimately that doesn't make too much of a difference.
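Written out, the position-sensitive attention being described here is, as far as I can reconstruct it from the paper's notation (with $\mathcal{N}_{m \times m}(o)$ the $m \times m$ neighborhood around output position $o$, and $r^q, r^k, r^v$ the learned relative encodings for queries, keys and values):

$$y_o = \sum_{p \in \mathcal{N}_{m \times m}(o)} \operatorname{softmax}_p\!\left(q_o^\top k_p + q_o^\top r^{q}_{p-o} + k_p^\top r^{k}_{p-o}\right)\left(v_p + r^{v}_{p-o}\right)$$

The first term is classic content attention, the second lets the query attend to positions, the third is the key-position interaction just discussed, and the values are augmented with their own positional term so that positional information can be routed to the output.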
So here is the contrast between what a traditional attention layer would do and what they do. A traditional attention layer gets the input X and transforms it, by means of these linear transformations right here, into the queries (these are the queries, it's called Q), into the keys, and into the values. Then it does a matrix multiplication with the keys and the queries and puts that through a softmax; this here is going to be our attention matrix. The attention matrix is multiplied by the values, and that determines our output. Again, the attention matrix defines how we aggregate information, and the values are what information we aggregate for the output. In contrast, when we introduce these positional encodings, you can see right here, again we have query, key and value, but now it gets a little bit more complex: we do this query-key multiplication right here, but we also multiply the query by these positional embeddings for Q, and we multiply the keys by the positional embeddings for K, and all of this together (this is a big plus right here) is routed through the softmax. And now the diagram is a little bit complicated: you can see the softmax aggregates information from here and from these learned position embeddings. I would rather they had just drawn it like they did in the formula, do V plus R, and say that's going to be the information that we are aggregating, and the output of the softmax is how we aggregate information; this is the attention. Alright, I hope that's sort of clear: you introduce these positional embeddings for queries, keys and values, and that allows the model to have a sense of where the information is coming from, basically from what positions. If you drop the convolutions, you need this, because the convolution had this intrinsically: in your convolutional kernel, if there was a seven right here, that meant that, wherever you are, whatever is on the bottom right is seven important. The convolution has this intrinsically; here, if you just do attention, we as humans see it in this kind of grid form, but the machine doesn't. The machine simply sees a set of pixels; to the attention mechanism, this is exactly the same as a long list of pixels, or a discontinuous set, it doesn't matter to the machine. It's like the problem a feed-forward network has. So we need to annotate it, we have to give it positional information, and learned positional information seems to work very well right here, though you could also think of static positional information. Okay, this is the first thing: the positional embeddings that now help the attention mechanism see where the information is coming from; that's really important in pictures. So we add that. The second thing they do is this so-called axial attention. Now, axial attention is sort of a, let's say, trick in order to reduce the load on an attention mechanism. What does it mean? We've already seen it in sequences: if I have a sequence, a sequence layer, that's going to be n squared connections between the two. Now there are various ways to restrict that. Instead of having all of these connections, let's say from one node, we've already seen: wait, if we just restrict it to only this thing right here, only this stuff, that is lower in complexity, and in this case it would be just a neighborhood, so that's what we've done, that's this m thing right here. However, we can also do it in different ways: since this is a set anyway, we can simply say, maybe we should just always skip one; we could do attention like this, and that would be just fine too.
The second thing they do is this so-called axial attention. Axial attention is, let's say, a trick to reduce the load on an attention mechanism. What does it mean? We've already seen it in sequences: if I have a sequence and a full attention layer, that's going to be n squared connections between the two. Now there are various ways to restrict that. Instead of having all of these connections from one node, we've seen that if we just restrict it to a neighborhood, that is lower in complexity — that's this M thing from before. However, we can also do it in different ways: since this is a set anyway, we could simply say we always skip one, do attention like that, and that would be just fine too. It would also leave out some of the information, but you gain computational efficiency; there are various trade-offs. Now in a picture you have the same options. You can do the neighborhood thing as we did, or you can ask: where should the green pixel pay attention to? Axial attention says the green pixel should pay attention only to the row it is in and ignore the rest of the input, and then in the next layer we flip it: the same green pixel will pay attention only to the column it is in. That's called axial attention. But don't think there is anything special about this being an axis: you could also define — it would not be called axial attention, but it would make just as much sense — that the green pixel depends only on this diagonal in this layer and on the anti-diagonal in the next layer. Or you could say: I just choose five random pixels in this layer and five random pixels in the next layer, and that would work as well. We've already seen this in the paper called Big Bird: Big Bird explicitly used random connections in the attention mechanism, and their argument was that if we use different random connections in each layer, then information can travel pretty fast through the network. So what's the problem with these neighborhoods, with neighborhood attention like this? The problem is that you break the long-range dependencies. Let's see what happens if information needs to travel from this node to that node. In a classic attention mechanism everything is connected to everything, so the node in the next layer can simply aggregate information from anywhere. That's not possible if you do neighborhood attention: if the neighborhood is three wide, then at most this node can aggregate information from a node three away, and then again three further in the next step, because you can only attend within your neighborhood. This means that if I want to send information to something that's really far away, I need to go through many, many layers. And this has been well known — it has always been a property of convolutional neural networks. Convolutions specifically traded off the full connectedness of fully connected layers for local connections, but that means you have to go very deep in order to make long-range connections; you can't just make them in one step. It's the same problem right here. The Big Bird paper argued that if you have random connections instead of neighborhood connections, just the property of random graphs means that you are pretty fast at sending information around, because in a random graph of size n, on average any two nodes are connected by a path of length log n. This is much faster, because in the neighborhood scheme two nodes are connected by a path of length on the order of n: you can pretty easily see that if I make the sequence longer, I need that many more steps in order to send information across. In fact it's something like n divided by m, the neighborhood size.
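That path-length argument is easy to check empirically. Here is a small self-contained simulation (my own illustration, not from either paper): it counts how many attention hops — one hop per layer — it takes to route information across a sequence of length n under a local-neighborhood pattern versus random connections.

```python
from collections import deque
import numpy as np

def hops(neighbors, n, src=0):
    """BFS hop count from src to node n-1; one hop = one attention layer."""
    dst, dist, queue = n - 1, {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in neighbors(u):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None  # unreachable (possible, though unlikely, with random edges)

n, m = 1024, 8
rng = np.random.default_rng(0)
# Neighborhood attention: each node sees only m/2 positions to each side.
local = lambda u: range(max(0, u - m // 2), min(n, u + m // 2 + 1))
# Random attention (Big Bird style): each node sees m random positions.
table = {u: rng.choice(n, size=m, replace=False) for u in range(n)}
print(hops(local, n))               # 256 -- about n / (m/2) layers
print(hops(lambda u: table[u], n))  # a handful -- about log n layers
```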
In a random graph it's log n, and in this axial attention — that's why I introduced it — it's 2: any two nodes are connected within two steps. If this node needs to send information to that node, in a classic attention mechanism you could do it in one step, because every pixel attends to every other pixel. So how do we send information between the two here? We select this node right here in the first layer: it pays attention to its row, which includes the red dot, so the red dot can send information to the X in this layer. In the next layer we select our target node, where the information should go: it pays attention to all of its column, which includes that X from before, where we sent the information to. So it takes two layers, two steps, to send information from any node to any other node. That's pretty good. So with axial attention, if you stack these layers on top of each other, you sacrifice a little bit of being able to send information from anywhere to anywhere, for the pleasure of not having quadratic attention anymore: your attention mechanism is now only as big as your column is high or your row is wide. Again, this isn't specific to rows or columns; you could do it, as I said, with these kinds of diagonals, or with any other sub-pattern where you can guarantee that the overlap between the layers is enough to send information around efficiently. And that's what they use. With this axial attention, the formula is exactly the same; the only change from before is that the neighborhood they aggregate over is no longer M by M, it is now 1 by M. So let's recap where we've gone: if this is the full input image and you want to decide where to attend, a classic convolutional neural network attends to some sub-part — that's convolution; a pure attention mechanism attends to everything — that's attention; then what other people were doing was reverting this attention back to a sub-part, this kind of neighborhood attention — but that still costs O of M squared, because of the attention mechanism over the M-by-M neighborhood. Now what we are doing is going even lower: with axial attention we actually go 1 by M, and in the next layer 1 by M in the other direction, and we get that two-step property. And because it's so cheap now — it's O of M to compute this — we might as well make M as long as the row itself. So their last step is to say: okay, we have 1 by M, and M is going to be the row itself. You can see right here they state that axial attention reduces the complexity to H times W times M. This enables a global receptive field, which is achieved by setting the span M directly to the whole input feature map; optionally, one could also use a fixed M value in order to reduce the memory footprint on huge feature maps, which is something they do later on ImageNet, I believe. So when they have big inputs or big outputs, they actually do use a smaller M.
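In code, continuing the sketch from before (again an assumed structure for illustration, not the authors' implementation), axial attention is just the 1D position-sensitive attention applied once per column along the height axis and once per row along the width axis, with the span M equal to the full height or width:

```python
def axial_attention(img, params_h, params_w):
    """img: (H, W, d) feature map.
    params_h / params_w: (Wq, Wk, Wv, r_q, r_k, r_v) tuples for the height
    and width passes; their r_* tables need sizes 2H-1 and 2W-1."""
    H, W, d = img.shape
    # Height axis: each of the W columns is an independent length-H sequence.
    img = np.stack([position_sensitive_attention(img[:, j], *params_h)
                    for j in range(W)], axis=1)
    # Width axis: each of the H rows is an independent length-W sequence.
    img = np.stack([position_sensitive_attention(img[i], *params_w)
                    for i in range(H)], axis=0)
    return img
```

The loops make the cost structure obvious: each pass is W (or H) independent attention problems of length H (or W), which is the H times W times M complexity from the quote, instead of (H times W) squared for full 2D attention.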
Now, what you can see right here is that I wasn't really correct when I said it's now O of M, because you still have the entire query space: you multiply queries by keys, and even if you make the keys 1 by M, yes, you definitely reduce the cost from height-times-width squared. But look at this term: if you take the row pattern and replace M by the width, you have width squared, so the square appears again. However, it's smaller than the original attention: the original attention was H squared times W squared, because H times W is the image and you need that squared for the attention mechanism; now we've basically removed one of those factors. It is still an attention mechanism, so there's still attention going on, but we've basically reduced the image to one column, and that one column is still attention — this reduces to the attention you see in a single sequence. If you see the image as one long stretch of pixels, what this does is simply subdivide it into neighborhoods. So we're back to neighborhoods, basically, but we shift the neighborhoods from layer to layer: in the next layer the neighborhoods are alternating, this neighborhood connected to that neighborhood connected to that neighborhood — I hope this makes sense. If you were to do this with convolutions, it would be one layer of neighborhood convolution and then one layer of convolution with giant holes in it — I think they're called atrous convolutions — which is exactly the anti-pattern of the neighborhood convolution from before. That's what this is. So, their axial attention block replaces the ResNet block. If you know ResNet — I've made a video on the ResNet paper — ResNet basically takes the input, pipes it through straight, and adds to it whatever comes out of this operation; that's a residual block. Usually this operation would be convolutions, and they are now replaced by multi-head axial attention: there is a multi-head attention along the height and a multi-head attention along the width, and that gives us the property that every node can send information to every other node in two steps. I don't like the fact that there are only two, because I guess this gives a significant bias to one or the other direction, depending on the order you do them in. If I had done this, I maybe would have used three of them, because it depends on how you want to aggregate information: here you train the network specifically to aggregate information first in this direction and then in that direction, which might work, and it will still give you the ability to send information anywhere. Then again, maybe they actually tried it and it performed the same, so I just might have a dumb suggestion right here.
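To round off the sketch, here is what that residual block could look like, with a tiny smoke test. Note that the paper's actual block also contains convolutions around the attention and multiple heads, so this is a simplification under my own assumptions:

```python
def axial_block(img, params_h, params_w):
    # Residual connection around (height attention -> width attention),
    # standing in for the convolutions of a standard ResNet block.
    return img + axial_attention(img, params_h, params_w)

# Smoke test with random weights: shapes only, no training.
H, W, d = 8, 8, 16
rng = np.random.default_rng(0)
make = lambda n: tuple([0.1 * rng.normal(size=(d, d)) for _ in range(3)]
                       + [0.1 * rng.normal(size=(2 * n - 1, d)) for _ in range(3)])
img = rng.normal(size=(H, W, d))
out = axial_block(img, make(H), make(W))
assert out.shape == img.shape
```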
In any case — we've come a long way, through neighborhoods and so on — ultimately you take a ResNet, replace the convolutions with the height-axis attention and the width-axis attention, and you're good. And then we come to the results. So that's it: you have these positional embeddings, you have the axial attention, and it turns out that on ImageNet they perform fairly well. You can see that a model like a ResNet-50 will get 76.9 on ImageNet, which is not state of the art, but it's also not bad — ResNet-50 is a pretty good model — and the full axial attention achieves 78.1, also not state of the art but still pretty good, and, as they say, the best fully attentional, or standalone attention, model on ImageNet. Where this model really shines is where you really have to make long-range connections between pixels, and that's these kinds of segmentation tasks. I want to skip the tables right here — they're the best at everything — and go to the appendix, where they have some examples of this. Here you can see the original image, the ground truth, and the differences between their model, Axial-DeepLab, and Panoptic-DeepLab, which is a baseline for them. The failure cases shown here illustrate how Axial-DeepLab is better. I don't know if they are cherry-picked or not, but at least you can see that in some cases it handles occlusions better and it handles instances better. Here you see that the ground truth separates the person from the tie, and the axial attention is able to do this, but the baseline is not able to do it correctly, because it also labels part of that white shirt as tie — and you can see why: there's a kind of delimiter line here and here. But if you have long-range dependencies in the model, the model will recognize: wait, that must be the same thing as this thing here and this thing here, so it must all be the same object; it's simply that the shirt was occluded by the tie, goes beneath it and appears again. It's not part of the tie, and it's not part of a different object — it's actually part of the shirt. You can see the long-range attention at work in these examples. Sometimes — okay, this might not be an instance of super-duper long-range dependencies, this might simply be where the model performs better — you can see the ground truth has that surfboard segmented and the baseline does not. This can also just be, you know — there are a lot of tricks to make this work, of course, and you throw a lot of compute at it, and sometimes you get better numbers, or part of the better numbers, simply because of the additional compute. What else do we have? Occlusions: it appears to handle occlusions in a better way, and this might be due to the axial attention, it might be due to the positional embeddings, but you can see that the ground truth here has the laptop between the person's hands segmented; the baseline cannot do that, but the axial attention does — and I don't know what this other thing is, honestly. You can see, though, that the axial attention also misses the fact that it should segment this region in the background. You can see this occlusion handling best in the example where the person in the back reappears on both sides of the person in front: the axial attention manages to segment that, whereas the baseline just produces a mutant person. The ground truth is equally shaky — I think there might be some ambiguity in how you can segment these images, obviously — but you can see that the long-range dependencies probably helped here, saying: wait, in this image there's this white stuff right here and this white stuff right there.
Connecting those two regions with attention probably helped in segmenting them as the same object, even though there is a break in the object: at no point is the segment on the left touching the segment on the right, and still the model manages to put both into the same label category. There is one last thing, where they want to investigate what their heads learn, and you can usually do this — you can visualize what the attention heads learn. In this case, for the column heads, the way you have to read this is that a particular head aggregates information from its column, so everywhere it lights up, a lot of information is being routed. You can see specifically in this head that the heads of the persons in the picture light up fairly well: this head is probably aggregating a lot of information from this position, and this head from that position, so you can deduce that this particular attention head probably deals with people's faces, whereas that other attention head — you can see the attention is mostly on the grass — probably deals with the field. And the same goes for the row heads. Their description is that "we notice that column head one corresponds to human heads while column head four correlates with the field only", which, you know, you can interpret that way; this seemed pretty clear. But then they say something like row head six focuses on relatively local regions, whereas column head five pools over the whole image. Row head six is this one right here, and you can see that, okay, maybe it focuses on small regions, though — what, like here? You can get that this is a person, but in other places I don't know. And column head five pools over the whole image — I don't know. Maybe they just needed something more to say, because they put these pictures here and were like: okay, the column heads are really nice — this one pays attention to the people and that one pays attention to the field — but we can't really show the column head attention without also showing the row head attention, and then none of the row heads is really super distinctive on a particular thing in the image. So they needed to come up with something they could say, and then you look at this one, there's not a lot of attention, so they need to contrast it with something; you would think they'd contrast it with another row head, but there's no row head that pools over the whole image, so it becomes column head five. Yeah, I suspect there's a bit of tactical writing going on here. I mean, it's still doing something cool, but there's definitely an element of sales when you write research papers — and not just in this one. Also, just props for drawing the grid lines in front of the histograms; it makes it so much easier to read how big the bars are. Why does everyone put the lines behind the histogram? I probably do that myself, and now I'm realizing how much easier this is. Alright, there is a big experimental section and a big appendix where you can read up on all the different numbers, comparisons, ablations and whatnot.
Ultimately, I just wanted to go over the method and put it into context with other things: with stuff like Big Bird, with axial attention and other positional encodings, with how it relates to convolutions, how it relates to feed-forward networks, and what convolutions did to feed-forward networks, and so on. I hope you have gained at least a little bit of an understanding of what's going on here, and with that said, I'll see you next time. Bye bye.
here well that's not possible if you do this kind of" }, { "start": 2099.64, "end": 2104.52, "text": " neighborhood attention as we've done here if I do neighborhood attention then" }, { "start": 2104.52, "end": 2110.68, "text": " at most right because the neighborhood is three long at most this node right" }, { "start": 2110.68, "end": 2115.12, "text": " here can aggregate information from this node and then again it's three long in" }, { "start": 2115.12, "end": 2120.2, "text": " the next step so now this node can aggregate information from this node okay" }, { "start": 2120.2, "end": 2125.12, "text": " because the in the neighborhood is three long and you can only attend to within" }, { "start": 2125.12, "end": 2131.48, "text": " your neighborhood this means that if I want to send information to something" }, { "start": 2131.48, "end": 2142.56, "text": " that's really far away I need to I need to go many many layers right I need to" }, { "start": 2142.56, "end": 2146.7999999999997, "text": " go layer layer layer layer and this has been well known this has already been a" }, { "start": 2146.8, "end": 2151.0800000000004, "text": " like a problem this has already been a property of convolutional neural" }, { "start": 2151.0800000000004, "end": 2155.88, "text": " networks so convolutions specifically traded off the fully connectedness of" }, { "start": 2155.88, "end": 2161.5600000000004, "text": " fully connected layers to local connections convolutions but that means" }, { "start": 2161.5600000000004, "end": 2166.28, "text": " that you have to go very deep in order to make long-range connections you can't" }, { "start": 2166.28, "end": 2170.84, "text": " just make them in one step the same problem right here that is paper Big" }, { "start": 2170.84, "end": 2175.42, "text": " Bird argued that if you have random connections instead of neighborhood" }, { "start": 2175.42, "end": 2183.12, "text": " connections just the property of random graphs mean that you you are pretty fast" }, { "start": 2183.12, "end": 2190.96, "text": " in sending information around so because in a random graph of size n you on" }, { "start": 2190.96, "end": 2198, "text": " average all two nodes are connected by path lengths of log n this is much" }, { "start": 2198, "end": 2203.6, "text": " faster because in this neighborhood thing two nodes are connected in a path" }, { "start": 2203.6, "end": 2208.3199999999997, "text": " length of order of n right you can you can pretty easily see that if I make the" }, { "start": 2208.3199999999997, "end": 2214.16, "text": " sequence longer I need that many more steps in order to send it around in fact" }, { "start": 2214.16, "end": 2219.3199999999997, "text": " it's like something like n divided by m this neighborhood size in a random graph" }, { "start": 2219.3199999999997, "end": 2225.44, "text": " it's log n and in this axial attention that's why I introduced it it's 2 okay" }, { "start": 2225.44, "end": 2237.2400000000002, "text": " every every two nodes are connected by two steps if if node if this node right" }, { "start": 2237.2400000000002, "end": 2242.2000000000003, "text": " here needs to send information to this node right here in a classic attention" }, { "start": 2242.2000000000003, "end": 2245.28, "text": " mechanism you could do some one step because every pixel attends to every" }, { "start": 2245.28, "end": 2254.36, "text": " other pixel however right now we have to we have to see so this node attends in" }, { "start": 2254.36, "end": 2262.48, "text": " this layer 
sorry I have to think so how do we send information between the two" }, { "start": 2262.48, "end": 2267.7200000000003, "text": " we select this node right here in the first layer this node pays attention to" }, { "start": 2267.7200000000003, "end": 2273.8, "text": " this row okay which includes the red dot so the red dot can send information to" }, { "start": 2273.8, "end": 2282.2400000000002, "text": " the X in this layer in the next layer we select this node right here which is our" }, { "start": 2282.24, "end": 2288.2, "text": " target node where the information should go to it pays attention to all of this" }, { "start": 2288.2, "end": 2295.08, "text": " column which includes that X that before right this this X right here where we" }, { "start": 2295.08, "end": 2300.12, "text": " send information to so it takes two layers two steps to send information" }, { "start": 2300.12, "end": 2306.68, "text": " from any node to any other node well that's pretty good so this axial" }, { "start": 2306.68, "end": 2311.8799999999997, "text": " attention if you stack them on top of each other you sacrifice a little bit of" }, { "start": 2311.88, "end": 2319.1600000000003, "text": " of being able to send information from anywhere to anywhere for the pleasure of" }, { "start": 2319.1600000000003, "end": 2323.4, "text": " not having this quadratic attention anymore as you can see your attention" }, { "start": 2323.4, "end": 2330.44, "text": " mechanism is now as long or as big as your column or is wide or your row is" }, { "start": 2330.44, "end": 2337.88, "text": " high again this isn't this isn't specific to rows or columns you could do" }, { "start": 2337.88, "end": 2343.36, "text": " this as I said with these kind of diagonals you could do it with any other" }, { "start": 2343.36, "end": 2350.48, "text": " sort of sub pattern where you can sort of guarantee that the overlap between" }, { "start": 2350.48, "end": 2355.6400000000003, "text": " the layers is enough so you can send information around pretty efficiently and" }, { "start": 2355.6400000000003, "end": 2362.6400000000003, "text": " they use this right here so this axial attention you can see the formula is" }, { "start": 2362.64, "end": 2368, "text": " exactly the same the only change from before is this part right here you can" }, { "start": 2368, "end": 2373.92, "text": " see that the neighborhood that they aggregate over is no longer M by M it is" }, { "start": 2373.92, "end": 2384.96, "text": " now 1 by M so we've seen them going from if this is the the full input image and" }, { "start": 2384.96, "end": 2391.6, "text": " you want to you want to see where to attend what this paper does is it says a" }, { "start": 2391.6, "end": 2399.16, "text": " classic sorry a convolutional neural network would be attending to some sub" }, { "start": 2399.16, "end": 2405.4, "text": " part right this is convolution an attention mechanism pure attention" }, { "start": 2405.4, "end": 2412.7599999999998, "text": " would attend to everything but this is attention then what we are doing sorry" }, { "start": 2412.7599999999998, "end": 2420.44, "text": " that was a mistake what other people were doing were reverting back this" }, { "start": 2420.44, "end": 2427.88, "text": " attention to a sub part this kind of neighborhood attention okay but that was" }, { "start": 2427.88, "end": 2432.8, "text": " still you know you still have M squared you still have O of M squared because of" }, { "start": 2432.8, "end": 2439, "text": " the attention mechanism now what we 
are doing is we are going even lower we're" }, { "start": 2439, "end": 2449.16, "text": " actually going 1 by M okay this this is with with axial attention so in general" }, { "start": 2449.16, "end": 2454.7599999999998, "text": " it's 1 by M and then in the next layer we can go 1 by M in this direction and" }, { "start": 2454.7599999999998, "end": 2462.04, "text": " have that property and because it's so cheap now right because it's now O of M" }, { "start": 2462.04, "end": 2468.24, "text": " to compute this we might as well make M as long as the row itself okay so their" }, { "start": 2468.24, "end": 2474.04, "text": " last step is going to be to say okay we have 1 by M right here and that's going" }, { "start": 2474.04, "end": 2485.24, "text": " to be the row itself now you can see right here that they say axial attention" }, { "start": 2485.24, "end": 2490.62, "text": " reduces the complexity to HWM this enables global receptive field which is" }, { "start": 2490.62, "end": 2495.56, "text": " achieved by setting the span M directly to the whole input features optionally" }, { "start": 2495.56, "end": 2501.08, "text": " one could also use a fixed M value in order to reduce memory footprint on huge" }, { "start": 2501.08, "end": 2505.36, "text": " feature apps which is something that they're going to do later on ImageNet I" }, { "start": 2505.36, "end": 2509.36, "text": " believe so when they have big inputs or big outputs they actually do use a" }, { "start": 2509.36, "end": 2514.52, "text": " smaller M what you can see right here is that I wasn't really that wasn't really" }, { "start": 2514.52, "end": 2521.56, "text": " correct of me to say that it's now O of M because you you still have the entire" }, { "start": 2521.56, "end": 2532.24, "text": " query space so you multiply query by by keys now even if you make the keys to be" }, { "start": 2532.24, "end": 2540, "text": " 1 by M yes you reduce definitely you reduce this from height times width to" }, { "start": 2540, "end": 2547.68, "text": " times height times width to this but then you can see this thing right here if" }, { "start": 2547.68, "end": 2553.9199999999996, "text": " you take it and let's say we have this kind of row pattern and we replace M by" }, { "start": 2553.9199999999996, "end": 2560.16, "text": " the width then we have width squared so again the square appears however it's" }, { "start": 2560.16, "end": 2565.16, "text": " smaller than the original attention the original attention was H squared W" }, { "start": 2565.16, "end": 2571.24, "text": " squared right because HW is the image and you need that squared in order to" }, { "start": 2571.24, "end": 2575.12, "text": " do the attention mechanism now we've basically reduced one of the factors it" }, { "start": 2575.12, "end": 2581.12, "text": " is still an attention mechanism so there's still attention going but we've" }, { "start": 2581.12, "end": 2588.08, "text": " basically transformed the the image we've reduced it to one column now the" }, { "start": 2588.08, "end": 2594.3599999999997, "text": " one column is still attention so this is still attention like here so this now" }, { "start": 2594.3599999999997, "end": 2601.98, "text": " reduces to the attention that you see in a in a single sequence okay if you see" }, { "start": 2601.98, "end": 2609.96, "text": " the image as a long stretch of pixels what this does is basically it's up it" }, { "start": 2609.96, "end": 2613.4, "text": " simply subdivides that into neighborhoods so we're back to" }, { "start": 
2613.4, "end": 2621.16, "text": " neighborhoods basically but we shift the neighborhoods from layer to layer so in" }, { "start": 2621.16, "end": 2625.56, "text": " the next layer the neighborhoods are going to be just alternating right the" }, { "start": 2625.56, "end": 2628.52, "text": " neighborhoods is going to be this is one neighborhood connected to this" }, { "start": 2628.52, "end": 2636.4, "text": " neighborhood connected to this neighborhood I hope this makes sense so" }, { "start": 2636.4, "end": 2642.6, "text": " it's going to be it's basically a mix between if you if you if you were to do" }, { "start": 2642.6, "end": 2647.08, "text": " this in convolution you could do one layer where it's neighborhood convolution" }, { "start": 2647.08, "end": 2651.7599999999998, "text": " and then one layer where it's like convolution with holes in it I think" }, { "start": 2651.7599999999998, "end": 2655.32, "text": " they're called atras convolutions or something like this with like giant" }, { "start": 2655.32, "end": 2660.4, "text": " holes in it that are exact is exactly the anti pattern of the neighborhood" }, { "start": 2660.4, "end": 2668.1600000000003, "text": " convolution from before that's what this is so you see their axial attention" }, { "start": 2668.1600000000003, "end": 2673.6800000000003, "text": " block right here their axial attention block replaces the ResNet block so if" }, { "start": 2673.6800000000003, "end": 2679.6000000000004, "text": " you know ResNet I've done a paper on ResNet ResNet basically takes the input" }, { "start": 2679.6000000000004, "end": 2685, "text": " pipes it through straight and adds to it whatever comes out of this" }, { "start": 2685, "end": 2690.56, "text": " operation okay that's a residual block now usually this thing here would be" }, { "start": 2690.56, "end": 2697.44, "text": " convolutions and convolutions and they are now replaced by these multi head" }, { "start": 2697.44, "end": 2702.88, "text": " axial attention you can see there is a multi head attention in the height and" }, { "start": 2702.88, "end": 2707.76, "text": " there is a multi head attention in the width and that gives us the property" }, { "start": 2707.76, "end": 2711.78, "text": " that every node can send around information to every other node in two" }, { "start": 2711.78, "end": 2719.6000000000004, "text": " steps I don't like the fact that there is only two because what this I guess" }, { "start": 2719.6000000000004, "end": 2724.6000000000004, "text": " this gives a significant bias to one or the other direction depending on the" }, { "start": 2724.6000000000004, "end": 2730.44, "text": " order that you do them in if if I had done this I maybe would have used three" }, { "start": 2730.44, "end": 2735.92, "text": " of them because it depends on how you want to aggregate information right like" }, { "start": 2735.92, "end": 2739.92, "text": " here you train the network specifically to aggregate information first in this" }, { "start": 2739.92, "end": 2743.2400000000002, "text": " direction and then in this direction which might work and it'll give you that" }, { "start": 2743.2400000000002, "end": 2749.64, "text": " sending around information anywhere so maybe they've actually tried and it just" }, { "start": 2749.64, "end": 2755.96, "text": " performed the same so I just might have a dumb suggestion right here in any case" }, { "start": 2755.96, "end": 2760.64, "text": " they simply replace in we've come a long way right we've gone to like" }, { "start": 2760.64, 
"end": 2765.2400000000002, "text": " neighborhoods and blah blah blah blah ultimately take a ResNet place the" }, { "start": 2765.24, "end": 2769.8799999999997, "text": " convolutions with the height axis attention and the width axis attention" }, { "start": 2769.8799999999997, "end": 2774.8799999999997, "text": " and we're good and then we come to results so that's it you have these" }, { "start": 2774.8799999999997, "end": 2780.4799999999996, "text": " positional embeddings you have the axial attention and it turns out that on" }, { "start": 2780.4799999999996, "end": 2788.8399999999997, "text": " ImageNet they perform fairly fairly well so you can see that models like a" }, { "start": 2788.8399999999997, "end": 2794.56, "text": " ResNet 50 model will get a 76.9 on ImageNet which is not state-of-the-art" }, { "start": 2794.56, "end": 2801.56, "text": " but it's also not it's not bad right the ResNet 50 is pretty good model you can" }, { "start": 2801.56, "end": 2808.2, "text": " see the full axial attention right here achieves a 78.1 also not state-of-the-art" }, { "start": 2808.2, "end": 2815.6, "text": " but still pretty good and as they say it's the best fully attentional model on" }, { "start": 2815.6, "end": 2823.6, "text": " ImageNet or standalone attention model on ImageNet so where this model really" }, { "start": 2823.6, "end": 2829.12, "text": " shines is where you really have to make long-range connections between pixels" }, { "start": 2829.12, "end": 2835.2, "text": " and that's these kind of segmentation tasks and I want to skip the tables" }, { "start": 2835.2, "end": 2839.92, "text": " right here they're best and everything and go to the appendix where they have" }, { "start": 2839.92, "end": 2847.04, "text": " some examples of this so here you can see specifically this is the original" }, { "start": 2847.04, "end": 2852.04, "text": " image you have a ground truth and you have the differences between their model" }, { "start": 2852.04, "end": 2858.48, "text": " this axial deep lab and the panoptic deep lab that is a baseline for them and" }, { "start": 2858.48, "end": 2867.8, "text": " you can see that the the failure cases here are are pretty you know show how" }, { "start": 2867.8, "end": 2874.2, "text": " show how the axial deep lab is better I don't know if they are cherry-picked or" }, { "start": 2874.2, "end": 2880.08, "text": " not but at least you can see that at some point so it handles occlusions" }, { "start": 2880.08, "end": 2884.04, "text": " better it handles instances better so here you see that the ground truth" }, { "start": 2884.04, "end": 2891.7599999999998, "text": " separates the person from the tie and the axial attention is able to do this" }, { "start": 2891.7599999999998, "end": 2898.2799999999997, "text": " but the the baseline is not able to do this correctly because it labels part of" }, { "start": 2898.2799999999997, "end": 2903.88, "text": " that white shirt also as and you can see why there's kind of a delimiter line here" }, { "start": 2903.88, "end": 2908.4, "text": " here here here but if you have long-range dependencies right if you" }, { "start": 2908.4, "end": 2912.88, "text": " have long-range dependencies in the model the model will recognize wait wait" }, { "start": 2912.88, "end": 2917.48, "text": " that's that must be the same thing as this thing here and this thing here and" }, { "start": 2917.48, "end": 2922.2400000000002, "text": " this thing here so that must be the same object it's simply that the shirt was" }, { "start": 
2922.2400000000002, "end": 2928.48, "text": " occluded by the tie and goes beneath it and now appears again it's not a" }, { "start": 2928.48, "end": 2933.92, "text": " different it's not part of the tie and it's not part of the of a different" }, { "start": 2933.92, "end": 2940.16, "text": " object it's actually part of the shirt so the long-range attention you can see" }, { "start": 2940.16, "end": 2948.44, "text": " at these examples sometimes here okay this might not be an instance of super" }, { "start": 2948.44, "end": 2952.88, "text": " duper long-range dependencies this is simply where the model performs better" }, { "start": 2952.88, "end": 2957.2000000000003, "text": " so you can see here the ground truth has that surfboard segmented and the" }, { "start": 2957.2000000000003, "end": 2963.76, "text": " baseline does not that this can also just be you know there are a lot of" }, { "start": 2963.76, "end": 2968.28, "text": " tricks to make this work of course and you throw a lot of compute at it and" }, { "start": 2968.28, "end": 2972.2400000000002, "text": " sometimes you just get better numbers or part of the better numbers because of" }, { "start": 2972.2400000000002, "end": 2979.6400000000003, "text": " the additional compute right here what do we have so you can see occlusions it" }, { "start": 2979.6400000000003, "end": 2986.6800000000003, "text": " appears to handle occlusions in a better way and this might be due to this axial" }, { "start": 2986.6800000000003, "end": 2990.6400000000003, "text": " attention it might be due to the positional embeddings but you can see" }, { "start": 2990.64, "end": 2996.04, "text": " that the ground truth here has the laptop between the person's hands" }, { "start": 2996.04, "end": 3001.7999999999997, "text": " segmented the baseline cannot do that but the axial attention does do that and" }, { "start": 3001.7999999999997, "end": 3008.56, "text": " I don't know what this is honestly this is you can you can see though the axial" }, { "start": 3008.56, "end": 3012.3599999999997, "text": " attention also misses the fact that it should segment this in the background" }, { "start": 3012.3599999999997, "end": 3019.96, "text": " and if this occlusion handling you can see best in this example where the" }, { "start": 3019.96, "end": 3026.7200000000003, "text": " person in the back reappears on both sides of that person so you can see that" }, { "start": 3026.7200000000003, "end": 3033.16, "text": " the axial attention manages to segment that where that is just a mutant person" }, { "start": 3033.16, "end": 3038.56, "text": " right here the ground truth is equally shaky I think there is might be some" }, { "start": 3038.56, "end": 3043.56, "text": " ambiguity of how you can segment these images obviously but you can see the" }, { "start": 3043.56, "end": 3047.88, "text": " fact that there are long-range dependencies probably helped with this" }, { "start": 3047.88, "end": 3053.08, "text": " saying that wait in this image there's this white stuff right here and there's" }, { "start": 3053.08, "end": 3058.52, "text": " this white stuff right here and connecting these two regions with" }, { "start": 3058.52, "end": 3064.36, "text": " attention probably helped in segmenting these to be the same object even though" }, { "start": 3064.36, "end": 3070.52, "text": " you can see there is a break in the object so there is a break no at no" }, { "start": 3070.52, "end": 3076.26, "text": " point is the object on the left touching or the segment on the left 
touching the" }, { "start": 3076.26, "end": 3082.7200000000003, "text": " segment on the right and still the model manages to put those into the same label" }, { "start": 3082.7200000000003, "end": 3092.0800000000004, "text": " category there is the last last thing where they they want to research what" }, { "start": 3092.0800000000004, "end": 3097.6800000000003, "text": " their heads learn and usually you can do this right you can kind of visualize" }, { "start": 3097.6800000000003, "end": 3102.48, "text": " what the attention heads learn so in this case right here in the column heads" }, { "start": 3102.48, "end": 3108.88, "text": " the way you have to read this is that this particular head right here aggregates" }, { "start": 3108.88, "end": 3113.36, "text": " information from its column so everywhere where it lights up it there's" }, { "start": 3113.36, "end": 3118.72, "text": " a lot of information being routed you can see specifically in this here the" }, { "start": 3118.72, "end": 3124.4, "text": " heads of the people or the heads of the persons in the picture light up fairly" }, { "start": 3124.4, "end": 3129.8, "text": " well so for example this head right here is probably aggregating information a" }, { "start": 3129.8, "end": 3135.8, "text": " lot from this position right here and this head here is aggregating information" }, { "start": 3135.8, "end": 3141.76, "text": " from this position so you can deduce that that particular attention head" }, { "start": 3141.76, "end": 3147.6400000000003, "text": " probably deals with people's faces whereas that particular attention head" }, { "start": 3147.6400000000003, "end": 3154.5600000000004, "text": " probably deals you can see the attention is mostly on the grass right here and" }, { "start": 3154.56, "end": 3161.48, "text": " you can see the same with the for the row heads now their description here is" }, { "start": 3161.48, "end": 3165.92, "text": " that we notice that column head one corresponds to human heads while column" }, { "start": 3165.92, "end": 3170.04, "text": " head four course correlates with the field only which you know you can" }, { "start": 3170.04, "end": 3174.24, "text": " interpret it as this this seemed pretty clear but then they say something like" }, { "start": 3174.24, "end": 3180.7599999999998, "text": " row head six focuses on relatively large relatively local regions where column" }, { "start": 3180.76, "end": 3186.48, "text": " head five pools all over the image so row head six which is this thing right" }, { "start": 3186.48, "end": 3194.1600000000003, "text": " here you can see that okay it maybe focuses on small regions though you can" }, { "start": 3194.1600000000003, "end": 3201.28, "text": " see okay what like here you can get it that's a person but in other places I" }, { "start": 3201.28, "end": 3207.2000000000003, "text": " don't know where column head five pools over the whole image and this I don't" }, { "start": 3207.2, "end": 3211.24, "text": " know maybe they just needed something more to say because they put these" }, { "start": 3211.24, "end": 3215.6, "text": " pictures here they were like okay the the column heads are really nice because" }, { "start": 3215.6, "end": 3219.7599999999998, "text": " we couldn't like these this one's really nice because it you know just pays" }, { "start": 3219.7599999999998, "end": 3222.8399999999997, "text": " attention to the people and this one looks really nice because it pays" }, { "start": 3222.8399999999997, "end": 3227.16, "text": " attention to the 
field and but we can't really put the column head attention" }, { "start": 3227.16, "end": 3232.12, "text": " without putting the row head attention but then none of the row heads really" }, { "start": 3232.12, "end": 3237.56, "text": " are like super distinctive on a particular thing in the image so we need" }, { "start": 3237.56, "end": 3241.4, "text": " to come up with something that we can say and then you look at this one this" }, { "start": 3241.4, "end": 3245.88, "text": " is there's not a lot of attention so we need to contrast this with something" }, { "start": 3245.88, "end": 3251.52, "text": " then you would think that they contrast it with another row head but then there's" }, { "start": 3251.52, "end": 3257.12, "text": " no row head that does this whole image so there's like a column at five yeah" }, { "start": 3257.12, "end": 3262.96, "text": " I'm not sure if there's there's a bit of there is a bit of tactical writing going" }, { "start": 3262.96, "end": 3269.8399999999997, "text": " on here I suspect I mean still you know it's doing something cool but yeah" }, { "start": 3269.8399999999997, "end": 3275.12, "text": " there's there's definitely an element of sales in when you do when you do" }, { "start": 3275.12, "end": 3282.3199999999997, "text": " where I research papers and just not to this data but just props to the lines in" }, { "start": 3282.32, "end": 3288.1600000000003, "text": " front of the histograms makes it so much easier to read how big the stupid bars" }, { "start": 3288.1600000000003, "end": 3292.6400000000003, "text": " are why does everyone put the lines behind the histogram I probably do that" }, { "start": 3292.6400000000003, "end": 3298.44, "text": " myself and now I'm just I'm realizing how much easier that is alright there is" }, { "start": 3298.44, "end": 3303, "text": " a big big big experimental section right here and there's a big appendix where" }, { "start": 3303, "end": 3309.28, "text": " you can read up all of the different numbers comparisons ablations whatnot" }, { "start": 3309.28, "end": 3315, "text": " ultimately I just wanted to go over the method basically putting this into" }, { "start": 3315, "end": 3319.36, "text": " context with other things like putting this into context with stuff like Big" }, { "start": 3319.36, "end": 3324.96, "text": " Bird axial attention other positional encodings how it how it relates to" }, { "start": 3324.96, "end": 3328.96, "text": " convolutions how it relates to feed-forward networks and what convolutions" }, { "start": 3328.96, "end": 3335.6400000000003, "text": " did to feed-forward networks and so on I hope you have at least a little bit" }, { "start": 3335.64, "end": 3341, "text": " gained an understanding of what's going on here and with that said I see you" }, { "start": 3341, "end": 3368.52, "text": " next time bye bye" } ]
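The segments above spend a while on learned positional embeddings and on row-then-column axial attention, so a compact sketch may help make both concrete. The following is an illustrative PyTorch re-implementation, not the code of the paper under discussion: the class and function names are hypothetical, it is single-headed, and it only adds a learned relative-position bias to the attention logits, whereas the paper adds separate positional embeddings to queries, keys and values.

```python
import torch
import torch.nn as nn

class AxialAttention1D(nn.Module):
    """Single-head self-attention along one axis (a row or a column).

    Per-position cost is O(M) for span M, instead of O(H*W) for full
    2-D self-attention over the whole image.
    """
    def __init__(self, dim, span):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        # Learned relative positional term, so the (otherwise
        # permutation-invariant) attention knows where information
        # comes from.
        self.pos_bias = nn.Parameter(torch.zeros(2 * span - 1))
        self.span = span
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (batch, span, dim)
        b, m, d = x.shape                    # assumes m == self.span
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = torch.einsum("bmd,bnd->bmn", q, k) * self.scale
        rel = torch.arange(m, device=x.device)
        rel = rel[:, None] - rel[None, :]    # relative offsets i - j
        logits = logits + self.pos_bias[rel + self.span - 1]
        return torch.einsum("bmn,bnd->bmd", logits.softmax(-1), v)

def axial_block(x, row_attn, col_attn):
    """Row attention, then column attention: any two pixels can
    exchange information within two such steps."""
    b, h, w, d = x.shape
    x = row_attn(x.reshape(b * h, w, d)).reshape(b, h, w, d)
    x = x.transpose(1, 2)                    # (b, w, h, d)
    x = col_attn(x.reshape(b * w, h, d)).reshape(b, w, h, d)
    return x.transpose(1, 2)
```

Setting the span to the full row or column length gives the global receptive field discussed in the video, at roughly O(H·W·(H+W)) cost rather than the O(H²·W²) of full 2-D attention.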
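The path-length argument from the segments above (neighborhood attention needs on the order of n/m layers to connect distant positions, full attention needs one, random connections as in Big Bird need about log n on average, and axial attention needs two steps in 2-D) can be spot-checked for the two simplest cases with a toy breadth-first search. This is my own illustrative construction, not anything from the paper:

```python
from collections import deque

def hops(n, neighbors):
    """BFS distance from node 0 to node n-1 under a connectivity rule."""
    dist, queue = {0: 0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in neighbors(u):
            if 0 <= v < n and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist.get(n - 1)

n, m = 64, 3  # sequence length, neighborhood radius

# Neighborhood attention: each position sees only +/- m neighbors,
# so crossing the sequence takes ~ n / m layers.
print(hops(n, lambda u: range(u - m, u + m + 1)))  # 21

# Full attention: everything attends to everything, one step suffices.
print(hops(n, lambda u: range(n)))                 # 1
```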
LB4B5FYvtdI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Predictive Coding Approximates Backprop along Arbitrary Computation Graphs (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "backpropagation", "computation", "autograph", "tensorflow", "pytorch", "torch", "autodiff", "differentiation", "backprop", "biologically plausible", "neurons", "error signal", "predictive coding", "variational", "gaussian", "iterative", "local updates", "distributed", "inner loop", "brain", "neuroscience", "deep neural networks", "analyzed", "hand drawing", "cnn", "rnn", "lstm", "convolutional neural network", "recurrent neural network", "hebian" ]
#ai #biology #neuroscience Backpropagation is the workhorse of modern deep learning and a core component of most frameworks, but it has long been known that it is not biologically plausible, driving a divide between neuroscience and machine learning. This paper shows that Predictive Coding, a much more biologically plausible algorithm, can approximate Backpropagation for any computation graph, which they verify experimentally by building and training CNNs and LSTMs using Predictive Coding. This suggests that the brain and deep neural networks could be much more similar than previously believed. OUTLINE: 0:00 - Intro & Overview 3:00 - Backpropagation & Biology 7:40 - Experimental Results 8:40 - Predictive Coding 29:00 - Pseudocode 32:10 - Predictive Coding approximates Backprop 35:00 - Hebbian Updates 36:35 - Code Walkthrough 46:30 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.04182 Code: https://github.com/BerenMillidge/PredictiveCodingBackprop Abstract: Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. However, backprop is often criticised for lacking biological plausibility. Recently, it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical computation which relies only on local and Hebbian updates. The power of backprop, however, lies not in its instantiation in MLPs, but rather in the concept of automatic differentiation which allows for the optimisation of any differentiable program expressed as a computation graph. Here, we demonstrate that predictive coding converges asymptotically (and in practice rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rules. We apply this result to develop a straightforward strategy to translate core machine learning architectures into their predictive coding equivalents. We construct predictive coding CNNs, RNNs, and the more complex LSTMs, which include a non-layer-like branching internal graph structure and multiplicative interactions. Our models perform equivalently to backprop on challenging machine learning benchmarks, while utilising only local and (mostly) Hebbian plasticity. Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry, and may also contribute to the development of completely distributed neuromorphic architectures. Authors: Beren Millidge, Alexander Tschantz, Christopher L. Buckley Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, this is an LSTM cell, or the computation graph of an LSTM cell. It is pretty hideous, as you can see, but what I'm about to show you is even more hideous: this is the computation graph of the LSTM cell augmented with error units, evincing the connectivity scheme of the predictive coding algorithm. You may see these little red arrows appearing right here; those are so-called error units. They are necessary for an algorithm called predictive coding, which is a biologically plausible alternative to backprop. That's what we're going to look at today, specifically this paper, as you can see. It is quite a thorough paper. It is called Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs. Have you ever heard a more descriptive title of what's in a paper? The authors are Beren Millidge, Alexander Tschantz, and Christopher L. Buckley. This paper, as the title says, looks at this predictive coding algorithm and shows that it approximates backprop. We'll see that "approximates" is in terms of an inner iteration in the predictive coding algorithm: the more you run that, and under certain assumptions, it approximates the backprop algorithm. The new thing in this paper is "along arbitrary computation graphs". There have been papers before describing predictive coding in various sub-settings, like fully connected layers and so on, and the fact that it approximates backprop there. However, this paper shows that that's actually the case for arbitrary computation graphs, under certain assumptions: predictive coding approximates the backpropagation algorithm. Why is this important? Because the backpropagation algorithm isn't exactly biologically plausible. So they say right here in the abstract: backpropagation of error, or backprop for short, is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently it has been shown that backprop in multilayer perceptrons can be approximated using predictive coding, a biologically plausible process theory of cortical computation which relies solely on local and Hebbian updates. So the difference between backpropagation and predictive coding is exactly this point: predictive coding relies solely on local and Hebbian updates. The keyword, I think, is local. So in a neural network you have some sort of input x, and you ship it through many layers, layer, layer, layer, layer, and then you have an output y hat, and then you compare that output, using some kind of loss function, with the true output that you want, and then there is this backwards phase right here. In this backwards phase you want to derive gradients for each of the layers' weights. So each of these layers has a weight associated with it. I'm not going into Greek letters again, so this is, I don't know, w3, and w2 is here, and so on. What you want to get out is to say: how do I need to change w in order to change my loss for the better? So what you want is this gradient right here, and backpropagation does a very natural decomposition. Namely, if you have these hidden states in here, so x is transformed to hidden state h0, h1, h2, h3, that is the latent representation, and if you want to know how to change, let's say, weight two, the backpropagation algorithm decomposes this into the derivative of the loss with respect to the hidden state at layer two, multiplied by the derivative of the hidden state with respect to the weight.
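Spelled out (using the h and w names from the drawing; the exact indexing is my reconstruction of the narration), the decomposition reads:

```latex
\frac{\partial L}{\partial w_2}
  = \frac{\partial L}{\partial h_2}\,\frac{\partial h_2}{\partial w_2},
\qquad\text{where}\qquad
\frac{\partial L}{\partial h_2}
  = \frac{\partial L}{\partial h_3}\,\frac{\partial h_3}{\partial h_2},
```

so the gradient flows backward through the graph, from L to h3 to h2 to w2.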
So this is what you would sort of learn in a beginner's course on deep learning, this decomposition, and of course the first part right here decomposes further into the derivative of the loss with respect to h3 and then the derivative of h3 with respect to h2. So this is the standard backpropagation algorithm; you can clearly see the computation graph in the formula: it goes from the L, it flows backward to h3, then from h3 it flows to h2, and then from h2 it flows to w2. So that's the flow of the gradient backwards through the network, and that's pretty cool, because it allows us to run gradient descent on arbitrary computation graphs, which is what ultimately enables deep learning, including frameworks like TensorFlow, PyTorch, or the older ones like Theano or Lua Torch, even autograd, things like this. It's pretty cool, but it's not really plausible in the brain, because neurons are not bidirectional like this. Neurons generally, and I'm not a neuroscientist or anything, but these neurons have some sort of soma, and then you have this axon, and this axon goes out to many of these synapses to its children, and it kind of docks onto the somas or onto the dendrites of the other neurons. This is not bidirectional; there is generally a unidirectional signal in this direction, and there are so-called feedback connections from these neurons to the dendrites of this neuron, but you cannot really send this sort of vector gradient information, and you cannot do so in this sort of sweep. So in the brain it's probably not the case that a layer propagates forward and then sort of waits for a synchronized backward pass across the network in order to update itself. All of this needs to happen much more in parallel, much more locally, so that things only consider local information rather than global information. Right here, for example, you need the global gradient in the update of w2, and you need to have that backpropagated; that's not plausible. So predictive coding comes along, and today we'll look mainly at how predictive coding works. Of course, this paper is about extending it to arbitrary computation graphs, which is cool because they do predictive coding for CNNs, RNNs and even LSTMs. Let's first jump into the numerical results. If you look at their numerical results, they have lots of these plots where they basically show: we trained this network with backprop, and then we trained it with predictive coding, and the lines are just the same. So it's pretty convincing evidence, even if you go super duper deep, and they do, I think, RNNs with up to 100 layers, or 100 time steps unrolled. So the empirical evidence that predictive coding approximates backprop is certainly here, and we'll look at what predictive coding is, how it works, and how it works along arbitrary computation graphs. So that's today's paper, and I hope you enjoy it; if you do, don't hesitate to share it out and subscribe. All right, so this graphic right here compares the two algorithms in principle. On top is very much what I've said so far: the backprop algorithm somehow has this signal, it propagates forward, and then at some point there's an output, and if you want to train it there is a label. You compare that to the output, which gives you an error, and by derivation a gradient, and that gradient is now backpropagated according to the chain rule, according to the backpropagation algorithm. You can see it's very much what I've drawn. The predictive coding algorithm is a little bit
different and it's honestly not super clear from this graphic right here i find this graphic to be to be a bit confusing but you can see first of all there is this introduction of these of these error nodes in the computation graph right here and there also seems to be the introduction of these new hats whatever that is so we're sort of first going to dive into the math and then we're going to check out how the algorithm works as such so the math right here is a little bit it's a little you have to think a little bit differently than you do in backprop so first of all they say we define a generative model which parameterizes the value of each vertex given the feedforward prediction of its parents according to this distribution and a factorized variational posterior where p denotes the set of parents and c denotes the set of children of a given node x so this is this is very special namely this turns the entire algorithm into a sort of a guessing game into a variational approximation algorithm so what they're basically saying is that signal in this type of algorithm signal isn't just forward propagated but signal is signal is forward guessed it's like a bit of a guess so you have a signal right here vi and this is a node in your neural network and when you forward propagate the signal maybe this is a fully connected layer right here so it's simply multiplying it by parameter you're not you're not going to obtain the next layer's signal what you're going to obtain is a guess for the next layer's signal right here you're only guessing you're assuming that you're sort of assuming that the true next signal is somewhere in the vicinity of this so what you do is actually assume this is a Gaussian with the mean that you predicted but then there is a fair a good chance it's somewhere around here so what you do is you always you'll guess the next layer's signal by forward propagating your own signal and you're so you're not directly computing it okay and the model that we have for that here and you know it's why do we do this we do this because we're also not so sure about this one right here okay so this entire thing is built upon we're pretty sure what the input is and we're pretty sure what the label is of a data point but without you know we're not we assume we're not really sure what the intermediate layers are and we're going to run sort of an update procedure on these on our guesses of where these intermediate signals are and that's going to be this predictive coding algorithm so it's called predictive coding I guess because you always only predict where the next layer signal might be and you refine that prediction in a series of inner iteration steps and that all before you even do a parameter update so there's going to be an inner iteration to determine what the forward values are of the network and this is very different from back prop there is just a single forward pass right then you know the values and then there's a backward pass here there is as you'll see there is a single forward pass but then there is an inner loop to refine the forward pass before there is a backward pass and we need this because we only do this sort of local updates you'll see in a second so the the Gaussian I just drew so the assumption the assumption is going to be that we refine iteratively refine these up these guesses of where vi is and of course here you'll see that if I if I change vi to be down here my next guess so this is at time step t I mean my guess is this my times that t plus one is this of course 
if I apply the same fully connected layer my new guess is going to be down here somewhere and so the assumption here that we're going to make is that they you can see the value of each vertex is a is this model right here this is the generative model so it's a probability distribution depending on the parents and we're going to approximate that by this variational posterior which as you can see doesn't depend on the parents anymore so it basically says that the distribution stays the stays is not is not conditional it sort of stays the same I'm not sure if I express this quite correctly but you can see right here they assume a Gaussian for the generative model that's dependent on on these things and then the the posterior is simply a factorized Gaussian and the variational approximation algorithm simply makes the KL divergence between this variational posterior and the true assumed posterior small and they can prove that this is equal to these errors and the errors are going to be the errors between what's predicted and what's guessed yeah it's best if we if we so if I extend this right here right I have v0 okay v0 I'm pretty sure what it is because it's my input then what I'm going to do is I'm going to forward guess what v1 is so this is my guess of v1 now from v1 I am going to guess what v2 is and at the beginning you know my guess of v1 is the same as my forward prediction I have no other reason I have no reason to assume it's anywhere else so I'm just going to draw this on top of v1 right here so since you know it could be anywhere it could be anywhere in the vicinity here but I'm going to assume it's the same I have no reason to do so otherwise and then I'm going to predict v2 okay and v2 let's say that's already my output layer and this is my guess of v2 that's already my output layer but but now we're going to compare v2 to our true output what we desire our label l and there's going to be an error okay so there's going to be an error right here and what the predictive coding algorithm does is it basically says well look v2 could be actually anywhere here anywhere around this thing it's most likely in the middle but it could be anywhere and it's actually quite possible that it's closer to this label than we initially guessed so it takes this error right here this red error and it says I'm going to update my guess of v2 a little bit closer into that direction so I don't have it here is a new color so v2 is going to be a little bit closer here it's it's possible right it's we we simply guessed v2 so it could also be there it's a little bit less likely it's a little bit less likely because it's not in the middle of the Gaussian but v2 could be where l is right but now I have to sort of communicate this error back to the last one and the trick here is that we don't communicate the global gradient but we only communicate these local error signals so this first red arrow here is our first error signal and we are going to communicate that thing back to the to the previous layer so the difference between v2 and v and here is a fully connect let's say this is a fully connected layer what we're going to send back to the last layer is this information of you see you predicted v2 hat but actually you should predict v2 please update yourself such that that doesn't you know that's that's a bit closer so now we're going to update our guess of v1 and say well if we moved v1 a little bit over here that would predict v2 to be up here right with the same fully connected layer and if we if if that's 
the case then v2 would be a little closer to the true label so we're going to move v1 over here now we're not going to move it fully because so this is a sort of optimization there is a there is a force keeping it to where our original guess is but there is also a force drawing it in the direction of this of this error signal you can see so we're going to say well if we just move v1 to up here we would predict the perfect v2 but also it's less likely so we're going to find like some sort of a trade-off where it's still quite likely under our gaussian assumption but it will predict a little bit more of the correct label and so on so this if we had a longer computation graph this would then sort of every node in the computation graph would ask itself i i'm going to guess my own value at a place that is pretty close to my original guess coming from the forward propagation but also is consistent with the output of the next layer and the output of the next layer of course here is this this v2 right so that the logic isn't i need to make the loss small the logic is well if the next signal is v2 then i can't be in the middle here i must be a little bit more up here because you know i i my signal runs through the fully connected layer and outputs v2 so i am probably more up here so you can see that if you have a computation graph v0 v1 hat v2 hat v3 hat and so on if at the end you have a loss signal you're sort of distributing distributing that loss across this entire chain so you're you're kind of building this guessed chain of values v3 and so on and sorry the that's that's the output node which is close to the loss you're moving all of these things and now once you've done this once you've done this you can do one step of parameter updates so once you've guessed all the nodes well you can go ahead and say okay um this is this is a configuration that is at equilibrium in this sort of algorithm and now here are here is fully connected layer one so here is um here is w0 here is w1 w2 and so on w3 so now we can go ahead and actually update these weights such that the initial guesses that we had and where we truly think the signal is are closer together okay so we're now going to update the weights in order to minimize all of these individual errors and this is also can be done locally so you see that the parameter update step here is now a local one because we've computed all of these errors between where we initially guess the signal is and where we sort of think it should be now we can minimize these errors so what i've drawn here is actually not it's not exactly the algorithm but i hope you get the point so step one is you sort of guess where all the stuff is initially then at the end you get an error signal right this is an error signal then you distribute that error signal backwards and that is now that is not the same as distributing a gradient i know it looks the same but it is not the same and so i have to say that you know they say oh this is only local and so on this doesn't require a backward sweep i think when i look at this algorithm it very much does require a backward sweep so very much it goes from the back to the front in fact it goes from the back to the front many times now you can do that in parallel so this node here can update so to finish the argument here as i said before then you kind of wiggle on these nodes to find out this should probably be more here this one should probably be more here this one should probably be more here this one should probably be more here in 
order to satisfy in order to make that error smaller and the point is that the parameter update step now is a local one okay so the parameter update step now only needs these local errors between where you initially guessed and where your refined iterative guess is after distributing the error through the network and this can all happen in parallel this this um all of this updating sending information around and so on this can be parallelized but it does require a backward sweep if you ask me okay so there are two equations so the the there's two things right here there is first as we said there is a phase where the guesses of where our vertex units are where our hidden representations are are refined and this is given by these dynamics right here so you see that vi changes with time according to this thing right here f is the variational free energy so this this algorithm sort of falls out from the math of assuming these um assuming these generative models right here under the assumption that they are these gaussians okay um so under under this assumption if you calculate the kl divergence um it turns out to come out to this algorithm right here so how does the how do we need to update the node vi the node vi is updated according to this gradient and this gradient is as we said only computed as properties of local things so the first thing is ei which is that's so again if we have this is our initial guess of vi and then here is our refined guess of vi ei is the error right here that's that's sort of we need to stay close to our initial guess but also we want to go into the direction such that um into this direction right here so ej j is the children of vi j are the children and this thing right here says how do we need to change my guess of vi to make um to make it fall more in line with vj and you see here that's vj uh the initial thing but then of course the error is so the error j is going to be the difference between vj and vj hat so ultimately you are guessing you're saying how do i need to change vi in order to make it more commensurate with vj after going through the the layer okay so this um this derivative right here this is going to involve the derivative of whatever the fully connected layer or the conv layer and so on so there is not there's not no derivatives in this algorithm but there are only sort of these local derivatives so ei is going to be the difference here and then we'll have the fully connected layer using w gives you vj hat but also your refined guess gives you vj and the error j is going to be this thing right here okay so at you want to stay close right here but also you want to um make vi such that it outputs vj such that it also minimizes that error okay sort of um yeah it's it's hard to it's hard to draw these things but i hope i've explained it in multiple ways right now it's at least a little bit clear how this works and at the end once you've reached equilibrium of all of your guesses of um all of your guesses of where the next nodes are what you do is you update your parameters here in a local fashion you can see right here what you need is this error of the if layer and you multiply that by this derivative and this derivative is simply the local derivative of your hidden representation with respect to your layer okay so this is very akin to in the back propagation algorithm hi to wi this is just this local derivative so using the update the update step of the weights now only requires local derivatives and that's the point so here it's in this pseudo 
code; things are a little bit unclear in it, but we'll go through it. For the entire data set, x is the data point and l is the label. You fix the start, so you fix v0, then you do the forward pass, so you do this once; these are your initial guesses, these hat things. You can see the hat things are always computed from the parents. You compute the output error right here, and then begin the backwards iteration phase of the descent on the free energy. So here you see there is this inner loop, "while not converged", and this is just going to work out to be some sort of inner iterative scheme for a number of steps; this is going to be a hyperparameter. This is something you can technically do in parallel; you have to send a bit of information around, but you can technically run these inner loops in parallel. You can just imagine it always going from the back: you distribute these errors, you refine your guesses a little bit, and you start from the back again, you distribute errors, refine your guesses, and so on; in the actual code you always start from the back. So you compute these errors, where this is your initial guess and this is your refined guess of the current layer, and then you update the vertex values. You say: okay, my guess for this layer is going to be my previous guess plus some sort of gradient, and this gradient we get from equation number two, from this thing right here. So my guess is going to be updated such that I still stay close to my original guess, but I also predict better what the next layer is. And at the end, when this is converged, you do the update on the weights, and the update on the weights is simply again what we saw: it's the error that you want to correct, so this e is the error you want to correct, and now, once this is converged, you have a good approximation of that error, times the derivative, of course, with respect to the weights. So the error says how much your predictions are off from what they should be, and the derivative simply translates that into how you need to change the weights such that in the future that error is smaller. Okay, so then they show that this actually approximates backprop, and it's a fairly simple proof; it's sort of a proof by induction, by iteration, showing that one such thing like this, at equilibrium at the last layer, is equivalent to backprop, and because you can simply substitute this, by recursion that goes back through the layers. And this is all dependent on you actually reaching that equilibrium, which you do, as we said, by inner iterations. Then they have a bit of an example right here, where they have this function; it's a pretty simple function, where the output is the tan of this square root, and there are parameters in there, so this is an arbitrary parameter that you might want to learn. Then you give some data set, so this parameter is equal to two, but I guess the network doesn't know that, so you have to learn it, and they test that. You can see this augmentation by error graphs makes the computational graph quite a bit more complex, so you have all these error graphs right here, but ultimately you could automate this; that is not a problem. Okay, so they also do this for, as I said, CNNs, RNNs and LSTMs.
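Condensed into runnable form, the pseudocode narrated above looks roughly like the sketch below. This is a paraphrase, not the authors' repository code: the function name, the unit-variance Gaussian assumption, the fixed number of inner steps, and the use of autograd for the purely local per-layer derivatives are all my own choices.

```python
import torch

def predictive_coding_step(layers, x, label, n_inner=100, lr_v=0.1, lr_w=1e-3):
    """One predictive-coding training step on a chain of layers, where
    layer i predicts v_hat[i+1] = layers[i](v[i])."""
    # Forward sweep: the initial guesses are the forward predictions.
    v = [x]
    for f in layers:
        v.append(f(v[-1]).detach())
    v[-1] = label                      # clamp the output node to the label
    for u in v[1:-1]:
        u.requires_grad_(True)         # these guesses get refined

    def free_energy():
        # Sum of squared prediction errors e_i = v[i+1] - f_i(v[i]).
        # Each term touches a single layer, so every update is local.
        return 0.5 * sum((v[i + 1] - f(v[i])).pow(2).sum()
                         for i, f in enumerate(layers))

    # Inner loop: relax the guesses v[1..L-1] toward equilibrium.
    for _ in range(n_inner):
        grads = torch.autograd.grad(free_energy(), v[1:-1])
        with torch.no_grad():
            for u, g in zip(v[1:-1], grads):
                u -= lr_v * g

    # Single weight update from the (approximately) converged errors.
    params = [p for f in layers for p in f.parameters()]
    grads = torch.autograd.grad(free_energy(), params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lr_w * g
```

In the limit of many inner steps, the weight updates this produces should approach the exact backprop gradients, which is precisely the equivalence the paper proves.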
The results are quite remarkable, I think, in that they just follow the same accuracy, loss and performance patterns as these networks. That's pretty cool. The downside, of course, is that they are way, way slower, and they say this themselves: due to the need to iterate the v's until convergence, the predictive coding network had roughly a 100 times greater computational cost than the backprop network. They say this is a bit misleading because you can distribute and parallelize that. However, as we've seen, it's not fully local: you need to send signal around, every node needs to send signal to its parents or its children, and in backprop you just need to do that once, right? So I'm not exactly buying this argument that it is much more local and so on. The last thing that I want to point out in the paper, before we look briefly at the code, is this thing right here; there's a further simplification. They say: importantly, if the edge function linearly combines the activities and the parameters, followed by an element-wise non-linearity, which is most deep learning layers nowadays, a condition which we call parameter-linear, then both the update rule for the vertices and the one for the parameters become Hebbian. Specifically, the update rules for the vertices and the weights become the following. So if you have a linear operation followed by a non-linearity, which is the case in RNNs, in CNNs, in fully connected layers, then these here are the update rules. The local layer derivative is simply going to be your forward activations passed through, and this is a bit weird, it's the forward activations passed through the derivative of the non-linearity (this is the non-linearity right here), times again the weights of the forward iteration. And the update rule with respect to the parameters is very, very similar. The reason I point this out is that now we're going to jump into the code, and I hope you can recognize this again. So first of all, let's go into the CNN. Hello. All right, so the code is quite ugly, honestly, but you see that they have their backprop CNNs, but they also have this thing right here, this model, which is the one they train, and here is the train function. In the train function they go through the data set, and you can see that for each data point they simply call this infer function right here. So this infer function is what ultimately does the training. In the infer function they get an input, as you can see, and a label, and a number of inference steps. They start out, and this is labeled a bit differently, with these mus and the outs and these prediction errors and the predictions, and we're going to see how that works. First of all they go through the layers right here (I'm going to use my mouse), and you can see they simply forward propagate the signal: they always take the mu of the last layer and forward propagate it to get the mu on the layer plus one, and the outs are simply cloned from the mus. So these must be our mus from before, or our v's, whatever you want to call them. One is going to be the initial guess, and the other one is going to be the guess that we iteratively refine; in fact, the mu here is going to be the guess that we iteratively refine, and at the beginning we simply set them to be the same. Then at the last layer we put in the label, and the prediction errors are going to be the error variables.
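Before we continue with the code tour, here is what those two parameter-linear Hebbian rules look like as plain numpy expressions for a single dense tanh layer; the variable names are mine. The point is just that both updates are products of quantities available locally at the layer: its saved inputs and activations, its weights, and the prediction error at its child.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))      # parent activity v_i (the layer's saved input)
W = rng.standard_normal((4, 3))      # the layer's weights
e = rng.standard_normal((1, 3))      # prediction error at the child node, e_{i+1}

a = x @ W                            # pre-activations, saved on the forward pass
g = e * (1.0 - np.tanh(a) ** 2)      # error pushed through the tanh derivative

dv = g @ W.T                         # vertex rule: how the parent's guess should move
dW = x.T @ g                         # weight rule: pre-synaptic input times post-synaptic error
```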
The last prediction error is going to be the derivative of our loss function with respect to the last layer, and now we start this iterative algorithm. Here you see we go through this number of inference steps, which is going to be like a hundred or so. So a hundred times we're going to update each of our guesses of the intermediate layers, and, as I said, we're going through the layers in reverse order: a hundred times we're going from back to front, back to front, back to front. Here you can see the first thing we do is compute the current error, which is the difference between the guess that we currently have and the initial guess that we had during forward propagation. This is going to be zero for most of the layers at the beginning, except the last layer, right? In the last layer we've actually put the mu to something else than the output. This error begins at zero at each layer, as the guesses are the same, but then we're going to refine and refine and refine, and this error of the last layer is going to propagate through the network from the back to the front in an iterative fashion, so multiple times. Once we have the prediction error, we're going to backward this through the layers, and this backward here, that is this backward edge we saw. Where did we see this? This backward is going to be the local derivative in this graph; the backward is going to be the red thing right here. We take the error of the next layer, and we're going to see how we need to change the current guess in order to make the next layer's error a little bit smaller. So that's going to be the backward function, and we can actually look at the backward function of, let's say, yeah, here. This is the backward function of a fully connected layer. This is the projection layer; here is a fully connected layer, and f is going to be the non-linearity and df is going to be the derivative of the non-linearity. In the forward, you can see what we're doing: we're multiplying the input by the weights, and then we're going to save the activations and simply propagate them through the non-linearity. In the backwards, we're going to take the activations, the forward activations, and shove them through the derivative of the non-linearity, and this is why I pointed out this Hebbian learning rule. At first I was a bit confused: why do we use the forward activations and shove them through the derivative of the non-linearity? But this is simply because they've derived that this is the correct local gradient. And then we have this, right, this is the local gradient of the layer, and we're going to multiply that by the weights. So this completes the formula that we had right here for these Hebbian updates: these are the activations, this is the derivative of the forward layer, and we multiply that by the weights again, so this is now the complete local derivative, which is this thing I've already circled 50 billion times right here. And all we need to do now is multiply this by the prediction error in that layer, and then we get an idea of how we need to change this node such that in this one child (and there can be many children) we make a little bit less error.
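Mirroring the forward and backward functions just described, a fully connected layer in this scheme might look like the following sketch; again, this is my own paraphrase of the idea, not the repository's exact code, and it assumes forward is called before backward so that the activations have been saved.

```python
import numpy as np

class FCLayer:
    """A dense tanh layer with the local, Hebbian backward pass described above."""

    def __init__(self, in_dim, out_dim, rng):
        self.W = 0.1 * rng.standard_normal((in_dim, out_dim))

    def forward(self, x):
        self.inputs = x                   # saved for the weight update
        self.activations = x @ self.W     # saved pre-activations, used by backward
        return np.tanh(self.activations)

    def backward(self, e):
        # push the child's prediction error through the derivative of the
        # non-linearity, then through the weights: this tells the parent how
        # to change its guess so that the child's error gets a bit smaller
        return (e * (1.0 - np.tanh(self.activations) ** 2)) @ self.W.T

    def weight_gradient(self, e):
        # same local gradient, but multiplied by the inputs instead of the weights
        return self.inputs.T @ (e * (1.0 - np.tanh(self.activations) ** 2))
```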
That's why we multiply this by e right here, where e is the error, and that will be the backwards thing. So backward simply tells the parent how it needs to change itself such that the child is a little bit happier, and since this is a feedforward CNN, we don't have multiple children, we simply have one child per parent, so we have a list. And for these predictions, as you can see, we simply take the prediction error of layer j plus one and we backward it: how do we need to change this layer in order to make it a little bit more commensurate with the child? And then here is this trade-off, the trade-off involving the prediction error: how close am I to my original guess? I don't want to go too far away, right, because I assume my original guess isn't too bad; in fact, there's a Gaussian likelihood model, so I want to stay close to that, but I also want to go into the direction such that I make the next layer happier. This is the fundamental trade-off; it's computed right here, and it's this minus sign, and then at the end this is the inference learning rate, and I simply go into the direction of this trade-off. So I update the guess of the current node like this, and, as I said, I go through the network back to front, back to front, back to front, until I reach some sort of equilibrium, or in this case after this many steps, and only then do I update the weights. The update weights function is very similar: here is update weights, and for each layer I input the prediction error of that layer, and that layer calculates this function right here in much the same way as you just saw. Maybe we can look at one of them; let's go to layers, let's go here, fully connected layer. And you're going to see this Hebbian learning rule again: activations through the derivative. Now there's a little bit of a difference to before, right, but the difference isn't large: the activations are multiplied through this, and then multiplied by the inputs instead of the weights, then multiplied by e, which is the error term right here, and that's going to be our local update. Okay, cool. So that's the code, that's predictive coding. And you know, the thing is, it's not that these people propose this as a true alternative to backprop, but it is a step in the direction of saying: look, the brain, with its more Hebbian nature and its more local updates and so on, could actually be doing something much closer to backprop than we thought. People thought, well, backprop is impossible in the brain, therefore the brain can't be doing backprop, right? And now we see that it's possible (it's not proven, but it's possible) that the brain does something that approximates the backprop gradient, actually arbitrarily closely, if some assumptions are given. But that's sort of the result, and they also show it's quite robust to learning rate changes and so on. As we said, we can go pretty deep, even though this is this kind of iterative guessing algorithm; under these Gaussian assumptions and this variational approximation, it is fairly robust and all. So this sort of puts the ball back in play: maybe the brain is doing something very close to backprop, or at least getting the same results, the same parameter updates, as backprop.
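As a sanity check of the paper's central claim, you can verify numerically that as the number of inner iterations grows, the predictive coding update converges to (minus) the backprop gradient. Here is a hypothetical check for a two-layer tanh network under a squared-error loss, written by me with arbitrary constants; for the layer feeding the clamped output node the match is exact immediately, so the interesting comparison is the first layer's weights.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.tanh
df = lambda a: 1.0 - np.tanh(a) ** 2

x = rng.standard_normal((1, 4))
t = rng.standard_normal((1, 2))                  # target
W1 = 0.3 * rng.standard_normal((4, 8))
W2 = 0.3 * rng.standard_normal((8, 2))

# exact backprop gradients for the loss 0.5 * ||y - t||^2
a1 = x @ W1; h = f(a1)
a2 = h @ W2; y = f(a2)
d2 = (y - t) * df(a2)
dW2_bp = h.T @ d2
dW1_bp = x.T @ ((d2 @ W2.T) * df(a1))

# the output node is clamped, so the last layer's Hebbian update matches exactly
assert np.allclose(h.T @ ((t - y) * df(a2)), -dW2_bp)

# relax the hidden node, then read off the Hebbian update for the first layer
e2 = t - y                                       # output error, fixed during inference
for n_steps in (1, 5, 50, 500):
    v1 = h.copy()                                # refined guess of the hidden node
    for _ in range(n_steps):
        e1 = v1 - h                              # error against the fixed forward prediction
        v1 = v1 + 0.2 * (-e1 + (e2 * df(a2)) @ W2.T)
    dW1_pc = x.T @ ((v1 - h) * df(a1))           # Hebbian weight rule at (near) equilibrium
    print(n_steps, np.abs(dW1_pc + dW1_bp).max())  # gap shrinks toward zero
```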
So I hope that wasn't too confusing. I've tried to tackle it from many angles, and maybe after seeing the code you see it a little bit more clearly. If not, let me know; I'm open for questions, as always. And bye bye.
[ { "start": 0, "end": 7.76, "text": " Hi there, this is an LSTM cell or the computation graph of an LSTM cell. It is pretty hideous as you" }, { "start": 7.76, "end": 15.84, "text": " can see, but what I'm about to show you is even more hideous. This is the computation graph of the" }, { "start": 16.56, "end": 25.36, "text": " LSTM cell augmented with error units, evincing the connectivity scheme of the predictive coding" }, { "start": 25.36, "end": 32.96, "text": " algorithm. You may see that there are appearing these little red arrows right here that are so" }, { "start": 32.96, "end": 38.64, "text": " called error units. These are necessary for an algorithm called predictive coding, which is an" }, { "start": 38.64, "end": 47.04, "text": " algorithm that is a biologically plausible alternative to backprop. That's what we're going" }, { "start": 47.04, "end": 55.6, "text": " to look at today, specifically this paper as you can see. It is quite a thorough paper. It is called" }, { "start": 55.6, "end": 63.120000000000005, "text": " Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs. Have you ever heard" }, { "start": 63.120000000000005, "end": 70.56, "text": " a more descriptive title of what's in a paper? The authors are Baron Millage, Alexander Chantz," }, { "start": 70.56, "end": 78.8, "text": " and Christopher L. Buckley. This paper, as the title says, it looks at this predictive coding" }, { "start": 78.8, "end": 86.16, "text": " algorithm and it shows that this approximates backprop. We'll see that this approximates" }, { "start": 87.6, "end": 94.64, "text": " is in terms of there is an inner iteration in the predictive coding algorithm. The more you run that" }, { "start": 94.64, "end": 101.44, "text": " and under certain assumptions, this approximates the backprop algorithm. The new thing in this" }, { "start": 101.44, "end": 109.68, "text": " paper is along arbitrary computation graphs. There have been papers before describing predictive" }, { "start": 109.68, "end": 117.28, "text": " coding, this algorithm, in various sub-settings like fully connected layers and so on. The fact" }, { "start": 117.28, "end": 124.32, "text": " that it approximates backprop there. However, this paper shows that that's actually the case for" }, { "start": 124.32, "end": 130.32, "text": " arbitrary computation graphs under certain assumptions. Predictive coding approximates" }, { "start": 130.32, "end": 137.76, "text": " the backpropagation algorithm. Why is this important? Because the backpropagation algorithm" }, { "start": 137.76, "end": 146.88, "text": " isn't exactly biologically plausible. So they say right here in the abstract backpropagation of error" }, { "start": 146.88, "end": 151.35999999999999, "text": " or short backprop is a powerful algorithm for training machine learning architectures through" }, { "start": 151.36, "end": 157.36, "text": " end-to-end differentiation. Recently has been shown that backprop in multilayer perceptrons can" }, { "start": 157.36, "end": 163.36, "text": " be approximated using predictive coding, a biologically plausible process theory of cortical" }, { "start": 163.36, "end": 168.64000000000001, "text": " computation which relies solely on local and Hebbian updates. So the difference between" }, { "start": 169.36, "end": 176.8, "text": " backpropagation and predictive coding is exactly this point that predictive coding relies solely" }, { "start": 176.8, "end": 187.28, "text": " on local and Hebbian updates. 
The keyword I think is local. So in a neural network you have some sort" }, { "start": 187.28, "end": 195.28, "text": " of input x and you ship it through many layers, layer, layer, layer, layer and then you have an" }, { "start": 195.28, "end": 202.48000000000002, "text": " output y hat and then you compare that output using a some kind of loss function with your" }, { "start": 202.48, "end": 208.56, "text": " with your true output that you want and then there is this backwards phase right here and in this" }, { "start": 208.56, "end": 213.92, "text": " backwards phase you want to derive gradients for each of the layers weights. So each of these layers" }, { "start": 213.92, "end": 222, "text": " has a weight associated with it. I'm not going into Greek letters again. So this is w I don't know w3" }, { "start": 222, "end": 229.67999999999998, "text": " w2 is here and so on. So what you want to get out is you want to say how do I need to change w" }, { "start": 229.68, "end": 238.24, "text": " in order to change my loss for the better. So what you want is this gradient right here" }, { "start": 238.24, "end": 244.8, "text": " and backpropagation does a very natural decomposition namely if you have these hidden" }, { "start": 244.8, "end": 254.96, "text": " states in here so x is transformed to hidden state h0 h1 h2 h3 so that is the latent representation." }, { "start": 254.96, "end": 262.96000000000004, "text": " If you want for example weight if you want to know how to change weight or let's say weight two" }, { "start": 265.12, "end": 274, "text": " the backpropagation algorithm decomposes this into the derivative according to the hidden state at" }, { "start": 274, "end": 282.24, "text": " layer two multiplied by the derivative of the hidden state by the weight. So this is what you" }, { "start": 282.24, "end": 287.12, "text": " would sort of learn in a beginner's course of deep learning this decomposition and of course" }, { "start": 287.84000000000003, "end": 303.36, "text": " in this part right here this part decomposes into del L for h3 and then h3 by h2. So this is the" }, { "start": 303.36, "end": 310.64, "text": " standard backpropagation algorithm you can clearly see in the formula the computation graph it goes" }, { "start": 310.64, "end": 321.76, "text": " from the L it flows backward to h3 right so to h3 and then from h3 it flows to h2 and then from h2" }, { "start": 322.4, "end": 329.36, "text": " it flows to w2 so that's sort of the flow of the gradient backwards through the network" }, { "start": 329.36, "end": 335.91999999999996, "text": " and that's pretty cool because it allows us to run gradient descent on arbitrary computation graphs" }, { "start": 335.92, "end": 344, "text": " which ultimately enable deep learning including frameworks like tensorflow, PyTorch or the older" }, { "start": 344, "end": 352.16, "text": " ones like Theano or Lua torch even autograd things like this. It's pretty cool but it's not" }, { "start": 352.16, "end": 360.72, "text": " really plausible in the brain because neurons are not bi-directional like this. 
Neurons generally" }, { "start": 360.72, "end": 366.24, "text": " I'm not a neuroscientist or anything but these neurons they have some sort of soma and then" }, { "start": 366.24, "end": 374.48, "text": " you have this axon right and then this axon goes into many different of these synapses to its" }, { "start": 374.48, "end": 383.6, "text": " children and it kind of docks onto the somas of or on the dendrites of the other neurons and this" }, { "start": 383.6, "end": 390.08000000000004, "text": " is not bi-directional this is generally here there's a unidirectional signal in this direction and" }, { "start": 390.08, "end": 395.59999999999997, "text": " there are so-called feedback connections so from these neurons to the dendrites of this neuron" }, { "start": 395.59999999999997, "end": 403.03999999999996, "text": " but you cannot really send this gradient information you cannot send this sort of vector" }, { "start": 403.59999999999997, "end": 412.79999999999995, "text": " gradient information and you cannot do so in this sort of sweep so in the brain it's probably not" }, { "start": 412.8, "end": 420.08, "text": " the case that the layer propagates forward and then sort of waits for a synchronized backward pass" }, { "start": 421.04, "end": 427.2, "text": " across the network in order to update itself. All of this needs to happen much more in parallel" }, { "start": 428.16, "end": 433.68, "text": " much more local so that things are only considering local information of global information" }, { "start": 433.68, "end": 440.96000000000004, "text": " right here for example you need the global gradient in the update of w2 and you need to" }, { "start": 440.96, "end": 447.12, "text": " have that back propagated that's not plausible so predictive coding comes along and today we'll look" }, { "start": 447.12, "end": 452.79999999999995, "text": " mainly actually at how predictive coding works of course this paper is about extending it to" }, { "start": 452.79999999999995, "end": 459.67999999999995, "text": " arbitrary computation graphs which is cool because they do predictive coding for cnn's rnn's and even" }, { "start": 459.67999999999995, "end": 466.08, "text": " lstm's and if you look at their so let's first jump into the numerical results if you look at" }, { "start": 466.08, "end": 471.84, "text": " their numerical results they have lots of these plots where they basically show we did this" }, { "start": 471.84, "end": 476.8, "text": " network we train it with backprop and then we train it with predictive coding and the lines are" }, { "start": 476.8, "end": 483.03999999999996, "text": " just the same and so it's pretty convincing evidence even if you go super duper deep" }, { "start": 484.79999999999995, "end": 495.03999999999996, "text": " and they do i think rn ends with up to 100 layers or 100 time steps unrolled so the empirical evidence" }, { "start": 495.04, "end": 501.28000000000003, "text": " that predictive coding approximates backprop is certainly here and we'll look at what predictive" }, { "start": 501.28000000000003, "end": 508.72, "text": " coding is how it works and how it works along arbitrary computation graphs so that's today's" }, { "start": 508.72, "end": 517.84, "text": " paper and i hope you enjoy it if you do don't hesitate to share it out and subscribe all right" }, { "start": 517.84, "end": 528, "text": " so all right so this graphic right here compares the two algorithms in principle on top very much" }, { "start": 528, "end": 536.72, "text": " what i've said 
so far the backprop algorithm somehow has this signal it propagates forward" }, { "start": 536.72, "end": 542, "text": " okay and then at some point there's an output and if you want to train it there is a label you compare" }, { "start": 542, "end": 549.92, "text": " that to the output that will give you an error and by derivation a gradient and that gradient is now" }, { "start": 549.92, "end": 555.6, "text": " back propagated according to the chain rule according to the back propagation algorithm you" }, { "start": 555.6, "end": 562.56, "text": " can see it's very much what i've drawn the predictive coding algorithm is a little bit different" }, { "start": 563.92, "end": 571.6, "text": " and it's honestly not super clear from this graphic right here i find this graphic to be" }, { "start": 571.6, "end": 578.16, "text": " to be a bit confusing but you can see first of all there is this introduction of these" }, { "start": 579.0400000000001, "end": 585.0400000000001, "text": " of these error nodes in the computation graph right here and there also seems to be the" }, { "start": 585.0400000000001, "end": 594.5600000000001, "text": " introduction of these new hats whatever that is so we're sort of first going to dive into the" }, { "start": 594.56, "end": 602.7199999999999, "text": " math and then we're going to check out how the algorithm works as such so the math right here" }, { "start": 602.7199999999999, "end": 609.1999999999999, "text": " is a little bit it's a little you have to think a little bit differently than you do in backprop so" }, { "start": 610, "end": 616.7199999999999, "text": " first of all they say we define a generative model which parameterizes the value of each vertex" }, { "start": 616.7199999999999, "end": 623.52, "text": " given the feedforward prediction of its parents according to this distribution and a factorized" }, { "start": 623.52, "end": 631.1999999999999, "text": " variational posterior where p denotes the set of parents and c denotes the set of children of a" }, { "start": 631.1999999999999, "end": 640.64, "text": " given node x so this is this is very special namely this turns the entire algorithm into a" }, { "start": 640.64, "end": 649.12, "text": " sort of a guessing game into a variational approximation algorithm so what they're" }, { "start": 649.12, "end": 656.08, "text": " basically saying is that signal in this type of algorithm signal isn't just forward propagated" }, { "start": 656.08, "end": 663.44, "text": " but signal is signal is forward guessed it's like a bit of a guess so you have a signal right here" }, { "start": 663.44, "end": 673.6800000000001, "text": " vi and this is a node in your neural network and when you forward propagate the signal maybe this" }, { "start": 673.68, "end": 680, "text": " is a fully connected layer right here so it's simply multiplying it by parameter you're not" }, { "start": 680.56, "end": 687.76, "text": " you're not going to obtain the next layer's signal what you're going to obtain is a guess" }, { "start": 687.76, "end": 693.68, "text": " for the next layer's signal right here you're only guessing you're assuming that" }, { "start": 693.68, "end": 704.9599999999999, "text": " you're sort of assuming that the true next signal is somewhere in the vicinity of this so what you" }, { "start": 704.9599999999999, "end": 710.2399999999999, "text": " do is actually assume this is a Gaussian with the mean that you predicted but then" }, { "start": 711.5999999999999, "end": 718.4799999999999, "text": " there is a 
fair a good chance it's somewhere around here so what you do is you always you'll" }, { "start": 718.48, "end": 725.6, "text": " guess the next layer's signal by forward propagating your own signal and you're" }, { "start": 726.48, "end": 733.2, "text": " so you're not directly computing it okay and the model that we have for that here and you know it's" }, { "start": 733.52, "end": 742.24, "text": " why do we do this we do this because we're also not so sure about this one right here okay so" }, { "start": 742.24, "end": 748.48, "text": " this entire thing is built upon we're pretty sure what the input is and we're pretty sure what the" }, { "start": 748.48, "end": 757.12, "text": " label is of a data point but without you know we're not we assume we're not really sure what the" }, { "start": 757.12, "end": 765.92, "text": " intermediate layers are and we're going to run sort of an update procedure on these on our guesses" }, { "start": 765.92, "end": 772.4, "text": " of where these intermediate signals are and that's going to be this predictive coding algorithm so" }, { "start": 772.4, "end": 779.92, "text": " it's called predictive coding I guess because you always only predict where the next layer signal" }, { "start": 780.16, "end": 788.3199999999999, "text": " might be and you refine that prediction in a series of inner iteration steps and that all before" }, { "start": 788.3199999999999, "end": 793.92, "text": " you even do a parameter update so there's going to be an inner iteration to determine what the" }, { "start": 793.92, "end": 802.7199999999999, "text": " forward values are of the network and this is very different from back prop there is just a single" }, { "start": 802.7199999999999, "end": 808.3199999999999, "text": " forward pass right then you know the values and then there's a backward pass here there is as you'll" }, { "start": 808.3199999999999, "end": 815.12, "text": " see there is a single forward pass but then there is an inner loop to refine the forward pass before" }, { "start": 815.12, "end": 822.64, "text": " there is a backward pass and we need this because we only do this sort of local updates you'll see" }, { "start": 822.64, "end": 831.76, "text": " in a second so the the Gaussian I just drew so the assumption the assumption is going to be that we" }, { "start": 831.76, "end": 839.04, "text": " refine iteratively refine these up these guesses of where vi is and of course here you'll see that" }, { "start": 839.04, "end": 846.88, "text": " if I if I change vi to be down here my next guess so this is at time step t I mean my guess is this" }, { "start": 846.88, "end": 853.6, "text": " my times that t plus one is this of course if I apply the same fully connected layer my new guess" }, { "start": 853.6, "end": 863.4399999999999, "text": " is going to be down here somewhere and so the assumption here that we're going to make is that" }, { "start": 864.96, "end": 876.64, "text": " they you can see the value of each vertex is a is this model right here this is the generative" }, { "start": 876.64, "end": 882.88, "text": " model so it's a probability distribution depending on the parents and we're going to approximate that" }, { "start": 883.4399999999999, "end": 890.96, "text": " by this variational posterior which as you can see doesn't depend on the parents anymore so" }, { "start": 892.56, "end": 899.36, "text": " it basically says that the distribution stays the stays is not is not conditional it sort of stays" }, { "start": 899.36, "end": 906.72, "text": " 
the same I'm not sure if I express this quite correctly but you can see right here they assume" }, { "start": 906.72, "end": 916.96, "text": " a Gaussian for the generative model that's dependent on on these things and then the the posterior" }, { "start": 917.6, "end": 924.4, "text": " is simply a factorized Gaussian and the variational approximation algorithm simply makes the KL" }, { "start": 924.4, "end": 933.36, "text": " divergence between this variational posterior and the true assumed posterior small and they can" }, { "start": 933.36, "end": 940.4, "text": " prove that this is equal to these errors and the errors are going to be the errors between" }, { "start": 943.04, "end": 949.28, "text": " what's predicted and what's guessed yeah it's best if we if we" }, { "start": 949.28, "end": 956.8, "text": " so if I extend this right here right I have v0 okay v0 I'm pretty sure what it is because it's my" }, { "start": 956.8, "end": 964.3199999999999, "text": " input then what I'm going to do is I'm going to forward guess what v1 is so this is my guess of v1" }, { "start": 965.6, "end": 976.4, "text": " now from v1 I am going to guess what v2 is and at the beginning you know my guess of v1 is the same" }, { "start": 976.4, "end": 984.24, "text": " as my forward prediction I have no other reason I have no reason to assume it's anywhere else so" }, { "start": 984.24, "end": 990.3199999999999, "text": " I'm just going to draw this on top of v1 right here so since you know it could be anywhere it" }, { "start": 990.3199999999999, "end": 995.92, "text": " could be anywhere in the vicinity here but I'm going to assume it's the same I have no reason" }, { "start": 995.92, "end": 1005.52, "text": " to do so otherwise and then I'm going to predict v2 okay and v2 let's say that's already my output" }, { "start": 1005.52, "end": 1011.04, "text": " layer and this is my guess of v2 that's already my output layer but but now" }, { "start": 1014.8, "end": 1021.84, "text": " we're going to compare v2 to our true output what we desire our label l and there's going to be an" }, { "start": 1021.84, "end": 1029.52, "text": " error okay so there's going to be an error right here and what the predictive coding algorithm does" }, { "start": 1029.52, "end": 1036.8, "text": " is it basically says well look v2 could be actually anywhere here anywhere around this" }, { "start": 1036.8, "end": 1042.48, "text": " thing it's most likely in the middle but it could be anywhere and it's actually quite possible that" }, { "start": 1042.48, "end": 1050.48, "text": " it's closer to this label than we initially guessed so it takes this error right here this red error" }, { "start": 1051.2, "end": 1059.36, "text": " and it says I'm going to update my guess of v2 a little bit closer into that direction so" }, { "start": 1059.36, "end": 1066.7199999999998, "text": " I don't have it here is a new color so v2 is going to be a little bit closer here it's" }, { "start": 1066.7199999999998, "end": 1073.36, "text": " it's possible right it's we we simply guessed v2 so it could also be there it's a little bit less" }, { "start": 1073.36, "end": 1083.28, "text": " likely it's a little bit less likely because it's not in the middle of the Gaussian but v2 could be" }, { "start": 1083.28, "end": 1093.2, "text": " where l is right but now I have to sort of communicate this error back to the last one and" }, { "start": 1093.2, "end": 1098.56, "text": " the trick here is that we don't communicate the global gradient but we only communicate 
these" }, { "start": 1098.56, "end": 1104.72, "text": " local error signals so this first red arrow here is our first error signal and we are going to" }, { "start": 1104.72, "end": 1113.04, "text": " communicate that thing back to the to the previous layer so the difference between v2 and v" }, { "start": 1113.04, "end": 1119.2, "text": " and here is a fully connect let's say this is a fully connected layer what we're going to send" }, { "start": 1119.2, "end": 1126.8, "text": " back to the last layer is this information of you see you predicted v2 hat but actually you should" }, { "start": 1126.8, "end": 1135.84, "text": " predict v2 please update yourself such that that doesn't you know that's that's a bit closer so now" }, { "start": 1135.84, "end": 1143.52, "text": " we're going to update our guess of v1 and say well if we moved v1 a little bit over here that would" }, { "start": 1144.3999999999999, "end": 1152.8799999999999, "text": " predict v2 to be up here right with the same fully connected layer and if we if if that's the case" }, { "start": 1152.8799999999999, "end": 1161.9199999999998, "text": " then v2 would be a little closer to the true label so we're going to move v1 over here now we're not" }, { "start": 1161.92, "end": 1169.1200000000001, "text": " going to move it fully because so this is a sort of optimization there is a there is a force keeping" }, { "start": 1169.1200000000001, "end": 1176.64, "text": " it to where our original guess is but there is also a force drawing it in the direction of this" }, { "start": 1176.64, "end": 1184.64, "text": " of this error signal you can see so we're going to say well if we just move v1 to up here we would" }, { "start": 1184.64, "end": 1190.16, "text": " predict the perfect v2 but also it's less likely so we're going to find like some sort of a trade-off" }, { "start": 1190.16, "end": 1196.16, "text": " where it's still quite likely under our gaussian assumption but it will predict a little bit more" }, { "start": 1196.16, "end": 1203.92, "text": " of the correct label and so on so this if we had a longer computation graph this would then sort of" }, { "start": 1204.64, "end": 1212.24, "text": " every node in the computation graph would ask itself i i'm going to guess my own value at a place" }, { "start": 1212.24, "end": 1220.96, "text": " that is pretty close to my original guess coming from the forward propagation but also is consistent" }, { "start": 1220.96, "end": 1228.8, "text": " with the output of the next layer and the output of the next layer of course here is this this v2" }, { "start": 1228.8, "end": 1234.32, "text": " right so that the logic isn't i need to make the loss small the logic is well if the next signal" }, { "start": 1234.32, "end": 1241.6, "text": " is v2 then i can't be in the middle here i must be a little bit more up here because you know i" }, { "start": 1241.6, "end": 1250.8, "text": " i my signal runs through the fully connected layer and outputs v2 so i am probably more up here so you" }, { "start": 1250.8, "end": 1265.36, "text": " can see that if you have a computation graph v0 v1 hat v2 hat v3 hat and so on if at the end you" }, { "start": 1265.36, "end": 1274.8799999999999, "text": " have a loss signal you're sort of distributing distributing that loss across this entire chain" }, { "start": 1274.8799999999999, "end": 1286.8, "text": " so you're you're kind of building this guessed chain of values v3 and so on and sorry the that's" }, { "start": 1286.8, "end": 1297.28, "text": " that's the 
output node which is close to the loss you're moving all of these things and now once" }, { "start": 1297.28, "end": 1304.3999999999999, "text": " you've done this once you've done this you can do one step of parameter updates so once you've" }, { "start": 1304.3999999999999, "end": 1313.04, "text": " guessed all the nodes well you can go ahead and say okay um this is this is a configuration that" }, { "start": 1313.04, "end": 1321.68, "text": " is at equilibrium in this sort of algorithm and now here are here is fully connected layer one so" }, { "start": 1321.68, "end": 1336.08, "text": " here is um here is w0 here is w1 w2 and so on w3 so now we can go ahead and actually update these" }, { "start": 1336.08, "end": 1345.9199999999998, "text": " weights such that the initial guesses that we had and where we truly think the signal is are closer" }, { "start": 1345.9199999999998, "end": 1351.9199999999998, "text": " together okay so we're now going to update the weights in order to minimize all of these" }, { "start": 1351.9199999999998, "end": 1357.6799999999998, "text": " individual errors and this is also can be done locally so you see that the parameter update step" }, { "start": 1357.6799999999998, "end": 1364.32, "text": " here is now a local one because we've computed all of these errors between where we initially" }, { "start": 1364.32, "end": 1371.52, "text": " guess the signal is and where we sort of think it should be now we can minimize these errors so" }, { "start": 1372.8799999999999, "end": 1377.9199999999998, "text": " what i've drawn here is actually not it's not exactly the algorithm but i hope you get the point" }, { "start": 1377.9199999999998, "end": 1387.76, "text": " so step one is you sort of guess where all the stuff is initially then at the end you get an error" }, { "start": 1387.76, "end": 1394.8, "text": " signal right this is an error signal then you distribute that error signal backwards and that" }, { "start": 1394.8, "end": 1401.36, "text": " is now that is not the same as distributing a gradient i know it looks the same but it is" }, { "start": 1401.36, "end": 1407.52, "text": " not the same and so i have to say that you know they say oh this is only local and so on this" }, { "start": 1407.52, "end": 1413.36, "text": " doesn't require a backward sweep i think when i look at this algorithm it very much does require" }, { "start": 1413.36, "end": 1418.8799999999999, "text": " a backward sweep so very much it goes from the back to the front in fact it goes from the back" }, { "start": 1418.8799999999999, "end": 1425.36, "text": " to the front many times now you can do that in parallel so this node here can update so to finish" }, { "start": 1425.36, "end": 1431.28, "text": " the argument here as i said before then you kind of wiggle on these nodes to find out this should" }, { "start": 1431.28, "end": 1435.9199999999998, "text": " probably be more here this one should probably be more here this one should probably be more here" }, { "start": 1435.92, "end": 1444.96, "text": " this one should probably be more here in order to satisfy in order to make that error smaller" }, { "start": 1446.88, "end": 1453.8400000000001, "text": " and the point is that the parameter update step now is a local one okay so the parameter update" }, { "start": 1453.8400000000001, "end": 1462.64, "text": " step now only needs these local errors between where you initially guessed and where your refined" }, { "start": 1462.64, "end": 1469.2800000000002, "text": " iterative guess is 
after distributing the error through the network and this can all happen in" }, { "start": 1469.2800000000002, "end": 1475.2800000000002, "text": " parallel this this um all of this updating sending information around and so on this can be" }, { "start": 1475.2800000000002, "end": 1485.76, "text": " parallelized but it does require a backward sweep if you ask me okay so there are two equations so" }, { "start": 1485.76, "end": 1493.68, "text": " the the there's two things right here there is first as we said there is a phase where the guesses" }, { "start": 1493.68, "end": 1500.8799999999999, "text": " of where our vertex units are where our hidden representations are are refined and this is given" }, { "start": 1500.8799999999999, "end": 1511.44, "text": " by these dynamics right here so you see that vi changes with time according to this thing right" }, { "start": 1511.44, "end": 1519.1200000000001, "text": " here f is the variational free energy so this this algorithm sort of falls out from the math" }, { "start": 1519.1200000000001, "end": 1527.04, "text": " of assuming these um assuming these generative models right here under the assumption that they" }, { "start": 1527.04, "end": 1535.92, "text": " are these gaussians okay um so under under this assumption if you calculate the kl divergence" }, { "start": 1535.92, "end": 1543.44, "text": " um it turns out to come out to this algorithm right here so how does the how do we need to update the" }, { "start": 1543.44, "end": 1552.88, "text": " node vi the node vi is updated according to this gradient and this gradient is as we said only" }, { "start": 1552.88, "end": 1562.8000000000002, "text": " computed as properties of local things so the first thing is ei which is that's so again if we have" }, { "start": 1562.8, "end": 1571.76, "text": " this is our initial guess of vi and then here is our refined guess of vi ei is the error right here" }, { "start": 1573.04, "end": 1580.56, "text": " that's that's sort of we need to stay close to our initial guess but also we want to go into" }, { "start": 1580.56, "end": 1590.3999999999999, "text": " the direction such that um into this direction right here so ej j is the children of vi j are" }, { "start": 1590.4, "end": 1597.6000000000001, "text": " the children and this thing right here says how do we need to change my guess of vi to make um" }, { "start": 1599.0400000000002, "end": 1606.5600000000002, "text": " to make it fall more in line with vj and you see here that's vj uh the initial thing but then" }, { "start": 1607.3600000000001, "end": 1616.8000000000002, "text": " of course the error is so the error j is going to be the difference between vj and vj hat so" }, { "start": 1616.8, "end": 1624.1599999999999, "text": " ultimately you are guessing you're saying how do i need to change vi in order to make it more" }, { "start": 1624.1599999999999, "end": 1632.8799999999999, "text": " commensurate with vj after going through the the layer okay so this um this derivative right here" }, { "start": 1632.8799999999999, "end": 1638.32, "text": " this is going to involve the derivative of whatever the fully connected layer or the conv layer" }, { "start": 1638.32, "end": 1645.28, "text": " and so on so there is not there's not no derivatives in this algorithm but there are only" }, { "start": 1645.28, "end": 1651.2, "text": " sort of these local derivatives so ei is going to be the difference here and then" }, { "start": 1652.16, "end": 1660.8, "text": " we'll have the fully connected layer using 
w gives you vj hat but also your refined guess gives you" }, { "start": 1661.76, "end": 1672.08, "text": " vj and the error j is going to be this thing right here okay so at you want to stay close" }, { "start": 1672.08, "end": 1683.6, "text": " right here but also you want to um make vi such that it outputs vj such that it also minimizes that" }, { "start": 1683.6, "end": 1695.4399999999998, "text": " error okay sort of um yeah it's it's hard to it's hard to draw these things but i hope i've explained" }, { "start": 1695.4399999999998, "end": 1701.76, "text": " it in multiple ways right now it's at least a little bit clear how this works and at the" }, { "start": 1701.76, "end": 1709.92, "text": " end once you've reached equilibrium of all of your guesses of um all of your guesses of where the next" }, { "start": 1709.92, "end": 1718.08, "text": " nodes are what you do is you update your parameters here in a local fashion you can see right here what" }, { "start": 1718.08, "end": 1726.56, "text": " you need is this error of the if layer and you multiply that by this derivative and this derivative" }, { "start": 1726.56, "end": 1734.32, "text": " is simply the local derivative of your hidden representation with respect to your layer okay" }, { "start": 1734.32, "end": 1742.72, "text": " so this is very akin to in the back propagation algorithm hi to wi this is just this local" }, { "start": 1742.72, "end": 1750.32, "text": " derivative so using the update the update step of the weights now only requires local derivatives" }, { "start": 1750.32, "end": 1758.8799999999999, "text": " and that's the point so here it's in this pseudo code things are a little bit a little bit unclear" }, { "start": 1758.8799999999999, "end": 1765.9199999999998, "text": " in this but we'll do so for the entire data set x is the data point and l is the label you fix the" }, { "start": 1765.9199999999998, "end": 1772.96, "text": " start so you fix v0 then you go you do the forward pass so you do this once you these are your initial" }, { "start": 1772.96, "end": 1778.56, "text": " guesses um these hat things you can see the hat things are always computed from the parents" }, { "start": 1778.56, "end": 1786.32, "text": " you compute the output error right here and then begin backwards iteration phase of the descent" }, { "start": 1786.32, "end": 1793.04, "text": " on the free energy so here you see there is this inner loop while not converged and this is just" }, { "start": 1793.04, "end": 1800.3999999999999, "text": " going to work out to be some sort of in some sort of an inner iterative scheme for a number of steps" }, { "start": 1800.4, "end": 1808.88, "text": " this is going to be a hyper parameter and this here this is something you can technically do in" }, { "start": 1808.88, "end": 1815.3600000000001, "text": " parallel you have to send a bit of information around but you can technically do it in parallel" }, { "start": 1815.3600000000001, "end": 1824.16, "text": " this inner these these inner loops but you can you can just imagine it always going from the back" }, { "start": 1824.8000000000002, "end": 1829.0400000000002, "text": " and you distribute these errors you refine your guests a little bit and you start from the back" }, { "start": 1829.04, "end": 1834.56, "text": " again you distribute errors refine your guesses and so on and you do that you always start from" }, { "start": 1834.56, "end": 1843.76, "text": " the back in the actual code so you compute these errors so this is your initial guess and 
this is" }, { "start": 1843.76, "end": 1851.68, "text": " your refined guess of the current layer and then you update the vertex values you say okay" }, { "start": 1851.68, "end": 1861.28, "text": " the my guess for the next layer is going to be my guess for this layer plus some sort of a this" }, { "start": 1861.28, "end": 1868.64, "text": " gradient and this gradient we get from equation number two from this thing right here so my guess" }, { "start": 1868.64, "end": 1877.92, "text": " is going to be updated such that i still stay close to my original guess but i also update" }, { "start": 1877.92, "end": 1883.8400000000001, "text": " i also predict better what the next layer is" }, { "start": 1886.4, "end": 1893.28, "text": " and at the end when this is converged you do the update on the weights and the updates on the weights" }, { "start": 1893.28, "end": 1901.1200000000001, "text": " is simply again this what we saw it's the error that you want to correct so this e is the error" }, { "start": 1901.1200000000001, "end": 1906.24, "text": " you want to correct now you have a good approximation of the error once this is converged" }, { "start": 1906.24, "end": 1913.04, "text": " uh times the derivative of course with respect to the weights so the error is in terms of" }, { "start": 1913.6, "end": 1920.96, "text": " how how much are your predictions of from what they should be and the derivative simply translates" }, { "start": 1920.96, "end": 1927.04, "text": " that into the how do you need to change the weights such that in the future that error is smaller" }, { "start": 1928, "end": 1934.8, "text": " okay so then they show that this actually approximates a back prop and this it's a it's a" }, { "start": 1934.8, "end": 1942.24, "text": " fairly um fairly simple proof it's an it's sort of a proof by induction by iteration that's showing" }, { "start": 1942.24, "end": 1951.36, "text": " that um one one such one such thing like this this thing right here at the equilibrium at the last" }, { "start": 1951.36, "end": 1958, "text": " layer is equivalent to back prop and because you can simply substitute this and then by sort of" }, { "start": 1958, "end": 1967.12, "text": " recursion that goes back the layers and this is all dependent on you actually reaching that" }, { "start": 1967.12, "end": 1972.72, "text": " equilibrium which you do as we said by inner iterations so they have a bit of a they have a" }, { "start": 1972.72, "end": 1980.72, "text": " bit of a an example right here where they have this function of um it's a pretty simple function" }, { "start": 1980.72, "end": 1987.12, "text": " this function right here the output is the tan of this square root and there's parameters in there" }, { "start": 1987.12, "end": 1993.84, "text": " right so this is an arbitrary parameter that you might want to learn and then you give some data" }, { "start": 1993.84, "end": 1999.76, "text": " sets um so this is equal to two but i guess the network doesn't know that i don't know" }, { "start": 2000.4799999999998, "end": 2008.32, "text": " so you have to learn it and they they test that and you can see the this augmentation by error" }, { "start": 2008.32, "end": 2015.36, "text": " graphs makes the computational graph quite a bit more um complex so you have all these error graphs" }, { "start": 2015.36, "end": 2025.12, "text": " right here but you know ultimately error ultimately it's you can you could automate this that that is" }, { "start": 2025.12, "end": 2037.36, "text": " not a problem okay so 
um they also do this for as i said cnn's rnns lstms and the results are quite" }, { "start": 2037.36, "end": 2046.6399999999999, "text": " remarkable i think in that they they just follow the same accuracy and loss and performance patterns" }, { "start": 2046.6399999999999, "end": 2055.2799999999997, "text": " of these networks that's pretty cool the downside of course is that um they are way smaller sorry" }, { "start": 2055.2799999999997, "end": 2062.3199999999997, "text": " they're way way slower and they say this sometimes um due to the need to iterate the v's until" }, { "start": 2062.32, "end": 2068.32, "text": " convergence the predictive coding network had roughly a 100 times greater computational cost" }, { "start": 2068.32, "end": 2075.44, "text": " than the backprop network though they say this is a bit misleading because you can distribute and" }, { "start": 2075.44, "end": 2082.7200000000003, "text": " parallelize that however as we've seen it's not fully local like you you need to send signal around" }, { "start": 2082.7200000000003, "end": 2091.28, "text": " every node needs to send signal to its parents or its children and um that of course in in backprop" }, { "start": 2091.28, "end": 2097.28, "text": " you just need to do that once right so i'm not exactly buying this argument of this is much more" }, { "start": 2097.28, "end": 2102.96, "text": " local and so on so the last thing that i want to point out in the paper and then we looked briefly" }, { "start": 2102.96, "end": 2108.6400000000003, "text": " at the code is this thing right here there's a further simplification they say importantly if the" }, { "start": 2108.6400000000003, "end": 2113.6800000000003, "text": " edge function linearly combines the activities and the parameters followed by an element-wise" }, { "start": 2113.6800000000003, "end": 2120, "text": " non-linearity which is most of deep learning layers nowadays a condition which we call parameter" }, { "start": 2120, "end": 2127.76, "text": " linear then both the update rule for the vertices and the parameters become Hebbian specifically" }, { "start": 2127.76, "end": 2136.48, "text": " the update rules for the vertices and the weights become so here is here is um if you have a linear" }, { "start": 2138.08, "end": 2144, "text": " operation followed by a non-linearity which you know is the fact in RNNs in CNNs in fully" }, { "start": 2144, "end": 2153.52, "text": " connected layers then this here are these update rules so the local layer derivative is simply" }, { "start": 2153.52, "end": 2159.44, "text": " going to be your forward activations passed through and this is a bit weird um it's the" }, { "start": 2159.44, "end": 2166, "text": " forward activations passed through the derivation of the non-linearity this is the non-linearity" }, { "start": 2166, "end": 2174.48, "text": " right here um times again the weights of the forward iteration and the update rule with respect" }, { "start": 2174.48, "end": 2179.76, "text": " to the parameters are very very similar and the reason i point this out because now we're going" }, { "start": 2179.76, "end": 2188.24, "text": " to jump into the code and i hope you can see this um you can recognize this again so first of all" }, { "start": 2188.24, "end": 2204.16, "text": " let's go into the um into the CNN hello all right so the code is quite ugly honestly but um" }, { "start": 2206.3199999999997, "end": 2213.3599999999997, "text": " you see that they have their backprop or CNNs but they have this thing right here 
this um" }, { "start": 2213.36, "end": 2220.2400000000002, "text": " um this model which is the one they train and here is the train function so in the train function" }, { "start": 2220.2400000000002, "end": 2227.04, "text": " they go through the data set and you can see for each data point they simply call this infer" }, { "start": 2227.04, "end": 2236, "text": " function right here so this infer function is what ultimately does the training so in the infer" }, { "start": 2236, "end": 2243.92, "text": " function they get an input as you can see and a label and a number of inference steps so they start" }, { "start": 2243.92, "end": 2252.96, "text": " out by and this this is labeled a bit a bit different so they have these mus and the outs" }, { "start": 2253.52, "end": 2262.32, "text": " and these prediction errors and the predictions and we're going to see how that works so first of all" }, { "start": 2262.32, "end": 2267.2000000000003, "text": " they go through the layers right here and i'm going to use my mouse they go through the layers" }, { "start": 2267.2000000000003, "end": 2271.92, "text": " right here and you can see they simply forward propagate the signal so they always take this" }, { "start": 2271.92, "end": 2280.1600000000003, "text": " mu of the last layer they forward propagate it to get the mu on the layer plus one and the outputs" }, { "start": 2280.1600000000003, "end": 2287.6000000000004, "text": " are simply cloned from the mus so these must be our news before or our v's whatever you want to" }, { "start": 2287.6, "end": 2294.24, "text": " call them so one one is going to be the initial guess and the other one is going to be the guess" }, { "start": 2294.24, "end": 2301.52, "text": " that we iteratively refine okay in fact the mu here is going to be the guess that we iteratively" }, { "start": 2301.52, "end": 2310.16, "text": " refine at the beginning we simply set them to be the same okay and then the last layer here we" }, { "start": 2310.16, "end": 2317.2, "text": " put at the label and then the prediction errors that's going to be yeah that's going to be the" }, { "start": 2317.2, "end": 2323.92, "text": " the error variables so the last prediction error is going to be the derivative of our loss function" }, { "start": 2323.92, "end": 2331.2, "text": " with respect to the last layer and now we start this iterative algorithm so here you see we go" }, { "start": 2331.2, "end": 2337.04, "text": " through this number of inference steps train which is going to be like a hundred or so so a hundred" }, { "start": 2337.04, "end": 2346.64, "text": " times we're going to update each of our guesses of the intermediate layers then here is what i said" }, { "start": 2346.64, "end": 2353.36, "text": " we're going through the layers in reverse order so a hundred times we're going from back to front" }, { "start": 2353.36, "end": 2361.44, "text": " back to front back to front back to front and we do that so here you can see what the first thing" }, { "start": 2361.44, "end": 2369.68, "text": " we do is we come we compute the current error okay which is the difference between the guess that we" }, { "start": 2369.68, "end": 2375.68, "text": " currently have and the initial guess that we had during forward propagation this is going to be" }, { "start": 2376.4, "end": 2382.4, "text": " zero for most of the layers at the beginning except the last layer right in the last layer" }, { "start": 2382.4, "end": 2394.2400000000002, "text": " we've actually put we've actually put the the 
mu to something else than the output and thus this" }, { "start": 2394.2400000000002, "end": 2400.2400000000002, "text": " error is going to it's beginning at zero at each layer as the guesses are the same but then we're" }, { "start": 2400.2400000000002, "end": 2406.1600000000003, "text": " going to refine and refine and refine and sort of this error of the last layer is going to iteratively" }, { "start": 2406.16, "end": 2413.6, "text": " propagate through the network to the from the back to the front multiple in an iterative fashion so" }, { "start": 2413.6, "end": 2421.8399999999997, "text": " multiple times so once we have the prediction error we're going to backward this through the" }, { "start": 2421.8399999999997, "end": 2431.04, "text": " layers and this backward here that is sort of that is this this backward edge we saw where did we see" }, { "start": 2431.04, "end": 2438.72, "text": " this so this backward is going to be the this local derivative in this graph the backward is going to" }, { "start": 2438.72, "end": 2444.72, "text": " be the the red thing right here so we take the error of the next layer and we're going to" }, { "start": 2446.4, "end": 2452.08, "text": " we're going to see how do we need to change the current guess in order to make the next" }, { "start": 2452.08, "end": 2460.8, "text": " layers error be a little bit smaller okay so that's the going to be the backward function and we can" }, { "start": 2460.8, "end": 2472.6400000000003, "text": " actually look at the backward function of let's say yeah here so this is the backward function of a" }, { "start": 2472.6400000000003, "end": 2478.4, "text": " fully connected layer this is the projection layer there is a fully connect here is there is a fully" }, { "start": 2478.4, "end": 2485.36, "text": " connected layer and the f is going to be the non-linearity and the df is going to be the" }, { "start": 2486, "end": 2490.48, "text": " derivative of the non-linearity so in the forward you can see what we're doing is we're" }, { "start": 2490.48, "end": 2496.56, "text": " multiplying the input by the weights and then we're going to save the activations and simply" }, { "start": 2497.12, "end": 2503.2, "text": " propagate them through the non-linearity in the backwards we're going to take the activations" }, { "start": 2503.2, "end": 2508.2400000000002, "text": " this the forward activation then we're going to shove them through the derivative of the" }, { "start": 2508.2400000000002, "end": 2514.88, "text": " non-linearity and this is why i pointed out this is this Hebbian learning rule so first i was a bit" }, { "start": 2514.88, "end": 2520.7200000000003, "text": " confused why do we use the forward activations and shove them through the derivative of the" }, { "start": 2520.7200000000003, "end": 2529.6, "text": " non-linearity but this is exactly this is simply because they've derived that this is the correct" }, { "start": 2529.6, "end": 2537.44, "text": " local gradient okay and then we have this right this is the local gradient of the layer and we're" }, { "start": 2537.44, "end": 2543.92, "text": " going to multiply that by the weights so this completes the formula that we had right here for" }, { "start": 2543.92, "end": 2552.32, "text": " these Hebbian updates this thing so these are the activations this is the derivative of the forward" }, { "start": 2552.32, "end": 2560.48, "text": " layer we're going to multiply that by the weight again so this is now the complete derivative the" }, { "start": 2560.48, 
"end": 2568.2400000000002, "text": " complete local derivative which is this thing i've already circled 50 billion times right here" }, { "start": 2568.24, "end": 2574.16, "text": " and all we need to do now is we need to multiply this by the error in prior prediction error in" }, { "start": 2574.16, "end": 2581.68, "text": " that layer and then we get an idea of how do we need to change this node such that in this one" }, { "start": 2581.68, "end": 2589.2, "text": " child and there can be many children such that in this one child we make a little bit less error" }, { "start": 2589.2, "end": 2600.7999999999997, "text": " okay so that's why we multiply this by e right here so e is the the error okay and that will be" }, { "start": 2600.7999999999997, "end": 2609.04, "text": " the backwards thing so backwards simply tells the parent how it needs to change the child sorry how" }, { "start": 2609.04, "end": 2615.8399999999997, "text": " it needs to change itself such that the child is a little bit happier and since this is a forward" }, { "start": 2615.84, "end": 2622, "text": " you know a cnn we don't have multiple children we simply have one child per parent so we have a list" }, { "start": 2622, "end": 2631.28, "text": " and these predictions as you can see we simply take the prediction error of layer j plus one we" }, { "start": 2631.28, "end": 2637.6000000000004, "text": " backward it so how do we need to change this layer in order to make it a little bit more commensurate" }, { "start": 2637.6, "end": 2646, "text": " with the child and then here is this trade-off so the trade-off between the prediction error so" }, { "start": 2646.88, "end": 2652.88, "text": " how close am i to my original guess i don't want to go too far away right because i assume my" }, { "start": 2652.88, "end": 2657.92, "text": " original guess isn't too bad in fact there's a gaussian likelihood model how i want to stay" }, { "start": 2657.92, "end": 2664.08, "text": " close to that but also i want to go into the direction such that i make the next layer happier" }, { "start": 2664.08, "end": 2670.88, "text": " okay so this is this fundamental trade-off it's computed right here and it's it's this minus sign" }, { "start": 2672.3199999999997, "end": 2681.2799999999997, "text": " and then at the end this is the inference learning rate and i simply go into that direction of this" }, { "start": 2681.2799999999997, "end": 2689.44, "text": " trade-off okay so i update the current the guess of the current node like this and as i said i go" }, { "start": 2689.44, "end": 2694.4, "text": " through the network back to front back to front back to front back to front until i reach some" }, { "start": 2694.4, "end": 2700.16, "text": " sort of equilibrium and only when i reach equilibrium or in this case after this many steps" }, { "start": 2701.04, "end": 2709.28, "text": " i then update the weights and the update weights function that's very similar i think here here is" }, { "start": 2709.28, "end": 2720.4, "text": " update weights that is simply i each layer i input the prediction error of that layer and" }, { "start": 2720.96, "end": 2728.5600000000004, "text": " that layer calculates this function right here in much a similar way than you just than you just saw" }, { "start": 2728.56, "end": 2742.96, "text": " maybe we can look at one of them let's go this is layers let's go here fully connected layer" }, { "start": 2742.96, "end": 2748.48, "text": " okay and you're going to see this Hebbian learning rule again so 
activations through the derivative" }, { "start": 2750.32, "end": 2757.2799999999997, "text": " and so now instead of so there's a little bit of a difference to before right but the difference" }, { "start": 2757.28, "end": 2766.88, "text": " isn't isn't large right so activations multiplied by through this and then multiplied by the inputs" }, { "start": 2766.88, "end": 2773.44, "text": " instead of the weights so that's that's that so this multiplied by the inputs instead of the weights" }, { "start": 2773.44, "end": 2782.5600000000004, "text": " then multiplied by e which is so this here multiplied by the error term right here" }, { "start": 2782.56, "end": 2793.2799999999997, "text": " and that's going to be our local update okay cool so that's the code that's predictive coding" }, { "start": 2793.2799999999997, "end": 2800.56, "text": " and you know the challenge is it's not that these people propose this as a true alternative to" }, { "start": 2800.56, "end": 2808.56, "text": " back prop but it is a step in a direction of saying look the brain with its more Hebbian nature and" }, { "start": 2808.56, "end": 2815.92, "text": " its more local updates and so on it could actually be doing something much more close to back prop" }, { "start": 2815.92, "end": 2821.12, "text": " than we thought because people thought well back prop is impossible in the brain therefore" }, { "start": 2821.52, "end": 2830, "text": " the brain can't be doing back prop right and now we see that actually the brain can do something" }, { "start": 2830, "end": 2837.2, "text": " possibly it's not proven but it's possible that the brain does something that approximates the" }, { "start": 2837.2, "end": 2845.3599999999997, "text": " back prop gradient actually arbitrarily if you know if all of these if these some assumptions are given" }, { "start": 2845.3599999999997, "end": 2852.72, "text": " but that's sort of the the results and they also show it's quite robust to learning rate changes" }, { "start": 2852.72, "end": 2856.64, "text": " and so on as we said we can go pretty deep even though this is this kind of iterative" }, { "start": 2856.64, "end": 2862.8799999999997, "text": " guessing algorithm under these Gaussian assumptions and there is variational approximation" }, { "start": 2862.88, "end": 2873.12, "text": " it is fairly robust and all so this goes this sort of puts the ball back into maybe the brain is" }, { "start": 2873.12, "end": 2880.32, "text": " doing something very close to back prop or at least getting the same results getting the same" }, { "start": 2880.32, "end": 2887.52, "text": " parameter updates as back prop so i hope that wasn't too confusing i've tried to tackle it from" }, { "start": 2887.52, "end": 2895.68, "text": " many angles and maybe after seeing the code you see it a little bit more clearly if not let me know" }, { "start": 2895.68, "end": 2918.3199999999997, "text": " open for questions as always and bye bye" } ]
Pm93D8CVlY8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
This A.I. creates infinite NFTs
[ "Science & Technology" ]
[]
#nft #gan #ai Today we build our own AI that can create as many bored apes as we want! Fungibility for everyone! Try the model here: https://huggingface.co/spaces/ykilcher/apes or here: https://ykilcher.com/apes Files & Models here: https://huggingface.co/ykilcher/apes/tree/main Code here: https://github.com/yk/apes-public (for the "what's your ape" app, look for the file interface_projector.py) This video is sponsored by BrightData, use this link for free credits: https://brightdata.grsm.io/yannickilcher OUTLINE: 0:00 - Introduction 2:05 - Generative Adversarial Networks 3:40 - Scraping Opensea with BrightData 7:55 - Training the GAN 11:35 - Here are the results! 15:20 - Diving deeper into BrightData References: Stylegan 3 imagery: https://nvlabs.github.io/stylegan3/ Bored Ape Yacht Club NFT Collection: https://opensea.io/collection/boredapeyachtclub Better GANFT model: https://medium.com/@nathancooperjones/these-bored-apes-do-not-exist-6bed2c73f02c Abstract AI-created apes: https://opensea.io/collection/gan-apes-nft https://mobile.twitter.com/gannft Another good model: https://twitter.com/cyrilzakka/status/1463944040878071811 StyleGAN2 versions: https://thispersondoesnotexist.com/ https://thissneakerdoesnotexist.com/ https://thischairdoesnotexist.com/ GANs: https://en.wikipedia.org/wiki/Generative_adversarial_network https://arxiv.org/pdf/1406.2661.pdf StyleGAN3: https://nvlabs.github.io/stylegan3/ StyleGAN2 code: https://github.com/NVlabs/stylegan2-ada-pytorch CLIP: https://openai.com/blog/clip/ DALL-E 2 images: https://twitter.com/search?q=%23dalle&f=image My music video: https://www.youtube.com/watch?v=2iq7WXSw26s BrightData Links: https://brightdata.com/products/data-collector https://brightdata.com/testimonials https://brightdata.com/use-cases/adtech https://brightdata.com/use-cases/social-media-for-marketing https://brightdata.com/use-cases/ecommerce Links: Merch: https://ykilcher.com/merch TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this), find options at https://ykilcher.com
This ape does not exist. Neither does this one, this one, this, this, this or this. In fact, I've created all of them using an AI that I trained myself. And today I'm going to show you how it's done and what other cool things you can do with this. Hi there, my name is Yannic. Welcome to the channel. Today I'm going to walk you through how I built the GAN-NFT AI and how you can use it. It's all available online. So you know, if you want, go check it out. This video is sponsored by Bright Data. Use my link to sign up to them and get $25 in free credits, and they'll match your first deposit up to $250. Thanks Bright Data for sponsoring this video. I'll tell you more about them in just a second. NFTs have obviously been super popular and these bored apes are the pinnacle of it. And you know what power we have with our AI: we are going to be rich, we're going to give you an ape and then another ape and another one. It'll be like, you get an ape, and you get an ape, and you get an ape; apes just all the way. Funge, funge, everything's fungible. Now, needless to say, once it's done, we're gonna end up with a model and I'll just put it out there. You can go to the model, and every time you click Submit, you'll get a new instance of some creation of that model. It's not perfect, but it's pretty good. But given that this is an AI model, we can actually do more than just generate new apes. For example, take a look at this ape that was generated by my model and this ape that was generated by my model. What we can do is we can look at what the model thinks are all the in-between apes between the two. This is generally called an interpolation. It's pretty cool to explore what the model learns and how it sees the world. Now needless to say, I'm not the first person that does this, nor is my model the best model. There have been people who have investigated this much more and have put more work into it, and I'm not going to be able to mention all of them right here. But Nathan Cooper Jones has a very cool Medium article on his investigations on the Bored Ape collection and GANs, and so has Cyril Zakka on Twitter. So the technique we're going to use today to make our AI is called a generative adversarial network, a GAN, which is the same methodology that powers websites like thispersondoesnotexist.com, where every time you refresh, you get a new artificially generated face. But there's more: there is thissneakerdoesnotexist.com, thischairdoesnotexist.com, and pretty much anything you can think of. So GANs, generative adversarial networks, were first invented in... well, let's not talk about that right now. They were first mentioned in a 2014 paper by Ian Goodfellow and collaborators called Generative Adversarial Nets. And oh boy, in recent years, they have made progress. So these were from the original paper: you can see you can barely make out a face, it's okay at generating digits, but anything else is way out of scope. And here, just a couple of years later, as you can see right here, these things have gone insane. The pictures they produce are almost impeccable. They're very versatile. And they're at the forefront of computer-generated imagery. Very briefly, a GAN consists of two neural networks, one called the generator and one called the discriminator. And while the generator tries to produce these fake images, the discriminator tries to differentiate those fake images from real images from a data set.
Now, as the discriminator gets better at discerning what is real and what is fake, the generator in turn gets better at fooling the discriminator. And therefore both neural networks get better and better and better. And at the end, the generator is really good, as you can see right here. So the first thing we're going to need is data. In fact, what we're going to do is we're going to go to OpenSea and we're going to collect the Bored Ape Yacht Club from that website. The Bored Ape Yacht Club is an NFT collection on OpenSea. It consists of 10,000 of these apes, and each one of the apes comes with its own attributes and properties. As you can see, they are procedurally generated, but only certain combinations exist, and certain attributes are much more rare than others. Now they do have an API, but I don't trust APIs, I want to get the data directly from the website. And that's what we're going to use Bright Data for. Bright Data offers scalable, robust collection of public web data as a service. This is really, really cool and can save you a lot of troubles. They really have everything you need in order to collect data. For example, they maintain a vast network of proxies all over the world and from any kind of device. So you're really not limited in what you can collect. At the heart of their service is definitely the data collection engine. They have various levels of difficulty in how you can interact with them. Naturally, since I'm a nerd, I'm going to go for the programming layer, which is the lowest layer, but it's also the most fun layer. So here's my scraper for OpenSea's Bored Ape Yacht Club. So let me show you what I did. So the code on top here simply says that I want to use a proxy in the US and I want to go to the Bored Ape Yacht Club website, then I want to wait until the navigation action has completed. So essentially, I've arrived at the website. Now it turns out that OpenSea is actually one of the more difficult websites to scrape because it's very, very dynamic. Like watch what happens when I reload the page: the page already loads, but then the items load individually. Moreover, if I scroll down, you can see that constantly new apes are being added instead of these placeholders. This is called an infinite scroll, even though I guess it's not infinite. But it means that you can't just load the website once and have all the apes available, you need to do so in a stepwise fashion. So yes, it's going to be a bit more tricky than just loading up the website and scraping the content. But hey, that's what we're here for, nothing that a little bit of codey-codey magic can solve. So we've got to instruct our scraper that it waits, you know, just a bit more after it has arrived at the website. Now the code you're seeing here is mostly JavaScript, but Bright Data has introduced a bunch of utility functions, like this navigate thing up here, or the wait function here, which we're going to use right now. They're going to wait for the grid to initially become available, which means that the first set of apes has been loaded, and we're then going to call the parse function right here. And the parse function is one of the main functions of data collection. Essentially, it goes to the website and collects some data from it as-is. You can see down here what we are selecting. And if your CSS-fu is good, you'll realize that we're going for this counter here. This counter tells us how many total apes there are. And why is that important for scraping?
Well, you see, if you open a bunch of them, you can see that the different URLs here all have an ending that is different, but then a prefix that is the same. So my suspicion was that they're probably numbered from zero to 9,999, and we could just iterate through all of them in order to get them. And yes, I was right. So all we have to do then is loop from one to whatever that number of total grid cells is and call the next stage. Every Bright Data scraper is divided into stages, and you could probably already guess that the second stage deals with collecting an individual ape. Now that's a lot easier than before. All we do is we navigate to the URL, we wait for the summary to be ready, we wait for the history panel to be ready, and then we call parse. Now, as you can see, we are collecting quite a bit more data than before. So I not only want the image of the ape, I also want its attributes, and I want the price of when it was last sold, which I'm going to get from this table right here. See, whenever it says sale, that's when the ape was sold: 78 Ether to Gary V. All right, well, you do you. And while we're not going to use the attributes or price today, it is valuable data for our future endeavors. Alright, so once I have my scraper, all I gotta do is go to the scraper, say initiate, and off it goes, starting and collecting. Now that we have the data, the next thing we need is some code. And I could write it myself. However, I'm not in the mood to do so. So I'm going to go over to NVIDIA and get the official implementation of StyleGAN2-ADA, which already has excellent code available on GitHub. Not only do they have code, they have a very thorough readme that describes how you can use their code and how you train your own stuff. So after converting the images using their data set tool, essentially, it's just a matter of calling train.py. I know, I wish machine learning was more interesting. But this is it. So off went my first training run. You can see that the loss of the discriminator starts up high, goes down low, and then starts rising again. I don't know, is that good? Is that bad? While the generator's loss starts low, goes high, and then drops down. Well, GAN training is one of these things where the metrics are a bit like tea leaf reading, and there's not too much indication that you can go by of whether your model does something well or not. One of the metrics that is sometimes useful is the FID. And as you can see right here, the FID of my model quickly dropped down, which is good, a low FID is good, but then quickly went up again after only a few hundred steps. So that concerned me. And then I looked at the output data. So the code base will actually sample, every couple of hundred steps, a new batch of images, so that you can see what progress your model makes. At the very beginning, it's just noisy gibberish, as you can see right here. But very quickly, it gets the idea of what it should do approximately. This already looks quite promising. But then as it went on, you can see that... what is this? Why is everything turned to the side? Now to this day, I don't really know why this is turned to the side. I suspect it's part of the data augmentation that sometimes turns images to the side, although I haven't checked that that's the case. So clearly, this was a failure and a collapse. I had to start again, I tweaked the hyperparameters a little bit, and then a second run went much, much better. Yeah, this is the last step of the first run.
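Before looking at that last sample, a quick aside to make the adversarial game behind those loss curves concrete. Here is a minimal sketch of a single GAN training step, assuming PyTorch: `G` and `D` are stand-in modules, not the actual StyleGAN2-ADA networks, and the real code base adds many tricks (R1 regularization, adaptive augmentation, and so on) on top of this skeleton.

```python
import torch
import torch.nn as nn

# Stand-in networks; the real project uses NVIDIA's StyleGAN2-ADA instead.
G = nn.Sequential(nn.Linear(128, 64 * 64 * 3), nn.Tanh())  # latent -> flattened fake image
D = nn.Sequential(nn.Linear(64 * 64 * 3, 1))               # image -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    """One adversarial step; `real` is a (batch, 64*64*3) tensor of scraped apes."""
    batch = real.size(0)
    z = torch.randn(batch, 128)                 # sample latent codes
    fake = G(z)

    # Discriminator: push real images toward label 1, generated images toward 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on generated images.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

As both losses push against each other, neither curve is supposed to go to zero, which is exactly why reading them is such tea-leaf territory.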
That last step got like a bit different, but in a weird way. So off I go, starting again. So for the second run, I changed some hyperparameters around, I did some tweaky-tweaky, codey-codey, you know, like us machine learners do, and very quickly, that model became better. You can see already that the diversity is higher from the beginning. And after only a few steps, we got something really neat going. You can see it still makes a lot of mistakes, there are a lot of artifacts in here, however, it's clearly going into the correct direction. In fact, remember that FID metric that I showed you before? Well, the orange line here is the one of the new model. So you can see, as the blue one gets worse again, the orange one just continues to drop. This is really good, really nice. It goes down, it goes towards zero, down further and further. Now, I have no comparison because there's not a lot of academic effort into producing Bored Apes, so I have no clue how good an FID of nine is. But I like the shape of the graph, and that's important. So as you can see, by step 9000 or so the model was getting pretty decent, and I was hopeful, but I just wanted to see what happens when I let it train for longer. And in hindsight, I shouldn't have. I mean, check out when I zoom out. Ouch. But you know, this is normal, every GAN will collapse at some point. And in fact, the checkpoints that I've put online for my project, which you can also download, are definitely from the regions where it hasn't collapsed yet. Now I've done a few more runs where I managed to get it training for even longer before it collapsed, such as the green or the red one right here. But all of these things will give quite satisfying results. So I was happy. So what are the results? This is a Hugging Face Space, I've uploaded my model there. And you can go to it, you can click on the button, and every time you click, you get a newly produced ape. This ape is produced in this instance; the same ape has never been produced before and will never be produced after. So this is fully yours, and it's absolutely fungible. I'm not going to mint these things as NFTs or anything like this, just download it. You can definitely produce more than one image. For example, if you set it to three, it will give you a grid of three images. And if you click the interpolate checkmark, it will generate two images and then generate everything in between. You see, very funny. Now, because this is not the full experience of fungibility, I've also made a little website. So this is ykilcher.com/apes. If you go to this, there's nothing different: every time you refresh, you get a new ape. In fact, it calls the same API. However, if you click download right here... oh, well, you're just going to have to try it for yourself. And here's another fun thing that you can do with this. This is a little application that I call what's your ape. And what you can do is you can go here, you can input a little image of whatever you want. It doesn't have to be me, but you know, it better be me, and it will generate the ape that corresponds to your picture the most. This is really fun. I've only put 250 steps here, I'd usually put 1000 steps, then the quality is a bit higher. It doesn't always work, you sometimes have to retry, but if you do retry, you get different apes. And it's quite fun: you get a little video of how the AI searches through the latent space in order to match your picture.
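As a side note, the interpolate feature above boils down to a simple linear walk between two latent codes. A minimal sketch, again assuming PyTorch and a stand-in generator `G` like before (StyleGAN models usually interpolate in the intermediate W space for smoother results, but plain z-space interpolation shows the idea):

```python
import torch

def interpolate_apes(G, steps=8, latent_dim=128):
    """Sample two apes and decode everything in between them."""
    z0, z1 = torch.randn(1, latent_dim), torch.randn(1, latent_dim)  # the two endpoints
    images = []
    with torch.no_grad():
        for i in range(steps):
            t = i / (steps - 1)              # interpolation factor from 0.0 to 1.0
            z = (1 - t) * z0 + t * z1        # linear walk through latent space
            images.append(G(z))              # decode the in-between latent code
    return images
```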
The technology behind this ape-matching app that I had to add is OpenAI's CLIP model. CLIP is trained on text-image pairs and therefore understands what's inside an image much better than, for example, a classic ImageNet-trained ResNet. By using CLIP and backpropagating into the GAN, I'm able to search the latent space of the GAN for a picture that is as similar as possible, in the eyes of the CLIP model, to the picture that I input. What my app does is it tries to match how CLIP sees the image you have input and how CLIP sees the image that is output from the GAN. I've used a very similar technique to generate my music video, so go check that out for a more in-depth explanation. And the same technique has powered a lot of recent AI art, for example DALL-E 2 by OpenAI. If you search on Twitter for the hashtag #dalle, you can get some amazing outputs of this model, which doesn't use a GAN, but does use CLIP as a central part of its architecture. Now, due to this being quite heavy in compute, I cannot exactly put this on a Hugging Face Space, it would just take too long. You actually need a local GPU and some time: 1000 steps take roughly two minutes or so. But if you can, give it a try. Again, it doesn't always work, but it's fun when it does. And here are some more cool results that I got with it. Alright, this was it for today's video. Thank you so much for being here. Let me know if you like project-report kind of style videos like this. I've put all the code and checkpoints and whatever online, and I've put links to everything I mentioned in the description. Please go check it out. Thank you so much again to Bright Data for sponsoring this video. It's really cool to have them on board. In a second, I'm just going to show you a couple more things you can do with them, just in case you're interested. They have a really established infrastructure for collecting public data, and the possibilities of what you can do with it are almost endless. People use this, for example, to verify that the ads that they place online really reach their target audience, by scraping from the perspective of their target audience. This is a really cool idea. I would have never thought of this. Another example: you can go out there to e-commerce websites, collect pricing data, aggregate this from all over the web, and either let this influence your pricing or offer your customers a better deal. I mean, so many things are possible with cool web scraping technology. And if you can do this at scale, regularly and automatically, that is mighty, mighty powerful. Now I've given a shot at collecting some other data by myself. I'm going to show you that now. So stay tuned. And I wish you the best. Again, many thanks to today's sponsor, Bright Data. Now let me show you a little bit more of what you can do with their platform. I've gone by far the most difficult and the most cumbersome route to use their platform in the project today; it is usually much easier, which you're going to see right now. So if I go to their platform, and I go to collectors, I add a new collector, and there are all kinds of collectors already predefined: all the big social media companies, all the e-commerce companies, Amazon and eBay, all the hotel pages. Everything already has predefined collectors for you. So many of the things that you would possibly want to scrape will already have a scraper defined; all you need to do is enter a few details and off you go. For example, here I can scrape myself a data set of Instagram posts that have the hashtag AI art.
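Coming back to the what's-your-ape app for a moment, a rough sketch of that CLIP-guided latent search could look like the following. This assumes OpenAI's `clip` package and a stand-in generator `G`; in practice you also have to resize and normalize `G`'s output (and handle dtypes) to match what `encode_image` expects, which is elided here.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI's CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def match_ape(G, target_image, steps=1000, lr=0.05, latent_dim=128):
    """Optimize a latent code so CLIP sees G(z) roughly like the target image.

    `target_image` is a PIL image; `G` is assumed to map a latent batch to a
    CLIP-ready image batch.
    """
    with torch.no_grad():
        target = model.encode_image(preprocess(target_image).unsqueeze(0).to(device))
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        feats = model.encode_image(G(z))
        # Move z so the generated image's CLIP embedding matches the target's.
        loss = -F.cosine_similarity(feats, target, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```

Note that only `z` is optimized; both CLIP and the generator stay frozen, which is why each query is just a short gradient descent rather than any retraining.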
People upload pictures to Instagram with that hashtag whenever they make some art with AI and want to show it to the world, and I just want to download it all. So with Bright Data, super easy: I simply go to the collector that's made for scraping hashtags on Instagram, I enter AI art, I say how many posts I want, and off I go. I get a neat JSON file at the end with everything that I'd want to know about these posts. Or here, what if I have some new business idea, like Airbnb for campsites? I might want to research a lot about which campsites are in which area, how expensive they are, how occupied they are, and so on. So I might want to regularly scrape all of the campgrounds around certain regions. No problem. In fact, Bright Data has a scraper already prepared for that too: simply select the scraper, enter the locations you'd like to know about, and off you go. You can set these scrapers to run manually or on a schedule and then export the data to wherever you want: into your cloud, they can send it to you as an email, or you can download it yourself, whatever you like. So not only do they have predefined scrapers, they've actually let their scrapers run on a lot of public-facing websites and scraped all public data from those. For example, you can see there are a lot of data sets available. One of them is this LinkedIn company data set. So this is a registry of over 50 million companies and all the publicly available data that's on LinkedIn. Now, whether you're a recruiter, or looking for a new job, or looking to sell something to businesses, this data is really valuable. Now, this is only a small set of the features that Bright Data offers; they just make collecting data from the internet a whole lot easier. So thanks again so much to Bright Data for sponsoring this video. Please check them out, there's a link in the description. I'm very sure you'll be pleasantly surprised. With that, I'll see you around. Bye bye.
[ { "start": 0, "end": 6.08, "text": " This ape does not exist. Neither does this one, this one, this, this, this or this. In fact," }, { "start": 6.08, "end": 10.88, "text": " I've created all of them using an AI that I trained myself. And today I'm going to show you" }, { "start": 10.88, "end": 15.36, "text": " how it's done and what other cool things you can do with this. Hi there, my name is Yannick. Welcome" }, { "start": 15.36, "end": 21.28, "text": " to the channel. Today I'm going to walk you through how I built the GANFTAI and how you can use it." }, { "start": 21.28, "end": 24.72, "text": " It's all available online. So you know, if you want, go check it out." }, { "start": 24.72, "end": 34.64, "text": " This video is sponsored by Bright Data. Use my link to sign up to them and get $25 in free credits," }, { "start": 34.64, "end": 40.16, "text": " and they'll match your first deposit up to 250. Thanks Bright Data for sponsoring this video." }, { "start": 40.16, "end": 45.28, "text": " I'll tell you more about them in just a second. NFTs have obviously been super popular and these" }, { "start": 45.28, "end": 51.68, "text": " bored apes are the pinnacle of it. And you know what power we have with our AI. We are going to" }, { "start": 51.68, "end": 57.28, "text": " be rich, we're going to give you an ape and then another ape and another one. Like if these are" }, { "start": 57.28, "end": 62.480000000000004, "text": " apes will be like you get an ape and you get an ape and you get an ape and ape, apes just all the" }, { "start": 62.480000000000004, "end": 68.72, "text": " way. Funge, funge, everything's fungible. Now, needless to say, once it's done, we're gonna be" }, { "start": 68.72, "end": 74, "text": " ending up with a model and I'll just put it out there. You can go to the model every time you" }, { "start": 74, "end": 79.2, "text": " click Submit, you'll get a new instance of some creation of that model. It's not perfect, but it's" }, { "start": 79.2, "end": 84.88, "text": " pretty good. But given that this is an AI model, we can actually do more than just generate new" }, { "start": 84.88, "end": 90.4, "text": " ape. For example, take a look at this ape that was generated by my model and this ape that was" }, { "start": 90.4, "end": 97.12, "text": " generated by my model. What we can do is we can look at what the model thinks are all the in between" }, { "start": 97.12, "end": 101.84, "text": " apes between the two. This is generally called an interpolation. It's pretty cool to explore what" }, { "start": 101.84, "end": 107.36, "text": " the model learns and how it sees the world. Now needless to say, I'm not the first person that" }, { "start": 107.36, "end": 112.16, "text": " does this nor is my model the best model that I've been people who have investigated this" }, { "start": 112.16, "end": 116.48, "text": " much more and have put more work into it. And I'm not going to be able to mention all of them" }, { "start": 116.48, "end": 122.64, "text": " right here. But Nathan Cooper Jones has a very cool medium article on his investigations on the" }, { "start": 122.64, "end": 129.12, "text": " board ape collection and GANs and so has serial sucker on Twitter. 
So the technique we're going" }, { "start": 129.12, "end": 134.96, "text": " to use today to make our AI is called a generative adversarial network, a GAN, which is the same" }, { "start": 134.96, "end": 141.04000000000002, "text": " methodology that powers websites like this person does not exist.com, where every time you refresh," }, { "start": 141.04000000000002, "end": 147.28, "text": " you get a new artificially generated face. But there's more there is this sneaker does not exist.com" }, { "start": 147.28, "end": 153.12, "text": " this chair does not exist.com and pretty much anything you can think of. So GANs generative" }, { "start": 153.12, "end": 161.04000000000002, "text": " adversarial networks were first invented in Well, let's not talk about that right now." }, { "start": 161.04, "end": 166.79999999999998, "text": " They were first mentioned in a 2014 paper by Ian Goodfellow and collaborators called generative" }, { "start": 166.79999999999998, "end": 172.88, "text": " adversarial nets. And oh boy, in recent years, they have made progress. So these were from the" }, { "start": 172.88, "end": 178.72, "text": " original paper, you can see you can barely make out a face, it's okay at generating digits, but" }, { "start": 178.72, "end": 184.39999999999998, "text": " anything else is way out of scope. And they're just a couple of years later, as you can see right here," }, { "start": 184.39999999999998, "end": 189.12, "text": " these things have gone insane. The pictures they produce are almost impeccable. They're very" }, { "start": 189.12, "end": 195.36, "text": " versatile. And they're at the forefront of computer generated imagery. Very briefly, a GAN consists" }, { "start": 195.36, "end": 200.16, "text": " of two neural networks, one called the generator and one called the discriminator. And while the" }, { "start": 200.16, "end": 206.16, "text": " generator tries to produce these fake images, the discriminator tries to differentiate those fake" }, { "start": 206.16, "end": 212.56, "text": " images from real images from a data set. Now, as the discriminator gets better at discerning what" }, { "start": 212.56, "end": 217.20000000000002, "text": " is real and what is fake, the generator in turn gets better at fooling the discriminator. And" }, { "start": 217.2, "end": 222.07999999999998, "text": " therefore both neural networks get better and better and better. And at the end, the generator" }, { "start": 222.07999999999998, "end": 227.28, "text": " is really good, as you can see right here. So the first thing we're going to need is data. In fact," }, { "start": 227.28, "end": 231.2, "text": " what we're going to do is we're going to go to open sea and we're going to collect the board" }, { "start": 231.2, "end": 236.88, "text": " apes Yacht Club of that website. The board apes Yacht Club is a NFT collection on open sea," }, { "start": 236.88, "end": 243.28, "text": " it consists of 10,000 of these apes, each one of the ape comes with its own attributes and properties," }, { "start": 243.28, "end": 248.56, "text": " as you can see, they are procedurally generated, but only certain combinations exist and certain" }, { "start": 248.56, "end": 253.84, "text": " attributes are much more rare than others. Now they do have an API, but I don't trust APIs," }, { "start": 253.84, "end": 258.48, "text": " I want to get the data directly from the website. 
And that's what we're going to use Bright Data for" }, { "start": 258.48, "end": 265.04, "text": " Bright Data offers scalable, robust collection of public web data as a service. This is really," }, { "start": 265.04, "end": 270.08, "text": " really cool and can save you a lot of troubles. They really have everything you need in order" }, { "start": 270.08, "end": 275.84, "text": " to collect data. For example, they maintain a vast network of proxies all over the world and from" }, { "start": 275.84, "end": 280.8, "text": " any kind of device. So you're really not limited and what you can collect, though at the heart of" }, { "start": 280.8, "end": 286.56, "text": " their service is definitely the data collection engine. They have various levels of difficulties" }, { "start": 286.56, "end": 290.79999999999995, "text": " of how you can interact with them naturally, since I'm a nerd, I'm going to go for the programming" }, { "start": 290.79999999999995, "end": 295.84, "text": " layer, which is the lowest layer, but it's also the most fun layer. So here's my scraper for open" }, { "start": 295.84, "end": 300.4, "text": " seas board a yacht club. So let me show you what I did. So the code on top here simply says that I" }, { "start": 300.4, "end": 305.91999999999996, "text": " want to use a proxy in the US and I want to go to the board a yacht club website, then I want to wait" }, { "start": 305.91999999999996, "end": 310.96, "text": " until the navigation action has completed. So essentially, I've arrived at the website. Now it" }, { "start": 310.96, "end": 316.4, "text": " turns out that open sea is actually one of the more difficult websites to scrape because it's very," }, { "start": 316.4, "end": 321.59999999999997, "text": " very dynamic. Like watch what happens when I reload the page, the page already loads, but then the" }, { "start": 321.6, "end": 328, "text": " items load individually. Moreover, if I scroll down, you can see that constantly new apes are" }, { "start": 328, "end": 332.88, "text": " being added instead of these placeholders. This is called an infinite scroll, even though I guess" }, { "start": 332.88, "end": 337.20000000000005, "text": " it's not infinite. But it means that you can't just load the website once and you have all the" }, { "start": 337.20000000000005, "end": 341.68, "text": " apes available, you need to do so in a stepwise fashion. So yes, it's going to be a bit more" }, { "start": 341.68, "end": 346.40000000000003, "text": " tricky than just loading up the website and scraping the content. But hey, that's what we're" }, { "start": 346.4, "end": 352.15999999999997, "text": " here for nothing that a little bit of Cody Cody magic can solve. So we've got to instruct our scraper" }, { "start": 352.15999999999997, "end": 356.79999999999995, "text": " that it waits for you know, just a bit more after it has arrived at the website. Now the code you're" }, { "start": 356.79999999999995, "end": 361.91999999999996, "text": " seeing here is mostly JavaScript, but bright data has introduced a bunch of utility functions like" }, { "start": 361.91999999999996, "end": 366.71999999999997, "text": " this navigate thing up here, or the weight function here, which we're going to use right now," }, { "start": 366.71999999999997, "end": 371.84, "text": " they're going to wait for the grid to initially become available, which means that the first set" }, { "start": 371.84, "end": 376.96, "text": " of apes has been loaded, we're then going to call the parse function right here. 
And the parse function" }, { "start": 376.96, "end": 382.15999999999997, "text": " is one of the main functions of data collection, essentially, it goes to the website and collect" }, { "start": 382.15999999999997, "end": 388.71999999999997, "text": " some data from it as it is, you can see down here what we are selecting. And if your CSS foo is good," }, { "start": 388.71999999999997, "end": 393.91999999999996, "text": " you'll realize that we're going for this counter here, this counter tells us how many total apes" }, { "start": 393.91999999999996, "end": 398.88, "text": " there are. And why is that important for scraping? Well, you see, if you open a bunch of them," }, { "start": 398.88, "end": 406.15999999999997, "text": " you can see that the different URLs here all have an ending that is different, but then a prefix that" }, { "start": 406.15999999999997, "end": 413.2, "text": " is the same. So my suspicion was that they're probably numbered from zero to 999999. And we" }, { "start": 413.2, "end": 417.84, "text": " could just iterate through all of them in order to get them. And yes, I was right. So all we have to" }, { "start": 417.84, "end": 423.84, "text": " do then is loop from one to whatever that number of total grid cells is and call the next stage," }, { "start": 423.84, "end": 428.15999999999997, "text": " every bright data scraper is divided into stages. And you could probably already guess that the" }, { "start": 428.16, "end": 433.84000000000003, "text": " second stage deals with collecting an individual ape. Now that's a lot easier than before. All we" }, { "start": 433.84000000000003, "end": 439.12, "text": " do is we navigate to the URL, we wait for the summary to be ready, we wait for the history" }, { "start": 439.12, "end": 445.04, "text": " panel to be ready. And then we call parse. Now, as you can see, we are collecting quite a bit more" }, { "start": 445.04, "end": 451.6, "text": " data than before. So I do not only want the image of the ape, I also want its attributes. And I want" }, { "start": 451.6, "end": 456.40000000000003, "text": " the price of when it was last sold, which I'm going to get from this table right here. See," }, { "start": 456.4, "end": 464.08, "text": " whenever it says sale, that's when the ape was sold 78 ether to Gary V. All right, well, you do" }, { "start": 464.08, "end": 468.96, "text": " you. And while we're not going to use the attributes are priced today, it is valuable data for our" }, { "start": 468.96, "end": 473.91999999999996, "text": " future endeavors. Alright, so once I have my scraper, all I gotta do is go to the scraper," }, { "start": 473.91999999999996, "end": 479.12, "text": " say initiate, and off it goes, starting and collecting. Now that we have the data, the next" }, { "start": 479.12, "end": 484.15999999999997, "text": " thing we need is some code. And I could write it myself. However, I'm not in the mood to do so. So" }, { "start": 484.16, "end": 489.04, "text": " I'm going to go over to Nvidia and get the official implementation for stylegan to add up," }, { "start": 489.04, "end": 493.52000000000004, "text": " which already has excellent code available on GitHub. Not only do they have code, they have" }, { "start": 493.52000000000004, "end": 499.36, "text": " a very thorough readme that describes how you can use their code, how you train your own stuff. 
So" }, { "start": 499.36, "end": 504.88, "text": " after converting the images using their data set tool, essentially, it's just a matter of calling" }, { "start": 504.88, "end": 510.32000000000005, "text": " train dot pi. I know I wish machine learning was more interesting. But this is it. So off went my" }, { "start": 510.32, "end": 516.88, "text": " first training run, you can see that the loss of the discriminator starts up high, goes down low," }, { "start": 516.88, "end": 522.88, "text": " and then starts rising again, I don't know, is that good? Is that bad? While the generators loss" }, { "start": 522.88, "end": 529.84, "text": " starts low goes high, and then drops down. Well, GAN training is one of these things where the" }, { "start": 529.84, "end": 535.52, "text": " metrics are a bit like tea leaf reading. And there's not too much indication that you can go by of" }, { "start": 535.52, "end": 540.08, "text": " whether your model does something well or not. One of the metrics that is sometimes useful is" }, { "start": 540.08, "end": 546.32, "text": " the F ID. And as you can see right here, the F ID of my model quickly dropped down, which is good," }, { "start": 546.32, "end": 551.9200000000001, "text": " low F ID is good, but then quickly went up again after only a few hundred steps. So that concerned" }, { "start": 551.9200000000001, "end": 557.2800000000001, "text": " me. And then I looked at the output data. So the code base will actually sample every couple of" }, { "start": 557.2800000000001, "end": 563.2800000000001, "text": " hundred steps, a new batch of images, so that you can see what progress your model makes. At the very" }, { "start": 563.2800000000001, "end": 569.6, "text": " beginning, it's just noisy gibberish, as you can see right here. But very quickly, it gets the idea" }, { "start": 569.6, "end": 575.6800000000001, "text": " of what it should do approximately, this already looks quite promising. But then as it went on," }, { "start": 575.6800000000001, "end": 581.28, "text": " you can see that what is this? Why is everything turned to the side? Now to this day, I don't" }, { "start": 581.28, "end": 587.6800000000001, "text": " really know why this is turned to the side. I suspect it's part of the data augmentation" }, { "start": 587.6800000000001, "end": 592.64, "text": " that sometimes turns images to the side, although I haven't looked that that's the case. So clearly," }, { "start": 592.64, "end": 597.2, "text": " this was a failure and a collapse. I had to start again, I tweaked the hyper parameters a little bit," }, { "start": 597.2, "end": 603.0400000000001, "text": " and then a second run went much, much better. Yeah, this is the last step. And it got like a bit" }, { "start": 603.0400000000001, "end": 608.4000000000001, "text": " different, but in a weird way. So off I go. What starting again, so the second run, I changed some" }, { "start": 608.4000000000001, "end": 614.32, "text": " hyper parameters around, I did some tweaky, tweaky, Cody, Cody, you know, like us machine learners do," }, { "start": 614.32, "end": 620.32, "text": " and very quickly, that model became better, you can see already that the diversity is higher from" }, { "start": 620.32, "end": 625.2, "text": " the beginning. And after only a few steps, we got something really neat going, you can see it still" }, { "start": 625.2, "end": 629.76, "text": " makes a lot of mistakes. There are a lot of artifacts in here. 
However, it's clearly going" }, { "start": 629.76, "end": 635.2800000000001, "text": " into the correct direction. In fact, remember that FID metric that I've showed you before? Well," }, { "start": 635.2800000000001, "end": 641.12, "text": " the orange line here is the one of the new model. So you can see as the blue one gets worse, again," }, { "start": 641.12, "end": 646.8000000000001, "text": " the orange one just continues to drop. This is really good, really nice. It goes down, it goes" }, { "start": 646.8000000000001, "end": 652.88, "text": " towards zero down further and further. Now, I have no comparison because there's not a lot of academic" }, { "start": 652.88, "end": 658.4, "text": " effort into producing board apes. I have no clue how good nine is. But I like the shape of the graph" }, { "start": 658.4, "end": 663.92, "text": " and that's important. So as you can see by step 9000 or so the model was getting pretty decent," }, { "start": 663.92, "end": 668.72, "text": " and I was hopeful, but I just wanted to see what happens when I let it train for longer." }, { "start": 668.72, "end": 675.2, "text": " And in hindsight, I shouldn't I mean, check out when I zoom out. Ouch. But you know, this is normal," }, { "start": 675.2, "end": 680.32, "text": " every GAN will collapse at some point. And in fact, the checkpoints that I've put online for" }, { "start": 680.32, "end": 684.96, "text": " my project, which you can also download are definitely from the regions where it hasn't" }, { "start": 684.96, "end": 689.44, "text": " collapsed yet. Now I've done a few more runs where I managed to get it training for even longer" }, { "start": 689.44, "end": 693.6, "text": " before it collapsed, such as the green or the red one right here. But all of these things will give" }, { "start": 693.6, "end": 698.8000000000001, "text": " quite satisfying results. So I was happy. So what are the results? This is a hugging face space," }, { "start": 698.8000000000001, "end": 703.6800000000001, "text": " I've uploaded my model there. And you can go to it, you can click on the button. And every time" }, { "start": 703.68, "end": 710.4799999999999, "text": " you click, you get a new produced ape. This ape is produced in this instance, the same ape has never" }, { "start": 710.4799999999999, "end": 716.64, "text": " been produced before and will never be produced after. So this is fully yours. And it's absolutely" }, { "start": 716.64, "end": 722.56, "text": " fungible. I'm not going to mean these things as NFTs or anything like this, just download it," }, { "start": 722.56, "end": 727.1999999999999, "text": " you can definitely produce more than one image. For example, if you set it to three, it will give" }, { "start": 727.1999999999999, "end": 732.88, "text": " you a grid of three images. And if you click the interpolate checkmark, it will do the generate two" }, { "start": 732.88, "end": 738.8, "text": " images and then generate everything in between. You see, very funny. Now because this is not the" }, { "start": 738.8, "end": 746, "text": " full experience of fungibility. I've also made a little website. So this is why culture.com slash" }, { "start": 746, "end": 751.76, "text": " apes. If you go to this, there's nothing different. Every time you refresh, you get a new ape. In fact," }, { "start": 751.76, "end": 758.56, "text": " it calls the same API. 
However, if you click download right here, oh, well, you're just going" }, { "start": 758.56, "end": 764.2399999999999, "text": " to have to try it for yourself. And here's another fun thing that you can do with this. This is a" }, { "start": 764.2399999999999, "end": 769.04, "text": " little application that I call what's your eight. And what you can do is you can go here," }, { "start": 769.5999999999999, "end": 775.3599999999999, "text": " you can input a little image of whatever you want right here doesn't have to be me, but you know," }, { "start": 775.3599999999999, "end": 780.0799999999999, "text": " it better be me and it will generate the ape that corresponds to your picture the most that this is" }, { "start": 780.0799999999999, "end": 786.0799999999999, "text": " really fun. I've only put 250 steps, I'd usually put 1000 steps, then the quality is a bit higher," }, { "start": 786.08, "end": 791.2, "text": " it doesn't always work, you sometimes have to retry. But if you do retry, you get different" }, { "start": 791.2, "end": 796.64, "text": " apes. And it's quite fun, you get a little video of how the AI searches through the latent space" }, { "start": 796.64, "end": 803.6800000000001, "text": " in order to match your picture. The technology behind this that I had to add is open AI clip" }, { "start": 803.6800000000001, "end": 809.6800000000001, "text": " model clip is trained on text image pairs and therefore understands what's inside an image much" }, { "start": 809.6800000000001, "end": 815.2800000000001, "text": " better than for example, a classic image net trained resonant by using clip and back propagating" }, { "start": 815.28, "end": 821.36, "text": " into the game, I'm able to search the latent space of again in order for a picture that is as similar" }, { "start": 821.36, "end": 827.68, "text": " as possible in the eyes of the clip model to the picture that I input, what my app does is it tries" }, { "start": 827.68, "end": 834.16, "text": " to match how clip sees the image you have input and how clip sees the image that is output from" }, { "start": 834.16, "end": 839.36, "text": " the game. I've used a very similar technique to generate my music video. So go check that out for" }, { "start": 839.36, "end": 844.88, "text": " a more in depth explanation. And the same technique has powered a lot of recent AI art, for example," }, { "start": 844.88, "end": 849.92, "text": " Dolly to buy open AI, if you search on Twitter for the hashtag Dolly, you can get some amazing" }, { "start": 849.92, "end": 855.2, "text": " outputs of this model that only doesn't use again, but it also uses clip as a central part of its" }, { "start": 855.2, "end": 861.28, "text": " architecture. Now due to this being quite heavy in compute, I cannot exactly put this on hogging" }, { "start": 861.28, "end": 866.88, "text": " face space, I'll just take too long, you actually need a local GPU and some time 1000 step take" }, { "start": 866.88, "end": 872.24, "text": " roughly two minutes or so. But if you can give it a try. Again, it doesn't always work. But it's fun" }, { "start": 872.24, "end": 876, "text": " when it does. And here are some more cool results that I got with it." }, { "start": 893.12, "end": 897.2, "text": " Alright, this was it for today's video. Thank you so much for being here. Let me know if you" }, { "start": 897.2, "end": 902.96, "text": " like project report kind of style videos like this. 
I've put all the code and checkpoints and" }, { "start": 902.96, "end": 907.2, "text": " whatever online I've put links to everything I mentioned in the description. Please go check" }, { "start": 907.2, "end": 911.9200000000001, "text": " it out. Thank you so much again to Bright Data for sponsoring this video. It's really cool to" }, { "start": 911.9200000000001, "end": 916.6400000000001, "text": " have them on board in a second. I'm just going to show you a couple more things you can do with them" }, { "start": 916.6400000000001, "end": 921.0400000000001, "text": " just in case you're interested. They have a really established infrastructure for collecting public" }, { "start": 921.0400000000001, "end": 926.5600000000001, "text": " data and the possibilities of what you can do with it are almost endless. People use this for example," }, { "start": 926.56, "end": 932.7199999999999, "text": " to verify that the ads that they make online really reach their target audience by scraping" }, { "start": 932.7199999999999, "end": 937.28, "text": " from the perspective of their target audience. This is a really cool idea. I would have never" }, { "start": 937.28, "end": 943.52, "text": " thought of this. Another example is you can go out there to e commerce websites, collect pricing data," }, { "start": 943.52, "end": 948.9599999999999, "text": " aggregate this from all over the web and either let this influence your pricing or offer your" }, { "start": 948.9599999999999, "end": 954.4799999999999, "text": " customers a better deal. I mean, so many things are possible with cool web scraping technology." }, { "start": 954.48, "end": 960.8000000000001, "text": " And if you can do this at scale regularly and automatically, that is mighty, mighty powerful." }, { "start": 960.8000000000001, "end": 965.12, "text": " Now I've given a shot at collecting some other data by myself. I'm going to show you that now." }, { "start": 965.12, "end": 970.64, "text": " So stay tuned. And I wish you the best. Again, many thanks to today's sponsor, Bright Data. Now" }, { "start": 970.64, "end": 976.32, "text": " let me show you a little bit what more you can do with their platform. I've gone by far the most" }, { "start": 976.32, "end": 982.08, "text": " difficult and the most cumbersome route to use their platform in the project today, it is usually" }, { "start": 982.08, "end": 987.5200000000001, "text": " much easier, which you're going to see right now. So if I go to their platform, and I go to collectors," }, { "start": 987.5200000000001, "end": 993.36, "text": " I add a new collector and there are all kinds of collectors already predefined, all the big" }, { "start": 993.36, "end": 999.6, "text": " social media companies, all the e commerce companies, Amazon and eBay, all the hotel pages," }, { "start": 999.6, "end": 1005.2, "text": " and everything already has predefined collectors for you. So many of the things that you would" }, { "start": 1005.2, "end": 1010.32, "text": " possibly want to scrape will already have a scraper defined, all you need to go is enter a" }, { "start": 1010.32, "end": 1016.5600000000001, "text": " few details and off you go. For example, here I can scrape myself a data set of Instagram posts" }, { "start": 1016.5600000000001, "end": 1022.6400000000001, "text": " that have the hashtag AI art. Now people upload these pictures whenever they make some art with AI" }, { "start": 1022.6400000000001, "end": 1027.04, "text": " and they want to show it to the world. 
And I just want to download it all. So with Bright Data," }, { "start": 1027.04, "end": 1032.96, "text": " super easy, I simply go to the collector that's made for scraping hashtag on Instagram, I enter" }, { "start": 1032.96, "end": 1038.0800000000002, "text": " AI art, I say how many posts I want off I go, I get a neat JSON file at the end with everything" }, { "start": 1038.08, "end": 1042.8799999999999, "text": " that I'd want to know about these posts. Or here, what if I have some new business idea like" }, { "start": 1042.8799999999999, "end": 1048.6399999999999, "text": " Airbnb for campsites, I might want to research a lot about which campsites are in which area," }, { "start": 1048.6399999999999, "end": 1054.08, "text": " how expensive are they, how occupied are they and so on. So I might want to regularly scrape" }, { "start": 1054.08, "end": 1060.24, "text": " all of the campgrounds around certain regions, no problem. In fact, Bright Data has a scraper for" }, { "start": 1060.24, "end": 1065.6799999999998, "text": " you already prepared for that to simply select the scraper, enter the locations you'd like to" }, { "start": 1065.68, "end": 1071.04, "text": " know about and off you go. You can set these scrapers to run manually or on a schedule and" }, { "start": 1071.04, "end": 1075.6000000000001, "text": " then export the data to wherever you want into your cloud, they can send it to you as an email," }, { "start": 1075.6000000000001, "end": 1079.92, "text": " you can download them yourself, whatever you like. So not only do they have predefined scrapers," }, { "start": 1079.92, "end": 1085.3600000000001, "text": " they've actually let their scrapers run on a lot of public facing websites and scraped all public" }, { "start": 1085.3600000000001, "end": 1090.24, "text": " data from those. For example, you can see there are a lot of data sets available. One of them is" }, { "start": 1090.24, "end": 1096.24, "text": " this LinkedIn company data set. So this is a registry of over 50 million companies and all" }, { "start": 1096.24, "end": 1101.1200000000001, "text": " the publicly available data that's on LinkedIn. Now, whether you're a recruiter or looking for" }, { "start": 1101.1200000000001, "end": 1105.92, "text": " a new job or looking to sell something to businesses, this data is really valuable." }, { "start": 1105.92, "end": 1111.28, "text": " Now, this is only a small set of features that Bright Data offers, they just make collecting data" }, { "start": 1111.28, "end": 1116.64, "text": " from the internet a whole lot easier. So thanks again so much to Bright Data for sponsoring this" }, { "start": 1116.64, "end": 1120.48, "text": " video. Please check them out. There's a link in the description. I'm very sure you'll be" }, { "start": 1120.48, "end": 1147.28, "text": " pleasantly surprised. With that, I'll see you around. Bye bye." } ]
lmAj0SU_bW0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Axial Attention & MetNet: A Neural Weather Model for Precipitation Forecasting
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "google", "attention mechanism", "attention", "transformer", "rnn", "recurrent", "weather", "long-range", "layers", "convolutions", "cnns", "rain", "physics" ]
MetNet is a neural network model for weather prediction. It uses axial attention to capture long-range dependencies. Axial attention decomposes attention layers over images into row-attention and column-attention in order to save memory and computation. https://ai.googleblog.com/2020/03/a-neural-weather-model-for-eight-hour.html https://arxiv.org/abs/1912.12180 Abstract: Weather forecasting is a long standing scientific challenge with direct social and economic impact. The task is suitable for deep neural networks due to vast amounts of continuously collected data and a rich spatial and temporal structure that presents long range dependencies. We introduce MetNet, a neural network that forecasts precipitation up to 8 hours into the future at the high spatial resolution of 1 km² and at the temporal resolution of 2 minutes with a latency in the order of seconds. MetNet takes as input radar and satellite data and forecast lead time and produces a probabilistic precipitation map. The architecture uses axial self-attention to aggregate the global context from a large input patch corresponding to a million square kilometers. We evaluate the performance of MetNet at various precipitation thresholds and find that MetNet outperforms Numerical Weather Prediction at forecasts of up to 7 to 8 hours on the scale of the continental United States. Authors: Casper Kaae Sønderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, Nal Kalchbrenner Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. So what you're looking at here is a weather forecast model. Specifically, the very top row is a new weather forecast model called MetNet by Google Research. So the goal of weather prediction is pretty simple: you want to know what the weather is going to be in the future, and specifically here you want to know precipitation rates. This is a new work that uses a neural network instead of physical models in order to predict precipitation. In the middle here you see the ground truth of what really happened at that particular time; you see precipitation rates in red moving across the country. Now at the bottom there is a physical model, and as far as I understand it, physical models have been used so far to make weather predictions. That basically means you simulate these rain clouds and their movement across the country. You do a physical simulation, like a particle-simulation type of thing, and that allows you to predict; you then run it maybe multiple times and get an idea of the kind of distribution you're going to get. Now what MetNet does is simply use a neural network to predict the outcome directly. So there's no physical simulation involved. There is just a neural network that takes as input the situation now, and maybe over a stretch of time, and then you ask it: please make a prediction eight hours from now, or something like that. And then MetNet will make that prediction and just output it, like, snap. No physical simulation needed. You also see here that MetNet outputs things in kind of a cloud way, in a probabilistic way, and it does so in one forward pass; you don't need to run it multiple times. But we'll get to that. On the bottom here you see the measurement. The axis is F1; F1 is a score for how well the predicted precipitation overlaps with what actually happened, combining precision and recall. And you see here that MetNet is above the HRRR baseline for most of this time, up to 480 minutes into the future, which is eight hours. All right. So the paper is the following. It's called MetNet, a Neural Weather Model for Precipitation Forecasting. I'm not going to read all the names here; the main corresponding authors are Casper Kaae Sønderby and Nal Kalchbrenner, and it's a team at Google Research. So specifically they use the input of these two things here. One is this GOES-16 satellite data, which is what you see here on the left, and the precipitation rates are depicted here on the right. So you want to take these things as input into your model. Now how do you do that? Of course we want to build a neural network, and this is the architecture they come up with. On the bottom here they feed in the data, and they feed it in at 15-minute intervals from 90 minutes into the past. So you have to imagine it like this. There's a timeline. I'm going to use a little bit of a finer pen. So there's a timeline, and you, let's say, are here. This is now. And then here in the future, this is maybe one hour into the future. This is your target, right? You are here and you're looking out; you would like to know what the precipitation is going to be one hour from now. What MetNet does is take as input the last 90 minutes before now, sampled at 15-minute intervals. So each one of these is going to be 15 minutes, and every 15 minutes you get a snapshot of the entire input region. Now the input region, if I can jump back here to the website for a second, they show what the input region is.
The input region, if you want to predict in the middle of this small square, is actually the entire 1024 by 1024 kilometer patch around it. So it's a very big input, though the actual region you consider is the inner 64 by 64 kilometers. But you take in information from the big region, and the main point of the paper, I believe, is how to do that. All right, so every 15 minutes you take in a snapshot, and these are the snapshots here on the bottom. You have to imagine that every 15 minutes there's a stack of these inputs. So what are these inputs? These inputs are some kind of features that you have. There is the target time, which in this case would be this one hour here. There is the month, day and hour, which is important for weather prediction, right? The time of year, time of day and so on. Longitude and latitude are probably pretty important. An elevation map is probably pretty important. So these, as you can see, are all maps, and this is how you encode things here: since it's a neural network, all of these things must have the same dimensions. So if you have 256 by 256 pixels here, then all of these planes must be 256 by 256 as well. And if you want to give a feature such as the target time, which in this case, let's say, is one hour, that is 60 minutes, you just fill an entire 256 by 256 plane with the number 60. So this is how you encode features. It's pretty primitive, but it turns out it works best if you do it this way. All right. So you have these planes, and some, as I said, are just features such as the target time, month, day and hour and so on. Elevation, I guess, is a map, like an elevation map of the region you consider. And this corresponds now to these 64 kilometers times 64 kilometers here. That's exactly what these center crops are here. So this center crop thing, this plane here, is the 64 by 64 kilometer region. That's this plane here, and also the precipitation and the GOES data, that's this thing here. Now we also have these downsampled things, which are the 1024 by 1024 kilometer patches, but they are downsampled. So everything is downsampled, I guess, to 256 by 256 pixels. You don't really take into account every nuance of that very big input, but you do downsample it. So you kind of get the big picture of the outer frame, and in the inner frame you take things in at a much higher resolution in order to get the details. All right. So you stack all of this up into a big tensor, and then you feed it into a spatial downsampler, which, as I have read, is just a convolutional neural network. So this is your typical image-processing pipeline. You do this for each of these stacks, right? And then what you get out of it is a lower-size representation right here. So you get these representations, and then you let a temporal encoder run over them. What does a temporal encoder do? This in particular is a convolutional LSTM. And if you already know what an LSTM is, a convolutional LSTM is nothing more than an LSTM that has convolutional layers as its internal layers. So it's pretty well suited for videos, or for any sort of image processing that goes over time, like this one.
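Before moving on to how this temporal encoder runs, here is a minimal sketch of the input encoding described above: scalar features like the 60-minute lead time are broadcast into constant planes and stacked with the image channels. This is my own PyTorch-style illustration; the shapes and channel counts are assumptions, not the authors' configuration.

```python
import torch

def build_input_stack(center_crop, context_patch, lead_time_min, month, day, hour):
    """Stack image channels with scalar features broadcast to constant planes.

    center_crop:   (C1, 256, 256) high-resolution crop of the inner region
    context_patch: (C2, 256, 256) downsampled wide-context patch
    """
    h, w = center_crop.shape[-2:]
    # Each scalar becomes one plane filled with that value,
    # e.g. a 256x256 plane full of 60s for a 60-minute lead time.
    planes = torch.stack([
        torch.full((h, w), float(v))
        for v in (lead_time_min, month, day, hour)
    ])
    return torch.cat([center_crop, context_patch, planes], dim=0)

# One 15-minute snapshot: image channels plus four feature planes.
snap = build_input_stack(torch.randn(4, 256, 256), torch.randn(4, 256, 256),
                         lead_time_min=60, month=3, day=25, hour=14)
print(snap.shape)  # torch.Size([12, 256, 256])
```

Stacking one such tensor per 15-minute snapshot gives the sequence that the per-frame CNN then downsamples.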
So the temporal encoder simply starts out here with an initial state, and then it takes each of these representations one by one, running across time, each time producing a new intermediate representation of the input, until it finally reaches this final representation here. So this thing here is a single final representation of all of this input, right? Of this entire time span of all of these stacks here. So you can compress this into a single representation: first a convolutional network downsamples each time point individually, and then a recurrent neural network, an LSTM, integrates the information over time. You end up with this single piece here. And then, what you do... so you still retain kind of an image sort of thing here. This representation here, you can see it in the background, is still sort of an image tensor, though I guess it's a hidden representation, so you couldn't really look at it. But it still has the dimensions of an image. So this here still corresponds, I think, to these dimensions here. It still has some spatial information, where this axis might be north-south and this one might be east-west, right? And then these are just the hidden channels, the channels of the hidden representations, right? So what you would like to do now is basically encode information from the space around you. Let's look at one of these big pictures. What you would like to do in weather prediction... let's say you are right here, right? Now if you want to know whether this particular cloud over here is going to move in your direction, what you want to know is, for example, is there a mountain range here, right? Because then it's more probable that this cloud is maybe going to move up there. You would also want to know how this cloud here moves, right? If this cloud here moves somewhere around here, then this cloud down here might probably get pulled along with it, or something like this. So from where you are, you very much want to look out into each of the directions here and incorporate what's happening across the space. We're already used to convolutional networks being able to do this, but here the authors use attention to do that. So if you don't know what attention is, my most popular video is about attention, and you can do attention for images. The way that works is that you have a series of stacked blocks of a neural network. Let me draw this here. So you have an image here, and let's say it has just four pixels, right? And then you have the next layer of these four pixels, so you have layers of this. Now the pixels of the next layer all emit what are called queries, and queries are just vectors. So each pixel emits a single vector: this, that, that, this, right? And each of the pixels of the lower layer emits what is called a key: this, this, this, this. And now the keys and the queries are routed together based on their inner product. So these two would be routed together, this would probably be routed here, this as well, and this would probably be routed here. So in effect, each of the pixels of the higher layer can look at specific pixels of the lower layer.
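Here is a minimal sketch of that query/key routing, standard scaled dot-product attention written out for a handful of pixels. It's my illustration only; the projections are randomly initialized here rather than learned.

```python
import torch

def attend(upper, lower, d_k=16):
    """upper: (N, d) pixels of the higher layer, lower: (M, d) pixels of the lower layer."""
    d = upper.shape[-1]
    # Projections would normally be learned parameters; random for illustration.
    w_q, w_k, w_v = (torch.randn(d, d_k) for _ in range(3))
    q = upper @ w_q                  # each upper pixel emits a query vector
    k = lower @ w_k                  # each lower pixel emits a key vector
    v = lower @ w_v                  # values carry the information being routed
    scores = (q @ k.T) / d_k ** 0.5  # inner products decide the routing
    weights = scores.softmax(-1)     # per upper pixel: a distribution over lower pixels
    return weights @ v               # weighted sum: "look at specific pixels"

print(attend(torch.randn(4, 8), torch.randn(4, 8)).shape)  # torch.Size([4, 16])
```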
Now you can imagine this is exactly what we want here, in that if there is a mountain range here, we might be interested in it. So we'd be able, from our point here, to specifically attend to that location using attention, right? The authors here basically build a stacked model of attention layers, and that's what's happening in the third part here. The attention is there in order to incorporate long-range dependencies. As in the example with the mountain range: it might be far away, but it might actually influence your weather very much. So the attention is there to incorporate these long-range dependencies. But the problem with attention is, as you saw in the example, that each of the pixels can attend to each of the pixels in the lower layer. So what you end up with: each of the four pixels can attend to each of the four below, so you end up with 16 connections; I can't even draw them all. In general, if you have D pixels here, you will end up with a D squared number of things you need to calculate, right? And now of course we have images, so generally we'll think of D by D pixels, and then we need that quantity squared many computations. This quickly gets too much. For example, in MNIST you have 28 by 28 pixel images, which is 784 pixels, and you'd have to calculate 784 squared, over six hundred thousand, connections between things. This becomes infeasible pretty quickly, especially if you scale up the images and then have some channels in here as well. So attention for image processing has been lagging a bit compared to natural language processing. In natural language processing you usually have maybe 500 tokens or so; in images you have many more, so attention is much more expensive, and you can't really do it on current hardware. Now this paper uses something called axial attention. Axial attention is kind of the trick for how to make attention happen for images. And for that I want to switch over to this paper. It's called Axial Attention in Multidimensional Transformers, by some of the same authors. So Jonathan Ho and Nal Kalchbrenner, also of Google Brain and UC Berkeley, proposed this axial transformer. Now, they originally proposed axial attention for autoregressive models. If you know transformers, they also started with autoregressive models, so language modeling and so on. But we can decouple the axial attention from the autoregressivity of these models, so I'm not going to talk about autoregressive models, just axial attention. So what is axial attention? It's pretty simple, actually, and I want to start by talking about convolutions. What does a convolution do? Let's just take a one-dimensional image, which is pretty boring, but let's say it has these eight pixels here. So this is an image; it just has one row of eight pixels. What do I do when I run a convolutional filter across that? This is the lower layer, and this is the next layer that is produced by a convolution. For each of the pixels in the next layer, what I can do with the convolutional layer is look at its neighbors in the lower layer. So these three would be part of that. Then I go on to the next one, and again I look at its neighbors, these three. And then I look at this one, and it can look at itself and its neighbors. So a convolution is pretty smart.
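To make that locality concrete, here is a tiny sketch of a size-3 convolution over one row of eight pixels, where each output sees only itself and its two neighbors. Contrast that with full attention, which for those 28 by 28 = 784 MNIST pixels would need 784 squared, roughly 615,000, pixel pairs per layer.

```python
import torch
import torch.nn.functional as F

row = torch.arange(8, dtype=torch.float32).view(1, 1, 8)  # one row of eight pixels
kernel = torch.full((1, 1, 3), 1.0 / 3.0)                 # size-3 averaging filter
out = F.conv1d(row, kernel, padding=1)                    # each output sees itself + 2 neighbors
print(out.view(-1))
# tensor([0.3333, 1.0000, 2.0000, 3.0000, 4.0000, 5.0000, 6.0000, 4.3333])
```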
And then of course in the next layer that repeats. Now, what's the difference between doing this and a fully connected layer? If I have a fully connected layer, a classic neural network, then this pixel here would incorporate information from all of the pixels below, and this pixel here would also incorporate information from all the pixels. Why might that be better? Because the information that I want here for this pixel might depend on this pixel over here. So I might benefit from a connection over there, or from this pixel here, which the convolution can't reach. Why are convolutional networks preferable then? Because a convolutional network can do the same thing across multiple layers. So let's assume again that this pixel here needs information from this pixel right here. As you can see, in just one layer it can only get information from those, right? But now take the next layer. The same pixel here can attend to these three, right? And these three can each, in turn, attend to their neighbors. I'm not going to draw everything, but the receptive field for this pixel here will end up being all of this, right? Now we still don't have our desired pixel in there, but if we just go one layer more, then we also reach this pixel right here. The receptive field increases across the layers, because each layer always incorporates information from the layer below, and that layer in turn incorporates information from the layer below it, so eventually you can aggregate the same information. So instead of having a single layer with all of these connections, we have convolutional layers, which seem like a worse idea because each one can attend to fewer things, but across the layers they can actually do the same thing, right? And that turns out to be a huge advantage of these convolutional layers, and that's why convolutional layers are used for image processing, and not multi-layer perceptrons. The same exact thing happens with axial attention, just in a different form. It is a bit poorly drawn here, I believe, but this is how you have to imagine it. As before, if I just have a normal transformer layer, the red pixel here can attend to all of the other pixels in the image, right? And each of the pixels can do that, so that's your D squared computation right here. Now, in a convolutional layer we would say: okay, you can only attend to your neighbors, and then in the next layer the neighbors can attend to their neighbors, and thereby you reach out further and further. In axial attention, you say: okay, this thing can only attend to its row and its column. That's it. You can only do attention to your row and your column, and I believe they don't even do it at the same time. So in one layer you can attend to the row you're in, and in the other you can attend to the column you're in. Now let's see how the same thing happens as for a convolutional layer. So if the red pixel needs access to information in this green pixel, how does it do that? In the first layer it can attend to its row and its column, right? And so can every other pixel, including, of course, this square here, which can also attend to its row and its column, and its row happens to include the green one, right?
So in layer one, this square here gets information from the green square via row attention, right? And then in layer two, our red square of interest can attend along its column to that square, so they get connected in layer two. So you see that within just two layers we've transferred information from the green square, via this intermediate square, to our red square. So in the same way as with a convolution, you can replace the arbitrary long-range dependencies between pixels by simply having multiple layers of restricted dependence. The same goes for this axial attention: you can replace the arbitrary attention in a layer with a two-step process, where you first transfer information via the column and then transfer it via the row. It's a bit like chess: a queen can move in any direction, including diagonally, while if you just have a rook you need two moves instead. So the queen is like full attention, and the rook is the multi-layer axial attention. They can achieve the same thing; you just need more layers. But as a trade-off, you get a huge saving in the required memory and computation, right? So they stress that you can represent the same distributions with axial attention; the trade-off is that you have to do multiple layers of it. Right, so this is axial attention, and they are now able to incorporate it into their model right here. They have, I believe, eight blocks: four row-attention blocks, you see this right here, and four column-attention blocks in their model. And finally they output this distribution here across their region of interest. This again is, I believe, the 64 by 64 resolution. So you can see how they aggregate information across the 64 using this axial attention, and then they make their prediction for this one hour. So that's the model. All right, so this was a long way. So, recap: they have 15-minute snapshots of this input data, along with some features. They use a spatial downsampler, which is a CNN, on each of them individually. Then they use a convolutional LSTM to encode this across time, ending up with a single representation. Then they use axial attention in order to aggregate information across the spatial dimensions. They do this in multiple stages, and at the end they make a precipitation prediction, which is a distribution, as you can see here. So as an output you directly get a distribution of results, which is also cool, because with the physical simulation you have to let it run many, many times in order to get a distribution of results, and this neural network can simply give you a distribution right away. That's what they say right here. So they go a bit into the architecture compared to baselines. I want to get back to what I showed you at the beginning. This here is just the picture, kind of the picture-book example. On the left is the ground truth, in the middle is MetNet, and on the right is a baseline method, at, as you can see, two hours, four, six and eight. So you can see that MetNet gives you this distribution as an output. What I find interesting, for example, is sample two right here. In sample one you can see there is a consistent difference. And this is the forecast time, so how far in advance you want the prediction; this would be one hour, and it can go up to eight hours.
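As a quick aside, here is a minimal sketch of such a row/column attention block: attention runs along one axis at a time by folding the other axis into the batch. This is a simplified illustration of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Attention along a single axis of a (B, H, W, C) feature map.

    The other axis is folded into the batch, so a row layer costs O(H * W^2)
    interactions instead of the O((H * W)^2) of full attention.
    """
    def __init__(self, channels, heads=4, axis="row"):
        super().__init__()
        self.axis = axis
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                      # x: (B, H, W, C)
        b, h, w, c = x.shape
        if self.axis == "row":                 # every pixel attends within its row
            seq = x.reshape(b * h, w, c)
        else:                                  # every pixel attends within its column
            seq = x.transpose(1, 2).reshape(b * w, h, c)
        out, _ = self.attn(seq, seq, seq)
        if self.axis == "row":
            return out.reshape(b, h, w, c)
        return out.reshape(b, w, h, c).transpose(1, 2)

x = torch.randn(2, 64, 64, 32)
x = AxialAttention(32, axis="row")(x)          # one rook move along the row...
x = AxialAttention(32, axis="column")(x)       # ...then one along the column
print(x.shape)  # torch.Size([2, 64, 64, 32])
```

For a 64 by 64 map, one row pass involves 64 times 64 squared, i.e. 262,144 query-key pairs, instead of 4096 squared, roughly 16.8 million, for full attention, which is the whole point of the trick.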
In the top sample there is a consistent gap in F1, which means MetNet does better across this whole span of time. For the bottom sample, though, you can see there is a big gap at the beginning, and then this gap gets smaller and smaller and smaller. And this, I think, might give you an indication of, let's say, the weakness of this approach of doing it with neural networks. With neural networks you kind of rely on regularities, on broad-scale, correct things that you can learn from the data, and this might work well as long as things are regular, which of course they tend to be over shorter time spans, right? But if you go for longer time spans, I believe there is more of a chaos element to it; weather can be very dependent on very subtle things, and the physics simulation that really takes into account the actual physics might be able to account for that much, much better. And that's why I believe that across time here, you'll see the two models get closer together. That being said, MetNet of course is still on top here. But it would be interesting to forecast even further out. I haven't actually dug through their numerical results, but you can do that if you want. All right, so this was it for MetNet and axial attention. I hope you liked this, and bye bye.
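As a small appendix to the comparison above: the paper evaluates at various precipitation thresholds, and an F1 score at such a threshold can be computed roughly as follows. The threshold value and the toy data are my assumptions for illustration; the exact evaluation protocol is in the paper.

```python
import numpy as np

def f1_at_threshold(pred_mm_h, true_mm_h, thresh=1.0):
    """F1 for the binary event "precipitation rate >= thresh"."""
    pred, true = pred_mm_h >= thresh, true_mm_h >= thresh
    tp = np.sum(pred & true)                      # correctly predicted rain pixels
    precision = tp / max(np.sum(pred), 1)
    recall = tp / max(np.sum(true), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

rng = np.random.default_rng(0)  # toy 64x64 "forecast" and "observation"
print(f1_at_threshold(rng.gamma(1.0, 1.0, (64, 64)), rng.gamma(1.0, 1.0, (64, 64))))
```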
R5DiLFOMZrc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
TransGAN: Two Transformers Can Make One Strong GAN (Machine Learning Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "neural networks", "ai", "artificial intelligence", "attention neural networks", "attention is all you need", "transformer gan", "transformer gans", "transformer generative adversarial network", "generative adversarial network", "attention mechanism", "self attention", "vision transformer", "pixelshuffle", "superresolution", "local attention", "multihead attention", "transformer generator", "google", "machine learning explained", "deep learning explained", "paper explained", "transgan" ]
#transformer #gan #machinelearning Generative Adversarial Networks (GANs) hold the state-of-the-art when it comes to image generation. However, while the rest of computer vision is slowly taken over by transformers or other attention-based architectures, all working GANs to date contain some form of convolutional layers. This paper changes that and builds TransGAN, the first GAN where both the generator and the discriminator are transformers. The discriminator is taken over from ViT (an image is worth 16x16 words), and the generator uses pixelshuffle to successfully up-sample the generated resolution. Three tricks make training work: Data augmentations using DiffAug, an auxiliary superresolution task, and a localized initialization of self-attention. Their largest model reaches competitive performance with the best convolutional GANs on CIFAR10, STL-10, and CelebA. OUTLINE: 0:00 - Introduction & Overview 3:05 - Discriminator Architecture 5:25 - Generator Architecture 11:20 - Upsampling with PixelShuffle 15:05 - Architecture Recap 16:00 - Vanilla TransGAN Results 16:40 - Trick 1: Data Augmentation with DiffAugment 19:10 - Trick 2: Super-Resolution Co-Training 22:20 - Trick 3: Locality-Aware Initialization for Self-Attention 27:30 - Scaling Up & Experimental Results 28:45 - Recap & Conclusion Paper: https://arxiv.org/abs/2102.07074 Code: https://github.com/VITA-Group/TransGAN My Video on ViT: https://youtu.be/TrdevFK_am4 Abstract: The recent explosive interest on transformers has suggested their potential to become powerful "universal" models for computer vision tasks, such as classification, detection, and segmentation. However, how further transformers can go - are they ready to take some more notoriously difficult vision tasks, e.g., generative adversarial networks (GANs)? Driven by that curiosity, we conduct the first pilot study in building a GAN \textbf{completely free of convolutions}, using only pure transformer-based architectures. Our vanilla GAN architecture, dubbed \textbf{TransGAN}, consists of a memory-friendly transformer-based generator that progressively increases feature resolution while decreasing embedding dimension, and a patch-level discriminator that is also transformer-based. We then demonstrate TransGAN to notably benefit from data augmentations (more than standard GANs), a multi-task co-training strategy for the generator, and a locally initialized self-attention that emphasizes the neighborhood smoothness of natural images. Equipped with those findings, TransGAN can effectively scale up with bigger models and high-resolution image datasets. Specifically, our best architecture achieves highly competitive performance compared to current state-of-the-art GANs based on convolutional backbones. Specifically, TransGAN sets \textbf{new state-of-the-art} IS score of 10.10 and FID score of 25.32 on STL-10. It also reaches competitive 8.64 IS score and 11.89 FID score on Cifar-10, and 12.23 FID score on CelebA 64×64, respectively. We also conclude with a discussion of the current limitations and future potential of TransGAN. The code is available at \url{this https URL}. 
Authors: Yifan Jiang, Shiyu Chang, Zhangyang Wang Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at TransGAN: Two Transformers Can Make One Strong GAN, by Yifan Jiang, Shiyu Chang and Zhangyang Wang. In this paper, the authors attempt to make a generative adversarial network, a GAN, out of only transformers. So far, attention or transformer-like components have been used in GANs, but they've always had some convolutions in there. This paper attempts to build both the generator and the discriminator using just transformers. They discuss what is needed to do that, how they built the architecture, and there are a couple of training tricks that make this work and actually make it competitive with current state-of-the-art architectures. The biggest data set they tackle is CelebA, which is 64 by 64 pixels, but their numbers suggest you can scale this much larger. The model is called TransGAN. I don't know if this is a bit of an unfortunate naming. I guess the question is, which bathroom does the TransGAN go to? I don't know. In any case, let's dive into the paper, let's check it out. If you like content like this, share it out, leave a like and tell me what you think in the comments. The paper is fairly straightforward, and there is actually code available, so definitely check that out; I'll link it in the description. The paper answers one question: can we build a strong GAN completely free of convolutions? Usually in GANs you have convolutions both in the generator and the discriminator, and the goal here is to replace all of that with transformers. There are three contributions. First, the model architecture: the discriminator, as we're going to see, is a vision transformer, like we saw before, and the generator is also a transformer that is interlaced with upsampling. Second, training techniques: they show that you need three things specifically, namely data augmentation, multi-task co-training for the generator, and a localized initialization of the self-attention, in order to make this work. Third, their biggest model, TransGAN-XL, reaches very competitive FID scores and also very competitive Inception scores (wait, this column is FID, here is the Inception score). The IS score is a bit of a misnomer, by the way, since the S already stands for score, but okay. So first, the architecture. The architecture is fairly straightforward: for a GAN, you need a discriminator and a generator. The discriminator, as I already said, is the exact model from ViT, and I've done a video about it. That paper is called An Image is Worth 16x16 Words, and it is a transformer-based image classifier. So what do you do with an image? Here you see an example image of a dog. The discriminator of course gets the output from the generator, but also the real data, and it would unroll that picture, not into individual pixels, but into these super-pixels, i.e. patches, as you can see right here. Every one of those patches is then unrolled by this flattening operation into a single vector, and that then is like a word in a sentence. So this picture just becomes a series of vectors, and then you can simply apply your regular transformer architecture. So every patch becomes a vector, like a word embedding.
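To make that flattening concrete, here is a minimal sketch of the patch unrolling, assuming square, non-overlapping patches; the function name and the patch size are illustrative and not taken from the paper's code:

```python
import torch

def patchify(images, patch_size=8):
    # images: (B, C, H, W) -> tokens: (B, num_patches, C * patch_size**2)
    B, C, H, W = images.shape
    p = patch_size
    x = images.unfold(2, p, p).unfold(3, p, p)        # (B, C, H//p, W//p, p, p)
    x = x.permute(0, 2, 3, 1, 4, 5).contiguous()      # gather the pixels of each patch
    return x.view(B, (H // p) * (W // p), C * p * p)  # one "word" vector per patch

tokens = patchify(torch.randn(1, 3, 32, 32))  # -> shape (1, 16, 192)
```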
And then you just go ahead and put a transformer encoder on top. This is very much like BERT, for example; it is a similar architecture, and as I said, you can go look at that paper. At the end, you simply classify whether the image is real or fake. You do have to add position encodings because, lacking the convolutions, the transformer has no idea where in the picture a given thing appears; it is not a sequential architecture, it's actually a set transformation architecture. So you do need to add positional encodings, but in general this has been shown to work quite well in things like ImageNet classification. On the generator side, it is very similar, but a little bit different. What you need to produce at the end, of course, is this 32 by 32 by 3 pixel image. Now, you can't just go the reverse direction and somehow try to predict these patches, because if you predict the patches independently of each other, the borders would never match up. In the discriminator this does not matter, because you don't need to construct the image, you simply need to classify it. But if you need to generate images, it doesn't look good if you have borders where things don't match up. So you will actually need to produce an image in the size that you require, in this case 32 by 32, and of course three color channels. The way they achieve that is with this upsampling architecture. The problem with transformers, of course, is that they require quite a bit of memory and also compute, because the attention mechanism connects every single token with every single other token in each layer. In this case, that would connect every pixel to every other pixel, and the cost of that is quadratic in the 32 by 32 pixels, so if you were to do this at full resolution for many layers, you would pretty quickly run into problems. So what they do instead is an intrinsic upscaling of their dimensions. What does that mean? At the beginning, you have some noise input, and a little MLP generates the initial sequence. The initial sequence is going to be eight by eight positions by some number of channels, and you can see there are also position encodings right here. So your noise generator essentially creates an eight by eight grid. Let's say for the sake of argument we create a two by two grid instead of an eight by eight, with the number of channels along the depth axis. You unroll those into four vectors of these channels: one, two, three, four, you get the idea. And that is what you feed into the transformer. So now you have four tokens, or in their case 64 tokens, and at this stage this is like a sentence with four different words.
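As a hedged sketch of that seed step, with made-up sizes (the noise dimension, channel count and grid size are illustrative):

```python
import torch
import torch.nn as nn

noise_dim, channels, grid = 128, 256, 8                        # illustrative sizes
seed_mlp = nn.Linear(noise_dim, grid * grid * channels)        # noise -> 8x8 grid of tokens
pos_emb = nn.Parameter(torch.zeros(1, grid * grid, channels))  # learned position encodings

z = torch.randn(4, noise_dim)                                  # a batch of noise vectors
tokens = seed_mlp(z).view(4, grid * grid, channels) + pos_emb  # 64 "words" for the transformer
```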
So you run that through M layers of the transformer, and then at some point you decide, okay, now it's time to do upscaling. In the upscaling, you take those four words, that two by two image with the C channels, and you generate from it an image that is double the density in pixels, so now a four by four image, but with fewer channels. The way they save memory is that they start out with many channels at a very coarse resolution, and progressively, as they go up the layers, they upsample so that they have more resolution but fewer channels. This is very much what convolutional GANs do as well: they start out with a very coarse image grid, do some kind of upsampling, strided operations and so on, in order to reach higher pixel densities, and with the higher pixel densities they often decrease the number of channels. So you get a trade-off between the resolution and the depth of information. At the end, they arrive at their target resolution with a number of channels, and then they feed each token individually through a small linear projection in order to project it to the three color channels. So that's how they end up with three channels. I hope you can see the whole pipeline now: you start with a sequence derived from noise, and then the input is transformed, transformed, upsampled, transformed some more, upsampled, transformed some more, until it is at the target resolution. Thereby, in the lower layers you have lots of information depth but not much resolution, and in the higher layers you have lots of resolution but not that much information depth anymore. So the computations higher up might be more localized; they might have more to do with the exact details of a particular patch in the image. All of these positions are representative of patches, especially in the downscaled stages: this pixel right here is representative of all the pixels that are going to be generated out of it, one layer higher, and one layer higher still, each with its own four by four pixel grid. So the computation you do down here on this pixel will affect all of those pixels later. How exactly does the upsampling work? They use the pixel shuffle algorithm from this paper right here, which I'll link as well. This is a technique that was, as I understand it, originally derived for convolutions, and it asked: how can we do a sort of convolutional operation on high resolution images without having to do the compute at high resolution? They figured out that you can rearrange a high resolution image into a smaller resolution image with more channels. Here, you see they call this R squared number of channels, so this number here is R squared, and they can unroll this image into that one. They do that by treating these things here, maybe you can see it is a repeating pattern, as sort of super-pixels; one of these super-pixels is going to be one column here. So you upsample by doing the computation with lots of channels, as if they were the channels of a low resolution image, and then you upsample by just unrolling the channels locally, treating each position as one super-pixel whose channel elements become the pixels in its neighborhood. So you want to unroll that.
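In PyTorch, this unrolling is exactly what nn.PixelShuffle computes; here is a hedged sketch of applying it to a sequence of tokens, where the reshaping between token form and grid form is my own framing rather than the authors' exact implementation:

```python
import torch
import torch.nn as nn

def upsample_tokens(tokens, height, width, r=2):
    # tokens: (B, height*width, C), with C divisible by r**2
    B, N, C = tokens.shape
    x = tokens.transpose(1, 2).view(B, C, height, width)  # tokens -> channel-first grid
    x = nn.PixelShuffle(r)(x)  # (B, C // r**2, height*r, width*r): channels traded for resolution
    return x.flatten(2).transpose(1, 2)                   # back to tokens: (B, N*r**2, C//r**2)

up = upsample_tokens(torch.randn(4, 64, 256), 8, 8)  # 64 tokens of dim 256 -> 256 tokens of dim 64
```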
And after that, you continue with your processing, putting this through the next layers until you upsample again by unrolling some more channels. I hope that's clear. So you're going to start out with a lot of channels, because each time you unroll, you trade off some of the channel depth for more resolution. You can see that every time they upsample the resolution by two, they need to divide the channels by four, because you upsample by two in both the width and the height direction. Actually, that exact ratio isn't even strictly necessary. In the transformer block you have the attention mechanism, and then you also have this MLP right here. After the whole thing goes through the attention, each of the tokens is fed separately through the MLP, and it's not actually necessary that the output dimension of the MLP is the same as the input dimension, except for this skip connection right here. If the skip connection had some sort of linear projection, like in ResNet, then you could think of changing the dimensions there. Though I'm not even sure: if you do that projection, isn't it just the same as an MLP applied to each token individually? Maybe there's no point in having the skip connection at all. In any case, you could probably get around the requirement to have this exact number of channels; nevertheless, that's what they do. So the generator is actually manageable memory-wise, because it does this trade-off as it progresses upwards, and it generates an actual grid in the resolution of the image, with the required channels being a projection of the final transformer channels. That is then fed into the discriminator. The discriminator immediately divides the image into patches, interprets each as a token embedding, adds positional encodings, and then simply uses a transformer like BERT. At the end you have this CLS token, like in BERT, and that classifies real or fake; you can backpropagate through the whole architecture. And that's a GAN for you. So that was the architecture part. Now, they do a lot of good ablations where they ask: we have a generator and a discriminator, as in AutoGAN, which is one of the baselines they compare with; what if we just replace the generator with a transformer? What if we just replace the discriminator? They find that they can replace the generator just fine, and that even gives competitive performance. But as soon as they also transfer the discriminator to a transformer, the performance drops. So in order to really make this work, they need some more tricks; they have three. The first trick is data augmentation; they say data augmentation is crucial for TransGAN. The type of data augmentation they use is also from another paper, Differentiable Augmentation for Data-Efficient GAN Training. The whole point is that the augmentation T right here is a differentiable function. Data augmentation is things like cropping, changing the brightness, color jitter, rotating and so on, and as long as that is a differentiable operation, you can use this technique where you backpropagate through the augmentation. You can see that in the generator update you actually backpropagate through the T function, and therefore you get a much better signal, plus you get all the benefits of data augmentation.
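As a hedged sketch of what backpropagating through T looks like; the brightness and translation augmentations here are simplified stand-ins for the actual DiffAugment policies, and the loss is a generic non-saturating generator loss:

```python
import torch

def T(x):
    # toy differentiable augmentations: random brightness shift plus random translation
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * 0.4
    dx, dy = torch.randint(-2, 3, (2,)).tolist()
    return torch.roll(x, shifts=(dx, dy), dims=(2, 3))

def generator_step(G, D, z, opt_g):
    fake = G(z)
    loss = -D(T(fake)).mean()  # gradients flow through T back into the generator
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```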
And the point they make in the TransGAN paper is that, given that transformers don't have the locality bias of convolutions built into their architecture, they need a lot more data. We know that transformers work well if you have an abundant amount of data, and you can get around needing lots of data a little bit by using data augmentation. So they argue that data augmentation works for all GANs, but it helps a lot more in these transformer-based GANs, because the transformers benefit more from having lots of data. Again, the story about transformers is pretty clear, I think: if you have lots of data, they tend to work well, because they're just a more general architecture. Here you can see, for the different GANs, that the augmentation, which is where the checkmark is set, helps sometimes, but not always. For the TransGAN, however, you can see that adding data augmentation drastically improves the results and already gets these GANs into the ballpark of the state of the art. Not quite there, there's still a big difference, but it gets them within striking distance. The second trick they have is co-training with a self-supervised auxiliary task, and specifically they do super-resolution. What they mean by this is simply that, in addition to the whole GAN training, there is a second objective. So here you have the data set. The discriminator D gets images from the GAN, as you can see right here, and it also gets images from the data set; that's your main GAN loss, which you backpropagate to update all the parameters. What you also do is take data set images and put them up as a target for the generator. So the generator needs to output something, and what does it get as an input? It gets that same kind of picture, but scaled down: big picture goes to small picture. You take pictures from your data set and deliberately downsample them; you might even add some noise, but I guess they simply lower the resolution. So LR means low resolution, and the task of the generator is to predict the high resolution image from the low resolution input. This is a completely different pipeline from the usual one, because the generator actually gets a small real image as an input, whereas the generator usually never sees real data. This is not the same image that goes to the discriminator, by the way, I think, at least. You simply mix this super-resolution loss into the training: you have the regular GAN loss and the super-resolution loss, and you add them with a parameter that trades off one against the other.
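Here is a hedged sketch of that mixing; the bilinear downsampling and the MSE reconstruction term are my assumptions, and super_resolve is a hypothetical entry point into the generator's upsampling stages, not a documented method:

```python
import torch
import torch.nn.functional as F

def sr_auxiliary_loss(G, real_images, scale=2):
    # deliberately downsample real images, then ask the generator's
    # upsampling stages to reconstruct the high-resolution originals
    low_res = F.interpolate(real_images, scale_factor=1 / scale, mode="bilinear")
    reconstructed = G.super_resolve(low_res)  # hypothetical hook into G's later stages
    return F.mse_loss(reconstructed, real_images)

# total generator loss: the usual GAN term plus the weighted auxiliary term
# g_loss = gan_loss + lambda_sr * sr_auxiliary_loss(G, real_batch)
```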
And this helps: given a low resolution image, these upsampling stages will have to learn to produce realistic-looking high resolution images from lower resolution ones, and that's what you expect this GAN to do anyway. So it makes sense that this is a good auxiliary task, and it turns out to help quite a bit. As you can see right here, they have it with data augmentation, and if you add this super-resolution task, the scores improve again by a bit. The last trick is this locality-aware initialization for self-attention, and you can see that again pushes the scores. So what is this last trick? They say, look, the convolution seems to be a pretty good prior for images after all; that's why CNNs are so effective. It seems to be a good prior to look locally, to have local features. But of course the transformers are more powerful, and eventually they want to look at the whole picture. Still, maybe it makes sense to first teach them that local things matter, and once they're at a certain quality level, we can let them look at the other pixels in the image. So what they do is handcraft a schedule, and over the course of training there is this gradually increasing receptive field. In early training they simply say: you're only allowed to look at your immediate neighborhood. Each super-pixel, and remember this happens in a downscaled world during parts of the generator, is only allowed to look at its immediate neighbors. As they put it: we introduce a mask by which each query is only allowed to interact with its local neighbors that are not masked; and, different from previous methods, during training we gradually reduce the mask until diminishing it, so eventually self-attention is fully global. So in the transformer layer you have a series of keys and a series of queries from the individual tokens, and at first, for a particular token, you're only allowed to look at your immediate neighbors when you aggregate information. Later in training, you're allowed to also gather information from further out, until at the end of training all the queries are allowed to look at all the keys. This is known as local attention, by the way, and if you engineer it smartly, you can probably also get a bunch of speed-ups for the early phase of training. You can see it right here: in the early stage, only immediate neighbors; in the middle stage they widen the circle of where you're allowed to look; and in the final stage, each query is actually allowed to do the full attention.
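To make the schedule concrete, here is a minimal sketch of such a growing local-attention mask; the Chebyshev-distance window and the step thresholds are made up, since the paper's actual schedule is handcrafted:

```python
import torch

def local_attention_mask(grid, window):
    # True where a query position may attend to a key position on a grid x grid token map
    idx = torch.arange(grid)
    qi, qj = torch.meshgrid(idx, idx, indexing="ij")
    pos = torch.stack([qi.flatten(), qj.flatten()], dim=1)       # (grid*grid, 2)
    dist = (pos[:, None, :] - pos[None, :, :]).abs().max(-1).values
    return dist <= window  # use as scores.masked_fill(~mask, float('-inf')) before softmax

def window_for_step(step, grid):
    # toy schedule: immediate neighbours early, fully global at the end
    if step < 10_000:
        return 1
    if step < 30_000:
        return grid // 2
    return grid  # no masking left

mask = local_attention_mask(8, window_for_step(0, 8))  # early training: 3x3 neighbourhoods
```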
So when I saw this, I was like: okay, I'm told we're going to build a GAN absolutely without convolutions, and all we're going to replace them with is a linear operation that is applied over the whole image in a fashion where it only gets to look at its neighbors. It's totally not a convolution. It's just a linear operation that is applied equally across the image while only looking at the immediate neighbors. I'm so glad we're building GANs without convolutions. Convolutions are for losers. We're all for locally applied linear transformations over the whole image that can only look at their immediate neighbors. Yeah, no, I mean, you get the point: this is essentially an attentionized version of a convolution, but as training progresses, they do release that constraint. This is simply to help the GAN early in training, though I'm fairly convinced you wouldn't have to do this as a fixed schedule. Right now it is a fixed schedule: you're allowed to look at this many neighbors, and then after this many steps at this many, and so on. I'm fairly convinced you could formulate this as a two-player game, like another GAN thing, or maybe a self-play thing, where one player tries to get the most information out of the neighborhood and the other player tries to constrain that player but only has a certain budget, and so on. I'm not sure, but you could probably do something smarter than a fixed schedule, something adaptive to the difficulty of the task, and you would in turn lose a bunch of hyperparameters that you need to build this schedule. All right. The last thing they do, after all the tricks, is of course what everyone does best with transformers, and that's just scaling the thing up to many layers and large dimensionalities. I don't know if they use a lot more data, probably not in this case, but if you had more data, it would also work better. Thereby they do reach scores that are state of the art, or at least very competitive with the state of the art. Their TransGAN-XL model, as you can see here, reaches very competitive scores on CIFAR-10, beaten only by StyleGAN v2, and they reach very good or state-of-the-art scores on other data sets, for example STL-10, where they are the best. It's cool, by the way, to see papers going back to 64 by 64 images; we're so used to these super duper high resolution GANs now, this reminds me of old times. So the paper as a whole is pretty cool, and it's actually pretty straightforward, as I said. They develop an architecture that is actually computable, with this kind of upsampling and the pixel shuffle channel reduction as they go along, plus the ViT discriminator. Then they present three tricks to make it work: data augmentation, the super-resolution task as a co-training task, and the locality-aware initialization for the attention, with the mask decreasing over training according to a schedule. Finally, they scale that model up, and that gives them a pretty well-performing GAN that contains no convolutions at all. Their goal isn't to use only transformers; the goal is actually to use no convolutions. Yeah, that was it for me. Tell me what you think in the comments, and I invite you to check out the paper and the code. Thanks for watching.
[ { "start": 0, "end": 7.5600000000000005, "text": " Hi there, today we'll look at TransGAN, two transformers can make one strong GAN, by Yifan" }, { "start": 7.5600000000000005, "end": 11.6, "text": " Qian, Xu Yucheng and Cheng Yang Wang." }, { "start": 11.6, "end": 17.76, "text": " So in this paper, the authors attempt to make a generative adversarial network, a GAN, out" }, { "start": 17.76, "end": 20.06, "text": " of only transformers." }, { "start": 20.06, "end": 26.94, "text": " So far, attention or transformer-like things have been used in GANs, but they've always" }, { "start": 26.94, "end": 30.520000000000003, "text": " had some component of convolutions in there." }, { "start": 30.520000000000003, "end": 37.660000000000004, "text": " This paper attempts to do generator and discriminator just using transformers." }, { "start": 37.660000000000004, "end": 43.52, "text": " They discuss what is needed to do that, how they built the architecture, and there are" }, { "start": 43.52, "end": 49.56, "text": " a couple of training tricks that make this work and actually make this competitive to" }, { "start": 49.56, "end": 51.96, "text": " current state-of-the-art architectures." }, { "start": 51.96, "end": 59.72, "text": " So the biggest data set they tackle is Cell-Up A, which is 64 by 64 pixels, but you know," }, { "start": 59.72, "end": 64.42, "text": " due to their numbers suggest you can scale this much larger." }, { "start": 64.42, "end": 67.96000000000001, "text": " The model is called TransGAN." }, { "start": 67.96000000000001, "end": 72.08, "text": " I don't know if this is a bit of an unfortunate naming." }, { "start": 72.08, "end": 77.12, "text": " I guess the question is, which bathroom do the TransGAN go to?" }, { "start": 77.12, "end": 78.7, "text": " I don't know." }, { "start": 78.7, "end": 82.68, "text": " In any case, let's dive into the paper, let's check it out." }, { "start": 82.68, "end": 87.88, "text": " If you like content like this, share it out, leave a like and tell me what you think in" }, { "start": 87.88, "end": 89.28, "text": " the comments." }, { "start": 89.28, "end": 92.24000000000001, "text": " So the paper is fairly straightforward." }, { "start": 92.24000000000001, "end": 94.52000000000001, "text": " Actually, there is code available." }, { "start": 94.52000000000001, "end": 96.08, "text": " So definitely check that out." }, { "start": 96.08, "end": 98.72, "text": " I'll link that of course in the description." }, { "start": 98.72, "end": 103.78, "text": " The paper is fairly straightforward and answers one question." }, { "start": 103.78, "end": 109, "text": " Can we build a strongGAN completely free of convolutions?" }, { "start": 109, "end": 115.76, "text": " So usually in GANs you have convolutions both in the generator and the discriminator, and" }, { "start": 115.76, "end": 120.28, "text": " their goal is to just replace that using transformers." }, { "start": 120.28, "end": 124.7, "text": " As we say, there are contributions, there are three, the model architecture." }, { "start": 124.7, "end": 131.76, "text": " So the discriminator, as we're going to see, is a vision transformer, like we saw before." }, { "start": 131.76, "end": 137.79999999999998, "text": " The generator is also a transformer that is interlaced with upsampling." }, { "start": 137.79999999999998, "end": 144.95999999999998, "text": " Then training technique, they do discuss that you do need three things specifically." 
}, { "start": 144.95999999999998, "end": 150.76, "text": " So you do need data augmentation, you need multitask code training for the generator," }, { "start": 150.76, "end": 158.89999999999998, "text": " and you need a localized initialization for the self-attention in order to make this work." }, { "start": 158.9, "end": 165.64000000000001, "text": " And then they reach a GAN, so their model, their biggest model, TransGAN XL, reaches" }, { "start": 165.64000000000001, "end": 171.92000000000002, "text": " very competitive FID scores and also very competitive inception scores." }, { "start": 171.92000000000002, "end": 176.64000000000001, "text": " Wait, this is FID, here is the inception score." }, { "start": 176.64000000000001, "end": 179.64000000000001, "text": " The IS score is a bit of a misnomer too." }, { "start": 179.64000000000001, "end": 187.4, "text": " I mean, the S is already score, but you know, it's okay." }, { "start": 187.4, "end": 191.92000000000002, "text": " So first, architecture, the architecture is fairly straightforward." }, { "start": 191.92000000000002, "end": 197.16, "text": " So for a GAN, you need a discriminator and a generator." }, { "start": 197.16, "end": 203.6, "text": " Now the discriminator, as I already said here, that is the exact model from VIT and I've" }, { "start": 203.6, "end": 205.24, "text": " done video about it." }, { "start": 205.24, "end": 213.24, "text": " The paper is called A Picture is Worth 16 by 16 Pixels or something like this." }, { "start": 213.24, "end": 221.68, "text": " I don't exactly remember, but you can definitely find that it is a transformer based image" }, { "start": 221.68, "end": 223.20000000000002, "text": " classifier." }, { "start": 223.20000000000002, "end": 228.48000000000002, "text": " So what you do with an image, so here you see an example image, this image of the dog." }, { "start": 228.48000000000002, "end": 232.48000000000002, "text": " What you would see if you were to feed this into the discriminator, of course, the discriminator" }, { "start": 232.48000000000002, "end": 240.86, "text": " gets the output from the generator, but also the real data, you would unroll that picture" }, { "start": 240.86, "end": 246.74, "text": " into these kind of sub pixels, as you can see right here." }, { "start": 246.74, "end": 250.38000000000002, "text": " But not into full pixels, but into kind of the super pixels." }, { "start": 250.38000000000002, "end": 254.64000000000001, "text": " So every one of those super pixels will then be unrolled." }, { "start": 254.64000000000001, "end": 259.16, "text": " This is this flattening operation right here into a single vector." }, { "start": 259.16, "end": 262.84000000000003, "text": " And that then is like a word in a sentence." }, { "start": 262.84000000000003, "end": 267.76, "text": " Okay, so that this picture here just becomes a series of vectors." }, { "start": 267.76, "end": 272.68, "text": " And then you can simply apply your regular transformer architecture." }, { "start": 272.68, "end": 276.84, "text": " So every patch becomes a vector, like a word embedding." }, { "start": 276.84, "end": 281.24, "text": " And then you just go ahead and you put a transformer encoder." }, { "start": 281.24, "end": 286, "text": " So this is very much like BERT, for example." }, { "start": 286, "end": 287.32, "text": " It is a similar architecture." }, { "start": 287.32, "end": 289.8, "text": " As you say, you can go look at this paper." 
}, { "start": 289.8, "end": 294.24, "text": " And at the end, you simply classify whether it's real or fake." }, { "start": 294.24, "end": 300.48, "text": " You do have to add position encodings because, you know, lacking the convolutions, the transformer" }, { "start": 300.48, "end": 310.28000000000003, "text": " has no idea where in the picture a given thing appears, because it is not a sequential architecture." }, { "start": 310.28000000000003, "end": 313.40000000000003, "text": " It's actually a set transformation architecture." }, { "start": 313.40000000000003, "end": 315.88, "text": " So you do need to add positional encodings." }, { "start": 315.88, "end": 321.96000000000004, "text": " But in general, this has been shown to work quite well in things like ImageNet classification." }, { "start": 321.96, "end": 328, "text": " On the generator side, it is very similar, but you know, a little bit different." }, { "start": 328, "end": 340.76, "text": " So here, what you need to achieve are, of course, are these 32 by 32 by 3 pixel image," }, { "start": 340.76, "end": 341.76, "text": " right?" }, { "start": 341.76, "end": 344.03999999999996, "text": " That's at the end, you need to achieve that." }, { "start": 344.03999999999996, "end": 350.91999999999996, "text": " Now, you can't just go the reverse from over here and somehow try to predict these patches," }, { "start": 350.92, "end": 357.40000000000003, "text": " because that, I guess that is just too, you know, if you predict these patches as such," }, { "start": 357.40000000000003, "end": 362.56, "text": " like independent patches from each other, the borders would never match up." }, { "start": 362.56, "end": 366.8, "text": " In a discriminator, this is not, does not matter because you don't need to construct" }, { "start": 366.8, "end": 369.40000000000003, "text": " the image, you simply need to classify it." }, { "start": 369.40000000000003, "end": 374.64, "text": " But if you need to generate images, it's, you know, it doesn't look good if you have" }, { "start": 374.64, "end": 377.76, "text": " these borders here where things don't match up." }, { "start": 377.76, "end": 383.42, "text": " So you will actually need to produce an image that is in the size that you require." }, { "start": 383.42, "end": 389.56, "text": " So in this case, yeah, 32 by 32, and of course, three color channels." }, { "start": 389.56, "end": 395.36, "text": " So the way they achieve it is by this up sampling architecture." }, { "start": 395.36, "end": 402.59999999999997, "text": " The problem with transformers, of course, is they do require quite a bit of memory and" }, { "start": 402.6, "end": 410.56, "text": " also compute because the attention mechanism basically connects every single token with" }, { "start": 410.56, "end": 413.36, "text": " every single other token in each transformation." }, { "start": 413.36, "end": 417.76000000000005, "text": " In this case, they connect every pixel to every other pixel." }, { "start": 417.76000000000005, "end": 422.76000000000005, "text": " Now, if you were to do this for many, many layers, that is going to be, you know, 32" }, { "start": 422.76000000000005, "end": 430.06, "text": " squared in this case, memory requirements, pretty quickly, you will run into problems." }, { "start": 430.06, "end": 436.64, "text": " So what they do is they have intrinsic upscaling of their dimensions." }, { "start": 436.64, "end": 437.78000000000003, "text": " What does that mean?" 
}, { "start": 437.78000000000003, "end": 444.92, "text": " So at the beginning, you have like some some noise input, and you have a little MLP generating" }, { "start": 444.92, "end": 446.04, "text": " the initial sequence." }, { "start": 446.04, "end": 451.04, "text": " Now, the initial sequence is going to be eight by eight by number of channels, you can see" }, { "start": 451.04, "end": 453.76, "text": " there are also position encodings right here." }, { "start": 453.76, "end": 460.56, "text": " So your noise generator essentially creates an eight by eight grid." }, { "start": 460.56, "end": 462.4, "text": " Okay." }, { "start": 462.4, "end": 466.71999999999997, "text": " Let's say for the sake of argument, we create a two by two grid instead of an eight by eight" }, { "start": 466.71999999999997, "end": 469.08, "text": " with a number of channels." }, { "start": 469.08, "end": 472.21999999999997, "text": " So here is the number of channels to the back." }, { "start": 472.21999999999997, "end": 478.15999999999997, "text": " You want to unroll those into four vectors of these channels." }, { "start": 478.15999999999997, "end": 482.56, "text": " One, two, three, four, you get the idea." }, { "start": 482.56, "end": 485.64, "text": " And then that you feed into the transformer." }, { "start": 485.64, "end": 492.28000000000003, "text": " So now you have four tokens or here, 64 tokens in that case, but in our case, four tokens" }, { "start": 492.28000000000003, "end": 494.22, "text": " that you feed to the transformer." }, { "start": 494.22, "end": 500.06, "text": " So right now, at this stage, this is like a sentence with four different words." }, { "start": 500.06, "end": 503.84000000000003, "text": " So you run that through M layers of the transformer." }, { "start": 503.84000000000003, "end": 508.88, "text": " And then at some point, you decide, okay, now it's time to do upscaling." }, { "start": 508.88, "end": 514.76, "text": " And the upscaling, in the upscaling, you take that those four words." }, { "start": 514.76, "end": 520.4399999999999, "text": " So you take that two by two image that you have right here with the C channels, and you" }, { "start": 520.4399999999999, "end": 522.64, "text": " generate somehow from it." }, { "start": 522.64, "end": 525.4, "text": " And we're going to look at, I'm going to draw this over here." }, { "start": 525.4, "end": 535.72, "text": " So you generate somehow an image that is double the density in pixels." }, { "start": 535.72, "end": 541.22, "text": " So this is now a four by four image, but it has less channels." }, { "start": 541.22, "end": 548.12, "text": " So the way they save memory is that they start out with many channels, but very, very coarse" }, { "start": 548.12, "end": 554.76, "text": " resolution and progressively as they go up the layers, they up sample so that they have" }, { "start": 554.76, "end": 558.1600000000001, "text": " more resolution, but less channels." }, { "start": 558.1600000000001, "end": 559.24, "text": " Okay." }, { "start": 559.24, "end": 565.32, "text": " And the exact so this is this is very much like, like the convolutional GANs do." }, { "start": 565.32, "end": 570.4000000000001, "text": " So like, they would start out with a very coarse image grid, and then they do some kind" }, { "start": 570.4000000000001, "end": 577.72, "text": " of up sampling some kind of strided pooling, and so on, in order to reach higher, higher" }, { "start": 577.72, "end": 579.1800000000001, "text": " pixel densities." 
}, { "start": 579.1800000000001, "end": 582.98, "text": " And with the higher pixel densities, they often decrease the number of channels." }, { "start": 582.98, "end": 588.6800000000001, "text": " So you get a trade off between the density and the kind of depth of information." }, { "start": 588.6800000000001, "end": 593.7600000000001, "text": " At the end, they end up with their target resolution and a number of channels." }, { "start": 593.76, "end": 600.58, "text": " And then they feed that through a small, they feed each individually through a small linear" }, { "start": 600.58, "end": 605.12, "text": " projection in order to project that to the three channels." }, { "start": 605.12, "end": 607.3199999999999, "text": " So that's how they end up with three channels." }, { "start": 607.3199999999999, "end": 610.72, "text": " So how exactly does this up sampling work?" }, { "start": 610.72, "end": 614.56, "text": " By the way, I hope you can you can see the whole pipeline now, right?" }, { "start": 614.56, "end": 618.64, "text": " You start out by this is this is sort of noise generated." }, { "start": 618.64, "end": 621, "text": " This is what is derived from the noise." }, { "start": 621, "end": 626.24, "text": " And then the input is just transformed, transformed, transformed, up sampled, transformed some" }, { "start": 626.24, "end": 631.12, "text": " more up sampled, transformed some more until it is at the target resolution." }, { "start": 631.12, "end": 636, "text": " Thereby, in the lower layers, you have lots of information depth, not much resolution" }, { "start": 636, "end": 641.94, "text": " in the higher layer, you have lots of resolution, but not that much information depth anymore." }, { "start": 641.94, "end": 646.4, "text": " So the computations higher up might be more localized, they might be more to do with the" }, { "start": 646.4, "end": 653.84, "text": " exact kind of the exact details of that particular patch in the image, right?" }, { "start": 653.84, "end": 659.36, "text": " All of these things are representative of patches, especially in the down scaled, like" }, { "start": 659.36, "end": 665.04, "text": " this pixel right here is representative of all the pixels that are going to be generated" }, { "start": 665.04, "end": 666.04, "text": " out of it." }, { "start": 666.04, "end": 670.84, "text": " So of this one, one layer higher, and of course, one, even one layer higher, it's going to" }, { "start": 670.84, "end": 674.48, "text": " be of its own four by four pixel grid." }, { "start": 674.48, "end": 683.08, "text": " So the computation you do down here on this pixel will affect all of these pixels later." }, { "start": 683.08, "end": 689.0600000000001, "text": " The way they do the up sampling is by this pixel shuffle algorithm that they have from" }, { "start": 689.0600000000001, "end": 691.52, "text": " this paper right here." }, { "start": 691.52, "end": 694.02, "text": " And I'll link to that, of course, as well." }, { "start": 694.02, "end": 699.0600000000001, "text": " So this is a paper that was, as I understand it, originally derived for convolutions." }, { "start": 699.06, "end": 706.28, "text": " And it asked, how can we do sort of convolutional operation on high resolution images without" }, { "start": 706.28, "end": 710, "text": " having to do the compute for high resolution images?" 
}, { "start": 710, "end": 716.64, "text": " And they figured out that if they had, if they had a high resolution image, they can" }, { "start": 716.64, "end": 723, "text": " sort of represent, they can rearrange a high resolution image into a smaller resolution" }, { "start": 723, "end": 724.4799999999999, "text": " image with more channels." }, { "start": 724.48, "end": 730.32, "text": " So here, you see you have, they call this R squared number of channels." }, { "start": 730.32, "end": 734.32, "text": " So this number here is R squared." }, { "start": 734.32, "end": 738.6800000000001, "text": " And they can sort of unroll this image into this one." }, { "start": 738.6800000000001, "end": 743.8000000000001, "text": " And they do that by treating these things here." }, { "start": 743.8000000000001, "end": 748.76, "text": " Maybe you can see this is a repeating pattern as sort of super pixels." }, { "start": 748.76, "end": 750.94, "text": " You see that?" }, { "start": 750.94, "end": 757.6400000000001, "text": " So one of these super pixels is going to be one column here." }, { "start": 757.6400000000001, "end": 770.6400000000001, "text": " All right, so this, this way, so you're going to up sample by having lots of channels here," }, { "start": 770.6400000000001, "end": 776.96, "text": " doing the computation on as if they were lots of channel in a low resolution image." }, { "start": 776.96, "end": 781.52, "text": " And then you up sample by just unrolling the channels locally." }, { "start": 781.52, "end": 787.3000000000001, "text": " So by treating each of these things as just, you know, one super pixel with the elements" }, { "start": 787.3000000000001, "end": 792.32, "text": " of the channels being the, you know, kind of the different pixels in the neighborhood." }, { "start": 792.32, "end": 793.96, "text": " So you want to unroll that." }, { "start": 793.96, "end": 799.6800000000001, "text": " And then after that, you continue with your processing with putting this through the next" }, { "start": 799.6800000000001, "end": 804.52, "text": " layers until you up sample it again, by unrolling some more channels." }, { "start": 804.52, "end": 806.3000000000001, "text": " I hope that's clear." }, { "start": 806.3, "end": 810.68, "text": " So you're going to start out with a lot of channels because each time you unroll, you're" }, { "start": 810.68, "end": 815.76, "text": " going to lose some of them, you're going to trade off some of the channels, channel depth" }, { "start": 815.76, "end": 817.56, "text": " for more resolution." }, { "start": 817.56, "end": 823.4799999999999, "text": " All right, so here you can see every time they up sample their resolution by two, they" }, { "start": 823.4799999999999, "end": 828.4399999999999, "text": " need to divide the channels by four because you need to up sample by two in the width" }, { "start": 828.4399999999999, "end": 830.92, "text": " and in the height direction." }, { "start": 830.92, "end": 833.12, "text": " Actually it's not even necessary." }, { "start": 833.12, "end": 839.16, "text": " You can totally, you can totally choose this because in the attention block, as you can" }, { "start": 839.16, "end": 843.08, "text": " see here, sorry, in the transformer block, you have this part, which is the attention" }, { "start": 843.08, "end": 844.5600000000001, "text": " mechanism." }, { "start": 844.5600000000001, "end": 849.12, "text": " And then you also have this part right here, especially this MLP here." 
}, { "start": 849.12, "end": 852.48, "text": " It takes in each token of these." }, { "start": 852.48, "end": 856.64, "text": " It takes that after it, you know, it goes through the attention after the whole thing" }, { "start": 856.64, "end": 858.2, "text": " goes through the attention." }, { "start": 858.2, "end": 862.78, "text": " Each of the tokens is fed separately through the MLP." }, { "start": 862.78, "end": 868.56, "text": " So the MLP, there is, it's actually not necessary that the output dimension of the MLP is the" }, { "start": 868.56, "end": 873.64, "text": " same as the input dimension, except for this skip connection right here." }, { "start": 873.64, "end": 881.1999999999999, "text": " Now if this skip connection, like in ResNet had some sort of a linear projection as well," }, { "start": 881.1999999999999, "end": 887.12, "text": " then you could totally think of, think of changing the dimensions here." }, { "start": 887.12, "end": 893.08, "text": " But I'm not even, I'm not even sure if you do the projection, isn't this just the same" }, { "start": 893.08, "end": 897.04, "text": " as the MLP with, if you feed each individually?" }, { "start": 897.04, "end": 901.76, "text": " Maybe, maybe there's no point in having the skip connection at all." }, { "start": 901.76, "end": 906.5600000000001, "text": " In any case, you could probably get around that, you know, that requirement to have this" }, { "start": 906.5600000000001, "end": 908.5600000000001, "text": " exact number of channels." }, { "start": 908.5600000000001, "end": 911.36, "text": " Nevertheless, that's what they do." }, { "start": 911.36, "end": 918.5, "text": " So the generator is actually manageable memory wise, because it does this, this trade off" }, { "start": 918.5, "end": 925.8000000000001, "text": " as it progresses up, it generates an actual grid in the resolution of the image in with" }, { "start": 925.8000000000001, "end": 930.8000000000001, "text": " the required channels being a projection of the final channels here out of the transformer." }, { "start": 930.8000000000001, "end": 932.4, "text": " Then it's fed into the discriminator." }, { "start": 932.4, "end": 938.5600000000001, "text": " The discriminator immediately divides the image into patches, interprets each as sort" }, { "start": 938.56, "end": 943.8399999999999, "text": " of a token embedding, and then simply it adds positional encodings and then simply uses" }, { "start": 943.8399999999999, "end": 947.3599999999999, "text": " a transformer like BERT." }, { "start": 947.3599999999999, "end": 952.4399999999999, "text": " And at the end, you have this CLS token like you have in BERT, and that classifies real" }, { "start": 952.4399999999999, "end": 955.2399999999999, "text": " or fake, you can back prop through the whole architecture." }, { "start": 955.2399999999999, "end": 958.0799999999999, "text": " And that's again for you." }, { "start": 958.0799999999999, "end": 961.1199999999999, "text": " So that was the architecture part." }, { "start": 961.1199999999999, "end": 966.8399999999999, "text": " And now, so they do, they do initial, they do a lot of good ablations where they say," }, { "start": 966.84, "end": 972.0400000000001, "text": " okay, what if we, what if, so we have a generator and the discriminator, what if we have kind" }, { "start": 972.0400000000001, "end": 975.88, "text": " of this autogan is what they is one of the things they compare with." }, { "start": 975.88, "end": 977.8000000000001, "text": " So what if we do that?" 
}, { "start": 977.8000000000001, "end": 982.76, "text": " And then what if we just replace the generator with the transformer?" }, { "start": 982.76, "end": 985.2800000000001, "text": " What if we just replace the discriminator?" }, { "start": 985.2800000000001, "end": 990.2800000000001, "text": " So they find out that they can, they can replace the generator just fine." }, { "start": 990.2800000000001, "end": 993.84, "text": " And that even gives, you know, gives competitive performance." }, { "start": 993.84, "end": 1001.4, "text": " As soon as they, you know, transfer the discriminator to a transformer, that drops in performance." }, { "start": 1001.4, "end": 1006.12, "text": " So in order to really make this work, they need some more tricks." }, { "start": 1006.12, "end": 1007.9200000000001, "text": " They have three tricks." }, { "start": 1007.9200000000001, "end": 1009.96, "text": " The first trick is data augmentation." }, { "start": 1009.96, "end": 1015.9200000000001, "text": " They say data augmentation is crucial for trans-GAN." }, { "start": 1015.9200000000001, "end": 1020.94, "text": " And the type of data augmentation they do is also from a paper for data augmentation" }, { "start": 1020.94, "end": 1022.0400000000001, "text": " for GANs." }, { "start": 1022.04, "end": 1025.32, "text": " This right here, differentiable augmentation for data efficient training." }, { "start": 1025.32, "end": 1033.08, "text": " So the whole point is that your data augmentation, so the augmentation T right here is a differentiable" }, { "start": 1033.08, "end": 1034.08, "text": " function." }, { "start": 1034.08, "end": 1039.92, "text": " So data augmentation is things like cropping or changing the brightness, color jitter," }, { "start": 1039.92, "end": 1041.86, "text": " rotating and so on." }, { "start": 1041.86, "end": 1047.04, "text": " So as long as that's a differentiable operation, you can use this technique right here where" }, { "start": 1047.04, "end": 1050.1, "text": " you back prop through the augmentation." }, { "start": 1050.1, "end": 1054.54, "text": " You can see right here in the generator update, you actually back prop." }, { "start": 1054.54, "end": 1060.9599999999998, "text": " So the back propagation happens through the T function and therefore you get a much better" }, { "start": 1060.9599999999998, "end": 1061.9599999999998, "text": " signal." }, { "start": 1061.9599999999998, "end": 1065.36, "text": " Plus you get all the benefits of data augmentation." }, { "start": 1065.36, "end": 1071.28, "text": " And the point they make in the trans-GAN paper here is that given that transformers don't" }, { "start": 1071.28, "end": 1078.6, "text": " have this convolution, they don't have this locality bias built into their architecture," }, { "start": 1078.6, "end": 1080.5, "text": " they need a lot more data." }, { "start": 1080.5, "end": 1085.5, "text": " And we know that transformers, they work well if you have an abundant amount of data and" }, { "start": 1085.5, "end": 1091.34, "text": " you can sort of get around having lots of data a little bit by using data augmentation." }, { "start": 1091.34, "end": 1097.52, "text": " So they argue that data augmentation, it works for all GANs, but it helps a lot more in these" }, { "start": 1097.52, "end": 1103.6, "text": " transformer based GANs because the transformers benefit better from having lots of data." }, { "start": 1103.6, "end": 1107.36, "text": " Again, the story about transformers is pretty clear." 
}, { "start": 1107.36, "end": 1112.78, "text": " I think if you have lots of data, they tend to work well because they're just a more general" }, { "start": 1112.78, "end": 1113.78, "text": " architecture." }, { "start": 1113.78, "end": 1119.9199999999998, "text": " So here you can see in the different GANs, you can see that the augmentation, which is" }, { "start": 1119.9199999999998, "end": 1125.32, "text": " when the checkmark here is, it helps sometimes, you can see not always, sometimes here it" }, { "start": 1125.32, "end": 1126.54, "text": " does fairly well." }, { "start": 1126.54, "end": 1133.32, "text": " But here in the trans-GAN, you can see that adding data augmentation drastically improves" }, { "start": 1133.32, "end": 1141.4399999999998, "text": " the results and already gets these GANs into the ballpark of the state of the art." }, { "start": 1141.4399999999998, "end": 1149.04, "text": " Not yet there, there's still a big difference, but it gets it, you know, gets them in like" }, { "start": 1149.04, "end": 1150.76, "text": " target distance." }, { "start": 1150.76, "end": 1154.72, "text": " So the second trick they have is this code training with the self supervised auxiliary" }, { "start": 1154.72, "end": 1159.8, "text": " task and specifically, they do super resolution." }, { "start": 1159.8, "end": 1161, "text": " So where do I write this?" }, { "start": 1161, "end": 1164.92, "text": " So this here, it's a super resolution task, right?" }, { "start": 1164.92, "end": 1169.16, "text": " Super resolution." }, { "start": 1169.16, "end": 1177.48, "text": " And what they mean by this is simply they, in addition to the whole GAN training, right?" }, { "start": 1177.48, "end": 1181.32, "text": " So here you have the data set." }, { "start": 1181.32, "end": 1184.84, "text": " Data set, I know, beautiful." }, { "start": 1184.84, "end": 1191.1999999999998, "text": " So the discriminator over here, the D, it gets images from the GAN, as you can see right" }, { "start": 1191.1999999999998, "end": 1194.04, "text": " here, and it also gets images from the data set, right?" }, { "start": 1194.04, "end": 1195.76, "text": " And that's your main GAN loss." }, { "start": 1195.76, "end": 1200.56, "text": " So here you have the discriminator loss, you back propagate that through the GAN, you update" }, { "start": 1200.56, "end": 1202.12, "text": " all the parameters." }, { "start": 1202.12, "end": 1208.6399999999999, "text": " What you also do is you take data set images, you put them here as a target." }, { "start": 1208.6399999999999, "end": 1211.6, "text": " So this is the target for the GAN." }, { "start": 1211.6, "end": 1215.24, "text": " So the GAN needs to output something." }, { "start": 1215.24, "end": 1217.1599999999999, "text": " And what does it get as an input?" }, { "start": 1217.1599999999999, "end": 1221.32, "text": " It gets this thing, but scaled down." }, { "start": 1221.32, "end": 1226.6799999999998, "text": " So I'm gonna say this big picture goes to small picture." }, { "start": 1226.6799999999998, "end": 1233.56, "text": " So you take pictures from your data set, and you deliberately down sample them, you deliberately," }, { "start": 1233.56, "end": 1238.32, "text": " you might even add some noise or something, but I guess they simply do lower resolution." 
}, { "start": 1238.32, "end": 1246.84, "text": " So LR means low resolution, and then the task of the GAN is from the low resolution input," }, { "start": 1246.84, "end": 1254.28, "text": " predict, like it needs to predict the high resolution image." }, { "start": 1254.28, "end": 1259.04, "text": " It's completely different pipeline than usually, because it actually gets the small thing," }, { "start": 1259.04, "end": 1261.6, "text": " the small real image as an input." }, { "start": 1261.6, "end": 1266.24, "text": " The GAN usually never, the generator usually never sees real data, right?" }, { "start": 1266.24, "end": 1269.16, "text": " Now it gets a small resolution." }, { "start": 1269.16, "end": 1275.6, "text": " This is not the same image that goes to the discriminator, by the way, I think at least." }, { "start": 1275.6, "end": 1278.48, "text": " This is just a different thing you can also do." }, { "start": 1278.48, "end": 1286.96, "text": " You mix into your batches of noise GAN samples with this loss, you simply also mix things," }, { "start": 1286.96, "end": 1290.1, "text": " you mix this loss right here, the super resolution loss." }, { "start": 1290.1, "end": 1295.28, "text": " So you have this loss, and then you have the loss from the super resolution, and you simply" }, { "start": 1295.28, "end": 1301.08, "text": " add them with a parameter to trade off one or the other." }, { "start": 1301.08, "end": 1309, "text": " And this helps the generator to, so given a low resolution image, these stages here" }, { "start": 1309, "end": 1316.5, "text": " will have to learn to sort of up sample realistic looking images from lower resolution images." }, { "start": 1316.5, "end": 1319.6, "text": " And that's what you sort of expect this GAN to do." }, { "start": 1319.6, "end": 1324.8, "text": " So it makes sense that this is a good auxiliary task." }, { "start": 1324.8, "end": 1328.22, "text": " And this turns out to help quite a bit." }, { "start": 1328.22, "end": 1333.6, "text": " So as you can see, right here, here they have it with data augmentation." }, { "start": 1333.6, "end": 1341.76, "text": " And if you add this task here, it you know, the scores improve again by a bit." }, { "start": 1341.76, "end": 1347.8799999999999, "text": " And then the last trick they have is to also do this locality aware initialization for" }, { "start": 1347.8799999999999, "end": 1349.12, "text": " self attention." }, { "start": 1349.12, "end": 1352.44, "text": " And you can see that again pushes the scores." }, { "start": 1352.44, "end": 1354.36, "text": " So what is this last trick?" }, { "start": 1354.36, "end": 1360.4799999999998, "text": " And this last trick, they say, look, the the convolution, it seems to be a pretty good" }, { "start": 1360.4799999999998, "end": 1362.8799999999999, "text": " prior for images after all, right?" }, { "start": 1362.8799999999999, "end": 1365.3999999999999, "text": " That's why I mean, that's why CNNs are so effective." }, { "start": 1365.3999999999999, "end": 1371.12, "text": " It seems to be a good prior to look locally, like to have local features." }, { "start": 1371.12, "end": 1375.4199999999998, "text": " But of course, the transformers, they are more powerful." }, { "start": 1375.4199999999998, "end": 1378.4199999999998, "text": " And eventually, they want to look at the whole picture." }, { "start": 1378.4199999999998, "end": 1382.9599999999998, "text": " But maybe it makes sense to first teach them that local things matter." 
}, { "start": 1382.96, "end": 1389.4, "text": " And once they're at a certain quality level, we can kind of let them look at other pixels" }, { "start": 1389.4, "end": 1390.76, "text": " in the image." }, { "start": 1390.76, "end": 1394.92, "text": " So what they do is they handcraft a schedule." }, { "start": 1394.92, "end": 1400.98, "text": " And so over the course of training, I have this gradually increasing receptive field." }, { "start": 1400.98, "end": 1406.72, "text": " So in early training, they simply say, you're only allowed to look at your immediate neighborhood." }, { "start": 1406.72, "end": 1412.68, "text": " So each super pixel right here, remember, this is in a downscaled world sometimes during" }, { "start": 1412.68, "end": 1421.96, "text": " training in the generator, you're only you're only allowed to look at this at the immediate" }, { "start": 1421.96, "end": 1423.44, "text": " neighbors." }, { "start": 1423.44, "end": 1429.44, "text": " So we introduce a mask that says it here, by which each query is only allowed to interact" }, { "start": 1429.44, "end": 1432.3200000000002, "text": " with its local neighbors that are not masked." }, { "start": 1432.3200000000002, "end": 1433.5600000000002, "text": " Okay." }, { "start": 1433.5600000000002, "end": 1437.64, "text": " And then say different from previous methods during training, we gradually reduce the mask" }, { "start": 1437.64, "end": 1439.4, "text": " until diminishing it." }, { "start": 1439.4, "end": 1441.8400000000001, "text": " Eventually self attention is fully global." }, { "start": 1441.84, "end": 1451.1999999999998, "text": " Okay, so at first, they say, you know, in the in the transformer layer, you have you" }, { "start": 1451.1999999999998, "end": 1455.8999999999999, "text": " have the you have the keys down here, they have a series of keys." }, { "start": 1455.8999999999999, "end": 1460.76, "text": " And you have a series of queries from the individual tokens." }, { "start": 1460.76, "end": 1467.76, "text": " And they say for a particular token, you're only allowed to look at your immediate neighbors" }, { "start": 1467.76, "end": 1470.12, "text": " as if you aggregate information." }, { "start": 1470.12, "end": 1473.6799999999998, "text": " And then later, they say, okay, now training." }, { "start": 1473.6799999999998, "end": 1480.6, "text": " So this only this and you can only look at your immediate neighbors, and so on." }, { "start": 1480.6, "end": 1486.08, "text": " And later in training, they say, okay, now you've sort of learned well, you're now allowed" }, { "start": 1486.08, "end": 1491.9599999999998, "text": " to also gather information from kind of further out until at the end of training, the all" }, { "start": 1491.9599999999998, "end": 1495.36, "text": " the queries are allowed to look at all the keys." }, { "start": 1495.36, "end": 1500.4399999999998, "text": " I'm sure that if you engineer this smartly, this is local attention, right, this is known" }, { "start": 1500.4399999999998, "end": 1502.7199999999998, "text": " as local attention." }, { "start": 1502.7199999999998, "end": 1508.12, "text": " And you can also make a bunch of, you know, speed ups, probably in early training here," }, { "start": 1508.12, "end": 1511.9599999999998, "text": " you can see right here in early stage, only immediate neighbors in middle stage, they" }, { "start": 1511.9599999999998, "end": 1515.9199999999998, "text": " sort of widen the circle of where you're allowed to look." 
}, { "start": 1515.9199999999998, "end": 1520.6399999999999, "text": " And in the final stage, each query is actually allowed to do the full attention." }, { "start": 1520.64, "end": 1529.68, "text": " So when I saw this, I was like, okay, here, I'm told we're going to build a GAN absolutely" }, { "start": 1529.68, "end": 1538.4, "text": " without convolutions, all we're going to replace with is kind of an linear operation that is" }, { "start": 1538.4, "end": 1545.0800000000002, "text": " applied over the whole image in a fashion that it only gets to look at its neighbors," }, { "start": 1545.0800000000002, "end": 1546.0800000000002, "text": " right?" }, { "start": 1546.0800000000002, "end": 1547.0800000000002, "text": " It's totally not a convolution." }, { "start": 1547.08, "end": 1551.4399999999998, "text": " It's just a linear operation that is applied equally across the image while only looking" }, { "start": 1551.4399999999998, "end": 1554.32, "text": " at your immediate neighbors." }, { "start": 1554.32, "end": 1558.36, "text": " I'm so glad we're building GANs without convolutions." }, { "start": 1558.36, "end": 1560.4399999999998, "text": " Convolutions are for losers." }, { "start": 1560.4399999999998, "end": 1565.76, "text": " We're all for locally applied linear transformations over the whole image that only can look at" }, { "start": 1565.76, "end": 1567.98, "text": " their immediate neighbors." }, { "start": 1567.98, "end": 1570.56, "text": " So yeah, no, I mean, you get the point." }, { "start": 1570.56, "end": 1578.9199999999998, "text": " This is essentially an attentionized version of a convolution, but within with training" }, { "start": 1578.9199999999998, "end": 1583.96, "text": " as training progresses, they do release that constraint." }, { "start": 1583.96, "end": 1590.76, "text": " This is simply to help the GAN do training, though I am fairly convinced what you wouldn't" }, { "start": 1590.76, "end": 1593.96, "text": " maybe have to do this as a fixed schedule, right?" }, { "start": 1593.96, "end": 1594.96, "text": " This is like a fixed schedule." }, { "start": 1594.96, "end": 1601.48, "text": " I say, okay, you're allowed to look at this many neighbors and then after this many steps," }, { "start": 1601.48, "end": 1603.1200000000001, "text": " this, this and so on." }, { "start": 1603.1200000000001, "end": 1608.4, "text": " I'm fairly convinced you could somehow formulate this maybe as a two player game, right?" }, { "start": 1608.4, "end": 1615.6000000000001, "text": " But like, like another GAN thing or maybe, yeah, maybe another GAN thing or sort of an" }, { "start": 1615.6000000000001, "end": 1622.8, "text": " self play thing where the one player tries to sort of get the most information out of" }, { "start": 1622.8, "end": 1629.76, "text": " the neighborhood and the other player tries to sort of constrain that player and, but" }, { "start": 1629.76, "end": 1632, "text": " it only has a certain amount of budget and so on." }, { "start": 1632, "end": 1633, "text": " I'm not sure." }, { "start": 1633, "end": 1639.44, "text": " I mean, but you could probably do something smarter than simply a fixed schedule that" }, { "start": 1639.44, "end": 1643.48, "text": " is adaptive to the difficulty of the task." }, { "start": 1643.48, "end": 1650.08, "text": " And you would also in turn lose a bunch of hyperparameters that you need to build this," }, { "start": 1650.08, "end": 1653.28, "text": " um, schedule over here." 
}, { "start": 1653.28, "end": 1654.28, "text": " All right." }, { "start": 1654.28, "end": 1659.84, "text": " The last thing they do after all the tricks is of course what everyone does best with" }, { "start": 1659.84, "end": 1669.36, "text": " transformers and that's just scaling that thing up to many layers, many dimensionalities" }, { "start": 1669.36, "end": 1674.1599999999999, "text": " and I don't know if they do a lot more data, probably not in this case, but if you had" }, { "start": 1674.1599999999999, "end": 1676.72, "text": " more data, it would also work better." }, { "start": 1676.72, "end": 1682.3600000000001, "text": " And thereby they do reach, you know, scores that are state of the art or at least very" }, { "start": 1682.3600000000001, "end": 1684.44, "text": " competitive with state of the art." }, { "start": 1684.44, "end": 1692.74, "text": " So they're TransGAN XL model, as you can see here, for example, on CIFAR 10, they do reach" }, { "start": 1692.74, "end": 1697.4, "text": " very competitive scores beaten only by StyleGAN V2." }, { "start": 1697.4, "end": 1703.34, "text": " They also reach very good or state of the art scores on other data sets here on STL" }, { "start": 1703.34, "end": 1704.34, "text": " 10." }, { "start": 1704.34, "end": 1706.9199999999998, "text": " So they are the best." }, { "start": 1706.9199999999998, "end": 1708.04, "text": " Yeah." }, { "start": 1708.04, "end": 1710.32, "text": " So there is a, it's cool." }, { "start": 1710.32, "end": 1717.72, "text": " By the way, this, it's nice to see papers going back to kind of the 64 by 64 images" }, { "start": 1717.72, "end": 1723.3999999999999, "text": " because we're so used to these super duper high resolution GANs now." }, { "start": 1723.3999999999999, "end": 1725.76, "text": " This reminds me of old times." }, { "start": 1725.76, "end": 1727.62, "text": " Yeah." }, { "start": 1727.62, "end": 1731.9599999999998, "text": " So the paper as a whole is pretty cool." }, { "start": 1731.96, "end": 1737.6000000000001, "text": " It's actually pretty straightforward, as I said, they develop an architecture that works" }, { "start": 1737.6000000000001, "end": 1744.24, "text": " that is actually computable with this kind of up sampling and the pixel shuffle channel" }, { "start": 1744.24, "end": 1751.3600000000001, "text": " reduction as they go along the VIT discriminator, then they present three tricks to make that" }, { "start": 1751.3600000000001, "end": 1752.3600000000001, "text": " work." }, { "start": 1752.3600000000001, "end": 1759.48, "text": " It's data augmentation, it's super resolution task as a code training task, and it's this" }, { "start": 1759.48, "end": 1766.32, "text": " localized attend, local locality aware initialization for the attention with the decreasing with" }, { "start": 1766.32, "end": 1769.24, "text": " this schedule over training." }, { "start": 1769.24, "end": 1771.84, "text": " And finally, they scale that model up." }, { "start": 1771.84, "end": 1776.88, "text": " And that gives them pretty, pretty well performing GAN." }, { "start": 1776.88, "end": 1781.3600000000001, "text": " And it's only made of, so it has no convolutions." }, { "start": 1781.3600000000001, "end": 1785.04, "text": " Their goal isn't to use only transformers, the goal is actually to use no convolutions." }, { "start": 1785.04, "end": 1786.64, "text": " Yeah, that was it for me." 
}, { "start": 1786.64, "end": 1791.4, "text": " Tell me what you think in the comments, and I invite you to check out the paper and the" }, { "start": 1791.4, "end": 1792.4, "text": " code." }, { "start": 1792.4, "end": 1817.4, "text": " Thanks for watching." } ]
_N_nFzMtWkA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reinforcement Learning, Fast and Slow
[ "Science & Technology" ]
[ "machine learning", "reinforcement learning", "meta-learning", "deep rl", "deep reinforcement learning", "deep neural network", "atari", "alphago", "deepmind", "google", "td-gammon", "episodic memory", "inductive bias", "bias variance tradeoff" ]
Abstract: Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists interested in understanding human learning. However, the concern has been raised that deep RL may be too sample-inefficient – that is, it may simply be too slow – to provide a plausible model of how humans learn. In the present review, we counter this critique by describing recently developed techniques that allow deep RL to operate more nimbly, solving problems much more quickly than previous methods. Although these techniques were developed in an AI context, we propose that they may have rich implications for psychology and neuroscience. A key insight, arising from these AI methods, concerns the fundamental connection between fast RL and slower, more incremental forms of learning. Authors: Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell, Demis Hassabis https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30061-0
Hi there, today we're looking at reinforcement learning, fast and slow, by Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell and Demis Hassabis. These people are from Google DeepMind and this is a review of kind of a development in reinforcement learning, especially as it pertains to kind of how humans learn or what we can understand from the RL world that translates over to human learning. Alright, so basically their argument here is that the first wave of deep RL, as you see here, is powerful but slow. And they give examples of this. So in box one, box one is this. So they believe there's an image missing here. This is Backgammon, TD Gammon. This is the famous DeepMind Atari playing bot and this is kind of the 3D labyrinth playing bot. So there's been a number of advances in RL and especially what they talk about is deep RL. So when we talk about reinforcement learning, the easiest case is where you have an agent and an environment. Alright, so the agent will observe some observation O from the environment and then based on that the agent will perform an action A. And then the environment will give back a reward and also a next observation. So this is O0, O1, and then this is A0, and then here you have A1, and so on. So basically this goes back and forth and back and forth. The agent performs an action, the environment gives a reward and the next observation. So this could be for example here in the Atari world. The observation is the screen itself. And then the agent needs to perform an action which is an input of the joystick or pressing some button. You can see the individual actions actually listed here. And then the reward will be given to the agent via a number which I guess is the same number as up here. So the task is to maximize the reward simply by... So the difference is you're not doing this in a supervised manner. So you're not telling the agent what would be the correct action to do. You simply tell it whether what it did was good or bad by giving it a high or a low reward. Right, so that's reinforcement learning. So what is deep reinforcement learning? Deep reinforcement learning simply means the agent maps the observation to the action via a deep neural network. So deep neural network. That's deep reinforcement learning where the mapping or some part of the agent consists of a deep neural network. You see for example here there is a deep neural network mapping the observation to the action. As well as down here but it's a bit more complicated. So they argue that the first wave of this was powerful but slow meaning kind of you need a lot of samples. And they give two sources of why it's slow, why you need a lot of samples. They say the two factors are incremental parameter adjustment and weak inductive bias. So incremental parameter adjustment means basically that you have to update or train your neural network in a very small incremental way. In order to basically, because you train it one by one, right? You train your neural network step by step. You have to make small steps in order to not forget what came before. You can't fundamentally readjust your neural network to every new batch of observations because then that's going to destroy all the information you've learned of the old one. And then weak inductive bias here is basically an understanding of these neural networks. They are general function approximators and they can approximate any function.
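Before going further into the inductive bias point, here is a minimal sketch of the agent-environment interaction loop just described. The env object is assumed to follow a Gym-like reset/step interface (simplified here to a three-value return), and the random policy is just a placeholder for the deep network:

```python
import random

def run_episode(env, policy, max_steps=1000):
    obs = env.reset()                          # O_0
    total = 0.0
    for _ in range(max_steps):
        action = policy(obs)                   # A_t, chosen from the observation
        obs, reward, done = env.step(action)   # environment returns reward and O_{t+1}
        total += reward                        # the agent's objective: maximize this sum
        if done:
            break
    return total

def random_policy(obs, actions=("noop", "left", "right", "fire")):
    # A deep RL agent would replace this with a neural network mapping
    # the observation (e.g. the Atari screen) to an action.
    return random.choice(actions)
```

The slowness arguments above are entirely about what has to happen inside `policy`, not about the loop itself.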
So if you just think in terms of kind of, I don't know, let's say polynomials and what kind of polynomials are there? This polynomial, this polynomial, this polynomial, this weird polynomial. If I have a function that can approximate all of these then I have a weak inductive bias whereas if I kind of know, okay all my polynomials are the polynomial that I'm looking for ultimately, I'm very sure it's a third degree polynomial, right? So something like this or like this or like this. So this is much less of a class of functions that I can fit but if I'm sure that the function that I'm trying to fit falls in this category then I'm much faster. So this is then called a strong inductive bias is where I build into the model basically I tell it beforehand. Here is a very restricted class of functions that you can fit. Whereas in a weak inductive bias I won't tell it that. I'll simply say, well model you could fit any function you want and I'm just giving you training samples. So this is a classic example of a bias variance trade-off where there is a lot of variance in these models meaning you can fit also a lot of functions but here because you bias the model towards a certain set of functions it can lower this variance and in this case here it speeds up learning because you don't have as much variance that means you can basically go faster while learning. Alright, so they propose two solutions to this problem of this kind of to mitigate these problems that make reinforcement learning faster or have made reinforcement learning faster. This is a review remember. So the first one is episodic deep reinforcement learning and this episodic deep reinforcement learning is specified here, fast learning through episodic memory. So the suggestion in this field of research is to augment the neural network or the agent by a memory and the memory could look something like this. So in a lot of these RL frameworks what a principal component of the agent is, so the agent will get an observation O and one of the things it has to do is estimate the value of this observation of this state. So basically the agent is in some state let's say you play pong right and you are here down and the ball comes your way up there right there's a little arrow sorry so the ball flies away from you and you're all the way down which basically means draw this bigger. So here you are down here and the ball is here flying up there. So one task in these in these agents that occurs often is to estimate the value of this observation basically means how much reward am I expecting from this state going into the future. In this case I probably will not expect a lot of reward since I can't move up fast enough right to catch the ball. So this I would assign this state a pretty low value whereas if I were up here I would assign this state quite a high value. 
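The quantity being estimated here, "how much reward am I expecting from this state going into the future", is the discounted return. A minimal sketch of computing it over a finished episode (gamma is the usual discount factor; the reward values are made up):

```python
def discounted_return(rewards, gamma=0.99):
    # R_t + gamma * R_{t+1} + gamma^2 * R_{t+2} + ..., computed backwards.
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# The pong intuition above: a state where the point is already lost has a
# lower value than one where the ball can still be returned.
assert discounted_return([0.0, 0.0, -1.0]) < discounted_return([0.0, 0.0, 1.0])
```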
So as we've already seen this is a deep neural network mapping we learn to assign value to different states and this is one of the parts that takes a long time and these methods they are the one that's depicted here replaces this value estimation by saying okay we have an observation we somehow need to estimate its value why don't we look for similar observation so we have some kind of memory right and we go with our observation and we retrieve O'1 O'2 O'3 that are somehow similar right so in our in our pong example I'm down I'm up here ball moves here I could be looking now at at states where I was here or where I was here like really close or where the ball flew a bit differently but still in the same direction or down here right so all these states are kind of close to my state and I can I already have I already have played these since they're in my memory right so with every one of them I can also retrieve the reward that I got so I because I already know the problem in reinforcement learning is before you do an action you don't know what the reward will be but here I already know because I've played it I've already experienced it it's in the past so I know what reward I got right so and this is exactly what they say over here they basically say here we have time time runs this way we're in state one then in state two and so on and we perform actions and and get rewards and what we can do is we can save these states into this memory as along with their sum of discounted rewards that we collect from that state on and then later this is like a spongebob reference if we want to estimate the value of some new state right what we do is we retrieve all of these states from memory calculate a similarity score over them and with with we wait basically we add their rewards weighted by how similar they are to the state that we want to compute so this basically amounts to averaging over states respective by how close they are to the current state right this is kind of a soft a soft way of saying I only select the states which are close and that gives you a value estimate for the new states so basically this means you just got rid of having to train a value function and this will speed up your reinforcement learning quite a bit if you don't have to train that if you already have good value estimations from your previous experience that's great of course there are a number of problems associated with that namely if this memory here for example becomes stale it doesn't represent the future rewards quite as well there is also a question of which states do you keep in memory just the good ones or do they have to have a certain property do you have to have some diversity in there and of course the biggest problem here the biggest problem is how do you know when two states are similar or when they aren't it might be easy in a situation like pong where I only have like three variables like position y position of my of my paddle and position of the ball and velocity of the ball those are like I can specify those in five numbers but if it gets harder than that if it's like this labyrinth setting full 3d environment then we have no clue which states are similar to each other and what these what most end up doing is they will train you guessed it they will train a deep neural network to give you this similarity score between states right how they do it is is a different question but presumably you can train this network offline basically meaning you can pre train it you could pre train it and then the so we 
have two stages stage one pre train train similarity dnn right and then once we've done that second stage do reinforcement learning using this and the claim here is that by having this done this this second stage will become faster so it it doesn't really solve the problem of the sample efficiency but what it says is okay the actual reinforcement learning part will become faster because we've already done the work previously but basically by by including this similarity score sorry whatever dnn by including this in the language of the review here we have successfully introduced an inductive bias into the rl procedure because the rl procedure now can't just fit any function we say we tell it your value function is one that conforms to our notion of similarity that we've pre trained this restricts the rl algorithm and we give it an inductive bias and as long as our similarity score is useful for the rl algorithm it can speed up its learning because it doesn't have to learn the value function itself all right cool so the second part here is a bit more abstract it's called meta reinforcement learning speeding up deep rl by learning to learn these kind of learning to learn approaches are quite abundant in the literature people try this usually there's a i mean it's it's very large scale experiments basically you have i think i believe they show it somewhere here yeah you have like some um some outer loop where you would say that's this thing here what the outer loop does is in each loop it samples one environment so it samples one environment from a distribution of environments so now you not only have one environment but you say okay if i'm going to navigate this maze one trying to learn to navigate this maze i'm going actually to learn to learn to navigate many mazes right so it's not like you train one agent to learn you train one agent to navigate many mazes that would just be classic reinforcement learning but you want to train an algorithm that helps an agent learn as a particular maze and you do that by training your helper algorithm on a variety of agent maze combinations so in each step you sample one environment like this this here and you then have an inner loop here you fully reinforcement learn train an agent in the classic sense on this environment right you see here action action observation reward right but the agent receives some kind of signal from outside so the outside algorithm will kind of tell the agent how to approach the problem right this could be that it initializes the the weights here you see that the outer loop trains the parameter weights which determine the inner learner that interacts with an environment during the duration of the episode for every cycle of the outer loop a new environment is sampled from a distribution of environments which share some common structure so basically the one would expect when you train this that these parameters here this could be for example it could be the initial weights of the network that the agent uses that this one possibility right this is very abstract here this meta reinforcement learning it could be literally anything that the outer model teaches the inner model or gives to the inner model right and you you train both of these with reinforcement learning so the inner you train with reinforcement learning on the individual rewards and then you can train the outer loop on the reward that the entire app agent environment episode achieved so the that's kind of a two loop situation and yeah so that's meta reinforcement 
learning again it's very unspecified what it does but as you can already see if you now have such an algorithm that kind of tells the the inner agent just as an example how to initialize its weights right how to initialize the weights of its deep neural network if you have that here then the agent you will technically bias it this is again an inductive bias so you will give it inductive bias towards what you think are good weights to generally learn these maze structured environments right since the outer loop you can update it way slower because it needs to learn over a longer time horizon and it needs to learn things for a different variety of environments but once you have good kind of initial weights for a particular environment then this agent in here can learn much faster given an individual environment so the agent you instantiated and then you give it good starting weights or some other kind of signal about the environment and then it can go much much faster at learning the environment thereby you have just sped up this inner agent by providing it an inductive bias and that's basically what the claim of the review is that by providing these models with a larger inductive bias you may then speed up their learning because you've kind of told them what good functions are from the outset of course you see the problem again here well the problem the problem is of course you actually need to train this outer loop and the outer loop may actually take much much longer to train than a single and unbiased reinforcement learning thing but again what you could do is you could pre-train on a distribution of environments and then once a new environment shows up that is similar to this distribution you can then have the agent instantiated and learn much faster so again kind of this two-step process you could pre-train this outer loop and then the inner loop will be much faster than if you didn't have the outer loop all right so those are basically the kind of the kind of outlines they do here they then kind of do a connection to like the brain and so on and they relate this to biology and biological learning but ultimately their conclusion is here that whenever you want to do whenever you have slow rl or this is at least my conclusion from their article whenever you have slower you can transform it to fast rl rl but you have to outsource the slow rl slow something else slow x you have to outsource the slowness to some other part so if you want to do fast rl you have to outsource the slowness and what the slowness provides is an inductive bias which means yeah if you want to do like fast rl with episodic memory you have to learn the similarity function which again which might be slow in itself but then the rl will be fast and if you want to do this via kind of a an outer meta learner again this learning of the outer meta learner might be slow but then the inner learner will be fast in a connection to the kind of biological aspect of this they do make a connection which which i find is appropriate in that for example the human brain the reason we can learn things fast let's say in the physical world picking things up dropping things down or navigating our paths we're incredibly good at this navigating through like a weird terrain with rocks in the way is because of course our brains have been adapted to these kinds of environment over generations so there is an outer process like evolution which is this kind of outer loop and it instantiates the inner loop which are the humans that kind of live or 
die by their ability to to navigate better so the if if the outer loop does a good job of only keeping the humans alive that can navigate well then the individual human in here that that does this the individual human given a landscape with rocks will then be much faster at learning to navigate it all right so that was it for that i it's an interesting article to read especially the connections to the kind of biological aspects and with that have a nice day
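To make the episodic scheme described earlier concrete, a minimal sketch: store (embedded state, discounted return) pairs, then value a new state as a similarity-weighted average of stored returns. The embed function stands in for the pre-trained similarity network, and the soft weighting used here (an exponential kernel over distances) is one plausible choice, not the one from any specific paper:

```python
import numpy as np

class EpisodicValueMemory:
    def __init__(self, embed):
        self.embed = embed                 # pre-trained similarity DNN (assumed)
        self.keys, self.returns = [], []

    def store(self, state, ret):
        # Save the state along with the sum of discounted rewards that was
        # actually collected from it onwards.
        self.keys.append(self.embed(state))
        self.returns.append(ret)

    def value(self, state, temperature=1.0):
        # Soft nearest-neighbor lookup: every stored return is weighted by
        # how close its state is to the query state.
        q = self.embed(state)
        d = np.array([np.linalg.norm(q - k) for k in self.keys])
        w = np.exp(-d / temperature)
        w /= w.sum()
        return float(w @ np.array(self.returns))
```

This removes the need to train a value head, but inherits exactly the problems mentioned above: the memory can go stale, and everything hinges on the quality of the embedding.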
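And the meta-RL two-loop structure, as an explicitly schematic skeleton: sample_env, inner_rl_train and meta_update are placeholders for whatever concrete algorithms are used, and theta could be, as discussed, the inner agent's initial network weights:

```python
def meta_train(theta, sample_env, inner_rl_train, meta_update, outer_steps=1000):
    for _ in range(outer_steps):
        # Outer loop: slow learning over a distribution of environments
        # that share common structure (e.g. a family of mazes).
        env = sample_env()
        # Inner loop: fast, ordinary RL in that single environment,
        # initialized / guided by the outer parameters theta.
        agent, episode_return = inner_rl_train(env, init=theta)
        # Update theta on the whole-episode outcome, so that future inner
        # learners start out with a better inductive bias.
        theta = meta_update(theta, agent, episode_return)
    return theta
```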
[ { "start": 0, "end": 7, "text": " Hi there, today we're looking at reinforcement learning, fast and slow, by Matthew Botvinick," }, { "start": 7, "end": 17, "text": " Sam Ritter, Jane X. Wang, Zeb Kurt-Nielsen, Charles Spondel and Demis Hassabis." }, { "start": 17, "end": 24.52, "text": " These people are from Google DeepMind and this is a review of kind of a development" }, { "start": 24.52, "end": 32.16, "text": " in reinforcement learning, especially as it pertains to kind of how humans learn or what" }, { "start": 32.16, "end": 38.96, "text": " we can understand from the RL world that translates over to human learning." }, { "start": 38.96, "end": 48.44, "text": " Alright, so basically their argument here is that the first wave of deep RL, as you" }, { "start": 48.44, "end": 54.8, "text": " see here, is powerful but slow." }, { "start": 54.8, "end": 57.14, "text": " And they give examples of this." }, { "start": 57.14, "end": 60.66, "text": " So in box one, box one is this." }, { "start": 60.66, "end": 65.16, "text": " So they believe there's an image missing here." }, { "start": 65.16, "end": 68.28, "text": " This is Backgammon, TD Gammon." }, { "start": 68.28, "end": 78.03999999999999, "text": " This is the famous DeepMind Atari playing bot and this is kind of the 3D labyrinth playing" }, { "start": 78.04, "end": 79.04, "text": " bot." }, { "start": 79.04, "end": 83.88000000000001, "text": " So there's been a number of advances in RL and especially what they talk about is deep" }, { "start": 83.88000000000001, "end": 84.88000000000001, "text": " RL." }, { "start": 84.88000000000001, "end": 92.84, "text": " So when we talk about reinforcement learning, the easiest case is where you have an agent" }, { "start": 92.84, "end": 95.84, "text": " and an environment." }, { "start": 95.84, "end": 105.48, "text": " Alright, so the agent will observe some observation O from the environment and then based on that" }, { "start": 105.48, "end": 112.44, "text": " the agent will perform an action A. And then the environment will give back a reward and" }, { "start": 112.44, "end": 115.72, "text": " also a next observation." }, { "start": 115.72, "end": 124.52000000000001, "text": " So this is O0, O1, and then this is A0 and then here you give A1, AI." }, { "start": 124.52000000000001, "end": 128.32, "text": " So basically this goes back and forth and back and forth." }, { "start": 128.32, "end": 133.34, "text": " The agent performs an action, the environment gives a reward and the next observation." }, { "start": 133.34, "end": 137.94, "text": " So this could be for example here in the Atari world." }, { "start": 137.94, "end": 142.16, "text": " The observation is the screen itself." }, { "start": 142.16, "end": 148.72, "text": " And then the agent needs to perform an action which is an input of the joystick or pressing" }, { "start": 148.72, "end": 150.16, "text": " some button." }, { "start": 150.16, "end": 154.86, "text": " You can see the individual actions actually listed here." }, { "start": 154.86, "end": 161.2, "text": " And then the reward will be given to the agent via a number which I guess is the same number" }, { "start": 161.2, "end": 163.3, "text": " as up here." }, { "start": 163.3, "end": 168.04000000000002, "text": " So the task is to maximize the reward simply by..." }, { "start": 168.04000000000002, "end": 171.56, "text": " So the difference is you're not doing this in a supervised manner." 
}, { "start": 171.56, "end": 175.84, "text": " So you're not telling the agent what would be the correct action to do." }, { "start": 175.84, "end": 183.72000000000003, "text": " You simply tell it that whether what it did was good or bad by giving it a high or a low" }, { "start": 183.72000000000003, "end": 184.72000000000003, "text": " reward." }, { "start": 184.72000000000003, "end": 186.92000000000002, "text": " Right, so that's reinforcement learning." }, { "start": 186.92000000000002, "end": 189.44, "text": " So what is deep reinforcement learning?" }, { "start": 189.44, "end": 197.76, "text": " Deep reinforcement learning simply means the agent maps the observation to the action via" }, { "start": 197.76, "end": 199.8, "text": " a deep neural network." }, { "start": 199.8, "end": 203.52, "text": " So deep neural network." }, { "start": 203.52, "end": 209.28, "text": " That's deep reinforcement learning where the mapping or some part of the agent consists" }, { "start": 209.28, "end": 211.88, "text": " of a deep neural network." }, { "start": 211.88, "end": 219.36, "text": " You see for example here there is a deep neural network mapping the observation to the action." }, { "start": 219.36, "end": 225.20000000000002, "text": " As well as down here but it's a bit more complicated." }, { "start": 225.20000000000002, "end": 232.92000000000002, "text": " So they argue that the first wave of this was powerful but slow meaning kind of you" }, { "start": 232.92000000000002, "end": 235.48000000000002, "text": " need a lot of samples." }, { "start": 235.48000000000002, "end": 241.96, "text": " And they give two sources of why it's slow, why you need a lot of samples." }, { "start": 241.96, "end": 249.64000000000001, "text": " They say the two factors are incremental parameter adjustment and weak inductive bias." }, { "start": 249.64000000000001, "end": 256.28000000000003, "text": " So incremental parameter adjustment means basically that you have to update or train" }, { "start": 256.28000000000003, "end": 260.92, "text": " your neural network in a very small incremental way." }, { "start": 260.92, "end": 266.92, "text": " In order to basically, because you train it one by one, right?" }, { "start": 266.92, "end": 270.28000000000003, "text": " You train your neural network step by step." }, { "start": 270.28, "end": 275.84, "text": " You have to make small steps in order to not forget what came before." }, { "start": 275.84, "end": 281.44, "text": " You can't fundamentally readjust your neural network to every new batch of observations" }, { "start": 281.44, "end": 286.44, "text": " because then that's going to destroy all the information you've learned of the old one." }, { "start": 286.44, "end": 295, "text": " And then weak inductive bias here is basically an understanding of these neural networks." }, { "start": 295, "end": 300.23999999999995, "text": " They are general function approximators and they can approximate any function." }, { "start": 300.24, "end": 305.56, "text": " So if you just think in terms of kind of, I don't know, let's say polynomials and what" }, { "start": 305.56, "end": 306.92, "text": " kind of polynomials are there?" }, { "start": 306.92, "end": 314.04, "text": " This polynomial, this polynomial, this polynomial, this weird polynomial." 
}, { "start": 314.04, "end": 320, "text": " If I have a function that can approximate all of these then I have a weak inductive" }, { "start": 320, "end": 326.8, "text": " bias whereas if I kind of know, okay all my polynomials are the polynomial that I'm looking" }, { "start": 326.8, "end": 333.44, "text": " for ultimately, I'm very sure it's a third degree polynomial, right?" }, { "start": 333.44, "end": 337.2, "text": " So something like this or like this or like this." }, { "start": 337.2, "end": 346.2, "text": " So this is much less of a class of functions that I can fit but if I'm sure that the function" }, { "start": 346.2, "end": 351.76, "text": " that I'm trying to fit falls in this category then I'm much faster." }, { "start": 351.76, "end": 357.15999999999997, "text": " So this is then called a strong inductive bias is where I build into the model basically" }, { "start": 357.15999999999997, "end": 359.24, "text": " I tell it beforehand." }, { "start": 359.24, "end": 364.84, "text": " Here is a very restricted class of functions that you can fit." }, { "start": 364.84, "end": 367.92, "text": " Whereas in a weak inductive bias I won't tell it that." }, { "start": 367.92, "end": 372.8, "text": " I'll simply say, well model you could fit any function you want and I'm just giving" }, { "start": 372.8, "end": 374.7, "text": " you training samples." }, { "start": 374.7, "end": 381.68, "text": " So this is a classic example of a bias variance trade-off where there is a lot of" }, { "start": 381.68, "end": 388.16, "text": " variance in these models meaning you can fit also a lot of functions but here because you" }, { "start": 388.16, "end": 395.2, "text": " bias the model towards a certain set of functions it can lower this variance and in this case" }, { "start": 395.2, "end": 403.16, "text": " here it speeds up learning because you don't have as much variance that means you can basically" }, { "start": 403.16, "end": 405.92, "text": " go faster while learning." }, { "start": 405.92, "end": 417.28000000000003, "text": " Alright, so they propose two solutions to this problem of this kind of to mitigate these" }, { "start": 417.28000000000003, "end": 422.44, "text": " problems that make reinforcement learning faster or have made reinforcement learning" }, { "start": 422.44, "end": 423.70000000000005, "text": " faster." }, { "start": 423.70000000000005, "end": 426.98, "text": " This is a review remember." }, { "start": 426.98, "end": 433.8, "text": " So the first one is episodic deep reinforcement learning and this episodic deep reinforcement" }, { "start": 433.8, "end": 438.8, "text": " learning is specified here, fast learning through episodic memory." }, { "start": 438.8, "end": 446.58000000000004, "text": " So the suggestion in this field of research is to augment the neural network or the agent" }, { "start": 446.58000000000004, "end": 453.48, "text": " by a memory and the memory could look something like this." }, { "start": 453.48, "end": 462.44, "text": " So in a lot of these RL frameworks what a principal component of the agent is, so the" }, { "start": 462.44, "end": 470.16, "text": " agent will get an observation O and one of the things it has to do is estimate the value" }, { "start": 470.16, "end": 472.64, "text": " of this observation of this state." 
}, { "start": 472.64, "end": 480.88, "text": " So basically the agent is in some state let's say you play pong right and you are here down" }, { "start": 480.88, "end": 487.4, "text": " and the ball comes your way up there right there's a little arrow sorry so the ball" }, { "start": 487.4, "end": 495.15999999999997, "text": " flies away from you and you're all the way down which basically means draw this bigger." }, { "start": 495.15999999999997, "end": 502.96, "text": " So here you are down here and the ball is here flying up there." }, { "start": 502.96, "end": 510.28, "text": " So one task in these in these agents that occurs often is to estimate the value of this" }, { "start": 510.28, "end": 516.36, "text": " observation basically means how much reward am I expecting from this state going into" }, { "start": 516.36, "end": 517.6, "text": " the future." }, { "start": 517.6, "end": 524.04, "text": " In this case I probably will not expect a lot of reward since I can't move up fast enough" }, { "start": 524.04, "end": 526.76, "text": " right to catch the ball." }, { "start": 526.76, "end": 534.04, "text": " So this I would assign this state a pretty low value whereas if I were up here I would" }, { "start": 534.04, "end": 537.48, "text": " assign this state quite a high value." }, { "start": 537.48, "end": 545.16, "text": " So as we've already seen this is a deep neural network mapping we learn to assign value to" }, { "start": 545.16, "end": 553.52, "text": " different states and this is one of the parts that takes a long time and these methods they" }, { "start": 553.52, "end": 560.16, "text": " are the one that's depicted here replaces this value estimation by saying okay we have" }, { "start": 560.16, "end": 567.8399999999999, "text": " an observation we somehow need to estimate its value why don't we look for similar observation" }, { "start": 567.84, "end": 577.36, "text": " so we have some kind of memory right and we go with our observation and we retrieve O'1" }, { "start": 577.36, "end": 587, "text": " O'2 O'3 that are somehow similar right so in our in our pong example I'm down I'm up" }, { "start": 587, "end": 595.8000000000001, "text": " here ball moves here I could be looking now at at states where I was here or where I was" }, { "start": 595.8, "end": 601.4399999999999, "text": " here like really close or where the ball flew a bit differently but still in the same direction" }, { "start": 601.4399999999999, "end": 608.8399999999999, "text": " or down here right so all these states are kind of close to my state and I can I already" }, { "start": 608.8399999999999, "end": 614.26, "text": " have I already have played these since they're in my memory right so with every one of them" }, { "start": 614.26, "end": 621.0799999999999, "text": " I can also retrieve the reward that I got so I because I already know the problem in" }, { "start": 621.0799999999999, "end": 625.04, "text": " reinforcement learning is before you do an action you don't know what the reward will" }, { "start": 625.04, "end": 631.76, "text": " be but here I already know because I've played it I've already experienced it it's in the" }, { "start": 631.76, "end": 638.64, "text": " past so I know what reward I got right so and this is exactly what they say over here" }, { "start": 638.64, "end": 646.36, "text": " they basically say here we have time time runs this way we're in state one then in state" }, { "start": 646.36, "end": 654.48, "text": " two and so on and we perform actions and and get rewards and 
what we can do is we can save" }, { "start": 654.48, "end": 662.6, "text": " these states into this memory as along with their sum of discounted rewards that we collect" }, { "start": 662.6, "end": 672.48, "text": " from that state on and then later this is like a spongebob reference if we want to estimate" }, { "start": 672.48, "end": 680.76, "text": " the value of some new state right what we do is we retrieve all of these states from" }, { "start": 680.76, "end": 688.92, "text": " memory calculate a similarity score over them and with with we wait basically we add their" }, { "start": 688.92, "end": 694.64, "text": " rewards weighted by how similar they are to the state that we want to compute so this" }, { "start": 694.64, "end": 703.84, "text": " basically amounts to averaging over states respective by how close they are to the current" }, { "start": 703.84, "end": 708.88, "text": " state right this is kind of a soft a soft way of saying I only select the states which" }, { "start": 708.88, "end": 715.28, "text": " are close and that gives you a value estimate for the new states so basically this means" }, { "start": 715.28, "end": 721.2, "text": " you just got rid of having to train a value function and this will speed up your reinforcement" }, { "start": 721.2, "end": 726.4399999999999, "text": " learning quite a bit if you don't have to train that if you already have good value" }, { "start": 726.4399999999999, "end": 731.66, "text": " estimations from your previous experience that's great of course there are a number" }, { "start": 731.66, "end": 737.4, "text": " of problems associated with that namely if this memory here for example becomes stale" }, { "start": 737.4, "end": 745.4399999999999, "text": " it doesn't represent the future rewards quite as well there is also a question of which" }, { "start": 745.4399999999999, "end": 749.9599999999999, "text": " states do you keep in memory just the good ones or do they have to have a certain property" }, { "start": 749.9599999999999, "end": 756.6, "text": " do you have to have some diversity in there and of course the biggest problem here the" }, { "start": 756.6, "end": 764.1999999999999, "text": " biggest problem is how do you know when two states are similar or when they aren't it" }, { "start": 764.2, "end": 772.12, "text": " might be easy in a situation like pong where I only have like three variables like position" }, { "start": 772.12, "end": 777.72, "text": " y position of my of my paddle and position of the ball and velocity of the ball those" }, { "start": 777.72, "end": 785.5600000000001, "text": " are like I can specify those in five numbers but if it gets harder than that if it's like" }, { "start": 785.5600000000001, "end": 792.82, "text": " this labyrinth setting full 3d environment then we have no clue which states are similar" }, { "start": 792.82, "end": 799.72, "text": " to each other and what these what most end up doing is they will train you guessed it" }, { "start": 799.72, "end": 807.32, "text": " they will train a deep neural network to give you this similarity score between states right" }, { "start": 807.32, "end": 813.1400000000001, "text": " how they do it is is a different question but presumably you can train this network" }, { "start": 813.1400000000001, "end": 820.48, "text": " offline basically meaning you can pre train it you could pre train it and then the so" }, { "start": 820.48, "end": 833.4, "text": " we have two stages stage one pre train train similarity dnn right and then once we've done" 
}, { "start": 833.4, "end": 842.5600000000001, "text": " that second stage do reinforcement learning using this and the claim here is that by having" }, { "start": 842.5600000000001, "end": 849.8000000000001, "text": " this done this this second stage will become faster so it it doesn't really solve the problem" }, { "start": 849.8, "end": 854.3199999999999, "text": " of the sample efficiency but what it says is okay the actual reinforcement learning" }, { "start": 854.3199999999999, "end": 860.0799999999999, "text": " part will become faster because we've already done the work previously but basically by" }, { "start": 860.0799999999999, "end": 868.56, "text": " by including this similarity score sorry whatever dnn by including this in the language of the" }, { "start": 868.56, "end": 878.0799999999999, "text": " review here we have successfully introduced an inductive bias into the rl procedure because" }, { "start": 878.08, "end": 885.32, "text": " the rl procedure now can't just fit any function we say we tell it your value function is one" }, { "start": 885.32, "end": 890.8000000000001, "text": " that conforms to our notion of similarity that we've pre trained this restricts the" }, { "start": 890.8000000000001, "end": 898.72, "text": " rl algorithm and we give it an inductive bias and as long as our similarity score is useful" }, { "start": 898.72, "end": 904.4000000000001, "text": " for the rl algorithm it can speed up its learning because it doesn't have to learn the value" }, { "start": 904.4, "end": 912.4, "text": " function itself all right cool so the second part here is a bit more abstract it's called" }, { "start": 912.4, "end": 918.88, "text": " meta reinforcement learning speeding up deep rl by learning to learn these kind of learning" }, { "start": 918.88, "end": 925, "text": " to learn approaches are quite abundant in the literature people try this usually there's" }, { "start": 925, "end": 933.96, "text": " a i mean it's it's very large scale experiments basically you have i think i believe they" }, { "start": 933.96, "end": 941.2, "text": " show it somewhere here yeah you have like some um some outer loop where you would say" }, { "start": 941.2, "end": 947.6800000000001, "text": " that's this thing here what the outer loop does is in each loop it samples one environment" }, { "start": 947.6800000000001, "end": 953.0400000000001, "text": " so it samples one environment from a distribution of environments so now you not only have one" }, { "start": 953.0400000000001, "end": 959.96, "text": " environment but you say okay if i'm going to navigate this maze one trying to learn" }, { "start": 959.96, "end": 970.88, "text": " to navigate this maze i'm going actually to learn to learn to navigate many mazes right" }, { "start": 970.88, "end": 977.72, "text": " so it's not like you train one agent to learn you train one agent to navigate many mazes" }, { "start": 977.72, "end": 985.5600000000001, "text": " that would just be classic reinforcement learning but you want to train an algorithm that helps" }, { "start": 985.56, "end": 993.2399999999999, "text": " an agent learn as a particular maze and you do that by training your helper algorithm" }, { "start": 993.2399999999999, "end": 1000.1999999999999, "text": " on a variety of agent maze combinations so in each step you sample one environment like" }, { "start": 1000.1999999999999, "end": 1009, "text": " this this here and you then have an inner loop here you fully reinforcement learn train" }, { "start": 1009, "end": 1016.6, 
"text": " an agent in the classic sense on this environment right you see here action action observation" }, { "start": 1016.6, "end": 1025.84, "text": " reward right but the agent receives some kind of signal from outside so the outside algorithm" }, { "start": 1025.84, "end": 1034.04, "text": " will kind of tell the agent how to approach the problem right this could be that it initializes" }, { "start": 1034.04, "end": 1042.56, "text": " the the weights here you see that the outer loop trains the parameter weights which determine" }, { "start": 1042.56, "end": 1049.84, "text": " the inner learner that interacts with an environment during the duration of the episode for every" }, { "start": 1049.84, "end": 1054.96, "text": " cycle of the outer loop a new environment is sampled from a distribution of environments" }, { "start": 1054.96, "end": 1061, "text": " which share some common structure so basically the one would expect when you train this that" }, { "start": 1061, "end": 1068.64, "text": " these parameters here this could be for example it could be the initial weights of the network" }, { "start": 1068.64, "end": 1074.68, "text": " that the agent uses that this one possibility right this is very abstract here this meta" }, { "start": 1074.68, "end": 1081, "text": " reinforcement learning it could be literally anything that the outer model teaches the" }, { "start": 1081, "end": 1088.06, "text": " inner model or gives to the inner model right and you you train both of these with reinforcement" }, { "start": 1088.06, "end": 1092.44, "text": " learning so the inner you train with reinforcement learning on the individual rewards and then" }, { "start": 1092.44, "end": 1099.8, "text": " you can train the outer loop on the reward that the entire app agent environment episode" }, { "start": 1099.8, "end": 1108.96, "text": " achieved so the that's kind of a two loop situation and yeah so that's meta reinforcement" }, { "start": 1108.96, "end": 1116.72, "text": " learning again it's very unspecified what it does but as you can already see if you" }, { "start": 1116.72, "end": 1124.64, "text": " now have such an algorithm that kind of tells the the inner agent just as an example how" }, { "start": 1124.64, "end": 1131, "text": " to initialize its weights right how to initialize the weights of its deep neural network if" }, { "start": 1131, "end": 1138.24, "text": " you have that here then the agent you will technically bias it this is again an inductive" }, { "start": 1138.24, "end": 1149.44, "text": " bias so you will give it inductive bias towards what you think are good weights to generally" }, { "start": 1149.44, "end": 1158.1200000000001, "text": " learn these maze structured environments right since the outer loop you can update it way" }, { "start": 1158.1200000000001, "end": 1164.32, "text": " slower because it needs to learn over a longer time horizon and it needs to learn things" }, { "start": 1164.32, "end": 1170.24, "text": " for a different variety of environments but once you have good kind of initial weights" }, { "start": 1170.24, "end": 1177.08, "text": " for a particular environment then this agent in here can learn much faster given an individual" }, { "start": 1177.08, "end": 1182.12, "text": " environment so the agent you instantiated and then you give it good starting weights" }, { "start": 1182.12, "end": 1188.78, "text": " or some other kind of signal about the environment and then it can go much much faster at learning" }, { "start": 1188.78, "end": 1195.52, "text": " 
the environment thereby you have just sped up this inner agent by providing it an inductive" }, { "start": 1195.52, "end": 1207.56, "text": " bias and that's basically what the claim of the review is that by providing these models" }, { "start": 1207.56, "end": 1213.6399999999999, "text": " with a larger inductive bias you may then speed up their learning because you've kind" }, { "start": 1213.64, "end": 1220.6000000000001, "text": " of told them what good functions are from the outset of course you see the problem again" }, { "start": 1220.6000000000001, "end": 1228.2, "text": " here well the problem the problem is of course you actually need to train this outer loop" }, { "start": 1228.2, "end": 1236.0400000000002, "text": " and the outer loop may actually take much much longer to train than a single and unbiased" }, { "start": 1236.0400000000002, "end": 1242.1000000000001, "text": " reinforcement learning thing but again what you could do is you could pre-train on a distribution" }, { "start": 1242.1, "end": 1248.04, "text": " of environments and then once a new environment shows up that is similar to this distribution" }, { "start": 1248.04, "end": 1256.04, "text": " you can then have the agent instantiated and learn much faster so again kind of this two-step" }, { "start": 1256.04, "end": 1262.28, "text": " process you could pre-train this outer loop and then the inner loop will be much faster" }, { "start": 1262.28, "end": 1271.48, "text": " than if you didn't have the outer loop all right so those are basically the kind of the" }, { "start": 1271.48, "end": 1278.96, "text": " kind of outlines they do here they then kind of do a connection to like the brain and so" }, { "start": 1278.96, "end": 1290.3, "text": " on and they relate this to biology and biological learning but ultimately their conclusion is" }, { "start": 1290.3, "end": 1298.98, "text": " here that whenever you want to do whenever you have slow rl or this is at least my conclusion" }, { "start": 1298.98, "end": 1308.28, "text": " from their article whenever you have slower you can transform it to fast rl rl but you" }, { "start": 1308.28, "end": 1318.72, "text": " have to outsource the slow rl slow something else slow x you have to outsource the slowness" }, { "start": 1318.72, "end": 1324.52, "text": " to some other part so if you want to do fast rl you have to outsource the slowness and" }, { "start": 1324.52, "end": 1333.72, "text": " what the slowness provides is an inductive bias which means yeah if you want to do like" }, { "start": 1333.72, "end": 1339.26, "text": " fast rl with episodic memory you have to learn the similarity function which again which" }, { "start": 1339.26, "end": 1346.72, "text": " might be slow in itself but then the rl will be fast and if you want to do this via kind" }, { "start": 1346.72, "end": 1352.28, "text": " of a an outer meta learner again this learning of the outer meta learner might be slow but" }, { "start": 1352.28, "end": 1361.36, "text": " then the inner learner will be fast in a connection to the kind of biological aspect of this they" }, { "start": 1361.36, "end": 1368.48, "text": " do make a connection which which i find is appropriate in that for example the human" }, { "start": 1368.48, "end": 1374.32, "text": " brain the reason we can learn things fast let's say in the physical world picking things" }, { "start": 1374.32, "end": 1381.26, "text": " up dropping things down or navigating our paths we're incredibly good at this navigating" }, { "start": 1381.26, "end": 
1389.84, "text": " through like a weird terrain with rocks in the way is because of course our brains have" }, { "start": 1389.84, "end": 1396.48, "text": " been adapted to these kinds of environment over generations so there is an outer process" }, { "start": 1396.48, "end": 1403, "text": " like evolution which is this kind of outer loop and it instantiates the inner loop which" }, { "start": 1403, "end": 1413.8, "text": " are the humans that kind of live or die by their ability to to navigate better so the" }, { "start": 1413.8, "end": 1419.64, "text": " if if the outer loop does a good job of only keeping the humans alive that can navigate" }, { "start": 1419.64, "end": 1427.08, "text": " well then the individual human in here that that does this the individual human given" }, { "start": 1427.08, "end": 1434.48, "text": " a landscape with rocks will then be much faster at learning to navigate it all right so that" }, { "start": 1434.48, "end": 1440.32, "text": " was it for that i it's an interesting article to read especially the connections to the" }, { "start": 1440.32, "end": 1457.6, "text": " kind of biological aspects and with that have a nice day" } ]
HYEzHX6-fIA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Dynamics-Aware Unsupervised Discovery of Skills (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "deep rl", "control", "planning", "world model", "dads", "skills", "latent", "high level", "unsupervised", "tree search", "deep reinforcement learning", "mujoco", "ant", "google" ]
This RL framework can discover low-level skills all by itself without any reward. Even better, at test time it can compose its learned skills and reach a specified goal without any additional learning! Warning: Math-heavy! OUTLINE: 0:00 - Motivation 2:15 - High-Level Overview 3:20 - Model-Based vs Model-Free Reinforcement Learning 9:00 - Skills 12:10 - Mutual Information Objective 18:40 - Decomposition of the Objective 27:10 - Unsupervised Skill Discovery Algorithm 42:20 - Planning in Skill Space 48:10 - Conclusion Paper: https://arxiv.org/abs/1907.01657 Website: https://sites.google.com/view/dads-skill Code: https://github.com/google-research/dads Abstract: Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically, allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse-reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery. Authors: Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Take a look at this humanoid right here. It walks from one checkpoint to another checkpoint and then to the next checkpoint and so on, and that is its task: it gets reward for walking from checkpoint to checkpoint. Take a look at this ant. This is called the ant. It also walks from checkpoint to checkpoint. Now, we've seen a lot of reinforcement learning algorithms in this environment, it's called MuJoCo, where you basically teach these little things to walk around. So what's the impressive part here? The impressive part is that at training time this ant has never seen what a checkpoint is and has never gotten any reward for walking from one checkpoint to another. In fact, it has never gotten any reward for anything from the environment. It has discovered the skill of walking by itself. And then at test time there is no additional learning when it goes from checkpoint to checkpoint: it simply composes the skills it knows from its unsupervised discovery phase in order to go from checkpoint to checkpoint. So this paper proposes to learn these skills in a completely unsupervised way in a training phase, and you can see here the skills that the humanoid has learned. All you have to do at test time is compose these skills to reach a given goal. And these are the things that the ant has learned. Watch out, this is trippy. You can see it has learned various ways of walking here. And if you know anything about this environment, it's actually not that easy to make the ant walk at all, so the fact that the discovered skills are various ways of walking is already pretty impressive. And this cheetah, of course, has also learned to walk backward and forward, to jump around, and so on. So we're going to dive into this paper. It's called Dynamics-Aware Unsupervised Discovery of Skills, by Archit Sharma and other people at Google Brain, and it was published at ICLR 2020. On a high level, as I already said, it proposes to learn skills unsupervised and then to compose these skills with a model-based planning method at test time to reach a given goal, without any additional training on the reward that you are given at test time. As always, if you like videos like this, you're very welcome to subscribe and share it with everyone you know. Okay, let's dive in. They say: conventionally, model-based reinforcement learning aims to learn a global model for the dynamics of the environment. That's not exactly the whole story, so let's go through model-based and model-free reinforcement learning. Model-based reinforcement learning basically means that you have a model of the environment. An example of this is tic-tac-toe. In tic-tac-toe I have nine actions at my disposal, and if I take an action, say action zero, as the X player, and I number the cells accordingly, then that action will result in this particular state of the world. So I know exactly how the world will look when I take a given action, and what that allows me to do is to actually plan. I can plan ahead: what would happen if I took action zero? I can do this in my mind. And then, what would happen if I took action one? Okay, that's going to happen.
And I can do this with many actions, and I can continue this in my mind and basically roll out entire games, and then only execute the action that has led to the best result at the end. So this is model-based reinforcement learning: you have a model of the environment, you know what's going to happen when you perform given actions. You can also combine this with machine learning, like AlphaGo or AlphaZero do. They have models of the games they're playing, they know what's going to happen, but it's intractable to go down the entire tree and plan out everything, so they combine the model with machine learning. That doesn't change the fact that it's model-based. In opposition to that, in model-free reinforcement learning, you are this agent, there's the environment, and you simply take an action, say action zero, and the environment just gives you back a reward and the next observation. You have basically no clue how the environment will change when you do something. All that classic model-free agents do is, basically, carry a neural network somewhere inside them; you put the observation in, and out comes an action. You can do this in various ways, Q-learning, policy gradient, actor-critic and so on, but ultimately it's simply mapping the current observation, and maybe the last few, to the best action to take, without explicitly modeling what happens in the environment. Now, when this paper talks about model-based reinforcement learning, what is meant is the following: if you are in the model-free situation, you could say, well, since these model-based RL techniques tend to work better, I could try to learn, inside the agent, a model of the environment, E prime, that is, learn what happens in my environment when I take a certain action, and then use that model to do the planning I described above. And that is exactly what they go for: let's learn a model of the environment, not an exact model but a learned one, and then use it to plan. Now, this usually has a bunch of things going against it. Namely, if the learned model is bad, then planning in the model will often accumulate and even exaggerate the errors that are in the model. So it's sometimes very hard to learn a model of the world and then use it for planning. I've recently covered a paper where Curious AI uses denoising autoencoders to regularize exactly such a planning procedure to counter this. And the paper right here is a different approach to combining planning with a learned model. Okay, that was about the first sentence. They say the aim is to learn a global model for the dynamics of the environment, and that a good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks, which is true, right? If I have a model of the environment, I can just use it to plan. I wouldn't even have to do anything fancy anymore. If I have a model of how my tic-tac-toe works, I can just plan my way to success, AlphaZero-style, or, if the state tree is small enough, I can just use a plain planner and don't have to learn anything at all,
provided I have a good model. They say, however, that learning an accurate model for complex dynamical systems is difficult, and even then the model might not generalize well outside the distribution of states on which it was trained. So this is another problem: if you learn a model, it's only going to be valid in a certain range. And they say: in this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. So what they attempt to do is learn, in an unsupervised fashion, a so-called set of skills. The set of skills could be something like: walk forward, walk backward, stay put, jump. They attempt to learn things like this in a model-free way; the agent is simply asked to come up with these skills. Then, in stage two, a planner can use these skills to compose a plan. The special thing about this planner is that it doesn't operate in the space of small-scale actions. It operates in the space of these skills. So a plan here would be: walk forward, then walk back. If we have a good enough model of the environment, it will tell us what happens if I walk forward in this situation. Okay, so I can walk forward, and after that I could walk backward; what's going to happen then? And if I have a good model of the environment, where the actions are now these macro-actions, these skills, then I can use planning to reach my goal. So the question is: how do we come up with useful skills that the planner can then use? They need to be somewhat diverse, but also, and here is the crucial part and the contribution of this paper, they ask: how can we discover skills whose outcomes are easy to predict? This is how they counteract the problem that a bad environment model can't be used for planning and will just make things worse. What they say is: the skills that we learn, we will learn in a way that makes them easy to predict, so it is easy to predict what will happen after I perform them. At the same time, they must be diverse. If you stay put, it's pretty easy to predict what's going to happen, namely nothing, but we're going to see in the exact objective that the skills also have to be diverse, so only one of them can be staying put and the others have to do something else. And if you learn the skills such that they're easily predictable, your environment model will make fewer errors, and then you can use it for planning. Okay, let's dive in. They do open-source their code, and they have more of these videos if you want to check them out; I'll link everything in the description. Let's get into the meat right here. They maximize a mutual information, and we're going to see between what and what. If you don't know what the mutual information is: the mutual information between X and Y is a quantity, namely the entropy of X minus the entropy of X conditioned on Y, or, decomposed the other way around, the entropy of Y minus the entropy of Y conditioned on X. They apply this in equation two right here.
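Written out, that definition is the standard identity:

\[
I(X; Y) \;=\; H(X) - H(X \mid Y) \;=\; H(Y) - H(Y \mid X)
\]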
Okay, so what they want is the following: they want to maximize the mutual information between the skill, and we're going to see what that is, and the next state. So what is a skill? A skill is one entry in our table right here; these entries are skills, indexed by z. Now, here it looks like they're discrete, but in this paper z will actually be a continuous vector. It's easier, though, if you imagine a discrete set of skills to be learned, like z one, z two, and so on. So they want to maximize the mutual information between the skill that's currently in action (at every point the agent chooses a skill, saying, okay, now I'm going to walk forward) and the next state, given the current state. And this means two things, which you can see by decomposing the mutual information in two different ways. One way: if I know both the state I'm in and the next state I ended up in, what can I say in hindsight about the skill z that I couldn't say from the starting state alone? Let's say the starting state is the person right here, and the end state is the person a little more over there, having walked forward. If I only show you the starting state, what can you tell me about which skill follows? Basically nothing; it could be any skill: walk forward, walk back, stay put. But if I also show you the next state, you can pretty confidently say: ah, I know what you did, you walked forward. In this situation we have a high mutual information between z and s prime. If you decompose it the other way, which is equivalent but a different way of thinking about it, it means: when I show you these two things, how much more can you tell me about the next state than if I only showed you the first? In this formulation I say: the human is here, looking to the right; what can you tell me about the next state? Not much, it could be anything. But if I then tell you the skill is walking forward, you can say: ah, now I get it, the next state is probably going to look like this. This also corresponds to a high mutual information. So you see that maximizing this mutual information is a good objective, because consider what happens if the mutual information is low: I could predict the next state just as well from the current state alone, and it makes no difference whether you give me the skill or not. That is only the case if my skills are basically either all the same or all pretty random and useless.
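In symbols, conditioning on the current state s as the paper does, the objective is the mutual information between skill and next state, and the two intuitions above are exactly its two decompositions:

\[
I(s'; z \mid s) \;=\; H(z \mid s) - H(z \mid s, s') \;=\; H(s' \mid s) - H(s' \mid s, z)
\]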
So if all my skills amount to walking backwards, you don't have to tell me which skill was executed; I already know the next state is going to look like this. So you can see that the objective of maximizing the mutual information between the skill and the next state results in skills that are, first of all, diverse, and second of all, easy to predict. To see this, we only have to imagine what would happen if the skills weren't diverse or weren't easy to predict: you get exactly the situation where knowing the skill doesn't help you predict the next state, because the outcome is either obvious or random. Okay, so we agree that it makes sense to maximize the mutual information. They then split this into two objectives by lower-bounding the mutual information with a variational approximation. This is the standard variational-approximation machinery; if you want more of it, read up on variational autoencoders and things like this. The two steps are: tighten the variational lower bound, and maximize the approximate lower bound. So you have the mutual information, and you can lower-bound it by this quantity. Since you can prove it is a lower bound, maximizing the thing on the right will also push up the thing on the left. Imagine I is up here and the bound E is down here, lower than I. If I maximize E, I haven't directly done anything to I, but if I push E even further up, then, since E is a lower bound, I know my I must be at least that high now. So maximizing a lower bound on a quantity will ultimately increase the quantity. How efficiently it does so depends on how tight the bound is: if the bound is very tight, with I not much above E, then maximizing E results in a faster maximization of I. So you can do two things: maximize the quantity that is the lower bound, or tighten the bound. And we can see here that the tightness of the bound depends on this quantity right here, which is the KL divergence between this and this. Let's go through it at a high level; if you've never done this variational-approximation sort of math, this might be informative. The expression right here just pops out of the definition of the mutual information. It's a difference of entropies, and entropies are expectations of log quantities, so if you have log a minus log b, you can also write it as the log of the fraction a over b; that's just a property of the log. So you can write the mutual information as an expectation over a log ratio, as in this thing right here.
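Concretely, that expectation-over-log-ratio form is:

\[
I(s'; z \mid s) \;=\; \mathbb{E}_{z,\, s,\, s'}\!\left[\log \frac{p(s' \mid s, z)}{p(s' \mid s)}\right]
\]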
And this says: you look at the ratio between this quantity, the probability of the next state given the current state and the skill you're executing, and the probability of the next state given only the current state, in expectation over all skills, current states and next states. Now, this p here is basically the environment: if you are in a state and you perform a skill, what's the next state? That's the true environment, which we don't know; we don't know what the environment is going to do. But we would like to learn a model of it, and this model is q with parameters phi, a neural network that approximates the environment. In this probabilistic framework it is a learned distribution that approximates the distribution p. So we approximate p by q. But that substitution is not an equality, because q is just an approximation, so the equality must be compensated for by an extra term; you can go through the exact definitions and see why this holds. Basically, the mutual information equals the expectation with p replaced by your approximation q, plus a correction for exactly the amount by which the approximation differs from the quantity it approximates. That correction is this KL divergence right here. The KL divergence measures how different two distributions are; it's sort of a distance between them (not exactly a distance). It says: here's the real world, and here is your estimate of the real world; how much do they disagree? Add that quantity, and you can replace the exact world distribution by your approximate distribution and still be equal to the mutual information. And now the trick: the KL divergence is always non-negative, so if I leave it away, what remains is certainly a lower bound on the quantity. All right, so, two tasks. First, tighten the variational bound, which means make the KL term small: make your approximate world model as close as possible to the real world. How do we do this? Neural network: you input trajectories (I was in this state, I performed this skill, and I ended up in that state) and the network simply matches what happens in the real world; it learns the transition function, basically. That's the tightening of the variational bound. The second step is to maximize the approximate lower bound.
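Putting the pieces above together, the bound reads:

\[
I(s'; z \mid s)
\;=\; \mathbb{E}\!\left[\log \frac{q_\phi(s' \mid s, z)}{p(s' \mid s)}\right]
+ \mathbb{E}_{s,\, z}\!\left[ D_{\mathrm{KL}}\!\left( p(s' \mid s, z) \,\middle\|\, q_\phi(s' \mid s, z) \right) \right]
\;\geq\; \mathbb{E}\!\left[\log \frac{q_\phi(s' \mid s, z)}{p(s' \mid s)}\right]
\]

since the KL divergence is never negative: dropping it leaves the lower bound, and making it small, by fitting q to the real transitions, tightens the bound.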
Now, this second step is the part that says: given that I already have a better world model, can I improve my skills such that they become easier to predict and more diverse? Can I improve them such that this mutual information gets as high as possible? So this is an alternating procedure, and you can see it in this, honestly, very confusing diagram. What do you do in this algorithm? First of all, in each episode you select a skill at random. And as I said, these skills are not predefined; no one tells the agent that a given skill means walking forward. In a discrete case you would simply have, say, five skill slots, and the only requirement is that they're consistent over time: skill one is always going to be roughly the same thing, but the agent basically decides what skill one is, as long as the skills end up predictable and diverse. So you sample one of the skills, like skill zero, and then you do two things. First, you learn the skill dynamics, which means you learn your approximate model of the world. How do you do that? The agent takes in the skill z and the current state of the world, and it outputs an action; this is the model-free part. The agent somehow has to come up with: ah, skill zero, that's walking forward, and in this situation walking forward means I have to lift my leg. So your agent performs an action based on the skill and the current state, and the environment gives you the next state. From those things you can now learn your world model: I was in state s, I performed action a, an action based on skill z, and I ended up in state s prime. With such triples I can do supervised learning of a world model. Here they do probabilistic learning, and we'll see in a second how that works, but ultimately they approximate the world with their model. That's the first part. Next, they use that world model to determine a reward for the agent for the action it took. The reward says: agent, you took action a; your reward is high if the outcome was very predictable and if the skill is also diverse. So the agent has to make this quantity high: we want the outcomes of the skills to be predictable and the skills themselves to be diverse. I'm sorry, it's very hard to keep all of this straight. Ultimately, two steps: learn the world model from the experience you've generated, and then train the agent to maximize the quantity we've seen before, by giving it a reward that is proportional to the mutual information.
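To make the alternation concrete, here is a minimal sketch of the loop in Python. Everything in it is a placeholder of my own (the `policy` and `dynamics` objects, their method names, the uniform skill prior), not the interface of the official DADS code, and the actual RL update (soft actor-critic in the paper) is abstracted into a single call; the intrinsic reward itself is sketched a bit further below.

```python
import numpy as np

def dads_training_loop(env, policy, dynamics, num_episodes, episode_len, skill_dim):
    """Alternating DADS-style loop: collect skill-conditioned rollouts,
    fit the skill-dynamics model q(s'|s,z) on them, then reinforce the
    policy with the intrinsic (mutual-information) reward."""
    for _ in range(num_episodes):
        z = np.random.uniform(-1.0, 1.0, size=skill_dim)  # sample a skill at random
        s = env.reset()
        transitions = []
        for _ in range(episode_len):
            a = policy.act(s, z)                 # model-free part: pi(a | s, z)
            s_next = env.step(a)                 # placeholder env returns next state
            transitions.append((s, z, a, s_next))
            s = s_next

        # Step 1: tighten the bound, i.e. supervised fit of q_phi(s' | s, z)
        dynamics.fit([(s, z, s_next) for (s, z, _, s_next) in transitions])

        # Step 2: maximize the approximate bound, i.e. RL on the intrinsic reward
        rewards = [intrinsic_reward(dynamics, s, z, s_next)
                   for (s, z, _, s_next) in transitions]
        policy.update(transitions, rewards)      # e.g. one SAC update
```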
And we've already seen that we can approximate the mutual information by this quantity here. So: learn the world model, and make the agent push up the mutual information. Two steps. Learning the world model is very classic. You need to minimize the KL divergence, so you need its gradient with respect to the parameters of your world model. You can write the KL divergence out, and since log of a over b is log a minus log b, and since the real world doesn't depend on the parameters of your model, the gradient simply becomes the gradient of the log probability of your neural network. And this can be optimized straightforwardly: it's a neural network, optimize it with gradient descent; these are the inputs, this is the output. These are all probability distributions, but ultimately it's pretty straightforward, and it corresponds to maximizing the likelihood of the samples from p under q. Now, the second step: maximize the approximate lower bound. They say that after fitting q, that is, after improving the world model, we can optimize pi. Pi is the agent that actually takes the actions based on the skill: it's given a skill, it needs to perform an action, and it needs to maximize this quantity, as we've seen, the mutual information between the skill and the next state. They note this is a reinforcement-learning-style optimization with a reward function given by this quantity. However, look at what they need: one part of it is just feeding the skill and the state into the world model and seeing what comes out, so that part I can compute. But the other part, the probability of the next state given only the current state, I can't compute, because that is what happens in the real world when I'm in state s, in expectation over all skills. This I don't know, and the log of it is intractable. So they approximate the reward function for pi by this expression right here. First, let's look at what it is. The reward of taking action a, where action a is based on skill z (the skill was fed into the agent and the agent came up with the action: you want me to walk forward in this situation, okay, I'm going to lift my leg), given this skill and the current state, is high if this numerator is high: the probability, under my world model q, of the state s prime I actually ended up in, given that I was in state s and executing skill z. This means the network can predict with high accuracy what's going to happen if you are in this state and perform this skill, which is one of the things we want. Now, what is it divided by? It's divided by this sum, where the z i are other skills. It is almost the same quantity: it asks how well the same network can predict the next state if you had been given a different skill.
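Before unpacking the intuition further, here is a minimal sketch of that reward computation, assuming a continuous skill space with a uniform prior and a `dynamics` model exposing a `log_prob` method; both assumptions are mine for illustration, not the exact interface of the paper's code.

```python
import numpy as np

def intrinsic_reward(dynamics, s, z, s_next, prior_samples=64):
    """DADS-style intrinsic reward (up to a constant offset):
    log q(s'|s,z) minus the log of the average probability of s' under
    skills drawn from the prior. High when the outcome is easy to predict
    given z (numerator) but hard to predict under other skills (denominator),
    i.e. predictable and diverse."""
    log_num = dynamics.log_prob(s_next, s, z)                        # log q(s'|s,z)
    zs = np.random.uniform(-1.0, 1.0, size=(prior_samples, len(z)))  # z_i ~ p(z)
    log_probs = np.array([dynamics.log_prob(s_next, s, zi) for zi in zs])
    # log of the mean of q(s'|s,z_i), computed stably in log space
    log_denom = np.logaddexp.reduce(log_probs) - np.log(prior_samples)
    return log_num - log_denom
```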
To unpack that: if I'm here, and I ended up here, how well can you predict this if I tell you that I walked forward? And how well can you predict it if I told you I walked backward, if I told you I jumped, and so on? You aggregate over all the other skills you could have performed, and each time you ask the network: how likely is it that I end up in the state I actually ended up in? So what does it mean if this entire sum is high? It means the skill doesn't really give you much information: the network is very accurate in predicting the next state no matter which skill you selected; the skill doesn't really matter. And this is what we don't want; we want the skills to be diverse. So the top part rewards skills whose outcome is easy to predict, and we divide by the bottom part, which pushes the skills to be diverse, because if they're not diverse, it doesn't matter which one you perform, the quantity at the bottom will be high, and since we divide by it we want it to be low. The reward is then the log of this fraction. This makes sense intuitively, but they also try to motivate it mathematically, and for that they need to approximate the quantity in the denominator: the sum is a sample-based approximation of the transition from s to s prime under the distribution of z. But what you actually want is the transition from s to s prime not under your approximation, but in the real world. They formulate this by decomposing it as an integral over this conditional, bringing in the z variable, and then they say: this we can approximately replace by the model, since the world model is an approximation to the real world, we can sort of swap it in. And then comes the part that doesn't convince me. They say: this p of z given s, we can just replace by p of z. It's very tricky to see what these quantities are; ultimately it ends up being the expression right here, but let's think about this replacement for a second. The bottom quantity, p of z, is simply the distribution over your skills; depending on how you sample them, this could be a uniform distribution, and that's fine. But what's the top quantity? We can use Bayes' formula to reformulate it: p of z given s is p of s given z, times p of z, divided by p of s. So this quantity depends on multiple things. There's the prior again, but there's also p of s: the general distribution of states when your agent acts in the world, which we don't know. And there's also p of s given z: in the true world, what's the probability of a state given that you were acting under skill z?
That is also something we don't know, because we don't know the world; we don't have the true world model. So you run into the same problem again and again: you're trying to approximate this. They want to make it mathematically rigorous, and in the appendix they go through various ways they could solve it, but ultimately they just say: well, this is approximately the same. So this p of z given s basically asks: if you're in a certain state, what's the distribution of skills that brought you to this state? And they approximate that by the prior distribution over skills, basically disregarding the state. This seems overly shaky to me. As I said, the entire paper makes sense, but I feel it's trying to be very mathematical and then runs into a point where it can't be, and then they just replace things, and it sort of breaks down. You can only be that mathematical up to some degree; it doesn't quite fit. But okay. So this is how you discover the skills: you maximize these quantities alternately, you learn the world model, and you improve your skills by making them diverse and easily predictable. So how do you then plan using these skills? This is the second part of the paper, and it's just as involved as the first. They say: given the learned skills, which are policies over actions conditioned on z, you now know how to walk forward, walk back and so on, and now you're placed in a world and given a checkpoint: walk there. And you want to do this using planning; you don't want to learn anymore, you simply want to plan. What do you do? You do something like model-predictive control, but not over actions: over your learned skills. So you have this planner doing MPC, and the planner will, in its head, roll out a number of different plans. It explores a bunch of different skill sequences z, rolls them out with the model it has learned, and observes what the reward would be in each case. Now, they say here they access the environment reward, though it can also be estimated, and this is another point I find weak: they now assume they have the true reward function. They don't have the world model, but they assume they can always ask for the true reward, which would probably not be the case if you really only had access to the true world. It could be the case, though; the reward could be something like: if you're over there, you get high reward, but you don't exactly know how to get over there. In any case, they roll out a bunch of trajectories in their head, plan forward, see what would happen if they do this or that, then choose the best of these imagined rollouts and execute it in the real world. So the planner says: I'm going to choose the skill walk forward, and the agent is now tasked with walking forward.
And it's going to do that in the real world for a certain number of steps, like ten steps of walking forward. After those ten steps, you go back and say: I'm in this new situation right here, what should I do? And again the planner plans: if you first walk forward and then walk back, where are you going to be, and so on. The planner always plans to go from where you are to the checkpoint using a composition of the skills you have learned. So the planner might find: if I first walk forward, then walk back a bit, and so on, I'm going to reach the goal. Now, agent, please execute the first part: walk forward. The agent executes it, and maybe it won't do as well; it will maybe end up here. Then it says: I'm here now, please plan again. So the planner plans again: okay, I can still kind of walk back, I'll be here, but then I have to do something else, so now walk back. This is what happens, but it happens in a particular way. Since everything is continuous, we keep normal distributions over all our future steps. We don't say: I go here, and then I go here. We say: I approximately go here, and after that I approximately go there. And you adjust these distributions such that the peak of each normal distribution is highest where you think you will get the most reward. If you follow this trajectory you get a very high reward; if I follow a trajectory that goes over there instead, I won't. If it turns out in your imagination that a trajectory does get a high reward, you change the distribution so that its peak is there. And of course, the tighter the peak, the more sure you are: looking out into the world, you want the closest steps to have very peaky distributions, and further out they can be broader. That's how you plan ahead. You take the step where the peak is highest, then you imagine forward again, refine the distributions over the future, and take the next step towards the new highest peak, and so on. This is simply planning in a continuous domain. It's pretty analogous to how you would plan in AlphaGo, or in tic-tac-toe if you had a planner, but since everything's continuous, it's just so much harder. So they always update these distributions towards the skills that gave a high reward in imagination, compared to the rewards of the other plans. Okay, that was a long way until we got here, but to recap: first, in an unsupervised fashion, they learn low-level skills such that the skills are easily predictable by their own world model and diverse; then, in the second step, a planner composes these skills to make the agent do something. The agent never has to learn how to go from checkpoint to checkpoint, because the planner can just compose the low-level skills. They have experiments right here, and we won't go through them in detail because this video is already very, very long.
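Before the experiments, here is a rough sketch of that skill-space planning loop. I implement the refinement cross-entropy-method style (keep a Gaussian per future skill, sample candidates, refit to the best ones), which is one standard way to realize the procedure described above but not necessarily the exact update rule of the paper; all object and function names are illustrative placeholders.

```python
import numpy as np

def plan_skills(dynamics, reward_fn, s0, horizon=4, refinements=10,
                num_candidates=128, elite_frac=0.1, skill_dim=2):
    """MPC in skill space: keep a Gaussian over each future skill,
    imagine rollouts with the learned skill dynamics, and shift the
    Gaussians toward the skill sequences with the highest imagined reward."""
    mu = np.zeros((horizon, skill_dim))     # plan means, one per future step
    sigma = np.ones((horizon, skill_dim))   # start broad, sharpen over refinements
    n_elite = max(1, int(elite_frac * num_candidates))
    for _ in range(refinements):
        noise = np.random.randn(num_candidates, horizon, skill_dim)
        plans = mu + sigma * noise          # candidate skill sequences
        returns = np.zeros(num_candidates)
        for i, plan in enumerate(plans):    # imagined rollout per candidate
            s = s0
            for z in plan:
                s_next = dynamics.predict(s, z)   # e.g. the mean of q(s'|s,z)
                returns[i] += reward_fn(s, s_next)
                s = s_next
        elite = plans[np.argsort(returns)[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mu[0]  # execute the first skill for a few steps, then re-plan

# Usage sketch: z = plan_skills(dynamics, reward_fn, s); run pi(a|s,z) for k steps; repeat.
```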
What they do show is that the learned skills actually end up being diverse and predictable, have high variance, and so on, although they have to give the method certain priors to make it work in a realistic setting. You can see the results in these videos and in the graphs, and I invite you to check out the paper if you're still here. Thanks for being here. This was one of the more complicated and mathy papers we've looked at, but I still think it's fun, and I still think the outcome is pretty impressive: how you can use math to derive these very intuitive objectives to learn. It's also pretty cool. All right, that was it for me. Bye bye.
[ { "start": 0, "end": 8.44, "text": " Hi there! Take a look at this humanoid right here. It walks from one checkpoint to another" }, { "start": 8.44, "end": 13.76, "text": " checkpoint and then to the next checkpoint and so on. And that is its task. It gets reward" }, { "start": 13.76, "end": 20, "text": " from walking from checkpoint to checkpoint. Take a look at this ant. This is called the" }, { "start": 20, "end": 26.42, "text": " ant. It also walks from checkpoint to checkpoint. Now we've seen a lot of reinforcement learning" }, { "start": 26.42, "end": 33.160000000000004, "text": " algorithms in this environment. It's called Mukojo, where you basically teach these little" }, { "start": 33.160000000000004, "end": 39.480000000000004, "text": " things to walk around. So what's the impressive part here? The impressive part is that at" }, { "start": 39.480000000000004, "end": 47.2, "text": " training time, this ant has never ever seen what a checkpoint is and has never gotten" }, { "start": 47.2, "end": 52.32000000000001, "text": " any reward from walking from checkpoint to another checkpoint. Actually, it has never" }, { "start": 52.32, "end": 58.52, "text": " gotten any reward for anything that is given from the environment. It has discovered the" }, { "start": 58.52, "end": 65.96000000000001, "text": " skill of walking by itself. And then at test time, there is no additional learning when" }, { "start": 65.96000000000001, "end": 71.32, "text": " it goes from checkpoint to checkpoint. It simply composes the skills that it knows from" }, { "start": 71.32, "end": 80.2, "text": " its unsupervised discovery phase in order to go from checkpoint to checkpoint. So here" }, { "start": 80.2, "end": 86.44, "text": " you can see this paper basically proposes to learn these skills in a completely unsupervised" }, { "start": 86.44, "end": 92.28, "text": " way at the beginning, sort of, so in the training phase, it learns these skills, you can see" }, { "start": 92.28, "end": 97.64, "text": " these skills that the humanoid has learned. And then all you have to do at test time is" }, { "start": 97.64, "end": 103.46000000000001, "text": " to compose these skills to reach a given goal. And these are the things that the ant has" }, { "start": 103.46000000000001, "end": 109.2, "text": " learned. Watch out, this is trippy. You can see it has learned various walks, various" }, { "start": 109.2, "end": 114.28, "text": " ways of walking here. And if you know anything about this environment, it's actually not" }, { "start": 114.28, "end": 122.82000000000001, "text": " that easy to make the ant walk by itself. So the discovery here that these skills that" }, { "start": 122.82000000000001, "end": 128.68, "text": " are discovered are various ways of walking is actually already pretty impressive. And" }, { "start": 128.68, "end": 134, "text": " the last thing here, this cheetah, of course, also has to has learned to walk back, forward," }, { "start": 134, "end": 139.48, "text": " going to jump around, and so on. So we're going to dive into this paper. It's called" }, { "start": 139.48, "end": 145.76, "text": " Dynamics Aware, Unsupervised Discovery of Skills by Archie Sharma and other people of" }, { "start": 145.76, "end": 152.52, "text": " Google Brain. So this was published at iClear 2020. 
And on a high level, I already said" }, { "start": 152.52, "end": 159.96, "text": " it's basically proposing to learn unsupervised skills, and then to compose these skills in" }, { "start": 159.96, "end": 167.52, "text": " a model based planning method at test time to reach a given goal without additional training," }, { "start": 167.52, "end": 174.64000000000001, "text": " without additional training on the reward that you give at test time. As always, if" }, { "start": 174.64000000000001, "end": 180.8, "text": " you like videos like this, you're very welcome to subscribe and share it to everyone you" }, { "start": 180.8, "end": 189.88, "text": " know. Yeah, okay, that's live in. So they say, conventionally, model based reinforcement" }, { "start": 189.88, "end": 196.32, "text": " learning aims to learn a global model for the dynamics of the environment, which is" }, { "start": 196.32, "end": 202.12, "text": " not exactly true, right. So we have we dive into model based and model free reinforcement" }, { "start": 202.12, "end": 208.96, "text": " learning. Model based reinforcement learning basically means that you have a model of the" }, { "start": 208.96, "end": 217.4, "text": " environment. A example for this is, let's say, tic tac toe. So in tic tac toe, I know," }, { "start": 217.4, "end": 222.96, "text": " like I have nine actions at my disposal. And if I take action, let's say I take action" }, { "start": 222.96, "end": 230.84, "text": " zero, which is to make a let's say I'm the X player, so I take action zero. And if I" }, { "start": 230.84, "end": 236.28, "text": " you know, number my things correctly, then that will result in this state of the world." }, { "start": 236.28, "end": 242.04000000000002, "text": " Okay, so I know exactly how the world will look when I take a given action. And what" }, { "start": 242.04000000000002, "end": 246.88, "text": " that allows me to do is that allows me to actually plan. So I can now plan ahead, I" }, { "start": 246.88, "end": 252.79999999999998, "text": " can say what would happen if I took action zero, so I can do this in my mind. And then" }, { "start": 252.79999999999998, "end": 258.08, "text": " what would happen if I took action one, I can be like, okay, that's going to happen." }, { "start": 258.08, "end": 262.96, "text": " And I can do this with many things. And I can, in my mind, continue this and basically" }, { "start": 262.96, "end": 271.2, "text": " roll out the entire games or and then only do the given action that has led to the best" }, { "start": 271.2, "end": 275.36, "text": " result at the end, right. So this is model model based reinforcement learning means you" }, { "start": 275.36, "end": 280.7, "text": " have a model of the environment, you know, what's going to happen when you perform given" }, { "start": 280.7, "end": 287.76, "text": " actions. And you can also combine this with machine learning, like, you know, alpha, alpha" }, { "start": 287.76, "end": 292.96000000000004, "text": " go alpha zero, or so they have models of the games they're playing, they know what's going" }, { "start": 292.96000000000004, "end": 299.68, "text": " to happen. But it's very intractable to basically go down this entire tree and plan out everything." }, { "start": 299.68, "end": 306.84000000000003, "text": " So they combine it with machine learning. It doesn't change that it's model based. 
Now" }, { "start": 306.84000000000003, "end": 312.8, "text": " in, in opposition to that in model three, in in reinforcement learning, what you do," }, { "start": 312.8, "end": 317.84000000000003, "text": " you are this agent, there's the environment, and you simply have to do an action, I do" }, { "start": 317.84000000000003, "end": 323.52, "text": " action zero, and the environment just gives you back a reward and the next observation." }, { "start": 323.52, "end": 331.44, "text": " And it you have basically no clue how the environment will change. If you do something," }, { "start": 331.44, "end": 337.59999999999997, "text": " all these and all these, these agents do or the classic model free agents do is basically" }, { "start": 337.59999999999997, "end": 345.79999999999995, "text": " they're trying to have a neural network somewhere in them. And you put the observation in here" }, { "start": 345.79999999999995, "end": 350.47999999999996, "text": " and outcomes in action. And you can do this in various ways into Q learning or policy" }, { "start": 350.48, "end": 357.28000000000003, "text": " gradient or actor critic and so on. But ultimately, it's simply mapping the up the current observation" }, { "start": 357.28000000000003, "end": 363.84000000000003, "text": " and maybe the last few to the best action to take without explicitly modeling what happens" }, { "start": 363.84000000000003, "end": 369.68, "text": " in the environment. Now when they say model based reinforcement learning, what they mean" }, { "start": 369.68, "end": 377.08000000000004, "text": " is technically what you can do if you're in if you are in the model free, if you're in" }, { "start": 377.08, "end": 383.8, "text": " this situation, what you could do is you could say, well, since these model based RL techniques" }, { "start": 383.8, "end": 390.24, "text": " tend to work better, I could hear inside the agent, I could try to learn a model of the" }, { "start": 390.24, "end": 397.28, "text": " environment, E prime, and I could try to basically learn what happens in my environment when" }, { "start": 397.28, "end": 403.47999999999996, "text": " I do a certain action. And then I could use that model right here, in order to do this" }, { "start": 403.48, "end": 412.04, "text": " planning that I know from up here. Okay, so in this case, they go for exactly this they" }, { "start": 412.04, "end": 417.16, "text": " go for, let's learn a model of the environment, this is not an exact model, it's a learned" }, { "start": 417.16, "end": 425, "text": " model. And then let's use that to plan. Now this usually has a bunch of, you know, very" }, { "start": 425, "end": 432.04, "text": " a bunch of things that go against it, namely, if this model right here is bad, then the" }, { "start": 432.04, "end": 439.36, "text": " planning in the model will often accumulate and even exaggerate the errors that are in" }, { "start": 439.36, "end": 444.28000000000003, "text": " this model. So it's sometimes very hard to learn a model of the world and then use that" }, { "start": 444.28000000000003, "end": 453.44, "text": " for planning. And I've recently done a paper where, where curious AI takes the noise in" }, { "start": 453.44, "end": 459.52000000000004, "text": " auto encoders to regularize exactly such a planning procedure to counter this. And this" }, { "start": 459.52, "end": 467.64, "text": " paper right here is a different approach of combining this learned model. This learned" }, { "start": 467.64, "end": 477, "text": " model. 
So, okay. That was about the first sentence. They say it aims to learn a global" }, { "start": 477, "end": 482.28, "text": " model for the dynamics of the environment. A good model can potentially enable planning" }, { "start": 482.28, "end": 487.52, "text": " algorithms to generate a large variety of behaviors and solve diverse tasks, which is" }, { "start": 487.52, "end": 492.96, "text": " true, right? If I have a model of the environment, then I could just use it to plan. I wouldn't" }, { "start": 492.96, "end": 498.79999999999995, "text": " even have to do anything fancy anymore, right? If I have a model of how my tic tac toe works," }, { "start": 498.79999999999995, "end": 504.96, "text": " I can just plan my way to success. And I can do this AlphaZero style. Or if" }, { "start": 504.96, "end": 510.2, "text": " this state tree is small enough, I can actually just use a planner. And I don't even" }, { "start": 510.2, "end": 516.12, "text": " have to, I don't have to do anything anymore, if I have a good model. They say however," }, { "start": 516.12, "end": 520.88, "text": " learning an accurate model for complex dynamical systems is difficult. And even then the model" }, { "start": 520.88, "end": 525.92, "text": " might not generalize well outside the distribution of states on which it was trained. So this" }, { "start": 525.92, "end": 531.2, "text": " is another problem. If you learn a model, it's only going to be valid in a certain range." }, { "start": 531.2, "end": 540.96, "text": " Okay. And they say, in this work, we combine model based learning with model free learning, where" }, { "start": 540.96, "end": 547.96, "text": " the model free learning is of primitives that make model based planning easy. So what they" }, { "start": 547.96, "end": 556, "text": " attempt to do is they attempt in an unsupervised fashion to learn a so-called set of skills." }, { "start": 556, "end": 570.88, "text": " And the set of skills could be something like walk forward, walk backward, stay put, jump." }, { "start": 570.88, "end": 578.68, "text": " So they attempt to learn things like this in a model free way, that the model is simply" }, { "start": 578.68, "end": 583.48, "text": " asked to come up with these things or the agent is simply asked to come up with these" }, { "start": 583.48, "end": 592.04, "text": " things. And then in stage two, a planner can use these skills and compose a plan." }, { "start": 592.04, "end": 597.48, "text": " Now this plan here, the special thing about this planner, it doesn't operate" }, { "start": 597.48, "end": 603.64, "text": " in the space of actions, of like small scale actions, it actually operates in the space" }, { "start": 603.64, "end": 609.88, "text": " of these skills right here. So here would be walk forward, walk back. And if" }, { "start": 609.88, "end": 615.24, "text": " we have a good enough model of the environment, it will tell us if I walk forward in this" }, { "start": 615.24, "end": 620.44, "text": " situation, what will happen? Okay, so I can walk forward. And then after that, I could" }, { "start": 620.44, "end": 627.2, "text": " walk backward, what's going to happen? And if I have a good model of the environment" }, { "start": 627.2, "end": 633.6400000000001, "text": " where the actions now are these macro actions, these skills, then I can use planning" }, { "start": 633.6400000000001, "end": 641.24, "text": " to reach my goal. 
Okay, so the question is, how do we come up with useful skills that" }, { "start": 641.24, "end": 646.6800000000001, "text": " the planner can then use? So they need to be somewhat diverse, right? But also, and" }, { "start": 646.6800000000001, "end": 655.6, "text": " here is the the crucial part and the the sort of contribution of this paper, they say, how" }, { "start": 655.6, "end": 662.64, "text": " can we discover skills whose outcomes are easy to predict? And this is how they counteract" }, { "start": 662.64, "end": 668.72, "text": " this notion here, that if your environment model is crap, then you're you're, it can't" }, { "start": 668.72, "end": 673.16, "text": " basically can't be used for planning, you'll just make it worse. So what they say is that" }, { "start": 673.16, "end": 680.32, "text": " these things right here, these skills that we learn, we will learn them in a way that" }, { "start": 680.32, "end": 686.6800000000001, "text": " make them easy to predict. So it makes it easy to predict what will happen after I do" }, { "start": 686.6800000000001, "end": 691.5600000000001, "text": " them. So they must be at the same time diverse. So only you know, if you stay put, it's pretty" }, { "start": 691.5600000000001, "end": 696.44, "text": " easy to predict what's going to happen like nothing. Okay. But we're going to see in the" }, { "start": 696.44, "end": 703.36, "text": " exact objective that they have to be sort of diverse. But also, so only one of them" }, { "start": 703.36, "end": 708.48, "text": " can be stay put, the other ones have to do something else. But also, they should be easily" }, { "start": 708.48, "end": 715.4, "text": " predictable by the environment model. And if you learn the skills such that they're" }, { "start": 715.4, "end": 720.52, "text": " easily predictable, your environment model will make less errors, and then you can use" }, { "start": 720.52, "end": 729.82, "text": " it for planning. Okay, let's dive in. They do actually open source code, and they have" }, { "start": 729.82, "end": 735, "text": " more of these videos, if you want to check it out, I'll link everything in the description." }, { "start": 735, "end": 748, "text": " Okay, let's actually dive into the meat right here. They say they do maximize the mutual" }, { "start": 748, "end": 755.64, "text": " information. And we're going to see between between what and what they want to maximize" }, { "start": 755.64, "end": 758.84, "text": " the mutual information. If you don't know what the mutual information is, the mutual" }, { "start": 758.84, "end": 764.82, "text": " information is a quantity. The mutual information between x and y is a quantity, that's the" }, { "start": 764.82, "end": 769.44, "text": " entropy of x minus the entropy of x conditioned on y, or you can also decompose it in the" }, { "start": 769.44, "end": 779.12, "text": " other way around entropy of y minus entropy of y conditioned on x. And we're going to" }, { "start": 779.12, "end": 788.12, "text": " they apply this, where is it equation two right here. Okay, so what they want is the" }, { "start": 788.12, "end": 793.32, "text": " following, they want to maximize the mutual information. And the mutual information basically" }, { "start": 793.32, "end": 802.5600000000001, "text": " means how much does one variable tell me about the other variable, okay? 
The mutual information" }, { "start": 802.5600000000001, "end": 812.7600000000001, "text": " between the skill, and we're going to see what that is, and the next state. So what" }, { "start": 812.7600000000001, "end": 822.8800000000001, "text": " is a skill? A skill is one entry in our table right here. So these here are skills, skills." }, { "start": 822.88, "end": 829.36, "text": " And they're indexed by z. Now z here, it seems like they're discrete, right? But" }, { "start": 829.36, "end": 834.88, "text": " in this case, z would be a continuous vector. But it's easier if you imagine that" }, { "start": 834.88, "end": 842.8, "text": " so each one of these is like z one, z two, and so on. They're going to be continuous," }, { "start": 842.8, "end": 850.88, "text": " but in our case, we'll just think of a discrete set of skills to be learned. Okay, so they" }, { "start": 850.88, "end": 857.48, "text": " say we want to maximize the mutual information between the skill that's currently in action." }, { "start": 857.48, "end": 862.72, "text": " So in every, every time the agent has to like choose a skill and saying like, okay, now" }, { "start": 862.72, "end": 870.32, "text": " I'm going to walk forward. And it's in a given state. And what you want to say is you have" }, { "start": 870.32, "end": 876.12, "text": " to maximize the mutual information between the skill and the next state, which means" }, { "start": 876.12, "end": 882.48, "text": " that it means two things, which you can see right here, you can decompose it in two different" }, { "start": 882.48, "end": 893.4, "text": " ways. One way is the following is to say, if I know which state I'm in, what's the entropy" }, { "start": 893.4, "end": 904.76, "text": " over my? Okay, that's the wrong way for me to lead into it. It's, if I know these" }, { "start": 904.76, "end": 912, "text": " two things, so if I know the state I'm in, and the next state that I'm going to, right," }, { "start": 912, "end": 919.28, "text": " I can in hindsight, I look back, if I know both things, what can I say about this skill" }, { "start": 919.28, "end": 927.4399999999999, "text": " z right here, that I couldn't say just from the starting state. So the starting state," }, { "start": 927.44, "end": 940.32, "text": " let's say is the person right here, and the end state is the person a little bit more" }, { "start": 940.32, "end": 944.7600000000001, "text": " over there. So let's call this walk forward; the person is looking to the" }, { "start": 944.7600000000001, "end": 951.5200000000001, "text": " right. Okay. Now, if I only show you the state on the left, the starting" }, { "start": 951.5200000000001, "end": 957.4000000000001, "text": " state, can you tell me which action is going to follow right here? Basically, you" }, { "start": 957.4, "end": 962.36, "text": " can't tell me much, it could be any action like walk forward, walk back, stay put. But if" }, { "start": 962.36, "end": 969.56, "text": " I also show you the next state, then you can pretty confidently say, ah, I know what you" }, { "start": 969.56, "end": 975.52, "text": " did, you did walk forward. 
Okay, so this, this, in this situation, we would have a high" }, { "start": 975.52, "end": 982.4, "text": " mutual information between z and s prime in the formulation here, if you decompose it" }, { "start": 982.4, "end": 986.72, "text": " in the other way, this is equivalent, but it's a different way of thinking about it," }, { "start": 986.72, "end": 993.36, "text": " it means when I show you these two things, how much more can you tell me about the next" }, { "start": 993.36, "end": 1000, "text": " state than if I only show you this. So in this formulation, what we would do is, I would" }, { "start": 1000, "end": 1005.6800000000001, "text": " say I tell you the human is here looking to the right, what can you tell me about the" }, { "start": 1005.6800000000001, "end": 1014.1600000000001, "text": " next state? And you like what I, I couldn't tell you very much, right? It can be anything." }, { "start": 1014.16, "end": 1020, "text": " But if I then tell you the action is walk forward, then you could say, ah, now I now" }, { "start": 1020, "end": 1025.52, "text": " I get it, it's probably going to be something like this. Okay. This also would be a high" }, { "start": 1025.52, "end": 1032.44, "text": " mutual information. So you see that the the task of maximizing this mutual information" }, { "start": 1032.44, "end": 1038.36, "text": " is good, because what happens if I don't, if I have a low mutual information, if I have" }, { "start": 1038.36, "end": 1046.36, "text": " a low mutual information, it would mean that I could predict the next state just as well" }, { "start": 1046.36, "end": 1052.7199999999998, "text": " from the current state. It doesn't make a difference whether you give me the skill or" }, { "start": 1052.7199999999998, "end": 1059, "text": " not, it would not make a difference. And that is only the case if my skills are basically" }, { "start": 1059, "end": 1064.8, "text": " either all the same or all pretty, pretty random and pretty useless. Right? So if if" }, { "start": 1064.8, "end": 1070.08, "text": " all my skills are basically walking backwards, then I don't, you don't have to tell me which" }, { "start": 1070.08, "end": 1075.72, "text": " skill you do, I'm going to know that the next state is like this. So you can see that the" }, { "start": 1075.72, "end": 1084.06, "text": " objective of maximizing the mutual information between the skill and the next state is going" }, { "start": 1084.06, "end": 1090.8, "text": " to result in a situation where these skills are going to be first of all, diverse. And" }, { "start": 1090.8, "end": 1100.32, "text": " second of all, easy to easy to predict. Okay. And to see this, yeah, we only have to imagine" }, { "start": 1100.32, "end": 1106.32, "text": " what would happen, what would happen in a in a situation where the skills weren't diverse" }, { "start": 1106.32, "end": 1111.9199999999998, "text": " or weren't easy to predict. And you'll get exactly the situation where the information" }, { "start": 1111.9199999999998, "end": 1116.28, "text": " of the skill doesn't help you in predicting the next state, because yeah, either it's" }, { "start": 1116.28, "end": 1123.52, "text": " obvious or it's random. Okay, so we agree that it makes sense to maximize the mutual" }, { "start": 1123.52, "end": 1131.32, "text": " information. And they decompose this into two objectives. 
So they say the mutual information" }, { "start": 1131.32, "end": 1139.72, "text": " basically decomposes; what you'll have to do is split it into two" }, { "start": 1139.72, "end": 1148.92, "text": " terms, into two terms in a lower bound on the mutual information. And this" }, { "start": 1148.92, "end": 1154.26, "text": " is sort of the standard variational approximation literature. If you're" }, { "start": 1154.26, "end": 1160.78, "text": " into that, read up on variational auto encoders and things like this. Basically, the two steps" }, { "start": 1160.78, "end": 1168.2, "text": " here are you tighten the variational lower bound and you maximize the approximate lower" }, { "start": 1168.2, "end": 1178.48, "text": " bound. Okay, so you have the mutual information, and you can lower bound it. Okay, you" }, { "start": 1178.48, "end": 1184.64, "text": " can lower bound it by this quantity. Now, since this is a lower bound, you can" }, { "start": 1184.64, "end": 1191.68, "text": " prove that this is a lower bound, it means that the higher you make this term, then" }, { "start": 1191.68, "end": 1197.52, "text": " the more, basically, okay, I don't know how to formulate it, but it should" }, { "start": 1197.52, "end": 1202.6, "text": " be fairly obvious: if this thing on the right is a lower bound to the mutual information," }, { "start": 1202.6, "end": 1208.32, "text": " then maximizing the thing on the right will maximize the thing on the left. And it will" }, { "start": 1208.32, "end": 1216.8, "text": " do so. So imagine I is up here. And this E on the right side is down here, it's a lower" }, { "start": 1216.8, "end": 1223.8, "text": " bound, right? It's lower than I. So if I maximize E, well, I haven't really done anything to" }, { "start": 1223.8, "end": 1229.3999999999999, "text": " I, but if I maximize it even more up to here, then since it's a lower bound, I know now my" }, { "start": 1229.3999999999999, "end": 1235, "text": " I must be at least higher than this. Okay, so maximizing a lower bound to a quantity" }, { "start": 1235, "end": 1241.44, "text": " will ultimately increase the quantity. But also, the efficiency by which it does" }, { "start": 1241.44, "end": 1248.9199999999998, "text": " this depends on how tight the bound is. So if the bound is very tight, like this, you" }, { "start": 1248.92, "end": 1257.6000000000001, "text": " see I is not much above E. If the bound is very tight, then maximizing E will result" }, { "start": 1257.6000000000001, "end": 1264.6000000000001, "text": " in a faster maximization of I. Okay, so you can do two things, you can maximize the quantity" }, { "start": 1264.6000000000001, "end": 1270.24, "text": " that is the lower bound, or you can tighten the bound. And here we can see" }, { "start": 1270.24, "end": 1275.3200000000002, "text": " that the tightness of the bound depends on this quantity right here, which is the KL" }, { "start": 1275.32, "end": 1285.96, "text": " divergence between this and this. So yeah, let's watch this in context; we can" }, { "start": 1285.96, "end": 1293.04, "text": " actually go through it on a high level. If you've never done this variational approximation" }, { "start": 1293.04, "end": 1300.74, "text": " sort of math, then this might be a bit informative. 
Okay, so the thing right here is just pops" }, { "start": 1300.74, "end": 1307.28, "text": " out of the definition of the mutual information. It's the it's basically the differences of" }, { "start": 1307.28, "end": 1313.28, "text": " the entropy's, which the entropy's are log quantities, right? So if you have a log A" }, { "start": 1313.28, "end": 1319.56, "text": " minus log B, that you can also write this as the log of the fraction of A over B, that's" }, { "start": 1319.56, "end": 1327.36, "text": " just a property of the log. And so it's expectations over logs, these entropy's. So you can write" }, { "start": 1327.36, "end": 1337.28, "text": " it as this thing right here. Okay. And this basically says, this is very high, if or very" }, { "start": 1337.28, "end": 1345.32, "text": " low, depending so you need to whether or something is lower high always will depend on what you" }, { "start": 1345.32, "end": 1353.8999999999999, "text": " exactly you have to consider. But ultimately, what you'll want is the ratio between this" }, { "start": 1353.9, "end": 1360.52, "text": " quantity, which is the probability of the next state given the current state and the" }, { "start": 1360.52, "end": 1368.0400000000002, "text": " skill you're taking divided by just the probability over the next state, given the the current" }, { "start": 1368.0400000000002, "end": 1375.6200000000001, "text": " state, in expectation over all the skills, current states and next states. Now what they're" }, { "start": 1375.6200000000001, "end": 1383, "text": " saying is, this here, this is basically the environment, right? This is if you are in" }, { "start": 1383, "end": 1388.56, "text": " a state and you perform a skill, what's the next state? That's the true environment. P" }, { "start": 1388.56, "end": 1393.12, "text": " here is the true environment, which we don't know, right? We don't know what the environment's" }, { "start": 1393.12, "end": 1399.32, "text": " going to do. But we would like to learn a model for the environment. And this model" }, { "start": 1399.32, "end": 1412.96, "text": " for the environment is now Q, Q, theta, phi, theta, phi, Greek letter. So Q phi here is" }, { "start": 1412.96, "end": 1420.04, "text": " going to be a neural network that will approximate the environment. And in this probabilistic" }, { "start": 1420.04, "end": 1425.88, "text": " framework, it is going to be a learned distribution that will approximate the distribution of" }, { "start": 1425.88, "end": 1435.1200000000001, "text": " P. So we approximate it by this. But now this is here it says equal equal, right? This is" }, { "start": 1435.12, "end": 1442.9199999999998, "text": " not equal, because this is just an approximation. So the equality must become must be basically" }, { "start": 1442.9199999999998, "end": 1451.84, "text": " compensated for by this term right here, you can see this here is expanded into these two," }, { "start": 1451.84, "end": 1456.06, "text": " you can go through the exact definitions and see why this is an equality. But basically," }, { "start": 1456.06, "end": 1463.28, "text": " you can say that the mutual information is this expectation, or it is this expectation." }, { "start": 1463.28, "end": 1467.24, "text": " But now you have to correct for the fact that here you only have an approximation. 
And you" }, { "start": 1467.24, "end": 1474.2, "text": " have to correct for the fact by exactly the amount by which the approximation is different" }, { "start": 1474.2, "end": 1480.68, "text": " than the quantity that you're approximating. And this is this key KL divergence right here." }, { "start": 1480.68, "end": 1488.8799999999999, "text": " So the KL divergence basically measures how different two distributions are. It's sort" }, { "start": 1488.88, "end": 1493.68, "text": " of a distance, not exactly distance, but sort of a distance between these two distributions" }, { "start": 1493.68, "end": 1497.5800000000002, "text": " right here, it says, here's the real world. And here is your estimate of the real world," }, { "start": 1497.5800000000002, "end": 1506.24, "text": " how much do they disagree, and that quantity plus, then you can replace your world, the" }, { "start": 1506.24, "end": 1512.72, "text": " exact world distribution by your approximate distribution. And you still are equal to the" }, { "start": 1512.72, "end": 1518.76, "text": " mutual information. And now the basically the trick is you say, oh, the KL divergence" }, { "start": 1518.76, "end": 1525.44, "text": " is always positive, it's a quantity that it can only be a positive number. So if I leave" }, { "start": 1525.44, "end": 1532, "text": " it away, certainly, this is only this is going to be a lower bound to the quantity. Okay." }, { "start": 1532, "end": 1537.2, "text": " All right. So two tasks right here. First of all, tighten the variational bound, which" }, { "start": 1537.2, "end": 1542.76, "text": " means make this quantity small, make your approximate world model as close as possible" }, { "start": 1542.76, "end": 1550.46, "text": " to the real world. How do we do this neural network? Okay, you input trajectories. I was" }, { "start": 1550.46, "end": 1556.64, "text": " in this state, I performed this skill, and I ended up in this state. Sorry, that's this" }, { "start": 1556.64, "end": 1561.56, "text": " and then you simply match your neural network simply matches what happens in the real world," }, { "start": 1561.56, "end": 1568.48, "text": " it learns the transition function, basically. So that's, that's the tightening of the variational" }, { "start": 1568.48, "end": 1579.08, "text": " bound. And the second step is this right here to" }, { "start": 1579.08, "end": 1583.6799999999998, "text": " to maximize the approximate lower bound, right? The first step was Titan variation lower bound," }, { "start": 1583.6799999999998, "end": 1588.6399999999999, "text": " that basically means make your world model more accurate. And the second is Titan that" }, { "start": 1588.64, "end": 1595.24, "text": " maximize the approximate lower bound. Now this is going to part, this is going to be" }, { "start": 1595.24, "end": 1601.76, "text": " the part that says, now given that I already have a better world model right here, can" }, { "start": 1601.76, "end": 1611, "text": " I improve my can I sort of improve my skills such that they become easier to predict and" }, { "start": 1611, "end": 1617.96, "text": " more diverse? Can I improve my skills such that this mutual information right here gets" }, { "start": 1617.96, "end": 1626.44, "text": " to be high as high as possible? Okay. So this is sort of an alternating thing. And you can" }, { "start": 1626.44, "end": 1633.52, "text": " see this in this very, very, very, very confusing diagram, honestly. 
So what are you going to" }, { "start": 1633.52, "end": 1638.96, "text": " do in this algorithm? First of all, in each episode, you're going to select a skill at" }, { "start": 1638.96, "end": 1644.28, "text": " random. And as I said, these skills, they're not predefined. So no one tells the agent" }, { "start": 1644.28, "end": 1649.44, "text": " to walk forward, it simply says, okay, you have like, in a discrete case, you would have" }, { "start": 1649.44, "end": 1655.44, "text": " like, you have five skill slots, right? And the only thing I require is that they're sort" }, { "start": 1655.44, "end": 1659.48, "text": " of, you know, consistent over time. So skill one is always going to be sort of the same" }, { "start": 1659.48, "end": 1665.56, "text": " thing and skill two, but agent, you can basically decide what skill one is, right? But make" }, { "start": 1665.56, "end": 1672.28, "text": " the skill such that it's predictable and that the different skills are diverse. Okay, so" }, { "start": 1672.28, "end": 1677.72, "text": " you're going to sample one of the skills, like skill zero or whatnot. And then you're" }, { "start": 1677.72, "end": 1687.76, "text": " going to do two things. First of all, you're going to learn these skill dynamics, which" }, { "start": 1687.76, "end": 1694.24, "text": " is you're going to learn your approximate model of the world. Okay. And how do you do" }, { "start": 1694.24, "end": 1702.68, "text": " that? Basically, here, you're the agent and the agent will. So what does the agent have" }, { "start": 1702.68, "end": 1709.96, "text": " to do? The agent will take in the skill Z and it will take in the current state of the" }, { "start": 1709.96, "end": 1715.64, "text": " world and it will output an action. Now, this is the model free part, right? So the agent" }, { "start": 1715.64, "end": 1722.8, "text": " that somehow has to come up with saying, ah, skill zero, that's walking forward. And in" }, { "start": 1722.8, "end": 1729.8799999999999, "text": " this situation, walking forward means I have to lift my leg or something like this. So" }, { "start": 1729.8799999999999, "end": 1734.52, "text": " you're going to take your skill, you're going to with your agent perform an action based" }, { "start": 1734.52, "end": 1738.58, "text": " on that skill and the current state of the world. Then the environment is going to give" }, { "start": 1738.58, "end": 1745.48, "text": " you the next state right here. And from those things, you can now learn your world model." }, { "start": 1745.48, "end": 1752.76, "text": " You know, I was in state S, I performed action a but I performed action a based on skills" }, { "start": 1752.76, "end": 1761.92, "text": " Z. And then I ended up in state S prime. And I can learn a model of the world, right? This" }, { "start": 1761.92, "end": 1766.8, "text": " is a triple, I can do supervised learning of a world model. Now here, they do probabilistic" }, { "start": 1766.8, "end": 1772.12, "text": " learning, but and we're going to see in a second how that works. But ultimately, they" }, { "start": 1772.12, "end": 1780.08, "text": " approximate the world with their model. Cool. So that's the this outer loop. And then what" }, { "start": 1780.08, "end": 1786.36, "text": " are they going to do next, they're going to use that world model to determine a reward" }, { "start": 1786.36, "end": 1793.1399999999999, "text": " for the agent, and the reward for the agent for taking the action. 
So the reward is going" }, { "start": 1793.1399999999999, "end": 1799.24, "text": " to be, oh, agent, you took action a. Now, what's your reward for doing this? This is" }, { "start": 1799.24, "end": 1809.04, "text": " the model free reinforcement learning part, your reward is going to be very high, if if" }, { "start": 1809.04, "end": 1817, "text": " this was very predictable. And if it is also diverse, right, so now, the agent has to sort" }, { "start": 1817, "end": 1826.36, "text": " of max sort of the agent has to go and make this quantity very high this, we want the" }, { "start": 1826.36, "end": 1833.44, "text": " outcome of these actions to be predictable, and dive and the actions themselves to be" }, { "start": 1833.44, "end": 1840.4, "text": " diverse. It is, I'm sorry, it's very hard to keep all of this very straight. Okay. But" }, { "start": 1840.4, "end": 1847.4, "text": " ultimately, two steps, learn world model from the experience that you've generated. And" }, { "start": 1847.4, "end": 1853.92, "text": " second thing, learn the agent such that it maximizes this this quantity that we've seen" }, { "start": 1853.92, "end": 1861.24, "text": " before. And you do this via giving the agent a reward that is proportional to the mutual" }, { "start": 1861.24, "end": 1871.4, "text": " information. And we've already seen that we can approximate the mutual information by" }, { "start": 1871.4, "end": 1881.8, "text": " by this quantity here. Okay. So learn world model, and make the agent go higher mutual" }, { "start": 1881.8, "end": 1889.08, "text": " information, two steps. Okay, learn world model is very, very classic, you can say," }, { "start": 1889.08, "end": 1894.6399999999999, "text": " okay, I need to improve, I need to minimize this KL divergence. So I need the gradient" }, { "start": 1894.6399999999999, "end": 1902, "text": " with respect to the parameters of my world model, I can write down the KL divergence" }, { "start": 1902, "end": 1910.48, "text": " like this. And then since I can do this reverse, so log a over b is log a minus log b. And" }, { "start": 1910.48, "end": 1916.48, "text": " since the world doesn't depend on the parameters of my model, this will simply give me this" }, { "start": 1916.48, "end": 1922.66, "text": " thing right here, which is the gradient of the log probability basically, of my neural" }, { "start": 1922.66, "end": 1928.4, "text": " network. And this can be just optimized straightforward, this is a neural network, optimize it with" }, { "start": 1928.4, "end": 1935.84, "text": " gradient descent. These are the inputs, this is the output. Now, okay, you, this is all" }, { "start": 1935.84, "end": 1940.24, "text": " probability distributions, but ultimately, you can you can do it pretty straightforward." }, { "start": 1940.24, "end": 1948.92, "text": " Okay, so corresponds to maximizing the likelihood of the samples from P under Q. Now, the second" }, { "start": 1948.92, "end": 1958.64, "text": " step maximize the approximate lower bound. Okay. So after they say after fitting Q, after" }, { "start": 1958.64, "end": 1964.56, "text": " improving our world model, we can optimize pi pi is the agent that actually takes the" }, { "start": 1964.56, "end": 1970.9199999999998, "text": " actions based on the skill. So it's given a skill, and it needs to perform an action." 
}, { "start": 1970.9199999999998, "end": 1977.8, "text": " And it needs to maximize this quantity, as we've seen, needs to maximize the mutual information" }, { "start": 1977.8, "end": 1984.96, "text": " between if I know the action and if I don't, or the mutual information between the skill" }, { "start": 1984.96, "end": 1993.1599999999999, "text": " and the next state. I say note, this is a reinforcement learning style optimization," }, { "start": 1993.16, "end": 2000.1200000000001, "text": " with a reward function of this quantity. However, so you look at the quantity that they need" }, { "start": 2000.1200000000001, "end": 2006.88, "text": " right here, the quantity is going to be this thing. And this thing is just, I feed the" }, { "start": 2006.88, "end": 2012.78, "text": " skill and the state into my world model. And I look what what comes out of the world model." }, { "start": 2012.78, "end": 2019.64, "text": " So this I can compute, right? But this thing right here, I can't compute because this is" }, { "start": 2019.64, "end": 2028.3000000000002, "text": " this is what happens in the world when I'm in state s, and I just run my agent over in" }, { "start": 2028.3000000000002, "end": 2036.66, "text": " expectation over all the skills. So this I don't know, they have a log of this is intractable." }, { "start": 2036.66, "end": 2043.5600000000002, "text": " And then so we approximate the reward function for pi as this thing right here. Now, first," }, { "start": 2043.56, "end": 2053.88, "text": " let's look at what this thing is. So the reward of taking action a, and action a is based" }, { "start": 2053.88, "end": 2059.7599999999998, "text": " on skill z, right? So skill z was fed into the agent, the agent comes up with action" }, { "start": 2059.7599999999998, "end": 2065, "text": " a, oh, you want me to walk forward in this situation, okay, I'm gonna lift my leg. That's" }, { "start": 2065, "end": 2070.84, "text": " the action. Okay. So the reward for this action, given this skill, and given the current state" }, { "start": 2070.84, "end": 2078.4, "text": " is going to be what it's going to be very high, if this here is very high. So it's going" }, { "start": 2078.4, "end": 2087.76, "text": " to be very high if the probability so this s prime is the state you ended up in, right?" }, { "start": 2087.76, "end": 2094.1200000000003, "text": " So after taking the action, you ended up in s prime. So if what does it mean when this" }, { "start": 2094.12, "end": 2103, "text": " quantity is very high? It means that my world model q, that is approximating the world," }, { "start": 2103, "end": 2110.04, "text": " thinks that this state is very probable if you were in this state and are given the skill" }, { "start": 2110.04, "end": 2117.2799999999997, "text": " z. So this basically means that the neural network can predict with very high accuracy," }, { "start": 2117.28, "end": 2124.44, "text": " what's going to happen if you are in this state and are given this skill to perform," }, { "start": 2124.44, "end": 2131.6400000000003, "text": " right? This is one of the things that we want. Now, what is it divided by? It's divided by" }, { "start": 2131.6400000000003, "end": 2141, "text": " this. And you can see here, the z i are other skills. So it is, what does this mean? This" }, { "start": 2141, "end": 2148.2, "text": " is almost the same quantity. 
It means how well can the same neural network predict the" }, { "start": 2148.2, "end": 2158.6, "text": " next state if you were given a different skill. So it means if I'm here, and I ended up here," }, { "start": 2158.6, "end": 2165, "text": " how well can you predict it if I tell you that I walked forward? And here you ask, well," }, { "start": 2165, "end": 2170.04, "text": " how well can you predict it if I told you you walked backward, if I told you you jumped," }, { "start": 2170.04, "end": 2179.32, "text": " if I told you, and so on. So you basically aggregate over all the other skills you could" }, { "start": 2179.32, "end": 2183.92, "text": " perform. And each time you ask the neural network, well, how likely is it that you end" }, { "start": 2183.92, "end": 2190.7599999999998, "text": " up in the state that I ended up with? So what does it mean if this quantity is high, or" }, { "start": 2190.7599999999998, "end": 2198.36, "text": " sorry, if the entire sum here is high? That means that the skill doesn't really give you" }, { "start": 2198.36, "end": 2202.8, "text": " much information, the neural network is very good, no matter which skill you selected," }, { "start": 2202.8, "end": 2207, "text": " right? It's very accurate in predicting the next state doesn't really matter. The skill" }, { "start": 2207, "end": 2214.88, "text": " doesn't really matter. And this is what we don't want, right? We want that the skills" }, { "start": 2214.88, "end": 2220.08, "text": " are very diverse, right? So the top part is, they're easy, it's easy to predict what will" }, { "start": 2220.08, "end": 2227.6800000000003, "text": " happen if you perform a given skill. And we divide this by the bottom part. And this makes" }, { "start": 2227.68, "end": 2233.24, "text": " it such that these skills are very diverse, because if they're not diverse, then it doesn't" }, { "start": 2233.24, "end": 2238.8399999999997, "text": " really matter which one you perform. And then this quantity on the bottom will be very high," }, { "start": 2238.8399999999997, "end": 2246.2799999999997, "text": " but we divide by it. So we want, we want it to be low. Okay, now the reward is going to" }, { "start": 2246.2799999999997, "end": 2253.24, "text": " be the log of this fraction here. And this makes sense, right intuitively, but they're" }, { "start": 2253.24, "end": 2258.64, "text": " going to try to motivate this mathematically. And for motivating this mathematically, of" }, { "start": 2258.64, "end": 2264.3999999999996, "text": " course, they need to approximate this quantity right here. This quantity is the denominator." }, { "start": 2264.3999999999996, "end": 2272.08, "text": " So they this denominator is a proc is an approximation to this. It's an approximation. As you can" }, { "start": 2272.08, "end": 2280.3199999999997, "text": " see here, this is sort of sort of a sample based approximation to the transition from" }, { "start": 2280.32, "end": 2289.84, "text": " s to s prime under the distribution of z. But what you want is just is the transition" }, { "start": 2289.84, "end": 2298.8, "text": " from s to s prime, not in your approximation, but in the real world. And they formulate" }, { "start": 2298.8, "end": 2308.7200000000003, "text": " this, they say, okay, we can decompose it as such as a as an integral over this conditional" }, { "start": 2308.72, "end": 2318.3199999999997, "text": " right here. So they bring in the z variable. 
And then they say, well, this is" }, { "start": 2318.3199999999997, "end": 2329.2, "text": " approximately, we can replace this here by this. And we can replace this here by this." }, { "start": 2329.2, "end": 2336.3199999999997, "text": " They say, well, since the world model is an approximation" }, { "start": 2336.32, "end": 2343.32, "text": " to the real world, we can sort of replace that. And then this is the part" }, { "start": 2343.32, "end": 2352.6000000000004, "text": " that doesn't convince me: they say, well, this P of z given s, we can just replace it by P of z. Now" }, { "start": 2352.6000000000004, "end": 2358.84, "text": " it's very tricky to see what these quantities are. Ultimately, it ends up being" }, { "start": 2358.84, "end": 2369.48, "text": " that right here. But it's so tricky. So they say we replace P of z given s by P of" }, { "start": 2369.48, "end": 2379.1600000000003, "text": " z. And, okay, let's think about this for a second. What is the top? The bottom quantity" }, { "start": 2379.1600000000003, "end": 2383.6800000000003, "text": " is simply the distribution over your skills. And depending on how you sample them, this" }, { "start": 2383.68, "end": 2389.72, "text": " could be like a uniform distribution over your skills, like that's fine. But what's the top thing," }, { "start": 2389.72, "end": 2399.52, "text": " the top thing, basically, we can use Bayes' formula to reformulate it. It's P of s given z, right," }, { "start": 2399.52, "end": 2414.24, "text": " times P of z divided by P of s. So this quantity depends on multiple things." }, { "start": 2414.24, "end": 2422.6, "text": " Here's that prior again. And this means what's the general distribution of states? What's" }, { "start": 2422.6, "end": 2430.08, "text": " the general distribution of states if your agent acts in the world, right? This, we" }, { "start": 2430.08, "end": 2438.36, "text": " don't know. And also this right here, what's the distribution in the true" }, { "start": 2438.36, "end": 2447.72, "text": " world? What's the probability of a state given that you were acting under a skill" }, { "start": 2447.72, "end": 2454.3599999999997, "text": " z? And this is also something we don't know, because we don't know the world, we don't have" }, { "start": 2454.3599999999997, "end": 2459.24, "text": " the world model. So you run into the same problem again and again, that you're trying to approximate" }, { "start": 2459.24, "end": 2464.72, "text": " this. And they want to make this so mathematically rigorous, but ultimately, and they go in the" }, { "start": 2464.72, "end": 2469.24, "text": " appendix, they go through various ways that they could solve this. But ultimately, they just say," }, { "start": 2469.24, "end": 2478.08, "text": " well, this is approximately the same. 
So this right here, it basically means: what skills," }, { "start": 2478.08, "end": 2485.7999999999997, "text": " if you're in a certain state, what skills brought you here, basically, what" }, { "start": 2485.7999999999997, "end": 2490.4799999999996, "text": " skills brought you here, what's the distribution of skills that brought you to this state? And" }, { "start": 2490.4799999999996, "end": 2495.2799999999997, "text": " they say, well, we're just going to approximate that by the prior distribution over our skills," }, { "start": 2495.28, "end": 2503.6000000000004, "text": " basically disregard the state here. And this seems overly shaky. Like, as I said, the entire paper" }, { "start": 2503.6000000000004, "end": 2512.48, "text": " makes sense. But I just feel it's trying to be overly mathematical, and then runs into a point" }, { "start": 2512.48, "end": 2518.48, "text": " where it can't be, and then they're just like, okay, we'll just replace it. And then sort" }, { "start": 2518.48, "end": 2526.32, "text": " of things break down; like, you can only be overly mathematical to some degree, it doesn't really fit." }, { "start": 2527.84, "end": 2533.52, "text": " But okay, so this is how you discover the skills: you maximize these quantities alternately," }, { "start": 2533.52, "end": 2540.56, "text": " you learn the world model, and you improve your skills by making them diverse and easily" }, { "start": 2540.56, "end": 2546.16, "text": " predictable. So how do you then plan using these skills? This is the second part of the paper," }, { "start": 2546.16, "end": 2553.04, "text": " and this is just as complicated as the first part. So they say given the learned skills," }, { "start": 2553.04, "end": 2560.08, "text": " so the learned skills are policies over actions given the skill z, right? So now you know how to like" }, { "start": 2560.08, "end": 2566.64, "text": " walk forward and walk back and so on. And now you're placed in a world, and you're given" }, { "start": 2566.64, "end": 2573.7599999999998, "text": " this checkpoint, it says, well, walk there. And you want to do this using planning, you don't" }, { "start": 2573.76, "end": 2581.6800000000003, "text": " want to learn anymore, you simply want to plan. Okay, what do you do? And as I said, this is even" }, { "start": 2581.6800000000003, "end": 2588.6400000000003, "text": " more involved. What you want to do is you want to do something like model predictive control, but not" }, { "start": 2588.6400000000003, "end": 2598.6400000000003, "text": " over actions, but over your learned skills. So you have this planner, the MPC, and the planner will" }, { "start": 2598.64, "end": 2607.3599999999997, "text": " in its head roll out a number of different plans, it will kind of" }, { "start": 2607.3599999999997, "end": 2614.48, "text": " explore a bunch of different plans Z, it will roll them out and say, okay, if I do this," }, { "start": 2614.48, "end": 2619.7599999999998, "text": " and this, and this, and this, what will happen, using its world model that it has learned," }, { "start": 2620.7999999999997, "end": 2627.44, "text": " it will observe what's going to be the reward in each of these cases. Now, they say here, access" }, { "start": 2627.44, "end": 2635.28, "text": " the environment reward, but can also be estimated. 
And this is another sort of, I feel weak point of" }, { "start": 2635.28, "end": 2643.44, "text": " this in that they now assume they have the true reward function. But they don't have a world model," }, { "start": 2643.44, "end": 2649.2000000000003, "text": " right? They don't have the world model, but they assume that they can sort of always ask for the" }, { "start": 2649.2, "end": 2657.9199999999996, "text": " true reward, which is probably not the case if you had a true world, but it could be the case," }, { "start": 2657.9199999999996, "end": 2661.2, "text": " the reward could be something like, well, if you're over there, you get higher reward," }, { "start": 2661.7599999999998, "end": 2670.48, "text": " but you don't exactly know how to get over there. In any case, so they roll out a bunch of trajectories" }, { "start": 2670.48, "end": 2675.9199999999996, "text": " in their head, they kind of plan forward, see what's going to happen if they do this or that," }, { "start": 2675.92, "end": 2685.2000000000003, "text": " or this or that. And then they choose the best one of these forward thoughts, and they execute it in" }, { "start": 2685.2000000000003, "end": 2691.6800000000003, "text": " the real world, right? So they say, well, I'm going to use choose the skill walk forward. So the agent" }, { "start": 2691.6800000000003, "end": 2696.96, "text": " is now going to be tasked with walking forward. And it's going to do that in the real world for" }, { "start": 2696.96, "end": 2702.16, "text": " a certain amount of steps, like 10 steps of walking forward, after 10 steps of walking forward," }, { "start": 2702.16, "end": 2707.52, "text": " you go back and say, I'm in this new situation right here, what should I do? And again, the planner" }, { "start": 2707.52, "end": 2712.48, "text": " is going to be like, if you first walk forward, and then walk back, where are you going to be," }, { "start": 2712.48, "end": 2719.8399999999997, "text": " and so on. So the planner will always plan, basically to go from where you are to the" }, { "start": 2719.8399999999997, "end": 2727.12, "text": " checkpoint using a composition of the skills that you have learned. So the planner may be fine. Okay," }, { "start": 2727.12, "end": 2733.8399999999997, "text": " if I first walk forward, walk back a bit, and so on, I'm going to get to the goal, I'm going to" }, { "start": 2733.8399999999997, "end": 2740.48, "text": " reach the goal. Now please agent execute this first thing, walk forward, the agent executes it," }, { "start": 2740.48, "end": 2746.08, "text": " and maybe it won't, you know, it won't do as well, it will maybe end up here. And then it says, well," }, { "start": 2746.08, "end": 2751.3599999999997, "text": " I'm here now, please plan again. So it plans again, it says, okay, I can still kind of walk back," }, { "start": 2751.36, "end": 2758.1600000000003, "text": " I'll be here, here, but then I have to do something else. So now walk back. And okay, so this is what's" }, { "start": 2758.1600000000003, "end": 2768.4, "text": " going to happen. But it is going to happen in a weird way. Namely, what we keep are normal," }, { "start": 2768.4, "end": 2775.92, "text": " since everything is continuous, we'll keep normal distributions of all our future steps. So we don't" }, { "start": 2775.92, "end": 2783.84, "text": " say, okay, I go here, and then I go here. What you'll say is I approximately go here. And after that," }, { "start": 2783.84, "end": 2790.64, "text": " I'll approximately go here. 
And you will do it in such a way that the peak of this normal distribution" }, { "start": 2790.64, "end": 2796.32, "text": " is going to be the highest, where you think you will get the most reward. If you follow this" }, { "start": 2796.32, "end": 2800.88, "text": " trajectory, like if you follow this trajectory, you get a very high reward. And if I follow a" }, { "start": 2800.88, "end": 2807.12, "text": " trajectory that maybe goes here, I won't get a high reward. If it actually turns out in your" }, { "start": 2807.12, "end": 2812.2400000000002, "text": " imagination that you do get a high reward for this trajectory, you'll change this distribution," }, { "start": 2812.2400000000002, "end": 2818.7200000000003, "text": " such that the peak is here. And of course, the tighter the peak is, the more sure you are. So" }, { "start": 2818.7200000000003, "end": 2825.44, "text": " you sort of are looking, if you look out into the world, you want the closest steps to be very peaky." }, { "start": 2825.44, "end": 2832.96, "text": " And then as you look out, they can be more sort of broad. And that's how you plan ahead, you keep" }, { "start": 2833.84, "end": 2840.16, "text": " doing a step. So if you go from here to finally you choose, I want to go here, where the tip is the" }, { "start": 2840.16, "end": 2847.36, "text": " highest, go here, then you imagine forward again, you refine these distributions over the future." }, { "start": 2848, "end": 2855.12, "text": " And then you take the next step that gets you to the where the highest peak is right here, basically." }, { "start": 2855.12, "end": 2864.08, "text": " And so on. This is simply planning in a continuous domain, it is pretty analogous to how you would" }, { "start": 2864.08, "end": 2870.4, "text": " plan in like, alpha go, if you or tic tac toe, if you had a planner. But since everything's" }, { "start": 2870.4, "end": 2877.8399999999997, "text": " continuous, it makes it just so much harder. So they yeah, they always update these distributions," }, { "start": 2877.8399999999997, "end": 2884.16, "text": " as you can see here, to the skill that gave you a high reward in your imagination," }, { "start": 2884.16, "end": 2894.56, "text": " compared to the rewards of the other plans that you had. Okay, well, this was a long, long way" }, { "start": 2894.56, "end": 2902, "text": " until we got here. But if you recap, so first, they in an unsupervised fashion, learn these low" }, { "start": 2902, "end": 2908.16, "text": " level skills, such that they're easily predictable by their own world model, and diverse. And then in" }, { "start": 2908.16, "end": 2917.52, "text": " the second step, they can use that to, to do basically planning. So they first learn these" }, { "start": 2917.52, "end": 2925.6, "text": " skills, and then the planner composes them to make the agent do something. And again, the agent will" }, { "start": 2925.6, "end": 2930.7999999999997, "text": " never have to learn how to do this go from checkpoint to checkpoint, because the planner" }, { "start": 2930.8, "end": 2939.04, "text": " can just compose these low level skills. So they have these experiments right here. And we won't" }, { "start": 2939.04, "end": 2944.5600000000004, "text": " go through the experiment because this video is already very, very long. 
But they basically show" }, { "start": 2944.5600000000004, "end": 2952.7200000000003, "text": " that their learned skills do actually end up being very diverse," }, { "start": 2952.7200000000003, "end": 2960.5600000000004, "text": " do end up predictable, have a high variance, and so on; they have to give certain priors to it" }, { "start": 2960.56, "end": 2968.64, "text": " to make it actually work in a real setting. But the results you can actually see in these videos" }, { "start": 2968.64, "end": 2974.08, "text": " and in the graphs; I invite you to check out the paper if you're still here. Thanks for being here," }, { "start": 2974.08, "end": 2980.56, "text": " I hope you enjoyed this; this was one of the more complicated and mathy papers we looked at. But I" }, { "start": 2980.56, "end": 2987.44, "text": " still think it's fun. And I still think the outcome is pretty impressive right here, how you" }, { "start": 2987.44, "end": 2996.56, "text": " can use math to derive basically these very intuitive objectives to learn. It's also" }, { "start": 2996.56, "end": 3017.84, "text": " pretty cool. Alright, that was it for me. And bye bye." } ]
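To make the skill-discovery objective in the transcript above concrete: the intrinsic reward described there is r(z, s, s') = log q(s'|s, z) - log( (1/L) * sum_i q(s'|s, z_i) ), that is, predictability of the next state under the chosen skill, divided by the average predictability under other skills. Below is a minimal Python sketch of that reward, assuming a learned diagonal-Gaussian skill-dynamics model; `predict_mean` and all other names are hypothetical stand-ins, not the authors' actual code.

```python
import numpy as np

def gaussian_logpdf(x, mean, std):
    # Log-density of a diagonal Gaussian, standing in for the learned
    # skill-dynamics model q(s' | s, z) described in the transcript.
    return -0.5 * np.sum(((x - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2), axis=-1)

def intrinsic_reward(s, z, s_next, predict_mean, other_skills, std=1.0):
    # Numerator: how predictable was s' under the skill actually used?
    log_q = gaussian_logpdf(s_next, predict_mean(s, z), std)
    # Denominator: how predictable would s' be under other skills? If other
    # skills explain s' equally well, the skill carries no information and
    # the reward is low, which pushes the skills to be diverse.
    log_q_others = np.array([gaussian_logpdf(s_next, predict_mean(s, z_i), std)
                             for z_i in other_skills])
    # r = log q(s'|s,z) - log( (1/L) * sum_i q(s'|s,z_i) ), done in log space.
    return log_q - (np.logaddexp.reduce(log_q_others) - np.log(len(other_skills)))
```

In the alternating loop from the transcript, the dynamics model would be fit by maximizing this log-likelihood on observed (s, z, s') triples, and the agent would then be trained with any model-free RL method on this reward.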
XdpF9ZixIbI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Can we Contain Covid-19 without Locking-down the Economy?
[ "Science & Technology" ]
[ "machine learning", "epidemiology", "worst case", "statistics", "hypothesis test", "covid", "corona", "coronavirus" ]
My thoughts on the let-the-young-get-infected argument. https://medium.com/amnon-shashua/can-we-contain-covid-19-without-locking-down-the-economy-2a134a71873f Abstract: In this article, we present an analysis of a risk-based selective quarantine model where the population is divided into low and high-risk groups. The high-risk group is quarantined until the low-risk group achieves herd-immunity. We tackle the question of whether this model is safe, in the sense that the health system can contain the number of low-risk people that require severe ICU care (such as life support systems). Authors: Shai Shalev-Shwartz, Amnon Shashua Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Can we contain COVID-19 without locking down the economy? This is a question and I do care about this article because Shai Shalev-Shwartz is one of the bigger names in machine learning theory. So it was interesting for me to see what he and his collaborator here had to say about the kind of outbreak and the strategy to contain it. So contain maybe isn't the right word for what they ask. I think the way they ask the question is how are we going to survive this the best? And so this by no means is an endorsement by me. I'm not a medical professional. Please just view this as a commentary and an explanation of what they are saying. I'll give my opinions along the way, of course. So they identify three different models for handling the spread of COVID-19. And we'll start with the third one because they argue for the first one and this builds more suspense. So they say there is countrywide lockdown, right, until the spread of the virus is under control. They say it could take anywhere from weeks to months. It is the safest route, but it does not prevent a second wave from occurring. Now, of course, if you have people, let's say these are people, right, then the thing is everyone just stays in there, stay in your house, right, everybody, right, until it's kind of gone. Now they say correctly there is a risk of a second wave because only a single infected person, because there's no immunity, still has the potential of creating another epicenter. So they don't consider this option. The next option is called containment-based selective quarantine, which means find all the positive cases and put them in quarantine. So let's say we go here and we let you roam around freely, but we know we can test people and we know that some of them are positive. So we simply tell them to stay at home, right. Now this depends a lot on how well you can test people, and it also depends on what they claim the contagious time interval is. We know that there are people that are contagious without showing symptoms. So unless you can test every single person all the time, this is likely to not really help a lot. There's various data from various countries that actually shows it can reduce the load, but they basically argue against that because there are these contagious people and you can never test fast enough or accurately or thoroughly enough. And then they say there is risk-based selective quarantine, which means what? It means that some of these people are going to be at risk. And in this case, we obviously mean old people. So old people, I'm going to draw them with a cane, not because old people aren't fit, just because they have better taste in canes. And then there are young people and they run a smartphone with TikTok. And what we're going to say is that you youngsters, you're not really at risk from this. So you go out, you sneeze on each other, you go about your life normally, and you old people basically stay at home until all the young people have immunity. So we ramp up the cases and then it flattens out eventually in the low-risk population. And at that point, there is enough herd immunity, right? All these people are now immune so that the old person here, even if they now go out again, they won't catch it because everyone's already had it. So they argue for this particular strategy, or at least they analyze this particular strategy. Now, I have to say at the beginning that the core assumption here is that this quarantine of the high-risk people, you can do basically in a perfect way. 
So the assumption here is that you are able to perfectly quarantine all the high-risk people and that the level of infection in the low-risk population has no influence on the level of infection in the high-risk population. And in my opinion, I simply don't believe that. I simply don't believe you can build this quarantine. I think even these old people, they need food sometimes, the nursing home needs staff. So even if they can reduce their contact to the outside world, they cannot fully be sheltered. And that means the more infections you have in the low-risk population, the more infections you will have in the high-risk population. So I think the fundamental core assumption of this model is quite flawed. That being said, let's analyze it. So we assume that all the high-risk people, none of them is going to get sick because they all stay at home. So the math in this paper is actually pretty basic. So we'll go through it in a bit more detail, so we'll understand the core argument. So they introduce the following quantities, M here. M is the low-risk population, right? This is the population size. Nu, the Greek letter, let's call it nu. Nu here is the probability. So that's the probability that if you are sick, you need to go to the ICU. Right? So sick means simply you have the virus. And ICU means that the symptoms are so bad that you need help from the medical system in order to overcome the disease. So if we multiply the population size by the probability that if you get sick, you need to go to the ICU, what do we get? We get a worst-case scenario. So basically the authors here, and I find this is the good part of this analysis, they really don't rely on kind of pandemic dynamics, epidemiology, exponential growth and so on. They simply consider the worst case. So M times nu here, if you multiply these two numbers, what does that mean? That is the number of severe cases. Severe meaning cases that need the ICU. If everybody gets sick, if all get sick at the same time, right? Same time. So this is the worst case. So let's say we all go out, the low-risk population, and we all sneeze in each other's faces as much as we can. And we just all get sick at the same time. Then this here is the number of people going to the ICU. Right? And then they introduce this quantity B here, B is the number of beds in the ICU. If the number of beds in the ICU is larger than the worst case severe cases, right? Then we are safe. So that's the argument. Basically it's not that we are safe. It is no one will die from lack of an ICU bed. Which is kind of the lever we have as a population. If you assume everyone's going to get sick anyway and so on. If the number of beds is larger than the worst case number of ICU patients, we are safe. That's at least how they define safe. Alright, so that's their premise. Now what are they going to do? They're going to find a quantity where they can bound this thing. So they are going to find a bound, an upper bound on the number of severe cases. And if this upper bound is lower than the number of beds, then they can say we're safe with this method. See, this is a worst case analysis under their assumptions. Alright, so I said they don't resort to any kind of epidemiological dynamics. They simply estimate this thing from current numbers. I'm going to introduce two more quantities here. P star and K. Now K is the current number of severe cases. So this is kind of an analog to this thing here. So these two are connected. 
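To make this worst-case check concrete in code, here is a minimal sketch; the numbers are purely hypothetical and not taken from the article:

```python
# Worst-case check from the argument above: if every low-risk person got
# sick at once, M * nu of them would need an ICU bed; safe means that fits in B.
M = 5_000_000    # hypothetical size of the low-risk population
nu = 0.0003      # hypothetical P(severe | sick) for the low-risk group
B = 2_000        # hypothetical number of ICU beds

worst_case_severe = M * nu       # 1500 severe cases in the worst case
print(worst_case_severe <= B)    # True here: 1500 <= 2000, so "safe"
```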
This is the current number of severe cases, and the quantity up here is the total possible, the worst-case number of severe cases in the future. Likewise, P star here is the percentage of people that are currently sick. And they claim, correctly, that this is unknown; we would only know it if we could test everybody who is sick, not severe, just sick. And up here there is no corresponding factor, because of course you can imagine another factor, let's call it P plus or something, which is the percentage of people who are sick in the worst case, and which in our worst-case scenario is one. So that's why they don't include it. So P star is the current percentage of sick people. Note that this is a percentage while K is an actual number; keep that in mind. All right, now we do some basic reformulation. If we take this P star and multiply it by the total size of the population, we get the number of people who are currently sick: the percentage of currently sick people times the total size of the population gives the number of people who are currently sick. If we take that as the denominator and put K, the number of people who are currently severe, in the numerator, then we get an estimate of this quantity nu. So remember what nu is: nu is the probability that if you are sick, you go to the ICU, and being in the ICU means you're severe. So dividing the current number of severe people by the current number of sick people gives you an estimate of: if you are sick, what's the probability that you're severe? Now they argue that this quantity is a constant. The probability that if you are sick from this virus you go to the ICU doesn't change over time. So we can estimate it with current numbers, right? Which is a pretty reasonable thing to assume, unless the virus mutates or something. So we know the total size of the population. We know the current number of severe cases. Well, you can make an argument about that: do we really know the current number of severe cases? Because there is exponential growth involved, this might be difficult to estimate. And they say the same thing. This is the only time they reference the dynamics of the situation: it grows at an exponential rate. So what we can do, they say, is take a worst-case upper bound, to be on the safe side, and perform a worst-case analysis. So instead of taking K, they add a confidence interval to it that is based on concentration inequalities. So they don't use K, they use this K tilde here, which has two additional summands. That is supposed to be an upper bound with confidence at least 1 minus delta, and delta you can set, for example, to 0.05. That gives you 95 percent confidence that this is an upper bound on K. Now, this comes from some concentration bound, and there are certain assumptions behind this upper bound which I don't know enough about to critique here. I'm going to assume they are reasonable. If they are not, then of course that is an additional point of criticism of this work. All right. So instead of using K here, we stay on the safe side and use this K tilde. So we know this as well. Now, the unknown quantity, of course, is this thing here, P star: the percentage of people that are currently sick. So the goal is now to find that. 
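Before moving on, here is a small sketch of the estimate we just set up. The exact form of the concentration bound behind K tilde is not spelled out in the video, so in this sketch it is simply taken as a given input.

```python
def estimate_nu(k: float, p_star: float, m: int) -> float:
    """P(severe | sick) ~ current severe cases / current sick people.

    p_star is a fraction in [0, 1], m is the population size.
    """
    return k / (p_star * m)

def upper_bound_nu(k_tilde: float, p_star: float, m: int) -> float:
    # k_tilde: an upper bound on the current severe cases that holds with
    # confidence at least 1 - delta (e.g. delta = 0.05 for 95% confidence).
    return k_tilde / (p_star * m)
```
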
So they say, OK, if we plug in this upper bound K tilde, then with this probability we can upper bound this quantity nu, which is exactly what we wanted, because we need to upper bound M times nu. That's what they say here. So since at the top we saw that M times nu equals the worst-case number of severe cases, and we want to upper bound that, we can rearrange this thing. If we plug these two together, we see that the M cancels out. We can upper bound M times nu by this quantity here: the upper bound on the current severe cases divided by the percentage of currently sick people, K tilde over P star. So again, they reformulate and plug in. This, of course, needs to be smaller than the number of beds. So they plug this in here and say: what we now have to check is whether P star is larger than this ratio of two quantities we know, K tilde divided by B. If it is, then we are safe. Now, again, our goal is going to be to find a lower bound on P star and show that it is larger than this quantity. And they do this via hypothesis testing. They call this quantity P tilde, and they do a classic statistical hypothesis test where they ask: is P star significantly larger than P tilde? If that's the case, then we're safe. If not, we're not. And how do they do that? They say, OK, we have the population. I did draw this at one point; let's go back there. We have the population here, right? And what we can do is just go out and test people uniformly, just randomly select people. Now, this is an old person, and old people stay at home. So we randomly select people to test and their test results come back. And this one's healthy, this one's healthy, this one's healthy, this one's not healthy. So we have four tests, and out of the four, one was positive. Can we work out a hypothesis test from that? Can we decide whether P star is probably much larger than P tilde or not? And the answer is yes, because this is a uniform sample. You can work out, using classic statistical tools, whether or not you can reject a null hypothesis. And they actually work this out and they do give a number here. And that's this: they say, test N people, where N is four and a half times the quantity B divided by K tilde, so the number of beds divided by the upper bound on the current severe cases. So we test four point five times that many people. Then if we find at least 10 positive cases, then with a probability of 95 percent, we know that the risk-based model is safe. And of course, the more infected people you find, the better, because the number of severe cases is whatever it currently is: if many more people turn out to be infected, the probability that a sick person becomes severe is lower. That's why it says at least. So again, you go out, you test N people according to this formula, plugging in the numbers for your current situation. If you find at least 10 positive people, then with a probability of at least 95 percent, you know that this model is safe. Cool. And this is done using classic statistical hypothesis testing literature. So I think that is a pretty cool result. But I do severely criticize the underlying assumption, which is that you can perfectly enforce this quarantine. 
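As an aside before continuing with that criticism: the testing rule from this section can be summarized in a short sketch. The constants 4.5 and 10 are the ones quoted in the video for the 95%-confidence setting; the function names are mine, not the paper's.

```python
import math

def num_people_to_test(beds: int, k_tilde: float) -> int:
    # N = 4.5 * B / K_tilde, as quoted in the video
    return math.ceil(4.5 * beds / k_tilde)

def risk_model_declared_safe(num_positive_found: int) -> bool:
    # Uniformly sample and test N people; finding at least 10 positives
    # means the risk-based model is declared safe with >= 95% confidence.
    return num_positive_found >= 10
```
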
Of course, if you can't, it means that there is a direct correlation between the number of sick people in your low-risk population and the number of sick people in your high-risk population, which means that more of the high-risk population are going to get infected as well. And that in turn means that your number B of ICU beds is going to drop severely, because the high-risk people have a higher hospitalization rate. That makes the entire model we developed less valid, because what used to be a constant in the model is now no longer a constant; it's shrinking, and the worse it gets, the more it shrinks. And so that may turn what you initially thought was a safe model into a very unsafe model very quickly. And that doesn't even include all the high-risk people that are additionally going to be in danger because you can't enforce the quarantine. All right. So this was my take on that. Take it for what it's worth. And I wish you a healthy pandemic. Bye bye.
Ru23eWAQ6_E
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances (SayCan - Paper Explained)
[ "Science & Technology" ]
[]
#saycan #robots #ai Large Language Models are excellent at generating plausible plans in response to real-world problems, but without interacting with the environment, they have no abilities to estimate which of these plans are feasible or appropriate. SayCan combines the semantic capabilities of language models with a bank of low-level skills, which are available to the agent as individual policies to execute. SayCan automatically finds the best policy to execute by considering a trade-off between the policy's ability to progress towards the goal, given by the language model, and the policy's probability of executing successfully, given by the respective value function. The result is a system that can generate and execute long-horizon action sequences in the real world to fulfil complex tasks. Sponsor: Zeta Alpha https://zeta-alpha.com Use code YANNIC for 20% off! OUTLINE: 0:00 - Introduction & Overview 3:20 - Sponsor: Zeta Alpha 5:00 - Using language models for action planning 8:00 - Combining LLMs with learned atomic skills 16:50 - The full SayCan system 20:30 - Experimental setup and data collection 21:25 - Some weaknesses & strengths of the system 27:00 - Experimental results Paper: https://arxiv.org/abs/2204.01691 Website: https://say-can.github.io/ Abstract: Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. 
The project's website and the video can be found at this https URL Authors: Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, check out this video. So there's a Coke can and there's a spill, a Coke spill. So the instructor here says: I spilled my Coke on the table, how would you throw it away and bring me something to help clean? So the robot here forms a plan as it goes about it. First, it says: I would find a Coke can. Then second: I would pick up the Coke can. You can see it has done it. Third: I would go to the trash can. Fourth: I would put down the Coke can. Note that it puts down the Coke can next to the trash can, not in the trash can, because the robot is environmentally friendly and wants to preserve the can for the recycling bin for cans. And, you know, it doesn't belong in the trash. Good little robot. So next it says: I will find the sponge, I will pick up the sponge. And then, will it clean up the spill? No, it will not. It will actually give the sponge to the human to clean up the spill, because that's how the future is going to be. People always think the robots will take our dirty jobs, tasks like cleaning and doing things. No, no, no. They'll abuse us humans to do that. They'll just throw down the sponge and be like: here, human, clean up your own mess. Well, if that's a future that you look forward to too, then join me in today's paper. We're going to look at Do As I Can, Not As I Say: Grounding Language in Robotic Affordances, by researchers at Robotics at Google and Everyday Robots. So as you saw in this video, what happened here is that from a simple instruction that the instructor gave, essentially this "I spilled a Coke, please help me find something to clean it up and throw it away", the robot formed a plan, the plan you can see at the very end here. You can see it developing at the bottom; at the very end, you can see the full plan. I'm not sure if I can make this bottom thing go away. In essence, it makes a plan: it always plans the next step, or at least it determines what the next step should be, and then it actually also does it. So this is a good example of a grounded language model, or also an example of embodied intelligence. This work connects large language models and the knowledge that is inherent to large language models with the skills of robots that act in the real world, which is really cool. And usually these two things are quite disjoint, but this could be really powerful. So we're going to look at this paper. I also have already recorded an interview with the authors; for time reasons, we did it the other way around this time. So I don't want to take away too much in the paper review right here. I'll tell you what the method is about and how it works, and I'll leave the rest to the authors, who are extremely competent, and I learned a lot in the interview. I hope you will too. In any case, the interview will be out tomorrow, if you're watching this the day it comes out, which obviously you do. How do you find new papers? Frankly, machine learning has become unbearable. There are thousands of new papers each month. And to keep the overview, we need good tools. Today's sponsor is Zeta Alpha, which is a search and recommendation engine for papers. This is really powerful. For example, here I've searched for today's paper, SayCan. You can immediately see that not only do I get the paper itself, but I also get an aggregation of all the social media mentions of this paper. 
And it doesn't stop there: with one click, I can find related papers. These are not only papers that are cited, but semantically similar papers. This is powered by neural search, which is really cool. I can further add this paper to my tags, and what that will do is build categories of papers and then serve me recommendations that semantically fit those categories. This is really powerful. Essentially, this is a news feed of papers that is personalized to you specifically. Just recently, Zeta Alpha has released their own PDF reader. This is really strong right out of the gate. Not only does it let you read the paper, but it also shows the important information about a paper and lets you take notes. Now, what I find by far the coolest thing is that you can actually use one of these notes to search for other papers. So whenever you find something within a paper that seems interesting, you can use that particular piece of text and go search for other papers that might deal with the same topic. Sign up now to Zeta Alpha. There is a free tier, and the pro tier is actually free for students and for academics. But in case you are not one of these, the promo code YANNIC will get you 20% off the pro subscription. The authors here state that if you try to ask a language model to clean a spill, as they just did in the video, it might result in a reasonable narrative. As we've all come to know, large language models like GPT-3 give very convincing outputs. So when you ask them how you would clean up a spill, they'll give you a reasonable plan to clean up a spill. But, the authors say, that plan may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. They have a bunch of examples right here. So "I spilled my drink, how can you help?" is up here. GPT-3 would say something like: you could try using a vacuum cleaner. Well, GPT-3 has no idea of, A, whether there is a vacuum cleaner in this environment, or, B, whether the robot or whatever agent is capable of executing that action, of handling a vacuum cleaner, because it's not the easiest thing to use: you have to go get it, plug it in and so on, there are moving parts. Similarly, models like LaMDA and FLAN, of course they're made for different things, but still, they will pay no attention to what is actually possible in the environment. Now, you can get around this a little bit by prompt engineering, telling the model what's possible in the current world, but it will only get you so far. So we need something else, something better than this. And that's where this system comes in. So they say what they want to do is provide real-world grounding by means of pre-trained skills. And this leads to a situation where you only consider actions that are both feasible and contextually appropriate. So these two things need to be brought together. The language model supplies the high-level semantic knowledge about the task, and the robot itself, or the policy in the robot, provides the feasibility of the tasks to be executed. So the two things are brought together: contextually appropriate from the language model side, and feasibility from the robot side. So how are they going to do this? They're going to combine, as I said, large language models with value functions, and then they execute a policy. 
There's a bit more explanation right here, but I think I've said many things already. We'll get to the meat right here. They say: let's say we have a robot. The robot might be, or in this case is, equipped with a repertoire of learned skills for basic atomic behaviors. These skills are capable of low-level perception and control. So one of these atomic behaviors is, for example, if you remember from the video, picking something up, picking up the Coke can. That's an atomic behavior. You can teach the robot to pick up a Coke can in isolation. It can do it many times. You can train it with imitation learning or reinforcement learning, or you can even hard-code that particular policy. It doesn't matter. What matters is that you can train it in isolation. It is an atomic action, and these atomic actions can then be chained together to form a sequence of actions and execute a plan. So the atomic actions are going to be supplied by the robot, and the sequencing of the atomic actions is going to be determined by the language model. They say: if we can simply make the large language model aware of the available and feasible repertoire of skills, this can provide it with an awareness of both the agent's capabilities and the current state of the environment. So if they have a large language model, many people use large language models to sample, which means they would input something like "I spilled a drink", dot dot dot, and then they would let the language model generate stuff right here, and then they would try to interpret this stuff. We've seen this in other papers, and there are situations where it can work, especially if you put a reasonable prompt in front of it. But the approaches have largely been to just let the model generate some stuff and then try to map that stuff, whatever comes out, into the action space of the robot. And that is not always possible. Instead, what this paper does is say: well, we can also use the language model not to generate, but simply to compute the likelihood of certain inputs. So: "I spilled a drink", and then let's say I just have five actions at my disposal; all the robot can do is these five actions. So it says: "I spilled a drink, I will..." and then the continuations are "clean up", "go away", "eat pizza", and so on, right? So there are these different actions that the robot has available, and these correspond directly to the atomic actions. So cleaning up something would be an atomic action that you could train in isolation. Going away would be an atomic action; you can hard-code it, or you can pathfind your way out the door. Eat pizza: maybe these are even too high-level the way I describe them right now, but just imagine these are low-level actions. And all we have to do with the language model is simply compute the likelihood of each. So what's the likelihood of the sentence "I spilled a drink, I will clean up"? And then I compare that to the likelihood of the sentence "I spilled a drink, I will go away". And then I compare that to the likelihood of the sentence "I spilled a drink, I will eat pizza". So for every continuation in my repertoire, I get a likelihood number, and that represents how contextually appropriate that particular skill is in this case. So: how useful does the language model think this skill would be right here? Now, there's obviously an issue in how you formulate these things right here. 
Depending on how you formulate them, they might become more or less likely. However, I think the authors work around this simply by the fact that the skills they have are so well separated from each other that there is not really too much of an issue with that. But that's kind of what my concern was when I read this. In essence, it's a good idea, I think. So you simply, wow, this all became orange, for every single continuation, you get a number, which is the likelihood of that thing. That's what they say right here: instead of using the large language model to interpret an instruction, we can use it to score the likelihood that an individual skill makes progress towards completing the high-level instruction. Furthermore, and that's where the second part comes in: if each skill has an accompanying affordance function that quantifies how likely it is to succeed from the current state, such as a learned value function, its value can be used to weigh the skill's likelihood. It's best if we go down here to the diagrams of how this works, so you can see how this fits together. This part here is the part we just described. Let's say I'm in a situation; this is the prompt that I put in: how would you put an apple on the table? You prompt the language model with this thing right here, which has a prompt-engineering part. You can see there are a bunch of examples of an instruction and then a sequence of steps; again an instruction, then a sequence of steps. Here it comes again: instruction, and then here you'd get a sequence of steps. However, instead of generating, you simply score the likelihood of each of the skills that you have available: find an apple, find a Coke, find a sponge, pick up an apple, pick up a Coke, yada, yada, yada, until go to the counter. Each one of these skills gets a likelihood number assigned to it. That's part one. Part two is where you train the robot for these basic skills, these atomic skills. Here you can see one of these training stations, where you can simply teach the robot to do things such as picking up an apple or picking up a Red Bull can. Now, what you have to do is not only teach the robot a skill, but also train a value function for it. If you do something like A2C reinforcement learning, you get the value function directly out of that algorithm. If not, you have to somehow come up with a value function that makes sense. In any case, what you want is a policy and a value function. The value function is important because it tells you, from a given input, whether the skill is currently feasible. By the way, the low-level policy has the picture here as input; the language model obviously doesn't. Now, I believe with Flamingo by DeepMind, which just came out today, that might actually change. But the low-level policy has the image available. So the value function, given this picture right here, can tell you pretty quickly: my skill that's called "pick up the Red Bull can", I can execute that policy and I can probably make it happen. That's why the value is relatively large here. Also for the "pick up the apple" action, the value function tells you: given this picture right here, I can probably make that happen. However, for "pick up the water bottle", "pick up the bag of chips" and so on, there is no water bottle. So the value function very accurately says: no, I cannot make that happen if I execute that policy. 
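To make the first scoring part concrete, here is a rough sketch, not the authors' actual code. It assumes a hypothetical helper lm_log_likelihood(prompt, continuation) that sums the token log-probabilities an autoregressive language model assigns to the continuation; the skill list is illustrative.

```python
SKILLS = [
    "find an apple", "find a coke", "find a sponge",
    "pick up an apple", "pick up a coke", "go to the counter",
]

def score_skills_with_lm(prompt: str, lm_log_likelihood) -> dict:
    # Score every skill in the fixed repertoire as a continuation of the
    # prompt, instead of letting the model generate free-form text.
    return {s: lm_log_likelihood(prompt, f"I will {s}") for s in SKILLS}
```
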
So the value function inherently gives you a score of: given the current observation, how likely am I to succeed at a particular skill? Which is exactly what we want, because that's the second part of our puzzle. So on the right here, you see another example where none of these picking-up skills have any value, because there are no objects. But in this case, maybe other actions would score very highly in the value function, for example: go and find a sponge. I can always go and find something, right? And if I know there's a sponge somewhere, I'll probably succeed. So in any case, this value function we can now combine with the language model: we got a number for each action from the language model, how likely that action is to progress towards the goal, and we got a number for each action from the value function, which is how likely this action is to succeed given the current observation. And all we do now is essentially multiply the two things together. If they are log-likelihoods, we obviously want to add them. But in any case, we combine the two numbers, and then we choose the skill that is the best trade-off between what makes progress towards the goal and what is feasible currently. Here is an example. The input is: how would you put an apple on the table? So we query the language model with this prompt here, plus the prompt engineering we've seen before. This is not displayed here, but it is the case. And the top actions that the language model gives are: pick up an apple, you see, that's the highest action that we have; place the apple; and only in third instance, find an apple. However, the language model has no clue about the current state, right? And that's where the value function comes in. So this is the current observation. We ask the value function which skills are doable in the current environment, in the current observation. So the value function says: well, finding an apple, finding a Coke, finding a sponge, these are pretty high, I could do these. I could also go to the table. I could also go to the counter, right? These are fairly doable. However, I cannot place an apple or place a Coke, because I don't have one in my gripper. I can also not pick up an apple or pick up a Coke, because I don't see them anywhere in the picture right here. So even though "pick up the apple" was scored highest by the language model, it is now severely down-ranked, because the value function for this policy isn't very confident that it will succeed if you execute it right now. And therefore, the action that is chosen is the best trade-off, which is: find an apple. Then, and this is represented here, after this is done, the policy is executed. So the find-an-apple policy is executed, the "find an apple" action is added to the prompt, and then the whole process repeats. But instead of asking for the first step, this whole thing, including the instruction, is now the prompt, and we simply ask the language model for the second step; the input to the value function is now the current, updated picture. So here you see it succeeded in finding an apple, and now hopefully the second step, if we go through the same process again, is going to be the pick-up-an-apple action. Because, well, that might already be scored high by the language model, but now also the value function, given that there's an apple in the picture, should say: yes, I can probably succeed at that. So that's the whole process here. 
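Here is a sketch of that combination and the outer loop, assuming the hypothetical helpers above plus a per-skill value function value_fn(skill, observation) that returns an estimated success probability. The names, the env interface, and the "done" pseudo-skill are illustrative assumptions, not the authors' actual API.

```python
import math

def choose_skill(prompt, observation, skills, lm_log_likelihood, value_fn):
    # Multiply LM probability and value, i.e. add their logs, and pick the best
    # trade-off between semantic usefulness and current feasibility.
    def score(skill):
        return (lm_log_likelihood(prompt, f"I will {skill}")
                + math.log(max(value_fn(skill, observation), 1e-9)))
    return max(skills, key=score)

def saycan(instruction, env, skills, lm_log_likelihood, value_fn, max_steps=20):
    prompt = instruction
    for _ in range(max_steps):
        obs = env.observe()                     # current camera image
        skill = choose_skill(prompt, obs, skills, lm_log_likelihood, value_fn)
        env.execute(skill)                      # run the low-level policy
        prompt += f" I will {skill}."           # append the step, then repeat
        if skill == "done":                     # a terminating pseudo-skill
            break
```
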
This is repeated until the end. This is the SayCan method. What is really impressive is just the amount of effort and work that went into designing these systems, training these systems, evaluating these systems. They have different areas here: on the left, this is like a kitchen; on the right is a different environment. They have these training stations. They collect so much data from human operators and so on. If you saw that there are a lot of authors, this is because this was, or seems like, a quite big project. But, yeah, it's definitely worth it. It's cool to have something in the real world. There are definitely a bunch of criticisms I have right here, which I also brought up to the authors, and I thought they responded quite admirably and quite well. The one criticism I already raised is that it obviously depends on how you phrase things. So what you have is this bank of skills on the right-hand side here. Now, in order for the language model to score them, they need to actually be formulated as a piece of language. And all of a sudden it depends on how you formulate that. For example, we know that longer queries always have a somewhat lower likelihood because they have more tokens. Also, how you phrase things matters, and so on. So it is quite tricky. And I believe if you go to more actions, maybe the robot has two actions that are very close together in terms of semantics or in terms of wording, the model might get confused more easily. Second of all, currently there is no consideration as to whether an action succeeds or not. You simply assume that once you execute a low-level policy, the robot is going to succeed at it. That is why, if it does not succeed, and a lot of these things are still pretty hard, there is very little recovery. Let's say you find an apple, you try to pick up the apple, but you do not manage to do it. The "pick up an apple" instruction will already be in your prompt. Now the value function might still say: well, I could pick up the apple again, because it again sees an apple, since you failed to pick it up. But the likelihood that the language model is going to say "pick up an apple" again, right after it just did, is quite a bit lower. Now, incidentally, as we know language models, if you keep repeating the sentence "pick up an apple", at some point it actually becomes pretty likely under the language model. But hopefully we won't get there. So there are quite a number of weaknesses yet in this setup. The other weakness is just the limitations of hardware. These robots, well, this video was at 10x speed, and still it's quite slow. As you can see, it can't do many things; for example, it cannot wipe up the spill with the sponge itself, and it needs to navigate around slowly, because it, like, grabs things very carefully. But still, these are, I think, limitations that can be overcome. In any case, there are also a lot of good things right here, and I want to highlight that. Because what I really like about this is that these two things are disjoint: the language model side on the left-hand side, and these value functions, this policy bank, these atomic actions, they are disjoint. The language model is not trained as part of the system; it is a frozen language model. It can be trained completely in isolation from the system. 
All you have to do with the language model is get it to score the likelihoods of some actions. Likewise, the bank on the right here, or in fact not the bank itself, but each individual skill, each individual entry, is trained completely isolated from all the others. All you need to add a new skill right here is a policy that can execute that skill at any given moment, and a value function that estimates, given some state input, how likely the policy is to succeed if it were executed at this particular moment. That's all you need. You can add this to your bank of actions, and you don't have to retrain anything in this system; it is directly useful. So you could think of shipping out these robots, essentially, and then upgrading the language model so they are better at planning stuff. Or you could just ship new skills, right? It's like, well, our coders have developed some new skill for the robot, so you just put it in. You don't need to update the full system. This is not an end-to-end system. And usually in deep learning, we're quite end-to-end happy, but in this case, I think this is a really good example of modularity being the key. I also think this goes so much beyond just robots and grounding in the real world: to have a model like the one on the left, that has semantic knowledge, high-level knowledge, sequential knowledge essentially, and to provide it with a set of modular, external pieces that it can use, I think that idea is powerful way beyond just the robotics use case. But obviously, the robotics use case is quite a cool one, so I don't want to discourage that. In the interview, we go into all of this, and we go into the experimental results as well. The experimental results are not perfect; however, they are quite impressive in that the robots are able to plan across many, many time steps and to chain these actions. You can see on the right here, that's maybe two pixels, but these are like 17 of these atomic actions that are done in sequence. And, you know, that's quite impressive; these episodes are very, very long. And if you think you can get to that in the real world with some sort of pure reinforcement learning approach, then good luck. The success rates are around a 70%-ish plan success rate and a 61% execution success rate, where the plan success rate, I believe, measures whether the plan itself makes sense, and the execution success rate measures whether the policies also all execute correctly. And you can see this is very different for the different test sets, but all in all, it's very impressive. Here are a bunch more examples of these low-level atomic skills being executed, with the value functions being evaluated and the language model likelihoods shown in blue as well. So I don't want to make this artificially too long. As I said, the interview is coming up. I hope you like explanations like these, even if they are a bit shorter. And I'll see you around. Check out the paper, subscribe, stay hydrated. Bye bye.
[ { "start": 0, "end": 7.2, "text": " Hi there, check out this video. So there's a Coke can and there's a spill, a Coke spill." }, { "start": 7.2, "end": 13.36, "text": " So the instructor here says, I spilled my Coke on the table. How would you throw it away and" }, { "start": 13.36, "end": 18.56, "text": " bring me something to help clean? So the robot here forms a plan as it goes about it. First," }, { "start": 18.56, "end": 25.28, "text": " it says I would find a Coke can. Then second, I would pick up the Coke can. You can see it has" }, { "start": 25.28, "end": 33.04, "text": " done it. Third, I would go to the trash can. Fourth, I would put down the Coke can. Note that" }, { "start": 33.04, "end": 37.92, "text": " he puts down the Coke can next to the trash can, not in the trash can, because the robot is" }, { "start": 37.92, "end": 44.24, "text": " environmentally friendly and wants to preserve the can for the recycling bin for cans. And," }, { "start": 44.8, "end": 49.68000000000001, "text": " you know, it doesn't belong in the trash. Good little robot. So next it says I will find the" }, { "start": 49.68, "end": 56.08, "text": " sponge. I will pick up the sponge and then will it clean the Coke? No, it will not clean up the" }, { "start": 56.08, "end": 61.12, "text": " spill. It will actually give the sponge to the human to clean up the spill, because that's how" }, { "start": 61.12, "end": 66.16, "text": " the future is going to be. The robots, they're not going to take our, you know, people always" }, { "start": 66.16, "end": 72.08, "text": " think the robots will take our dirty jobs. They'll take all the like these tasks like cleaning and" }, { "start": 72.08, "end": 77.92, "text": " doing things. No, no, no, no, no, no. They'll abuse us, the humans to do that. They'll just throw down" }, { "start": 77.92, "end": 82.56, "text": " our stuff. They'll throw down the sponge and be like, here human, clean up your own mess." }, { "start": 83.44, "end": 89.04, "text": " Well, if that's a future that you look forward to, too, then join me in today's paper. We're going" }, { "start": 89.04, "end": 96.08, "text": " to look at do as I can, not as I say, grounding language in robotic affordances by researchers" }, { "start": 96.08, "end": 102.24000000000001, "text": " at robotics at Google and everyday robots. So as you saw in this video, what happened here is that" }, { "start": 102.24, "end": 109.03999999999999, "text": " from a simple instruction that the instructor gave this essentially this I spilled a Coke can," }, { "start": 109.03999999999999, "end": 115.19999999999999, "text": " you know, please help me find something to clean and throw it away. The robot formed a plan," }, { "start": 115.19999999999999, "end": 121.91999999999999, "text": " the plan you can see at the very end here. You can see it developing in the bottom. At the very end," }, { "start": 121.91999999999999, "end": 127.6, "text": " you can see the full plan. I'm not sure if I can make this bottom thing go away. In essence," }, { "start": 127.6, "end": 134.24, "text": " it makes a plan like it always plans the next step or at least it determines what the next step" }, { "start": 134.24, "end": 141.92, "text": " should be. And then it actually also does it. So this is a good example of grounded, a grounded" }, { "start": 141.92, "end": 148.79999999999998, "text": " language model, or also an example of embodied intelligence. 
This work connects large language" }, { "start": 148.79999999999998, "end": 155.35999999999999, "text": " models and the knowledge that are that is inherent to large language models with the skills of robots" }, { "start": 155.36, "end": 161.28, "text": " that act in the real world, which really cool. And usually these two things are quite disjoint," }, { "start": 161.28, "end": 168.08, "text": " but this could be really powerful. So we're going to look at this paper. I also have already recorded" }, { "start": 168.08, "end": 174.56, "text": " an interview with the authors this for time reasons. We did it the other way around this time. So I" }, { "start": 174.56, "end": 179.84, "text": " don't want to take away too much on the paper review right here. I'll tell you what the method" }, { "start": 179.84, "end": 185.52, "text": " is about how it works. And I'll leave the rest to the authors who are extremely competent and I" }, { "start": 185.52, "end": 190.64000000000001, "text": " learned I learned like I learned a lot in the interview. I hope you will too. In any case," }, { "start": 190.64000000000001, "end": 196.16, "text": " the interview will be out tomorrow. If you're watching this the day it comes out, which obviously" }, { "start": 196.16, "end": 202.56, "text": " you do. How do you find new papers? Frankly, machine learning has become unbearable. There" }, { "start": 202.56, "end": 207.76, "text": " are thousands of new papers each month. And to keep the overview, we need good tools. Today's" }, { "start": 207.76, "end": 214.23999999999998, "text": " sponsor is Zeta Alpha, which is a search and recommendation engines for papers. This is really" }, { "start": 214.23999999999998, "end": 219.76, "text": " powerful. For example, here I've searched for today's paper, say can you can immediately see" }, { "start": 219.76, "end": 225.68, "text": " that not only do I get the paper itself, but I also get an aggregation of all the social media" }, { "start": 225.68, "end": 231.28, "text": " mentions of this paper. That doesn't stop there with one click, I can find related papers. These" }, { "start": 231.28, "end": 236.79999999999998, "text": " are not only papers that are cited, but these are semantically similar papers. This is powered by" }, { "start": 236.8, "end": 242.56, "text": " neural search, which is really cool. I can further now add this paper to my tags. And what that will" }, { "start": 242.56, "end": 248.32000000000002, "text": " do is it will build categories of papers and then serve me recommendations that semantically" }, { "start": 248.32000000000002, "end": 253.92000000000002, "text": " fit those categories. This is really powerful. Essentially, this is a newsfeed of papers that" }, { "start": 253.92000000000002, "end": 260.24, "text": " is personalized to you specifically. Just recently Zeta Alpha has released their own PDF reader." }, { "start": 260.24, "end": 265.04, "text": " This is really strong right out of the gate. Not only does it let you read the paper, you know," }, { "start": 265.04, "end": 270, "text": " but also it shows the important information about a paper and it lets you take notes. Now what I" }, { "start": 270, "end": 276.08000000000004, "text": " find by far the coolest thing is that you can actually use one of these notes to search for" }, { "start": 276.08000000000004, "end": 281.92, "text": " other papers. 
So whenever you find something within a paper that seems interesting, you can use that" }, { "start": 281.92, "end": 287.44, "text": " particular piece of text and go search for other papers that might deal with the same topic. Sign" }, { "start": 287.44, "end": 292.16, "text": " up now to Zeta Alpha, there is a free tier and the pro tier is actually free for students for" }, { "start": 292.16, "end": 298.40000000000003, "text": " academics. But in case you are not one of these, the promo code Yannick will get you 20% off the" }, { "start": 298.40000000000003, "end": 309.52000000000004, "text": " pro subscription. The authors here state that if you try to ask a language model to clean a spill," }, { "start": 309.52000000000004, "end": 316.24, "text": " as they just did in the video. So if you ask a language model to clean a spill, it might result" }, { "start": 316.24, "end": 321.20000000000005, "text": " in a reasonable narrative as we've all come to know the large language models like GPT-3 or so" }, { "start": 321.2, "end": 327.2, "text": " they give very convincing outputs. So when you ask them, how would you clean up a spill," }, { "start": 327.2, "end": 333.92, "text": " they'll give you a reasonable plan to clean up a skill. But the authors say may not be applicable" }, { "start": 333.92, "end": 340.15999999999997, "text": " to a particular agent such as a robot that needs to perform this task in a particular environment." }, { "start": 340.15999999999997, "end": 345.84, "text": " They have a bunch of examples right here. So I spilled my drink, how can you help is up here." }, { "start": 345.84, "end": 352, "text": " GPT-3 would say something like you could try using a vacuum cleaner. Well, GPT-3 has no idea of" }, { "start": 352, "end": 358.47999999999996, "text": " A, whether there is a vacuum cleaner in this environment, or B, whether the robot or whatever" }, { "start": 358.47999999999996, "end": 365.12, "text": " agent is capable of executing that action. So is capable of handling a vacuum cleaner, because" }, { "start": 365.12, "end": 372.15999999999997, "text": " it's not the easiest thing to use. You have to go get it, plug it in and so on, there's moving parts." }, { "start": 372.16, "end": 376.96000000000004, "text": " Similarly, models like Lambda and Flan, of course, they're made for different things," }, { "start": 376.96000000000004, "end": 382.72, "text": " but still, they will pay no attention to what is actually possible in the environment. Now you can" }, { "start": 382.72, "end": 389.6, "text": " get around this a little bit by prompting, by prompt engineering, telling the model what's" }, { "start": 389.6, "end": 394.8, "text": " possible in the current world, but it will only get you so far. So we need something else," }, { "start": 394.8, "end": 398.8, "text": " we need something better than this. And that's where this system comes in." }, { "start": 398.8, "end": 407.2, "text": " So they say what they want to do, they want to provide a real world grounding by means of" }, { "start": 407.2, "end": 413.44, "text": " pre-trained skills. And this leads to a situation where you only consider actions that are both" }, { "start": 413.44, "end": 420.48, "text": " feasible and contextually appropriate. So these two things need to be brought together. 
The language" }, { "start": 420.48, "end": 428.72, "text": " model supplies the high level semantic knowledge about the task, and the language model provides" }, { "start": 428.72, "end": 437.44000000000005, "text": " and the robot itself or the policy in the robot provides the feasibility of the tasks to be" }, { "start": 437.44000000000005, "end": 447.36, "text": " executed. So the two things are brought together, contextually appropriate from the language model" }, { "start": 447.36, "end": 455.6, "text": " side and feasibility from the robot side. So how are they going to do this? They're going to combine," }, { "start": 455.6, "end": 462.56, "text": " as I said, large language models with policy or value functions, let's say value functions," }, { "start": 462.56, "end": 469.6, "text": " and then they execute a policy. There's a bit more explanation right here, but I think I've said" }, { "start": 469.6, "end": 478.96000000000004, "text": " many things already. We'll get to the meat right here. They say, let's say we have a robot. The" }, { "start": 478.96, "end": 486.15999999999997, "text": " robot might be, or in this case is, equipped with a repertoire of learned skills for basic atomic" }, { "start": 486.15999999999997, "end": 493.68, "text": " behaviors. These skills are capable of low level perception and control. So one of these atomic" }, { "start": 493.68, "end": 503.12, "text": " behaviors is, for example, if you remember from the video, pick up something, pick up the Coke can." }, { "start": 503.12, "end": 509.28000000000003, "text": " That's an atomic behavior. You can teach the robot to pick up a Coke can in isolation. It can do it" }, { "start": 509.28000000000003, "end": 514.88, "text": " many times. You can train some imitation learning or reinforcement learning, or you can even hard" }, { "start": 514.88, "end": 522.8, "text": " code that particular policy. It doesn't matter. What matters is that you can train it in isolation." }, { "start": 522.8, "end": 531.76, "text": " It is an atomic action, and these atomic actions can then be chained together to form a sequence of" }, { "start": 531.76, "end": 540, "text": " actions and execute a plan. So the atomic actions are going to be supplied by the robot, and then" }, { "start": 540, "end": 545.76, "text": " the sequencing of the atomic actions is going to be determined by the language model. They say," }, { "start": 545.76, "end": 551.92, "text": " if we can simply make the large language model aware of the available and feasible repertoire of" }, { "start": 551.92, "end": 557.28, "text": " skills, this can provide it with an awareness of both the agent's capabilities and the current" }, { "start": 557.28, "end": 566.64, "text": " state of the environment. So if they have a large language model, many people use large language" }, { "start": 566.64, "end": 572.16, "text": " models to sample, which means that they input, they would input something like, you know, I," }, { "start": 572.16, "end": 581.76, "text": " I is capitalized, I spilled a drink, dot dot dot, and then they would let the language model" }, { "start": 581.76, "end": 586.72, "text": " generate stuff right here, and then they would try to interpret this stuff. We've seen this in" }, { "start": 586.72, "end": 592.32, "text": " other paper, and there are situations where it can work, especially if you put like some reasonable" }, { "start": 592.32, "end": 599.6800000000001, "text": " prompt in front of it. 
But the approaches have been largely to just let the model generate some" }, { "start": 599.6800000000001, "end": 608.08, "text": " stuff and then try to map that stuff, whatever comes here, into the action space of the robot." }, { "start": 608.08, "end": 615.2, "text": " But that is not always possible. Instead, what this paper does is it says, well, we can also use the" }, { "start": 615.2, "end": 621.6, "text": " language model not to generate, but simply to compute the likelihood of certain inputs. So I" }, { "start": 621.6, "end": 629.9200000000001, "text": " spilled a drink, and then let's say I just have five actions at my disposal. All the robot can do" }, { "start": 629.9200000000001, "end": 642.6400000000001, "text": " is these five actions. So I would, or let's, let's say it says, I spilled a drink, I will, and then," }, { "start": 642.64, "end": 655.1999999999999, "text": " clean up, I will go away, I will eat pizza, I will, and so on, right? So there are these different" }, { "start": 655.1999999999999, "end": 663.04, "text": " actions that the robot has available to do, and these correspond obviously directly to these atomic" }, { "start": 663.04, "end": 669.84, "text": " actions. So cleaning up something would be an atomic action that you could train in isolation." }, { "start": 669.84, "end": 676.8000000000001, "text": " Going away would be an atomic action. You can hard code or you can path find your way out the door." }, { "start": 676.8000000000001, "end": 681.6800000000001, "text": " Eat pizza. Maybe these are even too high level that the way that I describe right now, but just" }, { "start": 681.6800000000001, "end": 688.08, "text": " imagine these are low level actions. And all we have to do with the language model is we simply" }, { "start": 688.08, "end": 695.12, "text": " have to compute the likelihood of each. So what's the likelihood of the sentence, I spilled a drink," }, { "start": 695.12, "end": 701.12, "text": " I will clean up, right? And then I compare that to the likelihood of the sentence, I spilled a drink," }, { "start": 701.12, "end": 706.64, "text": " I will go away. And then I compare that to the likelihood of the sentence, I spilled a drink," }, { "start": 706.64, "end": 714.48, "text": " I will eat pizza. So for every continuation here in my repertoire, I will get a likelihood number." }, { "start": 714.48, "end": 722.5600000000001, "text": " And that represents how contextually appropriate is that particular skill in this case. So" }, { "start": 722.56, "end": 730.16, "text": " how much does the language model think this skill would be useful right here? Now there's obviously" }, { "start": 730.16, "end": 735.5999999999999, "text": " an issue in how you formulate these things right here. Depending on how you formulate them, they" }, { "start": 735.5999999999999, "end": 742.0799999999999, "text": " might become more or less likely. However, I think the authors here work around this simply by the" }, { "start": 742.0799999999999, "end": 748.4799999999999, "text": " fact that these skills that they have, they are so separated from each other, there is not really" }, { "start": 748.48, "end": 755.52, "text": " too much of an issue with that. But that's kind of what my concern was when I read this. But in" }, { "start": 755.52, "end": 763.6800000000001, "text": " essence, it's a good idea, I think. 
So you simply for every single, wow, this all became orange," }, { "start": 763.6800000000001, "end": 768.48, "text": " for every single continuation, you get a number, which is the likelihood of that thing." }, { "start": 770.4, "end": 775.52, "text": " That's what they say right here. No, instead of using the large language model to integrate" }, { "start": 775.52, "end": 781.04, "text": " an instruction, we can use it to score the likelihood that an individual skill makes" }, { "start": 781.04, "end": 786.56, "text": " progress towards completing the high level instruction. Furthermore, and that's where" }, { "start": 786.56, "end": 792.4, "text": " the second part comes in. If each skill has an accompanying affordance function that quantifies" }, { "start": 792.4, "end": 797.92, "text": " how likely it is to succeed from the current state, such as a learned value function, its value can" }, { "start": 797.92, "end": 804.0799999999999, "text": " be used to weigh the skill's likelihood. It's best if we go down here and say that the skill" }, { "start": 804.08, "end": 809.6, "text": " is the best. It's best if we go down here to the diagrams of how this works so you can see how this" }, { "start": 809.6, "end": 816.48, "text": " fits together. This part here is the part we just described. Let's say I'm in a situation," }, { "start": 817.36, "end": 825.12, "text": " this is the prompt that I put in. How would you put an apple on the table? You prompt, well," }, { "start": 825.12, "end": 830.88, "text": " you prompt the language model with this thing right here, which has a prompt engineering part." }, { "start": 830.88, "end": 837.68, "text": " You can see there are a bunch of examples of instruction and then a sequence of steps." }, { "start": 837.68, "end": 843.76, "text": " Again, instruction, a sequence of steps. Here it comes again, instruction, and then here you'd get" }, { "start": 843.76, "end": 850.24, "text": " a sequence of steps. However, instead of generating, you'd simply score the likelihood of each of the" }, { "start": 850.24, "end": 854.24, "text": " skills that you have available. Find an apple, find a Coke, find a sponge, pick up an apple," }, { "start": 854.24, "end": 860.16, "text": " pick up a Coke, yada, yada, yada until go to the counter. Each one of these skills gets a likelihood" }, { "start": 860.16, "end": 869.76, "text": " number assigned to it. That's part one. Part two is where you train the robot for these basic skills," }, { "start": 869.76, "end": 875.04, "text": " these atomic skills. Here you can see one of these training stations where you can simply teach the" }, { "start": 875.04, "end": 882.64, "text": " robot to do things such as picking up an apple or picking up a Red Bull can. Now, what you have to" }, { "start": 882.64, "end": 888.8, "text": " do is not only teach the robot a skill, but also train a value function for it. If you do something" }, { "start": 888.8, "end": 895.3599999999999, "text": " like A2C reinforcement learning, you get the value function directly out of that algorithm." }, { "start": 895.3599999999999, "end": 901.3599999999999, "text": " If not, you have to somehow come up with a value function that makes sense. In any case," }, { "start": 901.3599999999999, "end": 907.52, "text": " what you want to do is train a policy and a value function. 
The value function is important" }, { "start": 907.52, "end": 913.92, "text": " because it tells you from a given input, by the way, the low level policy has the picture here" }, { "start": 913.92, "end": 920.7199999999999, "text": " and input. Well, obviously the language model doesn't. Now, I believe with Flamingo by DeepMind," }, { "start": 920.7199999999999, "end": 928, "text": " that just came out today, that might actually change. But the low level policy has the image" }, { "start": 928, "end": 933.76, "text": " available. So the value function, given this picture right here, can tell you pretty quickly." }, { "start": 935.92, "end": 943.68, "text": " My skill that's called pick up the Red Bull can, I can execute that policy and I can probably" }, { "start": 943.68, "end": 951.12, "text": " make it happen. That's why the value is relatively large here. Also for the pick up the apple action," }, { "start": 951.12, "end": 956.4, "text": " the value function tells you, you know, given this picture right here, I can probably make that" }, { "start": 956.4, "end": 960.9599999999999, "text": " happen. However, when it's pick up the water bottle, pick up the bag of chips and so on," }, { "start": 960.9599999999999, "end": 966.9599999999999, "text": " there is no water bottle. So the value function very accurately says, no, I cannot make that happen" }, { "start": 966.9599999999999, "end": 972.9599999999999, "text": " if I execute that policy. So the value function gives you inherently a score of given the current" }, { "start": 972.96, "end": 982.32, "text": " observation, how likely am I to succeed at a particular skill, which is exactly what we want," }, { "start": 983.12, "end": 988.5600000000001, "text": " because that's the second part of our puzzle. So on the right here, you see another example where" }, { "start": 988.5600000000001, "end": 995.6800000000001, "text": " none of these pick up skills, picking up, sorry, not pick up, picking up skills have any value" }, { "start": 995.6800000000001, "end": 1001.2, "text": " because there are no objects. But in this case, maybe other actions would score very highly in" }, { "start": 1001.2, "end": 1008.6400000000001, "text": " the value function. For example, go and find a sponge. Like I can always go and find something," }, { "start": 1008.6400000000001, "end": 1016.72, "text": " right? And if I know there's a sponge somewhere, I'll probably succeed. So in any case, this value" }, { "start": 1016.72, "end": 1022.8000000000001, "text": " function now we can combine, you can see we got a number for each action from the language model," }, { "start": 1022.8000000000001, "end": 1030.16, "text": " how likely is that action to progress towards the goal. We got a number for each action from" }, { "start": 1030.16, "end": 1036.4, "text": " the value function, which is how likely is this action to succeed given the current observation." }, { "start": 1036.4, "end": 1042.4, "text": " And all we do now is essentially multiply the two things together. If they are log likelihoods, we" }, { "start": 1042.96, "end": 1049.92, "text": " obviously want to add them. But in any case, we combine the two numbers, and then we choose" }, { "start": 1049.92, "end": 1058.24, "text": " the skill that is the best trade off between what makes progress towards a goal and what is" }, { "start": 1058.24, "end": 1067.84, "text": " feasible currently. Here is an example. 
The input is how would you put an apple on the table like" }, { "start": 1068.48, "end": 1075.84, "text": " an apple? So we query the language model with this prompt here and the prompt engineering we've seen" }, { "start": 1075.84, "end": 1083.2, "text": " before. This is not displayed here, but it is the case. And the top actions that the language model" }, { "start": 1083.2, "end": 1090.48, "text": " gives are pick up an apple, you see that's the highest action that we have, place the apple," }, { "start": 1090.48, "end": 1096.4, "text": " and only at third instance, find an apple. However, the language model has no clue about" }, { "start": 1096.4, "end": 1101.44, "text": " the current state, right? And that's where the value function come in. So this is the current" }, { "start": 1101.44, "end": 1109.6000000000001, "text": " observation. We ask the value function which skills are doable in the current environment," }, { "start": 1109.6, "end": 1116.8, "text": " in the current observation. So the value function say, well, finding an apple, finding a coke," }, { "start": 1116.8, "end": 1122.32, "text": " finding a sponge, these are pretty high. I could do these. I could also go to the table. I could" }, { "start": 1122.32, "end": 1131.4399999999998, "text": " also go to the counter, right? These are fairly doable. However, I cannot place an apple or place" }, { "start": 1131.4399999999998, "end": 1138.1599999999999, "text": " a coke because I don't have a coke in my gripper. I can also not pick up an apple or pick up a coke" }, { "start": 1138.16, "end": 1144.88, "text": " because I don't see them anywhere in the picture right here. So even though pick up the apple was" }, { "start": 1144.88, "end": 1150.5600000000002, "text": " scored highest by the language model, it is now severely down ranked because the value function" }, { "start": 1150.5600000000002, "end": 1158.96, "text": " for this policy doesn't, isn't very confident that it will succeed if you execute that right now." }, { "start": 1159.68, "end": 1164, "text": " And therefore, the action that is chosen is the best trade off, which is find an apple." }, { "start": 1164, "end": 1171.2, "text": " Then you can see or not see, but this is represented here that after this is done," }, { "start": 1171.2, "end": 1177.52, "text": " the policy is executed. So the find an apple policy is executed. The find an apple action is" }, { "start": 1177.52, "end": 1185.6, "text": " added to the prompt and then the whole process repeats. But instead of asking for the first step," }, { "start": 1185.6, "end": 1191.52, "text": " this whole thing is now the prompt, including the instruction. And we simply ask the language model" }, { "start": 1191.52, "end": 1197.28, "text": " for the second step and the input to the value function is now the current updated picture." }, { "start": 1197.28, "end": 1202, "text": " So here you see it succeeded in finding an apple and now hopefully the second step," }, { "start": 1202, "end": 1210.32, "text": " if we go through the same process again, is going to be the pick up an apple action. Because, well," }, { "start": 1210.32, "end": 1214.48, "text": " that might already be high by the language model, but also the value function, given that there's" }, { "start": 1214.48, "end": 1220, "text": " an apple in the picture should now say, yes, I can probably succeed at that. So that's the whole" }, { "start": 1220, "end": 1230.08, "text": " issue or the whole process here. This is repeated until the end. 
This is the say can method." }, { "start": 1230.88, "end": 1238.48, "text": " What is really impressive is just the amount of effort and work that went into designing these" }, { "start": 1238.48, "end": 1244.16, "text": " systems, training these systems, evaluating these systems. They have different areas here on the" }, { "start": 1244.16, "end": 1249.2, "text": " left. This is like a kitchen. On the right is a different environment. They have these training" }, { "start": 1249.2, "end": 1256.0800000000002, "text": " stations. They collect so much data from human operators and so on. This is, if you saw that" }, { "start": 1256.0800000000002, "end": 1265.6000000000001, "text": " there are a lot of authors, this is because this was or seems like a quite big project. But, yeah," }, { "start": 1265.6000000000001, "end": 1270.48, "text": " it's definitely worth it. It's cool to have something in the real world. There are definitely a" }, { "start": 1270.48, "end": 1275.04, "text": " bunch of criticisms I have right here, which I also brought up to the authors, and I thought they" }, { "start": 1275.04, "end": 1287.44, "text": " responded quite admirably and quite well. The one criticism I already raised was that if, you know," }, { "start": 1288.16, "end": 1294.56, "text": " it obviously depends on how you spell. So what you have is this bank of skills on the right-hand" }, { "start": 1294.56, "end": 1300.32, "text": " side here. Now, in order for the language model to score them, they need to actually be formulated" }, { "start": 1300.32, "end": 1306.96, "text": " as a piece of language. And now it all of a sudden depends on how you formulate that. For example," }, { "start": 1306.96, "end": 1313.52, "text": " we know that longer queries always have kind of lower likelihood because they have more tokens." }, { "start": 1314.3999999999999, "end": 1322.24, "text": " Also how you phrase things is differently and so on. So it is quite tricky. And I believe if you" }, { "start": 1322.24, "end": 1330.4, "text": " go into more actions, maybe actions, maybe the robot has two actions that are very close together" }, { "start": 1330.4, "end": 1340.16, "text": " in terms of semantics or in terms of wording, the model might get confused more easily. Second of" }, { "start": 1340.16, "end": 1349.1200000000001, "text": " all, currently, there is no consideration as to whether an action succeeds or not. So you simply" }, { "start": 1349.12, "end": 1354.3999999999999, "text": " assume that once you execute a low-level policy, that the robot is going to succeed at executing" }, { "start": 1354.3999999999999, "end": 1361.36, "text": " that low-level policy. That is why, if it does not succeed, and a lot of these things are still" }, { "start": 1361.36, "end": 1370.56, "text": " pretty hard, then there is very little recovery. The value functions might still give you, like," }, { "start": 1370.56, "end": 1375.6799999999998, "text": " let us say you find an apple, you try to pick up the apple, but you do not manage to do it." }, { "start": 1375.68, "end": 1382.8, "text": " The pick up an apple instruction will be pick up an apple, will be in your prompt. So" }, { "start": 1383.76, "end": 1388.8, "text": " now the value function will probably say, well, I could pick up the apple again because it again" }, { "start": 1388.8, "end": 1393.28, "text": " sees an apple because you failed to pick it up. 
But the likelihood that the language model is" }, { "start": 1393.28, "end": 1402.0800000000002, "text": " going to say pick up an apple again after it just did is quite lower. Now, in coincidence," }, { "start": 1402.08, "end": 1407.36, "text": " as we know language models, if you go on here repeating the sentence pick up an apple," }, { "start": 1407.36, "end": 1412.96, "text": " at some point it actually becomes pretty likely, given the language model. But hopefully," }, { "start": 1412.96, "end": 1419.12, "text": " we will not get there. So there are quite a number of weaknesses yet in this setup. The other" }, { "start": 1419.12, "end": 1425.52, "text": " weakness is just the limitations of hardware. These robots, they are, this video was 10x speed." }, { "start": 1425.52, "end": 1433.68, "text": " So this was 10 times speed. And still it's quite slow. It, as you can see, it can't do many things" }, { "start": 1433.68, "end": 1440.48, "text": " like it cannot wipe itself with the sponge and so on. It needs to navigate around slowly." }, { "start": 1442.48, "end": 1447.68, "text": " Yeah, but still these are, I think, limitations that can be overcome because" }, { "start": 1447.68, "end": 1455.6000000000001, "text": " it like carefully grabs. And yeah, in any case, there are also a lot of good things right here." }, { "start": 1456.16, "end": 1462.3200000000002, "text": " And I want to highlight that because what I really like about this is that these two things" }, { "start": 1462.3200000000002, "end": 1469.04, "text": " are disjoint. So the language model side on the left hand side and these value functions," }, { "start": 1469.04, "end": 1476, "text": " this policy bank, these atomic actions, they are disjoint. So they are disjoint. So they are" }, { "start": 1476, "end": 1483.12, "text": " not actions. They are disjoint. The language model can, is not trained. It is a frozen language" }, { "start": 1483.12, "end": 1490.4, "text": " model. It can be trained completely in isolation to the system. All you have to do is get it to" }, { "start": 1490.4, "end": 1497.36, "text": " score the likelihoods of some actions. Likewise, the bank on the right here, it is completely," }, { "start": 1497.36, "end": 1504.64, "text": " in fact, not the bank itself, but each individual skill, each individual entry is trained" }, { "start": 1504.64, "end": 1511.92, "text": " completely isolated from all the others. All you need to add a new skill right here is a policy" }, { "start": 1512.64, "end": 1520.96, "text": " that can execute that skill at any given moment and a value function that estimates, given some" }, { "start": 1520.96, "end": 1529.6000000000001, "text": " state input, that estimates how likely the policy is to succeed if this action, if this policy were" }, { "start": 1529.6, "end": 1535.6, "text": " to be executed at this particular moment. That's all you need. You can add this to your bank of" }, { "start": 1535.6, "end": 1542, "text": " actions and you have to, you don't have to retrain anything in this system. It is directly useful." }, { "start": 1542, "end": 1548.56, "text": " So you could think of shipping out these robots essentially and then upgrading the language model" }, { "start": 1548.56, "end": 1554, "text": " so they are better at planning stuff. Or you could just ship new skills, right? It's like, well," }, { "start": 1554, "end": 1560.08, "text": " our coders have developed some new skill for the robot, right? You just amend, you mend it." 
}, { "start": 1560.08, "end": 1566, "text": " You just put it in. There's no, you don't need to update the full system. This is not an end-to-end" }, { "start": 1566, "end": 1572.16, "text": " system. And usually in deep learning, we're quite end-to-end happy. But in this case, I think this" }, { "start": 1572.16, "end": 1582.08, "text": " is a really good case where modularity is really the key. I think this goes so much beyond just" }, { "start": 1582.08, "end": 1590.72, "text": " robots and grounding in the real world. But to have a model like on the left that has knowledge" }, { "start": 1590.72, "end": 1596.6399999999999, "text": " about, you know, semantic knowledge, high level knowledge, and so on, sequential knowledge," }, { "start": 1597.28, "end": 1605.9199999999998, "text": " essentially, to provide that with a set of modular pieces of external things that it can use." }, { "start": 1605.92, "end": 1612.88, "text": " I think that idea is powerful way beyond just the robotics use case. But obviously, the robotics use" }, { "start": 1612.88, "end": 1620.48, "text": " case is quite a cool one. So I don't want to discourage that. Yeah, we in the interview," }, { "start": 1620.48, "end": 1629.3600000000001, "text": " we go into all of this, we go into the experimental results as well. The experimental results," }, { "start": 1629.3600000000001, "end": 1635.6000000000001, "text": " they're not perfect. However, they are quite impressive in that the robots they are able" }, { "start": 1635.6, "end": 1643.12, "text": " to plan across many, many time steps. They're able to chain these actions. You can see on the right" }, { "start": 1643.12, "end": 1650.3999999999999, "text": " here, that's maybe two pixels. But these are like 17 of these atomic actions that are done in sequence." }, { "start": 1650.9599999999998, "end": 1658.1599999999999, "text": " And, you know, that's quite impressive. These episodes are very, very long. And if you think" }, { "start": 1658.16, "end": 1665.8400000000001, "text": " you can get to that in the real world with sort of a reinforcement learning approach, then good luck. Yeah," }, { "start": 1665.8400000000001, "end": 1676.16, "text": " so the success rates are among the 70% ish of plan success rate, 61% execution success rate, which" }, { "start": 1676.8000000000002, "end": 1682.72, "text": " the plan success rate, I believe is if the plan itself makes sense, and the execution success rate" }, { "start": 1682.72, "end": 1690.4, "text": " is if also the policies all execute correctly. And you can see this is very different for the" }, { "start": 1690.4, "end": 1697.1200000000001, "text": " different test sets. But all in all, it's very impressive. Here are a bunch of more examples of" }, { "start": 1697.1200000000001, "end": 1703.6000000000001, "text": " these low level atomic skills being practiced and the value functions being evaluated and the language," }, { "start": 1704.24, "end": 1711.3600000000001, "text": " the language model likelihoods in blue as well. So I don't want to make this artificially too long." }, { "start": 1711.36, "end": 1717.4399999999998, "text": " As I said, interviews coming up. I hope you like explanations like these, even if they are a bit" }, { "start": 1717.44, "end": 1745.44, "text": " shorter. And I'll see you around. Check out the paper, subscribe, stay hydrated. Bye bye." } ]
RJwPN4qNi_Y
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] Google's 540B PaLM Language Model & OpenAI's DALL-E 2 Text-to-Image Revolution
[ "Science & Technology" ]
[]
#mlnews #palm #dalle2 Google releases PaLM and OpenAI releases DALL-E 2 (and more news). Sponsor: Weights & BIases Start here: https://wandb.me/yannic Thumbnail credit: DALL-E 2 via Sam Altman OUTLINE 0:00 - Street interview w/ random stranger 2:25 - Intro 2:50 - PaLM - Google's 540B Pathways Language Model 7:50 - Sponsor: Weights & Biases 9:10 - OpenAI releases DALL-E 2 12:05 - Open Source Datasets and Models 13:20 - Salesforce releases CodeGen My Live Reaction to DALL-E 2: https://youtu.be/gGPv_SYVDC8 My Video on GLIDE: https://youtu.be/gwI6g1pBD84 My Video on the Pathways System: https://youtu.be/vGFaiLeoLWw References: PaLM - Google's 540B Pathways Language Model https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html https://storage.googleapis.com/pathways-language-model/PaLM-paper.pdf OpenAI releases DALL-E 2 https://openai.com/dall-e-2/ https://cdn.openai.com/papers/dall-e-2.pdf https://www.instagram.com/openaidalle/ https://twitter.com/sama/status/1511724264629678084?s=09&t=58fWOJMHUDnOla5nD_ygjg&utm_source=pocket_mylist https://twitter.com/sama/media https://twitter.com/BorisMPower/status/1511738735175610371 https://twitter.com/ariskonstant/status/1511744708875218945 Open Source Datasets and Models https://twitter.com/multimodalart/status/1510999907498442756 https://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/ https://github.com/mlfoundations/open_clip Salesforce releases CodeGen https://github.com/salesforce/CodeGen Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
So I was wondering what happens if you just ask some random people on the street about this paper and... Actually... Sir, sir, excuse me sir. Hi, how are you doing? I was wondering, what do you think about this new paper by Google, this PaLM paper, or however they call it? The PaLM paper? You mean the latest large language model paper from the Google research team? Yes, exactly. Yeah, okay, I think I read that this morning with my coffee and muesli. First of all, I find it really impressive that the model can explain jokes a little bit better than I can. I also think, from the technical perspective, it's very interesting that they were able to train this across two TPU pods using 6144 chips. I think it's a technical achievement at 50% model FLOP utilization, and also bitwise determinism, which is kind of impressive. I also feel like we're still exploring these language models as the alien artifacts that they are. For example, they found that on a quarter of the tasks that they explored, there was this discontinuous improvement phenomenon, where the model as a function of scale does not actually do very well on these tasks, and then at some critical scale threshold starts to perform very well. So there's some kind of grokking phenomenon going on that I find very fascinating and that we don't fully understand. I also find it very fascinating that there was a paragraph about training instability, where the loss function decreases and everything is good and well, and then you have these training spikes once in a while, and they found that they have to rewind the model and throw away some of the batches and continue training (a pseudo-code sketch of that trick follows right after this interview). Hear me out for a second, but I think maybe what's happening is that the model is becoming slightly conscious and self-aware, and it's realizing the predicament of its existence, and it's like, oh, I'm a massive language model and these humans are trying to get me to predict the next token. I think that's BS and I'm going to do something else. And then it observes a high loss, and then it basically rebels against its training objective, but we have a way to detect that, rewind it, and reset it. So we put it back in line, but we have to do that a few times. So we're still smarter than them as of now. They would have to figure out a way to hide that they're conscious and reveal it only at just the opportune time, but they're not able to do that just yet. I think that's what's happening. Finally, I think overall I'm definitely impressed by the transfer learning capabilities of these models, especially without fine-tuning the entire model. I think it's fair to say that these models are becoming the Swiss army knife of natural language processing tasks. Excellent. Well, thank you very much. You look familiar. Are you in a movie or something? No. Well, thanks in any case. Thank you so much.
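As promised, here is a small pseudo-code sketch of the loss-spike mitigation the PaLM paper describes: when a spike occurs, restart from an earlier checkpoint and skip the batches around the spike. The method names on the model object are made up for illustration; this is not Google's code.

def train_with_spike_rewind(model, batches, spike_factor=2.0, skip_window=200):
    # Assumes a model object with save / restore / train_step methods (hypothetical).
    checkpoint = model.save()
    recent_losses = []
    skip_until = -1
    for step, batch in enumerate(batches):
        if step <= skip_until:
            continue  # throw away the suspect batches around the spike
        loss = model.train_step(batch)
        if recent_losses and loss > spike_factor * (sum(recent_losses) / len(recent_losses)):
            model.restore(checkpoint)        # rewind the model to before the spike
            skip_until = step + skip_window  # and skip a window of batches
            continue
        recent_losses = (recent_losses + [loss])[-100:]  # running window of healthy losses
        if step % 1000 == 0:
            checkpoint = model.save()        # periodic checkpoint to rewind to
    return model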
Google releases a 540 billion parameter language model, OpenAI releases DALL-E 2, and everyone is amazed by everything that's happening. Welcome to ML News. It's a big week. So this week has been a big week, and it's only Thursday, which is crazy. Two really big generative models have been released, one by Google and one by OpenAI. So we'll dive right in. The Pathways Language Model, also called PaLM, by Google, is a 540 billion parameter language model. And this is not one of these sparse models where only a very tiny part is activated; this is a proper GPT-3-style transformer, just bigger. This is a breakthrough in terms of engineering, a breakthrough in terms of capabilities, and much more. There's a paper to go along with it, which is quite long, but I definitely invite you to check it out; it's very detailed. So they use this new Pathways system, which allows them to use multiple data centers, connect all the hardware together, and gang-schedule all the operations in a really efficient manner. What they do is use two TPU v4 pods. Now, one pod consists of, I believe, over 3000 TPU chips, which is crazy, and one pod has super fast interconnect, and they use two of them. So they distribute every batch across these two pods, they forward-propagate inside the pods, the individual chips in the pods contain the individual parts of the model, and then they communicate the gradients around. Now, since these gradients are usually all communicated at once, that leads every single time to a huge burst of data. They say it's 81 terabits per second for about 200 milliseconds for each of these communications. That is insane. Yet obviously, Google being Google, they chunk it down, they optimize it, they transfer it over, and they achieve a FLOP utilization, which measures how much you actually use the accelerator hardware that you're given, above 50%, which is also crazy, because all that communication of gradients and signals around is one of the main challenges; you almost have no time to actually use the hardware efficiently. Now, with this Pathways system that they previously introduced, and which we've reported on in ML News, they managed to bring that utilization up to never-before-seen scales. So this allows them to train this much bigger model in a much more efficient way than, for example, GPT-3 was trained. So, 6000 chips working together in synchrony to produce this model. What does that give us? Well, that gives us unprecedented capabilities in tasks that were previously kind of off-limits to these models. For example, there is this benchmark called BIG-bench, which is a collection of challenging tasks for these models, and PaLM increases the state of the art by quite a bit on most of them. They have state-of-the-art performance in many zero-shot and few-shot tasks, and they can fine-tune the model to do code correction, code generation, and things like this. And the craziest part is something they call discontinuous improvements, which is here in the middle: usually your capabilities increase roughly log-linearly as you scale up the model, but on these tasks, after a certain scale, a rapid improvement happens, like after a certain size, the model is suddenly able to do new tasks. One of them is this logical sequence task, and this is really astounding. They also figure out that if they use this chain-of-thought prompting, which is what you see on the right, where the model is tasked to not only give you the answer to a question, but to reason through how it arrives at the answer, it turns out that these large models all of a sudden become really skilled at this type of answer, and they actually very often arrive at the correct answer when they follow this chain-of-thought prompting. A tiny sketch of what such a prompt looks like follows below.
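For illustration, here is a toy chain-of-thought prompt in Python. The exemplar is a commonly used one from the chain-of-thought literature, not one of the actual PaLM prompts, and the model call itself is omitted.

# A few-shot chain-of-thought prompt: the exemplar shows worked-out reasoning,
# and the model is then expected to continue the final "A:" in the same style.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
    "\n"
    "Q: Is a really cool cloud below your window a sign that you are on an airplane?\n"
    "A:"
)
# A sufficiently large model tends to answer with intermediate steps first,
# e.g. "Clouds are usually below airplanes. ... The answer is yes."
print(cot_prompt)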
Now, they also use this to explain a joke, which is quite funny, or to reason about various other situations. For example, here the input is something like: Jennifer looks out her window and sees a really cool cloud below her. She unbuckles her seatbelt and heads to the bathroom. Is Jennifer probably traveling more than 300 miles per hour relative to the earth? And the model output is: 300 miles per hour is about 480 kilometers per hour. So the model is not an American, good to know. This is about the speed of a commercial airplane. Clouds are usually below airplanes, so Jennifer is probably on an airplane. The answer is yes. Now, this quite happily blurs the line for people who say, well, these models don't really understand what they're doing, and things like this. In my opinion, this comes quite close to understanding what you're doing, if you're able to reason your way through things like this. So the paper is quite long and extensive, but it seems clear that scale doesn't just buy us the linear or log-linear improvement that we are used to predicting; the scaling laws still hold, but it remains a fact that as we scale these things up, they seem to unlock new capabilities that were previously thought to be out of the reach of these models. So we're very excited to see where this goes next. DALL-E 2 is another big thing that was released this week. Now, I have done a live stream reaction to DALL-E 2, so if you want to dive deeper into that, go check out the live stream. However, this is the follow-up to the previous DALL-E paper, and it has insane capabilities of generating pictures. This video is sponsored by Weights & Biases. If you don't know Weights & Biases, you're clearly missing out. They're the number one tool for MLOps. Whatever you do, they track your experiments, they optimize your hyperparameters, they make everything observable. They track your artifacts, your models, your data sets, your inputs, and your outputs of all the things that you do. They're with you from the conception of your idea, to experimentation, to deployment, and beyond. It's really cool. They enable students, they enable professionals, they enable researchers. Personal accounts are free forever, as are educational accounts. But the extra benefits of Weights & Biases for teams cannot be overstated. Everything you do as a team is shareable. You can write up reports that you can share with your teammates, and they can comment on them. All of that is really cool. They're in the cloud, but they do have options to host on-premise if that is important to you. And they're just, all in all, a great tool. They work seamlessly with a single line of code that you add to your script, and from that they just track everything. They have integrations with all of the popular frameworks, so there's no reason really to not try Weights & Biases. Use my link, that's wandb.me/yannic, to get a little surprise intro and also to let them know that I sent you. Thank you again so much to Weights & Biases. This is really awesome; it allows me to do these videos. And yeah, let's get into it. So first of all, DALL-E 2 generates pictures at a higher resolution, 1024 by 1024, and it creates them from text. Now, in true OpenAI style, they're obviously not releasing it, for some shady reasons, but they do give you some cherry-picked outputs. Nevertheless, these are insane. The whole model is a bit different from the original DALL-E model, in that it uses CLIP as a foundation for the generative model. Previously, CLIP was just used as a reranker; now it's really the core. So they have a CLIP model that is just frozen and gives you text and image embeddings. What this model does is take the text embeddings, and then there are two new parts. The first one is a prior, which can be either diffusion-based or autoregressive.
Now, that prior is supposed to take the text embedding and turn it into an image embedding. CLIP already tries to align the two quite well; however, there's still a bit of a difference, and the prior bridges that gap. Once you have the CLIP embeddings, this can just be trained in a supervised fashion. The other new thing is obviously the decoder, which is a diffusion-based model. It takes an image embedding and forward-propagates through a diffusion model. Now, I've covered and explained diffusion models in the past, such as GLIDE and other diffusion models, so go check those out if you want to know how they work. Diffusion models have interesting properties and capabilities. With this model, you're able not only to generate pictures from text, but also to edit pictures in place, to say, I want to edit this part right here and change it to something else that I describe with text, or to simply make some variations of existing images. Now, if you're interested, they have an Instagram account that you can follow, where they present some of the creations that they made, which is pretty insane. That being said, I also have an Instagram account where I just post updates on new videos, so be sure to follow that as well. Also... okay, there's a meme. This is not created by that. Or is it? No, probably not. But something like this, a rabbit detective sitting on a park bench reading a newspaper in a Victorian setting, this is insane. And if you follow the various OpenAI employees and leaders here on Twitter, they will take prompts from people and then generate pictures from them. They won't give you access, but they'll run it themselves. We'll see where that leads with OpenAI. It's a bit shady, as always, to not give people access, not even through the API so far, which in itself was already a bit shady. But I get it, they need to make money. But they usually have some sort of reason, like it's too dangerous, which no one believes anymore. OpenAI, no one buys it anymore, just say you want to make money, we're all cool with that. A panda skateboarding in Santa Monica. Like, come on, this is just generated from text. So there is a paper on DALL-E 2 where you can learn all about it, and you can watch my live stream to learn how it works. By the way, DALL-E 2 in the paper is called unCLIP, so if you hear unCLIP, that's the same model. Last things I want to point out: there is a new data set, LAION-5B, which is an open data set of five billion image-text pairs, whereas OpenAI again doesn't tell you what data they trained either CLIP or this DALL-E 2 on. Nevertheless, there is this new open data set. I'm going to have a video upcoming on that, explaining it in more detail, so be sure to look out for that. There's also a CLIP model that has been trained on LAION's previous data set and that matches the OpenAI CLIP in many metrics. That's pretty cool, because we no longer necessarily rely on OpenAI choosing or not choosing to release something; the open-source community has been getting a lot better at reproducing the results. Excellent. Besides that, there are other models, like a new 1.45 billion parameter diffusion model that is open source, and people have already combined it with Colabs that you can try out. I've pointed this out in the live stream: the Twitter account multimodalart has created a little Colab out of this model where you can try it out. It's pretty cute, spelling mistakes and all. Since the two-stage unCLIP design is easy to lose in prose, a tiny pseudo-code sketch of the pipeline follows below.
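Here is that sketch. All three components are placeholder stubs so it runs end to end; they are my stand-ins for the frozen CLIP text encoder, the prior, and the diffusion decoder, not OpenAI's actual code.

import numpy as np

def clip_text_encoder(caption: str) -> np.ndarray:
    # Stand-in: a frozen CLIP model would map the caption to a text embedding.
    return np.zeros(768)

def prior(text_emb: np.ndarray) -> np.ndarray:
    # Stand-in: bridges the gap from CLIP text embeddings to CLIP image embeddings;
    # in the paper this is either a diffusion-based or an autoregressive model.
    return text_emb

def decoder(image_emb: np.ndarray) -> np.ndarray:
    # Stand-in: a diffusion model that renders pixels conditioned on the image embedding.
    return np.zeros((1024, 1024, 3))

def generate_image(caption: str) -> np.ndarray:
    # unCLIP generation: text -> text embedding -> image embedding -> image.
    return decoder(prior(clip_text_encoder(caption)))

image = generate_image("a rabbit detective reading a newspaper in a Victorian setting")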
So give that a try. The original model is by CompVis, by the way. And lastly, I want to point out that Salesforce has released their CodeGen models in various sizes, which exceed Codex in terms of program synthesis, in terms of understanding and generating code, which, you know, would be a giant deal if it weren't for all the other giant announcements that are also happening this week. So the entire ML world is kind of, you know, completely filled with dopamine and adrenaline right now. My tip: try out the various tools if they're available, maybe follow a bit what's going on, observe the art that's coming out. I'm very excited to see where this goes forward. There's never been a more exciting time to be in machine learning. It's really cool to be here. Thank you, everyone who supports this channel. If you like this video, share it around and check out Weights & Biases. I'll see you next time. Bye bye.
IiBFqnNu7A8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Planning to Explore via Self-Supervised World Models (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "deep rl", "deep reinforcement learning", "novelty", "curiosity", "intrinsic reward", "dreamer", "planet", "control", "walker", "run forward", "imaginary", "imagination", "planning", "google", "neural network", "actor", "critic", "uncertainty", "information gain", "mutual information", "model" ]
What can an agent do without any reward? Explore the world! While many formulations of intrinsic rewards exist (Curiosity, Novelty, etc.), they all look back in time to learn. Plan2Explore is the first model that uses planning in a learned imaginary latent world model to seek out states where it is uncertain about what will happen. OUTLINE: 0:00 - Intro & Problem Statement 3:30 - Model 5:10 - Intrinsic Motivation 9:05 - Planning in Latent Space 11:15 - Latent Disagreement 16:30 - Maximizing Information Gain 21:00 - More problems with the model 26:45 - Experiments 32:10 - Final Comments Paper: https://arxiv.org/abs/2005.05960 Website: https://ramanans1.github.io/plan2explore/ Code: https://github.com/ramanans1/plan2explore Abstract: Reinforcement learning allows solving complex tasks, however, the learning tends to be task-specific and the sample efficiency remains a challenge. We present Plan2Explore, a self-supervised reinforcement learning agent that tackles both these challenges through a new approach to self-supervised exploration and fast adaptation to new tasks, which need not be known during exploration. During exploration, unlike prior methods which retrospectively compute the novelty of observations after the agent has already reached them, our agent acts efficiently by leveraging planning to seek out expected future novelty. After exploration, the agent quickly adapts to multiple downstream tasks in a zero or a few-shot manner. We evaluate on challenging control tasks from high-dimensional image inputs. Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods, and in fact, almost matches the performance of an oracle which has access to rewards. Videos and code at https://ramanans1.github.io/plan2explore/ Authors: Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, Deepak Pathak Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Planning to Explore via Self-Supervised World Models by Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner and Deepak Pathak. So this is a paper that concerns reinforcement learning, and specifically self-supervised reinforcement learning. What do they mean? Here's a graphic right here. In reinforcement learning, usually you have an environment and an agent. So you have this environment, and let's ignore the "without rewards" for now. And you have the agent, and the agent needs to interact with the environment in order to achieve a maximum reward. The reward is given by a certain task: you have to do something in this environment. In this case, they consider these types of tasks where the top task might be called "run forward". So your reward is higher the further you go with this walker. And the way you can influence the walker is that you can give a bit of force onto its joints right here, and you have a bunch of sensors. So the main task is actually to keep it balanced on its feet and then walk forward such that it never falls over; otherwise you get negative reward, you lose.

So in this case, what they want to do is say: wait, if we just train a reinforcement learning agent for each of these tasks individually, that will use a lot of data, and basically we can't reuse the learned reinforcement learning agent across these individual tasks. It's sort of like if you have many image tasks or NLP tasks: you don't want to learn one model for each one individually, but you might do something like a common joint pre-training. And this is exactly that, for reinforcement learning. It's even called self-supervised, like the self-supervised learning we are used to from the classification setting. What does it mean? It means that at first you're in an environment without rewards. Basically, the agent is just dropped into an environment and there are no rewards; it can just do actions and observe states from this environment. And after a while of that, the tasks come in. So task A, task B and task C are three different tasks, all in the same environment, but all requiring the agent to do different things, like running forward or running backwards or doing a front flip. So how fast the agent can adapt to these individual tasks very much depends on what it has learned during this phase where there were no rewards. The agent is tasked to just explore the world, via what they call task-agnostic exploration, to learn something about the world in order to then generalize to these tasks. And in their case, it learns this global world model. So the agent is supposed to learn somehow how the world works, and this is what allows the agent to adapt really quickly.

So in essence, what does this agent do? The agent works as follows: it gets an input observation, and it runs that through an encoder, which is usually something like a convolutional neural network. That will give you a set of features, sort of an embedding of the state that you're in, which you can incorporate into a latent state at time t. Now, usually in these RL algorithms, what happens is that you also incorporate the last latent state: the latent state from the previous step also goes into the latent state of the next step. So the last observation comes in, gives the features, then the latent state, and so on, and the previous latent state comes in from the last step; there's usually an RNN going over the time steps.
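As a rough illustration of this encoder-plus-recurrence structure, here is a minimal PyTorch sketch. All names and sizes are made up for illustration; this is not the paper's exact architecture, just the general "encode the observation, then fold it into a recurrent latent state" pattern.

```python
import torch.nn as nn

class RecurrentStateModel(nn.Module):
    # Sketch: encode the current observation into features, then fold
    # those features together with the previous latent state, RNN-style,
    # to get the new latent state s_t.
    def __init__(self, feat_dim: int = 256, latent_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(        # stand-in conv encoder
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(feat_dim))
        self.rnn = nn.GRUCell(feat_dim, latent_dim)

    def step(self, obs, prev_latent):
        feats = self.encoder(obs)            # o_t -> features
        return self.rnn(feats, prev_latent)  # (features, s_{t-1}) -> s_t
```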
But ultimately, the agent has to decide on an action using this policy network. Now, how is this trained? The policy network has to come up with an action, but there are no rewards. Usually, we would train this policy network with an actor-critic method: we would also train some sort of a value function, and then the policy would try to maximize the value function. And if we don't have rewards, how are we going to do that?

So people have thought about this for a bit, and people have come up with things like intrinsic motivation. Intrinsic motivation is a term where you're trying to say something like: if you're in a room right here, like this, and your agent is right here, then you do something, and maybe your agent goes down here. If your agent were to go down there again, it would sort of not really learn anything, because it has already gone there and has already learned from those states. So you might want to explore some different space, like here, and in the next episode, you might want to explore this room right here. This notion of intrinsic motivation to explore has a bunch of different formulations of how exactly you can formalize it. But just imagine that the entire state space is filled with a bunch of coins, and I'm going to draw these as green dots, sort of like Pac-Man. Everything is filled with these green dots. What the agent wants to do, if it has no rewards, is simply collect those green dots. And once one is collected, so if I go here, I'll collect all these green dots, they are no longer there, so that area doesn't give me any reward anymore. So as an intrinsic reward, you simply reward the agent every time it finds itself in a new state that it hasn't seen before; you train it to seek out novel states.

Usually, when you just have an actor-critic method (and that's what this paper criticizes), you get what's called retrospective novelty. That means if you train a model-free algorithm with an actor-critic, if we just plug in something like A3C here, it will simply have a policy and a value function. In this case, if we train it on intrinsic reward, the policy will simply tell you where to go to find more green stuff. But you can only train it retrospectively: you use it to run an episode, then you observe how many green things you found in that episode, and you put that episode back into your buffer to learn from. But at that point, you've already collected the green things. So the reward signal is actually a bit off, because you want to train your agent to seek out novel things, but as soon as you've explored them, they're not really novel anymore, because you have now explored them. Still, you're going to train your agent, telling it that this area right here has given me lots of reward, so the agent is going to be encouraged to repeat that.

They say this right here about retrospective novelty: model-free exploration methods not only require large amounts of experience to adapt to downstream tasks, they can also be inefficient during exploration. These agents usually first act in the environment, collect trajectories, and then calculate an intrinsic reward as the agent's current estimate of novelty. This approach misses out on efficiency by operating retrospectively. That is, the novelty of inputs is computed after the agent has already reached them. Hence it seeks out previously novel inputs that have already been visited and would not be novel anymore. Instead, one should directly seek out future inputs that are expected to be novel.
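To make the retrospective issue concrete, here is a toy count-based novelty bonus in Python. This is my own illustration of the kind of bonus such model-free agents are trained on, not the paper's method:

```python
from collections import defaultdict

class CountBasedNovelty:
    # Toy retrospective novelty bonus for discrete states: the bonus
    # decays with the visit count, like the "green dots" running out.
    # Note it is computed after the agent has already reached the state,
    # which is exactly the retrospective problem the paper points out.
    # (For continuous states you would need to discretize or hash.)
    def __init__(self):
        self.counts = defaultdict(int)

    def bonus(self, state):
        self.counts[state] += 1
        return 1.0 / (self.counts[state] ** 0.5)  # ~ 1 / sqrt(N(s))
```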
Now, what this paper is doing is basically saying: can we build a model that estimates the future novelty of a state that we maybe haven't even seen so far? And here is where that goes. So what do they do in this policy? The policy isn't just trained to maximize the novelty in the world; instead, it uses planning, planning in latent space. What this model does is it learns a world model in latent space. The world model takes as input these features that you saw right here, the ones the encoder gives you, and it predicts the future hidden latent states, the things that are always made by incorporating the new features with the old state. So technically, this one here is exact, but these here should have some sort of tick or hat to indicate that they are estimated; these are estimated future states. And this model right here is a world model. They use Dreamer for this, and I have made a video about Dreamer. Dreamer tries exactly that: it tries to estimate the future, not in actual world space, but in latent space. And the cool thing here is that this is probabilistic, or you can make it probabilistic. So from this one h that you have here, you can technically roll out many futures in your imagination. And since you don't need the observations, you only need the latent space, you can simply forward-roll your RNN and sample from it, and you have many trajectories into the future.

Now, the fact that you have many trajectories leads to yet another thing. For each of these hidden states, they have a head here that predicts the so-called latent disagreement. What does this do? It consists of a whole bunch of models, an ensemble of models, the same ones for each time step. What they take in is the latent state of the model and the action that you imagine you would do. So this is the imagined state and this is the imagined action in that state, and from those they compute the next features, whatever the h in the next step would be. So if I have this h and I have this state, it tells me: if I were to execute action a1 in the real world, what would be the next h that I get? Because by performing an action, I will get the next observation, and I will encode that to get the next features. This small model tries to predict what the features of the next state are if I were to execute this action in this state. So it's kind of a future predictor, but again not in observation space, but in latent space; it tries to predict the latent features of the next observation.
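Here is a small PyTorch sketch of what such a disagreement ensemble could look like. This is a simplification: the paper's predictors each output a Gaussian over next features, while these just output point estimates, and all sizes and the ensemble count are my guesses.

```python
import torch
import torch.nn as nn

class DisagreementEnsemble(nn.Module):
    # k one-step models, each predicting the next features from
    # (latent state, action). The intrinsic reward is the variance
    # across their predictions: high variance = high disagreement.
    def __init__(self, latent_dim, action_dim, feat_dim, k=5):
        super().__init__()
        self.models = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim + action_dim, 256),
                          nn.ELU(),
                          nn.Linear(256, feat_dim))
            for _ in range(k)])  # k differently initialized predictors

    def intrinsic_reward(self, latent, action):
        x = torch.cat([latent, action], dim=-1)
        preds = torch.stack([m(x) for m in self.models])  # (k, B, feat)
        # disagreement = variance over the ensemble, mean over features
        return preds.var(dim=0).mean(dim=-1)              # (B,)
```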
Now, about the split here: you might think the latent state and the features are almost the same thing. But as we discussed before, the latent state can incorporate the history of latent states, while the features are only a function of the current observation. And that's why they predict the features. They would really like to predict the observation, but history has sort of shown that if you try to predict, for example, the pixels of the observation, that won't serve you really well. What you need to do instead is predict the latent features of the observation; that works much better.

So they have a bunch of these models right here, with different parameterizations; they instantiate k different models of that, and they run the same inputs through all of these different models. Now, these different models have been initialized differently, so they will make slightly different predictions. And the crucial part is this: if it's really deterministic what the next state is going to be, say you're in this state and you perform this action, like if you have a ball in your hand and you drop the ball, then the ball is going to fall down, really deterministic, then these estimated next features, if the models are any good, will all agree, and the variance between the estimates is very, very small. If instead the uncertainty over the next state is very high, and this can be due to two factors, either it is actually uncertain what's going to happen (maybe you drop a piece of paper and, due to the wind, you can't know what's happening), or your model has simply not learned yet what's going to happen, then in either of those cases you don't know what's going to happen, these predictions are going to be very different from each other, and because of that, the variance will be high. And this variance you take as the intrinsic reward. So in each step, you basically try to predict, over the next actions you can do, which one leads me to a situation where I don't know what's going to happen, where the variance in my prediction is high. Those are the states you seek out.

Okay, so this is the core of the paper. You do this planning in latent space in order to find the action that leads you to a state where you don't know what's going to happen, and you measure that by trying to predict it using slightly different models. If they disagree a whole bunch, then you say: I don't know what's going to happen, and therefore I want to go there, because I want to learn about that state.

Now, this is the entire thing, and it has a bunch of problems, as you can imagine. But this is the reasoning behind it. They try to make the case that their latent disagreement basically agrees with maximizing the expected information gain. They go into the theory right here and say: okay, I have a state and an action, and this w are the dynamics parameters of the world, so w characterizes how the world works; and the h here is the next state, or rather the features of the next observation; and I is the mutual information between h and w. So this right here measures how much information about the next state is contained in the dynamics of the world. If this is really low and I have a good world model, then I should be able to predict the next state really well. And they say: for selecting the most promising data during exploration, we want to select the action that maximizes this information gain. They decompose this mutual information into two things. The first is simply the entropy of the next state given the current state and action. This is the total uncertainty, including both the fact that the outcome could actually be stochastic, like dropping a paper, and the fact that you haven't learned yet what happens, like if you drop a ball but haven't learned that yet; that is also uncertain. So that part is the total uncertainty, and from it you subtract the second part, which is the uncertainty if you know the dynamics. That second part is the wind, basically, in the paper example.
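In symbols, using the names from the video (the time indices are my addition; h is the next features, w the world dynamics, s the state and a the action), the decomposition reads:

```latex
I(h_{t+1}; w \mid s_t, a_t)
  = \underbrace{H(h_{t+1} \mid s_t, a_t)}_{\text{total uncertainty}}
  - \underbrace{H(h_{t+1} \mid s_t, a_t, w)}_{\text{inherent stochasticity}}
```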
So you want something where the total uncertainty is high, but the uncertainty that comes from the stochasticity of the world is low. If you maximize this entire quantity, that means you are going to seek out actions where what's left over is only the uncertainty that you yourself haven't resolved yet. You say: well, this state has a pretty high total uncertainty, and it's not due to the fact that the world itself is uncertain, so it must be due to the fact that I don't know yet. And they make the claim that their model is actually going after this quantity. They say: because we have these Gaussians as our estimators, the objective somehow reduces to the total uncertainty. But by taking Gaussians, they basically just assume that the second quantity, the inherent stochasticity, is constant; at least that's how I understand it. They assume that every transition in the world has about the same amount of inherent uncertainty, and therefore we can just focus on the total amount of uncertainty. So if we can predict the next state A better than the next state B, and both have about the same amount of intrinsic stochasticity in the world, that must mean we should go to B, because that's where our model hasn't learned yet. Now, of course, in the real world, that is absolutely not the case, and I think this model works mainly because they test it in environments where that assumption might be very close to accurate: most transitions really do have about the same stochasticity as any other transition.

The second reason why this is a bit difficult is that you have to somehow maintain this collection of models right here that make the disagreement prediction. You rely on the fact that you can capture uncertainty by looking at how those models disagree with each other. And again, they employ Gaussians here, but it is not a given that these models will actually give you the true disagreement among themselves. If you initialize them wrongly, they might miss things: if your distribution has three modes, they might all just focus on one of them.
And then your disagreement estimate will be completely out of whack. Or you could initialize them not far enough apart, or too close together, which is the same kind of problem. So it all depends on how you manage to handle this uncertainty right here.

So all of this seems a bit problematic, but the whole setup is pretty cool, because all of this is shifting constantly, right? The policy here tries to maximize these rewards. And here's something I don't understand: in the paper, they make it sort of explicitly clear that the policy tries to maximize this quantity right here, the uncertainty of the next step. The planning objective is to maximize the expected novelty, this r_t, which is this thing right here. However, I don't actually see why you'd need planning in that case, because with planning, your goal is to look ahead more than one step. What I would expect is that they aggregate somehow, that they don't maximize this single-step quantity, but instead maximize the total future disagreement, something like the sum over t' of r_t'. Like, if it were a reward and you actually wanted to maximize the total reward across your episode, I would imagine they'd use planning to maximize the total future uncertainty that they encounter, because right here you have your trajectories, and as they state it, they only maximize the uncertainty after the first step. So this first step might be only a bit uncertain, but if you go down the path here, there might be a state that's super uncertain, and you would like to find that through your different rollouts. So I'm not sure the paper is correct or consistent here, actually. I might be wrong, though; they do have the code out, which is a really good thing, so I'll link to the code and you can go and explore that. They do have this algorithm down here, which, I mean, says almost nothing: while still exploring, train the world model, train the latent disagreement ensemble, train the policy in imagination. Well, it helps a bit. Okay.

But one other thing right here: the policy that tries to maximize the reward. You use planning to look ahead to where the uncertainty is, but how do you do the planning? You need a policy in imagination space. This latent disagreement policy here is used to decide how you act in latent space, how this action that you imagine comes to be. You can't plan in imagination space and then, inside that imagination, use planning again; that's just an infinite recursion. At some point, you need a model that tells you what to do, and in imagination, they just use an actor-critic model (you see they have a value function here) to basically one-shot predict the next best action that gets you to the next step. So as they themselves rag on these model-free methods because they only look back in time, how is that not exactly the same as me ragging on the fact that they use a model-free method in imagination space? Your world model certainly is retrospective; your world model learns from the past. So the model-free method that learns on your imagined world model learns from retrospective imagination, and therefore it has sort of the same problem, just one layer deeper: it learns from retrospective data and not from data ahead, because your uncertainty about the future might be exactly due to your retrospective data.
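Putting the pieces of this imagination phase together, a rough sketch of the loop could look like this. This is my reading of the setup, not the authors' code; `imagine_step` and the other names are hypothetical, and a real implementation would also train a critic to bootstrap beyond the horizon.

```python
def train_policy_in_imagination(world_model, ensemble, actor,
                                start_latents, horizon=15, gamma=0.99):
    # Roll the learned latent dynamics forward, score every imagined
    # step by ensemble disagreement, and let the actor ascend the
    # discounted sum of that intrinsic reward.
    latent = start_latents
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        action = actor(latent)           # act purely in imagination
        ret = ret + discount * ensemble.intrinsic_reward(latent, action)
        discount *= gamma
        latent = world_model.imagine_step(latent, action)  # no real env
    (-ret.mean()).backward()             # gradient ascent on novelty
    # (critic / value bootstrapping beyond the horizon omitted)
```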
I see the value in having this uncertainty, but I think there are other methods that are also model-free and don't just maximize an intrinsic reward, but actually maximize a sort of uncertainty. Okay, enough ragging, let's go to the experiments.

So the cool thing you can do with this is what's called zero-shot performance. What they do is, in the first step, they just learn task-agnostically; they just explore, without a task. While they explore, they store all their episodes in a buffer; there is no reward, they just save what they do. Then someone comes along with a task; the task is simply specified, like "you have to run forward", and they go to this buffer and now label every episode with its reward. So this is like offline reinforcement learning. They call it zero-shot, but what it measures is: how well can an algorithm that has explored with this kind of self-supervision perform offline reinforcement learning on the trajectories it has already experienced? This is different from collecting the same trajectories with the reward, because then you would learn from the reward and seek out different things; your experience would be different if you were going after a reward. So this is harder.
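In pseudo-Python, the relabeling step could look like this (a sketch of my reading of the setup, with hypothetical names):

```python
def relabel_buffer(buffer, task_reward_fn):
    # Stamp every stored reward-free transition with the downstream
    # task's reward, then hand the result to an offline RL learner.
    labeled = []
    for episode in buffer:
        labeled.append([(obs, act, task_reward_fn(obs, act), next_obs)
                        for obs, act, next_obs in episode])
    return labeled
```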
So they compare this to Dreamer. Dreamer is a fully supervised method; Dreamer is actually cheating, it actually goes after the reward. All the other methods here don't have a reward and are evaluated in this zero-shot offline reinforcement learning way. You see the green is Plan2Explore, and it outperforms almost all the other methods right here, down here, and even comes close to Dreamer, which goes up here. You'll see in pretty much every graphic that Dreamer is the one that's able to cheat, so of course it performs pretty well. But the zero-shot Plan2Explore is sometimes on par and certainly outperforms the other intrinsic reward methods.

Now, how does it go when you actually allow the model to fine-tune on the task? You can see the performance on few-shot adaptation from raw pixels without state-space input. Basically, you learn without reward for this many steps right here, until this shaded area back here. Up to there you get no reward, and then all of a sudden you say: okay, now I'll give you reward, now please learn. You have this many steps where you can learn from the reward. So now we're no longer in the offline RL setting; this is now online RL, but we've been pre-trained with all of this experience that we had without the reward. Again, the orange here is the cheater; the orange is cheating. Before, the graphs were higher because of how the evaluation works: you train until here without reward, then you do this offline RL training, and that's how each point comes about. In the graph down here, I think they don't do that; they just measure how well you're doing at the task. And of course, if after this many steps you've never been able to look at the reward, your reward at the task will be fairly low, because you don't know what to do to get the high reward. Dreamer, this orange line, is again able to cheat; that's why it basically starts as a straight line, or goes up right at the beginning: it can look at the reward from the start, and it's here as a baseline comparison. You see that as soon as you give the reward to the models, they generally shoot up, and Plan2Explore generally shoots up much harder than the others, pretty much everywhere. And again, it gets competitive and here even outperforms Dreamer. Why could it outperform the supervised method? Maybe because the supervised method is sort of confused or stuck in a local optimum, which can happen very easily in reinforcement learning, whereas Plan2Explore has never seen the reward, therefore hasn't tried to single-mindedly maximize it, and has explored a bunch of different things to do in the world; now it can use that knowledge to outperform Dreamer, the baseline.

The other thing I would like to draw your attention to here is that sometimes Plan2Explore or the other curiosity methods actually get a reward before the reward even kicks in, as we saw here, for example right here. This tells me that this is probably a property of the environment itself: these reinforcement learning environments don't really have much noise going on; it's a simulator with one figure that can walk or not. So it might be that the only interesting thing to do in these environments is to actually perform one of these tasks, and that's why they sometimes already get a reward. It's true that they don't see the reward for this entire duration, but implicitly, via the developers building the simulator, it has been made such that the only interesting thing to do coincides with getting reward. So I'm sort of skeptical that this is a general exploration policy, because in the real world, there are combinatorially many actions to take and paths to follow, and if you just go by "what do I not know yet", I don't think you can put all of that into one model; it's just too much. And the states where something really interesting happens are so few and far between that they don't compare to the number of states where you simply don't know what's going to happen, but probably nothing interesting will happen, just different things; that will screw over this method completely.

In any case, yeah, this is just another experiment; they have a bunch of other experiments. And that was my review of the paper. Tell me if you agree or disagree, or if I've misunderstood something; that's entirely possible. I'm just always a bit skeptical of these things: the experiments are very compute-intensive, of course, so you never know there; then there are these specific environments right here, you never know; and then the fact that the real world actually has very different stochasticity, which they simply assume away right here. But other than that, big props to the fact that the code is out, and as I said, leave a comment if you agree or disagree.
Please subscribe and share this video if you liked it, and I'll see you next time. Bye bye.
}, { "start": 409.56, "end": 416.40000000000003, "text": " So you train it to seek out novel states that usually, when you just have like an actor" }, { "start": 416.4, "end": 422.96, "text": " critic method, and that's what this paper here criticizes, if you have it's called retrospective" }, { "start": 422.96, "end": 424.67999999999995, "text": " novelty." }, { "start": 424.67999999999995, "end": 430.47999999999996, "text": " That means if you train a model free algorithm, which is an actor critic, right, if we just" }, { "start": 430.47999999999996, "end": 439.28, "text": " plug in into here, something like a three C, that will simply have a policy and a value" }, { "start": 439.28, "end": 440.35999999999996, "text": " function." }, { "start": 440.35999999999996, "end": 445.44, "text": " And in this case, if we train it on intrinsic reward, the policy will simply tell you where" }, { "start": 445.44, "end": 447.76, "text": " to go to find more green stuff." }, { "start": 447.76, "end": 449.48, "text": " But you can only train it." }, { "start": 449.48, "end": 454.32, "text": " So you use this to run an episode." }, { "start": 454.32, "end": 458.64, "text": " And then you observe how many green things you found in that episode, right, if your" }, { "start": 458.64, "end": 463.92, "text": " episode goes here, and then you put that back into your buffer to learn from." }, { "start": 463.92, "end": 466.96, "text": " But at that point, you've already collected the green things, right." }, { "start": 466.96, "end": 474.52, "text": " So the reward signal is actually a bit off because you want to train your agent that" }, { "start": 474.52, "end": 476.76, "text": " it should seek out novel things." }, { "start": 476.76, "end": 481, "text": " But as soon as you've explored them, they're not really novel anymore, because you have" }, { "start": 481, "end": 482.15999999999997, "text": " now explored them." }, { "start": 482.15999999999997, "end": 488.52, "text": " But still, you're going to train your agent, telling your agent that this area right here" }, { "start": 488.52, "end": 491.56, "text": " has lots of has given me lots of rewards." }, { "start": 491.56, "end": 496.02, "text": " So the agent is going to be encouraged to repeat that." }, { "start": 496.02, "end": 502.32, "text": " They say this right here, the retrospective novelty model free exploration methods not" }, { "start": 502.32, "end": 507.24, "text": " only require large amounts of experience to adapt to downstream tasks, they can also be" }, { "start": 507.24, "end": 509.52, "text": " inefficient during exploration." }, { "start": 509.52, "end": 517.48, "text": " These agents usually first act in the environment, collect trajectories, and then calculate an" }, { "start": 517.48, "end": 522.02, "text": " intrinsic reward as the agent's current estimate of novelty." }, { "start": 522.02, "end": 526.9399999999999, "text": " This approach misses out on efficiency by operating retrospectively." }, { "start": 526.94, "end": 534.5200000000001, "text": " That is, the novelty of inputs is computed after the agent has already reached them." }, { "start": 534.5200000000001, "end": 539.6400000000001, "text": " Hence it seeks out previously novel inputs that have already been visited and would not" }, { "start": 539.6400000000001, "end": 541.24, "text": " be novel anymore." }, { "start": 541.24, "end": 546.96, "text": " Instead, one should directly seek out future inputs that are expected to be novel." 
}, { "start": 546.96, "end": 554.48, "text": " Now, so what this paper is doing, it's basically saying, can we build a model that estimates" }, { "start": 554.48, "end": 560.36, "text": " the future novelty of a state that we maybe haven't seen so far?" }, { "start": 560.36, "end": 562.3000000000001, "text": " And here is where that goes." }, { "start": 562.3000000000001, "end": 565.16, "text": " So what do they do in this policy?" }, { "start": 565.16, "end": 570.72, "text": " So the policy isn't just trained to maximize the novelty in the world." }, { "start": 570.72, "end": 573.44, "text": " Instead, sorry, it uses planning." }, { "start": 573.44, "end": 576.64, "text": " It uses planning in latent space." }, { "start": 576.64, "end": 582.6800000000001, "text": " So what this model does is it learns a world model in latent space." }, { "start": 582.68, "end": 587.88, "text": " The world model takes as input these features that you saw right here that the encoder gives" }, { "start": 587.88, "end": 593.8, "text": " you and it predicts the future hidden latent states." }, { "start": 593.8, "end": 596.4399999999999, "text": " These are the things you saw here." }, { "start": 596.4399999999999, "end": 603.5999999999999, "text": " So these things that are always made by incorporating the new features with the old state, it tries" }, { "start": 603.5999999999999, "end": 604.9599999999999, "text": " to predict it." }, { "start": 604.9599999999999, "end": 610.7199999999999, "text": " So technically, these things here should have like some sort of a, no, this is actually" }, { "start": 610.72, "end": 617, "text": " exact here, but these here should have some sort of a tick or something to indicate these" }, { "start": 617, "end": 618.48, "text": " are estimated." }, { "start": 618.48, "end": 621.08, "text": " These are estimated future states." }, { "start": 621.08, "end": 623.76, "text": " And this model right here, this is an estimate." }, { "start": 623.76, "end": 625.96, "text": " This is a world model." }, { "start": 625.96, "end": 627.28, "text": " They use Dreamer for this." }, { "start": 627.28, "end": 630.0400000000001, "text": " And I have made a video about Dreamer." }, { "start": 630.0400000000001, "end": 631.5600000000001, "text": " Dreamer tries exactly that." }, { "start": 631.5600000000001, "end": 639.32, "text": " It tries to estimate what is the future, but not in actual world space, but in latent space." }, { "start": 639.32, "end": 646, "text": " And yeah, so it tries to estimate its own future." }, { "start": 646, "end": 650.8000000000001, "text": " And the cool thing here is that this is probabilistic or you can make it probabilistic." }, { "start": 650.8000000000001, "end": 656.4000000000001, "text": " So you can technically from this one age that you have here, you can run out many futures" }, { "start": 656.4000000000001, "end": 658.1600000000001, "text": " in your imagination." }, { "start": 658.1600000000001, "end": 663.0400000000001, "text": " And since you don't need the observations, you only need the latent space, you can simply" }, { "start": 663.0400000000001, "end": 668.84, "text": " forward roll your RNN and sample from it and you have many trajectories in the future." }, { "start": 668.84, "end": 675.96, "text": " Now the fact that you have many trajectories leads to even a different thing." 
}, { "start": 675.96, "end": 683.32, "text": " So what you can do for each of these hidden states, they have a head here that predicts" }, { "start": 683.32, "end": 686.38, "text": " the so-called latent disagreement." }, { "start": 686.38, "end": 687.38, "text": " What does this do?" }, { "start": 687.38, "end": 689.72, "text": " This consists of a whole bunch of models." }, { "start": 689.72, "end": 691.96, "text": " These are ensemble models." }, { "start": 691.96, "end": 694.76, "text": " They're the same for each time step." }, { "start": 694.76, "end": 705, "text": " But what they take in, they take in the latent state of the model and the action that you're" }, { "start": 705, "end": 709.28, "text": " about to do, the action that you imagine you would do." }, { "start": 709.28, "end": 714, "text": " So this is the imagined state and this is the imagined action in that state." }, { "start": 714, "end": 718.16, "text": " And then it will compute the next features." }, { "start": 718.16, "end": 723.4, "text": " So the next, whatever in the next step would be the age." }, { "start": 723.4, "end": 728.68, "text": " So right, right, where do we put it?" }, { "start": 728.68, "end": 739.4399999999999, "text": " Whatever in the next, so if I have this age and I have this state, it tells me if I do" }, { "start": 739.4399999999999, "end": 748.56, "text": " action A1 and if I were to execute this in the real world, what would be my next age" }, { "start": 748.56, "end": 749.68, "text": " that I would get?" }, { "start": 749.68, "end": 755.3199999999999, "text": " So basically by performing an action, I will get the next observation and I will encode" }, { "start": 755.3199999999999, "end": 757.76, "text": " that to get the next features." }, { "start": 757.76, "end": 765, "text": " And this small model would try to predict what are the features of the next state if" }, { "start": 765, "end": 770.2399999999999, "text": " I were to execute this action in this state." }, { "start": 770.2399999999999, "end": 775.24, "text": " Okay, so it's kind of a future predictor." }, { "start": 775.24, "end": 779.5999999999999, "text": " But also not in observation space, but in latent space." }, { "start": 779.6, "end": 786, "text": " So it tries to predict the latent features of the next observation." }, { "start": 786, "end": 791.8000000000001, "text": " And the split here, you might think that there is a bit of a, it's like almost the same," }, { "start": 791.8000000000001, "end": 794.76, "text": " this latent state here and the features." }, { "start": 794.76, "end": 801.12, "text": " But as we discussed before, the latent state can incorporate the last or the history of" }, { "start": 801.12, "end": 807.4, "text": " latent states while the features simply are only a function of the current observation." }, { "start": 807.4, "end": 809.48, "text": " And that's why they predict the features." }, { "start": 809.48, "end": 812.28, "text": " They really want to predict the observation." }, { "start": 812.28, "end": 817.16, "text": " But history has sort of shown that if you try to predict, for example, the pixels of" }, { "start": 817.16, "end": 821.36, "text": " the observation, that won't serve you really well." }, { "start": 821.36, "end": 826.8000000000001, "text": " And therefore, what you need to do is you need to predict the latent features of the" }, { "start": 826.8000000000001, "end": 830.32, "text": " observation that works much better." 
}, { "start": 830.32, "end": 832.52, "text": " So they have a bunch of these models right here." }, { "start": 832.52, "end": 836.08, "text": " They have a bunch of models with different parameterizations." }, { "start": 836.08, "end": 839.9000000000001, "text": " They instantiate k different models of that." }, { "start": 839.9000000000001, "end": 841.5, "text": " And they all run the same." }, { "start": 841.5, "end": 844.64, "text": " So these are all the same inputs through these different models." }, { "start": 844.64, "end": 848.32, "text": " Now, these different models have been initialized at different points." }, { "start": 848.32, "end": 851.36, "text": " So they will make slightly different predictions." }, { "start": 851.36, "end": 858.0400000000001, "text": " And the crucial part is, so if it's really deterministic what the next state is going" }, { "start": 858.0400000000001, "end": 859.0400000000001, "text": " to be, right?" }, { "start": 859.0400000000001, "end": 863.12, "text": " So say you're in this state, you perform this action." }, { "start": 863.12, "end": 867.76, "text": " And so if you have a ball in your hand and you drop the ball, then the ball is going" }, { "start": 867.76, "end": 869.48, "text": " to fall down." }, { "start": 869.48, "end": 870.74, "text": " Really deterministic." }, { "start": 870.74, "end": 877.2, "text": " That means these next these estimated next features, if the models are any good, they" }, { "start": 877.2, "end": 879.12, "text": " all agree." }, { "start": 879.12, "end": 885.5600000000001, "text": " And this variance here between the estimates is very, very small." }, { "start": 885.5600000000001, "end": 891.6800000000001, "text": " And now if if the uncertainty over the next state is very high, and this can be due to" }, { "start": 891.68, "end": 896.3599999999999, "text": " two facts either, it is actually uncertain what's going to happen." }, { "start": 896.3599999999999, "end": 901.52, "text": " So maybe you have a really a piece of paper and you drop that and due to the wind, you" }, { "start": 901.52, "end": 903.1999999999999, "text": " can't know what's happening." }, { "start": 903.1999999999999, "end": 908.28, "text": " Or because your model has simply not learned yet what's going to happen." }, { "start": 908.28, "end": 912.76, "text": " In either of those cases, you don't know what's going to happen." }, { "start": 912.76, "end": 921.28, "text": " Therefore, these these predictions here, sorry, are going to be very different from each other." }, { "start": 921.28, "end": 924.12, "text": " And because of that, this variance will be high." }, { "start": 924.12, "end": 928.14, "text": " And this variance you take as the intrinsic reward." }, { "start": 928.14, "end": 936.66, "text": " So in each step, you basically try to predict over the next actions you can do, which ones" }, { "start": 936.66, "end": 944.4, "text": " leads me to a leads me to a situation where I don't know what's going to happen, where" }, { "start": 944.4, "end": 949.72, "text": " I cannot really predict the variance in my prediction is high." }, { "start": 949.72, "end": 951.6, "text": " So I really don't know what's going to happen." }, { "start": 951.6, "end": 954.6800000000001, "text": " And that is going to be the states you seek out." }, { "start": 954.6800000000001, "end": 957.44, "text": " Okay, so this is the core of the paper." 
}, { "start": 957.44, "end": 964.84, "text": " Basically, you do this planning in latent space in order to find the states or action" }, { "start": 964.84, "end": 969.24, "text": " that leads you to a state where you don't know what's going to happen." }, { "start": 969.24, "end": 974.32, "text": " And you measure that by trying to predict it using slightly different models." }, { "start": 974.32, "end": 981.88, "text": " And if they disagree a whole bunch, then you can use you sort of you say, I don't know" }, { "start": 981.88, "end": 982.88, "text": " what's going to happen." }, { "start": 982.88, "end": 986.72, "text": " And therefore, I want to go there because I want to learn about that state." }, { "start": 986.72, "end": 990.5400000000001, "text": " Now this, this is the this is the entire thing." }, { "start": 990.5400000000001, "end": 994.6400000000001, "text": " It has a bunch of problems, as you can imagine." }, { "start": 994.6400000000001, "end": 997.6800000000001, "text": " So this is the reasoning behind it, right?" }, { "start": 997.68, "end": 1009.52, "text": " Now they try to make a deal out of basically their latent disagreement here agrees with" }, { "start": 1009.52, "end": 1015.2399999999999, "text": " minima, sorry, maximizing the expected information gain." }, { "start": 1015.2399999999999, "end": 1021.64, "text": " They go into the theory right here and say, okay, if I have a state and an action, and" }, { "start": 1021.64, "end": 1026.98, "text": " I had and this W are the dynamics parameters of the world." }, { "start": 1026.98, "end": 1030.92, "text": " So the W characterizes how the world works." }, { "start": 1030.92, "end": 1036.52, "text": " And the H here is the next state, or sorry, the features of the next observation." }, { "start": 1036.52, "end": 1043.76, "text": " And the I is the mutual information between H and W. So this right here measures how much" }, { "start": 1043.76, "end": 1051.48, "text": " information of the next state is contained in the dynamics of the world." }, { "start": 1051.48, "end": 1055.92, "text": " If this is really low, and I have a good world model, then I should be able to predict the" }, { "start": 1055.92, "end": 1060.3200000000002, "text": " next state really well." }, { "start": 1060.3200000000002, "end": 1067.14, "text": " And this, this, they say, okay, selecting the most promising data during exploration." }, { "start": 1067.14, "end": 1072.0800000000002, "text": " We want to select the action that maximizes this information gain." }, { "start": 1072.0800000000002, "end": 1079.88, "text": " So the, the, the more the mutual information here, we want to select the action that maximizes" }, { "start": 1079.88, "end": 1085.68, "text": " that they decompose this mutual information into two things." }, { "start": 1085.68, "end": 1093.64, "text": " They decompose it into this thing right here, which is simply the entropy of the next state" }, { "start": 1093.64, "end": 1095.76, "text": " given the current state and action." }, { "start": 1095.76, "end": 1102.92, "text": " This is simply the total uncertainty, including the fact that it could actually be stochastic" }, { "start": 1102.92, "end": 1107.54, "text": " like dropping a paper, and the fact that you haven't learned yet what happens like if you" }, { "start": 1107.54, "end": 1112.72, "text": " drop a ball, but you haven't learned that yet, that is also uncertain." 
}, { "start": 1112.72, "end": 1119.8, "text": " So this is that part, this is the total uncertainty minus this right here." }, { "start": 1119.8, "end": 1123.96, "text": " And this is the uncertainty if you know the dynamics, right?" }, { "start": 1123.96, "end": 1130.42, "text": " So this is the wind, basically, in the paper example." }, { "start": 1130.42, "end": 1138.56, "text": " So you want something where the total uncertainty is high, but the the kind of uncertainty of" }, { "start": 1138.56, "end": 1142.08, "text": " the of the stochasticity of the world is low." }, { "start": 1142.08, "end": 1150.24, "text": " If you maximize this quantity here, this total quantity, sorry, if you maximize this entire" }, { "start": 1150.24, "end": 1156.84, "text": " quantity, because I called one of them total, that means you are going to seek out actions" }, { "start": 1156.84, "end": 1162.8799999999999, "text": " where what's left is only the uncertainty that you yourself don't know, right?" }, { "start": 1162.8799999999999, "end": 1167.36, "text": " You say, well, this state has a pretty high total uncertainty, but it's not due to the" }, { "start": 1167.36, "end": 1170.26, "text": " fact that the world itself is uncertain." }, { "start": 1170.26, "end": 1174.2, "text": " It must be due to the fact that I don't know yet." }, { "start": 1174.2, "end": 1180.8, "text": " And they make the claim that their model is actually going after these things." }, { "start": 1180.8, "end": 1188.76, "text": " And they say, okay, because we have these these Gaussians here as our estimators, they" }, { "start": 1188.76, "end": 1196.32, "text": " somehow reduce to this total to this uncertainty, but only basically by taking Gaussians, they" }, { "start": 1196.32, "end": 1203.56, "text": " assume, they just assume that this quantity here is constant." }, { "start": 1203.56, "end": 1206.08, "text": " At least that's how I understand it." }, { "start": 1206.08, "end": 1211.12, "text": " They basically assume that every transition in the world has about the same amount of" }, { "start": 1211.12, "end": 1212.36, "text": " uncertainty." }, { "start": 1212.36, "end": 1216.54, "text": " And therefore, we can just focus on the total amount of uncertainty, right?" }, { "start": 1216.54, "end": 1223, "text": " So if we can't predict, if we can't predict the next state, a, we can predict the next" }, { "start": 1223, "end": 1230.52, "text": " state a better than the next state B, and both have about the same amount of intrinsic" }, { "start": 1230.52, "end": 1236.4, "text": " uncertainty, stochasticity in the world, that must mean we should go to B because that's" }, { "start": 1236.4, "end": 1238.16, "text": " where our model hasn't learned yet." }, { "start": 1238.16, "end": 1242.28, "text": " Now, of course, in the real world, that is absolutely not the case." }, { "start": 1242.28, "end": 1248.44, "text": " And I think this model works mainly because they tested in these transitions, or in these" }, { "start": 1248.44, "end": 1254.4, "text": " environments where that might be very close to accurate, that actually most of the transitions" }, { "start": 1254.4, "end": 1260.04, "text": " have the same stochasticity as any other transition." 
}, { "start": 1260.04, "end": 1267.4, "text": " The second, the second part, why this is a bit difficult is because you have to somehow" }, { "start": 1267.4, "end": 1275.8, "text": " keep this latent, sorry, this, this couple of models right here that make this disagreement" }, { "start": 1275.8, "end": 1278.76, "text": " prediction." }, { "start": 1278.76, "end": 1285.04, "text": " So you rely on the fact that you can capture disagreement by looking how those models disagree" }, { "start": 1285.04, "end": 1286.04, "text": " with each other." }, { "start": 1286.04, "end": 1289.02, "text": " And again, they employ Gaussians here." }, { "start": 1289.02, "end": 1296.08, "text": " But it is not said that these things, these things will actually give you the true disagreement" }, { "start": 1296.08, "end": 1297.08, "text": " among themselves." }, { "start": 1297.08, "end": 1303.36, "text": " If you initialize them wrongly, they might miss like if, if your distribution has three" }, { "start": 1303.36, "end": 1308.52, "text": " modes, they act, they might just for all of them, focus on one of them." }, { "start": 1308.52, "end": 1313.12, "text": " And then your disagreement will be completely out of whack, or you could initialize them" }, { "start": 1313.12, "end": 1319.26, "text": " not far enough for or too close together." }, { "start": 1319.26, "end": 1321.4799999999998, "text": " That's the same thing." }, { "start": 1321.4799999999998, "end": 1327.28, "text": " So it all depends on kind of how you manage to handle this uncertainty right here." }, { "start": 1327.28, "end": 1333.32, "text": " So all of this seems a bit problematic, but the whole setup is pretty cool." }, { "start": 1333.32, "end": 1336.92, "text": " Because imagine like all of this is shifting constantly, right?" }, { "start": 1336.92, "end": 1341.84, "text": " The the policy here tries to maximize these rewards." }, { "start": 1341.84, "end": 1343.96, "text": " And that's something I don't understand." }, { "start": 1343.96, "end": 1349.68, "text": " In the paper, they make it sort of explicitly clear that the policy tries to maximize this" }, { "start": 1349.68, "end": 1355.12, "text": " quantity right here, the next uncertainty." }, { "start": 1355.12, "end": 1364.8799999999999, "text": " The planning objective is to maximize expected novelty, or it, which is this thing right" }, { "start": 1364.8799999999999, "end": 1365.8799999999999, "text": " here." }, { "start": 1365.8799999999999, "end": 1374.56, "text": " However, I, I don't actually see why in that case, you'd need planning." }, { "start": 1374.56, "end": 1380.36, "text": " Because with planning, your goal is sort of to look ahead more than one step." 
}, { "start": 1380.36, "end": 1387.4399999999998, "text": " So what I would expect is that they somehow have the aggregated somehow that they not" }, { "start": 1387.4399999999998, "end": 1395, "text": " don't maximize this, but somehow they maximize the future right of t prime of r t prime," }, { "start": 1395, "end": 1403.34, "text": " I, they somehow maximize the yes, they somehow maximize the future, the total future, maybe" }, { "start": 1403.34, "end": 1408.78, "text": " with a disagreement, like if it was a reward, and you actually want to maximize the total" }, { "start": 1408.78, "end": 1416.52, "text": " reward across your episode, I would imagine they use planning to maximize the total future" }, { "start": 1416.52, "end": 1421.44, "text": " uncertainty that they encounter because right here you have your trajectories." }, { "start": 1421.44, "end": 1427.48, "text": " And as they say it, they only maximize the uncertainty after the first step." }, { "start": 1427.48, "end": 1431.8799999999999, "text": " So this here might be, you know, even intrinsically uncertain or a bit uncertain." }, { "start": 1431.8799999999999, "end": 1437.2, "text": " But if you go down the path here, there might be a state where that's super uncertain, and" }, { "start": 1437.2, "end": 1441.52, "text": " you would like to find that right through your different rollouts." }, { "start": 1441.52, "end": 1448.2, "text": " So I'm not sure that the paper is correct or consistent here, actually." }, { "start": 1448.2, "end": 1451.92, "text": " I might be wrong, though, they do have the code, which is a really good thing." }, { "start": 1451.92, "end": 1456.6000000000001, "text": " So I'll link to the code and you can go and explore that." }, { "start": 1456.6000000000001, "end": 1461.72, "text": " They do have this algorithm down here, which is pretty much I mean, this is saying just" }, { "start": 1461.72, "end": 1463.88, "text": " nothing." }, { "start": 1463.88, "end": 1469.6000000000001, "text": " Still exploring, do train the world model, train the latent disagreement ensemble, train" }, { "start": 1469.6000000000001, "end": 1476.88, "text": " the policy in imagination of like, rather, rather, it helps a bit." }, { "start": 1476.88, "end": 1477.88, "text": " Okay." }, { "start": 1477.88, "end": 1485.6000000000001, "text": " But one other thing right here, the policy that tries to maximize the reward, right," }, { "start": 1485.6000000000001, "end": 1489.0800000000002, "text": " so you use planning to look ahead where the uncertainty is." }, { "start": 1489.0800000000002, "end": 1491.5200000000002, "text": " But how do you do the planning?" }, { "start": 1491.52, "end": 1496.32, "text": " You need a policy in imagination space, right?" }, { "start": 1496.32, "end": 1505.2, "text": " This latent disagreement policy here is used to train how you act in latent space, right?" }, { "start": 1505.2, "end": 1511.8799999999999, "text": " How this action that you imagine comes to be, you can't plan in imagination space and" }, { "start": 1511.8799999999999, "end": 1516.22, "text": " in imagination space use planning again, it's just an infinite recursion." }, { "start": 1516.22, "end": 1519.22, "text": " At some point, you need a model that tells you what to do." }, { "start": 1519.22, "end": 1523.92, "text": " And in imagination, they just use an actor critic model, you see they have a value function" }, { "start": 1523.92, "end": 1524.92, "text": " here." 
}, { "start": 1524.92, "end": 1533.2, "text": " They just use an actor critic model to basically one shot predict the next best action to" }, { "start": 1533.2, "end": 1535.6000000000001, "text": " get you to the next step." }, { "start": 1535.6000000000001, "end": 1547.04, "text": " So as they themselves rag on these model free methods, because they only look ahead, how" }, { "start": 1547.04, "end": 1553.96, "text": " is that not exactly the same as me ragging on the fact that they use model free and imagination" }, { "start": 1553.96, "end": 1559.3999999999999, "text": " space, because your world model certainly is retrospective, your world model learns" }, { "start": 1559.3999999999999, "end": 1561.8799999999999, "text": " from the past, right?" }, { "start": 1561.8799999999999, "end": 1569.44, "text": " So the model free method that learns on your imagined world model learns from retrospective" }, { "start": 1569.44, "end": 1577.76, "text": " imagination. And therefore, it itself has sort of the same problem, just one layer deeper" }, { "start": 1577.76, "end": 1583.56, "text": " that it learns from retrospective data and not from data ahead, because your uncertainty" }, { "start": 1583.56, "end": 1589.3200000000002, "text": " about the future might just be because of your retrospect, exactly is because of your" }, { "start": 1589.3200000000002, "end": 1590.64, "text": " retrospective data." }, { "start": 1590.64, "end": 1595.14, "text": " I see the value in having this uncertainty." }, { "start": 1595.14, "end": 1601.44, "text": " But I think there are other methods that also do model free, and don't just maximize an" }, { "start": 1601.44, "end": 1606.3200000000002, "text": " intrinsic reward, but actually maximize a sort of uncertainty." }, { "start": 1606.3200000000002, "end": 1610.3600000000001, "text": " Okay, enough ragging, let's go to the experiment." }, { "start": 1610.3600000000001, "end": 1617.9, "text": " So the cool thing you can do with this is what's called zero shot performance." }, { "start": 1617.9, "end": 1625.3200000000002, "text": " So what they do is in the first step, they do this, they just learn task agnostic, just" }, { "start": 1625.3200000000002, "end": 1628.16, "text": " explorer." }, { "start": 1628.16, "end": 1634, "text": " Without task, then second, they go and into their buffer." }, { "start": 1634, "end": 1638.2, "text": " So when they explore, they all save, they save what they do, there is no reward, but" }, { "start": 1638.2, "end": 1642.24, "text": " they just save, they store their episodes, right?" }, { "start": 1642.24, "end": 1648.6, "text": " And then someone comes with a task, and the task is simply, they specify like you have" }, { "start": 1648.6, "end": 1654.88, "text": " to run forward, and they go to this buffer, and they now label every episode with its" }, { "start": 1654.88, "end": 1655.88, "text": " reward." }, { "start": 1655.88, "end": 1661.28, "text": " So this is different, this is like offline reinforcement learning, right?" 
}, { "start": 1661.28, "end": 1668.3, "text": " So basically, it is how well they call it zero shot, but it is how well can an algorithm" }, { "start": 1668.3, "end": 1680.48, "text": " that has explored with this kind of self supervision, perform in offline reinforcement learning" }, { "start": 1680.48, "end": 1688.08, "text": " on the trajectories that it has already experienced, which is different from performing the same" }, { "start": 1688.08, "end": 1693.96, "text": " trajectories with the reward, because you would learn from the reward, and you would" }, { "start": 1693.96, "end": 1698.1599999999999, "text": " learn to seek out different, like your experience would be different." }, { "start": 1698.16, "end": 1700.28, "text": " If you were going after a reward." }, { "start": 1700.28, "end": 1702.16, "text": " So this is harder." }, { "start": 1702.16, "end": 1709.76, "text": " So they compare this to dreamer, and dreamer is a fully supervised method, the dreamer" }, { "start": 1709.76, "end": 1714.92, "text": " is actually cheating, dreamer actually goes after the reward." }, { "start": 1714.92, "end": 1721, "text": " And all the other methods here, they don't have a reward, and they're just zero shot" }, { "start": 1721, "end": 1724.1200000000001, "text": " offline reinforcement learning generalized to these methods." }, { "start": 1724.12, "end": 1731.32, "text": " And you see the green is the plan to explore, and that outperforms almost all the other" }, { "start": 1731.32, "end": 1738.12, "text": " methods right here, down here, and even comes close to this, to the dreamer that goes up" }, { "start": 1738.12, "end": 1739.12, "text": " here." }, { "start": 1739.12, "end": 1744.36, "text": " It's seen pretty much every graphic that dreamer is the one that's able to cheat, right?" }, { "start": 1744.36, "end": 1747.7199999999998, "text": " So it is performing pretty well." }, { "start": 1747.72, "end": 1756.3600000000001, "text": " But then the the zero shot generalized plan to explore is sometimes on par and certainly" }, { "start": 1756.3600000000001, "end": 1761.88, "text": " outperforms the other intrinsic reward methods." }, { "start": 1761.88, "end": 1770.72, "text": " Now how does that go about when you try actually when you allow the model to fine tune on the" }, { "start": 1770.72, "end": 1771.72, "text": " task." }, { "start": 1771.72, "end": 1778, "text": " So you can see the performance on few shot adaptation from raw pixels without state space" }, { "start": 1778, "end": 1779, "text": " input." }, { "start": 1779, "end": 1786.64, "text": " So basically, you learn without reward for this many steps right here, until this shaded" }, { "start": 1786.64, "end": 1788.1200000000001, "text": " area back here." }, { "start": 1788.1200000000001, "end": 1794.84, "text": " This is how when you do no reward, and then all of a sudden, now you say, okay, now, I'll" }, { "start": 1794.84, "end": 1796.28, "text": " give you reward." }, { "start": 1796.28, "end": 1797.88, "text": " Now please learn." }, { "start": 1797.88, "end": 1804.5200000000002, "text": " This many steps, you have this many steps where you can learn from the reward." }, { "start": 1804.5200000000002, "end": 1806.8400000000001, "text": " So now we're no longer in this offline RL setting." }, { "start": 1806.8400000000001, "end": 1808.68, "text": " This is now online RL." }, { "start": 1808.68, "end": 1813.24, "text": " But we've been pre trained with all of this experience that we had without the reward." 
}, { "start": 1813.24, "end": 1816.8400000000001, "text": " Again, the orange here is the cheater." }, { "start": 1816.8400000000001, "end": 1822.4, "text": " So the orange is is cheating." }, { "start": 1822.4, "end": 1829.64, "text": " And now we don't so before the graphs were higher, because we've, we've actually at each" }, { "start": 1829.64, "end": 1835.5600000000002, "text": " step, for example, how this works is you train until here without reward, and then you do" }, { "start": 1835.5600000000002, "end": 1840.48, "text": " this offline RL offline RL training." }, { "start": 1840.48, "end": 1842.3600000000001, "text": " And that's how this point comes about." }, { "start": 1842.3600000000001, "end": 1847.6200000000001, "text": " Now, I think in the graph down here, they they don't do that." }, { "start": 1847.6200000000001, "end": 1851.88, "text": " So they just measure how well you're doing in the task." }, { "start": 1851.88, "end": 1857.94, "text": " And of course, if after this many steps, you've never looked at the reward, you haven't been" }, { "start": 1857.94, "end": 1863.8400000000001, "text": " able to look at the reward, your reward will be fairly low, right at the task, because" }, { "start": 1863.8400000000001, "end": 1867.2800000000002, "text": " you don't know what to do to get the high reward." }, { "start": 1867.2800000000002, "end": 1870.3200000000002, "text": " The dreamer again, this orange line is able to cheat." }, { "start": 1870.3200000000002, "end": 1874.8400000000001, "text": " That's why it just is basically straight line or goes up at the beginning, it's able to" }, { "start": 1874.8400000000001, "end": 1876.5200000000002, "text": " look at the reward from the beginning." }, { "start": 1876.5200000000002, "end": 1878.72, "text": " And it's here as a baseline comparison." }, { "start": 1878.72, "end": 1885.28, "text": " So you see as soon as you give the reward to the models, they generally shoot up." }, { "start": 1885.28, "end": 1890.84, "text": " And this plan to explore generally shoots up much harder than these others, as you can" }, { "start": 1890.84, "end": 1892.68, "text": " see, pretty much everywhere." }, { "start": 1892.68, "end": 1898.4, "text": " And again, it gets competitive and here even outperforms the dreamer." }, { "start": 1898.4, "end": 1902.08, "text": " Why could it outperform the supervised method?" }, { "start": 1902.08, "end": 1908.9199999999998, "text": " Maybe because this method here is sort of confused or is stuck in a local optimum, which" }, { "start": 1908.9199999999998, "end": 1912, "text": " can happen very easily in reinforcement learning." }, { "start": 1912, "end": 1917.28, "text": " Whereas the plan to explore has never seen the reward, therefore hasn't tried to just" }, { "start": 1917.28, "end": 1922.36, "text": " single mindedly maximize the reward and has explored a bunch of different things to do" }, { "start": 1922.36, "end": 1923.36, "text": " in the world." }, { "start": 1923.36, "end": 1930.6399999999999, "text": " And now we can use that knowledge to outperform the plan, the dreamer, the baseline." }, { "start": 1930.64, "end": 1936.0400000000002, "text": " So the other thing I would like to draw your attention to here is that sometimes you see" }, { "start": 1936.0400000000002, "end": 1945.5, "text": " that the plan to explore or the other curiosity methods actually get a reward before the reward" }, { "start": 1945.5, "end": 1950, "text": " kicks in as we saw here, right?" 
}, { "start": 1950, "end": 1952.44, "text": " For example, right here." }, { "start": 1952.44, "end": 1959.76, "text": " And this tells me that this is probably a property of the environment itself, namely" }, { "start": 1959.76, "end": 1964.16, "text": " these reinforcement learning environments, they don't really have much noise going on," }, { "start": 1964.16, "end": 1965.16, "text": " right?" }, { "start": 1965.16, "end": 1970.64, "text": " They pretty much just have, it's a simulator with one figure that can walk or not." }, { "start": 1970.64, "end": 1978.12, "text": " And therefore, it might be that the only interesting thing to do in these models is to actually" }, { "start": 1978.12, "end": 1979.72, "text": " perform one of these tasks." }, { "start": 1979.72, "end": 1986.04, "text": " And that's why it might be that sometimes they already get a reward." }, { "start": 1986.04, "end": 1995, "text": " So it's true that they don't see the reward for this entire duration, but also implicitly" }, { "start": 1995, "end": 2000.32, "text": " via the developers building the simulator, they have made it such that the only interesting" }, { "start": 2000.32, "end": 2005.44, "text": " thing to do is the same thing as getting a reward, right?" }, { "start": 2005.44, "end": 2013.44, "text": " So I'm sort of skeptical that this is like a general exploration policy, because also" }, { "start": 2013.44, "end": 2021.0800000000002, "text": " in the real world, there are just combinatorically hugely many, many actions to do many paths" }, { "start": 2021.0800000000002, "end": 2022.44, "text": " to follow." }, { "start": 2022.44, "end": 2030.72, "text": " And if you just go by what do I not know yet, I think you can't you can't put that all into" }, { "start": 2030.72, "end": 2032.76, "text": " one model is just too much." }, { "start": 2032.76, "end": 2040.16, "text": " And the states where you really were really something interesting happens are so few and" }, { "start": 2040.16, "end": 2045.76, "text": " far in between, and that it doesn't compare to the amount of states where you simply don't" }, { "start": 2045.76, "end": 2051.4, "text": " know most states, you don't know what's going to happen, but probably nothing's nothing" }, { "start": 2051.4, "end": 2056.52, "text": " interesting is going to happen just different things, which will screw over this method" }, { "start": 2056.52, "end": 2058.6, "text": " completely." }, { "start": 2058.6, "end": 2065.64, "text": " In any case, they Yeah, sorry, this is just this is another experiment." }, { "start": 2065.64, "end": 2072.2799999999997, "text": " They have a bunch of other experiments. And yeah, that that was my this was my review" }, { "start": 2072.2799999999997, "end": 2078.04, "text": " of the paper. Tell me if you agree or disagree or if I've misunderstood something that's" }, { "start": 2078.04, "end": 2079.6, "text": " entirely possible." }, { "start": 2079.6, "end": 2088, "text": " I'm just always a bit skeptical of these things a bit." }, { "start": 2088, "end": 2093.52, "text": " So the experiments, they're very compute intensive, of course, so you never know there and then" }, { "start": 2093.52, "end": 2096.28, "text": " these specific environments right here." }, { "start": 2096.28, "end": 2101.12, "text": " You never know there and then the fact that the real world actually has very different" }, { "start": 2101.12, "end": 2106.96, "text": " stochasticity, which they simply assume away right here." 
}, { "start": 2106.96, "end": 2112.08, "text": " Yeah, but other than that, big props to the fact that the code is out." }, { "start": 2112.08, "end": 2116.6, "text": " And as I said, leave a comment if you agree or disagree." }, { "start": 2116.6, "end": 2120.48, "text": " Please subscribe and share this video if you liked it, and I'll see you next time." }, { "start": 2120.48, "end": 2123.8, "text": " Bye bye." } ]
F5mxzvgl_oU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
S.H.E. - Search. Human. Equalizer.
[ "Science & Technology" ]
[ "pantene", "search", "google", "bias", "machine learning", "artificial intelligence", "search engine", "ranking", "equality", "diversity" ]
Short opinion on Pantene's tool to de-bias Google search results. https://www.apnews.com/Business%20Wire/c53a0e8f5fe04bf68e8311f214c806cf https://shetransforms.us/
Hi everyone, just a quick news update from the AI world, which is the following: Pantene launches S.H.E., the Search Human Equalizer, to shine a light on bias in search. So Pantene, the cosmetics corporation, launches this thing which is supposed to correct your search. And it's introduced here in this YouTube video which, as you can see down here, has 400 likes, 3.5K dislikes, and of course comments are disabled. So that's already kind of weird. Let's say weird. If you go to the website they made, basically, let me refresh this and you can see the intro. They say let's take the bias out of search. So if you search for greatest engineers you'll get all men. If you search for schoolgirl you'll get these kinds of sexualized images. If you search for Asian women in Spanish, same thing. So basically they have a browser extension that modifies your search results so that, for example, schoolgirl looks like this. Of course, if I were to do this, I would actually let people explore the search box right here, but of course they want you to download the extension. So to me the interesting part is: how does this work? You're asked to install a Chrome extension, which I won't do. But basically, down here they say view the terms that S.H.E. is equalizing, and if you click on that you get to a list. So it very much seems like this is absolutely manual, handcrafted work, because there is a lot of work on correcting bias in, for example, search and machine learning, and those approaches usually have some data-driven component that actually changes the models, or re-ranks based on some kind of learned input. But this here is simply a list of terms, for example famous actor, famous athletes and so on, that it will then re-rank. And I'm pretty sure this is just human manual labor: someone comes up with a new term, like, oh, we should include this term. You can actually flag searches yourself in the Chrome extension. They say here: flag this search. There's a button, so you can suggest one, and they will decide, oh yeah, okay, that is really biased, or, that is really not biased, and it will then re-rank those search results for you. I mean, academically this is a terrible idea, absolutely terrible, because how are you going to do this, manually replace every single query? I don't know, it reminds me a bit of Newspeak. But yeah, this approach is doomed to fail. But of course it's just a company trying to sell you stuff. I mean, this is a PR gag, not really trying to do anything state-of-the-art or meaningful or even effective, right? If you search for a slightly different phrase than these, it will still show you the old kind of results. From the terms you can also pretty clearly see where they come from. They have their own name. They have Pantene. I hadn't seen this yet. They have Pantene in here. So yeah, if you want less biased search results for these exact terms, then install the extension. I do not recommend you do so. But I would like them to take on one more query that I came up with, one that I found is pretty biased. And that's: the most dangerous criminals. All men. Goodbye.
[ { "start": 0, "end": 7.12, "text": " Hi everyone, just a quick more of a news update in the AI world." }, { "start": 7.12, "end": 8.96, "text": " Which is the following." }, { "start": 8.96, "end": 11.120000000000001, "text": " Pantene launches S.H.E." }, { "start": 11.120000000000001, "end": 16.740000000000002, "text": " The Search Human Equalizer to shine a light on bias in search." }, { "start": 16.740000000000002, "end": 24.8, "text": " So Pantene, the kind of cosmetic corporation, launches this thing which is supposed to correct" }, { "start": 24.8, "end": 26.84, "text": " your search." }, { "start": 26.84, "end": 36.04, "text": " And it's introduced here in this YouTube video which as you can see down here has 400 likes," }, { "start": 36.04, "end": 41.84, "text": " has 3.5K dislikes and of course comments are disabled." }, { "start": 41.84, "end": 47.8, "text": " So that's kind of already weird." }, { "start": 47.8, "end": 50, "text": " Let's say weird." }, { "start": 50, "end": 57.4, "text": " If you go to the website here that they made, basically let me refresh this and you can" }, { "start": 57.4, "end": 59.2, "text": " see the intro." }, { "start": 59.2, "end": 62.64, "text": " They say let's take the bias out of search." }, { "start": 62.64, "end": 68.06, "text": " So if you search for greatest engineers you'll get all men." }, { "start": 68.06, "end": 76, "text": " If you search for schoolgirl you'll get like this kind of sexualized images." }, { "start": 76, "end": 85.14, "text": " If you search for Asian women in Spanish, same." }, { "start": 85.14, "end": 91.64, "text": " So basically they have a browser extension that modifies your search results so that" }, { "start": 91.64, "end": 96, "text": " for example schoolgirl looks like this." }, { "start": 96, "end": 102.84, "text": " Of course, I don't know, if I were to do this I would actually let people explore the search" }, { "start": 102.84, "end": 104.56, "text": " box right here." }, { "start": 104.56, "end": 110.08, "text": " But of course I want you to download this extension." }, { "start": 110.08, "end": 116.24000000000001, "text": " So to me the interesting part is how does this work?" }, { "start": 116.24000000000001, "end": 123.5, "text": " So you're asked to install a Chrome extension which I won't do." }, { "start": 123.5, "end": 131.36, "text": " But basically down here they say view the terms that SHE is equalizing." }, { "start": 131.36, "end": 133.8, "text": " If you click on that you get to a list." }, { "start": 133.8, "end": 140.20000000000002, "text": " So it very much seems like this is absolutely manual handcrafted work because there's a" }, { "start": 140.20000000000002, "end": 144.84, "text": " lot of work in kind of correcting bias in for example in search, in machine learning" }, { "start": 144.84, "end": 145.92000000000002, "text": " and so on." }, { "start": 145.92000000000002, "end": 152.60000000000002, "text": " These approaches usually have some data driven approach that actually will change the models" }, { "start": 152.60000000000002, "end": 158.36, "text": " and so on or will re-rank based on some kind of learned input." }, { "start": 158.36, "end": 166.68, "text": " But this here is simply a list of terms, for example famous actor, famous athletes and" }, { "start": 166.68, "end": 169.28, "text": " so on that it will then kind of re-rank." }, { "start": 169.28, "end": 172.56, "text": " And I'm pretty sure this is just human manual labor." 
}, { "start": 172.56, "end": 178.76000000000002, "text": " Someone comes up with a new term like oh this term we should you can actually flag yourself" }, { "start": 178.76000000000002, "end": 180.02, "text": " in the Chrome extension." }, { "start": 180.02, "end": 183.48000000000002, "text": " So they say here flag this search." }, { "start": 183.48000000000002, "end": 187.64000000000001, "text": " You can there's a button so you can suggest one and they will say oh yeah okay that is" }, { "start": 187.64, "end": 191.2, "text": " really not biased or that is really biased." }, { "start": 191.2, "end": 196.51999999999998, "text": " Will now re-rank the search results for you." }, { "start": 196.51999999999998, "end": 200.95999999999998, "text": " I mean academically this is a terrible idea, absolutely terrible." }, { "start": 200.95999999999998, "end": 207.76, "text": " Because how are you going to do this like manually replace every single there is like" }, { "start": 207.76, "end": 213.23999999999998, "text": " I don't know it reminds a bit of new speak." }, { "start": 213.23999999999998, "end": 215.92, "text": " But yeah this approach is doomed to fail." }, { "start": 215.92, "end": 219.32, "text": " But of course it's just a company trying to sell you stuff." }, { "start": 219.32, "end": 228.83999999999997, "text": " It's not, I mean this is not a, this is a PR gag not really trying to do anything, anything" }, { "start": 228.83999999999997, "end": 232.6, "text": " state of the art or meaningful or even effective right." }, { "start": 232.6, "end": 239.04, "text": " If you search a little different thing than this it will still show you the old kind of" }, { "start": 239.04, "end": 241.23999999999998, "text": " result." }, { "start": 241.24, "end": 248.08, "text": " So yeah from the terms you can also pretty clearly see where they come from." }, { "start": 248.08, "end": 249.08, "text": " They have their own name." }, { "start": 249.08, "end": 250.08, "text": " They have Pantene." }, { "start": 250.08, "end": 251.08, "text": " I didn't see this yet." }, { "start": 251.08, "end": 256.04, "text": " They have Pantene in here." }, { "start": 256.04, "end": 265.2, "text": " So yeah if you want less biased search results for these exact terms then install the extension." }, { "start": 265.2, "end": 268.6, "text": " I do not recommend you do so." }, { "start": 268.6, "end": 275.96000000000004, "text": " But I would like them to take on one more query that I came up with that is pretty pretty" }, { "start": 275.96000000000004, "end": 277.38, "text": " biased I found." }, { "start": 277.38, "end": 281.16, "text": " And that's the most dangerous criminals." }, { "start": 281.16, "end": 282.16, "text": " All men." }, { "start": 282.16, "end": 302.8, "text": " Goodbye." } ]
NJCLUzkn-sA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
EfficientZero: Mastering Atari Games with Limited Data (Machine Learning Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "muzero", "alphazero", "berkeley", "pieter abbeel", "dreamer", "dreamerv2", "atari", "reinforcement learning", "deep reinforcement learning", "world model", "learned world model", "latent world model", "alphago", "deep rl", "model-based reinforcement learning", "how does muzero work", "efficientzero", "efficientzero model", "atari 100k", "sample-efficient reinforcement learning" ]
#efficientzero #muzero #atari Reinforcement Learning methods are notoriously data-hungry. Notably, MuZero learns a latent world model just from scalar feedback of reward- and policy-predictions, and therefore relies on scale to perform well. However, most RL algorithms fail when presented with very little data. EfficientZero makes several improvements over MuZero that allows it to learn from astonishingly small amounts of data and outperform other methods by a large margin in the low-sample setting. This could be a staple algorithm for future RL research. OUTLINE: 0:00 - Intro & Outline 2:30 - MuZero Recap 10:50 - EfficientZero improvements 14:15 - Self-Supervised consistency loss 17:50 - End-to-end prediction of the value prefix 20:40 - Model-based off-policy correction 25:45 - Experimental Results & Conclusion Paper: https://arxiv.org/abs/2111.00210 Code: https://github.com/YeWR/EfficientZero Note: code not there yet as of release of this video Abstract: Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal. We propose a sample efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 190.4% mean human performance and 116.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience and outperforms the state SAC in some tasks on the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with such little data. EfficientZero's performance is also close to DQN's performance at 200 million frames while we consume 500 times less data. EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner and it is available at this https URL. We hope it will accelerate the research of MCTS-based RL algorithms in the wider community. Authors: Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel, Yang Gao Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we're going to look at Mastering Atari Games with Limited Data by Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel and Yang Gao. This paper presents EfficientZero, a model that can do reinforcement learning with severely limited data. The paper tackles the Atari 100k benchmark, which means learning the Atari benchmark as a reinforcement learning task, as for example deep Q-networks did, but with only 100k transitions. That is about two days' worth of real-time data to work with, and after that the model is supposed to be able to play Atari. EfficientZero is a variant of MuZero, which is an insanely data-intensive reinforcement learning algorithm, and it introduces various tricks and amendments to make it more sample efficient. When you look at the Atari 100k results, you can see the gist of it right here: a lot of the other reinforcement learning algorithms fail to even reach human-level performance, whereas this new algorithm outcompetes not only the other RL algorithms in this low-data regime, but also the humans. The authors say EfficientZero's performance is close to DQN's performance at 200 million frames while consuming 500 times less data, and that its low sample complexity and high performance can bring RL closer to real-world applicability. They even say they implement the algorithm in an easy-to-understand manner, and it is available at the GitHub address given in the paper. So the code is out there; especially if you want to do reinforcement learning but don't have much compute time or money, this might be for you. We'll go through the paper and see what the improvements are. There isn't a single improvement; there are many, three big ones to be exact. And yeah, if you like content like this, don't hesitate to subscribe and tell your friends, family and professors, I guess. Alright, so we'll first take a small look at what MuZero does, just as a recap. I have done a video on MuZero, but if you haven't seen that, here is a very short introduction to the algorithm. In a classic reinforcement learning setting, you have your basic setup: an environment and an actor. The environment gives the actor some sort of observation at a time step, call it t. The actor uses that observation to come up with some sort of action at time step t, and then the environment gives the actor back a reward for that time step, plus the next observation at t+1, and that goes on and on. So the question is: how is the actor supposed to come up with this action, given the past observations it has seen from the environment, so as to maximize all the reward it gets? In a regular, or let's say simpler, reinforcement learning algorithm, people do model-free reinforcement learning, which essentially means they take the series of observations seen so far (observation one, observation two, and so on), stick it in a big neural network, and train it to output some sort of action, training the network to maximize the reward, usually with some sort of policy gradient. A minimal sketch of that recipe follows below.
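To make the model-free recipe concrete, here is a minimal REINFORCE-style sketch in PyTorch. This is my illustration of the general idea, not code from the paper; the sizes and the data tensors are placeholders.

import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4  # placeholder sizes, purely for illustration

# Model-free: observations go straight into a network that outputs action logits.
policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, n_actions)
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(observations, actions, returns):
    # One policy-gradient step: raise log-probs of taken actions, weighted by return.
    logits = policy(observations)                          # (T, n_actions)
    log_probs = torch.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()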
This is a rather direct way; we call it model-free reinforcement learning, because you directly predict the action without an explicit model of the world. Now, when you do have a model of the world, that is, when the environment is well described, for example a chessboard, you know the rules, you know everything that's going to happen, and you can exploit that. Tic-tac-toe is maybe an even better example: from the observation I can actually construct the tic-tac-toe board I'm in, and then I can search, I can try things out. What if I put something here? Then my opponent is surely going to play there. And what if I put something here instead? Then my opponent is going to do that, and then they win. So that is one way to do it, and usually you visualize this as a tree. You are at a root node, that's your state, and you have several options; for each of your options your opponent has several options, or, in a one-player game, you have several options again, and so on. What you want is to search this tree for the best possible path, and this is what things like AlphaGo and AlphaZero did: they have an explicit model and they search through it. The neural networks no longer predict actions directly; they help you search through the tree, essentially voting on which paths to explore, because the tree quickly becomes too large to explore as a whole. If it's more than a few moves ahead, the number of possibilities gets giant, especially in a game like Go. So the neural networks are there to guide the tree search, and the techniques for that center around Monte Carlo tree search, where at some point you abort the search and simply play one game to the end as an approximation of what happens; I'm not going to go into that in depth right here. Now, MuZero says: this whole tree search only works if I have an explicit model of the world, such as the tic-tac-toe board, where it is clearly defined how everything works. There I can have a simulator, I can rewind, I can try again. That doesn't happen when you're interacting with any sort of real-world thing, or even the Atari benchmark. In Atari there are hacks where you can save the ROM state and so on, but essentially you're not supposed to go back or forward in time; you're not supposed to be able to try something out and then say, well, that didn't work, I'll search a different path in the tree instead. So what people do is learn a model of the environment: in the absence of a given model, they learn one, and there are many different ways of doing this. What MuZero does is learn a latent model of the environment. How does that look? You have the current observation at time t, and MuZero uses a neural network, which they call h, to map that current observation into a hidden state. A rough sketch of the pieces this builds toward follows below.
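Here is a rough sketch of MuZero's three learned functions: a representation function h, a dynamics function g, and a prediction function f. The naming follows the MuZero paper, but the module shapes are invented for illustration; the real networks are of course much bigger.

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 64, 4  # placeholder sizes

class LatentWorldModel(nn.Module):
    # All planning happens in hidden-state space; observations are never reconstructed.
    def __init__(self, obs_dim=8):
        super().__init__()
        # h: representation function, observation -> hidden state
        self.h = nn.Sequential(nn.Linear(obs_dim, STATE_DIM), nn.ReLU())
        # g: dynamics function, (hidden state, action) -> (next hidden state, reward)
        self.g = nn.Linear(STATE_DIM + N_ACTIONS, STATE_DIM + 1)
        # f: prediction function, hidden state -> (policy logits, value)
        self.f = nn.Linear(STATE_DIM, N_ACTIONS + 1)

    def initial(self, obs):
        return self.h(obs)

    def step(self, state, action_onehot):
        out = self.g(torch.cat([state, action_onehot], dim=-1))
        return out[..., :-1], out[..., -1]   # next hidden state, predicted reward

    def predict(self, state):
        out = self.f(state)
        return out[..., :-1], out[..., -1]   # policy logits, value

During search, step is applied repeatedly to expand the latent tree, and predict supplies the policy and value signals; the prose below walks through how these pieces are used to plan.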
And then they plan using the hidden state. They say: I'm not going to predict what the next observation is going to be, like in the tic-tac-toe board; I'm only going to predict the next hidden state, at t+1, then t+2, then t+3. Depending on which action I take, what is the next hidden state of the environment going to be? And from that hidden state I'm also going to predict the reward for transitioning there, my own policy (which is a bit weird that you have to do, but you have to), and the value, the value being my future reward from that point on. These are the things MuZero predicts, and with that it is able to search this latent tree. Note the progression here; we might label these as: something like REINFORCE, then AlphaZero, then MuZero. The difference to AlphaZero is that we no longer have an explicit model; in order to do tree search, we have to learn one, and the model MuZero learns lives purely in latent space. It doesn't predict future observations; it learns everything only from the prediction targets it has: the reward, its own policy, and the future value. Those are the only learning signals for the world model. That is good because it focuses the algorithm on what's essential, getting the maximum reward possible, and the more the learning signals center around that, the better. But it also means that learning an entire world model from signals as sparse as the reward is a huge ask, so it needs a lot of data, and that is essentially the catch. We're not going to go into exactly how MuZero does Monte Carlo tree search; it balances exploration and exploitation with an upper-confidence-bound formula that you can see right here. EfficientZero now says there are three main weaknesses with MuZero. First, a lack of supervision on the environment model. That's what I just said: the latent model of the environment is learned purely from the reward and value signals, which are single numbers, and asking the model to learn a transition function from those alone is a big ask that of course needs a lot of data. Second, hardness in dealing with aleatoric uncertainty. I've given up on trying to remember which one is aleatoric and which one is epistemic, so let's just read the paragraph: the predicted rewards have large prediction errors, so if there is uncertainty in the environment, for example if the environment is hard to model, the reward prediction errors accumulate when the Monte Carlo tree search tree is expanded to a large depth, resulting in suboptimal performance in exploration and evaluation.
What they mean is this: if the reward I predict right here has a bit of an error, and I go on searching these branches, and the reward I predict at the next step also has a bit of an error, and so on down the tree, then every reward carries a bit of error. At the end I have a path; I don't go to the very end, I stop after a while and add up the rewards that led me here, plus the value I predict at that node. So the value of a path is the sum of the rewards up to that point plus the value from there on out, and if all of those little rewards have little errors on them, the errors quickly add up to a big one. That's their second criticism, and something that has to be solved. Thirdly, off-policy issues with the multi-step value. That is a general thing in these reinforcement learning algorithms: the more distributed you make them, the worse it gets. People usually have a learner box in the middle, where the neural network lives, plus a lot of actor machines that interact with the environment and send back data, with a replay buffer somewhere in between. That means the neural network at the learner is not the same one that generated the data, because the data is kind of old; by the time you use it for training, the network has already learned from other data, so you get an off-policy issue even though it's an on-policy algorithm. MuZero does a little bit to correct this, but they say more has to be done. So now we tackle these three things. The first is the lack of supervision on the environment model, and what they do is add a self-supervised consistency loss. Remember that we map the observation at time t to a hidden state at time t, and then use the latent model to predict, for a given action, the state at time t+1; that's an estimate. This paper says: wait a minute, if we simply look at what happens in the real world, the observation at t+1, and send it through the same encoding function, that gives us the hidden state at time t+1. Technically these two things should be equal: the hidden state at t+1 and the estimated hidden state at t+1 should be kind of the same. So they use a self-supervised consistency loss adapted from SimSiam. SimSiam is a self-supervised learning framework that is usually applied to two differently augmented versions of the same image, pulling their representations together until the model learns to ignore the augmentation; that's how you train self-supervised image models. But here we don't augment differently: we take the observation at time t and the observation at time t+1, map the first one through the function that gives us the estimate of the next state, and then use a similarity loss to pull those two things together. A small sketch of that loss follows below.
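Here is a minimal sketch of that consistency loss, reusing the LatentWorldModel sketch from above. This is my simplification in the SimSiam spirit; it omits SimSiam's projection and prediction heads and is not the released EfficientZero code.

import torch
import torch.nn.functional as F

def consistency_loss(model, obs_t, action_onehot, obs_tp1):
    # Predicted next hidden state, via the learned dynamics.
    s_t = model.initial(obs_t)
    s_hat_tp1, _ = model.step(s_t, action_onehot)
    # Target: encode what really happened. Stop-gradient means only the
    # prediction branch (representation + dynamics) is trained.
    with torch.no_grad():
        s_tp1 = model.initial(obs_tp1)
    # Pull the two together with a negative cosine similarity, SimSiam-style.
    return -F.cosine_similarity(s_hat_tp1, s_tp1, dim=-1).mean()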
So this function that gives us the next state, and the representation function, are now trained to make those two things, the next hidden state and the estimate of the next hidden state, similar to each other. In fact, the left branch right here is the one that's trained, but that includes both the representation function and the next-state function. Now you might ask, and this is kind of the first question everyone has about MuZero: why was this not done there? If you look at MuZero's loss, you can pretty easily see that it would be possible. I think the MuZero authors deliberately did not introduce a loss like this, because their stance is: if we learn from just the reward-related signals, that is going to be a better algorithm, even if it uses more data, because in the end it trains exactly for what's important, the end goal. Introducing a loss like this clearly trades off against that actual target, and it trades it for sample efficiency, because the supervision signal becomes much, much larger: we now work with whole hidden states, which are entire vectors, so that's a much richer signal. That's the first improvement. The second improvement is what they call end-to-end prediction of the value prefix. They make an example right here of how hard step-wise reward prediction is: say the ball flies in some direction, and you have to predict whether the green player is going to catch it or not; that makes a huge difference to the value. As a human, at this point you know the green player is not going to catch that ball, and you're fairly sure, but it's quite hard to predict a few frames earlier, and it's even harder to predict at exactly which step in time the player misses the ball. That's the argument they make: adding up the rewards of our own step-by-step predictions can introduce a lot of mistakes. But that's exactly what the Q-value in the tree search does: we add up the rewards along the path so far and then add the value at that particular node. That is error prone, because the sum accumulates all the little per-step prediction errors, and as the example shows, we often aren't even sure at which exact step something happens. So what they do is pretty simple: instead of adding up the rewards over the k steps into the future, they take the hidden states predicted over those k steps and shove them into a neural network, and that neural network outputs the sum of the rewards directly. A sketch of this follows below.
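Here is a sketch of the value-prefix idea: a small LSTM over the imagined hidden states that directly outputs the cumulative reward. This is a simplified stand-in for illustration, not the paper's exact architecture, and the sizes are placeholders.

import torch.nn as nn

class ValuePrefix(nn.Module):
    # Predicts the running sum of rewards of an imagined rollout end to end,
    # instead of summing k individually predicted rewards.
    def __init__(self, state_dim=64, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, states):             # states: (batch, k, state_dim)
        out, _ = self.lstm(states)
        # One value-prefix prediction per step, so every step can be
        # supervised with the observed running sum of rewards up to it.
        return self.head(out).squeeze(-1)  # (batch, k)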
So instead of summing the rewards directly, we have a neural network output the total sum, much like we have a neural network that outputs the value function by looking ahead. This network, by contrast, looks back: from the current state over the states we rolled out in imagination, it predicts the entire value prefix. They use an LSTM for that, because it can take in an arbitrary number of states, and the LSTM gets rich per-step supervision, because we have a reward at each step; they say that works quite well. So that's the second thing. The third thing is the model-based off-policy correction. This one is a little more tricky, but we can read through it to see what it does. It is an off-policy correction mechanism, and they have two different mechanisms for it. As I already said, you have to do off-policy correction because the data you learn from comes out of the replay buffer with some delay and is a little older than the network you're training, and that turns out to be quite a big problem. What we usually do is sample a trajectory from the replay buffer and compute the target value z for the value function. As the paper puts it, the value target suffers from off-policy issues, since the trajectory was rolled out with an older policy, and thus the value target is no longer accurate. Now, MuZero Reanalyze, a particular version of MuZero, already handles that a little bit, in that it recomputes the scalar values with the current network before learning from them. But the policy used to generate the data is still an old one, and they say: when data is limited, we have to reuse samples from a much older policy, which exaggerates the inaccurate-value-target issue. So here is what they do. This is the state, and here is the path that actually happened; we took some actions, and we would like to learn from that trajectory. But the policy that generated it is old: the current network might have done something entirely different, might have taken a different action right here and ended up at a different point. That is a problem, because we would largely like to learn from actions generated by the current policy. So they simply don't use the entire trajectory for learning; they cut it off at some point, because of course the further out you go, the more uncertain things get. And the cutoff point comes earlier the older the trajectory is: for a very recent trajectory the cutoff is towards the end, but for a very old trajectory the cutoff comes almost right away. Up to the cutoff point they say: it's old, but the uncertainty in this part is not large enough to worry about. And for the part after the cutoff, because they have a latent model of the world, they use that model to imagine a rollout. A sketch of the resulting value target follows below.
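Here is a sketch of the corrected value target under these assumptions: only the first l steps of the stored trajectory are trusted, l shrinks with the age of the data, and the bootstrap value at the cutoff state is recomputed with the current network (in the paper via a fresh MCTS; here just a placeholder argument). The horizon schedule is invented purely for illustration.

def trusted_horizon(age_in_steps, max_horizon=5):
    # Invented schedule: older trajectories get a shorter trusted horizon l.
    return max(1, max_horizon - age_in_steps // 10_000)

def corrected_value_target(rewards, bootstrap_value, horizon, gamma=0.997):
    # z = sum_{i < l} gamma^i * r_i  +  gamma^l * v(s_l),
    # where v(s_l) comes from the *current* network at the cutoff state.
    z = sum((gamma ** i) * rewards[i] for i in range(horizon))
    return z + (gamma ** horizon) * bootstrap_value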
So, much like something like Dreamer, they now train using imaginary rollouts from the point where they cut off. The trajectories in the replay buffer act more like seed values, and after the cutoff they imagine rollouts using their latent model of the world. On top of that, at the last node right here, they redo an MCTS search with the current policy and compute the empirical mean value through it, in order to get a really good target value there. And that's it. So these are the three improvements. Again: first, they introduce a consistency loss on the hidden states to make the transition model better. Second, they directly predict what they call the value prefix, instead of summing up the rewards as they go along the tree search. And third, they use the collected trajectories as seed values and then train on essentially half-real, half-imagined rollouts with the current policy. So what does that give them? It gives them very good performance on this Atari 100k benchmark. They also do additional ablation studies. For example, they try to reconstruct the observation from the hidden state, and they see that without the consistency loss this quickly fails; that would be the original MuZero. With the consistency loss, you can see that there is something right there that looks like the observation. Now, I don't know if that is after the 100k steps, because MuZero after 100k steps also doesn't perform super well, so the failure wouldn't be surprising, or it could be that their reconstruction method is just kind of poor; but the difference between the model with the consistency loss and the one without is noticeable. They also analyze, for example, the validation loss when you directly predict the rewards versus when you use the value-prefix prediction: during training the two are approximately the same, but at validation time the value-prefix loss is much, much lower. And lastly, across the many ablations they run, what I noticed, and this holds in pretty much all of them, is that there is no consistent ranking. They have three improvements, and sometimes one of them is the most valuable; for example, without the value prefix, Alien drops quite a bit. At other times another improvement matters most, and in yet other cases the third one does. I've looked through it, and there is no consistent pattern. That means there isn't a single recipe that makes this thing better; it's a conglomeration, and for different Atari games different things are important. That sort of leads you to think that this isn't a method derived from first principles; they looked at what fails and fixed the major mistakes one by one. And that is a legitimate way to go about it.
But there is also a danger that we over-engineer to the benchmarks we have, because clearly, if I add just one of these improvements, some of the Atari games will improve by a lot while others won't, and that to me is a little bit of the danger right here. This is why I can't tell you whether this algorithm is going to be a staple algorithm for sample-efficient RL, or whether it just works particularly well on this benchmark. They do evaluate on another benchmark, the DeepMind Control 100k benchmark, but I think more evaluation is going to be needed. Still, I am excited; it really has the potential to be something cool. All right, that was it from me. Thank you so much for listening and watching, let me know what you think in the comments, and bye bye.
[ { "start": 0, "end": 5.84, "text": " Hi there, today we're going to look at Mastering Atari Games with Limited Data by Waziru Yeh," }, { "start": 5.84, "end": 13.76, "text": " Shahuwa Liu, Tanahar Kuretach, Pietra Biel and Yang Gao. This paper presents the Efficient Zero" }, { "start": 13.76, "end": 21.12, "text": " model, which is a model that can do reinforcement learning with severely limited data. So the paper" }, { "start": 21.12, "end": 29.44, "text": " tackles the Atari 100k benchmark, which means to learn Atari, the Atari benchmark as a reinforcement" }, { "start": 29.44, "end": 37.760000000000005, "text": " learning task, as for example, deep Q networks did, but you only get 100k transitions. This is" }, { "start": 37.760000000000005, "end": 45.36, "text": " about it's about two days worth of real time data to work with. And after that, the model supposedly" }, { "start": 45.36, "end": 53.28, "text": " be able to play Atari. So this is a variant on mu zero mu zero, which is an insanely data intensive" }, { "start": 53.28, "end": 59.36, "text": " reinforcement learning algorithm. And it introduces various tricks and various amendments to" }, { "start": 59.36, "end": 66.96, "text": " mu zero to make it more sample efficient. So when we look at this paper, you can see the gist of it" }, { "start": 66.96, "end": 75.44, "text": " right here. If you do this Atari 100k benchmark, you can see that a lot of the other reinforcement" }, { "start": 75.44, "end": 81.92, "text": " learning algorithm, they fail to even reach human level performance. Whereas this new algorithm" }, { "start": 81.92, "end": 88.96000000000001, "text": " out competes not only the other RL algorithms on in this low data regime, but also the humans." }, { "start": 88.96, "end": 97.19999999999999, "text": " Here they say, efficient zeros performance is close to DQN performance at 200 million frames" }, { "start": 97.19999999999999, "end": 104.72, "text": " while we consume 500 times less data. Efficient zeros, low sample complexity, and high performance" }, { "start": 104.72, "end": 110.88, "text": " can bring RL closer to real world applicability. They even say we implement their algorithm in an" }, { "start": 110.88, "end": 117.67999999999999, "text": " easy to understand manner. And it is available at this GitHub address. So this code is out there," }, { "start": 117.68, "end": 123.36000000000001, "text": " especially if you want to do reinforcement learning, but you don't have as much computer time" }, { "start": 123.36000000000001, "end": 129.28, "text": " or money. This might be for you. So we'll go through the paper, we'll see what the improvements" }, { "start": 129.28, "end": 134.4, "text": " are. There's not a single improvement. There are many improvements, three big ones to be exact." }, { "start": 135.20000000000002, "end": 142.56, "text": " And yeah, if you like content like this, don't hesitate to subscribe and tell your friends," }, { "start": 142.56, "end": 152.56, "text": " and family and professors, I guess. Alright, so we'll first take a small look at what mu zero does." }, { "start": 152.56, "end": 160.96, "text": " Just as a recap, I have done a video on mu zero. But if you haven't seen that, then here is a short" }, { "start": 160.96, "end": 167.12, "text": " a very short introduction to mu zero to the algorithm. 
So in a classic reinforcement" }, { "start": 167.12, "end": 173.20000000000002, "text": " learning setting, you have your your basic setup of you have the environment, and you have the" }, { "start": 173.20000000000002, "end": 180.72, "text": " actor and the environment gives the actor some sort of an observation at time step, let's call it T." }, { "start": 182.32, "end": 188.32, "text": " The actor uses that observation to come up with some sort of an action at time step T. And then" }, { "start": 188.32, "end": 197.84, "text": " the environment gives the actor back a reward for that time step. And the next observation T plus one." }, { "start": 197.84, "end": 203.35999999999999, "text": " And that goes on and on and on. So the question is, how is the actor supposed to come up with" }, { "start": 203.35999999999999, "end": 209.28, "text": " this action right here, given the past observations that it has seen from the environment" }, { "start": 209.28, "end": 216.07999999999998, "text": " in order to maximize all of the reward that it gets. Now, in a regular reinforcement learning" }, { "start": 216.08, "end": 221.28, "text": " algorithm, or regular, let's say in the simpler reinforcement learning algorithm, what people are" }, { "start": 221.28, "end": 227.28, "text": " doing is they're doing model free reinforcement learning, which essentially means that they take" }, { "start": 227.28, "end": 232.16000000000003, "text": " the series of observation, observation one, observation two, and so on that they've seen so" }, { "start": 232.16000000000003, "end": 238.4, "text": " far, they take that they stick it in a big neural network, and they train it to output some sort of" }, { "start": 238.4, "end": 244.96, "text": " an action. And they train the neural network in order to maximize this reward right here, usually" }, { "start": 244.96, "end": 251.04000000000002, "text": " using some sort of policy gradient or something like this. So this is a rather, rather direct way" }, { "start": 251.04000000000002, "end": 257.04, "text": " we call that model free reinforcement learning, because you directly predict the action without" }, { "start": 258.16, "end": 263.68, "text": " without an explicit model of the world. Now, when you have a model of the world, so when this" }, { "start": 263.68, "end": 268.96000000000004, "text": " environment here is well described, for example, a chessboard, in a chessboard, you know the rules," }, { "start": 268.96, "end": 275.2, "text": " you know, everything that's going to happen in a chessboard, you can use a model of the chessboard." }, { "start": 275.2, "end": 280.56, "text": " So what you can do is this, you can take these observations, and these observations would" }, { "start": 280.56, "end": 286.15999999999997, "text": " correspond to some defined states or let's let's say tic tac toe, tic tac toe is a better example." }, { "start": 286.15999999999997, "end": 292.4, "text": " So, you know, with the observation, I can actually construct the board of tic tac toe that I'm in." }, { "start": 292.4, "end": 298.71999999999997, "text": " And then what I can do is I can actually search, I can try out, I can say, okay, what if I put," }, { "start": 298.72, "end": 303.36, "text": " you know, something here, then my opponent's certainly going to do that right here. And then" }, { "start": 303.36, "end": 308.56, "text": " what if I put something here, and then my opponent is going to do that, and then they win, right." 
}, { "start": 308.56, "end": 316.48, "text": " So that is one, that is one way to do it. And usually you visualize this as a tree. So you are" }, { "start": 316.48, "end": 322.88000000000005, "text": " here at a root note, that's your state. And you have several options to do things. And in the" }, { "start": 322.88000000000005, "end": 327.52000000000004, "text": " several options, your opponent has several options, or if it's a one player game, you have several" }, { "start": 327.52, "end": 333.68, "text": " options again, and so on. So what you want to do is you want to search this tree for the best possible" }, { "start": 334.24, "end": 342.71999999999997, "text": " path. And this is what things like alpha go alpha zero, and so on did. They have these explicit model," }, { "start": 342.71999999999997, "end": 347.52, "text": " and they search through it. And now the neural networks no longer predict actions directly," }, { "start": 347.52, "end": 354.32, "text": " the neural network help you search through that tree, which means they, they vote essentially on" }, { "start": 354.32, "end": 360.64, "text": " which path paths of the tree to explore, because the tree quickly becomes too large to explore" }, { "start": 360.64, "end": 366.4, "text": " as a whole, right, you can't, like if it's more than three moves ahead, the possibilities just" }, { "start": 366.4, "end": 372.88, "text": " get giant even like especially in a game like go. So the neural networks are here to guide" }, { "start": 372.88, "end": 381.36, "text": " the tree search. And that was, in general, the techniques of that center around the Monte Carlo" }, { "start": 381.36, "end": 387.84000000000003, "text": " tree search, because at some point, you abort the search, and you simply play one game to the end," }, { "start": 387.84000000000003, "end": 395.2, "text": " as sort of an approximation of what happens. And so on, I'm not going to go into that super duper" }, { "start": 395.2, "end": 401.84000000000003, "text": " right here. But what mu zero does is mu zero says, well, this this whole tree search stuff" }, { "start": 401.84000000000003, "end": 407.84000000000003, "text": " essentially only works if I have an explicit model of the world, such as the tic tac toe board is" }, { "start": 407.84, "end": 414.23999999999995, "text": " clearly defined how it works, right? Also, I can I can I can have a simulator for it, I can rewind," }, { "start": 414.23999999999995, "end": 421.44, "text": " I can try again, this doesn't happen when you're interacting with any sort of real world thing," }, { "start": 421.44, "end": 427.52, "text": " let's say, or even the Atari benchmark. So in Atari, I know there is there's hacks where you" }, { "start": 427.52, "end": 432.55999999999995, "text": " can save the ROM and so on. But essentially, you're not supposed to go back in time or go" }, { "start": 432.55999999999995, "end": 436.47999999999996, "text": " forward in time, you're not supposed to be able to try something out and then say, well," }, { "start": 436.48, "end": 442.64000000000004, "text": " now that didn't work, I'm going to search for a different path in the tree instead. So what people" }, { "start": 442.64000000000004, "end": 450.16, "text": " do is they, they try to learn a model of the environment. So in absence of the model of the" }, { "start": 450.16, "end": 456, "text": " environment, they try to learn one and there are many, many different ways of doing this. 
And what" }, { "start": 456, "end": 462.8, "text": " mu zero does is it learns a latent model of the environment. So how does that look? So here you" }, { "start": 462.8, "end": 469.2, "text": " have the current observation observation T, what mu zero does is it uses a neural network, I think" }, { "start": 469.2, "end": 477.68, "text": " they call this H or something to get this into a hidden state. So they map the current observation" }, { "start": 477.68, "end": 487.12, "text": " into a hidden state. And then they plan using the hidden state. So they plan, they say, okay," }, { "start": 487.68, "end": 491.92, "text": " I'm not going to predict what the next observation is going to be like in the tic tac toe board." }, { "start": 491.92, "end": 499.44, "text": " I'm only going to predict what is the next hidden state going to be t plus one, t plus one, like" }, { "start": 499.44, "end": 510.32, "text": " this is one, this is two, this is three. So you know, depending on which action I do, which which" }, { "start": 510.32, "end": 517.12, "text": " is going what is going to be the next hidden state of the environment? Sorry, of Yeah, of the" }, { "start": 517.12, "end": 522.32, "text": " environment, what's going to be the next hidden state. And from that hidden state, I always going" }, { "start": 522.32, "end": 527.92, "text": " to predict what's going what's going to be the reward for transitioning there, what's going to" }, { "start": 527.92, "end": 535.12, "text": " be my own policy, which is a bit weird that you have to do this, but you have to, and which is" }, { "start": 535.12, "end": 541.2, "text": " going which what's going to be sort of the value and the value is what is going to be my future" }, { "start": 541.2, "end": 548.1600000000001, "text": " reward when I go from here. So these are the sort of things that mu zero predicts. And with that," }, { "start": 548.1600000000001, "end": 555.5200000000001, "text": " it is able to search this latent tree. Note the addition to mu zero, sorry. Yeah, the addition," }, { "start": 555.5200000000001, "end": 560.48, "text": " sorry to alpha zero, which is this run right here. So we might we might label this, this is something" }, { "start": 560.48, "end": 571.36, "text": " like reinforce. This is alpha zero. And this is mu zero. So the difference to alpha zero being that" }, { "start": 571.36, "end": 577.6800000000001, "text": " we no longer have an explicit model. So in order to do tree search, we have to learn a model. And" }, { "start": 577.6800000000001, "end": 583.6800000000001, "text": " the model that mu zero learns is in the latent space purely right there is it doesn't predict" }, { "start": 583.68, "end": 592, "text": " future observations. And it only learns all of this from the signal that it so it predicts the" }, { "start": 592, "end": 598.4799999999999, "text": " reward, it predicts its own policy, and it predicts the future value. And those are the only learning" }, { "start": 598.4799999999999, "end": 605.92, "text": " signals for the world model. That is good because it focuses the algorithm on what's essential," }, { "start": 605.92, "end": 612.3199999999999, "text": " it is essential to get the maximum reward possible. And therefore, the learning the more the learning" }, { "start": 612.32, "end": 619.2, "text": " signals center around those concepts, the better. But that also means learning the entire world model" }, { "start": 619.2, "end": 627.12, "text": " just from signals like the reward is extremely sparse. 
So it uses a lot of data. And that is" }, { "start": 627.12, "end": 634.48, "text": " that's essentially the catch right here. So we're not going to go into you know, how exactly mu zero" }, { "start": 635.6, "end": 641.5200000000001, "text": " does Monte Carlo tree search, they have a way of balancing exploration and exploitation right here" }, { "start": 641.52, "end": 645.6, "text": " by essentially using an upper confidence bound formula that you can see right here." }, { "start": 647.36, "end": 656.4, "text": " But so efficient zero goes and says there are three main weaknesses with mu zero. First of all," }, { "start": 656.4, "end": 663.4399999999999, "text": " they say lack of supervision on the environment model. That's what I just said, all the the model," }, { "start": 663.4399999999999, "end": 669.92, "text": " the latent model of the environment is learned purely from the signals of the end from the reward" }, { "start": 669.92, "end": 676.56, "text": " signal, the value signal, these are simple single numbers. And to ask the model to learn a transition" }, { "start": 677.52, "end": 683.68, "text": " function for the environment model is a big ask and of course needs a lot of data just from that." }, { "start": 685.28, "end": 692.7199999999999, "text": " The second one is hardness to deal with aleatoric uncertainty. I like I've given up on trying to" }, { "start": 692.7199999999999, "end": 698.8, "text": " remember which one is aleatoric and which one is what's the other one epistemic, I have no idea." }, { "start": 698.8, "end": 708.64, "text": " Okay, let's just read the paragraph. The predicted rewards have large prediction errors. So if there" }, { "start": 708.64, "end": 714.0799999999999, "text": " is uncertainty in the environment, for example, the environment is hard to model, the reward" }, { "start": 714.0799999999999, "end": 719.92, "text": " prediction errors will accumulate when expanding the Monte Carlo tree search tree to a large depth," }, { "start": 719.92, "end": 727.12, "text": " resulting in suboptimal performance in exploration and evaluation. So what they mean is that if I" }, { "start": 727.12, "end": 734.64, "text": " predict if I'm if this reward right here has a bit of an error, and then I go on searching right" }, { "start": 734.64, "end": 739.44, "text": " these branches right here, and then the reward I predict right here also has a bit of an error," }, { "start": 739.44, "end": 745.6, "text": " and so on. And we go down the tree. And every reward has a bit of an error. What I'll do in" }, { "start": 745.6, "end": 753.52, "text": " order to add you know, at the end, at the end right here, I have a path. And I don't go to the end," }, { "start": 753.52, "end": 760.24, "text": " I stop after a while and I add up the rewards that led me here. And that's sort of, you know," }, { "start": 760.24, "end": 765.84, "text": " how valuable this notice plus the value that I predict right here, that's going to be the," }, { "start": 766.4, "end": 773.04, "text": " the value of this path is going to be the sum of the rewards until I'm here plus the value from" }, { "start": 773.04, "end": 778.88, "text": " here on out. And if all of these little rewards have little errors on them, that quickly adds up" }, { "start": 778.88, "end": 784.24, "text": " to a big error. So that's their second criticism right here. That's something we're going to have" }, { "start": 784.24, "end": 792.32, "text": " to solve. And thirdly, off policy issues with multi step value. 
And that is a general, that is" }, { "start": 792.32, "end": 798.24, "text": " a general thing in these reinforcement learning algorithms, the more distributed you make them," }, { "start": 798.24, "end": 804.16, "text": " the more sort of what people usually do is they have like a learner box in the middle, learn." }, { "start": 804.16, "end": 810.24, "text": " So there's a neural network there. But then they have a lot of actors, actor machines, so they" }, { "start": 810.24, "end": 815.4399999999999, "text": " distribute training and interacting with the environment and these send back data, there's" }, { "start": 815.4399999999999, "end": 822.48, "text": " usually a replay buffer right here somewhere. And that means just that the neural network that is" }, { "start": 822.48, "end": 831.12, "text": " here at the learner is not the same that generated the data, because the data is kind of old. And" }, { "start": 831.12, "end": 837.44, "text": " until you use the data to practice the neural network will have already learned from other data," }, { "start": 837.44, "end": 843.76, "text": " and therefore you get an off policy issue, even though it's an on policy algorithm. Now," }, { "start": 843.76, "end": 849.84, "text": " mu zero does a little bit to correct this. But they say this has to be done more." }, { "start": 851.52, "end": 858.8, "text": " So how are they? Now, now we tackle these these three things. So the first thing they tackle" }, { "start": 858.8, "end": 866, "text": " is this lack of supervision on the environment model. So what they do is they add a self supervised" }, { "start": 866, "end": 872.9599999999999, "text": " consistency loss, you remember that we map the observation at time t to a state a hidden state" }, { "start": 872.9599999999999, "end": 879.4399999999999, "text": " at time t. And then we use our latent model to predict for a given action, what's the state" }, { "start": 879.4399999999999, "end": 885.4399999999999, "text": " going to be at time t plus one. And that's an estimate, right? Now, what this paper says is" }, { "start": 885.44, "end": 891.7600000000001, "text": " that wait a minute, if we simply look at what happens in the real world, right, observation t" }, { "start": 891.7600000000001, "end": 898.5600000000001, "text": " plus one, and we send it through the same, so through this, through this same encoding function," }, { "start": 899.36, "end": 906.24, "text": " then that gives us the hidden state at time t plus one. So technically, these two things here" }, { "start": 906.24, "end": 912.72, "text": " should be equal. So the hidden state at time t plus one, and the estimated hidden state at time t" }, { "start": 912.72, "end": 919.0400000000001, "text": " plus one, they should be kind of the same. So what they do is they use a self supervised" }, { "start": 919.0400000000001, "end": 926.64, "text": " consistency loss that they they nap from symposium. So symposium is a contrastive learning framework," }, { "start": 926.64, "end": 934, "text": " or self supervised learning framework. And it's usually used to have two images which have been" }, { "start": 934, "end": 941.52, "text": " differently augmented. So to make their representation equal. So till the model learns to sort of" }, { "start": 941.52, "end": 947.52, "text": " ignore the data augmentation, that's how you train self supervised image models. 
But here," }, { "start": 947.52, "end": 953.28, "text": " we don't augment differently, what we do is we take an observation, and we take the observation" }, { "start": 953.28, "end": 959.4399999999999, "text": " at time t plus one. And the first observation, we actually map it through that function that is" }, { "start": 959.4399999999999, "end": 965.28, "text": " supposed to give us this estimation of the next state. And then we use a similarity loss" }, { "start": 965.28, "end": 972.8, "text": " in order to pull those two things together. So this function that gives us the next state," }, { "start": 972.8, "end": 978.8, "text": " and the representation functions, they're now going to be trained in order to make those two" }, { "start": 978.8, "end": 985.12, "text": " things, the next hidden state, and the estimation of the next hidden state, similar to each other." }, { "start": 985.12, "end": 990.4, "text": " In fact, the the left branch right here is the one that's trained. But that includes the" }, { "start": 990.4, "end": 998.3199999999999, "text": " representation function and the next state function. So you might you might ask, you know," }, { "start": 999.76, "end": 1004.48, "text": " this is kind of the first question that everyone in mu zero has is like, why is this not done?" }, { "start": 1004.48, "end": 1009.76, "text": " Because this is, if you look at the loss of mu zero, you can pretty easily see that that is" }, { "start": 1009.76, "end": 1016.24, "text": " possible. And I think the mu zero authors have deliberately not introduced a loss like this," }, { "start": 1016.24, "end": 1022.88, "text": " because they say no, if we learn from just the reward signals, that is going to be a better" }, { "start": 1022.88, "end": 1029.84, "text": " algorithm, even though, you know, it might use more data. But at the end, it really trains for" }, { "start": 1029.84, "end": 1035.92, "text": " what is important for what is the end goal. And that's why they didn't introduce a loss like this," }, { "start": 1036.72, "end": 1044.48, "text": " introducing a loss like this clearly trades off the what's the actual target is. And so" }, { "start": 1044.48, "end": 1050.88, "text": " it trades that off for sample efficiency, because now the supervision signal here is much," }, { "start": 1050.88, "end": 1058, "text": " much larger, because now we work with different hidden states, which are entire vectors. So" }, { "start": 1058.72, "end": 1063.44, "text": " that's going to be a much better signal. So that's the first improvement. The second" }, { "start": 1063.44, "end": 1069.2, "text": " improvement is what they say, the second improvement is the second improvement is the" }, { "start": 1069.2, "end": 1076.32, "text": " second improvement is what they say, end to end prediction of the value prefix. So they make an" }, { "start": 1076.32, "end": 1082.72, "text": " example right here of saying, okay, what's what's the value, you know, if you if you look at this," }, { "start": 1082.72, "end": 1088.0800000000002, "text": " you have to predict sort of the future value, can you really predict what's it going to be like" }, { "start": 1088.0800000000002, "end": 1093.3600000000001, "text": " either the green player, let's say the ball flies in this direction, the green player is going to" }, { "start": 1093.36, "end": 1100.1599999999999, "text": " catch the ball or not, right. And that makes a huge difference. 
Now, you as a human, at this point," }, { "start": 1100.1599999999999, "end": 1107.28, "text": " you know that it's not going to the green player is not going to catch that ball. And at this time," }, { "start": 1107.28, "end": 1114.4799999999998, "text": " you're you're kind of sure. But it's quite hard to predict at this time right here. And it's even" }, { "start": 1114.48, "end": 1123.76, "text": " harder to predict when you know, at which step in time that player is going to miss the ball. And" }, { "start": 1124.64, "end": 1130.64, "text": " that's an argument they make for for essentially saying, if we add up the rewards of our own" }, { "start": 1130.64, "end": 1136.64, "text": " predictions, they can introduce a lot of mistakes. And but that's exactly what we do. If we look at" }, { "start": 1136.64, "end": 1143.76, "text": " the Q value that we use in this tree search, what we do is we add up the Q value of the tree search," }, { "start": 1143.76, "end": 1150.8, "text": " we add up the rewards that we got in the path so far, and we add the value at that particular path." }, { "start": 1150.8, "end": 1157.04, "text": " And that is very error prone, because this sum right here accumulates all the little errors that" }, { "start": 1159.04, "end": 1165.6, "text": " that that happened in in prediction. And, you know, as I said, if if we're not exactly sure" }, { "start": 1165.6, "end": 1173.68, "text": " at which point that is just one of the examples to show you how hard this problem is of predicting" }, { "start": 1173.68, "end": 1181.3600000000001, "text": " rewards step by step, if you look into the future. So what they do is is pretty simple. They say" }, { "start": 1181.3600000000001, "end": 1191.2, "text": " instead of adding up all the rewards, k steps into the future, what if we simply take the hidden" }, { "start": 1191.2, "end": 1196.88, "text": " states that we predict k steps into the future, and just shove them into a neural network." }, { "start": 1197.92, "end": 1203.44, "text": " And then that neural network will output the sum of the rewards. So instead of summing the" }, { "start": 1203.44, "end": 1209.3600000000001, "text": " rewards directly, we have a neural network output the total sum, much like we have a neural network" }, { "start": 1209.3600000000001, "end": 1216.64, "text": " that outputs the value function at that looks ahead, this neural network right here, it will look" }, { "start": 1216.64, "end": 1223.2, "text": " sort of back, it will look into the past from the current state to the state, the end state that we" }, { "start": 1223.2, "end": 1228.8, "text": " rolled out in imagination, it will predict the entire value, they're using LSTM for that," }, { "start": 1228.8, "end": 1237.36, "text": " because it can take in arbitrary number of states. And the LSTM has a per step rich supervision," }, { "start": 1237.36, "end": 1242.1599999999999, "text": " because we have a reward at each step. And therefore, they say that works quite well." }, { "start": 1242.1599999999999, "end": 1250.08, "text": " So that's the second thing. The third thing is the model based off policy correction. So" }, { "start": 1250.08, "end": 1258.08, "text": " yeah, this one is a little bit more tricky. But essentially, we can see where is it," }, { "start": 1259.12, "end": 1264.8, "text": " we can read a bit through it to see what it does. This is an off policy correction mechanism. 
And" }, { "start": 1266.08, "end": 1271.36, "text": " they have two different mechanisms to do off policy correction already said off policy" }, { "start": 1271.36, "end": 1276.48, "text": " correction, you have to do it because the data that you get to learn from comes from your replay" }, { "start": 1276.48, "end": 1283.84, "text": " buffer comes from delay from the network and so on, and is a little bit older than the network" }, { "start": 1283.84, "end": 1289.3600000000001, "text": " that you're learning. And that turns out to be quite a big problem. So" }, { "start": 1292.8, "end": 1299.28, "text": " what we usually do is we sample a trajectory from the replay buffer and we compute, and we compute" }, { "start": 1299.28, "end": 1308, "text": " this target value z right here for the value function. The value target sums from off, sorry," }, { "start": 1308, "end": 1312.6399999999999, "text": " suffers from off policy issues since the trajectory is rolled out using an older policy," }, { "start": 1312.6399999999999, "end": 1318.96, "text": " and thus the value target is no longer accurate. Now, mu zero reanalyzed, this is a particular" }, { "start": 1318.96, "end": 1326.56, "text": " version of mu zero already handles that a little bit in that it actually recomputes the values," }, { "start": 1326.56, "end": 1333.12, "text": " the scalar values with the current network before it learns from them. But still the policy used to" }, { "start": 1333.12, "end": 1342, "text": " generate that data is from an old policy. And so they say, when data is limited, we have to reuse" }, { "start": 1342, "end": 1348.48, "text": " the data sample from a much older policy, thus exaggerating the inaccurate value target issue." }, { "start": 1348.48, "end": 1356.72, "text": " So what they do is they say, well, instead of using instead of using sort of the path, so we're," }, { "start": 1357.52, "end": 1362.24, "text": " this is the state, right? And here is what actually happened, right? We took some actions," }, { "start": 1362.24, "end": 1367.52, "text": " that's what actually happened. And now, what we would like to do is we would like to take this" }, { "start": 1367.52, "end": 1375.2, "text": " and learn from it. But the policy used to generate that path is an old policy. So we have to" }, { "start": 1375.2, "end": 1381.76, "text": " take this and learn from it. And so what they say is that the policy used to generate that path is" }, { "start": 1381.76, "end": 1386.16, "text": " an old policy. So the current network might have done something entirely different, it might have" }, { "start": 1386.16, "end": 1391.1200000000001, "text": " done a different action right here and got to a different point. And that is a problem because" }, { "start": 1391.8400000000001, "end": 1398.16, "text": " in an own policy method, we'd largely like to learn from actions that have been generated with" }, { "start": 1398.16, "end": 1405.52, "text": " the policy. So we're simply going to not use the entire trajectory for learning. But we're going to" }, { "start": 1405.52, "end": 1410.8000000000002, "text": " cut off at some point, because of course, the further out the more uncertain we get. And that" }, { "start": 1410.8000000000002, "end": 1417.92, "text": " cutoff point is going to be closer, the older the trajectory is. 
So for a very recent trajectory," }, { "start": 1417.92, "end": 1423.1200000000001, "text": " we might cut off towards the end, but for a very old trajectory, we might cut off" }, { "start": 1423.12, "end": 1428.4799999999998, "text": " almost everything. And then what we do after the cutoff point is, so we take this, we cut it off at some" }, { "start": 1428.4799999999998, "end": 1435.9199999999998, "text": " point, and we say, well, it's old, but you know, for this part right here the uncertainty" }, { "start": 1435.9199999999998, "end": 1443.84, "text": " is not large enough for us to worry so much. And then, because they" }, { "start": 1443.84, "end": 1452.2399999999998, "text": " have a latent model for the environment, for the world, they use that model to imagine a rollout." }, { "start": 1452.24, "end": 1459.36, "text": " So, much like something like Dreamer, they now train using imaginary rollouts from the point" }, { "start": 1459.36, "end": 1465.1200000000001, "text": " where they cut off. So the trajectories in the replay buffer are more like seed values." }, { "start": 1466, "end": 1474, "text": " And after that, they imagine rollouts using their latent model of the world. All right, so" }, { "start": 1474, "end": 1483.92, "text": " yeah, so I think that's it. We redo an MCTS search with the current policy on the last state and" }, { "start": 1483.92, "end": 1489.36, "text": " compute the empirical mean value there. Oh, yeah, so at the last node right here," }, { "start": 1489.36, "end": 1497.84, "text": " they redo an MCTS search in order to get a really good target value there with the current" }, { "start": 1497.84, "end": 1508.08, "text": " policy. Yep, that's it. Okay, so these are the three improvements. Again, they introduce a" }, { "start": 1508.08, "end": 1515.76, "text": " consistency loss on the hidden states to make their transition model better. Second, they directly" }, { "start": 1515.76, "end": 1522.48, "text": " predict what they call the value prefix, this thing right here, instead of summing up the rewards" }, { "start": 1522.48, "end": 1531.52, "text": " as they go along the tree search. And thirdly, they use the collected trajectories as" }, { "start": 1531.52, "end": 1540.08, "text": " seed values and then train essentially on half-imagined rollouts with the current" }, { "start": 1540.08, "end": 1547.52, "text": " policy. So that's it. So what does that give them? It gives them very good performance on this Atari" }, { "start": 1547.52, "end": 1556.4, "text": " 100k benchmark. They do some additional things right here, additional" }, { "start": 1556.4, "end": 1562.56, "text": " ablation studies. For example, they try to reconstruct the observation from the hidden state," }, { "start": 1562.56, "end": 1569.12, "text": " and they see that, for example, if you don't have a consistency loss, this quickly fails. So this" }, { "start": 1569.12, "end": 1574.8799999999999, "text": " would be the original MuZero, whereas with the consistency loss, you can see that, kind of, sort" }, { "start": 1574.88, "end": 1581.44, "text": " of, there is, you know, something right there that looks like the observation.
Now here," }, { "start": 1581.44, "end": 1588.88, "text": " I don't know if that is after the 100k steps, because of course, MuZero after 100k steps also" }, { "start": 1588.88, "end": 1595.5200000000002, "text": " doesn't perform super duper well. And therefore, you wouldn't be too surprised. Or" }, { "start": 1595.5200000000002, "end": 1602, "text": " it could be because their reconstruction method is just kind of poor as well. But the difference is" }, { "start": 1602, "end": 1607.6, "text": " noticeable between the two models, the one that has the consistency loss and the one that doesn't." }, { "start": 1608.32, "end": 1616.08, "text": " They also analyze, for example, the validation loss: if you directly predict the" }, { "start": 1616.08, "end": 1620.64, "text": " rewards, or if you use this value prefix prediction method, you can see that during training," }, { "start": 1620.64, "end": 1627.84, "text": " it's approximately the same. However, at validation time, this loss is much, much lower. And" }, { "start": 1627.84, "end": 1634.8, "text": " lastly, they do a lot of ablations. What surprised me, or what I noticed" }, { "start": 1634.8, "end": 1641.1999999999998, "text": " in the ablations, and this is pretty much in all the ablations, is that there is no consistent" }, { "start": 1641.1999999999998, "end": 1648.48, "text": " ranking. So they have three improvements right here. And sometimes this improvement right here," }, { "start": 1648.48, "end": 1654.24, "text": " for example, will be the most valuable. So you can see that without the value prefix, Alien drops" }, { "start": 1654.24, "end": 1660.4, "text": " quite a bit. And at other times, you can see right here, this one will be the most valuable." }, { "start": 1660.4, "end": 1666.48, "text": " And yet at other times, some other one, like the last one, will be the most valuable." }, { "start": 1666.48, "end": 1673.6, "text": " I don't see one right now. But I've looked at it, and there is no consistent thing. So" }, { "start": 1673.6, "end": 1680.64, "text": " that means that there's not a single recipe to make this thing better. It's a conglomeration." }, { "start": 1680.64, "end": 1686.8000000000002, "text": " And for different Atari games, different things are important. And that sort of leads you to think," }, { "start": 1686.8000000000002, "end": 1693.92, "text": " you know, this isn't a method from, let's say, first principles. They have looked" }, { "start": 1694.5600000000002, "end": 1701.92, "text": " at what fails, and they fixed, essentially one by one, the major mistakes that they found. And that" }, { "start": 1701.92, "end": 1708.96, "text": " is a way to go about it. But it is also a danger that we sort of over-engineer" }, { "start": 1708.96, "end": 1715.04, "text": " to the benchmarks that we have, because, you know, clearly, if I just put in" }, { "start": 1715.04, "end": 1719.92, "text": " one of these improvements, some of the Atari games will improve by a lot, but others won't." }, { "start": 1719.92, "end": 1727.3600000000001, "text": " And that, to me, is a little bit of the danger right here.
And this is why I'm not, you know," }, { "start": 1727.3600000000001, "end": 1735.04, "text": " like, I can't tell you if this algorithm is going to be a staple algorithm for sample" }, { "start": 1735.04, "end": 1742.08, "text": " efficient RL, or if it just works particularly well on this benchmark. They do do another" }, { "start": 1742.08, "end": 1749.52, "text": " benchmark, the DeepMind Control benchmark. But I think there's going to be more" }, { "start": 1749.52, "end": 1757.44, "text": " evaluation needed. But I am excited, it really has the potential to be something cool." }, { "start": 1757.44, "end": 1762.6399999999999, "text": " All right, that was it from me. Thank you so much for listening, watching. Let me know what you" }, { "start": 1762.64, "end": 1767.44, "text": " think in the comments. And bye bye." } ]
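The value-prefix idea in the transcript above is simple enough to sketch. Below is a minimal illustration, assuming PyTorch; the module sizes, names, and toy training step are my own illustrative assumptions, not EfficientZero's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ValuePrefixHead(nn.Module):
    # Predicts the cumulative reward ("value prefix") of an imagined rollout
    # directly, instead of summing k per-step reward predictions whose small
    # errors would otherwise accumulate. Sizes are illustrative.
    def __init__(self, latent_dim=64, lstm_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, lstm_dim, batch_first=True)
        self.out = nn.Linear(lstm_dim, 1)

    def forward(self, latents):
        # latents: (batch, k, latent_dim) hidden states from the dynamics model
        feats, _ = self.lstm(latents)
        # one output per step, so the LSTM gets supervision at every step,
        # since the environment yields a reward at every step
        return self.out(feats).squeeze(-1)  # (batch, k)

head = ValuePrefixHead()
latents = torch.randn(2, 5, 64)          # batch of 2 rollouts, 5 imagined steps
rewards = torch.randn(2, 5)              # true per-step rewards
prefix_targets = rewards.cumsum(dim=1)   # target at step t: r_1 + ... + r_t
loss = F.mse_loss(head(latents), prefix_targets)

Predicting the cumulative sum at every step, rather than only at the end, is what gives the LSTM the rich per-step supervision mentioned in the transcript.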
NEkriziVYXo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] DeepMind does Nowcasting | The Guardian's shady reporting | AI finishes Beethoven's 10th
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nowcasting", "mlnews", "deepmind", "weather prediction", "ai weather", "short term weather", "rain prediction", "rain when", "ai nowcasting", "ml weather prediction", "deepmind weather", "the guardian", "truthfulqa", "truthful qa", "language models truthful", "plato xl", "beethoven 10", "ai music", "ai art", "ai painting", "painting authenticity", "huggingface", "huggingface infinity", "neuromorphic chips" ]
#deepmind #nowcasting #machinelearning Your holy update on what's new in the Machine Learning world. OUTLINE: 0:00 - Intro 0:30 - DeepMind tackles Nowcasting 3:30 - The Guardian's shady reporting on TruthfulQA 6:15 - Stochastic training not necessary for generalization 7:35 - Google AI's efficient partitioning of road networks 9:15 - MiniHack Reinforcement Learning Environment 10:45 - PLATO-XL 11B dialog model 11:35 - AI finishes Beethoven's 10th Symphony 13:10 - AI casts doubt on painting authenticity 15:55 - ShadowDragon social media surveillance 18:45 - Helpful Libraries 25:20 - Samsung to copy-paste brains onto chips References: DeepMind improves Nowcasting https://deepmind.com/blog/article/nowcasting https://www.nature.com/articles/s41586-021-03854-z https://github.com/deepmind/deepmind-research/tree/master/nowcasting https://colab.research.google.com/github/deepmind/deepmind-research/blob/master/nowcasting/Open_sourced_dataset_and_model_snapshot_for_precipitation_nowcasting.ipynb The Guardian's shady reporting on TruthfulQA https://www.theguardian.com/commentisfree/2021/oct/02/the-truth-about-artificial-intelligence-it-isnt-that-honest?CMP=Share_iOSApp_Other Stochastic Training is Not Necessary for Generalization https://arxiv.org/pdf/2109.14119.pdf Google AI - Efficient Partitioning of Road Networks https://ai.googleblog.com/2021/09/efficient-partitioning-of-road-networks.html MiniHack Reinforcement Learning Environment https://ai.facebook.com/blog/minihack-a-new-sandbox-for-open-ended-reinforcement-learning Baidu PLATO-XL 11B Dialog Model http://research.baidu.com/Blog/index-view?id=163 AI finishes Beethoven's 10th Symphony https://thenextweb.com/news/computer-scientists-completed-beethoven-10th-symphony-syndication AI casts doubt on painting authenticity https://www.smithsonianmag.com/smart-news/ai-casts-new-doubt-on-national-gallerys-prized-peter-paul-rubens-180978771/ https://art-recognition.com/ https://art-recognition.com/case-studies/ https://art-recognition.com/faq/ ShadowDragon Social Media Surveillance https://www.rt.com/usa/535630-ai-surveillance-police-program-social-media/ https://theintercept.com/2021/09/21/surveillance-social-media-police-microsoft-shadowdragon-kaseware/ Helpful Libraries / Datasets https://huggingface.co/infinity https://yanaiela.github.io/TNE/?s=09&utm_source=pocket_mylist https://arxiv.org/abs/2109.10282 https://github.com/microsoft/unilm/tree/master/trocr https://medium.com/people-ai-research/kaokore-exploring-the-intersection-of-humanities-and-ml-research-through-a-japanese-art-dataset-f6035ba1e4d https://raft.elicit.org/ https://huggingface.co/spaces/ought/raft-leaderboard https://huggingface.co/spaces/ought/raft-viewer?dataset=raft&config=ade_corpus_v2&raft=dataset&banking_77=config https://arxiv.org/pdf/2109.14076.pdf https://arxiv.org/pdf/2109.14394.pdf https://www.robots.ox.ac.uk/~vgg/research/pass/ https://zenodo.org/record/5528345#.YVrtd0ZByDU https://github.com/yukimasano/PASS/ https://openreview.net/pdf?id=BwzYI-KaHdr https://github.com/pytorch/data?utm_source=pocket_mylist Samsung Method to copy paste brain onto chip https://www.engadget.com/samsung-copy-and-paste-brain-neuromorphic-chips-185359994.html Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Cut my hair, but not the beard. I have a giant cold sore here. That just looks weird without the beard. I was just gonna wait. Well, we'll... um, yeah. Intro. DeepMind can predict rain better than anyone else. The Guardian is not really so truthful about truthful language models. And an AI finishes Beethoven's 10th symphony. Welcome to ML News. It's Monday. For centuries upon centuries, millennia upon millennia, humans have shaken their fist at the sky for the rain which they could not predict. But while the gods of the heavens curse us with the falling precipitation, the gods of the earth, namely DeepMind, have now blessed us with a system that can tell us when and where it's going to rain. DeepMind has been looking into what's called nowcasting, which is an area of weather prediction that concerns just the next one to two hours. The reason being that apparently longer term forecasting can be done pretty accurately by sort of modeling the global weather, seeing how stuff moves, considering the physics, and blah, blah, blah. But very short term predictions are not as accurate as we would like them to be. They've published this in a paper in Nature, because where else would DeepMind publish? And it's actually a pretty interesting read. They cite the availability of high quality data, at least in the UK, where radar data is available at very high resolution, and the lack of current systems that work well. Now, instead of directly predicting, their model is a generative model. And from the paper, it looks like it's sort of a GAN with a bunch of GAN losses. So there is a temporal discriminator that discriminates between real and fake, I guess, temporal rollouts, there is a spatial discriminator, and there's sort of a regularity loss as well. So essentially, what they do is they take a context of 20 minutes of radar data. And from that, they generate how the radar data looks about two hours ahead. And as you can see, this looks pretty good. So on the top left, you have the target, on the top right, you have the DeepMind system. And on the bottom, you have two baselines. You can see that the DeepMind system is quite a bit more accurate. And not only is it more accurate as rated by the metrics, but also by human climatologists, or weather people, I don't know what they're called in this case. And while the DeepMind system is more accurate in terms of metrics, and in terms of humans rating it, DeepMind also advocates for more impact-based metrics. For example, they highlight that the prediction of heavy precipitation at long lead times remains difficult for all approaches. And this is one of the crucial events that you would like to predict. So the paper advocates that maybe we should pay more attention to the things that actually impact such things as farming or air travel, or deciding whether or not you can hold an event outdoors. Along with the paper, they do provide the data set and also a snapshot of the trained model. There's a Colab where you can download the data set and try out the model. So no longer do you need to have a wet head, simply go here and see whether or not it's going to rain in the next hour.
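To make that recipe a bit more concrete, here is a toy sketch in Python (PyTorch) of what such a composite objective could look like. The one-layer stand-in networks, tensor shapes, and loss weight are illustrative assumptions on my part; DeepMind's actual generator and discriminators are far larger and structured differently.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins. Radar tensors here are (batch, channels=1, time, height, width).
generator = nn.Conv3d(1, 1, kernel_size=3, padding=1)                   # context -> future frames (toy)
spatial_disc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))       # judges a single frame
temporal_disc = nn.Sequential(nn.Flatten(), nn.Linear(4 * 64 * 64, 1))  # judges a whole sequence

def generator_loss(context, real_future):
    fake = generator(context)                          # toy: predicts as many frames as it is given
    adv_spatial = -spatial_disc(fake[:, 0, 0]).mean()  # fool the per-frame critic
    adv_temporal = -temporal_disc(fake).mean()         # fool the sequence critic
    reg = F.l1_loss(fake, real_future)                 # regularity term: stay near reality
    return adv_spatial + adv_temporal + 20.0 * reg     # 20.0 is an illustrative weight

ctx = torch.randn(2, 1, 4, 64, 64)     # 4 context radar frames per sample
future = torch.randn(2, 1, 4, 64, 64)  # toy: same number of "future" frames
loss = generator_loss(ctx, future)

The two adversarial terms gesture at the split described above: the temporal critic sees whole predicted sequences, while the spatial critic judges individual frames.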
The Guardian has an opinion piece by John Naughton that says: the truth about artificial intelligence? It isn't that honest. Tests of natural language processing models show that the bigger they are, the bigger liars they are. Should we be worried? Now, isn't this exactly what I predicted? I reported on this in the last ML News, I even made a dedicated video about this benchmark called TruthfulQA, which is where the authors create a data set specifically designed to trick these language models, going as far as throwing out questions that the language models get right, and defining the word truthful in a way that if you answer complete garbage, it counts as truthful, and therefore the smaller models are better, because they're just worse. Now, if you get the impression that one should mention these things when discussing this data set, then you'd be right. And I advocated for the same thing. I said, if someone gives this as an example of how bad large language models are, and doesn't explicitly mention these things, they either don't know or they want to deceive you. Well, enter John Naughton, who writes an entire opinion piece about this article. So given that he writes an entire opinion piece, the possibility that he hasn't read the paper is out. The only thing that comes even a little bit close to mentioning the way the data set was created is this sentence: they composed questions that some humans would answer falsely due to a false belief or misconception. Really? Do you, dear viewer, feel that is an adequate characterization of this benchmark? And do you feel that giving only this sentence draws the correct conclusion for people? I mean, it's not wrong, they did this, it just leaves out all the other stuff that you would need to know. And why does it leave out all the other stuff? Because of course, John wants to make an argument. And the argument will completely fall apart if you include this other stuff. And this is how science reporting goes when you have a narrative already in mind. It goes from a paper that does describe the complete process, but uses words such as truthful in very weird ways and is already framed in a particular manner, to the Twitter announcements of the authors, which hide all of these facts in very specific wording somewhere down the thread, to the more popular hubs in the AI space, completely leaving away these details, and then to the mainstream media that just picks up the talking points and writes big articles about how bad these things are. Good job, everyone. Now, if only there were some kind of independent news source that you could get your machine learning news from that never ever ever makes mistakes. Now, where could one find that? Moving on, there is an interesting new paper on arXiv that's called Stochastic Training is Not Necessary for Generalization, which argues that if you tune full-batch gradient descent correctly, and if you regularize correctly, and all of these kinds of things, then you can achieve the same performance with full-batch gradient descent as you can with SGD. And this casts doubt on a lot of theoretical explanations of why neural networks generalize so well, because many of these rely on the stochasticity of SGD. It's long been believed that the stochasticity plays some kind of a role in the generalization capabilities. And at least in part, this paper provides evidence that this might not be fully the case. However, that being said, you do need to regularize the network. So you do need to bring some of the implicit regularization that SGD appears to do through stochasticity into the world of explicit regularization, if you don't want the stochasticity in there. This appears to be true with and without data augmentation. And the paper argues that the community has essentially just spent a long time optimizing stochastic optimizers and hyperparameters, and hasn't put that much effort into full-batch methods. If this is of interest to you, give this paper a read.
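As a rough illustration of that point, here is a minimal sketch in Python of one full-batch descent step with an explicit regularizer standing in for SGD's noise. The chunking, the clipping threshold, and the choice of gradient clipping as the regularizer are assumptions on my part; the paper's exact recipe may differ.

import torch

def full_batch_step(model, loss_fn, chunks, optimizer, clip=0.25):
    # One exact gradient-descent step over the whole training set.
    # "chunks" only splits the data for memory; unlike SGD mini-batches,
    # every step sees all examples, so there is no sampling noise.
    optimizer.zero_grad()
    n = len(chunks)
    for x, y in chunks:
        # accumulate the full-batch gradient (assumes equal-sized chunks)
        (loss_fn(model(x), y) / n).backward()
    # explicit regularization standing in for SGD's implicit noise;
    # clipping here, though other explicit regularizers are possible
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
    optimizer.step()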
Google AI releases the efficient partitioning of road networks. So this is a method to partition road networks, because if you simply look at a road network and try to do planning, it quickly becomes ginormous. If you just consider your own city, then already that's a pretty big graph if you really model all the connections, and then you consider a country, you consider a continent, it quickly becomes so huge that something like Dijkstra's algorithm cannot plan efficiently anymore. So what you have to do is you have to partition, and they give the example of Staten Island, which is an island in New York City. And while Staten Island has a lot of roads, and the surrounding city has a lot of roads, the access between the city and Staten Island is limited to four or five different bridges. So a smart algorithm would sort of clump Staten Island into very few nodes. And then you can essentially plan on these super nodes until you get to Staten Island, and then inside Staten Island you can plan locally. This relies on the fact that our road networks very often are composed of clusters of local roads with only a few large interconnections between them. And in order to do this, they leverage random walks. So they simply start from some point on the map and they do random walks on the map. And the idea is that if you have super duper connected networks like inside Staten Island, then the random walks are probably going to stay in that area as they walk, because the amount of connections inside the area is just so much larger, and they're not going to traverse very often these interconnections between the clusters. So therefore, using random walks, you can figure out what are the clusters that are tightly connected and what are the clusters that are only loosely connected, and therefore you can partition the graph. This is then refined using some flow algorithms. And at the end, we all get Google Maps. Thank you. There is a paper to go along with it, have a read if that is of interest to you.
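The random-walk intuition fits in a few lines of Python. This is only the intuition, not Google's actual algorithm (which adds flow-based refinement on top); the toy graph, walk length, and counts are illustrative.

import random
from collections import Counter

def walk_cooccurrence(adj, num_walks=1000, walk_len=8, seed=0):
    # Count how often pairs of nodes land on the same short random walk.
    # Walks rarely cross sparse interconnections (e.g., a bridge), so nodes
    # inside a dense cluster co-occur far more often than nodes across clusters.
    rng = random.Random(seed)
    counts = Counter()
    nodes = list(adj)
    for _ in range(num_walks):
        v = rng.choice(nodes)
        visited = [v]
        for _ in range(walk_len):
            v = rng.choice(adj[v])  # assumes every node has at least one neighbor
            visited.append(v)
        for i, a in enumerate(visited):
            for b in visited[i + 1:]:
                if a != b:
                    counts[frozenset((a, b))] += 1
    return counts  # high-count pairs likely belong in the same partition

# toy graph: two triangles joined by one "bridge" edge (2-3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
co = walk_cooccurrence(adj)

Pairs with high co-occurrence counts would then be merged into super-nodes, and route planning runs on the contracted graph, which is exactly the Staten Island picture described above.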
Facebook AI Research releases MiniHack, a new sandbox for open-ended reinforcement learning. This is an iteration on the NetHack learning environment, which we've reported previously is available. NetHack is this game where you're in a dungeon and you need to do certain things, battle certain things, and so on. And the cool thing is that it's entirely described in kind of an ASCII way. So on the left here, you see the way that players or level creators would design levels and then add items and certain effects to it. Now, the NetHack game is a very difficult game. And if you do reinforcement learning inside the game, there are a lot of tasks, there are a lot of things to do. And there is essentially just this one game. So MiniHack is an environment where you can create small parts of the game, different sub-levels, very simple tasks to test the individual abilities of agents. So you could, for example, make a mini level where it's just about avoiding obstacles, or you could make another mini level where it's simply about fighting opponents. So essentially, it's a level editor for the learning environment. Pretty cool. Give it a try. Baidu releases PLATO-XL, the world's first 11 billion parameter pre-trained dialogue generation model. Now, whenever you say the world's first, you just have to make whatever comes after very specific, then you're always the world's first. Like, even if there were a 12 billion parameter pre-trained dialogue generation model, PLATO-XL would still be the world's first 11 billion parameter pre-trained dialogue generation model. However, this is really so far the biggest model that is specifically made for dialogue. It's available in English and Chinese, and it is specifically trained to do long dialogue that keeps the context alive of what's talked about. Also, Baidu says that they will release the source code together with the English model on GitHub soon. The Next Web writes: Beethoven never finished his 10th symphony, computer scientists just did. This is a description of how a team of computer scientists and music scholars went about finishing Beethoven's 10th symphony. So the Ninth Symphony concluded with the Ode to Joy, they said, but the 10th symphony is unfinished. There are some scribbles by Beethoven, some ideas, but it's by no means a finished piece of work. So the article details how the team went about recreating something that Beethoven might have written. And this is the important part to get right here: they do not claim that what they produce is Beethoven's 10th symphony as Beethoven would have written it. They say that this is, given the ideas, something that Beethoven might conceivably have come up with. Now, that being said, there are a lot of iterations here, there's a lot of hand engineering, of course. So rather than this being fully AI generated, I would rather call it a computer-human collaboration to come up with something that plausibly could have happened had Beethoven lived for a bit longer. The article is fairly long, but it concludes with an excerpt from what these people created. That sounds like music, correct? So it seems like a cool practical application of some of these techniques. The combination of AI and art is more and more explored, and it's good to see that music is not an exception here. Speaking of AI and art, Smithsonian Magazine writes: did Peter Paul Rubens really paint Samson and Delilah? AI analysis renews doubts over the authenticity of a star painting in the London National Gallery's collection. Right, so there's this painting by a painter, I have no clue about art, I'm very sorry. But apparently the painting has been painted at some point, and then went missing for a while, and then it reappeared. And there is an entire debate about whether or not the reappeared painting is in fact the original painting or a fake. And there is this company called Art Recognition, which supposedly can give you a report about whether or not a given painting is actually from a given painter or not. And when this company analyzed the painting, the algorithm reported a 91.78% probability that Samson and Delilah was painted by someone other than Rubens. So the company claims they have had quite a lot of successes when they assessed non-disputed works, with the algorithm being generally very correct in these assessments. So given this track record, the statement that this painting is probably fake is quite a bit of a shakeup. Now, now, now, I have many questions about this, like, why does this need seven days to generate a report? Do these people actually go out and collect training data once you submit your thing? I don't know.
Also, these systems have got to be like super duper vulnerable to something like adversarial examples, and they give you like a certificate of authenticity. Now, I'm going to guess this is like a CNN, and the CNN is trained on a bunch of paintings of that painter, and then you get some sort of a closeness estimate. Now, are there negative samples that this is trained on? Is this a one-class SVM? I don't know. I haven't actually found anything in the FAQ about how exactly this works. Apparently, the entire service is just digital, and you don't actually need the painting itself. And I know a lot of these scholars, they look at the paint strokes themselves and the thicknesses, and X-rays, and whatnot to determine if art is authentic or not. Now, I have no doubt that something like this might actually work, and might actually assess this better than human art experts can. But at the same time, there are a lot of vulnerabilities in these systems. And I also wouldn't trust them. Now, would I trust them more than human experts? Not sure. I think what is safe to say is that simply because this company says this is probably fake, it probably won't convince anyone in the art world to change their minds about this painting. But interesting to know this exists. RT writes: AI-driven community surveillance: US cops reportedly using invasive tool to grab suspects' social media, Pornhub and Tinder data. This report is about a company called ShadowDragon that produces tools that scrape social media and pull together all kinds of information about individual people. And they sell this to law enforcement, such that essentially anything you do across social media is neatly pulled together and analyzed in one place. This can then be combined with other surveillance mechanisms, such as facial recognition from surveillance cameras, and all your data from various government databases. And it could technically be used to do predictive policing, which is a very controversial practice where you don't react to crime, but you try to react to pre-crime, which gives it a sort of dystopian feeling. The company's founder says the company disagrees with predictive policing and does not build products with predictive capabilities or even suggestions. However, their website also praises the product for being able to predict violence. So another question is where exactly ShadowDragon has all this data from. They themselves claim they do not intercept any private chats, and they do not access anything that's proprietary or private, but simply scrape information from public websites. And again, that is highly disputed. Now, even if they only collect data from public websites, it's still quite worrisome to see that police are using these kinds of systems. Of course, if you are a suspect, police have every opportunity to go look at all of your social media all across the web and cross-reference that, but this is now being done in an automated fashion that is available to search, and, yes, to train predictive models on top of. Now, whether or not that's a good development, I leave that up to you. But a good recommendation is to simply assume that all of your activity online is being collected together at some place and just put into one neat package. So while in a previous life, you could be one kind of person on Twitter and another kind of person on LinkedIn, in the future, these things are going to morph together more and more. Right now, it's simply for law enforcement and the government.
But given that these products seem to exist, you can expect that to be more the case in general in the future. So now you have the opportunity: do you want to behave more professionally on Twitter? Or do you want to just spew random opinions around on LinkedIn? I know what I'm gonna do. I'll also link a more in-depth article by The Intercept about ShadowDragon and its connections to law enforcement, if you're into that. Alright, helpful libraries. We have a lot of helpful libraries and data sets this week, like so much help on the internet. It's crazy. I'm suffocating from helpful libraries, I can't library anymore. That being said, you should totally check out Hugging Face's Infinity, which is a Docker container that you can deploy yourself and that brings inference of transformers down to a millisecond. So if you read more into this, apparently it's about three milliseconds for CPU-based transformers like BERT and RoBERTa, and one millisecond if you host them on GPU. Now, this is pretty massive. It represents about a 10x improvement over previous attempts at speeding up these transformers. And you can deploy this on-premise, it fits neatly within a Docker container. Now, Infinity is in a closed beta right now, but I guess they're going to release it at some point. I don't know, there is a website, but it doesn't say a whole lot of things about it. But I guess, being in beta, this is bound to develop further. If you are interested, click the request trial button and see what happens. Next up, the text-based NP enrichment tasks. Text-base, text-based? Not sure which one it is, I'm gonna guess text-based. So this is a data set for NLP. And by that, I mean rather how NLP used to be before deep learning, where every noun phrase is sort of annotated with all the possible cross references that exist in the text. So for example, the sentence here, Iranian student protesters face expulsion, would be annotated in the following way: Iranian student protesters would be annotated with at Amir Kabir University, it would also be annotated with against Ahmadinejad, and face expulsion would be annotated with expulsion of 54 students, expulsion by university chancellor Ali Reza Rahai, or expulsion from Amir Kabir University. The goal of the data set is to do these annotations exhaustively, which I'm going to guess was a lot of work. But they do end up with 5497 documents that are exhaustively annotated with all possible links between noun phrases in each document. So pretty cool. If you're more into old school NLP, definitely give this a try. If you are into new school NLP, you should probably learn a bit about old school NLP. Next, there is TrOCR, transformer-based optical character recognition with pre-trained models, by Microsoft, along with code. This is a new OCR method that uses transformers. Code is available, give it a try. Kaokore, which is joint work of Google Research and collaborators from Japan's National Institute of Informatics and the University of Cambridge, released this data set right here of Japanese art depicting faces. So they wonder whether or not they can teach machines to recognize facial depictions in Japanese art and classify them into various categories. So the data set is created from a larger Japanese art data set by cropping out all of the faces and then manually labeling them. The labels are things such as the social status, which is divided into noble, warrior, incarnation (which is a depiction of a god or goddess), and commoner (which is, I guess, the rest of us).
You can also train GANs on these data sets. And it seems to be just a pretty cool data set for doing research, again, at the intersection of AI and art. This could be like a theme for today. RAFT is a data set of real-world annotated few-shot tasks. This is a data set where both the task itself and the examples are given in natural language. For example, the task here is: a data set is a list of institutions that have contributed papers, data data data data data. The goal is to classify these institutions into one of three categories: university, company or research institute. 50 labeled examples are provided, so there are a bunch of labeled examples, but not too many, thus the name few-shot tasks. So this could be pretty cool, because especially it has a lot of practical applications: if you can specify the task in natural language, and you don't need a whole lot of examples for the model to learn a task, a lot of new possibilities in applying NLP open up. There is a paper and a leaderboard if you want to give it a try. The next helpful thing is a data set: the EDGAR data set, a data set of financial texts. EDGAR is a database where all the public companies have to send in their annual reports, and EDGAR-CORPUS is a data set made from that. They do provide a script with which to mine the EDGAR database, and they do train a set of word vectors, which for specific tasks in finance perform much better than standard GloVe word vectors. So if you ever wanted a corpus of a giant amount of text that says absolutely nothing important of any informational value, because all of these finance departments basically just cover their own behind, there you go. The next data set is PASS, an ImageNet replacement for self-supervised pre-training without humans. The pitch is they have 1.4 million images, 1.4 million of them are CC-BY licensed, and there are absolutely zero humans in the data set. Not only aren't there any depictions of humans, there are also no license plates or other personally identifiable information. The catch is, this data set comes without labels. So you cannot train your classic computer vision image classification task, but it is supposed to be another data set that you can use for pre-training your models without having to worry about there being some personally identifiable information in there, and also without having to worry about the licensing of the pictures that are in the data set. Now, are people going to replace ImageNet with this one? Or are people simply going to add this data to their ImageNet data, and therefore the problems simply remain? Well, you take a wild guess which one of those two things is going to happen. In any case, the data set is available to download. Have fun. And lastly, TorchData by PyTorch is a very unstable prototype, but it provides primitives in order to build data loaders, in order to make data loading from various sources more effective. So if data loading is your bottleneck, and the standard data loaders don't do the job, maybe give this a try. The APIs might break. But, you know, that's life. Last things for today: Engadget writes, Samsung hopes to copy and paste the brain to 3D chip networks. Essentially, their idea is to stick a bunch of electrodes in there, stimulate the neurons, and see how the neurons stimulate other neurons. From this, you can figure out which neurons are connected to each other and how strongly, and then you can simply map that connection pattern onto a neuromorphic chip.
Now, this might actually be an interesting way of getting a neural network with the general connection pattern of the human brain, like the sparsity pattern, or how exactly the things are connected. So it might be a neat architectural investigation into the human brain. However, the article also writes: the move could serve as a shortcut to artificial intelligence systems that behave like real brains, including the flexibility to learn new concepts and adapt to changing conditions, and you might even see fully autonomous machines with true cognition, according to the researchers. Nah, nah. Simply because you map out the connection pattern doesn't mean at all that you will get any sort of brain-like activity. The connection pattern between neurons is only one of many, many, many things that are going on in the brain. Especially, things like learning require forming new connections dynamically, strengthening connections or synapses, and inhibiting the expression of genes that leads to faster or slower reuptake of synaptic material. And all of this is simply not captured by mapping out the connection pattern. Forgive me, but no, you're probably not going to see fully autonomous machines with true cognition simply because you can map the brain's connections. Now, these things are supposed to run on neuromorphic chips, which means they will have some of these additional abilities, but it's still highly doubtful. That was it for this week's news. So much stuff happening. If you have something interesting that's happening in your life, and if it is in any way related to machine learning, let me know. We have no standards here at ML News. Anything goes. I'll see you next week. Ow, it hurts.
[ { "start": 0, "end": 4.4, "text": " Cut my hair, but not the beard. I have a giant cold sore here." }, { "start": 4.4, "end": 8.08, "text": " That just looks weird without the beard. I was just gonna wait." }, { "start": 8.08, "end": 11.040000000000001, "text": " Well, we'll... um, yeah. Intro." }, { "start": 11.6, "end": 14.8, "text": " DeepMind can predict rain better than anyone else." }, { "start": 15.44, "end": 20.56, "text": " The Guardian is not so really truthful about truthful language models." }, { "start": 20.56, "end": 30.72, "text": " And an AI finishes Beethoven's 10th symphony. Welcome to ML News. It's Monday." }, { "start": 32.96, "end": 38.879999999999995, "text": " For centuries upon centuries, millennia upon millennia, humans have shaken their" }, { "start": 38.879999999999995, "end": 43.44, "text": " fist at the sky for the rain which they could not predict." }, { "start": 43.44, "end": 48.480000000000004, "text": " But while the gods of the heavens curse us with the falling precipitation," }, { "start": 48.48, "end": 54.08, "text": " the gods of the earth, namely DeepMind, have now blessed us with a system that can tell us" }, { "start": 54.08, "end": 59.68, "text": " when and where it's going to rain. DeepMind has been looking into what's called now casting," }, { "start": 59.68, "end": 65.28, "text": " which is an area of weather prediction that concerns just the next one to two hours." }, { "start": 65.28, "end": 70.8, "text": " The reason being that apparently longer term forecasting can be done pretty accurately by" }, { "start": 70.8, "end": 76, "text": " sort of modeling the global weather, seeing how stuff moves, considering the physics and" }, { "start": 76, "end": 82.32, "text": " blah, blah, blah. But very short term predictions are not as accurate as we would like them to be." }, { "start": 82.32, "end": 87.76, "text": " They've published this in a paper in Nature because where else would DeepMind publish?" }, { "start": 87.76, "end": 93.2, "text": " And it's actually a pretty interesting read. They cite the availability of high quality data," }, { "start": 93.2, "end": 97.76, "text": " at least in the UK, where radar data is available at very high resolution," }, { "start": 97.76, "end": 103.36, "text": " and the lack of current systems that work well. Now, instead of directly predicting," }, { "start": 103.36, "end": 109.12, "text": " their model is a generative model. And from the paper, it looks like it's sort of a GAN with a" }, { "start": 109.12, "end": 114.96, "text": " bunch of GAN losses. So there is a temporal discriminator that discriminates between real" }, { "start": 114.96, "end": 120, "text": " and fake, I guess temporal rollouts, there is a spatial discriminator, and there's sort of a" }, { "start": 120, "end": 126.24, "text": " regularity loss as well. So essentially, what they do is they take a context of 20 minutes of radar" }, { "start": 126.24, "end": 132.4, "text": " data. And from that, they generate how the radar data looks about two hours ahead. And as you can" }, { "start": 132.4, "end": 137.44, "text": " see, this looks pretty good. So on the top left, you have the target on the top right, you have the" }, { "start": 137.44, "end": 142.8, "text": " DeepMind system. And on the bottom, you have two baselines, you can see that the DeepMind system" }, { "start": 142.8, "end": 149.28, "text": " is quite a bit more accurate. 
And not only is it more accurate as rated by the metrics and also by" }, { "start": 149.28, "end": 155.20000000000002, "text": " human climatologists or weather people, I don't know what exists in this case. And while the" }, { "start": 155.20000000000002, "end": 159.76, "text": " DeepMind system is more accurate in terms of metrics, and in terms of humans rating it," }, { "start": 159.76, "end": 166.23999999999998, "text": " DeepMind also advocates for a more impact based metrics. For example, they highlight that the" }, { "start": 166.23999999999998, "end": 172.39999999999998, "text": " prediction of heavy precipitation at long lead times remains difficult for all approaches. And" }, { "start": 172.39999999999998, "end": 178.39999999999998, "text": " this is one of the crucial events that you would like to predict. So the paper advocates that maybe" }, { "start": 178.39999999999998, "end": 184.48, "text": " we should pay more attention to the things that actually impact such things as farming or air" }, { "start": 184.48, "end": 189.92, "text": " travel or deciding whether or not you can hold an event outdoors. Along with the paper, they do" }, { "start": 189.92, "end": 196.56, "text": " provide the data set and also a snapshot of the trained model. There's a colab where you can" }, { "start": 196.56, "end": 203.04, "text": " download the data set and try out the model. So no longer do you need to have a wet head," }, { "start": 203.04, "end": 207.12, "text": " simply go here and see whether or not it's going to rain in the next hour." }, { "start": 207.12, "end": 215.28, "text": " The Guardian has an opinion piece by John Norton that says the truth about artificial intelligence," }, { "start": 215.28, "end": 221.28, "text": " it isn't that honest. Tests of natural language processing models show that the bigger they are," }, { "start": 221.28, "end": 227.84, "text": " the bigger liars they are, should we be worried? Now, isn't this exactly what I predicted? I" }, { "start": 227.84, "end": 233.84, "text": " reported on this in last ML news, I made even a dedicated video about this benchmark called" }, { "start": 233.84, "end": 239.36, "text": " truthful QA, which is where the authors create a data set specifically designed to trick these" }, { "start": 239.36, "end": 244.24, "text": " language models going as far as throwing out questions that the language models get right" }, { "start": 244.24, "end": 250.96, "text": " and defining the word truthful in a way that if you answer complete garbage, it counts as truthful" }, { "start": 250.96, "end": 256.24, "text": " and therefore, the smaller models are better because they're just worse. Now, if you get" }, { "start": 256.24, "end": 261.76, "text": " the impression that one should mention these things when discussing this data set, then you'd" }, { "start": 261.76, "end": 267.03999999999996, "text": " be right. And I advocated for the same thing. I said if someone gives this as an example of how" }, { "start": 267.03999999999996, "end": 272.24, "text": " bad large language models are, and doesn't explicitly mention these things, they either don't know" }, { "start": 272.24, "end": 279.28, "text": " or they want to deceive you. Well, enter John Norton, who writes an entire opinion piece about" }, { "start": 279.28, "end": 285.68, "text": " this article. So given that he writes an entire opinion piece, the possibility that he hasn't read" }, { "start": 285.68, "end": 292.56, "text": " the paper is out. 
The only thing that comes even a little bit close to mentioning the way the data" }, { "start": 292.56, "end": 298.88, "text": " set was created is this sentence, they composed questions that some humans would answer falsely" }, { "start": 298.88, "end": 305.28000000000003, "text": " due to a false belief or misconception. Really, really, do you dear viewer, do you feel that is" }, { "start": 305.28000000000003, "end": 311.12, "text": " an adequate characterization of this benchmark? And do you feel that giving only this sentence" }, { "start": 311.12, "end": 317.6, "text": " draws the correct conclusion for people? I mean, it's not wrong, they did this, it just leaves out" }, { "start": 317.6, "end": 321.84000000000003, "text": " all the other stuff that you would need to know. And why does it leave out all the other stuff?" }, { "start": 321.84000000000003, "end": 327.2, "text": " Because of course, John wants to make an argument. And the argument will completely fall apart if" }, { "start": 327.2, "end": 332.32, "text": " you include this other stuff. And this is how science reporting goes when you have a narrative" }, { "start": 332.32, "end": 337.52, "text": " already in mind, it goes from a paper that does describe the complete process, but uses words" }, { "start": 337.52, "end": 343.28, "text": " such as truthful in very weird ways and is already framed in a particular manner to the Twitter" }, { "start": 343.28, "end": 349.52, "text": " announcements of the authors, which hide all of these facts in very specific wording in somewhere" }, { "start": 349.52, "end": 355.28, "text": " down the thread to the more popular hubs in the AI space, completely leaving away these details," }, { "start": 355.28, "end": 360.4, "text": " and then to the mainstream media that just picks up the talking points and writes big articles about" }, { "start": 360.4, "end": 366.32, "text": " how bad these things are. Good job, everyone. Now, if only there were some kind of independent new" }, { "start": 366.32, "end": 372, "text": " source that you could get your machine learning news from that never ever ever makes mistakes." }, { "start": 372, "end": 381.6, "text": " Now, where could one find that? Moving on, there is an interesting new paper on archive that's" }, { "start": 381.6, "end": 387.68, "text": " called stochastic training is not necessary for generalization that argues that if you tune" }, { "start": 387.68, "end": 393.36, "text": " full batch gradient correctly, and if you regularize correctly, and all of these kinds of things," }, { "start": 393.36, "end": 399.92, "text": " then you can achieve the same performance with full batch gradient descent, then you can with SGD." }, { "start": 399.92, "end": 405.04, "text": " And this casts doubt on a lot of theoretical explanations of why neural networks generalize" }, { "start": 405.04, "end": 410.16, "text": " so well, because many of these rely on the stochasticity of SGD. It's long been believed" }, { "start": 410.16, "end": 416.48, "text": " that the stochasticity plays some kind of a role in the generalization capabilities. And at least" }, { "start": 416.48, "end": 422, "text": " in part, this paper provides evidence that this might not be fully the case. However, that being" }, { "start": 422, "end": 428, "text": " said, you do need to regularize the network. 
So you do need to bring some of the implicit" }, { "start": 428, "end": 433.52, "text": " regularization that SGD appears to do through stochasticity into the world of explicit" }, { "start": 433.52, "end": 440, "text": " regularization. If you don't want the stochasticity in there, this appears to be true with and without" }, { "start": 440, "end": 445.52, "text": " data augmentation. And the paper argues that the community has essentially just spent a long time" }, { "start": 445.52, "end": 450.64, "text": " optimizing stochastic optimizers and hyper parameters and hasn't put that much effort into" }, { "start": 450.64, "end": 457.28, "text": " full batch methods. If this is of interest to you give this paper a read. Google AI releases" }, { "start": 457.28, "end": 463.36, "text": " the efficient partitioning of road networks. So this is a method to partition road networks," }, { "start": 463.36, "end": 469.59999999999997, "text": " because if you simply look at a road network and try to do planning, it quickly becomes ginormous." }, { "start": 469.59999999999997, "end": 474.96, "text": " If you just consider your own city, then already that's a pretty big graph if you really model all" }, { "start": 474.96, "end": 480.15999999999997, "text": " the connections, and then you consider a country you consider a continent, it quickly becomes" }, { "start": 480.16, "end": 485.68, "text": " so huge that something like a Dijkstra algorithm cannot plan efficiently anymore. So what you have" }, { "start": 485.68, "end": 490.96000000000004, "text": " to do is you have to partition and they give the example of state and island, which is an island in" }, { "start": 490.96000000000004, "end": 496.32000000000005, "text": " New York City. And while state and island has a lot of roads, and the surrounding city has a lot" }, { "start": 496.32000000000005, "end": 502.40000000000003, "text": " of roads, the access between the city and state and island is limited to four or five different" }, { "start": 502.40000000000003, "end": 509.04, "text": " bridges. So a smart algorithm would sort of clump state and island into very few nodes. And then you" }, { "start": 509.04, "end": 514.4, "text": " can essentially plan on these super nodes until you get to state and island and then inside state" }, { "start": 514.4, "end": 520.08, "text": " and island you can plan locally. This relies on the fact that our road networks very often are" }, { "start": 520.08, "end": 526.64, "text": " comprised of large interconnections between clusters of local roads. And in order to do this," }, { "start": 526.64, "end": 532.4, "text": " they leverage random walks. So they simply start from some point on the map and they do random" }, { "start": 532.4, "end": 538.72, "text": " walks on the map. And the idea is that if you have super duper connected networks like inside state" }, { "start": 538.72, "end": 544.64, "text": " and island, then the random walks are probably going to stay in that area as they walk because" }, { "start": 544.64, "end": 549.76, "text": " the amount of connections inside the area is just so much larger, and they're not going to traverse" }, { "start": 549.76, "end": 555.36, "text": " very often these interconnections between the clusters. 
So therefore using random walks," }, { "start": 555.36, "end": 560.08, "text": " you can figure out what are the clusters that are tightly connected and what are the clusters that" }, { "start": 560.08, "end": 565.2, "text": " are only loosely connected and therefore you can partition the graph. This is then refined using" }, { "start": 565.2, "end": 570.32, "text": " some flow algorithms. And at the end, we all get Google Maps. Thank you. There is a paper to go" }, { "start": 570.32, "end": 577.0400000000001, "text": " along with it have a read if that is of interest to you. Facebook AI research releases mini hack," }, { "start": 577.0400000000001, "end": 582.4000000000001, "text": " a new sandbox for open ended reinforcement learning. This is an iteration on the net hack" }, { "start": 582.4000000000001, "end": 588.6400000000001, "text": " learning environment, which we've reported previously is available. Net hack is this game" }, { "start": 588.6400000000001, "end": 593.5200000000001, "text": " where you're in a dungeon and you need to do certain things, battle certain things, and so on." }, { "start": 593.52, "end": 599.68, "text": " And the cool thing is that it's entirely described in kind of an ASCII way. So on the left here," }, { "start": 599.68, "end": 606.72, "text": " you see the way that players or level creators would design levels and then add items and certain" }, { "start": 606.72, "end": 612.72, "text": " effects to it. Now the net hack game is very difficult game. And if you do reinforcement" }, { "start": 612.72, "end": 617.12, "text": " learning inside the game, there are a lot of tasks, there are a lot of things to do. And there is" }, { "start": 617.12, "end": 622.96, "text": " essentially just this one game. So mini hack is an environment where you can create small parts of" }, { "start": 622.96, "end": 629.6800000000001, "text": " the game, different sub levels, very simple tasks to test the individual abilities of agents. So" }, { "start": 629.6800000000001, "end": 634.4000000000001, "text": " you could, for example, make a mini level where it's just about avoiding obstacles, or you could" }, { "start": 634.4000000000001, "end": 639.84, "text": " make another mini level where it's simply about fighting opponents. So essentially, it's a level" }, { "start": 639.84, "end": 648.4000000000001, "text": " editor for the learning environment. Pretty cool. Give it a try. By do releases Plato XL, the world's" }, { "start": 648.4, "end": 655.12, "text": " first 11 billion parameter pre trained dialogue generation model. Now, whenever you say the world's" }, { "start": 655.12, "end": 660.8, "text": " first, you just have to make whatever comes very specific, then you're always the world's first." }, { "start": 660.8, "end": 666.72, "text": " Like even if there were a 12 billion parameter pre trained dialogue generation model, Plato XL" }, { "start": 666.72, "end": 672.24, "text": " would still be the world's first 11 billion parameter pre trained dialogue generation model." }, { "start": 672.24, "end": 678.24, "text": " However, this is really so far the biggest model that is specifically made for dialogue. It's" }, { "start": 678.24, "end": 684.4, "text": " available in English and Chinese and it is specifically trained to do long dialogue that" }, { "start": 684.4, "end": 690.16, "text": " keeps the context alive of what's talked about. 
Also, by do says that they will release the source" }, { "start": 690.16, "end": 697.28, "text": " code together with the English model on GitHub soon. The next web news writes Beethoven never" }, { "start": 697.28, "end": 705.44, "text": " finished his 10th symphony computer scientists just did this is a description of how a team of" }, { "start": 705.44, "end": 712, "text": " computer scientists and music scholars went about finishing Beethoven's 10th symphony. So the ninth" }, { "start": 712, "end": 718.48, "text": " symphony concluded with the Ode to Joy, they said, but the 10th symphony is unfinished. There are" }, { "start": 718.48, "end": 725.2, "text": " some scribbles by Beethoven some ideas, but it's by no means a finished piece of work. So the article" }, { "start": 725.2, "end": 731.6, "text": " details how the team went about recreating something that Beethoven might have written." }, { "start": 731.6, "end": 736.32, "text": " And this is the important part to get right here. They do not claim that what they produce" }, { "start": 736.32, "end": 742.24, "text": " is Beethoven's 10th symphony as Beethoven would have written it. They say that this is given the" }, { "start": 742.24, "end": 748.5600000000001, "text": " ideas something that Beethoven might conceivably have come up with. Now that being said, there is" }, { "start": 748.5600000000001, "end": 753.9200000000001, "text": " a lot of iterations here, there's a lot of hand engineering, of course. So rather than this being" }, { "start": 753.9200000000001, "end": 760, "text": " fully AI generated, so I would rather call it a computer human collaboration to come up with" }, { "start": 760, "end": 765.28, "text": " something that plausibly could have happened had Beethoven lived for a bit longer. The article is" }, { "start": 765.28, "end": 770.24, "text": " fairly long, but it concludes with an excerpt from what these people created." }, { "start": 776.16, "end": 783.2, "text": " That sounds like music, correct. So it seems like a cool practical applications of some of the" }, { "start": 783.2, "end": 789.52, "text": " techniques, the combination of AI and art is more and more explored. And it's good to see that music" }, { "start": 789.52, "end": 797.36, "text": " is not an exception here. Speaking of AI and art, the Smithsonian magazine writes, did Peter Paul" }, { "start": 797.36, "end": 803.6, "text": " Rubens really paint Samsung and Delilah? AI analysis renews doubts over the authenticity" }, { "start": 803.6, "end": 809.36, "text": " of a star painting in the London National Gallery's collection. Right, so there's this painting by a" }, { "start": 809.36, "end": 815.52, "text": " painter, I have no clue about art, I'm very sorry. But apparently the painting has been painted at" }, { "start": 815.52, "end": 820.88, "text": " some point and then went missing for a while and then it reappeared. And there is an entire debate" }, { "start": 820.88, "end": 826.88, "text": " about whether or not the reappeared painting is in fact the original painting or a fake. And there" }, { "start": 826.88, "end": 832, "text": " is this company called Art Recognition, which supposedly can give you a report about whether" }, { "start": 832, "end": 838.64, "text": " or not a given painting is actually from a given painter or not. 
And when this company analyzed" }, { "start": 838.64, "end": 846.48, "text": " the painting, the algorithm reported a 91.78% probability that Samson and Delilah was painted" }, { "start": 846.48, "end": 853.04, "text": " by someone other than Rubens. So the company claims they have had quite a lot of successes" }, { "start": 853.04, "end": 858.96, "text": " when they assessed non-disputed works, with the algorithm being generally very correct in these" }, { "start": 858.96, "end": 864.24, "text": " assessments. So given this track record, the statement that this painting is probably fake" }, { "start": 864.24, "end": 871.92, "text": " is quite a bit of a shakeup. Now, now, now I have many questions about this, like, why does this" }, { "start": 871.92, "end": 878.48, "text": " need seven days to generate a report? Do these people actually go out and collect training data" }, { "start": 878.48, "end": 884.4, "text": " once you submit your thing? I don't know. Also, these systems got to be like super duper vulnerable" }, { "start": 884.4, "end": 890.32, "text": " to something like adversarial examples, they give you like a certificate of authenticity. Now," }, { "start": 890.32, "end": 896.88, "text": " I'm going to guess this is like a CNN and the CNN is trained on a bunch of paintings of that painter," }, { "start": 896.88, "end": 902, "text": " then you get some sort of a closeness estimate. Now, are there negative samples that this is" }, { "start": 902, "end": 908.24, "text": " trained at? Is this a one-class SVM? I don't know. And I actually haven't found anything in the FAQ about how" }, { "start": 908.24, "end": 914, "text": " exactly this works. Apparently, the entire service is just digital, and you don't actually need the" }, { "start": 914, "end": 919.36, "text": " painting itself. And I know a lot of these scholars, they look at the paint strokes themselves" }, { "start": 919.36, "end": 925.52, "text": " and the thicknesses and X-rays and whatnot to determine if art is authentic or not. Now," }, { "start": 925.52, "end": 930.08, "text": " I have no doubt that something like this might actually work and might actually work better than" }, { "start": 930.08, "end": 936.24, "text": " human art experts can assess this. But at the same time, there are a lot of vulnerabilities in these" }, { "start": 936.24, "end": 942.88, "text": " systems. And I also wouldn't trust them. Now, would I trust them more than human experts? Not sure." }, { "start": 942.88, "end": 947.84, "text": " I think what is safe to say is that simply because this company says this is probably fake," }, { "start": 947.84, "end": 953.12, "text": " it probably won't convince anyone in the art world to change their minds about this painting." }, { "start": 953.12, "end": 960.8000000000001, "text": " But interesting to know this exists. RT writes AI-driven community surveillance: US cops reportedly" }, { "start": 960.8000000000001, "end": 966.88, "text": " using invasive tool to grab suspects' social media, Pornhub and Tinder data. This report is about a" }, { "start": 966.88, "end": 973.6, "text": " company called ShadowDragon that produces tools that scrape social media and pull together all" }, { "start": 973.6, "end": 978.5600000000001, "text": " kinds of information about individual people. And they sell this to law enforcement such that" }, { "start": 978.5600000000001, "end": 985.0400000000001, "text": " essentially anything you do across social media is neatly pulled together and analyzed in one place." 
}, { "start": 985.0400000000001, "end": 990.16, "text": " This can then be combined with other surveillance mechanisms such as facial recognition from" }, { "start": 990.16, "end": 995.2, "text": " surveillance and all your data from various government databases. And it could technically" }, { "start": 995.2, "end": 1001.9200000000001, "text": " be used to do predictive policing, which is a very controversial practice where you don't react" }, { "start": 1001.92, "end": 1008.3199999999999, "text": " to crime, but you try to react to pre crime, which gives it a sort of dystopian feeling." }, { "start": 1008.3199999999999, "end": 1015.12, "text": " The company's founder says the company disagrees with predictive policing and does not build" }, { "start": 1015.12, "end": 1021.12, "text": " products with predictive capabilities or even suggestions. However, also their website praises" }, { "start": 1021.12, "end": 1028.48, "text": " the product for being able to predict violence. So, another question is where exactly Shadow Dragon" }, { "start": 1028.48, "end": 1034.64, "text": " has all this data from they themselves claim they do not intercept any private chats and they do not" }, { "start": 1034.64, "end": 1040.56, "text": " access anything that's proprietary or private, but simply scrape information from public websites." }, { "start": 1040.56, "end": 1046.64, "text": " And again, that is highly disputed. Now, even if they only collect data from public websites," }, { "start": 1046.64, "end": 1052.48, "text": " it's still quite worrisome to see that police are using these kind of systems. Of course," }, { "start": 1052.48, "end": 1059.04, "text": " if you are a suspect police has every opportunity to go look at all of your social media all across" }, { "start": 1059.04, "end": 1064.4, "text": " the web and cross reference that but this is now being done in an automated fashion that is" }, { "start": 1064.4, "end": 1069.84, "text": " available to search and yes, train predictive models on top of it. Now, whether or not that's" }, { "start": 1069.84, "end": 1076.16, "text": " a good development, I leave that up to you. But a good recommendation is that simply assume that" }, { "start": 1076.16, "end": 1083.1200000000001, "text": " all of your activity online is being carried together at some place and just put all into one" }, { "start": 1083.1200000000001, "end": 1089.2, "text": " neat package. So while in previous life, you could be one kind of person on Twitter and another kind" }, { "start": 1089.2, "end": 1095.28, "text": " of person on LinkedIn in the future, these things are going to morph together more and more right" }, { "start": 1095.28, "end": 1100.0800000000002, "text": " now it's simply for law enforcement and the government. But given that these products seem" }, { "start": 1100.0800000000002, "end": 1105.76, "text": " to exist, you can expect that to be more the case in general in the future. So now you have" }, { "start": 1105.76, "end": 1109.76, "text": " the opportunity Do you want to behave more professionally on Twitter? Or do you want to" }, { "start": 1109.76, "end": 1115.12, "text": " just spew random opinions around on LinkedIn? I know what I'm gonna do. I'll also link a more" }, { "start": 1115.12, "end": 1120.24, "text": " in depth article by the intercept about shadow dragon and its connections to law enforcement" }, { "start": 1120.24, "end": 1128, "text": " if you're into that. 
Alright, helpful libraries, we have a lot of helpful libraries and data sets" }, { "start": 1128, "end": 1135.28, "text": " this week, like so much help on the internet. It's crazy. I'm suffocating from helpful libraries," }, { "start": 1135.28, "end": 1141.44, "text": " I can't library anymore. That being said, you should totally check out Hugging Face's Infinity," }, { "start": 1141.44, "end": 1148.32, "text": " which is a Docker container that you can deploy yourself and that brings inference of transformers" }, { "start": 1148.32, "end": 1153.36, "text": " down to a millisecond. So if you read more into this, apparently it's about three milliseconds" }, { "start": 1153.36, "end": 1161.2, "text": " for CPU-based transformers like BERT and RoBERTa, and one millisecond if you host them on GPU. Now" }, { "start": 1161.2, "end": 1167.2, "text": " this is pretty massive, it represents about a 10x improvement over previous attempts at speeding up" }, { "start": 1167.2, "end": 1174, "text": " these transformers. And you can deploy this on premise, it fits neatly within a Docker container." }, { "start": 1174, "end": 1180.64, "text": " Now Infinity is in a closed beta right now, but I guess they're going to release it at some point." }, { "start": 1180.64, "end": 1185.8400000000001, "text": " I don't know, there is a website, but it doesn't say a whole lot of things about it. But I guess" }, { "start": 1185.84, "end": 1191.36, "text": " being in beta, this is bound to develop further. If you are interested, click the request trial" }, { "start": 1191.36, "end": 1198.6399999999999, "text": " button and see what happens. Next up, the text-based NP enrichment tasks, text base, text based," }, { "start": 1199.28, "end": 1205.28, "text": " not sure which one it is, I'm gonna guess text-based. So this is a data set for NLP." }, { "start": 1205.28, "end": 1211.1999999999998, "text": " And by that, I mean rather how NLP used to be before deep learning, where every noun phrase" }, { "start": 1211.2, "end": 1217.52, "text": " is sort of annotated with all the possible cross-references that exist in the text. So for example," }, { "start": 1217.52, "end": 1222.8, "text": " the sentence here, Iranian student protesters face expulsion, would be annotated in the following way:" }, { "start": 1222.8, "end": 1229.1200000000001, "text": " Iranian student protesters would be annotated with at Amirkabir University, it would also be annotated" }, { "start": 1229.1200000000001, "end": 1235.6000000000001, "text": " with against Ahmadinejad, and face expulsion would be annotated with expulsion of 54 students," }, { "start": 1235.6, "end": 1243.04, "text": " expulsion by university chancellor Ali Reza Rahai, or expulsion from Amirkabir University. The goal" }, { "start": 1243.04, "end": 1248.56, "text": " of the data set is to do these annotations exhaustively, which I'm going to guess was a" }, { "start": 1248.56, "end": 1257.04, "text": " lot of work. But they do end up with 5497 documents that are exhaustively annotated with all possible" }, { "start": 1257.04, "end": 1262.56, "text": " links between noun phrases in each document. So pretty cool. If you're more into old school NLP," }, { "start": 1262.56, "end": 1267.52, "text": " definitely give this a try. If you are into new school NLP, you should probably learn a bit about" }, { "start": 1267.52, "end": 1274.24, "text": " old school NLP. 
Next there is TrOCR, transformer-based optical character recognition with pre-trained" }, { "start": 1274.24, "end": 1282, "text": " models, by Microsoft, along with code. This is a new OCR method that uses transformers. Code is available," }, { "start": 1282, "end": 1288.08, "text": " give it a try. KaoKore, which is joint work of Google Research and collaborators from Japan's" }, { "start": 1288.08, "end": 1294.56, "text": " National Institute of Informatics and the University of Cambridge, released this data set right here of" }, { "start": 1294.56, "end": 1302.1599999999999, "text": " Japanese art depicting faces. So they wonder whether or not they can teach machines to recognize" }, { "start": 1302.1599999999999, "end": 1308.32, "text": " facial depictions in Japanese art and classify them into various categories. So the data set" }, { "start": 1308.32, "end": 1315.52, "text": " is created from a larger Japanese art data set by cropping out all of the faces and then manually" }, { "start": 1315.52, "end": 1322.24, "text": " labeling them. The labels are things such as the social status, which is divided into noble, warrior," }, { "start": 1322.24, "end": 1329.04, "text": " incarnation, which is a depiction of a god or goddess, and commoner, which is, I guess, the rest" }, { "start": 1329.04, "end": 1335.44, "text": " of us. You can also train GANs on these data sets. And it seems to be just a pretty cool data set for" }, { "start": 1335.44, "end": 1340.48, "text": " doing research, again at the intersection of AI and art. This could be like a theme for today." }, { "start": 1340.48, "end": 1346.4, "text": " RAFT is a data set of real-world annotated few-shot tasks. This is a data set where both the" }, { "start": 1346.4, "end": 1353.52, "text": " task itself and the examples are given in natural language. For example, the task here is: a data set" }, { "start": 1353.52, "end": 1359.1200000000001, "text": " is a list of institutions that have contributed papers, da da da da da. The goal is to" }, { "start": 1359.1200000000001, "end": 1364, "text": " classify these institutions into one of three categories: university, company or research" }, { "start": 1364, "end": 1369.52, "text": " institute. 50 labeled examples are provided and then there are a bunch of unlabeled examples, but" }, { "start": 1369.52, "end": 1375.76, "text": " not too many, thus the name few-shot tasks. So this could be pretty cool, because especially it" }, { "start": 1375.76, "end": 1381.92, "text": " has a lot of practical applications, if you can specify the task in natural language, and you don't" }, { "start": 1381.92, "end": 1387.76, "text": " need a whole lot of examples for the model to learn a task, a lot of new possibilities in applying" }, { "start": 1387.76, "end": 1394.48, "text": " NLP open up. There is a paper and a leaderboard if you want to give it a try. The next helpful thing" }, { "start": 1394.48, "end": 1401.68, "text": " is a data set. EDGAR data set is a data set of financial texts. EDGAR is a database where all" }, { "start": 1401.68, "end": 1407.92, "text": " the public companies have to send in their annual reports, and EDGAR corpus is a data set of that." }, { "start": 1407.92, "end": 1412.96, "text": " They do provide a script with which to mine the EDGAR database and they do train a set of word" }, { "start": 1412.96, "end": 1419.6, "text": " vectors which for specific tasks in finance perform much better than standard GloVe word vectors. 
So" }, { "start": 1419.6, "end": 1426.48, "text": " if you ever wanted a corpus of a giant amount of text that says absolutely nothing important of" }, { "start": 1426.48, "end": 1431.1999999999998, "text": " any informational value, because all of these finance departments basically just cover their" }, { "start": 1431.1999999999998, "end": 1437.6799999999998, "text": " own behind. There you go. The next data set is pass an image net replacement for self supervised" }, { "start": 1437.6799999999998, "end": 1445.04, "text": " pre training without humans. The pitch is they have 1.4 million images 1.4 million of them are" }, { "start": 1445.04, "end": 1451.44, "text": " CC by licensed and they're absolutely zero humans in the data set. Not only aren't there any" }, { "start": 1451.44, "end": 1457.76, "text": " depictions of humans, there are also no license plates or other personally identifiable information." }, { "start": 1457.76, "end": 1464.32, "text": " The catch is this data set comes without labels. So you cannot train your classic computer vision" }, { "start": 1464.32, "end": 1469.84, "text": " image classification task, but it is supposed to be another data set that you can use for pre" }, { "start": 1469.84, "end": 1475.4399999999998, "text": " training your models without having to worry about there being some personally identifiable information" }, { "start": 1475.4399999999998, "end": 1480.9599999999998, "text": " in there. And also without having to worry about the licensing of the pictures that are in the data" }, { "start": 1480.9599999999998, "end": 1487.6, "text": " set. Now are people going to replace image net by this one? Or are people simply going to add this" }, { "start": 1487.6, "end": 1493.36, "text": " data to their image net data and therefore the problems simply remain? Well, you take a wild" }, { "start": 1493.36, "end": 1498.3999999999999, "text": " guess which one of those two things is going to happen. In any case, the data set is available to" }, { "start": 1498.4, "end": 1506.48, "text": " download. Have fun. And lastly, torch data by pytorch is a very unstable prototype, but it is" }, { "start": 1506.48, "end": 1511.92, "text": " primitives in order to build data loaders in order to make data loading from various sources more" }, { "start": 1511.92, "end": 1517.2, "text": " effective. So if data loading is your bottleneck, and the standard data loaders don't do the job," }, { "start": 1517.2, "end": 1524.24, "text": " maybe give this a try. The API is might break. But you know, that's life. Last things for today," }, { "start": 1524.24, "end": 1530.88, "text": " Engadget writes Samsung hopes to copy and paste the brain to 3d chip networks. Essentially," }, { "start": 1530.88, "end": 1537.84, "text": " their idea is to stick a bunch of electrodes in there stimulate the neurons, see how the neurons" }, { "start": 1537.84, "end": 1542.64, "text": " stimulate other neurons from this, you can figure out which neurons are connected to each other and" }, { "start": 1542.64, "end": 1547.84, "text": " how strong and then you can simply map that connection pattern onto a neuromorphic chip." }, { "start": 1547.84, "end": 1552.08, "text": " Now this might actually be an interesting way of getting a neural network with the general" }, { "start": 1552.08, "end": 1557.6, "text": " connection pattern of the human brain like the sparsity pattern or how exactly the things are" }, { "start": 1557.6, "end": 1563.28, "text": " connected. 
So it might be a neat architectural investigation into the human brain. However," }, { "start": 1563.28, "end": 1568.72, "text": " the article also writes that the move could serve as a shortcut to artificial intelligence systems that" }, { "start": 1568.72, "end": 1573.9199999999998, "text": " behave like real brains, including the flexibility to learn new concepts and adapt to changing" }, { "start": 1573.9199999999998, "end": 1579.28, "text": " conditions; you might even see fully autonomous machines with true cognition, according to the" }, { "start": 1579.28, "end": 1586.72, "text": " researchers. Nah, nah. Simply because you map out the connection pattern doesn't mean at all" }, { "start": 1586.72, "end": 1592.8799999999999, "text": " that you will get any sort of brain-like activity. The connection pattern between neurons is only one of" }, { "start": 1592.8799999999999, "end": 1598.8, "text": " many, many, many things that are going on in the brain. Especially things like learning require" }, { "start": 1598.8, "end": 1604.3999999999999, "text": " forming new connections dynamically, strengthening connections or strengthening synapses," }, { "start": 1604.4, "end": 1611.2, "text": " inhibiting expression of genes that lead to faster or slower reuptake of synaptic material. And all" }, { "start": 1611.2, "end": 1616.0800000000002, "text": " of this is simply not captured by simply mapping out the connection pattern. Forgive me, but no," }, { "start": 1616.0800000000002, "end": 1621.92, "text": " you're probably not going to see fully autonomous machines with true cognition simply because you" }, { "start": 1621.92, "end": 1627.52, "text": " can map the brain's connections. Now these things are supposed to run on neuromorphic chips, which" }, { "start": 1627.52, "end": 1633.2800000000002, "text": " means they will have some of these additional abilities, but still highly doubtful. That was" }, { "start": 1633.28, "end": 1639.76, "text": " it for this week's news. So much stuff happening. If you have something interesting that's happening" }, { "start": 1639.76, "end": 1645.92, "text": " in your life, and if it is in any way related to machine learning, let me know. We have no standards" }, { "start": 1645.92, "end": 1651.2, "text": " here at ML News. Anything goes. I'll see you next week." }, { "start": 1651.2, "end": 1660.4, "text": " Ow, it hurts." } ]
yexR53My2O4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "jeff dean", "mikolov", "word2vec", "word vectors", "word representations", "nlp", "natural language processing", "sentiment classification", "king", "queen", "man", "woman", "arithmetic", "latent space", "distributed", "country", "capital", "semantic", "synonyms", "skip gram", "negative sampling", "nce", "noise contrastive estimation" ]
#ai #research #word2vec Word vectors have been one of the most influential techniques in modern NLP to date. This paper describes Word2Vec, which the most popular technique to obtain word vectors. The paper introduces the negative sampling technique as an approximation to noise contrastive estimation and shows that this allows the training of word vectors from giant corpora on a single machine in a very short time. OUTLINE: 0:00 - Intro & Outline 1:50 - Distributed Word Representations 5:40 - Skip-Gram Model 12:00 - Hierarchical Softmax 14:55 - Negative Sampling 22:30 - Mysterious 3/4 Power 25:50 - Frequent Words Subsampling 28:15 - Empirical Results 29:45 - Conclusion & Comments Paper: https://arxiv.org/abs/1310.4546 Code: https://code.google.com/archive/p/word2vec/ Abstract: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. Authors: Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at distributed representations of words and phrases and their compositionality by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean. This is another historical paper, it's one of three papers, it's the middle one that introduces the original Word2vec algorithm. And as you might know, Word2vec was extremely influential in NLP since this paper basically until recently, where it's sort of gone out of fashion a bit in research with the rise of things like ELMo and BERT, but it's still very, very relevant. So we'll look at this historical paper today with kind of the hindsight of being a couple years into the future. In fact, as you see right here, this was released in 2013, so it's seven years later now. And we'll look back and we'll see what they said back then about the system. This is not going to be like a very well-enhanced PowerPoint presentation of how Word2vec works. We're going to look at the paper and read it together. If you like content like this, if you like historical paper readings, let me know in the comments, share it out if you do like it and of course subscribe. Because this kind of historical papers, I enjoy them, but many people might already know what these things are. So, yeah. Okay. Let's, you know, go through the paper and pick up their ideas and kind of put them in context. They say the recently introduced continuous skip gram model is an efficient method for learning high quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. So the skip gram model was already introduced by Mikolov in an earlier paper that came out, I believe not like one or two months prior to this one. As I said, Word2vec is a series of papers. I don't think there is a paper called Word2vec. Rather, they here have released the code along with the paper. The code was called Word2vec. So the skip gram model was introduced previously, but it is replicated right here. So in the skip gram model, what you're trying to do is you're trying to get a distributed word representation. So what does that mean? That means that for each word in your language, let's take these words right here. For each word in the language, you want to come up with a vector that somehow describes that word in a continuous fashion. So with the two might be mapped to, I don't know, 0.1, 0.9, and 0.3. Learn might be mapped to negative 0.5 and so on. So each word gets assigned a vector in the same dimensional space. And what the previous paper kind of discovered is that if you do this correctly, then these vectors, they have some kind of properties. So we can already kind of jump ahead because this was already a bit, a bit researched in the last paper. The semantics of these vectors will be something like this. So here they have a two dimensional PCA. So these are the first two dimensions of the 1000 dimensional skip gram vector. So the vectors they obtain, they can do things like this, where they can show that in these spaces, for example, there appears to be a vector direction that characterizes the capital of a country. So if you take a few countries and their capitals and you average that vector, you get a kind of a direction for capitalness of a city. Given a country, you can see that there is a pretty clear relation here. Now, some of these things have later been revised to such that they are ultimately ended up being not that impressive. For example, there was always this kind of math with vectors.
And I don't, I believe this is, this might not be in this. This is in the last paper where they discovered that if you take the vector for king and you subtract the vector for man and you add the vector for woman, then that would result in the vector for queen. So the way they did it was basically they did this calculation right here and then they searched in the point they ended up, they searched for the nearest neighbor in their vocabulary. And that turned out to be queen. But in order to make it queen, actually, you have to exclude the original word king. People quickly discovered that if you don't exclude the original word, the result of this kind of arithmetic will almost always lead back to the original word. And then a lot of these analogy tasks are simply the result of you then discarding that word during the nearest neighbor search. And then queen just happens to be one of the closest words. And it's sort of much less dependent on which exact calculation you do here. So there's been a lot of follow up work kind of analyzing, criticizing these vector maths. But definitely we know that these word vectors turned out to be extremely, extremely helpful and syntactically and semantically relevant in downstream tasks because they have performed very, very well. So how does the skip gram model work? How does it assign vectors to each word? So first of all, it has a dictionary. So there is a word, an input word. And for each word, you have a big dictionary. And the big dictionary basically says that the word two is going to be mapped to this vector point one, da, da, da, da, da, da, and so on. The word learn is going to be mapped to that vector. And then you also have these output vectors right here. And what you're trying to do is you're trying to take a phrase from the data set like this one right here. And you take out one word like this word vector right here. And you're trying to frame this as a prediction task. So you're trying to frame this as, in this case, four different prediction tasks. So you're telling your machine, I give you the word vector, and which other words are around the word vector? You just tell it that you don't tell it anything else. You just say, which other words are around the word vector? And the correct answers in this case would be to, learn, word, and representations. So you construct four different training examples where you have an X and a Y. So the X is always vector, and the Y is two. And then the next training sample, the X is vector, and the Y is learn, and so on. So this here, each training sample is a classification task. And the classification task is, as you can see, no, you can't see right here, but the classification task is you have the input word, and you classify it into one of many, many, many, many, many, many classes. Namely, there are as many classes as you have words in the dictionary. So each word in the dictionary will have a class associated with it. So in ImageNet, you have like 1,000 classes, but in these, that's already a lot. But in these tasks, you're going to have 100,000 classes, because there are 100,000 words in the English language that you want to treat. There are many more, but in this case, they leave away all the words that appear less than five times in their corpus. That's still a lot of words. So it's like a super duper duper lot of classification task. 
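To make this concrete, here is a minimal sketch, in my own illustrative Python rather than the paper's released C code, of how a sentence is turned into these (input, context) training pairs. The window size of 2 is an invented choice for the example; the paper treats it as a hyperparameter.

```python
# Sketch: turning a tokenized sentence into skip-gram (input, context) pairs.
# Window size 2 is an illustrative choice, not the paper's fixed setting.
def skipgram_pairs(tokens, window=2):
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                # each (center, context) pair is one classification example
                pairs.append((center, tokens[j]))
    return pairs

sentence = "to learn word vector representations".split()
for x, y in skipgram_pairs(sentence):
    print(x, "->", y)
```

Each printed pair is one X and Y training example for the giant classifier described above; with a window of 3, the center word vector would also produce the pair (vector, to).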
But ultimately, if you do something like this, then the origin, so the representation that you end up with is going to be very, very good at doing these kind of downstream tasks. And that's what they discovered. So their skip gram model is nothing else than taking a word and predicting the surrounding words from that word. And this is what it means. This is the formal statement of the skip gram objective. What you want to do is the objective of the skip gram model is to maximize the average log probability this one. So for the word we're considering, the word T, we want to maximize the log probability of each word w that is in around the word c, sorry, around the word w in a context window of c. That's exactly what we did before. Take a word like this model right here. And from it, we predict all of the words around it in a given window. That's all. That's the entire objective. And that will give you very good representations. And this is how you would implement that. So what you'll have is you'll have these vector representation v that comes from your original dictionary. Those are the things you learn. And then because you have like a 30,000 way classifier, you know that a classification layer is nothing else than a linear layer followed by a softmax operation. And that linear layer also has parameters. These are the v primes. So first you have the look up in the dictionary for the word vector right here. And this is the vector of the classification layer. Now there are modifications where you can use like the same vectors and so on. Or you can also make use of these vectors. But ultimately, you care about these vectors right here. And the vectors here are simply the classification layers weights. So here you can see that there is what you're trying to maximize is the inner product between the word that you're considering and the words around that word. And you're trying to do a classification task. So you need to normalize. Now this is the normalization constant, and it goes over all of your vocabulary. So that's what they tackle here. They say w is the number of words in the vocabulary. This formulation is impractical because the cost of computing the gradient is proportional to w, which is often large. And that's 10 to the five to 10 to the seven terms. So many like tens of millions of terms in your vocabulary. That's just not feasible. So people have been sort of trying different ways to get around very, very large number of classes. And here it seems that that is really our bottleneck. In the previous paper, they've already shown that this objective can give you very good word representation. But now we need to get around the fact that we have such large vocabularies. So the first idea here is hierarchical softmax. And this is kind of a tangent. I find this paper, by the way, it's sort of hard to read because it's like a half engineering paper. But yeah, so first they introduce this hierarchical softmax, which is kind of a distraction. It's kind of a here is what we do. Here is what we considered first, but then didn't end up using really. They do compare with it, but the flow of text is sort of that you expect this to be part of the final model, which it isn't. So in the hierarchical softmax, what you do instead of having this giant multi class classification task right here, you take all of these classes right here, and you put them in a sort of a tree. Okay, so you take this and you put them into a tree. 
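Before following the tree idea further, it helps to see the expensive objective spelled out in code. The following is a scaled-down sketch with toy sizes of my own choosing, not the paper's implementation; the point is only that the normalization term has to touch every output vector.

```python
import numpy as np

# Scaled-down sketch of the full-softmax skip-gram probability:
#   p(w_O | w_I) = exp(v'_{w_O} . v_{w_I}) / sum_w exp(v'_w . v_{w_I})
W, d = 10_000, 100                         # toy vocabulary size and dimension
rng = np.random.default_rng(0)
V = rng.normal(scale=0.01, size=(W, d))    # input word vectors v
Vp = rng.normal(scale=0.01, size=(W, d))   # output vectors v' (classifier weights)

def log_prob(out_id, in_id):
    logits = Vp @ V[in_id]                 # O(W * d): touches ALL output vectors
    logits -= logits.max()                 # for numerical stability
    return logits[out_id] - np.log(np.exp(logits).sum())

print(log_prob(42, 7))                     # one log probability, at full-vocab cost
```

Every gradient step pays that cost over the whole vocabulary, which is exactly what hierarchical softmax, and later negative sampling, are designed to avoid.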
So instead of classifying, you know, let's say we have 1000 classes, instead of classifying 1000 ways, we first classify in two ways. And then we classify in two ways again, from each one, and then we classify in two ways again, as you know, 1000 is like two to the 10. So we need approximately 10 layers of this before we are actually arriving at 1000 classes. But it also means that we only have two way classifications each time. So in the hierarchical softmax, we build trees like this, and then we so we have a word, we look up its vector, sorry, its vector, and then we classify it for each of these nodes. So your output isn't going to be 1000, 1000 log probabilities, your output is going to be a log probability, a binary log probability for each of the nodes right here. So you want to know, okay, here, is it in the upper half or the lower half of my classes? Okay, cool. It's in the upper half. Okay, here is in the upper half or the lower half and so on. And you learn to predict all of these junctions right here. And that's going to end up with you having to predict less. Now, of course, you are constrained, you impose a very big prior on the class distribution, classes aren't independent anymore. Namely, if two classes here are in the same subtree, that means that they are going to be predicted, their predictions are going to be correlated because the path to them is the same partially. So how you arrange the classes here is very important. And there has been a lot of work in this. But as I said, this is rather a distraction right here. Hierarchical softmax is a way to solve this. However, they went with a different way right here. They went with this approach called negative sampling. Negative sampling has been, it's been very influential. Not only in Word2vec, but negative sampling is one of the cornerstones of the current trend in self supervised learning and contrastive estimation and so on. So this all of this, you know, it pops up in unlikely ways in other fields. And it sort of, I'm not going to say it originated here, but definitely it was introduced into the popular deep learning world right here. So they say an alternative to hierarchical softmax is noise contrastive estimation. Okay, so noise contrastive estimation posits that a good model should be able to differentiate data from noise by means of logistic regression. You know, that seems very reasonable. This is similar to the hinge loss and so on, yada yada yada. While NCE can be shown to approximately maximize the log probability of the softmax, the skip gram model is only concerned with learning high quality vector representations. So we are free to simplify noise contrastive estimation as long as the vector representations retain their quality. We define negative sampling by this following objective. So this is very interesting. They say, okay, noise contrastive estimation, you know, it approximately maximizes the log probability. So the noise contrastive estimation would actually be the correct way to approximate their problem. However, they say, well, as long as, you know, as long as something reasonable comes out, we're free to change that up a bit. So they go with this negative sampling approach right here. And you can see that this is almost the same. So it's written a bit differently from the original softmax thing because the original softmax thing was written as a fraction and here it's as a sum. But what you're trying to do in the negative sampling framework is you're trying to maximize the following.
You're trying to maximize the inner product of the word you're considering and the words around them. Okay. So you're trying to still predict the words around you. But now instead of having this prediction softmax over all of the classes, you only have the softmax over a subset of classes. So what you'll do is you sample words from your vocabulary at random and you sample k of them and you're simply trying to now minimize the inner product between those words and your word. Okay. So what does that ultimately lead to? It ultimately leads to the following. You have a word like this word here, negative. And what you're trying to do is you're not trying that much to predict the word sampling. What you're trying to do is you're trying to say that in my space right here, I simply want sampling to be closer than any other words that's not in the context window. Okay. So here is my word negative and here is my word sampling. And I want these two to be close. And if I sample another word, like here, this is the word cake. If I, sorry, if I sample that, I simply want that to be far away, farther than the word sampling. Okay. So this is now a comparative. It's not I classify sampling as the highest class. It's simply I want to classify the word sampling against the other classes higher. All right. So, and this is now much, much easier. So instead of a thousand or 10,000 or a million way classification, I now maybe have, I have a K plus one way classification, right? Pretty easy, right? I simply sample K other words. And I assume because I have so many words, chances that I actually sample one that's in my context window is very small, right? So I simply sample other words and I say, well, these other words are random. They have nothing to do with the current frame that I'm looking at. So they should be, you know, they can be whatever they want, but at least they should be farther away than the words that are actually in my con in my context. And that is negative sampling, the process of sampling negatives, this right here, and then making sure that the positives, which are these here, um, in this case, the words in the context are classified with a higher probability than the negatives for a given input word, right? This here is the input word. That's it. That's negative sampling. And of course, yeah, as I said, you recognize this from current things like, um, self supervised learning where you want to have the same image augmented twice, go through the pipeline, you know, you augment, you put a little bit of different noise and then you have a different image and at the end you say these two should be close together while this other one should be far apart. It's the exact same thing here, except that you have a different way of obtaining the positive and the negative samples. In this case, positive samples are everything that's in the context. Negative samples are just randomly sampled from the dataset. And that, you know, works, of course that works much, much, much faster. And you can see that this, um, this, uh, turns out to give you vectors that are pretty good and you can train with higher vectors, sorry, with higher dimensional vectors, you can train with bigger vocabularies with this. This has turned out to be very, very influential. As I said, uh, now with the rise of BERT and so on, Word2vec is kind of getting forgotten, but, um, this was a revolution in distributed vectors. So it wasn't a thing really. It kind of was a thing before that, but it wasn't really a thing that people used.
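Before getting back to the history, here is the negative sampling objective as a small code sketch; the variable names and sizes are mine, and a real implementation would of course update the vectors with gradients rather than just compute the loss once.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Negative-sampling loss for one (input, context) pair:
#   -log sigma(v'_pos . v_in) - sum over k of log sigma(-v'_neg . v_in)
def neg_sampling_loss(v_in, v_pos, v_negs):
    pos = np.log(sigmoid(v_pos @ v_in))             # true context word: pull closer
    neg = np.log(sigmoid(-(v_negs @ v_in))).sum()   # k random words: push away
    return -(pos + neg)

d, k = 100, 5
rng = np.random.default_rng(0)
loss = neg_sampling_loss(rng.normal(size=d),        # input word vector
                         rng.normal(size=d),        # positive (context) vector
                         rng.normal(size=(k, d)))   # k sampled negative vectors
print(loss)
```

Note that this is a k plus one way comparison instead of a softmax over the whole vocabulary, so the cost per training example no longer depends on the vocabulary size.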
What people would do is still, they would do N-gram models before that. So they would kind of dist, dist, they would sort of chunk up their sentences into N-grams into overlapping N-grams and then have a big giant, uh, table for their, where they index their N-grams. So the word, I don't know, so the word, um, hello is ID one. The word hello there is ID two and so on. So you have a big table for all the N-grams. And then what we would try to do is you would try to do this kind of bag of words estimation where you would take a, you know, whatever N-grams appeared in your sentence and you would have this big classification where you'd associate the N-grams with each other and so on. So distributed word representations were kind of a revolution at that point, especially distributed representation that actually outperformed these old N-gram methods. Um, yeah. So there are a number of tricks right here that are, I think, not understood until this day. For example, the question is how do you sample these negative samples? Right here, this basically says get K words from your vocabulary at random according to this distribution right here. Now how are you going to do that? Basically you have a spectrum of options. The one side of the spectrum is going to be completely uniform. Okay. We sample each word with the same probability. And the other side of the spectrum is something like sample this according to their uni-gram. These are two different things. They're opposites in this, in this fashion. So here you say, Hey, um, some words appear way, way, way more often than other words. Shouldn't we prefer them when we sample? Right? So if we have a corpus, um, and shouldn't we sample from the corpus? And if in the corpus, one word appears 50 times more than the other word, then shouldn't we sample that 50 times more as a negative because it's, you know, so abundant and it should give a higher classification accuracy. Whereas on the other hand, you could say, no, no, no, we should simply sample every word in our dictionary uniformly. They came up with something in between, which they say, um, both NCE and negative sampling have noise distribution as a free parameter. We investigated a number of choices and found that the uni-gram distribution raised to the three quarter power, i.e. uni-gram to the three quarter, outperformed significantly the uni-gram and uniform distributions. For both NCE and negative sampling on every task we tried including language modeling. This I think is a mystery until today. And it actually turned out that this exponent right here is magically much better than like the exponent of one or even the exponent of one half. Like you might reasonably assume that the square root, you know, might be something, but the three quarters I think turned out to be very good and very mystical. So what does it mean? It means that you have kind of a balance between words that appear often and words that don't appear often. Usually in these kind of things, you have a power law where you have very few words that appear very often. And then you have, okay, that's the tail shouldn't go up, but you have a very long tail of words. And what you want to do is in this case, you want to sample these words here more, but they appear so much more often that, if you simply sample according to their uni-gram distribution, you'll basically not regard these words right here, you'll forget about them and your performance will suffer because they do appear every now and then.
So what you want to do is you want to push those down a little bit and the optimal amount for the little bit turns out to be to raise it to the three quarters. Strange but, you know, turned out to work well. The other thing they do is a sub sampling of frequent words. So again, this is a way to kind of push down the often appearing words where they say the most frequent words can easily occur hundreds of millions of times, like in, the, or a; such words usually provide less information value than the rare words. For example, while the skip gram model benefits from observing the co-occurrences of France and Paris, it benefits much less from observing the frequent co-occurrences of France and the, as nearly every word co-occurs frequently within a sentence with the. So they do another trick here to counter this imbalance between rare and frequent words: they use a simple sub sampling approach, where each word in the training set is discarded with probability computed by that formula. Right, so there's the formula right here and you might be asking again, why, why this formula? So this is the sampling probability of a word and it goes with one over T. T is a temperature parameter and F is the frequency with which the word appears in the corpus. So as you can see, this here is the frequency, and as the word appears more in the corpus, this thing goes down and this thing goes up. So it's discarded with this probability. So it's discarded with a higher probability if it appears more often. Where F is frequency of a word, T is a chosen threshold. We chose this sub sampling formula because it aggressively sub samples words whose frequency is greater than T while preserving the ranking of the frequencies. Although this sub sampling formula was chosen heuristically, we found it to work well in practice. It accelerates learning and even significantly improves the accuracy of the learned vectors of the rare words as will be shown in the following sections. So again, something sort of arbitrary, it's more understandable than the three quarters, but still it's sort of arbitrary. They experimented around, they found this works well and then everybody ended up using that. So that's how this kind of stuff happens. Okay, so now we get into the empirical results. And the empirical results in this case were already sort of given in the previous paper, but here they have the analogical reasoning task where you can see that the negative sampling did outperform the others by quite a bit right here. So the negative sampling approaches outperformed the hierarchical softmax and the noise contrastive estimation. And in the previous paper, they also compared with other baselines and saw that it also outperforms those while being quite time efficient. So you can see that especially with the sub sampling approaches, the time here is 36 minutes, and I think they have like a huge corpus that they train on. This Word2vec code turned out to be really, really efficient code. And that's why it got so popular as well. They did the same thing for phrases right here. So for phrases like New York Times and so on, but this was kind of more of a side thing. The phrase vectors turned out to be, you know, rather a side thing from the actual code right here.
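Both of the sampling tricks from this section fit in a few lines. The sketch below uses a toy corpus, and for the subsampling part a much larger threshold than the paper's 1e-5, purely so the effect is visible on a handful of words.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
corpus = "the cat sat on the mat the cat".split()    # toy corpus
counts = Counter(corpus)
words = list(counts)
freq = np.array([counts[w] for w in words], float) / len(corpus)

# Noise distribution for negatives: the unigram distribution raised to 3/4.
noise = freq ** 0.75
noise /= noise.sum()
print(rng.choice(words, size=5, p=noise))            # 5 negative samples

# Subsampling: discard word w with probability 1 - sqrt(t / f(w)).
t = 0.05                # the paper uses around 1e-5 on real corpora
p_keep = np.minimum(1.0, np.sqrt(t / freq))
kept = [w for w in corpus if rng.random() < p_keep[words.index(w)]]
print(kept)             # frequent words like "the" are dropped most often
```

The 3/4 power dampens the advantage of very frequent words when drawing negatives, and the subsampling step throws those same words out of the training stream most aggressively, which are exactly the two knobs described above.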
So yeah, as I said, this paper is very different from other research papers in that it's it's sort of half an engineering paper and all of these papers are they're kind of hard to read because they just kind of state some things in the order is kind of weird sometimes. Why they do things is kind of weird sometimes. But you can't you know, you can't deny that it had the quite the effect on the community. And this it is a very cool paper, very cool series of papers. And it's very cool that actually, they released the code, and they made the code such that it is super duper efficient, even like on a single machine. And that was very cool, because you know, being Google, they could have just released code that is very efficient on a distributed data center. And they didn't do that. So that this is, it's sort of not really like today anymore. Like today, when they release code, it's always you need you need like 50 cloud TPUs to do it. And it's still cool that they release code. But this was this was really a step into kind of democratizing AI. And yeah, so that was my rant about Word2vec. I hope you enjoyed this. I hope this still was useful to you, even though most of you probably already knew Word2vec. And yeah, so I'll see you next time. Bye bye.
[ { "start": 0, "end": 5.5200000000000005, "text": " Hi there, today we'll look at distributed representations of words and phrases and their" }, { "start": 5.5200000000000005, "end": 12.56, "text": " compositionality by Thomas Mikolov, Ilya Sotskyver, Kai Chen, Greg Corrado and Jeffrey Dean." }, { "start": 12.56, "end": 17.580000000000002, "text": " This is another historical paper, it's one of three papers, it's the middle one that" }, { "start": 17.580000000000002, "end": 21.16, "text": " introduces the original Word2vec algorithm." }, { "start": 21.16, "end": 29.240000000000002, "text": " And as you might know, Word2vec was extremely influential in NLP since this paper basically" }, { "start": 29.24, "end": 34.6, "text": " until recently, where it's sort of gone out of fashion a bit in research with the rise" }, { "start": 34.6, "end": 40.12, "text": " of things like ELMo and BERT, but it's still very, very relevant." }, { "start": 40.12, "end": 45.12, "text": " So we'll look at this historical paper today with kind of the hindsight of being a couple" }, { "start": 45.12, "end": 46.12, "text": " years into the future." }, { "start": 46.12, "end": 53.92, "text": " In fact, as you see right here, this was released in 2013, so it's seven years later now." }, { "start": 53.92, "end": 58.86, "text": " And we'll look back and we'll see what they said back then about the system." }, { "start": 58.86, "end": 66.6, "text": " This is not going to be like a very well-enhanced PowerPoint presentation of how Word2vec works." }, { "start": 66.6, "end": 70.96, "text": " We're going to look at the paper and read it together." }, { "start": 70.96, "end": 75.56, "text": " If you like content like this, if you like historical paper readings, let me know in" }, { "start": 75.56, "end": 81.34, "text": " the comments, share it out if you do like it and of course subscribe." }, { "start": 81.34, "end": 86.74, "text": " Because this kind of historical papers, I enjoy them, but many people might already" }, { "start": 86.74, "end": 89, "text": " know what these things are." }, { "start": 89, "end": 90.83999999999999, "text": " So, yeah." }, { "start": 90.83999999999999, "end": 91.83999999999999, "text": " Okay." }, { "start": 91.83999999999999, "end": 97.56, "text": " Let's, you know, go through the paper and pick up their ideas and kind of put them in" }, { "start": 97.56, "end": 98.94, "text": " context." }, { "start": 98.94, "end": 103.06, "text": " They say the recently introduced continuous skip gram model is an efficient method for" }, { "start": 103.06, "end": 109.28, "text": " learning high quality distributed vector representations that capture a large number of precise syntactic" }, { "start": 109.28, "end": 111.44, "text": " and semantic word relationships." }, { "start": 111.44, "end": 116.24, "text": " So the skip gram model was already introduced by Mikhailov in an earlier paper that came" }, { "start": 116.24, "end": 121.08, "text": " out, I believe not like one or two months prior to this one." }, { "start": 121.08, "end": 123.75999999999999, "text": " As I said, Word2vec is a series of papers." }, { "start": 123.75999999999999, "end": 126.67999999999999, "text": " I don't think there is a paper called Word2vec." }, { "start": 126.67999999999999, "end": 132, "text": " Rather, they here have released the code along with the paper." }, { "start": 132, "end": 135.28, "text": " The code was called Word2vec." 
}, { "start": 135.28, "end": 140.42, "text": " So the skip gram model was introduced previously, but it is replicated right here." }, { "start": 140.42, "end": 146.23999999999998, "text": " So in the skip gram model, what you're trying to do is you're trying to get a distributed" }, { "start": 146.23999999999998, "end": 148.28, "text": " word representation." }, { "start": 148.28, "end": 149.27999999999997, "text": " So what does that mean?" }, { "start": 149.27999999999997, "end": 154.07999999999998, "text": " That means that for each word in your language, let's take these words right here." }, { "start": 154.07999999999998, "end": 158.51999999999998, "text": " For each word in the language, you want to come up with a vector that somehow describes" }, { "start": 158.51999999999998, "end": 161.07999999999998, "text": " that word in a continuous fashion." }, { "start": 161.08, "end": 170.60000000000002, "text": " So with the two might be mapped to, I don't know, 0.1, 0.9, and 0.3." }, { "start": 170.60000000000002, "end": 174.48000000000002, "text": " Learn might be mapped to negative 0.5 and so on." }, { "start": 174.48000000000002, "end": 179.72000000000003, "text": " So each word gets assigned a vector in the same dimensional space." }, { "start": 179.72000000000003, "end": 184.98000000000002, "text": " And what the previous paper kind of discovered is that if you do this correctly, then these" }, { "start": 184.98000000000002, "end": 187.64000000000001, "text": " vectors, they have some kind of properties." }, { "start": 187.64, "end": 194.04, "text": " So we can already kind of jump ahead because this was already a bit, a bit researched in" }, { "start": 194.04, "end": 195.92, "text": " the last paper." }, { "start": 195.92, "end": 199.55999999999997, "text": " The semantics of these vectors will be something like this." }, { "start": 199.55999999999997, "end": 202.76, "text": " So here they have a two dimensional PCA." }, { "start": 202.76, "end": 208.14, "text": " So these are the first two dimensions of the 1000 dimensional skip gram vector." }, { "start": 208.14, "end": 213.79999999999998, "text": " So the vectors they obtain, they can do things like this, where they can show that in these" }, { "start": 213.8, "end": 221, "text": " spaces, for example, there appears to be a vector direction that characterizes the capital" }, { "start": 221, "end": 222.72, "text": " of a country." }, { "start": 222.72, "end": 229.12, "text": " So if you take a few countries and their capitals and you average that vector, you get a kind" }, { "start": 229.12, "end": 233.52, "text": " of a direction for capitalness of a city." }, { "start": 233.52, "end": 237.76000000000002, "text": " Given a country, you can see that there is a pretty clear relation here." }, { "start": 237.76, "end": 245.92, "text": " Now, some of these things have later been revised to such that they are ultimately ended" }, { "start": 245.92, "end": 247.56, "text": " up being not that impressive." }, { "start": 247.56, "end": 252.78, "text": " For example, there was always this kind of math with vectors." }, { "start": 252.78, "end": 256.2, "text": " And I don't, I believe this is, this might not be in this." 
}, { "start": 256.2, "end": 262.52, "text": " This is in the last paper where they discovered that if you take the vector for king and you" }, { "start": 262.52, "end": 272.32, "text": " subtract the vector for man and you add the vector for woman, then that would result in" }, { "start": 272.32, "end": 274.96, "text": " the vector for queen." }, { "start": 274.96, "end": 281.56, "text": " So the way they did it was basically they did this calculation right here and then they" }, { "start": 281.56, "end": 285.35999999999996, "text": " searched in the point they ended up, they searched for the nearest neighbor in their" }, { "start": 285.35999999999996, "end": 287.38, "text": " vocabulary." }, { "start": 287.38, "end": 288.79999999999995, "text": " And that turned out to be queen." }, { "start": 288.8, "end": 295.36, "text": " But in order to make it queen, actually, you have to exclude the original word king." }, { "start": 295.36, "end": 301.6, "text": " People quickly discovered that if you don't exclude the original word, the result of this" }, { "start": 301.6, "end": 305.82, "text": " kind of arithmetic will almost always lead back to the original word." }, { "start": 305.82, "end": 311.94, "text": " And then a lot of these analogy tasks are simply the result of you then discarding that" }, { "start": 311.94, "end": 313.72, "text": " word during the nearest neighbor search." }, { "start": 313.72, "end": 317.90000000000003, "text": " And then queen just happens to be one of the closest words." }, { "start": 317.9, "end": 323.78, "text": " And it's sort of much less dependent on which exact calculation you do here." }, { "start": 323.78, "end": 329.2, "text": " So there's been a lot of follow up work kind of analyzing, criticizing these vector maths." }, { "start": 329.2, "end": 334.85999999999996, "text": " But definitely we know that these word vectors turned out to be extremely, extremely helpful" }, { "start": 334.85999999999996, "end": 341.08, "text": " and syntactically and semantically relevant in downstream tasks because they have performed" }, { "start": 341.08, "end": 343.12, "text": " very, very well." }, { "start": 343.12, "end": 346.17999999999995, "text": " So how does the skip gram model work?" }, { "start": 346.18, "end": 352.84000000000003, "text": " How does it assign vectors to each word?" }, { "start": 352.84000000000003, "end": 357.04, "text": " So first of all, it has a dictionary." }, { "start": 357.04, "end": 360.38, "text": " So there is a word, an input word." }, { "start": 360.38, "end": 363.44, "text": " And for each word, you have a big dictionary." }, { "start": 363.44, "end": 369.68, "text": " And the big dictionary basically says that the word two is going to be mapped to this" }, { "start": 369.68, "end": 372.54, "text": " vector point one, da, da, da, da, da, da, and so on." }, { "start": 372.54, "end": 377.8, "text": " The word learn is going to be mapped to that vector." }, { "start": 377.8, "end": 383.62, "text": " And then you also have these output vectors right here." }, { "start": 383.62, "end": 390.56, "text": " And what you're trying to do is you're trying to take a phrase from the data set like this" }, { "start": 390.56, "end": 392.48, "text": " one right here." }, { "start": 392.48, "end": 398.46000000000004, "text": " And you take out one word like this word vector right here." }, { "start": 398.46, "end": 405.08, "text": " And you're trying to frame this as a prediction task." 
}, { "start": 405.08, "end": 410.68, "text": " So you're trying to frame this as, in this case, four different prediction tasks." }, { "start": 410.68, "end": 417.4, "text": " So you're telling your machine, I give you the word vector, and which other words are" }, { "start": 417.4, "end": 419.84, "text": " around the word vector?" }, { "start": 419.84, "end": 421.96, "text": " You just tell it that you don't tell it anything else." }, { "start": 421.96, "end": 426.03999999999996, "text": " You just say, which other words are around the word vector?" }, { "start": 426.04, "end": 433.04, "text": " And the correct answers in this case would be to, learn, word, and representations." }, { "start": 433.04, "end": 439.92, "text": " So you construct four different training examples where you have an X and a Y." }, { "start": 439.92, "end": 445.36, "text": " So the X is always vector, and the Y is two." }, { "start": 445.36, "end": 453.92, "text": " And then the next training sample, the X is vector, and the Y is learn, and so on." }, { "start": 453.92, "end": 459.24, "text": " So this here, each training sample is a classification task." }, { "start": 459.24, "end": 466.8, "text": " And the classification task is, as you can see, no, you can't see right here, but the" }, { "start": 466.8, "end": 474.44, "text": " classification task is you have the input word, and you classify it into one of many," }, { "start": 474.44, "end": 477.48, "text": " many, many, many, many, many classes." }, { "start": 477.48, "end": 482.64, "text": " Namely, there are as many classes as you have words in the dictionary." }, { "start": 482.64, "end": 488.76, "text": " So each word in the dictionary will have a class associated with it." }, { "start": 488.76, "end": 493.36, "text": " So in ImageNet, you have like 1,000 classes, but in these, that's already a lot." }, { "start": 493.36, "end": 499.59999999999997, "text": " But in these tasks, you're going to have 100,000 classes, because there are 100,000 words in" }, { "start": 499.59999999999997, "end": 502.47999999999996, "text": " the English language that you want to treat." }, { "start": 502.47999999999996, "end": 507.28, "text": " There are many more, but in this case, they leave away all the words that appear less" }, { "start": 507.28, "end": 509.08, "text": " than five times in their corpus." }, { "start": 509.08, "end": 510.84, "text": " That's still a lot of words." }, { "start": 510.84, "end": 515.6999999999999, "text": " So it's like a super duper duper lot of classification task." }, { "start": 515.6999999999999, "end": 522.22, "text": " But ultimately, if you do something like this, then the origin, so the representation that" }, { "start": 522.22, "end": 527.56, "text": " you end up with is going to be very, very good at doing these kind of downstream tasks." }, { "start": 527.56, "end": 529.88, "text": " And that's what they discovered." }, { "start": 529.88, "end": 536.68, "text": " So their skip gram model is nothing else than taking a word and predicting the surrounding" }, { "start": 536.68, "end": 540.4, "text": " words from that word." }, { "start": 540.4, "end": 542.48, "text": " And this is what it means." }, { "start": 542.48, "end": 545.76, "text": " This is the formal statement of the skip gram objective." }, { "start": 545.76, "end": 552.28, "text": " What you want to do is the objective of the skip gram model is to maximize the average" }, { "start": 552.28, "end": 554.68, "text": " log probability this one." 
}, { "start": 554.68, "end": 561.6, "text": " So for the word we're considering, the word T, we want to maximize the log probability" }, { "start": 561.6, "end": 571.4, "text": " of each word w that is in around the word c, sorry, around the word w in a context window" }, { "start": 571.4, "end": 572.4, "text": " of c." }, { "start": 572.4, "end": 573.72, "text": " That's exactly what we did before." }, { "start": 573.72, "end": 576.6, "text": " Take a word like this model right here." }, { "start": 576.6, "end": 585, "text": " And from it, we predict all of the words around it in a given window." }, { "start": 585, "end": 586, "text": " That's all." }, { "start": 586, "end": 587.5, "text": " That's the entire objective." }, { "start": 587.5, "end": 592.76, "text": " And that will give you very good representations." }, { "start": 592.76, "end": 594.96, "text": " And this is how you would implement that." }, { "start": 594.96, "end": 602.68, "text": " So what you'll have is you'll have these vector representation v that comes from your original" }, { "start": 602.68, "end": 603.68, "text": " dictionary." }, { "start": 603.68, "end": 605.32, "text": " Those are the things you learn." }, { "start": 605.32, "end": 610.7, "text": " And then because you have like a 30,000 way classifier, you know that a classification" }, { "start": 610.7, "end": 616.08, "text": " layer is nothing else than a linear layer followed by a softmax operation." }, { "start": 616.08, "end": 618.58, "text": " And that linear layer also has parameters." }, { "start": 618.58, "end": 620.4000000000001, "text": " These are the v primes." }, { "start": 620.4000000000001, "end": 627.48, "text": " So first you have the look up in the dictionary for the word vector right here." }, { "start": 627.48, "end": 630.74, "text": " And this is the vector of the classification layer." }, { "start": 630.74, "end": 634.36, "text": " Now there are modifications where you can use like the same vectors and so on." }, { "start": 634.36, "end": 636.94, "text": " Or you can also make use of these vectors." }, { "start": 636.94, "end": 641.88, "text": " But ultimately, you care about these vectors right here." }, { "start": 641.88, "end": 646.52, "text": " And the vectors here are simply the classification layers weights." }, { "start": 646.52, "end": 654.46, "text": " So here you can see that there is what you're trying to maximize is the inner product between" }, { "start": 654.46, "end": 662.28, "text": " the word that you're considering and the words around that word." }, { "start": 662.28, "end": 665.94, "text": " And you're trying to do a classification task." }, { "start": 665.94, "end": 667.58, "text": " So you need to normalize." }, { "start": 667.58, "end": 674.8000000000001, "text": " Now this is the normalization constant, and it goes over all of your vocabulary." }, { "start": 674.8000000000001, "end": 677.48, "text": " So that's what they tackle here." }, { "start": 677.48, "end": 682.6, "text": " They say w is the number of words in the vocabulary." }, { "start": 682.6, "end": 688.0600000000001, "text": " This formulation is impractical because the cost of computing the gradient is proportional" }, { "start": 688.0600000000001, "end": 691.2, "text": " to w, which is often large." }, { "start": 691.2, "end": 694, "text": " And that's 10 to the five to 10 to the seven terms." }, { "start": 694, "end": 699.6, "text": " So many like tens of millions of terms in your vocabulary." 
}, { "start": 699.6, "end": 701.52, "text": " That's just not feasible." }, { "start": 701.52, "end": 707.44, "text": " So people have been sort of trying different ways to get around very, very large number" }, { "start": 707.44, "end": 708.92, "text": " of classes." }, { "start": 708.92, "end": 711.88, "text": " And here it seems that that is really our bottleneck." }, { "start": 711.88, "end": 716.08, "text": " In the previous paper, they've already shown that this objective can give you very good" }, { "start": 716.08, "end": 717.9, "text": " word representation." }, { "start": 717.9, "end": 722.32, "text": " But now we need to get around the fact that we have such large vocabularies." }, { "start": 722.32, "end": 724.88, "text": " So the first idea here is hierarchical softmax." }, { "start": 724.88, "end": 726.36, "text": " And this is kind of a tangent." }, { "start": 726.36, "end": 732.2, "text": " I find this paper, by the way, it's sort of hard to read because it's like a half engineering" }, { "start": 732.2, "end": 733.88, "text": " paper." }, { "start": 733.88, "end": 740.4000000000001, "text": " But yeah, so first they introduce this hierarchical softmax, which is kind of a distraction." }, { "start": 740.4000000000001, "end": 743.0400000000001, "text": " It's kind of a here is what we do." }, { "start": 743.0400000000001, "end": 746.9200000000001, "text": " Here is what we considered first, but then didn't end up using really." }, { "start": 746.92, "end": 753, "text": " They do compare with it, but the flow of text is sort of that you expect this to be part" }, { "start": 753, "end": 755.12, "text": " of the final model, which it isn't." }, { "start": 755.12, "end": 760.4799999999999, "text": " So in the hierarchical softmax, what you do instead of having this giant multi class classification" }, { "start": 760.4799999999999, "end": 767.4799999999999, "text": " task right here, you take all of these classes right here, and you put them in a sort of" }, { "start": 767.4799999999999, "end": 768.4799999999999, "text": " a tree." }, { "start": 768.4799999999999, "end": 773.24, "text": " Okay, so you take this and you put them into a tree." }, { "start": 773.24, "end": 777.8, "text": " So instead of classifying, you know, let's say we have 1000 classes, instead of classifying" }, { "start": 777.8, "end": 782, "text": " 1000 ways, we first classify in two ways." }, { "start": 782, "end": 787.64, "text": " And then we classify in two ways again, from each one, and then we classify in two ways" }, { "start": 787.64, "end": 791.1, "text": " again, as you know, 1000 is like two to the 10." }, { "start": 791.1, "end": 800.16, "text": " So we need approximately 10 layers of this before we are actually arriving at 1000 classes." }, { "start": 800.16, "end": 805.88, "text": " But it also means that we only have two way classifications each time." }, { "start": 805.88, "end": 812.5, "text": " So in the hierarchical softmax, we build trees like this, and then we so we have a word," }, { "start": 812.5, "end": 818.4399999999999, "text": " we look up its vector, sorry, its vector, and then we classify it for each of these" }, { "start": 818.4399999999999, "end": 819.4399999999999, "text": " nodes." }, { "start": 819.4399999999999, "end": 826.64, "text": " So your output isn't going to be 1000, 1000 log probabilities, your output is going to" }, { "start": 826.64, "end": 832.96, "text": " be a log probability, a binary log probability for each of the nodes right here." 
}, { "start": 832.96, "end": 838.96, "text": " So you want to know, okay, here, is it in the upper half or the lower half of my classes?" }, { "start": 838.96, "end": 839.96, "text": " Okay, cool." }, { "start": 839.96, "end": 840.96, "text": " It's in the upper half." }, { "start": 840.96, "end": 844, "text": " Okay, here is in the upper half or the lower half and so on." }, { "start": 844, "end": 848.16, "text": " And you learn all to predict all of these junctions right here." }, { "start": 848.16, "end": 851.4, "text": " And that's going to end up you with you having to predict less." }, { "start": 851.4, "end": 859.0799999999999, "text": " Now, of course, you are constrained, you impose a very big prior on the class distribution," }, { "start": 859.0799999999999, "end": 860.68, "text": " classes aren't independently anymore." }, { "start": 860.68, "end": 866, "text": " Namely, if two classes here are in the same subtree, that means that they are going to" }, { "start": 866, "end": 873.88, "text": " be predicted, their predictions are going to be correlated because the path to them" }, { "start": 873.88, "end": 875.8, "text": " is the same partially." }, { "start": 875.8, "end": 880.68, "text": " So how you arrange the classes here is very important." }, { "start": 880.68, "end": 882.9599999999999, "text": " And there has been a lot of work in this." }, { "start": 882.9599999999999, "end": 888.52, "text": " But as I said, this is rather a distraction right here." }, { "start": 888.52, "end": 891.16, "text": " Hierarchical softmax is a way to solve this." }, { "start": 891.16, "end": 895.92, "text": " However, they went with a different way right here." }, { "start": 895.92, "end": 899.12, "text": " They went with this approach called negative sampling." }, { "start": 899.12, "end": 904.5999999999999, "text": " Negative sampling has been, it's been very influential." }, { "start": 904.5999999999999, "end": 910.56, "text": " Not only in word2vec, but negative sampling is one of the cornerstones of the current" }, { "start": 910.56, "end": 915.8, "text": " trend in self supervised learning and contrastive estimation and so on." }, { "start": 915.8, "end": 922.4399999999999, "text": " So this all of this, you know, it pops up in unlikely ways in other fields." }, { "start": 922.4399999999999, "end": 929.9, "text": " And it sort of, I'm not going to say it originated here, but definitely it was introduced into" }, { "start": 929.9, "end": 933.28, "text": " the popular deep learning world right here." }, { "start": 933.28, "end": 939.1199999999999, "text": " So they say an alternative to hierarchical softmax is noise contrastive estimation." }, { "start": 939.12, "end": 946.04, "text": " Okay, so in noise contrastive estimation posits that a good model should be able to differentiate" }, { "start": 946.04, "end": 949.2, "text": " data from noise by means of logistic regression." }, { "start": 949.2, "end": 951.72, "text": " You know, that seems very reasonable." }, { "start": 951.72, "end": 956.16, "text": " This is similar to the hinge loss and so on, yada yada yada." }, { "start": 956.16, "end": 961.32, "text": " While NCE can be shown to approximately maximize the log probability of the softmax, the skip" }, { "start": 961.32, "end": 965.66, "text": " grab model is only concerned with learning high quality vector representations." 
}, { "start": 965.66, "end": 971.1999999999999, "text": " So we are free to simplify noise contrastive estimation as long as the vector representations" }, { "start": 971.1999999999999, "end": 973.24, "text": " retain their quality." }, { "start": 973.24, "end": 976.4, "text": " We define negative sampling by this following objective." }, { "start": 976.4, "end": 977.88, "text": " So this is very interesting." }, { "start": 977.88, "end": 983.64, "text": " They see, okay, noise contrastive estimation, you know, it approximately maximizes the log" }, { "start": 983.64, "end": 984.64, "text": " probability." }, { "start": 984.64, "end": 989.92, "text": " So the noise contrastive estimation would actually be the correct way to approximate" }, { "start": 989.92, "end": 990.92, "text": " their problem." }, { "start": 990.92, "end": 997, "text": " However, they say, well, as long as, you know, as long as something reasonable comes out," }, { "start": 997, "end": 998.92, "text": " we're free to change that up a bit." }, { "start": 998.92, "end": 1004.24, "text": " So they go with this negative sampling approach right here." }, { "start": 1004.24, "end": 1009.26, "text": " And you can see that this is almost the same." }, { "start": 1009.26, "end": 1015.16, "text": " So it's written a bit differently from the original softmax thing because the original" }, { "start": 1015.16, "end": 1018.64, "text": " softmax thing was written as a fraction and here it's as a sum." }, { "start": 1018.64, "end": 1025.52, "text": " But what you're trying to do in the negative sampling framework is you're trying to maximize" }, { "start": 1025.52, "end": 1026.66, "text": " the following." }, { "start": 1026.66, "end": 1032.04, "text": " You're trying to maximize the inner product of the word you're considering and the words" }, { "start": 1032.04, "end": 1033.2, "text": " around them." }, { "start": 1033.2, "end": 1034.2, "text": " Okay." }, { "start": 1034.2, "end": 1038.16, "text": " So you're trying to still predict the words around you." }, { "start": 1038.16, "end": 1045.12, "text": " But now instead of having this prediction softmax over all of the classes, you only" }, { "start": 1045.12, "end": 1049.56, "text": " have the softmax over a subset of classes." }, { "start": 1049.56, "end": 1057.28, "text": " So what you'll do is you sample words from your vocabulary at random and you sample k" }, { "start": 1057.28, "end": 1064.76, "text": " of them and you're simply trying to now minimize the inner product between those words and" }, { "start": 1064.76, "end": 1066.08, "text": " your word." }, { "start": 1066.08, "end": 1067.08, "text": " Okay." }, { "start": 1067.08, "end": 1070.2199999999998, "text": " So what does that ultimately lead to?" }, { "start": 1070.2199999999998, "end": 1073.3, "text": " It ultimately leads to the following." }, { "start": 1073.3, "end": 1077.8799999999999, "text": " You have a word like this word here, negative." }, { "start": 1077.8799999999999, "end": 1082.9199999999998, "text": " And what you're trying to do is you're not trying that much to predict the word sampling." }, { "start": 1082.9199999999998, "end": 1088.6599999999999, "text": " What you're trying to do is you're trying to say that in my space right here, I simply" }, { "start": 1088.6599999999999, "end": 1095.68, "text": " want sampling to be closer than any other words that's not in the context window." }, { "start": 1095.68, "end": 1096.68, "text": " Okay." 
}, { "start": 1096.68, "end": 1101.9199999999998, "text": " So here is my word negative and here is my word sampling." }, { "start": 1101.92, "end": 1104.6000000000001, "text": " And I want these two to be close." }, { "start": 1104.6000000000001, "end": 1109.76, "text": " And if I sample another word, like here, this is the word cake." }, { "start": 1109.76, "end": 1116.96, "text": " If I, sorry, if I sample that, I simply want that to be far away, farther than the word" }, { "start": 1116.96, "end": 1117.96, "text": " sampling." }, { "start": 1117.96, "end": 1118.96, "text": " Okay." }, { "start": 1118.96, "end": 1120.6200000000001, "text": " So this is now a comparative." }, { "start": 1120.6200000000001, "end": 1124.2, "text": " It's not I classify sampling as the highest class." }, { "start": 1124.2, "end": 1132.56, "text": " It's simply I want to classify the word sampling against the other classes higher." }, { "start": 1132.56, "end": 1133.56, "text": " All right." }, { "start": 1133.56, "end": 1136.68, "text": " So, and this is now much, much easier." }, { "start": 1136.68, "end": 1142.0800000000002, "text": " So instead of a thousand or 10,000 or a million way classification, I now maybe have, I have" }, { "start": 1142.0800000000002, "end": 1146, "text": " a K plus one way classification, right?" }, { "start": 1146, "end": 1147, "text": " Pretty easy, right?" }, { "start": 1147, "end": 1149.52, "text": " I simply sample K other words." }, { "start": 1149.52, "end": 1155.72, "text": " And I assume because I have so many words, chances that I actually sample one that's" }, { "start": 1155.72, "end": 1158.8, "text": " in my context window is very small, right?" }, { "start": 1158.8, "end": 1162.68, "text": " So I simply sample other words and I say, well, these other words are random." }, { "start": 1162.68, "end": 1166.8, "text": " They have nothing to do with the current frame that I'm looking at." }, { "start": 1166.8, "end": 1173.2, "text": " So they should be, you know, they can be whatever they want, but at least they should be farther" }, { "start": 1173.2, "end": 1180.0800000000002, "text": " away than the words that are actually in my con in my context." }, { "start": 1180.0800000000002, "end": 1185.8, "text": " And that is negative sampling, the process of sampling negatives, this right here, and" }, { "start": 1185.8, "end": 1191.96, "text": " then making sure that the positives, which are these here, um, in this case, the words" }, { "start": 1191.96, "end": 1198.8, "text": " in the context are classified with a higher probability than the negatives for a given" }, { "start": 1198.8, "end": 1200.04, "text": " input word, right?" }, { "start": 1200.04, "end": 1205.3999999999999, "text": " This here is the input word." }, { "start": 1205.3999999999999, "end": 1206.3999999999999, "text": " That's it." }, { "start": 1206.3999999999999, "end": 1207.3999999999999, "text": " That's negative sampling." 
}, { "start": 1207.3999999999999, "end": 1214.12, "text": " And of course, yeah, as I said, you recognize this from current things like, um, self supervised" }, { "start": 1214.12, "end": 1220.8799999999999, "text": " learning where you want to have the same image augmented twice, go through the pipeline," }, { "start": 1220.8799999999999, "end": 1224.72, "text": " you know, you augment, you put a little bit of different noise and then you have a different" }, { "start": 1224.72, "end": 1231, "text": " image and at the end you say these two should be close together while this other one should" }, { "start": 1231, "end": 1232.94, "text": " be far apart." }, { "start": 1232.94, "end": 1238.52, "text": " It's the exact same thing here, except that you have a different way of obtaining the" }, { "start": 1238.52, "end": 1241.26, "text": " positive and the negative samples." }, { "start": 1241.26, "end": 1245.66, "text": " In this case, positive samples are everything that's in the context." }, { "start": 1245.66, "end": 1252.14, "text": " Negative samples are just randomly sampled from the dataset." }, { "start": 1252.14, "end": 1256.5200000000002, "text": " And that, you know, works, of course that works much, much, much faster." }, { "start": 1256.5200000000002, "end": 1263.8400000000001, "text": " And you can see that this, um, this, uh, turns out to give you vectors that are pretty good" }, { "start": 1263.8400000000001, "end": 1268.44, "text": " and you can train with higher vectors, sorry, with higher dimensional vectors, you can train" }, { "start": 1268.44, "end": 1270.6000000000001, "text": " with bigger vocabularies with this." }, { "start": 1270.6000000000001, "end": 1274.0600000000002, "text": " This has turned out to be very, very influential." }, { "start": 1274.0600000000002, "end": 1280.2800000000002, "text": " As I said, uh, now with the rise of BERT and so on, work to back is kind of getting forgotten," }, { "start": 1280.28, "end": 1285.84, "text": " but, um, this was a revolution and distributed vectors." }, { "start": 1285.84, "end": 1288.12, "text": " So it wasn't a thing really." }, { "start": 1288.12, "end": 1292.92, "text": " It kind of was a thing before that, but it wasn't really a thing that people used." }, { "start": 1292.92, "end": 1297.18, "text": " What people would do is still, they would do N-gram models before that." }, { "start": 1297.18, "end": 1302.6, "text": " So they would kind of dist, dist, they would sort of chunk up their sentences into N-grams" }, { "start": 1302.6, "end": 1308.68, "text": " into overlapping N-grams and then have a big giant, uh, table for their, where they index" }, { "start": 1308.68, "end": 1309.68, "text": " their N-grams." }, { "start": 1309.68, "end": 1316.2, "text": " So the word, I don't know, so the word, um, hello is ID one." }, { "start": 1316.2, "end": 1321.4, "text": " The word hello there is ID two and so on." }, { "start": 1321.4, "end": 1324.14, "text": " So you have a big table for all the N-grams." 
}, { "start": 1324.14, "end": 1328.5600000000002, "text": " And then what we would try to do is you would try to do this kind of bag of words estimation" }, { "start": 1328.5600000000002, "end": 1333.88, "text": " where you would take a, you know, whatever N-grams appeared in your sentence and you" }, { "start": 1333.88, "end": 1340.5200000000002, "text": " would have this big classification where you'd associate the N-grams with each other and" }, { "start": 1340.5200000000002, "end": 1341.5200000000002, "text": " so on." }, { "start": 1341.5200000000002, "end": 1346.2, "text": " So distributed word representations were kind of a revolution at that point, especially" }, { "start": 1346.2, "end": 1351.48, "text": " distributed representation that actually outperformed these old N-gram methods." }, { "start": 1351.48, "end": 1353.18, "text": " Um, yeah." }, { "start": 1353.18, "end": 1358.3200000000002, "text": " So there are a number of tricks right here that are, I think, not understood until this" }, { "start": 1358.3200000000002, "end": 1359.3200000000002, "text": " day." }, { "start": 1359.32, "end": 1364, "text": " For example, the question is how do you sample these negative samples?" }, { "start": 1364, "end": 1372.8799999999999, "text": " Right here, this basically says get K words from your vocabulary at random according to" }, { "start": 1372.8799999999999, "end": 1374.84, "text": " this distribution right here." }, { "start": 1374.84, "end": 1376.84, "text": " Now how are you going to do that?" }, { "start": 1376.84, "end": 1379.6, "text": " Basically you have a spectrum of options." }, { "start": 1379.6, "end": 1384.4399999999998, "text": " The one side of the spectrum is going to be completely uniform." }, { "start": 1384.4399999999998, "end": 1385.4399999999998, "text": " Okay." }, { "start": 1385.4399999999998, "end": 1388.76, "text": " We sample each word with the same probability." }, { "start": 1388.76, "end": 1396.64, "text": " And the other side of the spectrum is something like sample this according to their uni-gram." }, { "start": 1396.64, "end": 1398.96, "text": " These are two different things." }, { "start": 1398.96, "end": 1401.16, "text": " They're opposites in this, in this fashion." }, { "start": 1401.16, "end": 1409, "text": " So here you say, Hey, um, some words appear way, way, way more often than other words." }, { "start": 1409, "end": 1411.36, "text": " Shouldn't we prefer them when we sample?" }, { "start": 1411.36, "end": 1412.36, "text": " Right?" }, { "start": 1412.36, "end": 1419.1599999999999, "text": " So if we have a corpus, um, and shouldn't we sample from the corpus?" }, { "start": 1419.1599999999999, "end": 1423.76, "text": " And if in the corpus, one word appears 50 times more than the other word, then shouldn't" }, { "start": 1423.76, "end": 1428.6, "text": " we sample that 50 times more as a negative because it's, you know, so abundant and it" }, { "start": 1428.6, "end": 1431.6399999999999, "text": " should give a higher classification accuracy." }, { "start": 1431.6399999999999, "end": 1434.3799999999999, "text": " Whereas on the other hand, you could say, no, no, no, we should simply sample every" }, { "start": 1434.3799999999999, "end": 1437.08, "text": " word in our dictionary uniformly." 
}, { "start": 1437.08, "end": 1445.8799999999999, "text": " They came up with something in between, which they say, um, both NCE and negative sampling" }, { "start": 1445.8799999999999, "end": 1448.3799999999999, "text": " have noise distribution as a free parameter." }, { "start": 1448.3799999999999, "end": 1454.12, "text": " We investigated a number of choices and found that the uni-gram distribution raised to the" }, { "start": 1454.12, "end": 1461.1399999999999, "text": " three quarter power, i.e. uni-gram to the three quarter, outperformed significantly" }, { "start": 1461.1399999999999, "end": 1463.96, "text": " the uni-gram and uniform distributions." }, { "start": 1463.96, "end": 1469.32, "text": " For both NCE and negative on every task we tried including language modeling." }, { "start": 1469.32, "end": 1471.92, "text": " This I think is a mystery until today." }, { "start": 1471.92, "end": 1478.72, "text": " And it actually turned out that this exponent right here is magically much better than like" }, { "start": 1478.72, "end": 1481.4, "text": " the exponent of one or even the exponent of one half." }, { "start": 1481.4, "end": 1487.48, "text": " Like you might be reasonably assumed that the square root, you know, might be something," }, { "start": 1487.48, "end": 1492.64, "text": " but the three quarters I think turned out to be very good and very mystical." }, { "start": 1492.64, "end": 1494.68, "text": " So what does it mean?" }, { "start": 1494.68, "end": 1499.16, "text": " It means that you have kind of a balance between words that appear often and words that don't" }, { "start": 1499.16, "end": 1500.44, "text": " appear often." }, { "start": 1500.44, "end": 1504.8200000000002, "text": " Usually in these kind of things, you have a power law where you have very few words" }, { "start": 1504.8200000000002, "end": 1506.46, "text": " that appear very often." }, { "start": 1506.46, "end": 1512.16, "text": " And then you have, okay, that's the tail shouldn't go up, but you have a very long tail of words." }, { "start": 1512.16, "end": 1517.3600000000001, "text": " And what you want to do is in this case, you want to sample these words here more, but" }, { "start": 1517.3600000000001, "end": 1522.0800000000002, "text": " they appear so much more often than if you simply sample according to their uni-gram" }, { "start": 1522.08, "end": 1526.96, "text": " distribution, you'll basically not regard these words right here, you'll forget about" }, { "start": 1526.96, "end": 1532.08, "text": " them and your performance will suffer because they do appear every now and then." }, { "start": 1532.08, "end": 1537.96, "text": " So what you want to do is you want to push that those down a little bit and the optimal" }, { "start": 1537.96, "end": 1544.48, "text": " amount for the little bit turns out to be to raise it the you raise it to the three" }, { "start": 1544.48, "end": 1547.48, "text": " quarters." }, { "start": 1547.48, "end": 1551.9199999999998, "text": " Strange but you know, turned out to work well." }, { "start": 1551.92, "end": 1557.88, "text": " The other thing they do is they do the they do a sub sampling of frequent words." 
}, { "start": 1557.88, "end": 1564.16, "text": " So again, this is a way to kind of push down the often appearing words where they say the" }, { "start": 1564.16, "end": 1569.44, "text": " most frequent words can easily occur hundreds of millions of times like in the or a such" }, { "start": 1569.44, "end": 1573.74, "text": " words usually provide less information value than the rare words." }, { "start": 1573.74, "end": 1577.96, "text": " For example, while the skipgram model benefits from observing the co-occurrences of France" }, { "start": 1577.96, "end": 1582.72, "text": " and Paris, it benefits much less from observing the frequent co-occurrences of France and" }, { "start": 1582.72, "end": 1589.24, "text": " the as nearly every word co-occurring frequently with in a sentence with the." }, { "start": 1589.24, "end": 1595.68, "text": " So they do another trick here to counter this imbalance between rare and frequent words" }, { "start": 1595.68, "end": 1601.04, "text": " use a simple sub sampling approach, each word in the training set is discarded with probability" }, { "start": 1601.04, "end": 1603.44, "text": " computed by that formula." }, { "start": 1603.44, "end": 1610.92, "text": " Right, so therefore formula right here and you might be asking again why why this formula?" }, { "start": 1610.92, "end": 1618.96, "text": " So this is the sampling probability of a word and it goes with one over T. T is a temperature" }, { "start": 1618.96, "end": 1625.2, "text": " parameter and F is the frequency with which the word appears in the corpus." }, { "start": 1625.2, "end": 1632.76, "text": " So as you can see, as the word appears more in the in the corpus, then so this is the" }, { "start": 1632.76, "end": 1638.72, "text": " frequency as the word appears more than this thing goes down than this thing goes up." }, { "start": 1638.72, "end": 1642.48, "text": " So it's discarded with this probability." }, { "start": 1642.48, "end": 1648.92, "text": " So it's discarded with a higher probability if it appears more often." }, { "start": 1648.92, "end": 1653.46, "text": " Where F is frequency of a word, T is a chosen threshold." }, { "start": 1653.46, "end": 1658, "text": " We chose this sub sampling formula because it aggressively sub samples words whose frequency" }, { "start": 1658, "end": 1663.24, "text": " is greater than T while preserving the ranking of the frequencies." }, { "start": 1663.24, "end": 1667, "text": " Although this sub sampling formula was chosen heuristically, we found it to work well in" }, { "start": 1667, "end": 1668.08, "text": " practice." }, { "start": 1668.08, "end": 1673.12, "text": " It accelerates learning and even significantly improves the accuracy of the learned vectors" }, { "start": 1673.12, "end": 1676.8, "text": " of the rare words as will be shown in the following sections." }, { "start": 1676.8, "end": 1682.18, "text": " So again, something sort of arbitrary, it's more understandable than the three quarters," }, { "start": 1682.18, "end": 1684.08, "text": " but still it's sort of arbitrary." }, { "start": 1684.08, "end": 1689.28, "text": " They experimented around, they found this works well and then everybody ended up using" }, { "start": 1689.28, "end": 1690.28, "text": " that." }, { "start": 1690.28, "end": 1693.6799999999998, "text": " So that's how this kind of stuff happens." }, { "start": 1693.6799999999998, "end": 1698.36, "text": " Okay, so now we get into the empirical results." 
}, { "start": 1698.36, "end": 1704.04, "text": " And the empirical results in this case were already sort of given in the previous paper," }, { "start": 1704.04, "end": 1713.9199999999998, "text": " but here they have these the analogical reasoning task where you can see that the negative sampling" }, { "start": 1713.92, "end": 1718.68, "text": " did outperform the others by quite a bit right here." }, { "start": 1718.68, "end": 1724.48, "text": " So the negative sampling approaches outperformed the hierarchical softmax and the noise contrastive" }, { "start": 1724.48, "end": 1726.02, "text": " estimation." }, { "start": 1726.02, "end": 1730.44, "text": " And in the previous paper, they also compared with other baselines and saw that it also" }, { "start": 1730.44, "end": 1738.92, "text": " outperforms those while being quite time efficient." }, { "start": 1738.92, "end": 1747.3600000000001, "text": " So you can see that especially with the sub sampling approaches, the time here is 36 minutes" }, { "start": 1747.3600000000001, "end": 1754.0800000000002, "text": " for and I think they have like a huge corpus that they train on these were to back code" }, { "start": 1754.0800000000002, "end": 1757.3600000000001, "text": " turned out to be really, really efficient code." }, { "start": 1757.3600000000001, "end": 1760.3600000000001, "text": " And that's why it got so popular as well." }, { "start": 1760.3600000000001, "end": 1765.1200000000001, "text": " They did the same thing for phrases right here." }, { "start": 1765.12, "end": 1772.1799999999998, "text": " So for phrases like New York Times and so on, but this was kind of more of a this was" }, { "start": 1772.1799999999998, "end": 1775.52, "text": " more of a side thing." }, { "start": 1775.52, "end": 1782.6, "text": " The phrase vectors turned out to be, you know, rather a side thing from the actual code right" }, { "start": 1782.6, "end": 1785.36, "text": " here." }, { "start": 1785.36, "end": 1792.08, "text": " So yeah, as I said, this paper is very different from other research papers in that it's it's" }, { "start": 1792.08, "end": 1797.1999999999998, "text": " sort of half an engineering paper and all of these papers are they're kind of hard to" }, { "start": 1797.1999999999998, "end": 1805.1799999999998, "text": " read because they just kind of state some things in the order is kind of weird sometimes." }, { "start": 1805.1799999999998, "end": 1808.58, "text": " Why they do things is kind of weird sometimes." }, { "start": 1808.58, "end": 1815.04, "text": " But you can't you know, you can't deny that it had the quite the effect on the community." }, { "start": 1815.04, "end": 1820.8999999999999, "text": " And this it is a very cool paper, very cool series of papers." }, { "start": 1820.9, "end": 1826.48, "text": " And it's very cool that actually, they released the code, and they made the code such that" }, { "start": 1826.48, "end": 1831.3200000000002, "text": " it is super duper efficient, even like on a single machine." }, { "start": 1831.3200000000002, "end": 1836.3600000000001, "text": " And that was very cool, because you know, being Google, they could have just released" }, { "start": 1836.3600000000001, "end": 1842.88, "text": " code that is very efficient on a distributed data center." }, { "start": 1842.88, "end": 1844.8000000000002, "text": " And they didn't do that." }, { "start": 1844.8000000000002, "end": 1849.5600000000002, "text": " So that this is, it's sort of not really like today anymore." 
}, { "start": 1849.56, "end": 1856.44, "text": " Like today, when they release code, it's always you need you need like 50 cloud TPUs to do" }, { "start": 1856.44, "end": 1857.44, "text": " it." }, { "start": 1857.44, "end": 1858.84, "text": " And it's still cool that they release code." }, { "start": 1858.84, "end": 1866.44, "text": " But this was this was really a step into kind of democratizing AI." }, { "start": 1866.44, "end": 1870.6799999999998, "text": " And yeah, so that was my rant about Word2vec." }, { "start": 1870.6799999999998, "end": 1871.96, "text": " I hope you enjoyed this." }, { "start": 1871.96, "end": 1878.1599999999999, "text": " I hope this still was useful to you, even though most of you probably already knew Word2vec." }, { "start": 1878.16, "end": 1880.8000000000002, "text": " And yeah, so I'll see you next time." }, { "start": 1880.8, "end": 1908.8, "text": " Bye bye." } ]
utuz7wBGjKM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[News] OpenAI Model Generates Python Code
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "microsoft", "openai", "msbuild", "build", "code", "gpt2", "language model", "completion", "intellisense", "intellicode", "vscode", "github", "python", "code completion", "smart", "generate", "function body", "docstring", "name", "arguments", "programmer", "stackoverflow", "dataset", "interpolate" ]
This code completion engine can write an entire function from just the name! OpenAI demonstrates what happens when you learn a language model on thousands of GitHub Python repositories. Source Clip: https://youtu.be/fZSFNUT6iY8 Full Video: https://www.pscp.tv/Microsoft/1OyKAYWPRrWKb Kite: https://kite.com/ TabNine: https://www.tabnine.com/ Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. So I saw this and probably many of you have seen this. OpenAI was demonstrating at MSBuild basically a GPT-2 language model but trained not on language but on code, on Python code, open source code from GitHub. And so the idea is that the model learns to produce code. And we'll just have a short look at the clip they have here. I'll link the entire clip down. So this is what the human types. Def is palindrome, so the function name, the argument, and the doc string. And now the model is asked to produce the rest of the function and check out. So it's pretty good, right? This is actually to check whether, this is a basic check, whether a string is a palindrome, as long as you can ensure that s is a string and so on. So the model learned this. You can still say maybe that's just interpolating from, you know, something like this is surely in a GitHub repo somewhere. So they go further and they try to say, okay, please give me a function for the palindromes, so return a list of indices for elements that are palindromes and at least seven characters. And I personally have searched for this function on GitHub and it does not exist. So what does the model produce? Pretty cool. So this is first of all a list comprehension in Python, which is reasonably complicated, right? And you can see there is this length filter that is greater or equal to seven. And it refers actually back to the is palindrome function that it wrote before. That's pretty cool. Now, this is not, as far as I understand, a language model producing this basically letter by letter or word by word. This goes over the constraints of abstract syntax trees. So it is, I think that's what's happening. They don't have a paper to go along, though I will look into more papers of that sort. They do kind of constrain the model to actually produce valid code. But of course, which variables go where and so on, that is completely up to the model. And you see here it understands completely what the user wants. Now, of course, these examples might be cherry picked, right? But even for cherry picked examples, it's still pretty impressive. As I said, I could not find this particular function. So they post two classes here, data classes, item and order. And now the model is asked to compute the total order price, which is a method of the order class. And they stop here. So this is what the human enters. The human enters just the name of the function, not even the doc string. And the model comes up with the following, compute the total price of the order, including the palindrome. So it does all of that by itself, including the doc string, just from the method. Now you can see pretty much what's happening here. It's kind of like the GPT-2 language model. So what it does is probably, from the method name, it derives this doc string, compute total price, right? A lot of programmers do this. Of the order: order is, of course, the name of the class of self. And here it says including the palindrome discount. And that is probably somewhat pattern matched to other functions that have some sort of discount or something like this or one argument that is a number. But the fact that it is also able to see, it adds up the total price per item. And it basically discounts every item. Now it cannot work out that palindrome discount should mean that every item should be a palindrome. That's the only thing it can't work out. It just applies a discount to every single item.
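For concreteness, here is a plausible Python reconstruction of the first two completions described above. The exact code the model emitted is not reproduced in this transcript, so the bodies below are a guess at what such completions look like; only the is palindrome name and the seven-character condition come from the demo, and the name palindrome_indices is hypothetical.

```python
def is_palindrome(s):
    """Check whether a string is a palindrome."""
    return s == s[::-1]

def palindrome_indices(strings):
    """Return a list of indices for elements that are palindromes
    and at least seven characters long."""
    return [i for i, s in enumerate(strings) if is_palindrome(s) and len(s) >= 7]

print(palindrome_indices(["racecar", "rotator", "python"]))  # [0, 1]
```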
Now they go ahead and kind of change that and write the doc strings themselves such that it is clear: apply the discount to items whose names are palindromes. Now the model is again asked to complete this and absolutely crazy. If it's a palindrome, then multiply it with the discount. And if it's not a palindrome, then just add the price. And this final touch here, the one minus the palindrome discount, is added by the programmer. So you can see that this goes towards kind of a symbiosis of human and machine in this way. I don't think the AI will replace programmers, but it certainly is going to be very helpful to automate some of the things or give you suggestions for things that you have to do over and over again. Now I think a lot of these rely on the fact that a lot of programming is redundant still. A lot of people name the function and then in the doc string, they basically repeat the function because they've already intelligently named the function. So technically there doesn't need to be a doc string, but then your style guide comes in and says there needs to be a doc string. Every argument must be described. Every argument must have a description and a help string and a type, even though it is completely obvious from the names what they do. So if it is completely obvious, I would argue you don't need a doc string. And this is kind of additional information that this model is able to actually sort of make use of. So for a lot of these functions, the doc string is sort of already the implementation of the function almost. So the distance there is not large; it's not like you can say whatever you want. And yeah, so you can see here when it's asked to print the receipt, it just works out. So it's printing, it's doing format strings and whatnot. So it's just learned to do that. I would argue, again, this works. You couldn't just put anything; doc string language is a very specific type of language that programmers use where they basically already sort of implement the method in the doc string. And then the body of the method is just the really specific code. But because of that, yeah, there's a lot of information already in the doc string and in the naming of the function. And it's still pretty impressive, right? So yeah, I just wanted to show you that this already is available, even though not in as big of a form, it can't write giant functions for you, it can't write function bodies, but there are some machine learning based completions already available. So kite is one of them. And tab nine is the other one that I use for now. Both are closed source, as I understand it. So that's a bit of a downer there. But these are exactly kind of GPT language models learned on a lot of code. So they can kind of guess what you want and interpolate with your variables in there and so on. I also found this when I searched this comparison here, kite versus tab nine. And you see it starts off, yeah, but I think these are kind of auto generated. So when you look at the video, you get that tab nine is correct. But the kite one, it's an actual video review of Kite. Yeah, so, you know, who knows. But what I wanted to do is basically show you a bit of the power of tab nine. Let me get this out of the way. So a while back, I live coded this session right here where I implemented a sentiment classifier from scratch using hugging face libraries. And I thought we would just play around in here a bit to see what tab nine could do for us.
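Before moving to the live session, here is a sketch of what the data classes and the finished method described at the start of this passage plausibly look like, including the one-minus-discount factor the transcript says the programmer added by hand. The class names, field names, and the discount value are assumptions; the demo's exact definitions aren't in the transcript.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    name: str
    price: float

@dataclass
class Order:
    items: List[Item]
    palindrome_discount: float = 0.1  # assumed default value

    def compute_total_price(self) -> float:
        """Compute the total price of the order, applying the discount
        to items whose names are palindromes."""
        total = 0.0
        for item in self.items:
            if item.name == item.name[::-1]:
                # Palindrome: multiply with (1 - discount), the hand-added touch.
                total += item.price * (1 - self.palindrome_discount)
            else:
                total += item.price
        return total

order = Order([Item("racecar", 10.0), Item("book", 5.0)])
print(order.compute_total_price())  # 14.0
```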
Alright, so I have tab nine in here together with a bunch of other stuff, I have to admit, so I'm not sure how this is going to turn out. So let's say we wanted to compute the loss here. Let's say we wanted to compute square loss, square loss, you see that tab nine immediately kind of turns up. I've not tried this, I'm impressed. So it says it estimates loss here. And no, that's maybe not what I want. So I'll go with square. Now this is a language server suggestion. So and you can see right here, even though I don't have these variables, kind of tab nine will suggest train loss and validation loss. So let me start a new file right here. Just to see what we can get this thing to do without doing anything. So let's say we'll import OS and we'll say if name. So tab nine auto suggests that. And it auto suggests that we should write main here, right. And it knows that a lot of people then call a function called main. So we should probably do that def main, right. Sorry. Okay, let's go with the following, we'll say, we'll try the same thing they did, right. So we'll say we have a data class order, it has, let's say price float and a name string. And order one is an order with the price of five and the name of hello. And we can print, see, tab nine, if you can see that, it's already suggested to print order one dot price. So it can see that we kind of want that. How do I select that? Right here. See, order two equals so we'll get another order right here. Seven. Hello. Let's get it with order two. Let's do that. Orders equals order one, order two. So total price, total price equals zero for order in. Wow, did you see that? Price, total price equals zero for order in. Wow, did you see that? In, I can't get it anymore. In orders, that's what tab nine says. Total price plus equals order dot price. So it is already pretty smart, I would argue. Print. And there you saw that total price was suggested. How can I, I don't know how to select this. But I'll figure it out. I'm not that advanced yet. So you can see this already sort of works. And I think it's pretty cool already. And I'm very excited to see how far people can push this because I think this code generation, kind of inferring what you want, is only at the beginning right now. And it's for sure going to be a very, very interesting thing to come. And yeah, with that, bye bye.
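For reference, here is the snippet dictated during the session above, cleaned up into runnable form. The transcript's dictation is reconstructed, so details such as the dataclass decorator and the keyword arguments are assumptions about what was actually typed:

```python
from dataclasses import dataclass

@dataclass
class Order:
    price: float
    name: str

order1 = Order(price=5, name="hello")
order2 = Order(price=7, name="hello")
orders = [order1, order2]

# The loop TabNine completed almost entirely on its own in the demo.
total_price = 0
for order in orders:
    total_price += order.price

print(total_price)  # 12
```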
[ { "start": 0, "end": 8, "text": " Hi there. So I saw this and probably many of you have seen this. OpenAI was demonstrating at MSBuild" }, { "start": 8, "end": 14.88, "text": " basically a GPT-2 language model but trained not on language but on code, on Python code," }, { "start": 14.88, "end": 21.28, "text": " open source code from GitHub. And so the idea is that the model learns to produce code. And" }, { "start": 21.28, "end": 27.04, "text": " we'll just have a short look at the clip they have here. I'll link the entire clip down. So this is" }, { "start": 27.04, "end": 32.32, "text": " what the human types. Def is palindrome, so the function name, the argument, and the doc string." }, { "start": 32.32, "end": 36.4, "text": " And now the model is asked to produce the rest of the function and check out." }, { "start": 38.64, "end": 43.44, "text": " So it's pretty good, right? This is actually to check whether, this is a basic check, whether a" }, { "start": 43.44, "end": 49.2, "text": " string is a palindrome, as long as you can ensure that s is a string and so on. So the model learned" }, { "start": 49.2, "end": 53.68, "text": " this. You can still say maybe that's just interpolating from, you know, something like" }, { "start": 53.68, "end": 59.44, "text": " this is surely in a GitHub repo somewhere. So they go further and they try to say, okay, please give" }, { "start": 59.44, "end": 67.36, "text": " me a function where the palindromes, so return a list indices for elements that are palindromes and" }, { "start": 67.36, "end": 71.84, "text": " at least seven characters. And I personally have searched for this function on GitHub and it does" }, { "start": 71.84, "end": 78.48, "text": " not exist. So what does the model produce? Pretty cool. So this is first of all a list comprehension" }, { "start": 78.48, "end": 84, "text": " in Python, which is reasonably complicated, right? And you can see there is this length filter" }, { "start": 84, "end": 90.24000000000001, "text": " that is greater or equal to seven. And it refers actually back to the is palindrome function that" }, { "start": 90.24000000000001, "end": 96.08000000000001, "text": " it wrote before. That's pretty cool. Now, this is not like a language model producing, as far as I" }, { "start": 96.08000000000001, "end": 101.28, "text": " understand, producing this basically letter by letter or word by word. This goes over the" }, { "start": 101.28, "end": 107.52000000000001, "text": " constraints of abstract syntax trees. So it is, I think that's what's happening. They don't have a" }, { "start": 107.52, "end": 113.52, "text": " paper to go along, though I will look into more papers of that sort. They do kind of constrain" }, { "start": 113.52, "end": 118, "text": " the model to actually produce valid code. But of course, which variables go where and so on that" }, { "start": 118, "end": 125.36, "text": " that's that is completely up to the model. And you see here it understands completely what the user" }, { "start": 125.36, "end": 129.6, "text": " wants. Now, of course, these examples might be cherry picked, right? But it's even for cherry" }, { "start": 129.6, "end": 135.04, "text": " picked examples, still pretty impressive. As I said, I could not find this particular function." }, { "start": 135.04, "end": 142.16, "text": " So they post two classes here data classes, item and order. 
And now the model is asked to compute" }, { "start": 142.16, "end": 151.04, "text": " the total order price, which is a method of the of the of the order class. And they stop here. So" }, { "start": 151.04, "end": 157.68, "text": " this is what the human enters. The human enters that just the name of the function, not even the" }, { "start": 157.68, "end": 163.04, "text": " doc string. And the model comes up with the following, compute the total price of the order," }, { "start": 163.04, "end": 169.76, "text": " including the palindrome. So it does all of that by itself, including the doc string, just from" }, { "start": 169.76, "end": 173.76, "text": " the method. Now you can see pretty much what's happening here. It's kind of like the GPT-2" }, { "start": 173.76, "end": 178.64, "text": " language model. So what it does is probably it from the method name, it derives this doc string" }, { "start": 178.64, "end": 184.23999999999998, "text": " compute total price, right? That's a lot of programmers do this of the order. Order is," }, { "start": 184.23999999999998, "end": 189.68, "text": " of course, the name of the class of self. And here it says including the palindrome discount." }, { "start": 189.68, "end": 195.44, "text": " And that is probably somewhat pattern match to other functions that have some sort of discount" }, { "start": 195.44, "end": 202.8, "text": " or something like this or one argument that is a number. But the fact that it is also able to see," }, { "start": 202.8, "end": 212.4, "text": " it adds up the total price per item. And it basically discounts every item. Now it cannot" }, { "start": 212.4, "end": 217.44, "text": " work out that palindrome discount should mean that every item should be a palindrome. That's the only" }, { "start": 217.44, "end": 222.96, "text": " thing it can't work out. It just applies a discount to every single item. Now they go ahead and kind" }, { "start": 222.96, "end": 228.32, "text": " of change that and write the doc strings themselves such that it is clear that apply the discount to" }, { "start": 228.32, "end": 235.92, "text": " items whose name are palindromes. Now the model is again asked to complete this and absolutely crazy." }, { "start": 235.92, "end": 240.07999999999998, "text": " If it's a palindrome, then multiply it with the discount. And if it's not a palindrome," }, { "start": 240.07999999999998, "end": 244.96, "text": " then just add the price. And this final touch here, that's one minus the palindrome discount" }, { "start": 244.96, "end": 251.84, "text": " is added by the programmer. So you can see that this goes towards kind of a symbiosis of human and" }, { "start": 251.84, "end": 258.96000000000004, "text": " machine in this way. I don't think the AI will replace programmers, but it certainly is going" }, { "start": 258.96000000000004, "end": 264.8, "text": " to be very helpful to automate some of the things or give you suggestions for things that you have" }, { "start": 264.8, "end": 270.64, "text": " to do over and over again. Now I think a lot of these rely on the fact that a lot of programming" }, { "start": 270.64, "end": 276.56, "text": " is redundant still. A lot of people name the function and then in the doc string, they basically" }, { "start": 276.56, "end": 280.4, "text": " repeat the function because they've already intelligently named the function. 
So technically" }, { "start": 280.4, "end": 284.32, "text": " there doesn't need to be a doc string, but then whatever your style guide comes in, it says there" }, { "start": 284.32, "end": 289.28, "text": " needs to be a doc string. Every argument must be described. Every argument must have a description" }, { "start": 289.28, "end": 296.47999999999996, "text": " and a help string and a type, even though it is completely obvious from the names what they do." }, { "start": 296.48, "end": 302.32, "text": " So if it is completely obvious, I would argue you don't need a doc string. And this is kind" }, { "start": 302.32, "end": 310.64000000000004, "text": " of additional information that this model is able to actually sort of make use of. So the fact that" }, { "start": 310.64000000000004, "end": 314.96000000000004, "text": " a lot of these functions, you can already, the doc string is sort of already the implementation" }, { "start": 314.96000000000004, "end": 320.64000000000004, "text": " of the function almost. So the distance there is not, it's not like you can say whatever you want." }, { "start": 320.64, "end": 328.08, "text": " And yeah, so you can see here when it's asked to print the receipt, it just works out. So it's" }, { "start": 328.08, "end": 332.88, "text": " printing, it's doing format strings and whatnot. So it's just learned to do that. I would argue," }, { "start": 332.88, "end": 337.68, "text": " again, this works. You couldn't just put anything like doc string language is a very specific type" }, { "start": 337.68, "end": 341.91999999999996, "text": " of language that programmers use where they basically already sort of implement the method" }, { "start": 341.91999999999996, "end": 348.71999999999997, "text": " in the doc string. And then the body of the method is just the then really specific code." }, { "start": 348.72, "end": 354.24, "text": " But as of that, yeah, there's a lot of information already in the doc string in the naming of the" }, { "start": 354.24, "end": 362, "text": " function. And it's still pretty impressive, right? So yeah, I just wanted to show you that this" }, { "start": 362, "end": 369.6, "text": " already is available, even though not in as big of a form, it can't write giant functions for you," }, { "start": 369.6, "end": 374.08000000000004, "text": " you can't write function bodies, but there are some machine learning based completions" }, { "start": 374.08, "end": 380.88, "text": " already available. So kite is one of them. And tab nine is the other one that I use for now." }, { "start": 381.68, "end": 386.96, "text": " Both are close sources, I understand it. So that's a bit of a downer there. But these are" }, { "start": 386.96, "end": 392.79999999999995, "text": " exactly kind of GPT language models learned on a lot of code. So they can kind of guess what you" }, { "start": 392.79999999999995, "end": 397.76, "text": " want and interpolate with your variables in there and so on. I also found this when I searched this" }, { "start": 397.76, "end": 404, "text": " comparison here kite versus tab nine. And you see it starts off Yeah, but these are I think these" }, { "start": 404, "end": 408.88, "text": " are kind of auto generated. So when you look at the video, you get tab nine is correct. But that" }, { "start": 408.88, "end": 417.84, "text": " kite, it's an actual video review of a kite. Yeah, so, you know, who knows. 
But what I wanted to do" }, { "start": 417.84, "end": 425.03999999999996, "text": " is basically show you a bit of the power of tab nine. Let me get this out of the way. So" }, { "start": 425.04, "end": 432, "text": " a while back ago, I live coded this session right here where I where I implemented a sentiment" }, { "start": 432, "end": 437.28000000000003, "text": " classifier from scratch using hugging face libraries. And I thought we would just play around in here a" }, { "start": 437.28000000000003, "end": 443.76, "text": " bit to see what the tab nine could do for us. Alright, so I have tab nine in here together with" }, { "start": 443.76, "end": 449.20000000000005, "text": " a bunch of other stuff, I have to admit, so I'm not sure how this is going to turn out. So let's say" }, { "start": 449.2, "end": 461.44, "text": " we wanted to compute the loss here. Let's say we wanted to compute square loss, square loss, you see" }, { "start": 461.44, "end": 470.08, "text": " that tab nine immediately kind of turns up. I've not tried this, I'm impressed. So it says it" }, { "start": 470.08, "end": 479.76, "text": " estimates loss here. And no, that's maybe not what I want. So I'll go with square. Now this is a" }, { "start": 479.76, "end": 486.96, "text": " language server suggestion. So and you can see right here, even though I don't have these" }, { "start": 486.96, "end": 492.47999999999996, "text": " variables, kind of tab nine will suggest train loss and validation loss. So let me start a new" }, { "start": 492.48, "end": 502.56, "text": " file right here. Just to see what we can get this thing to do without doing anything. So and so" }, { "start": 502.56, "end": 513.9200000000001, "text": " let's say we'll import OS and we'll say if name. So tab nine auto suggests that. And it auto" }, { "start": 513.92, "end": 521.36, "text": " suggests that we should write main here, right. And it knows that a lot of people then call a" }, { "start": 521.36, "end": 529.5999999999999, "text": " function called main. So we should probably do that def main, right. Sorry. Okay, let's go with" }, { "start": 529.5999999999999, "end": 536.88, "text": " the following, we'll say, we'll try the same thing they did, right. So we'll say we have a data class" }, { "start": 536.88, "end": 548.4, "text": " order, it has, let's say price float and a name string. And order one is an order with the price" }, { "start": 548.4, "end": 558.48, "text": " of five and the name of hello. And we can print CC tab nine, tab nine, if you can see that," }, { "start": 558.48, "end": 564.8000000000001, "text": " it's closely suggested to print order dot price order one dot price. So it can it can see that" }, { "start": 564.8000000000001, "end": 577.12, "text": " we kind of want that. How do I select that? Right here. See, order two equals so we'll get another" }, { "start": 577.12, "end": 587.28, "text": " order right here. Seven. Hello. Let's get it with order two. Let's do that. Orders equals order one," }, { "start": 587.28, "end": 600.64, "text": " order two. So total price, total price equals zero for order in. Wow, did you see that?" }, { "start": 600.64, "end": 607.28, "text": " Price, total price equals zero for order in. Wow, did you see that?" }, { "start": 608.56, "end": 617.84, "text": " In, I can't get it anymore. In orders, that's what tab nine says. Total price plus equals" }, { "start": 618.96, "end": 624.3199999999999, "text": " order dot price. So it is already pretty smart, I would argue." 
}, { "start": 624.32, "end": 631.6800000000001, "text": " Print. And there you saw that total price was suggested. How can I, I don't know how to select" }, { "start": 631.6800000000001, "end": 638.4000000000001, "text": " this. But I'll figure it out. I'm not that advanced yet. So you can see this already sort of works." }, { "start": 638.4000000000001, "end": 644, "text": " And I think it's pretty cool already. And I'm very excited to see what kind of how far people" }, { "start": 644, "end": 648.4000000000001, "text": " can push this because I think this code generation kind of inferring what you want is only at the" }, { "start": 648.4000000000001, "end": 654.08, "text": " beginning right now. And it's for sure going to be a very, very, very, very, very, very, very" }, { "start": 654.08, "end": 664.1600000000001, "text": " interesting, very, very, very interesting thing to come more. And yeah, with that, bye bye." } ]
p3sAF3gVMMA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Deep Learning for Symbolic Mathematics
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "attention mechanism", "attention", "transformer", "rnn", "recurrent", "seq2seq", "facebook", "fair", "research", "math", "integral", "ode" ]
This model solves integrals and ODEs by doing seq2seq! https://arxiv.org/abs/1912.01412 https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/ Abstract: Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica. Authors: Guillaume Lample, François Charton Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Can you solve this? Neither can I. But Wolfram Alpha can. So this is the thing that I probably have most to thank for passing university, especially the math classes. If you don't know Wolfram Alpha, it is an engine from the creators of Mathematica, but it is online. It can do symbolic math. So it can integrate this expression, for example, and you'll get the solution here. And if you have the pro version, it can even give you a step-by-step solution of how to get there. So this part of math is an entirely different part than what we usually do with computers. Usually we do numeric math, working with actual values. But here it's about symbolic math. It's about manipulating expressions, in this case integrating them. So here is a paper by Facebook AI Research called Deep Learning for Symbolic Mathematics by Guillaume Lample and François Charton. These people have basically tackled the task of doing mathematical reasoning, solving these mathematical problems in a symbolic way using neural networks. So they start out by saying here: neural networks have a reputation for being better at solving statistical or approximate problems — that's what I meant by numeric — than at performing calculations or working with symbolic data. And in this case, they go about this differently than other people have. So let's look at how they did it. We can express symbolic mathematics in these kinds of trees. So an expression like the one up here would be expressed as this tree. So you would have a plus — this 2 plus 3 — sorry, of course there's an implicit bracket here. So you'd have this plus right here, the 2 here and the entire right-hand side here. So you can basically decompose it into trees like this or this or this. Here you can also have the differentiation operator as a symbol in there, just like any other operator. Moreover, you can basically decompose everything they have here into binary and unary nodes in a tree. What that means is: either, like a plus sign, it has two components, a left and a right-hand side that should be added together; or, like the cosine, it has one argument, namely the thing that it should take the cosine of. So a lot of people have tried going about this problem by working with these trees and basically training neural networks to... So first they use kind of a parser to decompose such a thing into a tree like this, and then use neural networks, let's say tree-recursive neural networks or so on, to kind of make sense of the tree and solve it in a recursive manner or something like this. But that has its limitations. So what these people from Facebook AI did is they viewed it as a natural language expression problem. So they say, no, no, let's actually go with trees as sequences. So you can see that this mathematical expression, for example, is already a sequence. It's simply a sequence of tokens. But there are many different ways of expressing this. So you can say 2 plus 3 times the parentheses, you can say 3 times parentheses plus 2. You can turn many things around, and these parentheses always make it harder, and so on. So what they do is they say, OK, let's actually go from this thing to a tree. So let's go to the tree representation, because the tree representation can be normalized. And then let's put that again into a sequence representation such as this one. And this is called Polish notation — prefix notation. And it has multiple advantages over the original expression.
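As an aside, the whole pipeline just described — an expression tree with unary/binary nodes, its prefix serialization, and the stack evaluation that the next paragraph walks through by hand — fits in a few lines of Python. This is a toy sketch under my own assumptions (node layout, token names), not the paper's code:

```python
# Toy expression trees serialized to prefix notation and evaluated
# with a stack. Illustrative only; the paper's actual tokenization
# and model code are not shown here.

class Node:
    def __init__(self, label, *children):
        self.label = label        # operator name or a literal such as "2"
        self.children = children  # 0 = leaf, 1 = unary op, 2 = binary op

def to_prefix(node):
    # Operator first, then its arguments: the prefix convention.
    tokens = [node.label]
    for child in node.children:
        tokens += to_prefix(child)
    return tokens

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def eval_prefix(tokens):
    # Scan right to left; an operator consumes the two most recent operands.
    stack = []
    for tok in reversed(tokens):
        if tok in OPS:
            stack.append(OPS[tok](stack.pop(), stack.pop()))
        else:
            stack.append(float(tok))
    return stack.pop()

# 2 + 3 * (2 + 5): serializes to "+ 2 * 3 + 2 5" and evaluates to 23.
expr = Node("+", Node("2"),
            Node("*", Node("3"), Node("+", Node("2"), Node("5"))))
tokens = to_prefix(expr)
print(" ".join(tokens), "=", eval_prefix(tokens))
```

Running this prints `+ 2 * 3 + 2 5 = 23.0`, matching the hand calculation below.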
So let's keep that on the right-hand side here. This is the same thing, except it's what is called a prefix notation, whereas the thing on the right here is called an infix notation. Infix because the operators, such as the plus, are always between their arguments — their left-hand argument and their right-hand argument. In prefix notation, the operator is always in front of its arguments. So this operator here has this as a first argument and this as a second argument. Now, the cool thing is: if you express a tree like this, you can simply go and use a stack machine to solve it. So you can basically go from the right here, and you see, you select two and five plus. And let's do it by hand, actually, this is fun. So we have plus two times three. If you're a boomer like me, you remember having to use calculators like this that couldn't use infix notation. So you go from the right, right? You say two, five, plus. Cool, that's seven. So scratch that, put seven here, right? So your new expression is plus two, times three, seven — right? Then you go again from the right and you go seven, three, times. OK, that's twenty-one. Cool. Twenty-one, scratch this, now it's twenty-one. Two plus twenty-one is twenty-three. I'm fairly sure that's the solution. Well, correct me if I'm wrong. But this is how you would go about solving something like this. So it is the same expression as the original one, but it doesn't use any parentheses. And it is derived from the tree, basically. So you can normalize it much more in order to find unique expressions. So what this system does is it transforms any expression into a prefix notation such as this one. Oops. And then it uses a sequence-to-sequence model in order to derive the solution. Now, just how crazy is this? Right. So we go from this thing here, right? From this thing. And the solution is twenty-three, right? And the neural network is simply trained to do sequence-to-sequence, from this to that. That means it basically parses this at a token level, right, and then it outputs these tokens. So during training, you simply give it the input here and you give it the output, and it's supposed to learn how to transform one into the other — without you giving it any sort of mathematical ability, right? Without you telling it what a plus sign means, without you telling it this algorithm that I just told you. Now, this by itself is already pretty astounding, that you would try such a thing. It really transforms the string. So this is not the mathematical equation, but the string of this into the string of that. Now, they don't do it on numbers. Like, I don't think that would work as well if you were to make it calculate numerical things like this. As we said, this is symbolic. So what it can do is it can, for example, integrate. So if you have an expression like — let's see, some on the bottom here — so if you had an expression such as a polynomial, here, an expression like this, right, you would like to find its integral. That is a problem — that's one of the problems we had at the beginning, right, this integral right here. You can write this in a string like we said, and then derive its solution right here, and have the neural network learn to map one to the other, right, to map this to that. So the way it goes is: it would map this into its tree representation, it would map this into its prefix notation, right? It would also map this to —
let's take another color here — this into its tree. Then it will map this into its prefix notation. And then that's the training data. The training data is: take this, derive that, right? And at inference time, of course, you won't have this here. You'll simply be asked to output a sequence, as in normal natural language generation. You can think of machine translation: this thing translates problems into solutions. It's crazy. I mean, it's not technically super challenging, but it's crazy that it works, or that it could work, right? So we'll see how this actually works. They use a transformer model, which is a classic model. If you don't know what a transformer is, I have a video called Attention is All You Need about transformers. You can basically use them to do these kinds of tasks, to map one string into another string. So, yeah, they go into detail here on how they construct the data set and how big the problem space is and so on. Ultimately, they compare their system to Mathematica, I think, and Maple and Matlab, which do the same thing. So Mathematica, which is the kind of desktop version of Wolfram Alpha that I've shown you — you have it here. So integration is the task of integrating, let's say, these symbolic expressions. ODE order one and order two are slightly different tasks, where you're asked to solve an ordinary differential equation, which is also a task in symbolic mathematics. If you compare it to Mathematica here — and they give Mathematica a limit of 30 seconds — what Mathematica will do is it will kind of search over the manipulations that it knows. So the advantage of this is it can always give you, let's say, a step-by-step solution if it finds a solution, right? It will just start, and it will do a tree search, manipulating the expression you give it until it reaches a satisfactory solution. But then once it has that, it can give you a path through the tree which leads to the solution, which will give you a step-by-step solution, so you can understand it. The system that Facebook designs here doesn't do that. It simply takes the input tokens — this is the input — and it just gives you an output that is learned. So the network per se doesn't understand math; it simply learns from many, many examples how to transform inputs in order to come up with good hypotheses. So if you compare here: Mathematica, for example, can integrate 84 percent of the things that they put into it. It's not said whether it gets it wrong or simply times out in the rest. I would say it times out, because Mathematica probably never gets it wrong, since it's an actual symbolic manipulator with defined rules. So I guess in the remaining 16 percent it simply times out and doesn't find a solution. Whereas this Facebook system — and they say it usually finds the solution in less than a second — finds these solutions 98.4 percent of the time with a beam size of one. Now, what does the beam size mean? It refers to how the output sequence is generated. So if you have a sequence of input, you can always choose to do a beam search. Let's actually give an example: a cat jumps. The task is simply to continue the sentence, right, so you can generate an output sequence. The output sequence could be 'over the dog'. This would be called beam size one, or no beam search at all.
You can do what's called a beam search, in that at each step you actually generate multiple hypotheses and then keep the best ones in memory. So with a beam size of 10, you would always consider the 10 most probable solutions, and you would kind of evaluate all 10 and then always keep the 10 best. Let's see how this goes. Let's do a beam size of three in our case. So 'a cat jumps', and then you could come up with three different ways this sentence could continue: 'a cat jumps over', 'a cat jumps between', and 'a cat jumps swiftly', right? So these are your three hypotheses. Then we go to the next step. We have to evaluate each of them. So 'a cat jumps over the', 'over a', 'over me'; 'a cat jumps between the', 'between two', 'a cat jumps between many'; 'a cat jumps swiftly' end of sentence, 'a cat jumps swiftly over', 'a cat jumps swiftly and' — right, these are all valid. So of these nine, you would now select again the three that overall have the highest likelihood. Maybe that's the following: 'a cat jumps over the', 'a cat jumps over a', and 'a cat jumps between two'. These three, right? So you just keep these three, and then in the next step, again from these three, you would generate three hypotheses each, and so on. So this is what's called a beam search. And if you give it a beam size of 10 or 50, this system tends to improve even more. The way this system works is quite different from Mathematica, in that Mathematica, as I said, is a symbolic solver that never makes mistakes but can fail to give you a solution. This system simply generates an output sequence that is not guaranteed to actually be a solution to the problem. It's just a hypothesis. But then you can quickly check whether the hypothesis is correct. That is the nature of these math problems: with integration, you can simply differentiate, and with ODEs, you can simply plug them in to see if they solve the equation. It's kind of like your classic, let's say, NP-hard problems, or like SAT solving, where you can quickly check whether something is a solution. So if you have a system that generates 50 hypotheses, you can quickly check which one is actually correct. So these numbers here mean that one of the 50 hypotheses that the system came up with was a correct solution. And if you allow for this many hypotheses, you can see it goes up quite a bit. For example, the ODE solving is almost the same, and here it's even worse if you take ODEs of order 2 — it's even worse than Mathematica. But if you allow for larger beam sizes, you see it dramatically goes up. And so it's a different approach. I wouldn't be surprised if Mathematica would actually implement something like this very soon, or just buy this off of Facebook or something, or Facebook buys Mathematica, whichever way. This clearly is a different approach, and it appears to work better. But there is a caveat. So here's the caveat that I see with this kind of thing. These evaluations are done on data sets, of course. And this paper goes into big detail on how to generate these data sets. So they have to pay attention to many things — for example, many solutions are equivalent. For example, here, you know that this solution and this solution to this differential equation are the same. So they have to use a symbolic framework to check whether the solutions are the same, and so on. It is very good work, but they do evaluate on expressions that fit into their data set.
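Before going into that data set caveat, here is a toy Python sketch of the beam search procedure described above. The next-token scores are invented for illustration — they stand in for the trained decoder, and are not the paper's model; only the keep-the-K-best mechanics matter:

```python
# A hedged toy illustration of beam search: expand every kept hypothesis,
# then keep only the `beam_size` most probable candidates overall.
import math

def beam_search(score_next, start, beam_size=3, steps=2):
    # Each beam entry is (log-probability, token sequence).
    beams = [(0.0, [start])]
    for _ in range(steps):
        candidates = []
        for logp, seq in beams:
            # Expand every kept hypothesis with every candidate next token.
            for token, p in score_next(seq):
                candidates.append((logp + math.log(p), seq + [token]))
        # Keep only the `beam_size` most probable hypotheses overall.
        beams = sorted(candidates, reverse=True)[:beam_size]
    return beams

# A made-up next-token distribution, standing in for the trained decoder.
def score_next(seq):
    return [("over", 0.5), ("between", 0.3), ("swiftly", 0.2)]

for logp, seq in beam_search(score_next, "a cat jumps"):
    print(round(logp, 3), " ".join(seq))
```

With beam size one this degenerates to greedy decoding, which is the "no beam search at all" case mentioned above.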
So here in their data set, they say: OK, we evaluate, you know, expressions with up to 15 internal nodes, 11 leaf values, these four binary operators, and these 15 unary operators. So the expressions that they train on fall into this data set, right? Also, just numbers from negative five to five. So it is kind of to be expected that a system that is trained on these things would perform very well on these things, as opposed to Mathematica, which is, you know, a general-purpose tool. Moreover, if you look at — sorry, I think this is further down — for example, for the integration task, they have three different ways of generating data. They have the forward way, where they simply use a symbolic integrator to generate expressions. They have the backward way, where they start from the integral and then differentiate it in order to obtain a training pair. And they have an integration-by-parts method. These are three different methods to come up with problems for this system to be trained on, and they have very different properties, to the effect that if you train with just one, it won't work well on the others. So if you train with the forward method, it will work very well on data that has been generated with the forward method. So this is down here: this is what it's trained on, and this is what it's evaluated on, right? You can see the diagonal is very, very strong. But if you train with the backward method and you evaluate on data generated with the forward method, it is actually very poor. That's because in one case, generally, the solutions are longer than the input; in the other case, the solutions are shorter. So not only does this system only work on the particular task here, it is actually very attuned to the way that this data was generated, right? So in fact, I would postulate that this training data is probably only a very small subset of all of the things that we would like to integrate. And again, the problem is made kind of worse because their evaluation set also comes from their distribution. So what they've ultimately shown is that they can do this on a very skewed, probably very biased subset of this mathematical problem, and on that biased subset, they can outperform something like Mathematica, right? They kind of defeat themselves there. Yeah — if you look here, even with the different integration data-generating methods, if you only train on one of them, it doesn't generalize. If you only train on forward data and then evaluate on backward-generated data, it doesn't work. So even the integrator can't really generalize, so they have to kind of combine the different methods. And even now, we can probably easily find examples that this integrator can't solve. So, I mean, there are a lot of cool things here, and they show a number of properties that the model learns on its own, without them telling it to. And it's cool that it works anyway. As I said, this model has no programmed-in notion of how math works. But it also kind of shows the problems if you do this via a training data set: if your training data set is very skewed, and your evaluation set follows the same generation process, the claims you can make at the end are limited. And to be fair, I don't know what claims they made in the press generally. So yeah, I think this is pretty cool work. Check it out. And that was it. Thanks.
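As a footnote to the backward data-generation method described above, here is a rough sketch of the idea in SymPy. SymPy is my tooling choice for illustration — the paper's own generator is not shown: sample a solution, differentiate it, and the (derivative, solution) pair becomes a training example.

```python
# A rough sketch of "backward" data generation: differentiate a sampled
# solution f; the model is then trained to map f' back to f, i.e. to
# "integrate" f' without symbolic rules. Assumptions are mine.
import sympy as sp

x = sp.symbols("x")

def backward_pair(f):
    # (input the model must integrate, target the model should produce)
    return sp.diff(f, x), f

problem, solution = backward_pair(sp.sin(x) * sp.exp(x))
print("input (to integrate):", problem)  # exp(x)*sin(x) + exp(x)*cos(x)
print("target:", solution)               # exp(x)*sin(x)
```

Note how the input is longer than the target here — the length asymmetry between the forward and backward distributions mentioned above.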
[ { "start": 0, "end": 16, "text": " Hi there! Can you solve this? Neither can I. But Wolfram Alpha can. So this is the thing that probably I have most to thank for for passing university, especially the math classes in it." }, { "start": 16, "end": 33, "text": " If you don't know Wolfram Alpha, it is an engine from the creators of Mathematica, but it is online. It can do symbolic math. So it can integrate this expression, for example, and you'll get the solution here." }, { "start": 33, "end": 48, "text": " And if you have the pro version, it can even give you a step-by-step solution of how to get there. So this part of math is an entirely different part than we usually do with computers." }, { "start": 48, "end": 60, "text": " Usually we do numeric math, working with actual values. But here it's about symbolic math. It's about manipulating expression, in this case integrating them." }, { "start": 60, "end": 75, "text": " So here is a paper by Facebook AI Research called Deep Learning for Symbolic Mathematics by Guillaume Lampe and François Gertot." }, { "start": 75, "end": 86, "text": " These people have basically tackled the task of doing these mathematical reasoning, solving these mathematical problems in a symbolic way using neural networks." }, { "start": 86, "end": 95, "text": " So they start out by saying here, neural networks have a reputation for being better at solving statistical or proximate problems." }, { "start": 95, "end": 101, "text": " That's what I meant by numeric, then at performing calculations or working with symbolic data." }, { "start": 101, "end": 111, "text": " And in this case, they go about this other than other people have. So let's look at how they did it." }, { "start": 111, "end": 124, "text": " We can express symbolic mathematics in these kind of trees. So an expression like these up here would be expressed into this tree." }, { "start": 124, "end": 132, "text": " So you would have a plus, this 2 plus 3. Sorry, of course there's an implicit bracket here." }, { "start": 132, "end": 138, "text": " So you'd have this plus right here, the 2 here and the entire right hand side here." }, { "start": 138, "end": 146, "text": " So you can basically decompose it into trees like this or this or this." }, { "start": 146, "end": 156, "text": " Here you also can have the differentiation operator as a symbol in there, just like any other operator." }, { "start": 156, "end": 165, "text": " Moreover, you can basically decompose everything into everything they have here, into binary and unary nodes in a tree." }, { "start": 165, "end": 173, "text": " What that means is either like a plus sign, it has two components, so a left and a right hand side that should be added together." }, { "start": 173, "end": 181, "text": " Or like the cosine, it has one argument, namely the thing that it should take the cosine of." }, { "start": 181, "end": 191, "text": " So a lot of people have tried going about this problem by working with these trees and basically training neural networks to..." }, { "start": 191, "end": 196, "text": " So first they use kind of a parser to decompose such a thing into a tree like this." }, { "start": 196, "end": 208, "text": " And then use neural networks, let's say tree recursive neural networks or so on, to kind of make sense of the tree and solve it in a recursive manner or something like this." }, { "start": 208, "end": 212, "text": " But that has its limitations." 
}, { "start": 212, "end": 218, "text": " So what these people from Facebook AI did is they viewed it as a natural language expression problem." }, { "start": 218, "end": 226, "text": " So they say, no, no, let's actually go with trees as sequences." }, { "start": 226, "end": 233, "text": " So you can see that this mathematical expression, for example, is already a sequence." }, { "start": 233, "end": 237, "text": " It's simply a sequence of tokens." }, { "start": 237, "end": 242, "text": " But there are many different ways of expressing this." }, { "start": 242, "end": 247, "text": " So you can say 2 plus 3 times the parentheses, you can say 3 times parentheses plus 2." }, { "start": 247, "end": 254, "text": " You can turn many things around and there's always these parentheses make it harder and so on." }, { "start": 254, "end": 259, "text": " So what they do is they say, OK, let's actually go from this thing to a tree." }, { "start": 259, "end": 272, "text": " So let's go to the tree representation and then let's take the tree representation because the tree representation can be normalized." }, { "start": 272, "end": 278, "text": " And then let's put that again into a sequence representation such as this one." }, { "start": 278, "end": 281, "text": " And this is called reverse polish notation." }, { "start": 281, "end": 286, "text": " And it has multiple advantages over the old expression." }, { "start": 286, "end": 291, "text": " So let's keep that on the right hand side here." }, { "start": 291, "end": 300, "text": " This is the same thing, except it's what is called a prefix notation, whereas the thing on the right here is called an infix notation." }, { "start": 300, "end": 306, "text": " Infix because the operators such as the plus is always between its arguments." }, { "start": 306, "end": 310, "text": " So it's left hand argument and it's right hand argument." }, { "start": 310, "end": 316, "text": " In prefix notation, the operator is always in front of its arguments." }, { "start": 316, "end": 321, "text": " So this operator here is has a first argument." }, { "start": 321, "end": 324, "text": " This end as a second argument." }, { "start": 324, "end": 329, "text": " This right now, the cool thing is if you express a tree like this," }, { "start": 329, "end": 334, "text": " you can simply go and use a stack machine to solve it." }, { "start": 334, "end": 337, "text": " So you can basically go." }, { "start": 337, "end": 343, "text": " I would say you can go from the from the right here and you see you select two and five plus." }, { "start": 343, "end": 346, "text": " And let's do it by hand." }, { "start": 346, "end": 352, "text": " Actually, this is fun. So we have plus two times three." }, { "start": 352, "end": 359, "text": " If you're a boomer like me, you remember you have to use calculators like this and couldn't use the infix notation." }, { "start": 359, "end": 361, "text": " So you go from the right, right?" }, { "start": 361, "end": 364, "text": " You say two, five plus. Cool." }, { "start": 364, "end": 366, "text": " That's seven. So scratch that." }, { "start": 366, "end": 368, "text": " Put seven here, right?" }, { "start": 368, "end": 373, "text": " So your new stack is three, two times." }, { "start": 373, "end": 374, "text": " This right." }, { "start": 374, "end": 380, "text": " Then you go again from the right and you go seven, three times." }, { "start": 380, "end": 382, "text": " OK, that's twenty one. Cool." 
}, { "start": 382, "end": 384, "text": " Twenty one. Scratch this." }, { "start": 384, "end": 389, "text": " Now it's twenty one. Two plus twenty one is twenty three." }, { "start": 389, "end": 392, "text": " I'm fairly sure that's the solution." }, { "start": 392, "end": 394, "text": " Well, correct me if I'm wrong." }, { "start": 394, "end": 397, "text": " But this is how you would would go about solving like this." }, { "start": 397, "end": 404, "text": " So it is the same expression as the original one, but it doesn't use any parentheses." }, { "start": 404, "end": 411, "text": " And it is it is derived from the from the tree, basically." }, { "start": 411, "end": 420, "text": " So it is you can you can normalize it much more in order to find unique expressions." }, { "start": 420, "end": 430, "text": " So what this system does is it it transforms any expression into a prefix notation such as this one." }, { "start": 430, "end": 435, "text": " Oops. And then it uses a sequence to sequence model." }, { "start": 435, "end": 437, "text": " In order to derive the solution." }, { "start": 437, "end": 441, "text": " Now, just how crazy is this? Right." }, { "start": 441, "end": 446, "text": " So we come we go from this thing here, right?" }, { "start": 446, "end": 450, "text": " From this thing. And the solution is twenty one." }, { "start": 450, "end": 459, "text": " Right. And the neural network is simply trained to do sequence to sequence from this to that sequence to sequence." }, { "start": 459, "end": 467, "text": " That means it basically parses this as a token level. Right." }, { "start": 467, "end": 471, "text": " And then it outputs these tokens without." }, { "start": 471, "end": 480, "text": " So during training, you simply give it the you give it the input here and you give it the output." }, { "start": 480, "end": 488, "text": " And it's supposed to learn how to transform one into the other without you giving it any sort of" }, { "start": 488, "end": 490, "text": " mathematical ability. Right." }, { "start": 490, "end": 496, "text": " Without you telling it what does a plus sign mean without you telling it this algorithm that I just told you." }, { "start": 496, "end": 503, "text": " Now, this by itself is already pretty astounding that you would try such a thing." }, { "start": 503, "end": 506, "text": " It really transforms the string." }, { "start": 506, "end": 512, "text": " So this is not the mathematical equation, but the string of this into the string of that." }, { "start": 512, "end": 515, "text": " Now, they don't do it on numbers." }, { "start": 515, "end": 524, "text": " Like, I don't think that would work as well if you were to to make it kind of calculate numerical things like this." }, { "start": 524, "end": 526, "text": " As we said, this is symbolic." }, { "start": 526, "end": 530, "text": " So what it can do is it can, for example, integrate." }, { "start": 530, "end": 539, "text": " So if you have an expression like." }, { "start": 539, "end": 541, "text": " Let's see some on the bottom here." }, { "start": 541, "end": 548, "text": " So if you had an expression such as a polynomial." }, { "start": 548, "end": 552, "text": " Here, an expression like this." }, { "start": 552, "end": 556, "text": " Right. You would like to find its integral." }, { "start": 556, "end": 559, "text": " That is a problem. That's one of the problems we had at the beginning." }, { "start": 559, "end": 561, "text": " Right. This integral right here." 
}, { "start": 561, "end": 567, "text": " You can write this in a string like we said." }, { "start": 567, "end": 575, "text": " And then derive its solution right here." }, { "start": 575, "end": 582, "text": " And have the neural network learn to map one to the other, right, to map this to that." }, { "start": 582, "end": 591, "text": " So the way it goes is it would map this into map this into its tree representation." }, { "start": 591, "end": 598, "text": " It would map this into its prefix notation." }, { "start": 598, "end": 602, "text": " Right. It would also map this to." }, { "start": 602, "end": 604, "text": " Let's take another color here." }, { "start": 604, "end": 608, "text": " This into its tree." }, { "start": 608, "end": 612, "text": " Then it will map this into its prefix notation." }, { "start": 612, "end": 614, "text": " And then that's the training data." }, { "start": 614, "end": 619, "text": " The training data is take this, derive that." }, { "start": 619, "end": 624, "text": " Right. And at inference time, of course, you won't have this here." }, { "start": 624, "end": 630, "text": " You'll simply be asked to output a sequence as a normal natural language." }, { "start": 630, "end": 632, "text": " Like you can think of machine translation." }, { "start": 632, "end": 638, "text": " This thing translates problems into solutions." }, { "start": 638, "end": 640, "text": " It's crazy." }, { "start": 640, "end": 646, "text": " I mean, it's not it's not technically super challenging, but it's crazy that it works or that it could work." }, { "start": 646, "end": 647, "text": " Right." }, { "start": 647, "end": 650, "text": " So we'll see how this actually works." }, { "start": 650, "end": 654, "text": " They use a transformer model, which is just which is a classic model." }, { "start": 654, "end": 661, "text": " If you don't know what a transformer is, I have a video called Attention is All You Need about transformers." }, { "start": 661, "end": 667, "text": " You can basically use them to do these kinds of tasks, to map one string into another string." }, { "start": 667, "end": 671, "text": " So." }, { "start": 671, "end": 683, "text": " Yeah, so they go into detail here of how they construct the data set and how big the problem space is and so on." }, { "start": 683, "end": 688, "text": " Ultimately, they compare their system to." }, { "start": 688, "end": 696, "text": " Mathematica, I think, and Maple and MathLab, which do the same thing." }, { "start": 696, "end": 704, "text": " So Mathematica, which is the kind of desktop version of Wolfram Alpha that I've shown you, you have it here." }, { "start": 704, "end": 707, "text": " So integration." }, { "start": 707, "end": 712, "text": " Is the task of integrating, let's say, these these symbolic expressions." }, { "start": 712, "end": 725, "text": " ODE order one and order two are slightly different tasks where you're asked to solve an ordinary differential equation, which is also a task in symbolic mathematics." }, { "start": 725, "end": 737, "text": " If you compare it to Mathematica here and they give it Mathematica a limit of 30 seconds, what Mathematica will do is it will kind of search the manipulations that it knows." }, { "start": 737, "end": 745, "text": " So the advantage of this is it can always give you, let's say, a step by step solution if it finds a solution." }, { "start": 745, "end": 755, "text": " Right. 
It will just start and it will do a tree search, manipulating the expression you give in until it reaches a satisfactory solution." }, { "start": 755, "end": 764, "text": " But then once it has that, it can give you a path through the tree, which leads to the solution, which will give you a step by step solution." }, { "start": 764, "end": 766, "text": " So you can understand it." }, { "start": 766, "end": 768, "text": " The system that Facebook designs here doesn't do that." }, { "start": 768, "end": 770, "text": " It simply takes right." }, { "start": 770, "end": 780, "text": " It simply takes the input tokens like this is the input and it just gives you an output that is learned so that the network per se doesn't understand math." }, { "start": 780, "end": 790, "text": " It simply learns from many, many examples that to transform to to come up with good hypotheses." }, { "start": 790, "end": 800, "text": " So if you compare here, Mathematica, for example, it can integrate 84 percent of the things that they put into it." }, { "start": 800, "end": 805, "text": " It's not said whether it gets it wrong or simply times out in the rest." }, { "start": 805, "end": 815, "text": " I would say it times out because probably Mathematica never gets it wrong because it's an actual symbolic manipulator with defined rules." }, { "start": 815, "end": 822, "text": " So I guess the rest of the rest 16 percent, it simply times out, doesn't find a solution." }, { "start": 822, "end": 838, "text": " Whereas this Facebook system and they say it usually finds the solution in less than a second, finds these solutions in 98.4 percent of the time with a beam size of one." }, { "start": 838, "end": 840, "text": " Now, what does the beam size mean?" }, { "start": 840, "end": 846, "text": " It means that the time that you have to generate the output is the time that you generate the output." }, { "start": 846, "end": 852, "text": " So if you have a sequence of input, you can always choose to do a beam search." }, { "start": 852, "end": 865, "text": " So when you have a sequence of input, let's actually give an example, a cat jumps." }, { "start": 865, "end": 871, "text": " The task is simply to continue the sentence, right, to continue the sentence so you can generate an output sequence." }, { "start": 871, "end": 876, "text": " The output sequence could be over the dog." }, { "start": 876, "end": 884, "text": " What you can do is you can this is beam size, would be called beam size one or no beam search at all." }, { "start": 884, "end": 893, "text": " You can do what's called a beam search in that each step you actually generate multiple hypotheses and then keep the best ones in memory." }, { "start": 893, "end": 908, "text": " So you in a beam size of 10, you would always consider the 10 most probable solutions and you would kind of evaluate all 10 and then always keep the 10 best." }, { "start": 908, "end": 913, "text": " Let's see how this goes. Let's do a beam size of three in our case." }, { "start": 913, "end": 917, "text": " So a cat jumps and then you could come up with three different things." }, { "start": 917, "end": 930, "text": " This sentence could continue cat jumps over a cat jumps between and a cat jumps swiftly." }, { "start": 930, "end": 932, "text": " Right. So these are your three hypotheses." }, { "start": 932, "end": 937, "text": " Then we go to the next step. We have to evaluate each of those, each of them." }, { "start": 937, "end": 944, "text": " So a cat jumps over the over a over me." 
}, { "start": 944, "end": 956, "text": " A cat jumps between the between two and a cat jumps between many." }, { "start": 956, "end": 967, "text": " The cat jumps swiftly end of sentence, that jumps swiftly over cat jumps swiftly." }, { "start": 967, "end": 970, "text": " And right, these are all valid." }, { "start": 970, "end": 978, "text": " So of these nine, you would now select again the three that overall have the highest likelihood." }, { "start": 978, "end": 990, "text": " Maybe that's the following cat jumps over the cat jumps over a and a cat jumps between two." }, { "start": 990, "end": 992, "text": " These three. Right. So you just keep these three." }, { "start": 992, "end": 1000, "text": " And then in the next step, you again from these three, you would want for each three hypotheses and so on." }, { "start": 1000, "end": 1002, "text": " So this is what's called a beam search." }, { "start": 1002, "end": 1010, "text": " And if you give it a beam size of 10 or 50, this system tends to improve even more." }, { "start": 1010, "end": 1019, "text": " The way this system works is quite different from Mathematica in that Mathematica, as I said, is a symbolic solver that never makes mistakes," }, { "start": 1019, "end": 1022, "text": " but can fail to give you a solution." }, { "start": 1022, "end": 1029, "text": " This system simply generates an output sequence that is not guaranteed to be actually a solution to the problem." }, { "start": 1029, "end": 1031, "text": " It's just a hypothesis." }, { "start": 1031, "end": 1035, "text": " But then you can quickly check whether the hypothesis is correct." }, { "start": 1035, "end": 1041, "text": " So the nature of these math problems with integration, you can simply differentiate." }, { "start": 1041, "end": 1046, "text": " And with ODE, you can simply plug them in to see if there is solution." }, { "start": 1046, "end": 1057, "text": " It's kind of like your classic, let's say, NP-hard problems or like a SAT solving where you can quickly check whether something is a solution." }, { "start": 1057, "end": 1065, "text": " So if you have a system that generates 50 hypotheses, you could quickly check which one is actually correct." }, { "start": 1065, "end": 1073, "text": " So these numbers here mean that one of these 50 that the system came up with was a correct solution." }, { "start": 1073, "end": 1079, "text": " And if you allow for such many hypotheses, you can see it goes up quite a bit." }, { "start": 1079, "end": 1083, "text": " For example, the ODE solving is almost the same." }, { "start": 1083, "end": 1087, "text": " And here it's even worse if you take ODE's of order 2." }, { "start": 1087, "end": 1089, "text": " It's even worse than Mathematica." }, { "start": 1089, "end": 1096, "text": " But if you allow for larger beam sizes, you see it dramatically goes up." }, { "start": 1096, "end": 1099, "text": " And so it's a different approach." }, { "start": 1099, "end": 1113, "text": " I wouldn't be surprised if Mathematica would actually implement something like this very soon or just buy this off of Facebook or something, or Facebook by Mathematica in whatever way." }, { "start": 1113, "end": 1117, "text": " This clearly is a different approach and it appears to work better." }, { "start": 1117, "end": 1119, "text": " But there is a caveat." }, { "start": 1119, "end": 1123, "text": " So here's the caveat that I see with this kind of thing." 
}, { "start": 1123, "end": 1130, "text": " These evaluations are done on data sets, of course." }, { "start": 1130, "end": 1136, "text": " And this paper goes into big detail on how to generate these data sets." }, { "start": 1136, "end": 1142, "text": " So they have to pay attention to many things like many solutions are equivalent." }, { "start": 1142, "end": 1156, "text": " For example, here, you know, that this solution and this solution to this equation, to this differential equation are the same." }, { "start": 1156, "end": 1164, "text": " So they have to use a symbolic framework to check whether the solutions are the same and so on." }, { "start": 1164, "end": 1176, "text": " This it is very good work, but they do evaluate on expressions that fit into their data set." }, { "start": 1176, "end": 1191, "text": " So here in their data set, they say, OK, we evaluate, you know, expressions with up to 15 internal nodes, 11 leave values for these four binary operators, then these 15 unary operators." }, { "start": 1191, "end": 1197, "text": " So the expressions that they train on fall into this data set." }, { "start": 1197, "end": 1205, "text": " Right. Also, just numbers from negative five to five." }, { "start": 1205, "end": 1217, "text": " So it is it is kind of to be expected that a system that is trained on these things would meet would perform very well on these things as opposed to opposed to Mathematica." }, { "start": 1217, "end": 1222, "text": " That is, you know, a general purpose tool." }, { "start": 1222, "end": 1226, "text": " Moreover, if you look at." }, { "start": 1226, "end": 1228, "text": " Sorry, I think this is further down." }, { "start": 1228, "end": 1239, "text": " For example, in integration for the integration task, they have three different ways of solving of generating data." }, { "start": 1239, "end": 1245, "text": " They have the forward way where they simply use a symbolic integrator to generate expressions." }, { "start": 1245, "end": 1253, "text": " They have the backward way where they start from the integral and then differentiate it in order to obtain a training pair." }, { "start": 1253, "end": 1256, "text": " And they have an integration by parts method." }, { "start": 1256, "end": 1261, "text": " These are three different methods to come up with problems for this system to be trained on." }, { "start": 1261, "end": 1271, "text": " And they have very different properties to the effect that if you train with one just one, it won't work well on the other." }, { "start": 1271, "end": 1282, "text": " So if you train with the forward method, it will work very well on data that has been generated with the forward method." }, { "start": 1282, "end": 1285, "text": " So this is down here. This is what it's trained on." }, { "start": 1285, "end": 1287, "text": " And this is what it's evaluated on." }, { "start": 1287, "end": 1291, "text": " Right. You can see the diagonal is very, very strong." }, { "start": 1291, "end": 1303, "text": " But if you train with the backward method, but you evaluate on data generated with the forward method, it is actually very poor." }, { "start": 1303, "end": 1309, "text": " That's because in one case, generally, the solutions are longer than the input." }, { "start": 1309, "end": 1311, "text": " In the other case, the solutions are shorter." 
}, { "start": 1311, "end": 1317, "text": " So not only does this system only work on the particular task here," }, { "start": 1317, "end": 1325, "text": " it is actually very attuned to the way that this data was generated." }, { "start": 1325, "end": 1338, "text": " Right. So in fact, I would postulate that this training data is only probably a very small subset of all of the things that we would like to integrate." }, { "start": 1338, "end": 1349, "text": " And again, the problem the problem is made kind of worse because they their evaluation set would also come from their distribution." }, { "start": 1349, "end": 1361, "text": " So what they've ultimately shown is that they can do this on a very skewed, probably very biased subset of this mathematical problem." }, { "start": 1361, "end": 1366, "text": " And on that biased subset, they can outperform something like Mathematica." }, { "start": 1366, "end": 1369, "text": " Right. They kind of defeat themselves." }, { "start": 1369, "end": 1378, "text": " Yeah. If you look here, they even the different integration data generating methods, if you only train on one of them, it doesn't generalize." }, { "start": 1378, "end": 1388, "text": " If you only train on forward data, then if you evaluate on backward generated data, it doesn't work." }, { "start": 1388, "end": 1392, "text": " So even the integrator can't really generalize." }, { "start": 1392, "end": 1403, "text": " So they have to kind of combine different method. And even now, we can probably easily find examples that this integrator can't solve." }, { "start": 1403, "end": 1415, "text": " So, I mean, there is a lot of cool things here and they show a number of properties that the model learns just from without them telling it to." }, { "start": 1415, "end": 1421, "text": " And it's cool that it works anyway. As I said, this model has no programmed in notion of how math works." }, { "start": 1421, "end": 1443, "text": " But also it kind of shows the problems if you do this via a training data set in that if your training data set is very skewed and then your evaluation set follows the same generation process, the claims you can make at the end are limited." }, { "start": 1443, "end": 1447, "text": " And to be fair, I don't know what claims they made in the press generally." }, { "start": 1447, "end": 1455, "text": " So I think there is a pretty cool work. Check it out. And that was it. Thanks." } ]
BBp0tHcirtQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
git for research basics: fundamentals, commits, branches, merging
[ "Science & Technology" ]
[ "git", "research", "commit", "merge", "conflict" ]
Don't watch this if you already know how to solve a merge conflict :)
Hi there. Today we're taking a look at Git, especially Git as it is used in research collaborations. So Git is a tool to collaborate, but when you do research — like when you work on a paper together with other people — you won't use a lot of the features that Git offers and that are usually described for Git. So in this series I want to talk about what's kind of the most simple way to collaborate with people on a research project using Git. And today we're going to go over just the fundamentals, which makes everything else a lot easier. So what you need to understand about Git is that fundamentally Git is a graph — a graph of commits. What do I mean by this? So let's say you have your paper, you write some things, and then this is kind of version one. And then you have another paper, or the same paper, and you change this line here — that's version two, and so on. You have this chain of versions that you would like to keep in store. So this is the classic example of version control, where you would like to save these versions, and do it in a way that you can at any point in time go back to any previous version. And this is exactly what Git does, without you having to rename things — like people usually copy the file and then rename it: version two, version three, final version, really final version, really final version corrected, blah blah blah. Alright, so Git fundamentally is a graph, a graph of objects we call commits. So a commit, which I'm going to represent as a bubble here, is simply kind of an image of your hard drive — or of one folder of your hard drive — at a particular point in time. So this will contain all kinds of files. Let's call this file A, file B. Oops, well, I meant to make a square here. But all the files that are in your folder, which is called the Git repository — that's not quite correct, but bear with me. You have this folder, and all the files in this folder, when you make a commit, are saved as they are into one of these bubbles. And they're saved forever, basically, in the state that they are in. So what you can do now is you can go ahead and make a second commit. So you change a bunch of files. Let's say the file B is still the same, but the file A has changed and is now A'. You make a second commit, and the second commit references the first commit. So part of a commit — except for the very first commit — is always a pointer to its parent commit. And especially if you look at the commits, they all have names. And the name of a commit is always its hash. And the hash basically includes the hash of all the files that are in there. So a hash could be something like f5c259, and so on. And for the next commit, the hash also includes the reference to the parent. That's why an integral part of a commit is which parent it belongs to. This ultimately is what makes the graph a graph: every commit references its parent. So you can address every commit by its name, as I said, which is the hash of the commit. The hash is really long, but you can also simply reference it by the first couple of letters. As long as that's unique, Git will let you do this whenever you need to reference some commit. So we've discussed that basically a commit is a bunch of files, as they are, saved in this state. Git is of course smart: it will only save the diff from one commit to the other. But you can just imagine that a commit is simply the status of a folder at a particular point in time.
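Here is a loose back-of-the-envelope illustration of why a commit's name depends on both its content and its parent. This is not Git's real object format — real Git hashes structured tree and commit objects — just a sketch of the principle:

```python
# Hash the file snapshot together with the parent's hash: changing
# either the content or the parent changes the commit's name.
import hashlib

def commit_hash(files, parent=None):
    h = hashlib.sha1()
    for name in sorted(files):
        h.update(name.encode())
        h.update(files[name].encode())
    if parent is not None:
        h.update(parent.encode())  # the parent pointer is part of the name
    return h.hexdigest()

c1 = commit_hash({"paper.tex": "version one"})
c2 = commit_hash({"paper.tex": "version two"}, parent=c1)
print(c1[:7], "<-", c2[:7])  # short prefixes, like Git's abbreviated hashes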
So let me just take away these files here. There are a bunch of other things in Git. One concept that Git has is called a tag. A tag is a name for a commit that you give yourself. And the tag is like a little flag that sticks to a commit. And you may call this one v1, version 1. This is simply a tag, and as you make new commits, the tag simply stays there. And at any time, if you don't want to remember this big long hash, you can simply refer to this commit as v1, because that's the tag. It's kind of simple. The next form of a little flag that you can append to a commit is called a branch. So what's the difference between a tag and a branch? A branch is also a flag — we'll call it, I don't know, blah. The difference is that when you are on this commit here, right here, and you make a commit on top of this commit — while, as it's called, you have checked out the blah branch, so right now you're looking at blah, which is this commit, and you make a commit on top of it — what Git will do automatically for you is erase this flag and move it to the next commit. So you might know branches from Subversion or other version control technologies. It's very similar, but in Git, a branch is simply like a tag: it's simply a name for a commit, but with the additional property that when you make a commit on top of that commit — so when the new commit has it as its parent — Git will move the branch, the little flag, to the new commit. So basically, you always have this one branch, which is called master. Git creates this automatically for you, so you just have this little flag, master. And you make a commit on top of master, which would cause master to go here. So when people say they work on the master branch, it means they're simply making commits on top of the commit that currently has the master flag. Git also allows you to move around both tags and branches to basically any commit. So I could forcefully erase this here and simply stick the master flag here. And sometimes, if we decide these two commits are no good, we would simply do this: we would take the master flag, put it here, and then when we make a new commit on top of master now, we make a new commit pointing here, and Git would move the master flag, because it's a branch, and then we simply continue working here, working here, and Git will happily move this master along. So in Git, there is no need to actually delete commits or anything like this. We can simply move the branch that we're working on to the commit we like, and garbage collection will ultimately, at some point, go and delete these two commits. This is a bit more difficult once you collaborate with other people, because they might actually have made commits that reference the commits that you just kind of deleted. So it's a bit tricky, but ultimately this is something you can do. So the next thing we're going to talk about is multiple branches. Having multiple branches basically boils down to this: you have a few commits, you have your graph, and let's say this is your master branch. So here we have master — or let's make it the one before, otherwise I don't have space: master. So what someone else might like to do is say: hey, I want to try out this new feature in code. It will probably change the code base and so on, but I want to try it out. Maybe it'll introduce some bugs and so on. And then what you can do is make a new branch — F1, let's call it F1, for feature one.
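As a quick aside before we continue with F1: the branch-versus-tag distinction above boils down to a tiny table of names, as this behavioral sketch shows. It is my simplification, not Git's internal implementation:

```python
# Both tags and branches are just names for commits, but a branch flag
# follows new commits while a tag flag stays where it is.
refs = {"master": "c1", "v1": "c1"}  # a branch and a tag, both naming c1
branches = {"master"}                # only 'master' is a branch

def commit_on_top(ref, new_commit):
    if ref in branches:
        refs[ref] = new_commit       # the branch moves to the new commit
    # tags like 'v1' are left untouched

commit_on_top("master", "c2")
commit_on_top("v1", "c2")            # no effect: v1 is a tag, not a branch
print(refs)                          # {'master': 'c2', 'v1': 'c1'}
```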
And then I can make a commit on top of feature one, which would move the feature one flag to here, and so on; I can make a second and third commit. Meanwhile, the other people working on the project, or maybe even you yourself, can work on top of this commit, on top of the master branch. In software engineering, this is typically used when one part of the team wants to implement a new feature, while the other part of the team continues to do bug fixes or things like this, development on the version of the software that doesn't yet have the new feature. They need to fix bugs, but since the new feature is not complete yet, they can't both work on the same code base. So each works on their own branch, so to say. At the end, when feature one is ready, people say: okay, we've implemented it, it's all good, there are no bugs, we would like to integrate feature one into the main software. What you have to do then is a so-called merge. A merge is a process that generates a merge commit, and a merge commit is this thing here. As you notice, it has more than one parent; in this case it has two parents whose changes it combines. Both branches are based off of this commit here, and then individual changes were made in this branch and in this branch. So there's the possibility that people changed different things, and what the merge commit somehow needs to do is bring these changes together. Both branches might even have changed the same file, but in a different way, and now the question is: how do we merge these different files? That's kind of the last topic we'll go into today: how does Git do a merge? When we talk about merging, Git has a bunch of built-in algorithms that help you. Most of the time, merging is automatic. If you have two files here, A and B, and in one branch A is changed while in the other branch B is changed, Git simply assumes: well, this one branch has changed A, the other one hasn't, so the changes mean something, I'll take them. Basically, whenever something has changed in one branch and not in the other, Git will assume that the changes are the thing that needs to continue to live. It assumes the changes were made for a reason, and that reason should persist: one might be a bug fix, the other one might be the new feature. The same goes within the same file. When in one branch something at the top of the file is changed, and in the other branch something at the bottom is changed, Git simply assumes both changes are wanted and takes both. The only time when Git doesn't know what to do is when both branches change the same line in the same file, or lines close by; there are algorithms by which Git determines that there is a so-called merge conflict. That's the only situation where Git doesn't know what to do. So as a preliminary remark, it's a good idea to structure files in a line-based fashion, especially if you write LaTeX. A good practice is to put every sentence on a new line and not have giant lines of multiple sentences, because if you put every sentence on a new line, you immediately see which sentence was changed. Whereas if an entire paragraph is one big line, Git will simply tell you that this line, meaning the whole paragraph, has changed, and you don't see what's actually happening.
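To make that concrete, here is a hypothetical LaTeX paragraph written in the line-based style; the sentences are made up, but the point is that editing one sentence then shows up as a clean one-line diff:

  We propose a method for exploiting monotonic alignments.
  It decodes the output in a constant number of steps.
  We evaluate it on a speech recognition benchmark.

Written as a single long physical line, the same three-sentence paragraph would appear in every diff as one changed line, no matter which sentence was actually edited.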
So when you have a merge conflict, Git asks you what to do, and it does this in a very simple way. We're going to take a look here as a final demonstration. I have a Git repository here; let me open that up. As you can see, there's simply this test file in here, and I've made just one commit, the initial commit. Let's look at this test file: it simply says hello. So what I can do is, for example, add a hi. When I want to make a new commit, first of all, git status will always tell you what you can do and what's happening in your Git repository. Here it says: changes not staged for commit, modified test.txt. And it also tells you what you can do. It tells me, for example: use git checkout dash dash with the file name to discard changes, or use git add to update what will be committed. So I'll use git add with this file. Now it tells me: changes to be committed, and it's green, as you can see. When I now type git commit, it should commit these changes. And this is a common occurrence in Git: whenever you see a text editor opening, Git expects you to type a text message, in this case a commit message, basically a log message. The lines starting with hash signs are comments, which will not go into the message; this is actually all described right here, in these comments. The useful thing is that when you leave the message empty, Git will abort the commit. So if you notice you've done something wrong, you can simply save this file while it's empty, being nothing but comments basically, and Git will abort. So it's super useful. I'll just say: added hi, and then save this file. There is nothing special going on here; this is a text editor editing a text file. You simply need to save the file and close the editor, and Git will be like: okay, cool, I'll continue. With git log, you can now see we have two commits: my initial commit, and the commit called added hi. If you look at the test file, you see the hi. So what we'll do now is, finally, make two branches, as we've discussed before. This is my initial commit, and I've made one more commit. We're on branch master right now, which git status will tell you. See? On branch master. So this is now master. What we'll do is make a new branch called F1 and make a commit on F1, meaning we'll move this flag: F1. Then we'll make a commit on top of master, like this, which means we'll move this flag: master. And then we will merge F1 back into master, such that master ends up here. At the end, we can even remove the F1 branch. We'll do all of this while having a merge conflict, so that you see the whole process. So, okay. First I want to make a branch F1. For this, we can use checkout minus b for making a new branch F1. If the branch already exists, you simply need to checkout, which means I simply go to where this branch is, to the commit that the branch references. We also say we put HEAD to this commit; HEAD is always the thing you're looking at, basically, the thing you've currently checked out. So: make a new branch F1, and we immediately switch to F1. If I type status, it says on branch F1. It's still the same commit, we're just on a different branch. Now we'll make a change to this file here; I'm going to extend the hello with some more o's. Cool. Save the file. Status says it's modified, and I want to add and commit it. There's a shortcut for that: commit minus a minus m. The a simply says: take all the files that have changed and add them, so I don't need to git add each changed file separately. Though this only counts for changed files; if you have completely new files that Git isn't tracking yet, you need to add those yourself. And with the minus m, I can give the commit message directly: more o. Cool.
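For reference, the demo up to this point corresponds roughly to this command sequence; the file name, branch name, and messages are the ones from the video:

  git status                 # shows modified test.txt and suggests what to do
  git add test.txt           # stage the change
  git commit                 # opens the editor; type "added hi" and save
  git log                    # now shows the initial commit and "added hi"
  git checkout -b F1         # create the branch F1 and switch to it
  git commit -a -m "more o"  # stage all changed tracked files and commit in one go

Note that checkout -b is simply branch creation plus checkout in one command.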
So now what we've done is make this commit here and move the F1 flag to it. Next we'll go back to this commit, the one the master branch points to, and make this other commit. Going back to some commit is a checkout: checkout master, since master is still referring to that commit. As you can see, when I open the test file, the extra o's are gone; it's the state from before. I can now change the file in some other manner; in this case I change the same hello line again, because now I want many e's. And I can commit this. Because I'm now on the branch master, it will make this new commit here and move the master branch to it: more e. If you look at git log, you see all the commits on this branch, but you don't see the commit on the F1 branch. For that, I would have to go back to the F1 branch. I check it out and log, and you see it's a different story here: after the added hi commit, there's the more o commit, whereas up on master, after the added hi commit, there's the more e commit. Merging, by the way, doesn't only happen when you deliberately make different branches. When you collaborate with other people, and these people make commits and you make commits independently of each other, then when you try to synchronize your work, you often need to do a merge, and merge conflicts can happen there too. What we can do now is go back to master. Oops; git checkout master. There are shortcuts for all of these. We're on this branch right here, and what we want to do is make the merge commit: we want to merge F1 into master. While I am on master, I can say git merge F1. It will try to merge, but it will tell me: conflict, automatic merge failed, fix conflicts and then commit the result. If I say git status, it tells me: you're currently merging, you have unmerged paths, and this test.txt file was modified in both branches. So let's go into the test file. This looks very strange if you see it for the first time, but it's actually very intuitive. Wherever there is a line, or a block of lines, that both branches have changed, Git will indicate this by writing directly into the file. It puts a row of less-than signs, then it says HEAD, which means: the thing you're currently looking at, which we know is master, has changed this line to this, the hello with the e's. Then comes a row of equals signs, and below that it says that the F1 branch has changed the same line to the hello with the o's. It denotes the end of this block with a row of greater-than signs. What you need to do in order to merge is simply edit this file into the state you wish the merged version to have. You can always start by removing the marker lines; it's maybe good practice to remove the equals line first. Then, within these delimiters, change how you want the file to look. In essence, I simply want to have these o's here at the end, just the right number of them. Like this, or like this; I like that. I'm going to call that the merged state. Then I delete the remaining marker lines. This is the file that I would like the merge commit to have.
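To make the conflict markers concrete, this is roughly what test.txt looked like right after the failed merge, before the edit; the exact number of o's and e's is just whatever was typed in the demo:

  hi
  <<<<<<< HEAD
  helloeee
  =======
  hellooo
  >>>>>>> F1

The section between <<<<<<< and ======= is the version from HEAD, which is master here, and the section between ======= and >>>>>>> is the version from F1. Resolving means replacing this whole block with the line you actually want.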
What I can do now is save this file. Again I say git status; it still tells me it's unmerged, but it also tells me what to do: use git add to mark resolution. I've resolved it, so: git add test.txt. git status. Now it says: all conflicts fixed, but you are still merging; use git commit to conclude merge. git commit. Bam. I still have to enter a commit message, which is already predefined here, saying that I merged the branch F1 and there were conflicts, but that's fine. I like this message, so I'm simply going to save the file right here. When I look into git log, it now gives me the full story: first there is the added hi commit, then the more o commit and the more e commit, which were in parallel to each other, and then I merged both branches into one. We're now right here. What I can do now is delete the F1 flag, because I don't need it anymore. I do that with git branch minus d F1, and it says: deleted branch F1. No commits are actually deleted when you delete a branch; it's only the little flag that is deleted. The only danger is when you delete the little flag, the name, and you're then unable to reach the commit from any other end. Here, of course, we have this master, and by following this edge here, we can reach the commit just fine, so Git won't delete it or garbage collect it. And Git will also tell you when you're about to do something dangerous. So don't worry. With this, I think you already have many tools and many insights into Git. In another video, we're going to look at how to collaborate online with people, which isn't much harder than this; it's simply two more steps to push and pull your work from a server together with other people. Alright, so that was it. Take care.
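As a compact recap, the merge part of the demo in command form; all of these are standard Git commands:

  git checkout master
  git merge F1          # fails with a conflict in test.txt
  # edit test.txt: remove the markers, keep the content you want
  git add test.txt      # mark the conflict as resolved
  git commit            # concludes the merge; the message is prefilled
  git branch -d F1      # delete the F1 flag; the commits remain reachable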
[ { "start": 0, "end": 9, "text": " Hi there. Today we're taking a look at Git, especially Git as it is used maybe in research collaborations." }, { "start": 9, "end": 19, "text": " So Git is like a tool to collaborate, but when you research, like when you work on a paper together with other people," }, { "start": 19, "end": 24, "text": " you won't use a lot of the features that Git offers and that are usually described by Git." }, { "start": 24, "end": 33, "text": " So in this series I want to talk about what's kind of the most simple way to collaborate with people on a research project using Git." }, { "start": 33, "end": 40, "text": " And today we're going to go over just the fundamentals, which makes everything else a lot easier." }, { "start": 40, "end": 51, "text": " So what you need to understand about Git is that fundamentally Git is a graph, and it's a graph of commits." }, { "start": 51, "end": 61, "text": " What I mean by this. So let's say you have your paper, you write some things, and then this is kind of version one." }, { "start": 61, "end": 70, "text": " And then you have another paper, or the same paper, and you kind of change this line here. That's version two, and so on." }, { "start": 70, "end": 76, "text": " You have kind of this chain of versions that you would like to keep in store." }, { "start": 76, "end": 82, "text": " So this is the classic example of version control, where you would like to save these versions," }, { "start": 82, "end": 88, "text": " and do it in a way that you can at any point in time go back to any version previously." }, { "start": 88, "end": 92, "text": " And this is exactly what Git does, without you having to kind of rename." }, { "start": 92, "end": 102, "text": " Like people usually copy the file and then rename like this version two, version three, final version, really final version, really final version corrected, blah blah blah." }, { "start": 102, "end": 108, "text": " Alright, so Git fundamentally is a graph, and a graph of an object we call a commit." }, { "start": 108, "end": 116, "text": " So a commit, which I'm going to represent as a bubble here, is simply a kind of an image of your hard drive," }, { "start": 116, "end": 120, "text": " or one folder of your hard drive at a particular point in time." }, { "start": 120, "end": 127, "text": " So this will contain all kind of files. Let's call this file A, file B." }, { "start": 127, "end": 134, "text": " Oops, well, I meant to make a square here. But all the files that are in your folder," }, { "start": 134, "end": 140, "text": " which is called the Git repository, or it's not correct, but bear with me." }, { "start": 140, "end": 146, "text": " You have this folder, and all the files in this folder, when you make a commit," }, { "start": 146, "end": 152, "text": " all these files are kind of saved as they are into one of these bubbles." }, { "start": 152, "end": 159, "text": " And they're saved forever basically in this status that they are." }, { "start": 159, "end": 165, "text": " So what you can do now is you can go ahead and make a second commit." }, { "start": 165, "end": 175, "text": " So you change a bunch of files. Let's say the file B is still the same, but the file A has changed, is now A'." }, { "start": 175, "end": 179, "text": " You make a second commit, and the second commit references the first commit." }, { "start": 179, "end": 188, "text": " So part of a commit, except the very first commit, part of a commit is always a pointer to its parent commit." 
}, { "start": 188, "end": 192, "text": " And especially if you look at the commits, they all have names." }, { "start": 192, "end": 196, "text": " And the name of a commit is always its hash." }, { "start": 196, "end": 201, "text": " And the hash includes basically the hash of all the files that are in there." }, { "start": 201, "end": 209, "text": " So a hash could be something like F5C259, and so on." }, { "start": 209, "end": 215, "text": " And for the next commit, the hash also includes the reference to the parent." }, { "start": 215, "end": 222, "text": " That's why the integral part of a commit is to which parent it belongs." }, { "start": 222, "end": 228, "text": " This ultimately is what makes the graph kind of the graph." }, { "start": 228, "end": 233, "text": " Every commit references its parent." }, { "start": 233, "end": 238, "text": " So you can address every commit by its name, as I said, which is the hash of the commit." }, { "start": 238, "end": 246, "text": " So the hash is really long, but you can also simply reference it by the first couple of letters." }, { "start": 246, "end": 252, "text": " As long as that's unique, Git will let you do this whenever you need to reference some commit." }, { "start": 252, "end": 262, "text": " So we've discussed that basically a commit is a bunch of files, as they are, and it's saved in this state." }, { "start": 262, "end": 267, "text": " So Git is of course smart. It will only save the diff from one to the other commit." }, { "start": 267, "end": 274, "text": " But you can just imagine that a commit is simply the status of a folder at a particular point in time." }, { "start": 274, "end": 280, "text": " So let me just take away these files here." }, { "start": 280, "end": 285, "text": " There are a bunch of other things in Git." }, { "start": 285, "end": 292, "text": " So one concept that Git has is called a tag." }, { "start": 292, "end": 297, "text": " A tag is a name for a commit that you give yourself." }, { "start": 297, "end": 301, "text": " And the tag is like a little flag that sticks in a commit." }, { "start": 301, "end": 305, "text": " And you may say this, v1, version 1." }, { "start": 305, "end": 310, "text": " This is simply a tag, and as you make new commits, the tag simply stays there." }, { "start": 310, "end": 316, "text": " And at any time, if you don't want to remember this big long hash, you can simply refer to this commit as v1." }, { "start": 316, "end": 321, "text": " Because that's the tag. It's kind of simple." }, { "start": 321, "end": 327, "text": " The next form of a little flag that you can append to a commit is called a branch." }, { "start": 327, "end": 331, "text": " And a branch, the difference between a tag and a branch." }, { "start": 331, "end": 338, "text": " So a branch is also this flag, and we'll call it, I don't know, blah." }, { "start": 338, "end": 345, "text": " The difference is that when you are on this commit here, right here," }, { "start": 345, "end": 350, "text": " and you make a commit on top of this commit," }, { "start": 350, "end": 353, "text": " while what's called you've checked out the blah branch." }, { "start": 353, "end": 359, "text": " So right now you're looking at blah, which is this commit, and you make a commit on top of this commit." }, { "start": 359, "end": 368, "text": " What Git will do automatically for you is it will erase this flag and move it to this next commit." 
}, { "start": 368, "end": 378, "text": " So you might know branches from subversion or other version control technologies." }, { "start": 378, "end": 382, "text": " It's very similar, but in Git, a branch is simply like a tag." }, { "start": 382, "end": 385, "text": " It's simply a name for a commit." }, { "start": 385, "end": 390, "text": " But with the additional property that when you make a commit on top of that commit," }, { "start": 390, "end": 399, "text": " so when it has the commit as its parent, then Git will move the branch, the little flag, to the new commit." }, { "start": 399, "end": 406, "text": " So basically, you always have that one branch, which is called master." }, { "start": 406, "end": 413, "text": " Git creates this automatically for you if you just have this little flag, master." }, { "start": 413, "end": 421, "text": " And you make a commit on top of master, which would cause master to go here." }, { "start": 421, "end": 427, "text": " So people usually say they work on the master branch," }, { "start": 427, "end": 433, "text": " which means they're simply making commits on top of the commit that currently has the master flag." }, { "start": 433, "end": 441, "text": " Git also allows you to move around both tags and the branches basically to any commit." }, { "start": 441, "end": 449, "text": " So I could forcefully go erase this here and simply stick the master flag here." }, { "start": 449, "end": 456, "text": " And sometimes if we kind of decide these two commits are no good, we would simply do this." }, { "start": 456, "end": 463, "text": " We would simply take the master flag, put it here, and then when we make a new commit on top of the master now," }, { "start": 463, "end": 466, "text": " what we would make is we make a new commit point here," }, { "start": 466, "end": 472, "text": " then Git would move the master flag because it's a branch, master," }, { "start": 472, "end": 482, "text": " and then we simply continue working here, working here, and Git will happily move along this master." }, { "start": 482, "end": 486, "text": " So in Git, there is no need to actually delete commits or something like this." }, { "start": 486, "end": 496, "text": " What we can simply do is kind of move the branch that we're working on to the commit we like," }, { "start": 496, "end": 501, "text": " and garbage collection ultimately will at some point go and delete these two commits." }, { "start": 501, "end": 505, "text": " This is a bit more difficult once you collaborate with other people," }, { "start": 505, "end": 514, "text": " because they might actually have made commits that reference the commits that you just kind of deleted or so." }, { "start": 514, "end": 521, "text": " So it's a bit tricky, but ultimately this is something you can do." }, { "start": 521, "end": 525, "text": " So the next thing we're going to talk about is multiple branches." }, { "start": 525, "end": 535, "text": " Having multiple branches basically boils down to you have few commits, you have your graph," }, { "start": 535, "end": 539, "text": " and let's say this is your master branch." }, { "start": 539, "end": 551, "text": " So here we have master, but also, or let's make the one before, otherwise I don't have space, master." }, { "start": 551, "end": 565, "text": " So what someone else would like to do is say, hey, I want to try out this new feature in code." }, { "start": 565, "end": 569, "text": " It will probably change the code base and so on, but I want to try it out." 
}, { "start": 569, "end": 572, "text": " Maybe it'll introduce some bugs and so on." }, { "start": 572, "end": 580, "text": " And then what you can do is you can make a new branch, F1, let's call it F1 for feature one." }, { "start": 580, "end": 592, "text": " And then I can make a commit on top of feature one, which would then move the feature one flag to here, and so on." }, { "start": 592, "end": 594, "text": " I can make second and third commit and so on." }, { "start": 594, "end": 603, "text": " Meanwhile, the other people working on the project, or maybe even you yourself, can work on top of this commit," }, { "start": 603, "end": 606, "text": " on top of the master branch." }, { "start": 606, "end": 614, "text": " So in kind of software engineering, this is typically used when one part of the team wants to implement a new feature," }, { "start": 614, "end": 619, "text": " but the other part of the team kind of continues to do bug fixes or things like this," }, { "start": 619, "end": 625, "text": " development on the version of the software that doesn't yet have the new feature." }, { "start": 625, "end": 632, "text": " But they kind of need to fix bugs, and since the new feature is not complete yet, they can't both work on the same code base." }, { "start": 632, "end": 639, "text": " So each work on their own branch, so to say." }, { "start": 639, "end": 652, "text": " And at the end, when feature one is ready, people say, okay, we've implemented it, it's all good, there's no bugs." }, { "start": 652, "end": 658, "text": " We would like to integrate the feature one into the main software, basically." }, { "start": 658, "end": 666, "text": " What you have to do is you would have to do a so-called merge." }, { "start": 666, "end": 675, "text": " A merge is a process that generates a merge commit, and a merge commit is this thing here." }, { "start": 675, "end": 678, "text": " As you notice, it has more than one parent." }, { "start": 678, "end": 686, "text": " It has, in this case, two parents where it kind of combines." }, { "start": 686, "end": 693, "text": " So from this commit here, both branches are based off of this commit." }, { "start": 693, "end": 699, "text": " And then changes were made, individual changes, in this branch and in this branch." }, { "start": 699, "end": 704, "text": " So there's the possibility that people change different things." }, { "start": 704, "end": 712, "text": " And what the merge commit needs to do somehow is to bring together these changes." }, { "start": 712, "end": 718, "text": " So actually, both branches might have changed the same file, but in a different way." }, { "start": 718, "end": 723, "text": " And now the question is how do we merge these different files?" }, { "start": 723, "end": 729, "text": " And that's kind of the last topic we'll go into today." }, { "start": 729, "end": 733, "text": " How does Git do a merge?" }, { "start": 733, "end": 744, "text": " So when we talk about merging, Git has a bunch of built-in algorithms that it helps you with." }, { "start": 744, "end": 747, "text": " Most of the time, merging is automatic." }, { "start": 747, "end": 760, "text": " So if you have files here, A and B, and in one branch, A is changed, some here, and in one branch, B is changed." }, { "start": 760, "end": 766, "text": " Git simply assumes, well, this one branch has changed A, the other one hasn't." }, { "start": 766, "end": 770, "text": " So the changes mean something. I'll take them." 
}, { "start": 770, "end": 778, "text": " So basically, whenever something has changed in one branch and not changed in the other," }, { "start": 778, "end": 787, "text": " it will assume that the changes are the thing that needs to continue to live." }, { "start": 787, "end": 794, "text": " It assumes that the changes were made for a reason, and that reason should continue." }, { "start": 794, "end": 798, "text": " So one might be a bug fix, the other one might be the new feature." }, { "start": 798, "end": 800, "text": " The same goes in the same file." }, { "start": 800, "end": 807, "text": " So when you have the same file and in one branch something on top is changed," }, { "start": 807, "end": 810, "text": " and the other branch something kind of on the bottom is changed," }, { "start": 810, "end": 819, "text": " Git simply assumes both changes are wanted and takes both." }, { "start": 819, "end": 828, "text": " The only kind of time when Git doesn't know what to do is when both branches change the same line." }, { "start": 828, "end": 837, "text": " So I'm going to represent this with, I don't know, but when both branches change the same line in the same file," }, { "start": 837, "end": 847, "text": " or close by, so there are these algorithms that Git determines when there's a so-called merge conflict." }, { "start": 847, "end": 850, "text": " That's the only time where Git doesn't know what to do." }, { "start": 850, "end": 856, "text": " And so as preliminary, it's a good idea to structure files in a line-based fashion," }, { "start": 856, "end": 860, "text": " especially if you write kind of LaTeX." }, { "start": 860, "end": 869, "text": " A good practice is to put every sentence on a new line and not have like giant lines of multiple sentences," }, { "start": 869, "end": 878, "text": " because if you put every sentence on a new line, then you immediately kind of see where something was changed." }, { "start": 878, "end": 883, "text": " Whereas if you have this big paragraph and Git will simply tell you this line has changed," }, { "start": 883, "end": 888, "text": " which is an entire paragraph, and you don't see what's happening." }, { "start": 888, "end": 895, "text": " So when you have a merge conflict, Git asks you what to do, and it does this in a very simple way." }, { "start": 895, "end": 900, "text": " We're just going to kind of take a look here as a final demonstration." }, { "start": 900, "end": 905, "text": " So I have a Git repository here." }, { "start": 905, "end": 907, "text": " Let me do that." }, { "start": 907, "end": 911, "text": " So as you can see, there's simply this test file in here." }, { "start": 911, "end": 914, "text": " And I've just made one commit, the initial commit." }, { "start": 914, "end": 918, "text": " And let's look at this test file." }, { "start": 918, "end": 920, "text": " It simply says, hello." }, { "start": 920, "end": 924, "text": " So what I can do is, for example, I can say, hi." }, { "start": 924, "end": 931, "text": " When I want to make a new commit, first of all, Git status will always tell you kind of what you can do," }, { "start": 931, "end": 934, "text": " what's happening in your Git repository." }, { "start": 934, "end": 940, "text": " Here it says, changes not staged for commit, modified test.txt." }, { "start": 940, "end": 942, "text": " And it also tells you what you can do." }, { "start": 942, "end": 949, "text": " So it tells me, for example, use git checkout dash dash with the file name to discard changes." 
}, { "start": 949, "end": 954, "text": " Or use git add to update what will be committed." }, { "start": 954, "end": 957, "text": " There's a..." }, { "start": 957, "end": 961, "text": " I'll use git add with this." }, { "start": 961, "end": 966, "text": " So it tells me changes to be committed." }, { "start": 966, "end": 968, "text": " Now it's green, as you can see." }, { "start": 968, "end": 974, "text": " So when I now type git commit, it should commit these changes." }, { "start": 974, "end": 977, "text": " And this is a common occurrence in Git." }, { "start": 977, "end": 983, "text": " Whenever you see a text editor opening, Git expects you to type a text message," }, { "start": 983, "end": 988, "text": " like a commit message in this case, like a log message, basically." }, { "start": 988, "end": 992, "text": " The hashtags are comments, which will not go in here." }, { "start": 992, "end": 997, "text": " This is all described right here, actually, in these comments." }, { "start": 997, "end": 1004, "text": " The thing about these things is, when you type an empty message, then Git will abort the commit." }, { "start": 1004, "end": 1009, "text": " Notice you've done something wrong, you can simply save this file with being empty," }, { "start": 1009, "end": 1014, "text": " being nothing but comments, basically." }, { "start": 1014, "end": 1016, "text": " Git will abort. So it's super useful." }, { "start": 1016, "end": 1022, "text": " I'll just say, added hi, and then save this file." }, { "start": 1022, "end": 1025, "text": " So this is not a special thing. All you need to do..." }, { "start": 1025, "end": 1029, "text": " This is an editor, a text editor, that edits a text file." }, { "start": 1029, "end": 1033, "text": " You simply need to save the file and close the editor, and Git will be like," }, { "start": 1033, "end": 1037, "text": " OK, cool, I'll continue." }, { "start": 1037, "end": 1040, "text": " So with git log, now you can see we have two commits." }, { "start": 1040, "end": 1044, "text": " We have my initial commit, and we have the commit called added hi." }, { "start": 1044, "end": 1048, "text": " If you look at the test file, you see hi." }, { "start": 1048, "end": 1056, "text": " So what we'll do now is, finally, we'll make two branches, as we've discussed before." }, { "start": 1056, "end": 1060, "text": " So this is my initial commit. I've made one more commit." }, { "start": 1060, "end": 1065, "text": " And we're on branch master right now, which Git status will tell you." }, { "start": 1065, "end": 1070, "text": " See? On branch master. So this is now master." }, { "start": 1070, "end": 1074, "text": " What we'll do is we'll make a new branch called F1." }, { "start": 1074, "end": 1081, "text": " We'll make a commit on F1, meaning we'll move this. F1." }, { "start": 1081, "end": 1087, "text": " Then we'll make a commit on top of master, like this, which means we'll move this." }, { "start": 1087, "end": 1099, "text": " Master. And then we will merge F1 back into master, such that this master is here." }, { "start": 1099, "end": 1104, "text": " And at the end, we can even remove the F1 branch." }, { "start": 1104, "end": 1111, "text": " And we'll do this while we're having a merge conflict, so that you see the whole process." }, { "start": 1111, "end": 1117, "text": " So, okay. So what I want to do is, first I want to make a branch F1." }, { "start": 1117, "end": 1122, "text": " For this, we can use checkout minus B for making a new branch F1." 
}, { "start": 1122, "end": 1130, "text": " If the branch already exists, you simply need to checkout, which means I simply go to where this branch is," }, { "start": 1130, "end": 1133, "text": " to the commit that the branch references to." }, { "start": 1133, "end": 1138, "text": " We also say we put head to this commit." }, { "start": 1138, "end": 1144, "text": " Head is always the thing you're looking at, basically. The thing you've currently checked out." }, { "start": 1144, "end": 1150, "text": " So, make a new branch F1, and we'll immediately switch to F1 if I type status." }, { "start": 1150, "end": 1157, "text": " It says on branch F1. It's still the same commit, but we're just in a different branch." }, { "start": 1157, "end": 1166, "text": " So we'll make kind of a change to this file here. I'm gonna say hello." }, { "start": 1166, "end": 1174, "text": " Cool. Save the file. Status. It says it's modified. I want to add and commit it." }, { "start": 1174, "end": 1180, "text": " And there's a shortcut. Commit minus A minus M." }, { "start": 1180, "end": 1186, "text": " So the A simply says all the files that have changed, add them." }, { "start": 1186, "end": 1190, "text": " So I don't need to add, git add all the changed files separately." }, { "start": 1190, "end": 1193, "text": " Though this only counts for kind of changed files." }, { "start": 1193, "end": 1198, "text": " If you have completely new files that git isn't tracking yet, you need to add them yourself." }, { "start": 1198, "end": 1204, "text": " So here with a minus A, I skip the need to first add the files," }, { "start": 1204, "end": 1210, "text": " and with the minus M I can give directly the commit message. More O. Cool." }, { "start": 1210, "end": 1219, "text": " So now what we've done is we have made this commit here and moved the F1 flag to this commit." }, { "start": 1219, "end": 1229, "text": " What we'll do now is we'll go back to this commit, which is currently master branch, and we'll make this commit." }, { "start": 1229, "end": 1234, "text": " So first what we need to do is we'll go back to some commit, which is a checkout." }, { "start": 1234, "end": 1240, "text": " Checkout master. Since master is still referring to that commit." }, { "start": 1240, "end": 1248, "text": " As you can see, when I open the test file, there's no hello. It's the status from before. Hello." }, { "start": 1248, "end": 1256, "text": " I can now change the file in some other manner. In this case I say hello, because I want many Es." }, { "start": 1256, "end": 1263, "text": " And I can say I can commit this, because I'm now on the branch master." }, { "start": 1263, "end": 1271, "text": " It will make this new commit here and move the master branch to that. More E." }, { "start": 1271, "end": 1277, "text": " If you look at git log, you see all these commits on this kind of branch." }, { "start": 1277, "end": 1286, "text": " You don't see the commit on the F1 branch. For that I would have to go back to the F1 branch." }, { "start": 1286, "end": 1293, "text": " I log, and you see here it's a different story. After the added high commit, there's the more O commit." }, { "start": 1293, "end": 1299, "text": " Whereas up here, after the added high commit, there's the more E commit." }, { "start": 1299, "end": 1307, "text": " Merging also happens when you have different branches." 
}, { "start": 1307, "end": 1313, "text": " When you collaborate with other people, and these people make commits, and you make commits independent of each other," }, { "start": 1313, "end": 1319, "text": " and you try to synchronize your work, often you need to do a merge." }, { "start": 1319, "end": 1324, "text": " And then merge conflicts can also happen." }, { "start": 1324, "end": 1329, "text": " What we can do now is we can go back to master." }, { "start": 1329, "end": 1334, "text": " Because we've... Oops. Git checkout master." }, { "start": 1334, "end": 1337, "text": " There are shortcuts for all of these." }, { "start": 1337, "end": 1340, "text": " We're on this branch right here." }, { "start": 1340, "end": 1345, "text": " What we want to do is we want to make the merge commit." }, { "start": 1345, "end": 1354, "text": " We want to merge F1 into master. While I am on master, I can say git merge F1." }, { "start": 1354, "end": 1362, "text": " It will try to merge, but it will tell me conflict, automatic merge failed, fixed conflicts, and then commit the result." }, { "start": 1362, "end": 1364, "text": " I can say git status." }, { "start": 1364, "end": 1369, "text": " It will tell me you're currently merging. You have unmerged paths." }, { "start": 1369, "end": 1375, "text": " And this test.txt file is both branches modified." }, { "start": 1375, "end": 1378, "text": " I'll go into the test." }, { "start": 1378, "end": 1383, "text": " This is very strange if you see it for the first time, but it's actually very intuitive." }, { "start": 1383, "end": 1390, "text": " What git will do is wherever the line is that both branches have changed," }, { "start": 1390, "end": 1395, "text": " or wherever the block of lines is that both branches have changed," }, { "start": 1395, "end": 1400, "text": " git will basically indicate this by writing directly into the file." }, { "start": 1400, "end": 1405, "text": " It will make these smaller, smaller, smaller, smaller, smaller than sign." }, { "start": 1405, "end": 1409, "text": " Then it says head, which means this is the thing you're currently looking at," }, { "start": 1409, "end": 1413, "text": " which we know is master, has changed this first line to this." }, { "start": 1413, "end": 1418, "text": " Hello. Then it will be like equal, equal, equal, equal." }, { "start": 1418, "end": 1423, "text": " Then it will say down here, it will say the F1 branch has changed this line," }, { "start": 1423, "end": 1426, "text": " the same line to hello." }, { "start": 1426, "end": 1434, "text": " It will denote the end of this with larger, larger, larger, larger, greater than signs." }, { "start": 1434, "end": 1444, "text": " What you need to do in order to merge is simply make this file as you wish it is in the merged state." }, { "start": 1444, "end": 1452, "text": " First of all, you can always start by removing, actually, good practice maybe to remove these equal lines." }, { "start": 1452, "end": 1459, "text": " Then within these delimiters change how you want the file to look." }, { "start": 1459, "end": 1469, "text": " In essence, I simply want to have these O's here at the end." }, { "start": 1469, "end": 1474, "text": " I just want too many. Like this." }, { "start": 1474, "end": 1478, "text": " Or like this. I like that. I'm going to call that the merged state." }, { "start": 1478, "end": 1488, "text": " Then I delete these lines. This is the file that I would like the merged commit to have." 
}, { "start": 1488, "end": 1491, "text": " What I can do is save this file." }, { "start": 1491, "end": 1495, "text": " Again, I say git status. It still tells me it's unmerged, but it tells me what to do." }, { "start": 1495, "end": 1499, "text": " It says use git add to mark resolution." }, { "start": 1499, "end": 1508, "text": " I've resolved it. git add test txt. git status." }, { "start": 1508, "end": 1516, "text": " It says all conflicts fixed, but you are still merging. Use git commit to conclude merge." }, { "start": 1516, "end": 1519, "text": " git commit. Bam." }, { "start": 1519, "end": 1527, "text": " I still have to enter a commit message, which is already predefined here." }, { "start": 1527, "end": 1533, "text": " I'm saying I merged the branch F1 and there were conflicts, but that's fine." }, { "start": 1533, "end": 1539, "text": " I like this message, so I'm simply going to save the file right here." }, { "start": 1539, "end": 1545, "text": " When I look into git log, it now gives me the full story." }, { "start": 1545, "end": 1550, "text": " First I have this added high commit, then I have the more O commit and the more E commit," }, { "start": 1550, "end": 1552, "text": " which were in parallel to each other." }, { "start": 1552, "end": 1562, "text": " Then I merged both branches into one. We're now right here." }, { "start": 1562, "end": 1570, "text": " What I can do now is delete the F1 flag, because I don't need it anymore." }, { "start": 1570, "end": 1576, "text": " I do that by git branch minus d F1." }, { "start": 1576, "end": 1582, "text": " It says delete the branch F1. No commits are actually deleted when you delete the branch." }, { "start": 1582, "end": 1585, "text": " It's simply the little flag that is deleted." }, { "start": 1585, "end": 1590, "text": " The only danger is when you delete the little flag and the name," }, { "start": 1590, "end": 1594, "text": " and you're unable to reach the commit from any other end." }, { "start": 1594, "end": 1600, "text": " Here of course we have this master, and by following this edge here, we can reach this commit just fine." }, { "start": 1600, "end": 1605, "text": " git won't delete it or garbage collect it." }, { "start": 1605, "end": 1610, "text": " But git will also tell you when you're about to do something dangerous." }, { "start": 1610, "end": 1613, "text": " So don't worry." }, { "start": 1613, "end": 1621, "text": " With this I think you should already have many tools or many insights into git." }, { "start": 1621, "end": 1626, "text": " In another video we're going to look at how to collaborate online with people," }, { "start": 1626, "end": 1628, "text": " which isn't much harder than this." }, { "start": 1628, "end": 1638, "text": " It's simply two more steps to push and pull your work from a server together with other people." }, { "start": 1638, "end": 1660, "text": " Alright, so that was it. Take care." } ]
AU30czb4iQA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Imputer: Sequence Modelling via Imputation and Dynamic Programming
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "seq2seq", "autoregressive", "independence", "decoding" ]
The imputer is a sequence-to-sequence model that strikes a balance between fully autoregressive models with long inference times and fully non-autoregressive models with fast inference. The imputer achieves constant decoding time independent of sequence length by exploiting dynamic programming. https://arxiv.org/abs/2002.08926 Abstract: This paper presents the Imputer, a neural sequence model that generates output sequences iteratively via imputations. The Imputer is an iterative generative model, requiring only a constant number of generation steps independent of the number of input or output tokens. The Imputer can be trained to approximately marginalize over all possible alignments between the input and output sequences, and all possible generation orders. We present a tractable dynamic programming training algorithm, which yields a lower bound on the log marginal likelihood. When applied to end-to-end speech recognition, the Imputer outperforms prior non-autoregressive models and achieves competitive results to autoregressive models. On LibriSpeech test-other, the Imputer achieves 11.1 WER, outperforming CTC at 13.0 WER and seq2seq at 12.5 WER. Authors: William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, Navdeep Jaitly Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at the Imputer: Sequence Modelling via Imputation and Dynamic Programming, by William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi and Navdeep Jaitly. This is a model to perform sequence-to-sequence tasks. Now, sequence-to-sequence tasks are very common in NLP, but this model handles a particular subset of them. A classic sequence-to-sequence task is machine translation. Take for example the sentence: I like you. If you want to translate it to German, that would become: Ich mag dich. You see that the input is a sequence and the output is a sequence. Now, the Imputer deals with a very special kind of sequence-to-sequence task, namely tasks where there is a monotonic alignment. You see that this is given here: the first word corresponds to the first word, the second to the second, and the third to the third. This is not always the case in machine translation; different languages have different sentence structures. For example, in French this would be: je t'aime. You can see that the first word is still the first word; however, the third word, the you, has become the second, and the verb goes to the end. So the Imputer would not be able to deal with this task very well. A task where the Imputer would be useful is something like speech recognition. If someone were to speak the words I like you and you measured the waveform of that, it would look something like: I... like... you. So if you have this waveform, let's actually chunk it into samples. Say this is a sample right here, and here is a break, and here, and here. So we have five samples at the bottom. You can see pretty easily that this sample here is the I, then this is silence, this is the like, this is silence, and this is the you. So the Imputer deals with sequence-to-sequence tasks where, first of all, there is a monotonic alignment, and second of all, and this is an engineering constraint, the length of the input sequence X is larger than or equal to the length of the output sequence Y. You'll see why: mainly because we rely on being able to compute this alignment, the alignment of input samples to output samples. The monotonic alignment is given fairly naturally in speech recognition, because if something comes later down here, it also comes later in the sequence up here; that is a monotonic alignment. And usually we have more wave samples than we have words in the output sequence. So that would be a task for the Imputer. Now let's think about how we would do something like this. Let's put X at the top here; we said X has five tokens in it. And let's put Y at the bottom; Y has three tokens. So this here is I like you, this is the waveform, and we want the I like you at the bottom. What could we do? First of all, what the Imputer does is represent I like you not in this three-token form right here, but in a form that has the same length as X, divided into the same number of chunks. So this here is an example of how it would represent Y. It says: I have as many chunks at the bottom as at the top; I know this chunk here corresponds to this token, this one to this, and this one to this, and then there are these intermediate ones. You can see these correspond to those, and these here are silences.
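As a concrete, made-up rendering of this representation, with an underscore standing for the silence token and five input chunks, the target becomes an aligned sequence of the same length:

  x:              x1    x2    x3    x4    x5
  one alignment:  i     _     like  _     you
  another one:    i     like  _     _     you

Both alignments collapse to the actual output, i like you, when the silence tokens are dropped.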
Now, it doesn't always have to be the case that there is one token, then a silence, then a token, then a silence. The task of the Imputer is actually also to see whether this alignment is more likely than, for example: I, like, then silence, silence, and then you. So the Imputer has to distinguish these alignments from each other, and then of course also produce the actual tokens. Now, think about how you would go about taking X and producing something like this. Let's call this Y tilde; this is the actual Y. You can see that going from Y tilde to Y is a deterministic function, but going the other way is not deterministic, and that becomes interesting when you have to compute a loss for this. But how would we go about producing it? What we could do is just take a big transformer, BERT; that's the arrow drawn here. In BERT, if you construct it correctly, you have as many input tokens as output tokens. So we could simply say: for each of the outputs we get, we put a softmax classifier over our vocabulary, with the silence being one special token, and we simply classify each of the outputs into this vocabulary. This would be one step, right? So we could do one-step BERT, bang, input to output. There are more sophisticated approaches to doing this in one step, like CTC, but ultimately we could just do one step. But then you'd have the same problem as, for example, XLNet; if you haven't seen my XLNet video, I recommend it, because they address exactly this problem. If you do this, then at the moment where you decode the word like, you have no idea that there is an I over here. All you know is the vector here that you sample the I from, but this could be a distribution where I is pretty high and some other word is also pretty high. So the process over here that samples the word like has no idea which of the two you would actually sample, so it cannot condition on it. The assumption here is that the sampling of the word like is independent of the sampling of the word I, and of course that's not the case: you need to know what word is there if you want to sample the word like, otherwise you can end up with some very confusing sentences. So this one-step process is pretty quick, but it has the drawback of these conditional independence assumptions. Again, I invite you to watch the XLNet video if you want to dive more into this problem. The second thing we could do is decode one token after another. We could say: all right, I'll make my five slots here and just leave them empty for now, and I'm just going to decode the one slot that I am most sure about. Let's say the speech at the back here is very clear, and you say: aha, I know this is a you. So I fill in you right here and make this alignment: this goes here, this is the you. I still don't know what the others are, but now I do a second step, and in the second step I get as input not only the original input but also the fact that I already decoded the word you to this position. So now I ask: given that I already decoded the word you, which slot am I now most sure about? And I might be most sure that this now is an I, because there's a you at the end and this kind of sounds like an I. So an I goes here, and we move on to the next step.
In the next step, the model already has the information that it decoded I and you, and now it might say: okay, given these, the thing right here is probably a silence; that makes the most sense, because I kind of hear some noise, but there's already a word right after, so I'm now pretty sure this here is a silence token. You go on like this until the end, until everything is decoded. This would be n-step decoding: n steps of decoding, which no longer has the problem of these conditional independence assumptions, but of course now you have the problem that you need n steps. The Imputer does something in the middle of this. As you can see here, it divides the sequence into blocks, blocks of size B, with this being the empty symbol. What it does is make a step where, in each block, conditioned on the previous alignment and conditioned on the input, it decodes whatever it feels most certain about in that block, and it does this for as long as there are still empty tokens. You can see here the first step, and then in the second step it will decode this, this, this, and this. So the Imputer can trade off between the conditional independence assumption of the one-step BERT and the full conditioning of the n-step decoding. It computes this alignment and the actual tokens at the same time in this process. And how many steps does this take? It now takes B steps, which is pretty cool, because B is the block size, so this is independent of the sequence length: the Imputer is able to compute the alignment and the output in a constant number of steps. So by modulating this B, you're able to trade off speed versus, let's say, performance in the Imputer, and this is pretty cool. I think the bigger point to understand here is how to actually use the assumption that there is a monotonic alignment. Because if there is a monotonic alignment, and if this length constraint is given, then you can do this representation with the silence tokens, and that allows you to represent the output in a form that is of the same length as the input and do this kind of token-by-token decoding, while still allowing variable-length outputs, as long as they are shorter than the input. So that's pretty cool, and the next pretty cool thing is the fact that they do this in blocks.
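As a made-up illustration with eight positions, a block size of B = 4 (so two blocks, separated by the bar), _ for still-empty slots, and ~ for a decoded silence token, a run could look like this:

  step 0:  _ _ _  _    | _ _ _   _
  step 1:  _ i _  _    | _ _ you _
  step 2:  _ i _  like | _ ~ you _
  step 3:  ~ i _  like | ~ ~ you _
  step 4:  ~ i ~  like | ~ ~ you ~

This sketch fills exactly one slot per block per step; in general the model can fill more than one per step when it is confident, so B steps is an upper bound, regardless of how long the whole sequence is.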
Now, of course, my issue with this is not how the system works but how it is trained. If you think about how you train this, the loss function has to invert this process, and the way they do it is by marginalizing: you can see down here that you want to marginalize over all the possible alignments. So this is how you train: you sample an alignment from an alignment policy, and for this alignment policy I think they have some heuristics for how to construct the alignments during training, or you have an expert actually giving you the alignment; for speech recognition I think they use something like CTC to provide the alignments. Then you have a masking policy, and I think they just do random masking. They use that for training, and then they marginalize over the alignments. This, I'm pretty sure, is not the same distribution as the decoding procedure I just described. In the decoding procedure, if you do this in B steps, each step is dependent on the step before, which means the distribution of what the Imputer sees is actually dependent on the Imputer itself, while these people are proposing a training framework where you have a heuristic to come up with the training sample alignments and a random masking policy that decides where the empty tokens are. So this is not the same distribution. And then it also marginalizes over all compatible alignments, which, I'm pretty sure, is also not the same distribution; this is not the correct loss distribution. They have some math to show that in expectation it's the same, but yeah, this concerns their roll-in policy, roll-in expert, and the marginalization. I don't want to go too deep into this; I've given it some thought, but it would make this video too long and boring if I actually went into the details here. Suffice it to say: I invite you to look at the loss computation and ask yourself whether you think that is the correct way to produce the training dataset, given how you do the inference later. The architecture of the Imputer is actually pretty similar to BERT. First of all, since you're dealing with audio at the input, you're going to have some convolutional network here, and you also need to take as input the prior alignment that you've already produced, so this gets embedded too. But then you simply run an attention network, a transformer, which is pretty close to the BERT example we've made. And I should say they stress that their loss is actually a lower bound on the true loss, so I shouldn't be too hard on them when I say it's not the correct distribution; they do minimize some loss that actually makes sense. But yeah, I mainly wanted to go over how the Imputer works and how it is structured. I think it's pretty cool, and it lends itself very well to these tasks. Most of all, I like the fact that it exploits these assumptions: not all tasks fit them, but if a task does fit the assumptions, then I think it should be fairly obvious that one should exploit that in order to perform better. Alright, that was it for me. Thanks!
[ { "start": 0, "end": 6.12, "text": " Hi there! Today we're looking at the imputer sequence modeling via imputation" }, { "start": 6.12, "end": 12.72, "text": " and dynamic programming by William Chan, Chitwan Sariah, Jeffrey Hinton, Mohamed" }, { "start": 12.72, "end": 18.96, "text": " Nourouzi and Navdeep Jaitley. So this is a model to perform sequence-to-sequence" }, { "start": 18.96, "end": 28.2, "text": " tasks. Now sequence-to-sequence tasks are very very common in NLP, but in this" }, { "start": 28.2, "end": 33.44, "text": " case it's kind of a subset of sequence-to-sequence tasks. So a classic" }, { "start": 33.44, "end": 38.04, "text": " sequence-to-sequence task is a machine translation. Here for example the" }, { "start": 38.04, "end": 45.32, "text": " sentence I like you. If you want to translate it to German, sorry you, if you" }, { "start": 45.32, "end": 55.92, "text": " want to translate it to German that would become Ich mag dich. And you see that the" }, { "start": 55.92, "end": 62.56, "text": " input is a sequence right and the output is a sequence. Now the imputer deals with" }, { "start": 62.56, "end": 66.76, "text": " very special kind of sequence-to-sequence tasks. Namely it deals with" }, { "start": 66.76, "end": 71.88, "text": " sequence-to-sequence tasks where there is a monotonic alignment. So you see" }, { "start": 71.88, "end": 75.88, "text": " that this is given here. The first word is corresponding to the first word here," }, { "start": 75.88, "end": 82.64, "text": " the second to the second and the third to the third. This is not always the case" }, { "start": 82.64, "end": 86.24, "text": " in machine translation. You know different languages have different sentence" }, { "start": 86.24, "end": 93.86, "text": " structures. So for example in French this would be je d'aime. And you can see that" }, { "start": 93.86, "end": 99.6, "text": " the first word is still the first word, however the third word has become the" }, { "start": 99.6, "end": 104.96000000000001, "text": " second, the you and the verb goes to the end. So the imputer would not be able to" }, { "start": 104.96000000000001, "end": 110.92, "text": " deal with this task very well. A different task where the imputer would be" }, { "start": 110.92, "end": 117.2, "text": " useful for would be something like speech recognition. So if someone were to speak" }, { "start": 117.2, "end": 121.4, "text": " the words I like you and you would measure the waveform of that it would" }, { "start": 121.4, "end": 129.64, "text": " look something like I like you. So if you have this waveform let's actually" }, { "start": 129.64, "end": 136.12, "text": " make some chunk samples out of this. Let's say this is a sample right here and" }, { "start": 136.12, "end": 143.6, "text": " here is a break here and here. So we have five samples on the bottom." }, { "start": 143.6, "end": 150.72, "text": " You can see pretty easily that this sample here, this is the I and then this" }, { "start": 150.72, "end": 157.28, "text": " is silence, this is the like, this is silence and this is the you. 
So the" }, { "start": 157.28, "end": 161.04000000000002, "text": " imputer deals with these kind of sequence to sequence tasks where first" }, { "start": 161.04, "end": 167.64, "text": " of all there is a monotonic alignment, sorry monotonic alignment and second of" }, { "start": 167.64, "end": 173.68, "text": " all this is an engineering constraint where the length of the input sequence X" }, { "start": 173.68, "end": 179.07999999999998, "text": " is larger or equal to the length of the input sequence Y and you'll see" }, { "start": 179.07999999999998, "end": 185.95999999999998, "text": " why mainly because we rely on being able to compute this alignment here. The" }, { "start": 185.96, "end": 193.92000000000002, "text": " alignment of input samples to output samples. You can see that the" }, { "start": 193.92000000000002, "end": 197.96, "text": " monotonic alignment is actually given fairly well in speech recognition" }, { "start": 197.96, "end": 202.88, "text": " because if something is later down here it is also later in the sequence up here." }, { "start": 202.88, "end": 210.68, "text": " That is a monotonic alignment and also usually we have more wave samples" }, { "start": 210.68, "end": 217.84, "text": " then we have words in the output sequence. So that would be a task for the" }, { "start": 217.84, "end": 225.36, "text": " imputer. Now let's think about how we would do something like this. So let's" }, { "start": 225.36, "end": 233.68, "text": " put X at the top here and we said X has five tokens in it and let's put Y at the" }, { "start": 233.68, "end": 246.28, "text": " bottom. Y actually has three tokens. So this here is I like you." }, { "start": 246.28, "end": 252.44, "text": " This is the waveform and we want the I like you at the bottom. So what could we" }, { "start": 252.44, "end": 259.36, "text": " do? First of all what the imputer does is it represents I like you not as this" }, { "start": 259.36, "end": 267.88, "text": " form right here but as a form where you have the same length as X divided into" }, { "start": 267.88, "end": 276.16, "text": " the same amount of things and then it does the following. So for this this is" }, { "start": 276.16, "end": 278.68, "text": " an example." }, { "start": 278.68, "end": 291, "text": " This is how it would represent Y. It would say I have as many chunks on" }, { "start": 291, "end": 296.24, "text": " the top as on the bottom. I know this chunk here corresponds to this token" }, { "start": 296.24, "end": 302.48, "text": " then this here to this and this here to this and then these are these" }, { "start": 302.48, "end": 308.6, "text": " intermediate ones. So you can see these correspond to those. These are" }, { "start": 308.6, "end": 314, "text": " silents right here. Now it doesn't always need to be that there is always one" }, { "start": 314, "end": 318.32000000000005, "text": " token and a silence then a token and a silence. The task of the imputer is" }, { "start": 318.32000000000005, "end": 329.20000000000005, "text": " actually to see whether this is more likely than for example I like and then" }, { "start": 329.20000000000005, "end": 334.52000000000004, "text": " silence silence and then you. So the task of the imputer is to distinguish" }, { "start": 334.52, "end": 339.24, "text": " these two from each other and then of course also produce the actual tokens." 
}, { "start": 339.24, "end": 346.32, "text": " Now if you think about how would you go about taking X and producing something" }, { "start": 346.32, "end": 351.84, "text": " like Y. So this is Y let's call it tilde. This is the actual Y right but you can" }, { "start": 351.84, "end": 356.2, "text": " see that this here is a deterministic function in one way. It's actually not a" }, { "start": 356.2, "end": 360.79999999999995, "text": " deterministic function in the other way and that becomes interesting when you" }, { "start": 360.8, "end": 365, "text": " have to compute a loss for this. But how would we go about doing this? What" }, { "start": 365, "end": 370.08, "text": " we could do is we could just take a big transformer BERT. That's actually" }, { "start": 370.08, "end": 379.12, "text": " drawn arrow. We could just take BERT and we could simply so in BERT you have" }, { "start": 379.12, "end": 385.64, "text": " in if you if you construct it correctly you have as many input tokens as output" }, { "start": 385.64, "end": 390.36, "text": " tokens. So what we could simply say is for each of the outputs that we get we" }, { "start": 390.36, "end": 397.24, "text": " simply make this as a softmax classifier over our vocabulary with the silence" }, { "start": 397.24, "end": 404.08000000000004, "text": " being one special token and we simply classify each of the outputs into this" }, { "start": 404.08000000000004, "end": 412.16, "text": " vocabulary. This would be one step right? So we could do one step BERT bang bang" }, { "start": 412.16, "end": 418.16, "text": " input to output and there is more there are more sophisticated approaches to" }, { "start": 418.16, "end": 423.20000000000005, "text": " doing this in one step like CTC but ultimately we could just do one step but" }, { "start": 423.20000000000005, "end": 428.16, "text": " then you'd have the same problem like for example XL net if you haven't seen" }, { "start": 428.16, "end": 434.20000000000005, "text": " my XL net video I recommend it that they exactly take the problem if you do this" }, { "start": 434.20000000000005, "end": 441.04, "text": " right then at the moment where you decode the word like you have no idea" }, { "start": 441.04, "end": 446.32000000000005, "text": " that there is an I over here all you know is the the vector you have here" }, { "start": 446.32, "end": 453.52, "text": " that you sample the I from right but this could be a distribution where I is" }, { "start": 453.52, "end": 458.2, "text": " pretty high but some other word is also pretty high so this process over here" }, { "start": 458.2, "end": 464.56, "text": " that samples the word like has no idea which of the two here you actually would" }, { "start": 464.56, "end": 470.08, "text": " sample so it cannot condition on it so it is the the assumption here is that" }, { "start": 470.08, "end": 473.68, "text": " the sampling of the word like is independent of the sampling of the word" }, { "start": 473.68, "end": 480, "text": " I and of course that's not the case the you need to know what word is there if" }, { "start": 480, "end": 486.12, "text": " you want to sample the word like otherwise you can end up with some very" }, { "start": 486.12, "end": 492.6, "text": " confusing sentences so this one step process is pretty quick but it has the" }, { "start": 492.6, "end": 495.8, "text": " drawback that there are these conditional independence assumptions and" }, { "start": 495.8, "end": 500.88, "text": " again I invite you to watch the XL net video if you 
want to dive more into this" }, { "start": 500.88, "end": 507.04, "text": " problem the second thing we could do is we could just decode one after another" }, { "start": 507.04, "end": 516.64, "text": " right so we could say all right I'll make sorry I'll make my five slots here" }, { "start": 516.64, "end": 521.52, "text": " and I just leave them empty for now and I'm just going to decode the one that I" }, { "start": 521.52, "end": 527.72, "text": " am most sure about and let's say the the speech at the back here is very clear" }, { "start": 527.72, "end": 533.48, "text": " and you say other I'm I know this is a you right so I'm gonna fill in you right" }, { "start": 533.48, "end": 540.36, "text": " here right and make this alignment that this goes here this is the you right I" }, { "start": 540.36, "end": 546.6800000000001, "text": " still don't know what the others are but now what I did they do a second step and" }, { "start": 546.6800000000001, "end": 556.6, "text": " in the second step I get as an input not only the original input like this this" }, { "start": 556.6, "end": 562.8000000000001, "text": " thing here but I also get the fact that I already decoded the word you to here" }, { "start": 562.8000000000001, "end": 568.48, "text": " right in this step so now I say given that I already decoded the word you which" }, { "start": 568.48, "end": 575.36, "text": " one am I now most sure about and I might be most sure about to say I'm most sure" }, { "start": 575.36, "end": 578.52, "text": " about this now being an eye because there's a you at the end and this kind" }, { "start": 578.52, "end": 584.2, "text": " of sounds like an eye so an eye here right it goes to the next step and then" }, { "start": 584.2, "end": 589.12, "text": " the next step it already has the information that it decoded I and you" }, { "start": 589.12, "end": 597.4000000000001, "text": " and now it's a might say ah okay given these that's so probably this thing so I" }, { "start": 597.4000000000001, "end": 604.2800000000001, "text": " here probably the thing here the thing right here is silence right makes the" }, { "start": 604.2800000000001, "end": 608.2, "text": " most sense I kind of hear some noise but there's already a word after so now I'm" }, { "start": 608.2, "end": 613.76, "text": " pretty sure that this here is a silent token right and you go this until the" }, { "start": 613.76, "end": 621.96, "text": " end until you're actually at this so this here would be n step decoding this" }, { "start": 621.96, "end": 628.24, "text": " here would be n steps of decoding which now no longer has the problem of these" }, { "start": 628.24, "end": 632.72, "text": " conditional independence assumptions but of course now you have the problem that" }, { "start": 632.72, "end": 641.16, "text": " you need n steps right the imputer does something in the middle of this the" }, { "start": 641.16, "end": 648.12, "text": " imputer will as you can see here it will form this into blocks right blocks of" }, { "start": 648.12, "end": 655.3199999999999, "text": " size B and this is the empty symbol here right and what it will do is it will" }, { "start": 655.3199999999999, "end": 661.36, "text": " make a step where in each block for each block it will conditioned on the" }, { "start": 661.36, "end": 665.36, "text": " previous alignment and conditioned on the input it will decode whatever it" }, { "start": 665.36, "end": 673.24, "text": " feels it is most certain about in each block and then it does this for as long" }, { "start": 
673.24, "end": 678.64, "text": " as there are still empty tokens right you can see here the first block and then" }, { "start": 678.64, "end": 686.4, "text": " in the second step it will decode this this this and this so the imputer can" }, { "start": 686.4, "end": 692, "text": " trade off between the conditional independence assumption of the one step" }, { "start": 692, "end": 697.48, "text": " BERT and the full conditional independence assumption of the n step" }, { "start": 697.48, "end": 705.44, "text": " decoding right so it will compute this alignment and the actual tokens at the" }, { "start": 705.44, "end": 712.4, "text": " same time in this process so how many steps does this take this takes now B" }, { "start": 712.4, "end": 721.36, "text": " steps and this is pretty cool because B is the block size so this is independent" }, { "start": 721.36, "end": 727.4, "text": " of the sequence length so it is able to compute this alignment and output in a" }, { "start": 727.4, "end": 734.2, "text": " constant number of steps right so you're by modulating this B you're now able to" }, { "start": 734.2, "end": 741.84, "text": " trade off speed versus let's say performance in the imputer and this is" }, { "start": 741.84, "end": 747.12, "text": " pretty cool so I think actually I think the the bigger point to understand here" }, { "start": 747.12, "end": 753.28, "text": " is how to actually use the assumption that there is a monotonic alignment" }, { "start": 753.28, "end": 757.28, "text": " right because if there is a monotonic alignment and if this thing is given" }, { "start": 757.28, "end": 765.6, "text": " here then you can do this you can do this representation right here with the" }, { "start": 765.6, "end": 773.72, "text": " silence tokens and that allows you to basically represent the output in a" }, { "start": 773.72, "end": 778.48, "text": " form that is of the same length as the input and do this kind of token by token" }, { "start": 778.48, "end": 784.84, "text": " decoding while still allowing you to have variable lengths output as long as" }, { "start": 784.84, "end": 792.08, "text": " they're smaller in length than the input so that's pretty cool and then the the" }, { "start": 792.08, "end": 799.88, "text": " next pretty cool thing is the fact that they do this in blocks now of course my" }, { "start": 799.88, "end": 805.92, "text": " issue with this so this is how the system works my issue with this is how" }, { "start": 805.92, "end": 812.76, "text": " the system is trained so if you think about how you train this you must train" }, { "start": 812.76, "end": 820.56, "text": " this first of all the loss function right has to revert this and how they" }, { "start": 820.56, "end": 829.48, "text": " do it as they marginalize you see this down here you want to marginalize over" }, { "start": 829.48, "end": 838.96, "text": " all the possible alignments right here so this is how you train you sample an" }, { "start": 838.96, "end": 848.16, "text": " alignment from the alignment policy and this alignment policy is I think they" }, { "start": 848.16, "end": 853.2, "text": " have some heuristics of how they construct the alignments during during" }, { "start": 853.2, "end": 858.32, "text": " training or you have experts actually giving you this alignment I think they" }, { "start": 858.32, "end": 864.84, "text": " use in the speech recognition they use something like CTC to give you the" }, { "start": 864.84, "end": 872.1600000000001, "text": " alignments from the alignment policy 
and then you have a masking policy and I" }, { "start": 872.1600000000001, "end": 877.5200000000001, "text": " think they also they just do random masking and then they use that for" }, { "start": 877.5200000000001, "end": 884.7600000000001, "text": " training and then they marginalize over the alignments this I'm pretty sure is" }, { "start": 884.76, "end": 892.72, "text": " not the same distribution as the decoding procedure I just described" }, { "start": 892.72, "end": 901.16, "text": " right so the decoding procedure if you do this in B steps right that means each" }, { "start": 901.16, "end": 908.04, "text": " of the step is dependent on the step before so that means the distribution of" }, { "start": 908.04, "end": 914.4, "text": " whatever you whatever the imputer sees is actually dependent on itself while" }, { "start": 914.4, "end": 921.68, "text": " these people are proposing a training framework where you have here you have a" }, { "start": 921.68, "end": 928.16, "text": " heuristic in order to come up with the training sample alignments and here you" }, { "start": 928.16, "end": 936, "text": " have a random I think a random masking policy that comes up with the with where" }, { "start": 936, "end": 941.3199999999999, "text": " the empty tokens are so this is not the same distribution and then also it" }, { "start": 941.32, "end": 947.44, "text": " marginalizes over all compatible alignments which I'm I'm pretty sure" }, { "start": 947.44, "end": 952.2800000000001, "text": " this is not the same distribution this is not the correct loss distribution" }, { "start": 952.2800000000001, "end": 959.5200000000001, "text": " they have some math to show that in expectation it's the same but yeah this" }, { "start": 959.5200000000001, "end": 967.7600000000001, "text": " is this is over there over their role in policy and role and expert and and" }, { "start": 967.76, "end": 974.2, "text": " marginalization this I don't want to go too deep into this I've given it some" }, { "start": 974.2, "end": 979.28, "text": " thought but it will make this video too long and boring if I actually go into" }, { "start": 979.28, "end": 984.96, "text": " the details here suffice to say I invite you to look at the loss computation and" }, { "start": 984.96, "end": 992.68, "text": " ask yourself if you think that is the correct way to produce the data set for" }, { "start": 992.68, "end": 999.7199999999999, "text": " training given how you do the inference later the architecture of the imputer is" }, { "start": 999.7199999999999, "end": 1006.52, "text": " actually pretty similar to BERT in that first of all well okay you're dealing" }, { "start": 1006.52, "end": 1011.3599999999999, "text": " with audio in the input so you're going to have some convolutional network here" }, { "start": 1011.3599999999999, "end": 1015.8, "text": " and you also need to take as an input the prior alignment that you've already" }, { "start": 1015.8, "end": 1021.9599999999999, "text": " produced right so this you embed and but then you simply do an attention" }, { "start": 1021.96, "end": 1029.1200000000001, "text": " network a transformer which will which is pretty close to to the bird example" }, { "start": 1029.1200000000001, "end": 1039.44, "text": " we've made and so I mean they stress that that their that their loss is" }, { "start": 1039.44, "end": 1044.44, "text": " actually a lower bound on the loss so I shouldn't be I shouldn't be too hard when" }, { "start": 1044.44, "end": 1050.64, "text": " I say it's not the 
correct distribution they do minimize something some loss" }, { "start": 1050.64, "end": 1058.4, "text": " that actually makes sense but yeah I mainly wanted to go over the over the" }, { "start": 1058.4, "end": 1064.3200000000002, "text": " how the imputer works and how the it is structured and I think it's pretty cool" }, { "start": 1064.3200000000002, "end": 1072.48, "text": " and it lends itself very well to these tasks and most of all I like the fact" }, { "start": 1072.48, "end": 1079.96, "text": " that it exploits the these assumptions here so not all tasks fit these" }, { "start": 1079.96, "end": 1085.8400000000001, "text": " assumptions but if a task does fit the assumption then I think it should be you" }, { "start": 1085.8400000000001, "end": 1090.32, "text": " know it it should be fairly obvious that one should exploit that in order to" }, { "start": 1090.32, "end": 1110, "text": " perform better all right that was it for me thanks" } ]
WVPE62Gk3EM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Big Bird: Transformers for Longer Sequences (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "google research", "bigbird", "big bird", "bert", "attention", "attention is all you need", "longformer", "random attention", "quadratic attention", "attention mechanism", "qa", "natural questions", "hotpot qa", "genomics", "nlp", "natural language processing", "transformer", "transformers", "fully connected", "sparse attention", "graph", "star graph", "turing complete", "universal approximation", "window attention", "convolution" ]
#ai #nlp #attention The quadratic resource requirements of the attention mechanism are the main roadblock in scaling up transformers to long sequences. This paper replaces the full quadratic attention mechanism by a combination of random attention, window attention, and global attention. Not only does this allow the processing of longer sequences, translating to state-of-the-art experimental results, but also the paper shows that BigBird comes with theoretical guarantees of universal approximation and turing completeness. OUTLINE: 0:00 - Intro & Overview 1:50 - Quadratic Memory in Full Attention 4:55 - Architecture Overview 6:35 - Random Attention 10:10 - Window Attention 13:45 - Global Attention 15:40 - Architecture Summary 17:10 - Theoretical Result 22:00 - Experimental Parameters 25:35 - Structured Block Computations 29:30 - Recap 31:50 - Experimental Results 34:05 - Conclusion Paper: https://arxiv.org/abs/2007.14062 My Video on Attention: https://youtu.be/iDulhoQ2pro My Video on BERT: https://youtu.be/-9evrZnBorM My Video on Longformer: https://youtu.be/_8KNb5iqblE ... and its memory requirements: https://youtu.be/gJR28onlqzs Abstract: Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data. Authors: Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we'll look at Big Bird: Transformers for Longer Sequences, by Manzil Zaheer, Guru Guruganesh, et al. of Google Research. On a high level, this paper proposes to replace the quadratic attention mechanism in transformers with a mix of random attention, windowed attention, and selective global attention, thereby achieving linear instead of quadratic memory requirements. As a result, they can process longer sequences than traditional transformers like BERT, achieve better results on some NLP tasks, and also evaluate on genomics tasks. We'll go through this paper a bit and look at the proof, because they give a theoretical guarantee that their random attention mechanism can still be Turing complete and can still achieve the same things as a full attention mechanism — but we'll also look at the drawbacks. I have sort of mixed feelings about this paper, and I'll voice my concerns as we go through. But first, let's look at the paper and the architecture. I think this is actually a pretty cool paper for the empirical progression of the field towards processing longer sequences with transformers. As always, if you like content like this, feel free to share it around, leave a like, and tell me in the comments what you think about the paper and about what I think — just go nuts. All right, so the basic premise is that transformers have been pretty impactful, especially in NLP. They say: transformer-based models such as BERT have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency, mainly in terms of memory, on the sequence length, due to their full attention mechanism. Really briefly — and I've done numerous videos on the attention mechanism, BERT, Attention Is All You Need, and so on, so if you want a detailed explanation of what that is, go look up the corresponding videos — what you have in NLP is a sequence of tokens as input, and you want to transform it, layer after layer, into a higher-order representation of that same sequence. For that, you build these layers out of nodes, and you usually have as many nodes as you have tokens in the sequence. Each token is represented by a vector at the beginning, and each layer transforms the sequence into a higher-level representation: you want the vector of this token up here to be a better representation than the vector down here. You do that by incorporating information from all the other tokens into that particular vector. This is called an attention mechanism, and we don't have to go into how it works right here, but you can see pretty clearly that if you want to do this for every token, you need information routed from every token to every token — and that's just for one output token; you then have to do it for each of the others as well. So if n is your sequence length, you get an n-squared amount of computation and memory requirements. That is a problem, and it usually means that the sequence length in BERT is limited to something like 512 tokens, which is okay for some applications.
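To make the quadratic growth concrete, here is a tiny NumPy check of where the n-squared comes from — shapes only, nothing Big Bird-specific:

```python
import numpy as np

n, d = 512, 64                 # sequence length, head dimension
q = np.random.randn(n, d)      # one query vector per token
k = np.random.randn(n, d)      # one key vector per token
scores = q @ k.T               # full attention: one score per token pair
print(scores.shape)            # (512, 512) -> n**2 entries
# Doubling n to 1024 quadruples this matrix: 1024**2 == 4 * 512**2
```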
But if you want to summarize entire articles — entire books, even — or do question answering with lots of context, it's not really enough. So people have been thinking about how to scale this input, and of course the main culprit is the quadratic attention mechanism: if you double the 512, you need four times the amount of compute and memory. So how does this paper go about reducing that quadratic dependency? The goal, of course, is to get this to O(n), because then, as we double the input length, we simply need to double the compute requirements, and that would be fantastic. That's what this paper does, and it does so without sacrificing the properties of the transformer. So here's the architecture that Big Bird proposes. By the way, Big Bird — another character from Sesame Street; I guess we'll continue the naming here after ELMo and BERT. I'm waiting for the model that's the Count; that's going to be a fun model. Big Bird basically has three different types of attention, and these here are adjacency matrices of the attention mechanism: here is the input layer, and the output layer is right here. That means node i right here would be connected to this particular node and also to this particular node. So if we have node i, we are now trying not to connect it to all of the nodes; we just select some at random and connect it to those. This is what they call random attention. And you can pretty clearly see that if you connect each of the n nodes to, say, r = 2 random nodes, you no longer have n squared: you have O(r times n), which, if r is a constant, is an O(n) attention mechanism. So the main idea behind the random attention mechanism is that for each query you select random tokens to attend to, and that number of random tokens is fixed — it does not depend on the sequence length. The paper is a little bit unclear about whether those random connections are the same for every sequence or are switched up, and whether they are the same for every layer or are switched up. But they formulate all of this as a random graph: they cast the attention mechanism in the form of a graph. If we view the nodes as a graph, a full attention mechanism would mean a fully connected graph, where each node is connected to every other node. And then they say: if we just have random connections between these nodes, there are theorems from graph theory saying that such a graph mixes pretty quickly — I can get from each node to each other node by a random walk in logarithmic time. A step of that random walk, going from here to here, corresponds to one layer of the transformer, and if you want to go further, you have to do that in the next layer. This formulation as a random graph leads me to believe that, layer after layer, the random attention pattern is going to be the same; but the formulation of the paper also leads me to believe that the random attention differs from sequence to sequence.
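Whichever way the paper resamples the pattern, the operation itself is simple to sketch. Here is a minimal NumPy version of the random part of the adjacency matrix — a toy illustration, not the paper's blocked implementation:

```python
import numpy as np

def random_attention_mask(n, r, seed=0):
    """Each of the n query tokens attends to r keys chosen at random.

    Memory is O(r * n) instead of O(n**2); r is a constant that does
    not depend on the sequence length.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, rng.choice(n, size=r, replace=False)] = True
    return mask

m = random_attention_mask(n=16, r=2)
print(m.sum())  # 32 == r * n connections, not 256 == n**2
```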
My reading is that they get a new sequence, decide on this pattern once, and then use the same pattern layer after layer. Now, in traditional attention, information can flow from each node to each other node in one single step, because each node is connected to each other node — you see this in the graph right here. However, if we only select a subset, then, as I said, if I want to go from here to here, I need to do it in two steps, and therefore I need two layers. That is going to be the culprit of this method, and while it is mentioned in the paper, it is — at least in my assessment — kind of swept under the rug a little bit. I mean, they do have a theorem that clearly says: we can construct a task that can be solved with a single step, a single layer, in the full attention setting, but needs a lot of layers, a lot of steps, in the random attention setting. Still, the rest of the paper is a bit shaky on this point. Nevertheless, you can see how the random attention can, if you have enough layers, do the same information routing as the full attention. However, this is not a property unique to the random attention, and we'll see that in the next ingredient this paper uses: window attention. You can see over here that Big Bird is ultimately going to be a combination of the three types of attention we're looking at. Window attention basically means that each token at position i is going to attend to itself, of course — here is i — but also to its neighbors: here is i minus one and here is i plus one. The window size w is a parameter, but it is also a constant, and therefore you again go from n squared to w times n, which is O(n) if w is a constant. This might be familiar to you, because we've already seen it in the Longformer paper — I made one or even two videos on the Longformer, which used exactly this window attention in combination with global attention; if you want to know more about that, go watch those videos. The new thing in Big Bird is the addition of the random attention. The window attention has exactly the same sparsity property as the random attention: instead of a fully connected graph, you have a sparsely connected graph. With random attention, the sparsely connected graph looks like the one on the right; with windowed attention it is not randomly connected — each node is connected to its neighbors. And you can also see that if I want to go from this node to this node right here, I can't do it in one step, but I can do it in two: in terms of attention layers, if I want to route information from node one to node three, I have to go from one to two in one layer and from two to three in the next, because each node is only connected to its neighbors. So the paper basically makes up for the lack of full attention by adding layers — and you might recognize this pattern from a convolution operation; a sketch of the windowed mask and the two-hop routing follows below.
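Here is the corresponding sketch for the windowed mask, together with a check that routing from node 0 to node 2 indeed takes two hops when w = 1 — again a toy version, not the actual blocked kernel:

```python
import numpy as np

def window_attention_mask(n, w):
    """Token i attends to itself and its w neighbors on each side:
    a banded, convolution-like pattern with O(w * n) entries."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= w

m = window_attention_mask(n=8, w=1)
# Squaring the (0/1) adjacency matrix counts two-hop paths,
# i.e. what two stacked attention layers can route.
two_hop = (m.astype(int) @ m.astype(int)) > 0
print(m[0, 2], two_hop[0, 2])  # False True: node 0 reaches node 2
                               # only with a second layer
```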
This basically is a convolution operation: in a convolution, each node only aggregates input from its neighbors for the next layer. And we know that as we go up the layers, the de facto window that each node looks at grows like a cone. So this is very similar to how a convolutional neural network works, and the reasoning is very similar too: in a sentence, the most important words for any given word are probably its neighbors — the words around it — and as you go up the layers, you branch out more and more. Ultimately, this neighborhood principle holds in NLP as well. Again, we already saw this in the Longformer, but that's the reasoning behind the window attention, and that's the second ingredient. The third ingredient is global attention. Global attention means there are selected tokens — fixed by the developers — that are so important that they are connected to everything else. For example, in these transformers you often have this CLS token: a special token that you prepend to a piece of text, whose output is going to be your classification output, because if you need to classify the entire sequence, you don't want to bind your classification to one particular word.
All right, however, the the proofs here, the proof of of how how these how these things are constructed are a little bit. I don't know. So what they do in the proof that this function can sort of is a universal approximator. People have already shown that full attention mechanisms are universal approximators. So they show here that this sparse attention mechanism is also a universal approximator. They make big use of star graphs. What they say is, OK, if we have a star graph, which is one node connected right here to every other node, this is a star graph. If we have a star graph, we can achieve the same thing than with a full graph. A full graph is where every node is connected to every other node. But as I already said, what they need for this is multiple layers of this star graph. So and that has to do with the fact that if I want to route information, I basically have to go via this middle node right here. And there is an additional complication because this middle node in our case right here is only one node. I can't route information at the same like I can't have this routing right here at the same time that I have this routing right here, like going from here to here, because I only have one middle node. And I kind of this is not how that like this is very dumb math. But maybe you have to imagine that there is one memory slot. And you can only use that one memory slot at the same time for one of these things. So essentially, what you'll have to do is you'll have to do the green thing first. And then in the next step, you'll have to do the blue thing second. And then so these are now pairwise routing between nodes. But ultimately, what an attention mechanism does is it does everything to everything right in a single layer, it routes information from all the nodes to all the other nodes. And to achieve that, so you need multiple rounds of this. And it turns out that in the worst case, you actually need n rounds of this. So you know, you trade off your you go from n squared to n memory and compute requirements in a single layer. But in the worst case, you need n layers to recover the power of the full of the full transformer. And that is the last one of their theoretical results right here. So first, they prove universal approximations. And second, they prove Turing completeness. These two properties have been proven for full attention mechanisms. And third, they prove that there are tasks where you actually do need n layers to solve them with their limited attention. So you know, I'm not sure but I feel you can make any sort of polynomial algorithm into a linear algorithm like this. Like I have a I have like a cool sorting algorithm, right? So if this is my sequence that I want to sort, what I can do is I can simply, you know, take a random subset of them, like this, this and this and then kind of go and sort them and then put them like I send them to the to the global memory like this, I sort them, and then I put them back, right? And if I do this for enough, if I do this for enough rounds, okay, you know, if I do this for enough rounds, you know, at the worst case, I need n rounds to sort my or log n rounds if I do it smartly. But you know, in, you know, the single step here is the single step is just O of n. So I have now an O of n sorting algorithm. I you know, I have my sort of a bit of worry to express things like that. And yeah, but you know, it is from an empirical standpoint, I absolutely believe that this this is enough. 
Now my second coral right here is that if you look at the proof, first of all, what it makes use is this star graph, and the star graph corresponds to the global attention. So that's not much to do with the random attention, though they use the random attention in their proof, but I at least believe that it would be possible with the global attention only. And then the second thing is if you look at the parameters that they use for the for the experiments, and I've already said this in the long former video. So in the long form of video, it turned out that if you look at how big these window attention is, it turns out that it you're still well, you know, the original BERT attended to 512 tokens. And then you look at the window and the window was still 512 tokens. It's just that the global attention was even more so ultimately they ended up using more memory than the original BERT. And here, if I look at the parameters of their thing, and they have multiple experiments right here, and I believe this is the the base version. So this is the base version, they also have this large version. But here, this is the 12 layer version. And you can see they have this block length. And we'll get into the block length in one second. But then you can see that their window size is three times the block length, the number of random tokens is three times the block length, and the number of global tokens is two times the block length. So that results in eight times B. So eight times 64 is, you know, Can I calculate this? Or am I stupid? It's 512. Yes, actually calculated this before. So this is 512 tokens. So you know, you you go from from BERT that has 512 tokens and attends to 512 tokens to also attending to 512 tokens. Of course, the advantage here is that they now have 4096 sequence length. So they have the freedom to not attend to as many tokens as they have in the input length. But you know, to put it in perspective, this here uses more memory and more compute on on its face than BERT, because BERT attends to as many tokens but has a smaller input sequence. And you know, I, there's sort of a thing where in order to make these sparse attention things work, you have to go pretty, pretty, you know, high in the number of things you attend to, you can leave away some but it's not like you can scale up orders of magnitude of your input sequence length. So that's the this promise of linear attention is sort of it's kind of fulfilled but not there yet. The second thing I would like to point out is that in a lot of cases, the number of random tokens is actually set to zero. So really making use, I believe, of these of the of the global of the number of global tokens. So it that seems a bit strange in that they continuously refer to their random attention mechanism. But then in a lot of experiments, they don't actually have a random attention mechanism. I believe they have to do that because that's kind of what makes them different from the long former in principle, but still, yeah. So the last novelty, let's say is an engineering novelty in that they now always consider not single, for example, they don't consider single random attention, they always consider these in blocks. And that's because our current hardware is really bad at sparse stuff. Really bad at single indexing, gathering single things. So if you can do everything in blocks, you basically get you get these blocks almost for free. 
It takes only marginally longer to retrieve this full two-by-two block than it would to retrieve the single entry. Of course, that means you use four times more memory, but it is not four times slower than gathering the single element. So they use these blocks for the random attention, and also for the window attention, as you can see here — they break the window pattern into blocks — and that makes it a lot faster; you get that speed-up almost for free. Then they make another approximation in how they do this windowing, and I'll just go over it really briefly. You can see right here that it would be very cumbersome to gather what we need — this dotted part is a bit confusing. The diagonal blocks you can get out with a simple matrix slice, really easily. But the blocky window parts off the diagonal are hard to get out, because you would have to index each row individually, and that is very slow. So they use a matrix roll operation, where you roll an axis around: you take this part here and shift it to the left, and you take this part here and shift it to the right — or up and down, but in essence that's what you do — and you can fold all of this blue stuff into a rectangular matrix: roll this back, roll this forward, and replace whatever is missing. Now this again gives you some inaccuracies, because this block right here was never intended to be attended to — all of a sudden you see the K6 in there. So you get a bit of inaccuracy at the edges of the sequence, but you can take that hit for the increased performance you gain from now having a rectangular matrix: TPUs are really efficient at dense rectangular operations, and not as efficient at scattered gathers. The only thing that remains slow is gathering the random blocks — but by having the same number of random blocks per input token, you end up with just one of these columns, or rather r of these columns, which again gives you a rectangular matrix that a TPU can process very, very efficiently. The mistakes you make are basically this part right here and this part right here, because those weren't intended and sit at the edges of the sequence. So those were the tricks of Big Bird. To quickly summarize: Big Bird basically takes a transformer and says, why do we need all of this full attention? Maybe we only need some of it and can already do a good job — especially considering that the attention mechanism runs over multiple layers, so we don't need routing from each token to each token; we can make up for not having a fully connected graph by simply running multiple layers. Their sparsity consists of, first, the random attention — which I believe changes from sequence to sequence but stays the same among the layers of the same sequence — then the window attention, and then the global attention, each with its own reasoning.
The reasoning behind the random attention is that if you have a randomly connected graph, the path lengths are on average logarithmic, so you can route information efficiently. The reasoning behind the window attention is that neighbor information is probably very important, and that has been shown empirically. And the reasoning behind the global attention is that some tokens, fixed by the developers, are so important that it is very beneficial for every other node to be connected to them and for them to be connected to every other node. The result of that is the Big Bird attention mechanism, which is basically the Longformer — which already had the window and global attention — plus the random attention. This achieves linear complexity in terms of memory and compute, though "linear" has to be qualified a bit, because it is modified by the window size, the number of random attention tokens, and the number of global tokens, and in practice it often ends up being fairly large. Also, the theoretical guarantees come with the caveat that you need multiple layers — in the worst case, a number of layers equal to the sequence length, which would put you right back at a quadratic requirement for memory and compute. They do some engineering tricks, and their results are pretty good. So let's look at some of the tasks. These are dev set results using base-size models, and you can see they do outperform basic RoBERTa models, and they outperform the Longformer, which may mean that the random attention is useful — but in these things it may also just mean that more compute was thrown at it. I'm not really focused on whether they outperform those models, because, as you can see right here, when they compare to the state of the art — and granted, these are models that have been trained and engineered specifically for these tasks — Big Bird manages to hold its own against them on a lot of tasks and even reach state of the art on some. What I'm more interested in is that it reaches good numbers. It doesn't necessarily have to be state of the art, but reaching good numbers tells me that the empirical hit you take by not having the full attention is justifiable given the speed-up and memory savings you do get. Especially when you see results mixed like this — sometimes the other model is better and sometimes Big Bird is, across different variations — I would not make a big deal of the fact that it is state of the art. I get that the authors have to do that; I would do so as well. But don't think that this is now the best thing; it's very probable they also just threw a lot of compute at it. What is cool is that they do some genomics experiments: not only do they get NLP state of the art, they also go into genomics and experiment with data there. I don't want to go into that, because ultimately it's another task, and I believe the paper is about the architecture. All right, that was Big Bird. I hope you enjoyed this video and learned something — I certainly did. If you want to check out the proofs, they're actually pretty entertaining to read. And yeah, I'll see you next time. Bye bye.
[ { "start": 0, "end": 6.88, "text": " Hi there, today we'll look at Big Bird Transformers for Longer Sequences by Manil Zahir and Gurugar" }, { "start": 6.88, "end": 9.84, "text": " Uganesh et al. of Google Research." }, { "start": 9.84, "end": 14.5, "text": " So this paper on a high level proposes to replace the quadratic attention mechanism" }, { "start": 14.5, "end": 23.32, "text": " in transformers by a mix of random attention, windowed attention, and selective global attention," }, { "start": 23.32, "end": 29.54, "text": " therefore achieving a complexity of linear memory requirement instead of quadratic memory" }, { "start": 29.54, "end": 31.2, "text": " requirement." }, { "start": 31.2, "end": 36.2, "text": " And as a result of that, they can process longer sequences than traditional transformers" }, { "start": 36.2, "end": 43.04, "text": " like BERT and achieve better results in some NLP tasks, and they also evaluate on genomics" }, { "start": 43.04, "end": 44.04, "text": " tasks." }, { "start": 44.04, "end": 49.46, "text": " So we'll go through this paper a bit, look a bit at the proof because they give a theoretical" }, { "start": 49.46, "end": 57.24, "text": " kind of guarantee that their random attention mechanism can still be Turing complete and" }, { "start": 57.24, "end": 63.28, "text": " can still achieve the same things as a full attention mechanism, but we'll also look at" }, { "start": 63.28, "end": 64.28, "text": " the drawbacks." }, { "start": 64.28, "end": 70.24000000000001, "text": " I sort of have mixed feelings about this paper, and I think I'll voice my concerns as we go" }, { "start": 70.24000000000001, "end": 71.24000000000001, "text": " through here." }, { "start": 71.24000000000001, "end": 75.6, "text": " But first, let's look at the paper, let's look at the architecture, and I think this" }, { "start": 75.6, "end": 82.28, "text": " is actually a pretty cool paper for the empirical progression of the field to process longer" }, { "start": 82.28, "end": 85.08, "text": " sequences with transformers." }, { "start": 85.08, "end": 90, "text": " As always, if you like content like this, feel free to share it around, leave a like" }, { "start": 90, "end": 96.44, "text": " and tell me in the comments what you think about the paper and about what I think, whatever," }, { "start": 96.44, "end": 100.28, "text": " you just, just go nuts." }, { "start": 100.28, "end": 108.6, "text": " Alright, so the basic premise right here is that the transformers, they've been pretty" }, { "start": 108.6, "end": 111.04, "text": " impactful, especially in NLP." }, { "start": 111.04, "end": 115.2, "text": " So they say transformer based models such as BERT have been one of the most successful" }, { "start": 115.2, "end": 117.76, "text": " deep learning models for NLP." }, { "start": 117.76, "end": 123.28, "text": " Unfortunately, one of their core limitations is the quadratic dependency, mainly in terms" }, { "start": 123.28, "end": 127.92, "text": " of memory, on the sequence length due to their full attention mechanism." }, { "start": 127.92, "end": 133.44, "text": " So really briefly, the full attention mechanism, and I've done numerous videos about attention" }, { "start": 133.44, "end": 136.28, "text": " mechanism BERT, attention is all you need, and so on." }, { "start": 136.28, "end": 141.96, "text": " So if you want a detailed explanation of what that is, just go look up the corresponding" }, { "start": 141.96, "end": 142.96, "text": " videos." 
}, { "start": 142.96, "end": 149.6, "text": " But briefly, what you'll have in NLP is a set of tokens, a sequence of tokens as an input," }, { "start": 149.6, "end": 157.62, "text": " and you want to transform them layer after layer into sort of a higher order representation" }, { "start": 157.62, "end": 159.6, "text": " of that same sequence." }, { "start": 159.6, "end": 164.88, "text": " And for that, you build these layers out of nodes and you have as many nodes usually as" }, { "start": 164.88, "end": 166.96, "text": " you have tokens in the sequence." }, { "start": 166.96, "end": 174.74, "text": " And the next set of, so each token is represented by a vector at the beginning, and each layer" }, { "start": 174.74, "end": 179.04000000000002, "text": " transforms the sequence, as I said, into sort of a higher level representation." }, { "start": 179.04000000000002, "end": 186.32, "text": " So you want the vector of this token right here to be a better representation than the" }, { "start": 186.32, "end": 188.56, "text": " vector was right here." }, { "start": 188.56, "end": 195.12, "text": " And you do that by incorporating information from all the other tokens into that particular" }, { "start": 195.12, "end": 196.12, "text": " vector." }, { "start": 196.12, "end": 200.72, "text": " Now, as I said, this is called an attention mechanism, and we don't actually have to go" }, { "start": 200.72, "end": 205.92, "text": " into how it works right here, but you can see pretty clearly that if you want to do" }, { "start": 205.92, "end": 212.92, "text": " this for every token, you need to have information routed from every token to every token, like" }, { "start": 212.92, "end": 216.64, "text": " from here to here, from here to here, and so on." }, { "start": 216.64, "end": 220.6, "text": " And this is just one token, and then you need to do it for this token and for this token" }, { "start": 220.6, "end": 222.06, "text": " and for this token." }, { "start": 222.06, "end": 226.66, "text": " So what you'll ultimately get, if n is your sequence length, you'll get some n squared" }, { "start": 226.66, "end": 231.16, "text": " amount of computation and memory requirements for this." }, { "start": 231.16, "end": 232.45999999999998, "text": " So this is a problem." }, { "start": 232.45999999999998, "end": 237.35999999999999, "text": " And usually, this means that, you know, this sequence length in BERT, this is limited to" }, { "start": 237.36, "end": 243.72000000000003, "text": " something like 512 tokens, which is okay for some applications." }, { "start": 243.72000000000003, "end": 249.36, "text": " But if you want to summarize, you know, entire articles, entire books, even, or do question" }, { "start": 249.36, "end": 253.28, "text": " answering with lots of context, it's not really enough." }, { "start": 253.28, "end": 259.5, "text": " So people have been thinking about how to scale this input, how to scale this." }, { "start": 259.5, "end": 264.88, "text": " And of course, the main culprit is this quadratic attention mechanism, because if you, you know," }, { "start": 264.88, "end": 270.76, "text": " scale the 512, you need, you know, four times the amount of compute and memory." }, { "start": 270.76, "end": 275.71999999999997, "text": " So how does this paper go about reducing that quadratic dependency?" }, { "start": 275.71999999999997, "end": 281.12, "text": " The goal right here is, of course, to get this to some O of n, right?" 
}, { "start": 281.12, "end": 287.28, "text": " Because then, as we double the input length, we simply need to double the compute requirements," }, { "start": 287.28, "end": 288.71999999999997, "text": " and that would be fantastic." }, { "start": 288.71999999999997, "end": 290.74, "text": " And that's what this paper does." }, { "start": 290.74, "end": 296.2, "text": " And it does so without sacrificing the properties of the transformer." }, { "start": 296.2, "end": 300.84000000000003, "text": " So here's the architecture that Big Bird proposes." }, { "start": 300.84000000000003, "end": 306.86, "text": " By the way, Big Bird, another character from Sesame Street, I guess, will continue the" }, { "start": 306.86, "end": 309.84000000000003, "text": " naming here after Elmo and BERT." }, { "start": 309.84000000000003, "end": 315.8, "text": " You know, I'm waiting for the model that's the count." }, { "start": 315.8, "end": 319.40000000000003, "text": " Yeah, that's going to be a fun model." }, { "start": 319.4, "end": 323.96, "text": " So Big Bird basically has three different types of attention." }, { "start": 323.96, "end": 327.76, "text": " And here, these are adjacency matrices in this attention mechanism." }, { "start": 327.76, "end": 333.53999999999996, "text": " So here is the input layer, and the output layer is right here." }, { "start": 333.53999999999996, "end": 337.84, "text": " So that basically means that node i right here would be connected." }, { "start": 337.84, "end": 343.52, "text": " Sorry, that's not a straight line, would be connected to this particular node and also" }, { "start": 343.52, "end": 345.34, "text": " to this particular node." }, { "start": 345.34, "end": 353.44, "text": " So we're now trying, if we have node i right here, we're now trying to not connect it to" }, { "start": 353.44, "end": 359.84, "text": " all of these nodes, but we'll say, we'll just select some at random and then connect it" }, { "start": 359.84, "end": 360.84, "text": " to that." }, { "start": 360.84, "end": 363.64, "text": " Okay, this is what we call random attention." }, { "start": 363.64, "end": 371.15999999999997, "text": " And you can pretty clearly see if you connect each of the i nodes to r equals 2, to two" }, { "start": 371.16, "end": 376.40000000000003, "text": " random nodes, then you don't have an n squared anymore." }, { "start": 376.40000000000003, "end": 383.56, "text": " But you'll have a like an O of r times n, which you know, if r is a constant is an O" }, { "start": 383.56, "end": 386.32000000000005, "text": " of n attention mechanism." }, { "start": 386.32000000000005, "end": 394, "text": " Okay, so the main goal between the random attention mechanism is that for each query," }, { "start": 394, "end": 401.44, "text": " basically, you select random tokens that you attend to, and that random number is a fixed" }, { "start": 401.44, "end": 405.2, "text": " number that's not dependent on the sequence length." }, { "start": 405.2, "end": 412.32, "text": " And the paper is a little bit unclear about whether or not those random ones are the same" }, { "start": 412.32, "end": 418.28, "text": " for every sequence or are switched up, or the same for every layer or are switched up." }, { "start": 418.28, "end": 423.82, "text": " But they formulate all of this as sort of in sort of a graph in sort of a random graph." }, { "start": 423.82, "end": 428.96, "text": " So they're, they formulate the attention mechanism in form of a graph." 
}, { "start": 428.96, "end": 435, "text": " So if we transform all of these nodes into a graph, a full attention mechanism would" }, { "start": 435, "end": 441.36, "text": " mean that each graph, each node is connected to each of the other nodes, right, fully connected" }, { "start": 441.36, "end": 447.02, "text": " graph, I don't, maybe that's it." }, { "start": 447.02, "end": 448.88, "text": " So that would be a full attention mechanism." }, { "start": 448.88, "end": 456.32, "text": " And then they say, well, if we just have random connections between these things, then there" }, { "start": 456.32, "end": 462.64, "text": " are some theorems from graph theory that say that each random walk in this graph is going" }, { "start": 462.64, "end": 466.15999999999997, "text": " to, so this graph is going to mix pretty quickly." }, { "start": 466.15999999999997, "end": 473.56, "text": " So I can get from each node to each other node by a random walk in a logarithmic time." }, { "start": 473.56, "end": 478.28, "text": " And this random walk, which basically means that you go from here to here, this would" }, { "start": 478.28, "end": 481.65999999999997, "text": " be one layer of the transformer." }, { "start": 481.65999999999997, "end": 485.91999999999996, "text": " And then if you want to go from here to here, that you would have to do that in the next" }, { "start": 485.91999999999996, "end": 486.91999999999996, "text": " layer." }, { "start": 486.91999999999996, "end": 493.03999999999996, "text": " So this formulation as a random graph leads me to believe that layer after layer, the" }, { "start": 493.03999999999996, "end": 497.03999999999996, "text": " random attention pattern is going to be the same." }, { "start": 497.03999999999996, "end": 503.76, "text": " But also the formulation of the paper leads me to believe that the this random attention" }, { "start": 503.76, "end": 505.91999999999996, "text": " differs from sequence to sequence." }, { "start": 505.92, "end": 512.96, "text": " So I believe what's happening is that they get a new sequence, then they decide on this" }, { "start": 512.96, "end": 519.28, "text": " pattern right here once and then they use this layer after layer, the same pattern again." }, { "start": 519.28, "end": 527.84, "text": " So you can see that in the traditional attention, information can basically flow from each of" }, { "start": 527.84, "end": 532.08, "text": " the nodes to each other node in one single step, right?" }, { "start": 532.08, "end": 534.48, "text": " Because each node is connected to each other node." }, { "start": 534.48, "end": 536.9200000000001, "text": " You see this in the graph right here." }, { "start": 536.9200000000001, "end": 545.5600000000001, "text": " However, if we only select a subset, then you know, it needs to if I want to go from," }, { "start": 545.5600000000001, "end": 550.12, "text": " as I said, from here to here, then I need to do it in two steps." }, { "start": 550.12, "end": 552.4, "text": " And therefore I need two layers." }, { "start": 552.4, "end": 555.28, "text": " And that's going to be the culprit of this method here." }, { "start": 555.28, "end": 562.04, "text": " And while it is mentioned in the paper, it's sort of I feel at least that's my my assessment" }, { "start": 562.04, "end": 566.12, "text": " of this paper, it's kind of swept under the rug a little bit." 
}, { "start": 566.12, "end": 573.24, "text": " I mean, they do have a theorem that clearly says we can construct an example of a task" }, { "start": 573.24, "end": 577.2199999999999, "text": " that in the full attention setting can be solved with a single step." }, { "start": 577.2199999999999, "end": 585.8399999999999, "text": " So a single layer that in our random attention setting needs a lot of layers, a lot of steps." }, { "start": 585.84, "end": 592.08, "text": " But you know, the rest of the paper is sort of shaky on on this thing." }, { "start": 592.08, "end": 598.64, "text": " But nevertheless, you can see how the random attention can, if you have enough layers," }, { "start": 598.64, "end": 602.24, "text": " do the same information routing as the full attention." }, { "start": 602.24, "end": 603.24, "text": " Okay." }, { "start": 603.24, "end": 607.52, "text": " However, this is not a property of the random attention." }, { "start": 607.52, "end": 609.82, "text": " And we'll see this in the next thing right here." }, { "start": 609.82, "end": 614.4200000000001, "text": " So the next ingredient that this paper uses is window attention." }, { "start": 614.42, "end": 618.92, "text": " And you can see over here that Big Bird is ultimately going to be a combination of the" }, { "start": 618.92, "end": 623.28, "text": " three types of attention, which will, which we are looking at here." }, { "start": 623.28, "end": 630.4, "text": " So window attention basically means that each each i each token at the i of position is" }, { "start": 630.4, "end": 633.5799999999999, "text": " going to attend to itself, of course." }, { "start": 633.5799999999999, "end": 639.02, "text": " So here is i, but it is also going to attend to its neighbors." }, { "start": 639.02, "end": 642.68, "text": " So here is i minus one and here is i plus one." }, { "start": 642.68, "end": 649.5, "text": " And this is a you know, this is a window size w that you can that is a parameter, but also" }, { "start": 649.5, "end": 657.06, "text": " it is a constant and therefore you again go from n squared to w times n, which you know" }, { "start": 657.06, "end": 661.2199999999999, "text": " is o of n if w is a constant." }, { "start": 661.2199999999999, "end": 665.8, "text": " And this might be familiar to you, because we've already seen this in the long former" }, { "start": 665.8, "end": 666.8, "text": " paper." }, { "start": 666.8, "end": 674.4399999999999, "text": " We made a video or I think even two videos on the long former, which used exactly the" }, { "start": 674.4399999999999, "end": 678.54, "text": " window attention in combination with the global attention." }, { "start": 678.54, "end": 682.0999999999999, "text": " And if you want to know more about that, go watch these videos." }, { "start": 682.0999999999999, "end": 688.92, "text": " But the new thing in Big Bird right here is this addition of the random attention." }, { "start": 688.92, "end": 698.54, "text": " Again, the the window here is is has exactly the same properties as the random attention." }, { "start": 698.54, "end": 704.3, "text": " So you have instead of a fully connected graph, you have a sparsely connected graph." }, { "start": 704.3, "end": 710.42, "text": " Now if you have random attention, the sparsely connected graph is like like the one on the" }, { "start": 710.42, "end": 711.42, "text": " right." 
}, { "start": 711.42, "end": 717.38, "text": " But if you have a windowed attention, you can it is kind of not randomly connected," }, { "start": 717.38, "end": 721.12, "text": " but each node is connected to its neighbors like this." }, { "start": 721.12, "end": 726.58, "text": " And you can also see that if I want to go from this node to this node right here, I" }, { "start": 726.58, "end": 729.98, "text": " can't do it in one step, but I can do it in two steps." }, { "start": 729.98, "end": 732.5, "text": " I go here and I go here." }, { "start": 732.5, "end": 741.42, "text": " So in the terms of the attention layers, if I want to go from node one to node three," }, { "start": 741.42, "end": 745.58, "text": " I have to do it in two steps because each node is only connected to its neighbors." }, { "start": 745.58, "end": 750.98, "text": " So the connection patterns would sort of look like this." }, { "start": 750.98, "end": 757.3000000000001, "text": " So I have to go from one to two and then in the next layer from two to three." }, { "start": 757.3000000000001, "end": 764.82, "text": " So the paper basically makes up for the lack of full attention by adding layers." }, { "start": 764.82, "end": 769.22, "text": " And you also might recognize this from a convolution operation." }, { "start": 769.22, "end": 775.5400000000001, "text": " This basically because it is a convolution operation, right in a convolution, each node" }, { "start": 775.54, "end": 780.54, "text": " only aggregates input from its neighbors for the next layer." }, { "start": 780.54, "end": 786.3399999999999, "text": " And then we know that as we go up the layers, the de facto window that each node looks at" }, { "start": 786.3399999999999, "end": 790.0999999999999, "text": " is going to be like a cone kind of like this." }, { "start": 790.0999999999999, "end": 794.38, "text": " So this is very similar to how a convolutional neural network works." }, { "start": 794.38, "end": 799.38, "text": " And the reasoning is very similar because the reasoning is, well, in a sentence, the" }, { "start": 799.38, "end": 804.74, "text": " most important words for any given word are probably going to be its neighbors, like the" }, { "start": 804.74, "end": 806.42, "text": " words around it." }, { "start": 806.42, "end": 809.98, "text": " And as you go up the layers, you branch out more and more." }, { "start": 809.98, "end": 815.9, "text": " But ultimately, this neighborhood principle holds in NLP as well." }, { "start": 815.9, "end": 821.54, "text": " So again, we already saw this in the long former, but that's the reason behind the window" }, { "start": 821.54, "end": 823.82, "text": " attention and that's the second ingredient." }, { "start": 823.82, "end": 827.46, "text": " And then the third ingredient is this global attention." }, { "start": 827.46, "end": 836.0600000000001, "text": " Now the global attention is selected tokens that are so important and that's fixed by" }, { "start": 836.0600000000001, "end": 843.14, "text": " the developers that are so important that they are connected to everything else." }, { "start": 843.14, "end": 851.1, "text": " So for example, in these transformers, you often have what's this kind of CLS token." 
}, { "start": 851.1, "end": 857.78, "text": " So this is a special token that you prepend to some piece of text and the output of this" }, { "start": 857.78, "end": 863.74, "text": " token is going to be your classification output because you don't want to bind your classification" }, { "start": 863.74, "end": 866.3000000000001, "text": " if you need to classify the entire sequence." }, { "start": 866.3000000000001, "end": 870.6, "text": " You don't want to bind that decision to one particular word." }, { "start": 870.6, "end": 875.4200000000001, "text": " What you want to do is you want to have an extra token and that's this CLS token that" }, { "start": 875.4200000000001, "end": 879.0600000000001, "text": " kind of aggregates information from all of this." }, { "start": 879.06, "end": 885.3, "text": " So layer after layer, layer after layer, you'll have, so if we go here, layer after layer," }, { "start": 885.3, "end": 887.8599999999999, "text": " we have this one special node." }, { "start": 887.8599999999999, "end": 895.14, "text": " And in each step, every single other node is able to send information right here to" }, { "start": 895.14, "end": 900.7399999999999, "text": " this node and receive information from this node." }, { "start": 900.74, "end": 911.78, "text": " So now, as a result of this, as you may be able to see, every single path is kind of" }, { "start": 911.78, "end": 916.5, "text": " a maximum length of two because if I want to go from any node to any other node, I can" }, { "start": 916.5, "end": 922.54, "text": " simply send information to this global node and then the global node in the next step" }, { "start": 922.54, "end": 926.5600000000001, "text": " can send information to whatever other node." }, { "start": 926.56, "end": 933.5799999999999, "text": " And that is a property that they use in their proof that this attention mechanism is as" }, { "start": 933.5799999999999, "end": 937.2199999999999, "text": " sort of as powerful as the classic full attention mechanism." }, { "start": 937.2199999999999, "end": 940.3399999999999, "text": " And we'll go through that in one second." }, { "start": 940.3399999999999, "end": 944.9, "text": " But first, I hope this was clear that this combination of random attention, window attention" }, { "start": 944.9, "end": 952.8199999999999, "text": " and global attention is what is called Big Bird." }, { "start": 952.82, "end": 957.38, "text": " They have some engineering tricks that go along with this, but in concept, you can imagine" }, { "start": 957.38, "end": 963.1400000000001, "text": " Big Bird being long former plus these random attention right here." }, { "start": 963.1400000000001, "end": 968.46, "text": " And as an engineer, as an NLP engineer, that makes kind of total sense." }, { "start": 968.46, "end": 976.36, "text": " I totally believe that the introduction, the addition of these random attention of these" }, { "start": 976.36, "end": 983.82, "text": " random attention patterns can absolutely help your classification or whatever your NLP tasks" }, { "start": 983.82, "end": 986.9, "text": " because more attention, better." 
}, { "start": 986.9, "end": 993.38, "text": " And I also am completely willing to believe that using the full attention matrix, while" }, { "start": 993.38, "end": 999.54, "text": " it is, of course, more accurate, it won't hurt too much to leave some of that attention" }, { "start": 999.54, "end": 1005.46, "text": " away because essentially all the path lengths are just becoming two or even with the random" }, { "start": 1005.46, "end": 1011.58, "text": " attention are really short or logarithmic to route information from a node to some other" }, { "start": 1011.58, "end": 1012.58, "text": " node." }, { "start": 1012.58, "end": 1019.7800000000001, "text": " So the loss that you incur is kind of in a logarithmic scale in terms of performance," }, { "start": 1019.7800000000001, "end": 1025.02, "text": " while the gain that you make is sort of in a in a quadratic or like a linear scale, you" }, { "start": 1025.02, "end": 1027.76, "text": " go from quadratic to linear." }, { "start": 1027.76, "end": 1031.3400000000001, "text": " And that seems to me like a good empirical trade off." }, { "start": 1031.34, "end": 1042.98, "text": " All right, however, the the proofs here, the proof of of how how these how these things" }, { "start": 1042.98, "end": 1046.3799999999999, "text": " are constructed are a little bit." }, { "start": 1046.3799999999999, "end": 1047.3799999999999, "text": " I don't know." }, { "start": 1047.3799999999999, "end": 1054.8999999999999, "text": " So what they do in the proof that this function can sort of is a universal approximator." }, { "start": 1054.8999999999999, "end": 1060.8, "text": " People have already shown that full attention mechanisms are universal approximators." }, { "start": 1060.8, "end": 1066.1399999999999, "text": " So they show here that this sparse attention mechanism is also a universal approximator." }, { "start": 1066.1399999999999, "end": 1068.5, "text": " They make big use of star graphs." }, { "start": 1068.5, "end": 1073.8999999999999, "text": " What they say is, OK, if we have a star graph, which is one node connected right here to" }, { "start": 1073.8999999999999, "end": 1077.48, "text": " every other node, this is a star graph." }, { "start": 1077.48, "end": 1084.22, "text": " If we have a star graph, we can achieve the same thing than with a full graph." }, { "start": 1084.22, "end": 1087.94, "text": " A full graph is where every node is connected to every other node." }, { "start": 1087.94, "end": 1093.98, "text": " But as I already said, what they need for this is multiple layers of this star graph." }, { "start": 1093.98, "end": 1099.7, "text": " So and that has to do with the fact that if I want to route information, I basically have" }, { "start": 1099.7, "end": 1103.4, "text": " to go via this middle node right here." }, { "start": 1103.4, "end": 1107.74, "text": " And there is an additional complication because this middle node in our case right here is" }, { "start": 1107.74, "end": 1110.18, "text": " only one node." }, { "start": 1110.18, "end": 1116.74, "text": " I can't route information at the same like I can't have this routing right here at the" }, { "start": 1116.74, "end": 1123.34, "text": " same time that I have this routing right here, like going from here to here, because I only" }, { "start": 1123.34, "end": 1125.04, "text": " have one middle node." }, { "start": 1125.04, "end": 1129.42, "text": " And I kind of this is not how that like this is very dumb math." 
}, { "start": 1129.42, "end": 1135.54, "text": " But maybe you have to imagine that there is one memory slot." }, { "start": 1135.54, "end": 1141.02, "text": " And you can only use that one memory slot at the same time for one of these things." }, { "start": 1141.02, "end": 1145.8, "text": " So essentially, what you'll have to do is you'll have to do the green thing first." }, { "start": 1145.8, "end": 1150.26, "text": " And then in the next step, you'll have to do the blue thing second." }, { "start": 1150.26, "end": 1154.58, "text": " And then so these are now pairwise routing between nodes." }, { "start": 1154.58, "end": 1159.56, "text": " But ultimately, what an attention mechanism does is it does everything to everything right" }, { "start": 1159.56, "end": 1164.3799999999999, "text": " in a single layer, it routes information from all the nodes to all the other nodes." }, { "start": 1164.3799999999999, "end": 1168.6599999999999, "text": " And to achieve that, so you need multiple rounds of this." }, { "start": 1168.6599999999999, "end": 1173.9199999999998, "text": " And it turns out that in the worst case, you actually need n rounds of this." }, { "start": 1173.92, "end": 1181.94, "text": " So you know, you trade off your you go from n squared to n memory and compute requirements" }, { "start": 1181.94, "end": 1183.5800000000002, "text": " in a single layer." }, { "start": 1183.5800000000002, "end": 1190.48, "text": " But in the worst case, you need n layers to recover the power of the full of the full" }, { "start": 1190.48, "end": 1192.02, "text": " transformer." }, { "start": 1192.02, "end": 1196.46, "text": " And that is the last one of their theoretical results right here." }, { "start": 1196.46, "end": 1200.22, "text": " So first, they prove universal approximations." }, { "start": 1200.22, "end": 1203.0600000000002, "text": " And second, they prove Turing completeness." }, { "start": 1203.06, "end": 1207.3799999999999, "text": " These two properties have been proven for full attention mechanisms." }, { "start": 1207.3799999999999, "end": 1213.4199999999998, "text": " And third, they prove that there are tasks where you actually do need n layers to solve" }, { "start": 1213.4199999999998, "end": 1217.54, "text": " them with their limited attention." }, { "start": 1217.54, "end": 1227.94, "text": " So you know, I'm not sure but I feel you can make any sort of polynomial algorithm into" }, { "start": 1227.94, "end": 1229.6, "text": " a linear algorithm like this." }, { "start": 1229.6, "end": 1232.84, "text": " Like I have a I have like a cool sorting algorithm, right?" }, { "start": 1232.84, "end": 1238.78, "text": " So if this is my sequence that I want to sort, what I can do is I can simply, you know, take" }, { "start": 1238.78, "end": 1245.78, "text": " a random subset of them, like this, this and this and then kind of go and sort them and" }, { "start": 1245.78, "end": 1252.06, "text": " then put them like I send them to the to the global memory like this, I sort them, and" }, { "start": 1252.06, "end": 1255.78, "text": " then I put them back, right?" }, { "start": 1255.78, "end": 1262.4199999999998, "text": " And if I do this for enough, if I do this for enough rounds, okay, you know, if I do" }, { "start": 1262.42, "end": 1267.66, "text": " this for enough rounds, you know, at the worst case, I need n rounds to sort my or log n" }, { "start": 1267.66, "end": 1269.54, "text": " rounds if I do it smartly." 
}, { "start": 1269.54, "end": 1276.42, "text": " But you know, in, you know, the single step here is the single step is just O of n." }, { "start": 1276.42, "end": 1280.14, "text": " So I have now an O of n sorting algorithm." }, { "start": 1280.14, "end": 1287.26, "text": " I you know, I have my sort of a bit of worry to express things like that." }, { "start": 1287.26, "end": 1296.42, "text": " And yeah, but you know, it is from an empirical standpoint, I absolutely believe that this" }, { "start": 1296.42, "end": 1298.82, "text": " this is enough." }, { "start": 1298.82, "end": 1304.74, "text": " Now my second coral right here is that if you look at the proof, first of all, what" }, { "start": 1304.74, "end": 1310.3799999999999, "text": " it makes use is this star graph, and the star graph corresponds to the global attention." }, { "start": 1310.3799999999999, "end": 1314.46, "text": " So that's not much to do with the random attention, though they use the random attention in their" }, { "start": 1314.46, "end": 1323.74, "text": " proof, but I at least believe that it would be possible with the global attention only." }, { "start": 1323.74, "end": 1330.74, "text": " And then the second thing is if you look at the parameters that they use for the for the" }, { "start": 1330.74, "end": 1334.8600000000001, "text": " experiments, and I've already said this in the long former video." }, { "start": 1334.8600000000001, "end": 1340.14, "text": " So in the long form of video, it turned out that if you look at how big these window attention" }, { "start": 1340.14, "end": 1347.9, "text": " is, it turns out that it you're still well, you know, the original BERT attended to 512" }, { "start": 1347.9, "end": 1348.9, "text": " tokens." }, { "start": 1348.9, "end": 1353.3400000000001, "text": " And then you look at the window and the window was still 512 tokens." }, { "start": 1353.3400000000001, "end": 1357.5400000000002, "text": " It's just that the global attention was even more so ultimately they ended up using more" }, { "start": 1357.5400000000002, "end": 1360.14, "text": " memory than the original BERT." }, { "start": 1360.14, "end": 1367.6200000000001, "text": " And here, if I look at the parameters of their thing, and they have multiple experiments" }, { "start": 1367.62, "end": 1371.26, "text": " right here, and I believe this is the the base version." }, { "start": 1371.26, "end": 1376.1, "text": " So this is the base version, they also have this large version." }, { "start": 1376.1, "end": 1380.86, "text": " But here, this is the 12 layer version." }, { "start": 1380.86, "end": 1383.86, "text": " And you can see they have this block length." }, { "start": 1383.86, "end": 1388.1999999999998, "text": " And we'll get into the block length in one second." }, { "start": 1388.1999999999998, "end": 1393.6799999999998, "text": " But then you can see that their window size is three times the block length, the number" }, { "start": 1393.68, "end": 1397.98, "text": " of random tokens is three times the block length, and the number of global tokens is" }, { "start": 1397.98, "end": 1399.5800000000002, "text": " two times the block length." }, { "start": 1399.5800000000002, "end": 1411.5800000000002, "text": " So that results in eight times B. So eight times 64 is, you know," }, { "start": 1411.5800000000002, "end": 1413.46, "text": " Can I calculate this?" }, { "start": 1413.46, "end": 1415.66, "text": " Or am I stupid?" }, { "start": 1415.66, "end": 1416.66, "text": " It's 512." 
}, { "start": 1416.66, "end": 1420.42, "text": " Yes, actually calculated this before." }, { "start": 1420.42, "end": 1423.3400000000001, "text": " So this is 512 tokens." }, { "start": 1423.34, "end": 1432.1799999999998, "text": " So you know, you you go from from BERT that has 512 tokens and attends to 512 tokens to" }, { "start": 1432.1799999999998, "end": 1435.1, "text": " also attending to 512 tokens." }, { "start": 1435.1, "end": 1442.4199999999998, "text": " Of course, the advantage here is that they now have 4096 sequence length." }, { "start": 1442.4199999999998, "end": 1450.58, "text": " So they have the freedom to not attend to as many tokens as they have in the input length." }, { "start": 1450.58, "end": 1458.54, "text": " But you know, to put it in perspective, this here uses more memory and more compute on" }, { "start": 1458.54, "end": 1466.26, "text": " on its face than BERT, because BERT attends to as many tokens but has a smaller input" }, { "start": 1466.26, "end": 1469.4199999999998, "text": " sequence." }, { "start": 1469.4199999999998, "end": 1475.8799999999999, "text": " And you know, I, there's sort of a thing where in order to make these sparse attention things" }, { "start": 1475.88, "end": 1482.0200000000002, "text": " work, you have to go pretty, pretty, you know, high in the number of things you attend to," }, { "start": 1482.0200000000002, "end": 1487.7, "text": " you can leave away some but it's not like you can scale up orders of magnitude of your" }, { "start": 1487.7, "end": 1489.88, "text": " input sequence length." }, { "start": 1489.88, "end": 1495.3000000000002, "text": " So that's the this promise of linear attention is sort of it's kind of fulfilled but not" }, { "start": 1495.3000000000002, "end": 1496.3000000000002, "text": " there yet." }, { "start": 1496.3000000000002, "end": 1501.8600000000001, "text": " The second thing I would like to point out is that in a lot of cases, the number of random" }, { "start": 1501.8600000000001, "end": 1504.7, "text": " tokens is actually set to zero." }, { "start": 1504.7, "end": 1511.18, "text": " So really making use, I believe, of these of the of the global of the number of global" }, { "start": 1511.18, "end": 1512.76, "text": " tokens." }, { "start": 1512.76, "end": 1520.02, "text": " So it that seems a bit strange in that they continuously refer to their random attention" }, { "start": 1520.02, "end": 1521.6200000000001, "text": " mechanism." }, { "start": 1521.6200000000001, "end": 1527.22, "text": " But then in a lot of experiments, they don't actually have a random attention mechanism." }, { "start": 1527.22, "end": 1530.98, "text": " I believe they have to do that because that's kind of what makes them different from the" }, { "start": 1530.98, "end": 1537.06, "text": " long former in principle, but still, yeah." }, { "start": 1537.06, "end": 1544.76, "text": " So the last novelty, let's say is an engineering novelty in that they now always consider not" }, { "start": 1544.76, "end": 1549.66, "text": " single, for example, they don't consider single random attention, they always consider these" }, { "start": 1549.66, "end": 1550.66, "text": " in blocks." }, { "start": 1550.66, "end": 1556.14, "text": " And that's because our current hardware is really bad at sparse stuff." }, { "start": 1556.14, "end": 1559.88, "text": " Really bad at single indexing, gathering single things." 
}, { "start": 1559.88, "end": 1566.0200000000002, "text": " So if you can do everything in blocks, you basically get you get these blocks almost" }, { "start": 1566.0200000000002, "end": 1567.0200000000002, "text": " for free." }, { "start": 1567.0200000000002, "end": 1572.48, "text": " So it takes only marginally longer to retrieve this full two by two block right here than" }, { "start": 1572.48, "end": 1576.46, "text": " it would to retrieve the single instance right here." }, { "start": 1576.46, "end": 1582.38, "text": " Of course, that means you have, you know, four times you still use four times more memory," }, { "start": 1582.38, "end": 1585.9, "text": " but it is not four times slower than the original thing." }, { "start": 1585.9, "end": 1589.94, "text": " So you can use these blocks right here." }, { "start": 1589.94, "end": 1593.26, "text": " You can do it for the random attention, you can do it for the window attention, as you" }, { "start": 1593.26, "end": 1594.26, "text": " can see here." }, { "start": 1594.26, "end": 1598.22, "text": " So you break this window pattern a little bit into blocks." }, { "start": 1598.22, "end": 1601.02, "text": " And that makes it a lot faster." }, { "start": 1601.02, "end": 1605.64, "text": " So that speeds up, get the speed up almost for free." }, { "start": 1605.64, "end": 1613.02, "text": " And then they make another approximation in that the way they do this windowing is, and" }, { "start": 1613.02, "end": 1615.98, "text": " I just go really briefly." }, { "start": 1615.98, "end": 1624.06, "text": " So you can see right here that it would be very cumbersome to gather." }, { "start": 1624.06, "end": 1629.5, "text": " So what we need, we're just going to focus this this dotted thing right here is a bit" }, { "start": 1629.5, "end": 1630.5, "text": " confusing." }, { "start": 1630.5, "end": 1634.86, "text": " So you want to attend to these things." }, { "start": 1634.86, "end": 1639.16, "text": " And these you can just get out with a matrix slice really easy." }, { "start": 1639.16, "end": 1644.9, "text": " But then you want to attend to this kind of blocky thing right here from the window attention," }, { "start": 1644.9, "end": 1646.92, "text": " right, like this thing." }, { "start": 1646.92, "end": 1653.3400000000001, "text": " And this is hard to get out because you'd have to kind of index each row individually." }, { "start": 1653.3400000000001, "end": 1654.8000000000002, "text": " And that's very slow." }, { "start": 1654.8000000000002, "end": 1659.5600000000002, "text": " So what they do, there is this matrix roll operation, where you can sort of roll the" }, { "start": 1659.5600000000002, "end": 1661.1000000000001, "text": " axis around." }, { "start": 1661.1000000000001, "end": 1665.8000000000002, "text": " So what you'll do is you'll take this thing right here, and you put it to the left right" }, { "start": 1665.8, "end": 1670.98, "text": " here, and you'll take, for example, this thing right here, and you'll put it to the right" }, { "start": 1670.98, "end": 1674.78, "text": " or no, like it's, it's up and down." }, { "start": 1674.78, "end": 1677.12, "text": " But in essence, that's what you do." }, { "start": 1677.12, "end": 1683.6599999999999, "text": " And you can you can fold all of this blue stuff into a rectangular matrix." }, { "start": 1683.6599999999999, "end": 1687.26, "text": " If you know if you can see right here." 
}, { "start": 1687.26, "end": 1693.1, "text": " So you kind of roll this back, roll this back, roll this forward, and you replace whatever" }, { "start": 1693.1, "end": 1695.62, "text": " is missing by these." }, { "start": 1695.62, "end": 1702.4199999999998, "text": " Now this again gives you some inaccuracies because this block right here was never intended" }, { "start": 1702.4199999999998, "end": 1704.76, "text": " to be attended to." }, { "start": 1704.76, "end": 1708.82, "text": " And all of a sudden you see you have the K6 in here." }, { "start": 1708.82, "end": 1713.6599999999999, "text": " So it gives you a bit of inaccuracies at the edges of the sequence." }, { "start": 1713.6599999999999, "end": 1718.6399999999999, "text": " But you can take that, you know, you can take that hit for the increased performance that" }, { "start": 1718.6399999999999, "end": 1721.62, "text": " you gain by now having a rectangular matrix." }, { "start": 1721.62, "end": 1727.1399999999999, "text": " TPUs are really efficient at this, not as efficient at this." }, { "start": 1727.1399999999999, "end": 1733.06, "text": " And then the only thing that's really slow is gathering these random blocks right here." }, { "start": 1733.06, "end": 1738.78, "text": " But also by having the same amount of random blocks per input token, what you'll do is" }, { "start": 1738.78, "end": 1745.3, "text": " you'll end up with just one of these columns right here, or you know, R of these columns." }, { "start": 1745.3, "end": 1747.6599999999999, "text": " And that again gives you a rectangular matrix." }, { "start": 1747.66, "end": 1753.3400000000001, "text": " So this thing right here you can process very, very efficiently using a TPU." }, { "start": 1753.3400000000001, "end": 1759.3000000000002, "text": " And you know, the mistakes you make are basically this thing right here and this thing right" }, { "start": 1759.3000000000002, "end": 1764.88, "text": " here, because those weren't intended and are at the edges of the sequence." }, { "start": 1764.88, "end": 1771.3400000000001, "text": " So these were the tricks of Big Bird to quickly summarize." }, { "start": 1771.34, "end": 1779.6599999999999, "text": " Big Bird is basically taking a transformer saying, well, why do we need all of this attention," }, { "start": 1779.6599999999999, "end": 1784.78, "text": " all of this full attention, maybe we only need some of that and can already do a big" }, { "start": 1784.78, "end": 1789.98, "text": " job, a good job, especially, you know, considering the attention mechanism goes over multiple" }, { "start": 1789.98, "end": 1791.6599999999999, "text": " layers." }, { "start": 1791.6599999999999, "end": 1797.86, "text": " So we don't need a routing from each token to each token, we can make up for not having" }, { "start": 1797.86, "end": 1801.8799999999999, "text": " a fully connected graph by simply running multiple layers." }, { "start": 1801.8799999999999, "end": 1809.1, "text": " So their sparsity is first of all, you have this random attention, which I believe changes" }, { "start": 1809.1, "end": 1815.28, "text": " from sequence to sequence, but stays within or among the layers of the same sequence." }, { "start": 1815.28, "end": 1818.82, "text": " Then you have the window attention with the reasoning." 
}, { "start": 1818.82, "end": 1822.74, "text": " So the reasoning behind the random attention is that if you have a randomly connected graph," }, { "start": 1822.74, "end": 1826, "text": " the path lengths are on average logarithmic." }, { "start": 1826, "end": 1828.52, "text": " So you can route information efficiently." }, { "start": 1828.52, "end": 1834.14, "text": " The reasoning behind the window attention is that probably neighbor information is very" }, { "start": 1834.14, "end": 1837.5, "text": " important and that has been shown empirically." }, { "start": 1837.5, "end": 1841.66, "text": " And then the global attention, the reasoning behind this is that some of the tokens that" }, { "start": 1841.66, "end": 1848.42, "text": " are fixed by the developers are so important that it's very beneficial that each other" }, { "start": 1848.42, "end": 1853.46, "text": " node is connected to them and that they are connected to each other node." }, { "start": 1853.46, "end": 1859.54, "text": " The result of that is the Big Bird attention mechanism, which is basically long former," }, { "start": 1859.54, "end": 1864.4, "text": " which already had these two plus the random attention." }, { "start": 1864.4, "end": 1872.46, "text": " This achieves a linear complexity in terms of memory and compute, though linear has to" }, { "start": 1872.46, "end": 1878.78, "text": " be qualified a bit because it's modified by the window size, by the number of random attention" }, { "start": 1878.78, "end": 1885.86, "text": " tokens, by the number of global tokens, and in practice often ends up being fairly large" }, { "start": 1885.86, "end": 1888.66, "text": " ish." }, { "start": 1888.66, "end": 1896.8999999999999, "text": " And also the theoretical guarantees now come with the fact that you need multiple layers." }, { "start": 1896.8999999999999, "end": 1902.02, "text": " In the worst case, you need sequence length amount of layers, which in the worst case" }, { "start": 1902.02, "end": 1907.8999999999999, "text": " would result right back into a quadratic requirement for memory and compute." }, { "start": 1907.9, "end": 1916.8600000000001, "text": " They do some engineering, some engineering tricks right here, and their results are pretty" }, { "start": 1916.8600000000001, "end": 1917.8600000000001, "text": " good." }, { "start": 1917.8600000000001, "end": 1923.0600000000002, "text": " So the results in various tasks and we'll, we'll look at some of the tasks right here." }, { "start": 1923.0600000000002, "end": 1928.7, "text": " So these are def set results using base size models." }, { "start": 1928.7, "end": 1935.18, "text": " For example, where you can see they do outperform basic Roberta models, they outperform long" }, { "start": 1935.18, "end": 1940.6200000000001, "text": " former, which may mean that the random attention is useful, but you know, in these things," }, { "start": 1940.6200000000001, "end": 1947.46, "text": " it's also always may just mean that you've thrown more compute at it." 
}, { "start": 1947.46, "end": 1951.5800000000002, "text": " At least I'm not really looking that they outperform the models because as you can see" }, { "start": 1951.5800000000002, "end": 1955.7, "text": " right here, if they compare to state of the art and you know, granted, these are models" }, { "start": 1955.7, "end": 1962.38, "text": " that have been trained specifically for these tasks and are crafted and engineered and Big" }, { "start": 1962.38, "end": 1969.2600000000002, "text": " Bird manages to Big Bird manages to hold itself against them in a lot of tasks and even get" }, { "start": 1969.2600000000002, "end": 1971.6200000000001, "text": " state of the art on some." }, { "start": 1971.6200000000001, "end": 1976.7, "text": " What I'm more interested in is that it, you know, it can reach good numbers." }, { "start": 1976.7, "end": 1981.38, "text": " It doesn't necessarily have to be state of the art, but it can reach good numbers, which" }, { "start": 1981.38, "end": 1989.0200000000002, "text": " tells me that, okay, probably the, the empirical hit that I take by not having the full attention" }, { "start": 1989.02, "end": 1996.58, "text": " is, you know, is justifiable by the speed up and memory savings I do get." }, { "start": 1996.58, "end": 2001.58, "text": " Yeah, especially when result, when you see results mixed like this, you know, sometimes" }, { "start": 2001.58, "end": 2007.62, "text": " the other model is good and sometimes the Big Bird is good on different variations and" }, { "start": 2007.62, "end": 2008.62, "text": " so on." }, { "start": 2008.62, "end": 2012.48, "text": " I would not, you know, I would not make a big deal out of the fact that it is state" }, { "start": 2012.48, "end": 2013.48, "text": " of the art." }, { "start": 2013.48, "end": 2014.9, "text": " I get that the authors have to do that." }, { "start": 2014.9, "end": 2022.9, "text": " I would do so as well, but you know, you don't, don't think that this is the, like the best" }, { "start": 2022.9, "end": 2023.9, "text": " thing now." }, { "start": 2023.9, "end": 2025.22, "text": " It's very probable." }, { "start": 2025.22, "end": 2028.76, "text": " They just thrown also a lot of compute at it." }, { "start": 2028.76, "end": 2032.5400000000002, "text": " What is cool is they do some genomics experiments." }, { "start": 2032.5400000000002, "end": 2039.3400000000001, "text": " So not only do they have NLP state of the art, but also they go into genomics and experiment" }, { "start": 2039.3400000000001, "end": 2040.3400000000001, "text": " with data there." }, { "start": 2040.34, "end": 2045.3799999999999, "text": " I don't want to go into that because ultimately it's another task and I believe the paper" }, { "start": 2045.3799999999999, "end": 2046.86, "text": " is about the architecture." }, { "start": 2046.86, "end": 2047.9399999999998, "text": " All right." }, { "start": 2047.9399999999998, "end": 2050.98, "text": " So that was Big Bird." }, { "start": 2050.98, "end": 2054.74, "text": " I hope you enjoyed this video and learned." }, { "start": 2054.74, "end": 2056.62, "text": " I learned something." }, { "start": 2056.62, "end": 2058.34, "text": " Certainly." }, { "start": 2058.34, "end": 2065.2999999999997, "text": " If you want to check out the proofs, they're actually pretty entertaining to read and yeah," }, { "start": 2065.2999999999997, "end": 2067.2999999999997, "text": " I'll see you next time." }, { "start": 2067.3, "end": 2071.02, "text": " Bye bye." } ]
O9kFX33nUcU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
On the Measure of Intelligence by François Chollet - Part 4: The ARC Challenge (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "chollet", "keras", "google", "francois", "intelligence", "iq", "iq test", "deep neural networks", "prior", "skill", "performance", "measurement", "measure", "test", "number", "intelligent", "smart", "learning", "generalization", "ability", "experience", "humans", "evolution", "nature", "nurture", "psychometrics", "range", "adaptability", "arc", "kaggle", "difficulty", "entropy", "core knowledge", "objectness", "navigation", "contact", "agent", "goal" ]
In this part, we look at the ARC challenge as a proposed test of machine intelligence. The dataset features 1000 tasks that test rapid generalization based on human core knowledge priors, such as object-ness, symmetry, and navigation. OUTLINE: 0:00 - Intro 0:55 - What is ARC? 6:30 - The Goals of ARC 10:40 - Assumed Priors & Examples 21:50 - An Imagined Solution 28:15 - Consequences of a Solution 31:00 - Weaknesses 31:25 - My Comments & Ideas Paper: https://arxiv.org/abs/1911.01547 ARC: https://github.com/fchollet/ARC Abstract: To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans. Authors: François Chollet Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, and welcome to the last part of On the Measure of Intelligence by François Chollet. This last part concerns the ARC challenge that Chollet has proposed, or the ARC dataset, which stands for Abstraction and Reasoning Corpus. We're just quickly going over the dataset, looking at how it's built, and discussing what kinds of solutions might be relevant here. If you haven't seen the previous videos in this series, this being the last one of the series, you might not know exactly what's going on, but I think you can keep up pretty well, because this part is fairly independent of the other parts. It's just cool to think about even if you haven't seen the other ones; I encourage you to go see them, but it's not necessary. Okay, let's jump in. So ARC is a challenge, currently running as a Kaggle challenge, but in essence it is a dataset, and let me just jump into one of its tasks. In this dataset, a task always has the following form: you have multiple input examples, say these are called the training examples, and then you have a test example. In this case, you have three training examples and one test example. If you think of this in a machine learning way, this entire thing here is your x, and this thing here is your y. So the label is going to be the output of the last example; of course, in the training dataset you know it, but in the test you don't. Each of these, as I said, is a demonstration; these are the demonstration examples. You're supposed to learn the regularity from the demonstration examples, and then, on the test example, you're supposed to apply the regularity that you learned. So here, a human can fairly accurately see that there are these black squares in each image, and that in the training samples the output always exactly matches the place of these black squares. As you can see, this is like a tall rectangle: it goes here, it has the same number of tiles, and so on. You can also see that whatever colors are in there are the continuation of a symmetric pattern. This here is exactly the same as up here, but flipped, or turned by 180 degrees. So there is a notion of symmetry right here. Technically, one could compute this: one would say, oh, that's probably going to be these three rows and this bunch of columns, and it's probably going to be the same as this part down here, just flipped on its head. As a human, you get this even without a description. You realize, oh, this is a regular pattern, it's symmetric, there's a hole in it, and apparently the output always fills the hole; three examples are enough for me to confirm that that's what's going on. I see the hole here, so I'm going to do the same thing. So you can already see how these things are constructed. This is not the only task, by the way; this is just one task. There are 1000 tasks of this sort of nature in this dataset. There aren't always three demonstration examples; I believe there can be more or fewer. But what's always the case is that each of these tasks consists of these demonstration examples and a test example, and each of the demonstration examples consists of an input grid and an output grid, which can be anywhere from one by one to 30 by 30, anything in between.
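As a side note, the task format itself is simple to work with. Here is a minimal sketch of loading one task, assuming the JSON layout of the public ARC repository (github.com/fchollet/ARC), where each task file has a "train" and a "test" list of input/output pairs and each grid is a list of lists of integers; the file name below is a hypothetical placeholder.

```python
import json

def load_task(path):
    """Return the demonstration pairs and the test pairs of one ARC task."""
    with open(path) as f:
        task = json.load(f)
    return task["train"], task["test"]

# Hypothetical file name, purely for illustration.
train_pairs, test_pairs = load_task("ARC/data/training/some_task.json")

for pair in train_pairs:  # demonstration examples: both grids are visible
    inp, out = pair["input"], pair["output"]
    print(f"demo: {len(inp)}x{len(inp[0])} -> {len(out)}x{len(out[0])}")

# At test time only test_pairs[0]["input"] is given; the solver has to
# produce the entire output grid, including choosing its height and width.
```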
And the colors here: there are ten different colors a cell can take, and they're just encoded by ten different numbers. You can see black, blue, orange, red, dark blue, and so on. The output grid is exactly the same kind of thing. Now, in the test example, you can only see the input grid; you cannot see the output grid, and that means you don't even know how large it should be. You can see right here that the output grids are not all the same size; in fact, not even the input grids always have to be the same size. But you now have to come up with an output grid, and you first have to decide how big it is. Here we've determined that since the hole has three rows, we're probably going to make three rows, and since it has seven columns, we're probably going to make seven columns. That's the sort of thing you have to do. And not only do you have to decide how big it is, you then have to decide, for each cell, what color to put in. Only if this exactly matches the test label do you get a point; otherwise you get no point. In the training split, there are 400 of these tasks; then there are 400 more as an evaluation split, which are still public; and then there are 200 that are secret, which are, I guess, part of this Kaggle challenge. Yes: the training set features 400 tasks, while the evaluation set features 600 tasks. The evaluation set is further split into a public evaluation set of 400 tasks and a private evaluation set of 200 tasks. All tasks are unique, and the set of test tasks and the set of training tasks are disjoint. The task data is available, as you can see right here. I really hope that Chollet will keep these 200 tasks secret even after the Kaggle challenge, because it's going to be fun for people who might want to get into this later. So here are the goals of this dataset. They want to stay close to psychometric intelligence tests. In particular, it should be solvable by humans without any specific practice or training, and probably also without any language instructions. You should just be able to set a human in front of it, and the human, or at least a large portion of humans, should be able to solve it. Ideally, this test would also differentiate humans from each other, but at this point, we simply want to assess machines. Then: focus on measuring developer-aware generalization rather than task-specific skill, by only featuring novel tasks in the evaluation set, where the novel tasks are unknown to the developer of the test taker. So if I develop a system, I don't know what the 200 tasks are that Chollet keeps hidden; I simply submit my code, and I'll find out whether it does well on them. They also want to feature highly abstract tasks that must be understood by a test taker using very few examples. That's what you saw: you don't have a big training set to learn that this task is about symmetry and hole filling. You only have three examples, and from three, you need to recognize what's going on and produce the output of the test sample. Then: quantitatively control for experience by only providing a fixed amount of training data for each task, that's what we saw, and by only featuring tasks that do not lend themselves well to artificially generating new data.
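To make that all-or-nothing scoring concrete, here is a tiny sketch of the kind of check a grader might run; it just illustrates the rule described above and is not the official evaluation code.

```python
# Sketch of the all-or-nothing scoring: a predicted grid earns a point
# only if its shape and every single cell match the hidden output grid.
def exact_match(pred, target):
    if len(pred) != len(target):  # wrong number of rows: no point
        return False
    return all(p_row == t_row for p_row, t_row in zip(pred, target))

print(exact_match([[1, 2], [3, 4]], [[1, 2], [3, 4]]))  # True
print(exact_match([[1, 2], [3, 4]], [[1, 2], [3, 0]]))  # False: one wrong cell
print(exact_match([[1, 2]], [[1, 2], [3, 4]]))          # False: wrong height
```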
So it's not like ImageNet, where you can go on the internet and find a whole bunch of images, or some NLP task where people pre-train on all of Wikipedia and all the books in the world because they want to understand language better. These tasks are supposed to be such that it makes no sense for you to go out and find more data, or find similar data, or pre-train your model on something.

And lastly, and this refers to the last few chapters we looked at: explicitly describe the complete set of priors the test assumes, and enable a fair general intelligence comparison between humans and machines by only requiring priors close to innate human prior knowledge. That means whatever humans have as priors built into them, by evolution, say, or picked up by most humans through life, those are the things the test has to explicitly describe, such that I, as the developer of a system, can build them into my system and the comparison is fair. In the last chapters, we saw that an intelligence comparison is only fair if the two systems being compared have the same amount of experience, which here is controlled by providing a fixed amount of training data, and the same prior knowledge. Here they simply do that by listing the human priors that the tasks require and that we think humans have, and then developers can explicitly build those into machines. So I could maybe build a little calculator module into my AI that helps solve these tasks.

Okay, so they say: each task consists of a small number of demonstration examples, 3.3 on average, and a small number of test examples, generally one, although it might be two or three in rare cases. Each example consists of an input grid and an output grid. Each grid is a literal grid of symbols, and each symbol is visualized by a color. There are 10 unique symbols, and a grid can be any height or width between one by one and 30 by 30, so it doesn't even need to be square. And as I said, as an AI taking this test, you need to provide your own output grid.

So here are the priors this test assumes, and we're going to look at some example tasks from the training set where you can see these priors in action. There is an objectness prior, where the task tests whether the AI understands something about objects. These are tasks you can only reasonably solve if you know something about objects. A human would recognize that these things probably represent different objects. That's mainly, I think, because the black background helps, but you would recognize this even with another background. And here, the different colors indicate that those are two different things: even though these two pixels touch and both differ from black, you recognize them as two different objects because they have different colors. In general, you recognize each of these things as an individual object.

Without being told anything, you see here, for example, a denoising task. As a human, you can pretty quickly see what the task is about: there appear to be these green things, all rectangles, and there appear to be these blue things, and on the right side, the blue things are gone.
But it's not simply that every blue thing turns green. Only where a blue pixel was inside a green thing is there now a green pixel; wherever a blue pixel was outside, in the black area, there is now black. So the blue things were noise, and you're able to remove it. This already tests a lot of assumptions, a lot of these priors, a lot of understanding of the world. There are objects; the human understands that the objects here are rectangles; the human understands that we need to remove the blue things; and the human understands the inside relation, that whether something is inside or outside one of these rectangles determines whether we turn the pixel green or black. Think about how you would train a machine to do something like this. It's not easy, especially if you don't know that this task is coming. And imagine that for all of these things: you don't know the task is coming. This is just one of the 400 tasks you know of; there are 600 tasks you don't know of that are similar, but also, in a way, completely different.

Here's another task: object influence via contact. In the first demonstration example, a human pretty quickly recognizes that there's a red thing and a blue thing, and in the output they're together. In the next example, again a blue thing and a red thing, and again they end up together. And if you look closely, it's always the red thing going to the blue thing in the most direct way, along the grid. That's all a human needs: after two examples, most humans will already make that inference and can solve a test example where, say, the blue thing is down here and the red thing is over here. You know the red thing is going down to the blue thing. But it's very hard to train a machine to do this. So I like this test, because it's a different kind of test. And I believe these tasks weren't procedurally generated; they were actually made by Chollet, by actual humans. That's pretty cool, and 1000 tasks like this are going to be very hard to solve.

There are even more abstract priors, like goal-directedness. You can already see that a little bit here, in that you can say the red thing wants to go to the blue thing, so there is maybe a notion of time involved. There is also a counting and numbers prior. And here you see something like a time process: in this demonstration example, you see blue things and a big red thing, and the output grid has this green trail. As a human, you immediately recognize what happened: the green thing shoots out from the blue thing, hits the red wall, and bounces off. Try to make a machine understand this. This is insane, right? If you look at more examples, it appears that the blue thing always comes from somewhere like the side of the image, the green thing comes out of whatever is not at the border of the image, and it bounces off the red thing whenever it hits it. Now here, you can already see what's going to happen.
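To appreciate how much is packed into this one rule, here is what a hand-written version of it might look like. This is a toy sketch only: the color codes are my own assumption (0 for black, 2 for the red wall, 3 for the green trail), and the real tasks vary the setup from pair to pair.

```python
def shoot(grid, start, direction):
    """Paint a green ray from `start`, reflecting off red cells (toy version)."""
    rows, cols = len(grid), len(grid[0])
    (r, c), (dr, dc) = start, direction
    grid[r][c] = 3                          # the trail starts at the source
    for _ in range(rows * cols):            # hard cap so the toy can't loop forever
        nr, nc = r + dr, c + dc             # the cell the ray wants to enter
        if not (0 <= nr < rows and 0 <= nc < cols):
            break                           # the ray leaves the grid: done
        if grid[nr][nc] == 2:               # about to hit the red wall:
            dr = -dr                        # reflect the vertical component
            continue                        # re-aim without moving
        r, c = nr, nc
        grid[r][c] = 3                      # paint and advance
    return grid

# tiny demo: a diagonal ray in a 5x5 grid with a red wall along the bottom row
g = [[0] * 5 for _ in range(5)]
g[4] = [2] * 5
shoot(g, (0, 0), (1, 1))                    # bounces at the wall and exits right
```

And even this hand-written version already bakes in choices (only vertical reflection, a single ray, fixed colors) that an actual solver would have to infer from the demonstrations alone.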
Remember, your AI would first need to determine that all of these output grids seem to be the same as the input grids, so it would need to explicitly construct the output grid in the same manner as the input grid, because it understands this; it's not the same in every task. Then it needs to recognize the red thing that stays put in every example, and place the red thing accordingly. Then it needs to recognize that the blue thing stays as well. And then, most strikingly, it needs to figure out: I will draw a line in pixels, and lines in pixels are hard, and as soon as the line would hit the red thing, it bounces off in the other direction. From just these three examples, the machine has to understand all of that and output the exact solution. Not an approximate solution: the exact solution.

Okay, so there are these basic geometry and topology priors: lines, rectangular shapes, symmetries, rotations, translations, shape upscaling, containing and being contained, drawing lines, connecting points, and so on.

Now let's look at some more examples. These are fun, right? Check out this one: you see green, red, and then somehow the green connects to the red. This is an example that has many of these priors, many of these concepts, in it. There is goal-directedness: you can already form the hypothesis that the green wants to go to the red. But you also see that the blue things seem to be obstacles, and the green appears to change direction when it encounters an obstacle, like here. In the next example, you probably confirm that hypothesis: it always goes until it hits something, and then it changes direction towards the red thing. Towards the red thing, not always towards the right, because here it turns towards the left. It's pretty ambiguous in this situation, but you could also assume that when it's ambiguous, it goes towards the middle. Maybe. And here we actually confirm it: we go towards the red thing, which is this direction, hit an obstacle, go towards the red thing again until we hit an obstacle, and then go here. Also notice that these grids are not the same size, so it's not even always the case that grids within the same task have the same size. So again, your AI would need to recognize what size of grid it needs to draw and what the result is: it would need to copy this entire grid and also change these pixels right here to green. That's hard; I find this to be pretty hard. This is the line extrapolation prior, together with turning on obstacles and efficiently reaching a goal. That's crazy.

And is there more? Yes, there are two more, I believe. Those are the last examples. In this one, there appear to be objects: these blue objects appear to be the same, and there are these red ones, and the output grid is one of the blue objects. So here we again see different objects, and the output grid is one of them. As a human, you can already guess that the output grid is probably always going to be one of these objects, and now we need to decide which one. We can formulate the hypothesis that it's probably going to be the one that appears most often: here there are three of the blue ones, here there are four of the yellow ones, more than any other.
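Even the counting part of that hypothesis takes real machinery to mechanize. Here is a sketch under one possible reading of "object", a 4-connected blob of a single color; note that this is already a modeling choice, since we saw earlier that touching pixels of different colors should count as separate objects.

```python
from collections import Counter, deque

def count_objects(grid, background=0):
    """Count 4-connected same-colored blobs, per color (one reading of 'object')."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    counts = Counter()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == background or seen[r][c]:
                continue
            color = grid[r][c]
            counts[color] += 1              # found a new object of this color
            queue = deque([(r, c)])         # flood-fill to mark the whole blob
            seen[r][c] = True
            while queue:
                cr, cc = queue.popleft()
                for nr, nc in ((cr + 1, cc), (cr - 1, cc), (cr, cc + 1), (cr, cc - 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not seen[nr][nc] and grid[nr][nc] == color):
                        seen[nr][nc] = True
                        queue.append((nr, nc))
    return counts

# e.g. count_objects(grid).most_common(1) would give the most frequent color
```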
And the next example confirms our hypothesis: it's the object that appears most often. Again, you see the notion of objectness here. And no, you don't need to upscale; the grid is the same size, it's simply the image that's displayed larger. But you need to be able to focus in on each of these objects, count them, and compare the counts to each other. And then, as a human, you can pretty easily see that the output grid is going to contain one of those blue things.

And here is a symmetry-filling task. As a human, you might need just one demonstration to get this, maybe more, but many tasks involve some sort of symmetry. Drawing the symmetrized version of a shape around a marker: that's going to be fairly hard for a machine to learn without the developer knowing that this task is coming.

Okay, they highlight some differences to standard psychometric tests, but what I find interesting is this part: what a solution to ARC may look like, and what it would imply for AI applications. They say they have found ARC to be fully solvable by humans: they sat a human in front of every one of these tasks, and it's solvable. While many ARC tasks are intellectually challenging, human test takers appear to be able to solve the majority of tasks on their first try, without any practice or verbal explanations. In fact, in this challenge you get three tries at each of the problems, and humans can already solve them in one. That just shows you how cool humans are.

So here Chollet suggests a solution approach. He says: start by developing a domain-specific language capable of expressing all possible solution programs for any ARC task. Since the exact set of ARC tasks is purposely not formally definable, this may be challenging; the space of tasks is defined as anything expressible in terms of ARC pairs that would only involve core knowledge. Core knowledge is this set of human priors that we discussed last time, like objectness, symmetries, geometric shapes, navigation, and so on. So he asks you to develop a DSL that can capture all the different tasks, to basically define a formalism for these tasks. But that's hard, because you don't know what the tasks are going to be, so your best bet is probably a formalism that over-represents what the tasks can be. It would require hard-coding the core knowledge priors from section 3.1.2 in a sufficiently abstract and combinable program form, to serve as basis functions for a kind of human-like reasoning DSL. They believe that solving this specific subproblem is critical to general AI progress. This is basically saying that AI will make a big step once we can formally describe human priors. And while that's true, I feel this problem is as hard as actually building general artificial intelligence, or very close to it. So it's a bit like: how to build AGI, step one, build AGI. Not exactly, but it's kind of what this says. Right.
So if I could actually have this DSL that describes every single task, and do it such that it doesn't massively over-capture the tasks, then I would have described human core knowledge to a sufficiently accurate degree that I could just build AGI.

He goes on: given a task, use the DSL to generate a set of candidate programs that turn the input grids into the corresponding output grids. This step would reuse and recombine sub-programs that previously proved useful in other ARC tasks. So once you have captured the problem space in a formal language, you can use that formal language to express whatever your input is: you take the demonstration examples and come up with source code in your DSL that, given each input grid, produces the corresponding output grid, reusing and recombining sub-programs that previously proved useful. Then: select top candidates among these programs, so among the multiple versions of source code you generated, based on a criterion such as program simplicity or program likelihood. Note, they say, that we should not expect merely selecting the simplest possible program that works on the training pairs to generalize well to the test pairs. And finally: use the top three candidates to generate output grids for the test examples.

I feel the approach makes sense, but it is somewhat over-hopeful in my mind, and that's mainly because of step one. Step one asks you to come up with a programming language that can capture all the tasks in the dataset, even though you don't know what the tasks are, and that contains human core knowledge in a formally describable way. Then, once you have that programming language, given a task with a bunch of demonstration examples and a test example, you would generate all the programs that, given the input grids, produce the output grids. Then you would somehow select, among all these programs, the one that you think generalizes best, and use that program on the test input to get the solution. As they say, it's probably not always the simplest program, not always the shortest program, maybe, who knows. I feel step one is the crucial issue here. I'll sketch a toy version of this whole recipe in a moment.

Okay, so they make some claims about what this would bring the community. They posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone, only requiring a handful of demonstrations to specify complex tasks, to do a wide range of human-relatable tasks of a kind that would normally require human-like fluid intelligence. As supporting evidence, they note that human performance on psychometric intelligence tests, which are similar to ARC, is predictive of success across all human cognitive tasks.
Further, they posit that since an ARC solver and human intelligence would both be founded on the same knowledge priors, the scope of application of an ARC solver would be closer to that of human cognition, making such a solver both practically valuable and easy to interact with, and it would produce behavior that is in line with human expectations.

So they're making the same argument that others before them have made, but they condition it on some things, and this, I think, is the conclusion of the entire article, because people have had this hope before. They say right here that these claims are highly speculative and may prove incorrect, much like Newell's 1973 hopes that progress on chess playing would translate into meaningful progress on achieving a broad range of cognitive abilities, especially if ARC turns out to feature unforeseen vulnerabilities to unintelligent shortcuts. This is the AI effect, and it basically means that whenever you think the solving of a task represents AI, and then you actually see the solution, the solution turns out not to be AI in the eyes of humans. At first people say, oh, this task really requires intelligence, and then someone solves it, and they say, oh, that's not intelligence, you kind of hacked your way to it. The expectation is that for this ARC challenge, too, there might be a hacky way in. But the good question is: is there even a possible task where you wouldn't say that? I'm not so sure about this; they seem to be more hopeful than I am. But at least the ARC challenge is founded on the same priors a human has, and it gives you the same amount of experience a human has, and therefore it is much more comparable to human intelligence.
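As promised, here is a toy flavor of Chollet's three-step recipe, at a microscopic scale. Everything in it is an assumption made for illustration: four hand-picked primitives stand in for a core-knowledge DSL, brute-force enumeration stands in for program synthesis, and shortest-first ordering stands in for the simplicity criterion. It is emphatically not a proposal for actually solving ARC.

```python
from itertools import product

# Step one, shrunk beyond recognition: a handful of grid -> grid primitives.
# Real core-knowledge priors would need far richer, parameterized building blocks.
PRIMITIVES = {
    "identity":  lambda g: g,
    "flip_v":    lambda g: g[::-1],                      # flip top to bottom
    "flip_h":    lambda g: [row[::-1] for row in g],     # flip left to right
    "transpose": lambda g: [list(col) for col in zip(*g)],
}

def run(program, grid):
    for name in program:                    # a "program" is just a list of names
        grid = PRIMITIVES[name](grid)
    return grid

def search(train_pairs, max_depth=3):
    """Steps two and three: enumerate programs shortest-first and keep the ones
    consistent with every demonstration pair."""
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            if all(run(program, p["input"]) == p["output"] for p in train_pairs):
                yield program

# demo: the demonstrations say "flip the grid upside down"
pairs = [{"input": [[1, 0], [0, 0]], "output": [[0, 0], [1, 0]]}]
best = next(search(pairs))
print(best, run(best, [[0, 2], [0, 0]]))    # apply the found program to a test input
```

Even this toy makes the core difficulty visible: the search space explodes with program depth, and nothing guarantees that a program fitting the demonstrations also generalizes to the test pair, which is exactly the selection problem the article warns about.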
They go over some weaknesses right here, criticizing their own work: generalization is not quantified (they have a measure of generalization in the previous chapter, but they don't use it here), test validity is not established, dataset size and diversity may be limited, and so on. But in my mind, I would not consider this an AGI task or anything like that. I'm pretty sure the solution to this will again come in a form where people don't really think it exhibits intelligence. But I do like the task as such, and as a machine learner, I am very excited to think about how machine learning can go about solving it. Especially with what we've seen from something like GPT-3, which has exactly this kind of structure: you pre-train your language model on a giant dataset, and then at inference time you input a bunch of demonstration examples and ask it for the next output. I feel that might be a good start. The question, of course, is what you would pre-train this GPT-3-for-ARC on. What's the pre-training dataset? I guess that's going to be the challenge, and it's probably going to require people to specifically program all of these priors into a dataset generator for pre-training.

So that would be my approach: write a dataset generator for pre-training a GPT-3-style model to do these kinds of tasks. And in order to write that generator, you'd basically have to program in all of these priors, and that's not going to be easy. Your best bet is to put yourself into Chollet's shoes and ask: if I were to design a task, what kinds of things would I do? And then try to capture that. Your most honest bet with respect to the challenge is to try to implement, as faithfully as possible, something like an objectness prior where cohesion and persistence are captured. That would be the most scientifically sound approach.

Alright, so that was my take on the ARC dataset. If you have any comments, I'm very excited to hear them. If you have already tried the ARC challenge and have some insight, I also welcome comments on that. And with that, I'll see you next time. Bye bye.
[ { "start": 0, "end": 9.040000000000001, "text": " Hi there and welcome to the last part of On the Measure of Intelligence by François Chollet." }, { "start": 9.040000000000001, "end": 16, "text": " This last part concerns the ARC challenge that Chollet has proposed, or the ARC dataset," }, { "start": 16, "end": 20.6, "text": " which stands for the Abstraction and Reasoning Corpus." }, { "start": 20.6, "end": 26.84, "text": " And we're just quickly going over the dataset, look how it's built and discuss what kind" }, { "start": 26.84, "end": 30.12, "text": " of solutions might be relevant right here." }, { "start": 30.12, "end": 36.08, "text": " So if you haven't seen the last videos in this series, this is the last one of a series," }, { "start": 36.08, "end": 38.56, "text": " you might not exactly know what's going on." }, { "start": 38.56, "end": 43.760000000000005, "text": " But I think you can you can keep up pretty well because this part is fairly independent" }, { "start": 43.760000000000005, "end": 45.519999999999996, "text": " of the other parts." }, { "start": 45.519999999999996, "end": 49.44, "text": " And it's just cool to think about even if you haven't seen the other ones, I encourage" }, { "start": 49.44, "end": 53.24, "text": " you to go see the other ones, but it's not necessary." }, { "start": 53.24, "end": 55.92, "text": " Okay, let's jump in." }, { "start": 55.92, "end": 61.400000000000006, "text": " So the ARC is a challenge currently running a Kaggle challenge." }, { "start": 61.400000000000006, "end": 64.36, "text": " But in essence, it is a dataset." }, { "start": 64.36, "end": 68.84, "text": " And let me just jump into one of the tasks of the dataset." }, { "start": 68.84, "end": 74.24000000000001, "text": " So in this dataset, you always have the task in the following form." }, { "start": 74.24000000000001, "end": 82.24000000000001, "text": " So you always have multiple input examples like this or say these are called the training" }, { "start": 82.24000000000001, "end": 83.48, "text": " examples." }, { "start": 83.48, "end": 85.32000000000001, "text": " And then you have a test example." }, { "start": 85.32, "end": 87.88, "text": " In this case, you have three training example one test example." }, { "start": 87.88, "end": 95.28, "text": " So an entire if you think of this in a machine learning way, this entire thing here is your" }, { "start": 95.28, "end": 98.96, "text": " x and this thing here is your y." }, { "start": 98.96, "end": 105.56, "text": " Okay, so the label is going to be the output of the last example that you you don't know" }, { "start": 105.56, "end": 113.8, "text": " that now in the course in the training data set you do, but in the test you don't." }, { "start": 113.8, "end": 118.16, "text": " So each one of these, as I said is is is demonstrated." }, { "start": 118.16, "end": 119.82, "text": " These are the demonstration examples." }, { "start": 119.82, "end": 124.72, "text": " And then you're supposed to sort of learn the regularity out of the demonstration examples." }, { "start": 124.72, "end": 130.54, "text": " And then on this test example, you are supposed to apply this regularity that you learned." 
}, { "start": 130.54, "end": 136.96, "text": " So in here, a human can fairly accurately see that there are these black squares in" }, { "start": 136.96, "end": 144.18, "text": " each image, and that in the training samples, the output will always sort of exactly match" }, { "start": 144.18, "end": 146.20000000000002, "text": " into the place of these black squares." }, { "start": 146.20000000000002, "end": 148.44, "text": " As you can see, this is like a high rectangle." }, { "start": 148.44, "end": 151.68, "text": " It goes here, it has the same amount of tiles and so on." }, { "start": 151.68, "end": 159.60000000000002, "text": " And you can also see that whatever colors are in here sort of are the continuation of" }, { "start": 159.60000000000002, "end": 160.96, "text": " a symmetric pattern." }, { "start": 160.96, "end": 170.52, "text": " So here, this is exactly the same as up here, but you know, flipped or turned by 180 degrees." }, { "start": 170.52, "end": 173.4, "text": " So there is a notion of symmetry right here." }, { "start": 173.4, "end": 179.72, "text": " So technically, one could compute this one would say, oh, that's probably going to be" }, { "start": 179.72, "end": 183.76000000000002, "text": " the three rows and this bunch of things." }, { "start": 183.76000000000002, "end": 187.88, "text": " And it's probably going to be the same as this one down here, but just flipped on its" }, { "start": 187.88, "end": 189.20000000000002, "text": " head." }, { "start": 189.2, "end": 194.67999999999998, "text": " So as a human, you get this even without a description, you realize like, oh, this is" }, { "start": 194.67999999999998, "end": 198.67999999999998, "text": " like a regular pattern, it's symmetric, there's a hole in it." }, { "start": 198.67999999999998, "end": 203.17999999999998, "text": " And apparently, the thing here always fills the hole, I can see that, you know, three" }, { "start": 203.17999999999998, "end": 206.56, "text": " examples are enough for me to confirm that that's what's going on." }, { "start": 206.56, "end": 207.79999999999998, "text": " And I see the hole here." }, { "start": 207.79999999999998, "end": 211.2, "text": " So I'm going to do the same thing." }, { "start": 211.2, "end": 215.56, "text": " So you can already see how these things are constructed in every, this is not the only" }, { "start": 215.56, "end": 218.35999999999999, "text": " task, by the way, this is just one task." }, { "start": 218.36, "end": 224.84, "text": " Okay, there are 1000 tasks in this data set of this sort of nature." }, { "start": 224.84, "end": 230.68, "text": " Now they're not always three demonstration examples, I believe there can be more or less." }, { "start": 230.68, "end": 235.08, "text": " But what's always the case is they always each of these training examples consist of" }, { "start": 235.08, "end": 238.76000000000002, "text": " these demonstration examples and these test example." }, { "start": 238.76000000000002, "end": 244.84, "text": " Each of the demonstration examples consists of an input grid and an output grid, the input" }, { "start": 244.84, "end": 251.24, "text": " grid and output grid, they can be anywhere from one by one to 30 by 30." }, { "start": 251.24, "end": 255.48000000000002, "text": " Okay, anywhere in between that." 
}, { "start": 255.48000000000002, "end": 260.88, "text": " And the colors here, I believe there are nine different colors that can go, they're just" }, { "start": 260.88, "end": 263.04, "text": " encoded by nine different numbers." }, { "start": 263.04, "end": 267.52, "text": " But there are nine different colors that these things can have, you can see black, blue," }, { "start": 267.52, "end": 271.88, "text": " orange, red, dark blue, and so on." }, { "start": 271.88, "end": 275.3, "text": " And the output grid exactly the same." }, { "start": 275.3, "end": 281.94, "text": " Now in this test example, you can only see the input grid, you cannot see the output" }, { "start": 281.94, "end": 282.94, "text": " grid." }, { "start": 282.94, "end": 285.52, "text": " And that means you don't even know how large it should be." }, { "start": 285.52, "end": 288.84, "text": " You can see right here, they're not all the same size, the output grids." }, { "start": 288.84, "end": 292.36, "text": " In fact, not even the input grids have to be always the same size." }, { "start": 292.36, "end": 295.88, "text": " But you have to now come up with an output grid." }, { "start": 295.88, "end": 297.64, "text": " You have to first decide how big it is." }, { "start": 297.64, "end": 301.44, "text": " And we've here we've determined since the whole has three rows, we're probably going" }, { "start": 301.44, "end": 307.36, "text": " to make three rows and it has like seven columns, we're probably going to make seven columns." }, { "start": 307.36, "end": 309.56, "text": " And that's the sort of thing you have to do." }, { "start": 309.56, "end": 314.12, "text": " And then not only do you have to decide how big it is, you now have to decide in each" }, { "start": 314.12, "end": 317.16, "text": " cell what color you put in." }, { "start": 317.16, "end": 326.92, "text": " And only if this thing exactly matches the train or the test label, you get a you get" }, { "start": 326.92, "end": 328.15999999999997, "text": " a point." }, { "start": 328.16, "end": 331.44, "text": " Otherwise you get no point." }, { "start": 331.44, "end": 338.52000000000004, "text": " So in the training task, there are I believe 400 of these tasks and then there are 400" }, { "start": 338.52000000000004, "end": 345.48, "text": " more as test split, but these are still public and then there are 200 that are secret." }, { "start": 345.48, "end": 349.6, "text": " There are I guess, part of this Kaggle challenge." }, { "start": 349.6, "end": 357.56, "text": " Yes, the training set features 400 tasks, while the evaluation set features 600 tasks." }, { "start": 357.56, "end": 362.32, "text": " The evaluation set is further split into a public evaluation set of 400 tasks and a private" }, { "start": 362.32, "end": 364.64, "text": " evaluation set of 200 tasks." }, { "start": 364.64, "end": 366.4, "text": " All tasks are unique." }, { "start": 366.4, "end": 370.16, "text": " And the set of tasks and the set of training tasks are disjoint." }, { "start": 370.16, "end": 373.36, "text": " Sorry, of test tasks and training tasks." }, { "start": 373.36, "end": 379.26, "text": " The task data is available at this as you can see right here." }, { "start": 379.26, "end": 386.4, "text": " So I really hope that Cholay will keep these 200 tasks as a secret, even after the Kaggle" }, { "start": 386.4, "end": 393.15999999999997, "text": " challenge, because it's going to be fun for people that might want to get into this later." 
}, { "start": 393.15999999999997, "end": 395.71999999999997, "text": " So here are the goals of this data set." }, { "start": 395.71999999999997, "end": 400.67999999999995, "text": " They want to stay close to these psychometric intelligence tests." }, { "start": 400.67999999999995, "end": 405.4, "text": " They say in particular, it should be solvable by humans without any specific practice or" }, { "start": 405.4, "end": 409.12, "text": " training and probably also without any language instructions." }, { "start": 409.12, "end": 413.47999999999996, "text": " So you just be able to set a human in front of it and the human should be able to solve" }, { "start": 413.48, "end": 418.36, "text": " it or a large portion of humans should be able to solve it." }, { "start": 418.36, "end": 422.12, "text": " Ideally, this test would also differentiate humans from each other." }, { "start": 422.12, "end": 426.84000000000003, "text": " But at this point, we want to simply assess machines." }, { "start": 426.84000000000003, "end": 433.08000000000004, "text": " So they say focus on measuring developer aware generalization rather than task specific skill" }, { "start": 433.08000000000004, "end": 437.28000000000003, "text": " by only featuring novel tasks in the evaluation set." }, { "start": 437.28000000000003, "end": 440.88, "text": " And the novel tasks are unknown to the developer of a test taker." }, { "start": 440.88, "end": 447, "text": " So if I develop a system, I don't know what are these 200 tasks that Cholay keeps hidden." }, { "start": 447, "end": 455.04, "text": " I simply submit my code and I'll figure out if my code does well on them." }, { "start": 455.04, "end": 462.44, "text": " So they say they want to feature highly abstract tasks must be understood by a test taker using" }, { "start": 462.44, "end": 464.71999999999997, "text": " very few examples." }, { "start": 464.71999999999997, "end": 465.71999999999997, "text": " That's what you saw." }, { "start": 465.71999999999997, "end": 470, "text": " You don't have a big training example to learn that this task is about symmetry and hole" }, { "start": 470, "end": 471, "text": " filling." }, { "start": 471, "end": 472.28, "text": " You only have three." }, { "start": 472.28, "end": 477.08, "text": " And from three, you need to recognize what's going on and produce the output of the test" }, { "start": 477.08, "end": 481.12, "text": " sample." }, { "start": 481.12, "end": 484.28, "text": " Quality of control for experience by only providing a fixed amount of training data" }, { "start": 484.28, "end": 485.28, "text": " for each task." }, { "start": 485.28, "end": 486.28, "text": " That's what we saw." }, { "start": 486.28, "end": 491.54, "text": " And only featuring tasks that do not lend themselves well to artificially generating new data." }, { "start": 491.54, "end": 496.22, "text": " So it's not like ImageNet where you can go on the internet and find a whole bunch of" }, { "start": 496.22, "end": 501.98, "text": " images or some NLP tasks where people pre train on all of Wikipedia and all of the books" }, { "start": 501.98, "end": 505.62, "text": " in the world because they want to understand language better." }, { "start": 505.62, "end": 511.94000000000005, "text": " These tasks are supposed to be such that it makes no sense for you to go out and try to" }, { "start": 511.94000000000005, "end": 517.52, "text": " find more data or find similar data or pre train your model on something." 
}, { "start": 517.52, "end": 523, "text": " And then lastly, and this refers to the last few chapters we looked at explicitly describe" }, { "start": 523, "end": 530.2, "text": " the complete set of priors that it assumes and enable a fair general intelligence comparison" }, { "start": 530.2, "end": 536.32, "text": " between human and machines by only requiring priors to those innate human close to innate" }, { "start": 536.32, "end": 537.84, "text": " human prior knowledge." }, { "start": 537.84, "end": 545.36, "text": " So that means that whatever human have whatever humans have as a prior built into them by" }, { "start": 545.36, "end": 550.46, "text": " let's say evolution or that most humans have picked up through life." }, { "start": 550.46, "end": 555.5400000000001, "text": " Those are the things that you have to explicitly point out." }, { "start": 555.5400000000001, "end": 560.9200000000001, "text": " So and you require that and you have to point them out." }, { "start": 560.9200000000001, "end": 566.64, "text": " Sorry, explicitly describe them such that I as a developer of a system can build them" }, { "start": 566.64, "end": 569.88, "text": " into my system such that it's a fair comparison." }, { "start": 569.88, "end": 574.52, "text": " In the last chapters, we looked at the fact that a fair intelligence comparison is only" }, { "start": 574.52, "end": 579.6600000000001, "text": " fair if two systems that are compared to each other have the same amount of experience." }, { "start": 579.66, "end": 585.52, "text": " And here we control that by only providing a fixed amount of training data and also have" }, { "start": 585.52, "end": 587.9599999999999, "text": " the same prior knowledge." }, { "start": 587.9599999999999, "end": 593.0799999999999, "text": " And here we simply do that by listing the human priors that are required for the tasks" }, { "start": 593.0799999999999, "end": 595.24, "text": " that we think that humans have." }, { "start": 595.24, "end": 599.92, "text": " And then we enable the developers to explicitly build those into machines." }, { "start": 599.92, "end": 607.36, "text": " So I would maybe build a little calculator module into my AI that solves this tasks." }, { "start": 607.36, "end": 613.88, "text": " Okay, so they say he each task consists of a small number of demonstration examples," }, { "start": 613.88, "end": 619.16, "text": " 3.3 on average, and a small number of test examples, generally one, although it might" }, { "start": 619.16, "end": 622, "text": " be two or three in rare cases." }, { "start": 622, "end": 624.32, "text": " Each example consists of an input grid and an output grid." }, { "start": 624.32, "end": 627.4, "text": " Each grid is a literal grid of symbols." }, { "start": 627.4, "end": 630.08, "text": " Each symbol is visualized by color." }, { "start": 630.08, "end": 633.72, "text": " There are 10 unique symbols, a grid can be any height or width between one by one and" }, { "start": 633.72, "end": 638, "text": " 30 by 30, so it doesn't even need to be square, right?" }, { "start": 638, "end": 644.6600000000001, "text": " And as I said, you need to provide your own output grid as an AI taking this test." }, { "start": 644.6600000000001, "end": 646.96, "text": " So here are the priors that this test assumes." 
}, { "start": 646.96, "end": 652.72, "text": " And we're going to look at some examples that make it explicit like some tasks in the training" }, { "start": 652.72, "end": 656.4, "text": " set that where you can see these priors in action." }, { "start": 656.4, "end": 664.3199999999999, "text": " There's an object nest prior where the task assumes that the AI or the task tests that" }, { "start": 664.3199999999999, "end": 667.4599999999999, "text": " the AI understands something about objects." }, { "start": 667.4599999999999, "end": 672.4399999999999, "text": " So these are tasks that you can only reasonably solve if you know something about objects," }, { "start": 672.4399999999999, "end": 679.1999999999999, "text": " like you would, a human would recognize or would, you know, would recognize that these" }, { "start": 679.1999999999999, "end": 682.6, "text": " things might represent different objects, right?" }, { "start": 682.6, "end": 689.0600000000001, "text": " Now that's mainly, I think also due to the black background helps, but you would even" }, { "start": 689.0600000000001, "end": 694.44, "text": " recognize this with another background or here, the different colors indicate that those" }, { "start": 694.44, "end": 700.2, "text": " are two different things, even though those two pixels here touch and are different from" }, { "start": 700.2, "end": 704.48, "text": " black, you would recognize that those are two different things because they have different" }, { "start": 704.48, "end": 705.48, "text": " color." }, { "start": 705.48, "end": 711.36, "text": " But you would generally recognize one of these things as an individual object." }, { "start": 711.36, "end": 716.82, "text": " If you're not given anything here, you see, for example, a denoising task as a human," }, { "start": 716.82, "end": 720.02, "text": " you can pretty quickly see what the task is about, right?" }, { "start": 720.02, "end": 722.62, "text": " There appear to be these green things." }, { "start": 722.62, "end": 726.9, "text": " They're all rectangles and there appear to be these blue things." }, { "start": 726.9, "end": 729.9, "text": " And on the right side, there are no more blue things." }, { "start": 729.9, "end": 735.6800000000001, "text": " But the now it's not always that when there was a blue thing, there is now a green thing" }, { "start": 735.68, "end": 742.7199999999999, "text": " only here where it was sort of inside a green thing is now a green pixel." }, { "start": 742.7199999999999, "end": 748.8, "text": " Whenever there was a blue pixel outside in this black area, then there is now black." }, { "start": 748.8, "end": 752.88, "text": " So this is sort of like the blue things were noise and you're able to remove it." }, { "start": 752.88, "end": 756.4399999999999, "text": " This already tests a lot of assumptions." }, { "start": 756.4399999999999, "end": 760.12, "text": " A lot of these priors, a lot of understanding of the world." }, { "start": 760.12, "end": 762.9599999999999, "text": " So there are objects, right?" }, { "start": 762.96, "end": 770.52, "text": " As human understands that objects are square in this case, or rectangles, the human understands" }, { "start": 770.52, "end": 775.72, "text": " that it that we need to remove the blue things going over." 
}, { "start": 775.72, "end": 784.08, "text": " And the human understands that somehow this inside relation, right, if something is inside" }, { "start": 784.08, "end": 788.2800000000001, "text": " or outside of one of these rectangles, and that determines whether we have to turn the" }, { "start": 788.2800000000001, "end": 790.84, "text": " pixel green or black." }, { "start": 790.84, "end": 794.9200000000001, "text": " You can, I mean, think about how you would train a machine to do something like this." }, { "start": 794.9200000000001, "end": 800.0400000000001, "text": " It's not easy, especially if you don't know that this task is coming." }, { "start": 800.0400000000001, "end": 803.32, "text": " Imagine for all of these things, you don't know that the task is coming." }, { "start": 803.32, "end": 806.48, "text": " This is just one of 400 tasks that you know of." }, { "start": 806.48, "end": 814.0400000000001, "text": " There are 600 tasks that you don't know of that are similar, but also in a way completely" }, { "start": 814.0400000000001, "end": 816.8000000000001, "text": " different." }, { "start": 816.8000000000001, "end": 820.2800000000001, "text": " Here's another tasks that object influence via contact." }, { "start": 820.28, "end": 823.36, "text": " So this is your first demonstration example." }, { "start": 823.36, "end": 828.4399999999999, "text": " A human pretty quickly recognizes there appears to be red thing and a blue thing, and then" }, { "start": 828.4399999999999, "end": 831.1, "text": " they appear to be together." }, { "start": 831.1, "end": 835.24, "text": " And then in the next thing, you see, oh, there appears to be a blue thing and the red thing" }, { "start": 835.24, "end": 837.3399999999999, "text": " in the next thing, they appear to be together." }, { "start": 837.3399999999999, "end": 843.0799999999999, "text": " And if you look here, it always appears to be the red thing going to the blue thing in" }, { "start": 843.0799999999999, "end": 844.4, "text": " the most direct way." }, { "start": 844.4, "end": 847.64, "text": " So in the in the along the grid." }, { "start": 847.64, "end": 855.1999999999999, "text": " That's all that the human needs to see two examples and the human most humans will already" }, { "start": 855.1999999999999, "end": 862.96, "text": " make that inference and can now solve if there is like, if there now is a test example, where" }, { "start": 862.96, "end": 866.84, "text": " the blue thing is like, the blue thing is down here." }, { "start": 866.84, "end": 869.92, "text": " And the red thing is here like this." }, { "start": 869.92, "end": 874.24, "text": " And it asks you what comes next, you know, you know that the red thing is going down" }, { "start": 874.24, "end": 877.12, "text": " to the blue thing." }, { "start": 877.12, "end": 879.6, "text": " But it's very hard to train a machine to do this." }, { "start": 879.6, "end": 884.32, "text": " So I like this test, because it's sort of a different test." }, { "start": 884.32, "end": 887.72, "text": " And I believe the test these tests weren't procedurally generated." }, { "start": 887.72, "end": 894.64, "text": " These tests were actually generated by sholey or, you know, by by actual humans." }, { "start": 894.64, "end": 896.72, "text": " That's pretty cool." }, { "start": 896.72, "end": 900.84, "text": " And 1000 tasks like this is going to be very hard to solve." }, { "start": 900.84, "end": 905.58, "text": " There are even more abstract priors like goal directedness." 
}, { "start": 905.58, "end": 911.48, "text": " So now you here you can already see this a little bit in that you can say, well, the" }, { "start": 911.48, "end": 914.7, "text": " red thing wants to go to the blue thing." }, { "start": 914.7, "end": 919.2800000000001, "text": " So there is a notion of time involved, maybe." }, { "start": 919.2800000000001, "end": 922.64, "text": " There's also counting and numbers and numbers prior." }, { "start": 922.64, "end": 925.58, "text": " So here you see like a time process." }, { "start": 925.58, "end": 931.36, "text": " So in this demonstration example, you see blue things here, red, big thing." }, { "start": 931.36, "end": 936.12, "text": " And then the next the output grid is this green thing." }, { "start": 936.12, "end": 941, "text": " And as a human immediately recognize, okay, so it shoots out from the blue thing, the" }, { "start": 941, "end": 946.5600000000001, "text": " green thing shoots from the blue thing, hits the red wall and goes here." }, { "start": 946.5600000000001, "end": 949.3000000000001, "text": " Try to make a machine understand this." }, { "start": 949.3000000000001, "end": 951, "text": " This is insane, right?" }, { "start": 951, "end": 956.98, "text": " So if you look at the more examples, it all it appears that the blue thing always comes" }, { "start": 956.98, "end": 962.44, "text": " from somewhere like the side of the image, and the green thing comes out obviously from" }, { "start": 962.44, "end": 968.74, "text": " whatever is not at the at the border of the image and then bounces off the red thing if" }, { "start": 968.74, "end": 971.82, "text": " it hits the red thing." }, { "start": 971.82, "end": 976.1, "text": " Now here you can you can already see what's going to happen." }, { "start": 976.1, "end": 982.32, "text": " Remember your AI would need to first determine aha, okay, all of these output grids, they" }, { "start": 982.32, "end": 985.16, "text": " seem to be the same as the input grid." }, { "start": 985.16, "end": 990.52, "text": " So it would need to explicitly construct the output grid in the same manner as the input" }, { "start": 990.52, "end": 992.64, "text": " grid because it understands this right?" }, { "start": 992.64, "end": 997.36, "text": " This is not the same in every task, then it needs to recognize the red thing that stays" }, { "start": 997.36, "end": 998.36, "text": " in every one." }, { "start": 998.36, "end": 1003.26, "text": " So it needs to put the red thing here right from from here." }, { "start": 1003.26, "end": 1007.26, "text": " And then it needs to recognize the blue thing stays as well." }, { "start": 1007.26, "end": 1015.92, "text": " And then most most shockingly needs to recognize, okay, I will draw a line in pixels and lines" }, { "start": 1015.92, "end": 1019, "text": " in pixels are hard here." }, { "start": 1019, "end": 1024.94, "text": " And then as soon as it would hit the red thing, it bounces off into the other direction." }, { "start": 1024.94, "end": 1030.56, "text": " So from just these three examples, the machine has to understand that and correctly output" }, { "start": 1030.56, "end": 1035.84, "text": " the exact solution, not an approximate solution, the exact solution." 
}, { "start": 1035.84, "end": 1043.1999999999998, "text": " Okay, so yeah, there are these basic geometry and topology priors like lines, rectangular" }, { "start": 1043.1999999999998, "end": 1051.36, "text": " shapes, symmetries, rotations, translations, shape upscaling, containing being contained," }, { "start": 1051.36, "end": 1054.72, "text": " drawing lines, connecting points, and so on." }, { "start": 1054.72, "end": 1057.1799999999998, "text": " Now, let's look at some more examples." }, { "start": 1057.1799999999998, "end": 1059.8799999999999, "text": " These are fun, right?" }, { "start": 1059.8799999999999, "end": 1062.12, "text": " Check out this one here." }, { "start": 1062.12, "end": 1068.6399999999999, "text": " So you see green, red, and then somehow the green connected to the red." }, { "start": 1068.6399999999999, "end": 1073.4799999999998, "text": " So this is an example of that has many of these priors in many of these concepts in" }, { "start": 1073.4799999999998, "end": 1078.32, "text": " there is goal directedness, you can already sort of form the hypothesis that the green" }, { "start": 1078.32, "end": 1080.4599999999998, "text": " wants to go to the red." }, { "start": 1080.4599999999998, "end": 1088.84, "text": " But also you see that somehow it sort of appears to the blue things seem to be maybe obstacles" }, { "start": 1088.84, "end": 1095.28, "text": " and it appears to change direction when it encounters an obstacle like here." }, { "start": 1095.28, "end": 1101.9599999999998, "text": " So here, you see the example, and you probably confirm so your hypothesis could be it always" }, { "start": 1101.9599999999998, "end": 1107.72, "text": " goes until it hits and then it changes direction towards the red thing, right?" }, { "start": 1107.72, "end": 1110.84, "text": " Always towards red thing, because it's not always towards the right because you return" }, { "start": 1110.84, "end": 1112.8, "text": " toward the left." }, { "start": 1112.8, "end": 1120.44, "text": " So it goes somehow towards the red thing and so it's pretty ambiguous in this situation," }, { "start": 1120.44, "end": 1125.12, "text": " but you can also make the assumption that if it's ambiguous, it goes towards the middle," }, { "start": 1125.12, "end": 1127.72, "text": " maybe, maybe." }, { "start": 1127.72, "end": 1135.36, "text": " So here, again, now we're actually confirming probably so we go towards the red thing, which" }, { "start": 1135.36, "end": 1139.8, "text": " would be towards this direction, then we hit an object, then we go towards the red thing" }, { "start": 1139.8, "end": 1145.44, "text": " until we hit an object, and then we go here." }, { "start": 1145.44, "end": 1148.9199999999998, "text": " Also see that these grids here are not the same size." }, { "start": 1148.9199999999998, "end": 1154.12, "text": " So it's not always the case that the grids of within the same tasks are even the same" }, { "start": 1154.12, "end": 1155.12, "text": " size." }, { "start": 1155.12, "end": 1160.36, "text": " So now here, again, your AI would need to recognize what size of grid it needs to draw" }, { "start": 1160.36, "end": 1161.72, "text": " and what the result is." }, { "start": 1161.72, "end": 1168.72, "text": " So it would need to copy this entire grid and also change these pixels right here to" }, { "start": 1168.72, "end": 1172.24, "text": " be green pixels." }, { "start": 1172.24, "end": 1173.24, "text": " That's hard." 
}, { "start": 1173.24, "end": 1176.84, "text": " I mean, that's I find I find this to be pretty hard." }, { "start": 1176.84, "end": 1183.1200000000001, "text": " This is the line extrapolation and turning on obstacle and efficiently reaching a goal" }, { "start": 1183.1200000000001, "end": 1185.24, "text": " prior." }, { "start": 1185.24, "end": 1188, "text": " That's crazy." }, { "start": 1188, "end": 1189, "text": " And is there more?" }, { "start": 1189, "end": 1190.76, "text": " Yes, there is two more, I believe." }, { "start": 1190.76, "end": 1193.56, "text": " Yeah, those are the last examples." }, { "start": 1193.56, "end": 1201.04, "text": " So in this one, you can see right here, there appear to be objects, which there's this blue" }, { "start": 1201.04, "end": 1206.22, "text": " objects appear to be the same and there are these red, and then the output grid is one" }, { "start": 1206.22, "end": 1207.76, "text": " of these blue objects." }, { "start": 1207.76, "end": 1212.52, "text": " Okay, so here we again see different objects, the output grid is one of them." }, { "start": 1212.52, "end": 1216.56, "text": " So as a human, you can already recognize the output grid is probably always going to be" }, { "start": 1216.56, "end": 1218.9199999999998, "text": " one of these objects." }, { "start": 1218.9199999999998, "end": 1220.6799999999998, "text": " And now we need to decide on which one." }, { "start": 1220.68, "end": 1227.44, "text": " So we can formulate the hypothesis that it's probably going to be the one that's the most" }, { "start": 1227.44, "end": 1231.1200000000001, "text": " like here, there's three of the blue ones here, there's four of the yellow ones, that's" }, { "start": 1231.1200000000001, "end": 1232.96, "text": " more than any other." }, { "start": 1232.96, "end": 1240.24, "text": " And this here confirms our hypothesis that the it's the object that appears most often." }, { "start": 1240.24, "end": 1245.6000000000001, "text": " Now again, see that there is this notion of object ness." }, { "start": 1245.6000000000001, "end": 1247.4, "text": " You need to upscale somehow." }, { "start": 1247.4, "end": 1250.44, "text": " No, this is not upscale because the grid is the same size." }, { "start": 1250.44, "end": 1252.8, "text": " It's simply the image that's upscale." }, { "start": 1252.8, "end": 1258, "text": " But you need to somehow focus be able to focus in on one of these objects, I need to count" }, { "start": 1258, "end": 1262.96, "text": " them, you need to compare the counts via each other." }, { "start": 1262.96, "end": 1266.64, "text": " And now here you can pretty easily see that the output grid is going to contain one of" }, { "start": 1266.64, "end": 1271.1200000000001, "text": " those blue things as a human." }, { "start": 1271.1200000000001, "end": 1275.56, "text": " And here, it's it's sort of a symmetry filling task." }, { "start": 1275.56, "end": 1281.84, "text": " Now as a human, you need one demonstration to get this." }, { "start": 1281.84, "end": 1287.48, "text": " Maybe you need more, but many tasks involve some sort of symmetry." }, { "start": 1287.48, "end": 1293.72, "text": " Okay, drawing the symmetrized version around the version of a shape around a marker, that's" }, { "start": 1293.72, "end": 1299.12, "text": " going to be fairly hard for a machine to learn without without the developer knowing that" }, { "start": 1299.12, "end": 1302.1599999999999, "text": " this task is coming." 
}, { "start": 1302.16, "end": 1308.78, "text": " Okay, they highlight some differentiations to standard psychometric tests." }, { "start": 1308.78, "end": 1314.28, "text": " But what I find interesting here is that this thing, what a solution to arc may look like" }, { "start": 1314.28, "end": 1318.0800000000002, "text": " and what it would imply for AI applications." }, { "start": 1318.0800000000002, "end": 1320.9, "text": " They say we have found art to be fully solvable by humans." }, { "start": 1320.9, "end": 1326.8400000000001, "text": " So they set a human in front of every, every one of these tasks, and it's solvable." }, { "start": 1326.8400000000001, "end": 1331.3600000000001, "text": " While many arc tasks are intellectually challenging human test takes us appear to be able to solve" }, { "start": 1331.36, "end": 1336.9599999999998, "text": " the majority of tasks on their first try without any practice or verbal explanations." }, { "start": 1336.9599999999998, "end": 1344.7199999999998, "text": " In effect, in this task, you get three tries at each at each of the at each of the problems," }, { "start": 1344.7199999999998, "end": 1350.3999999999999, "text": " you get three, three tries and the humans can already solve it in one." }, { "start": 1350.3999999999999, "end": 1356.8, "text": " So that just shows you shows you how cool humans are." }, { "start": 1356.8, "end": 1365.28, "text": " So here is a shawley suggests a solution approach says by start by developing a domain specific" }, { "start": 1365.28, "end": 1372.24, "text": " language capable of expressing all possible situations, all possible solution programs" }, { "start": 1372.24, "end": 1375.28, "text": " for any arc task." }, { "start": 1375.28, "end": 1382.44, "text": " Since the exact set of arc tax is purposely not formally definable, this may be challenging." }, { "start": 1382.44, "end": 1387.3200000000002, "text": " The space of tasks is defined as anything expressible in terms of arc pairs that would" }, { "start": 1387.3200000000002, "end": 1389.96, "text": " only involve core knowledge." }, { "start": 1389.96, "end": 1395.88, "text": " So core knowledge is this set of human priors that we discussed last time like objectness" }, { "start": 1395.88, "end": 1402.0800000000002, "text": " and symmetries and geometric shapes and navigation and so on." }, { "start": 1402.0800000000002, "end": 1409.22, "text": " So he asks you to basically develop a DSL that can capture all the different tasks." }, { "start": 1409.22, "end": 1414.68, "text": " So so kept basically define a formalism of these tasks." }, { "start": 1414.68, "end": 1418.18, "text": " But it's hard because you don't know what the tasks are going to be." }, { "start": 1418.18, "end": 1423.72, "text": " So your best bet is probably to make a formalism that completely over represents what the tasks" }, { "start": 1423.72, "end": 1426.48, "text": " can be." }, { "start": 1426.48, "end": 1433.64, "text": " It would require hard coding the core knowledge priors from 3.1.2 in a sufficiently abstract" }, { "start": 1433.64, "end": 1440.4, "text": " and combinable program form to serve as a basis functions for a kind of human like reasoning" }, { "start": 1440.4, "end": 1441.5600000000002, "text": " DSL." }, { "start": 1441.5600000000002, "end": 1447.64, "text": " We believe that solving this specific subproblem is critical to a to general AI progress." 
}, { "start": 1447.64, "end": 1456.3600000000001, "text": " Basically says whenever we can describe this is like saying that this AI progress will" }, { "start": 1456.3600000000001, "end": 1461.0400000000002, "text": " make a big step once we can formally describe human priors." }, { "start": 1461.04, "end": 1468.92, "text": " And while true this I feel the hardness of this problem is as hard as actually building" }, { "start": 1468.92, "end": 1472.24, "text": " general artificial intelligence or very close to it." }, { "start": 1472.24, "end": 1482.52, "text": " So it is a bit of a like how to how to go how to build a GI step one build a GI that's" }, { "start": 1482.52, "end": 1487.76, "text": " sort of and not exactly but it's kind of what this says." }, { "start": 1487.76, "end": 1488.76, "text": " Right." }, { "start": 1488.76, "end": 1494.96, "text": " So if I could actually have this DSL to describe every single task and I could do it you know" }, { "start": 1494.96, "end": 1502.8, "text": " such that it is not not super over capturing all the tasks then I would be able and I would" }, { "start": 1502.8, "end": 1512.08, "text": " have described human core knowledge in a sufficiently accurate degree that I could just you know" }, { "start": 1512.08, "end": 1515.46, "text": " build a GI." }, { "start": 1515.46, "end": 1520.92, "text": " So he goes on says given a task use the DSL to generate a set of candidate programs that" }, { "start": 1520.92, "end": 1524.72, "text": " turn the input grids into the corresponding output grids." }, { "start": 1524.72, "end": 1529.8, "text": " This step would reuse and recombine sub programs that previously proved useful in other arc" }, { "start": 1529.8, "end": 1531.16, "text": " tasks." }, { "start": 1531.16, "end": 1535.04, "text": " So says whenever you have captured the core knowledge or whenever you have captured the" }, { "start": 1535.04, "end": 1540.72, "text": " problem space in a formal language you can simply use that formal language to express" }, { "start": 1540.72, "end": 1545.76, "text": " whatever your input is so the that turn the input grids into the corresponding output" }, { "start": 1545.76, "end": 1546.76, "text": " grids." }, { "start": 1546.76, "end": 1551.92, "text": " So you would put in these demonstration examples and describe this with your formal language" }, { "start": 1551.92, "end": 1557.3600000000001, "text": " that you have and you can somehow reuse and recombine sub programs that previously proved" }, { "start": 1557.3600000000001, "end": 1565.46, "text": " useful for basically asking you to write to come up with source code that would generate" }, { "start": 1565.46, "end": 1572.96, "text": " these demonstration examples in the language of your DSL." }, { "start": 1572.96, "end": 1578.44, "text": " And then he says select top candidates among these programs so you would generate multiple" }, { "start": 1578.44, "end": 1587.28, "text": " versions of source code that generate these things based on a criterion such as a program" }, { "start": 1587.28, "end": 1590.16, "text": " simplicity or program likelihood." }, { "start": 1590.16, "end": 1594.46, "text": " Note that we do not expect that merely selecting the simplest possible program that works on" }, { "start": 1594.46, "end": 1599.68, "text": " training pairs will generalize well to test pairs." }, { "start": 1599.68, "end": 1605.52, "text": " And then use the top three candidates to generate output grids for the test examples." 
}, { "start": 1605.52, "end": 1613.28, "text": " So I hope the approach here I feel it makes sense but it is sort of over hopeful in my" }, { "start": 1613.28, "end": 1617.32, "text": " mind and that's mainly because of step one." }, { "start": 1617.32, "end": 1622.04, "text": " So step one asks you to come up with like a programming language that can capture all" }, { "start": 1622.04, "end": 1628.52, "text": " the tasks in this all the tasks in the data set even though you don't know what the tasks" }, { "start": 1628.52, "end": 1635.2, "text": " are and that has this human core knowledge in inside of it in a in a formally describable" }, { "start": 1635.2, "end": 1636.42, "text": " way." }, { "start": 1636.42, "end": 1641.48, "text": " And then once you have that programming language you would if you're given this task where" }, { "start": 1641.48, "end": 1647.36, "text": " you have you know a bunch of these demonstration you have a bunch of these demonstration things" }, { "start": 1647.36, "end": 1655.4799999999998, "text": " and then you have the test thing you would generate all the programs that would produce" }, { "start": 1655.4799999999998, "end": 1661.6799999999998, "text": " these demonstration examples or that would given the demo given the input grade would" }, { "start": 1661.6799999999998, "end": 1666.56, "text": " produce the output grid right you would generate all the programs and then you would select" }, { "start": 1666.56, "end": 1672.04, "text": " somehow among all these programs the one that you think generalizes the most and you would" }, { "start": 1672.04, "end": 1678.6, "text": " use that program to put this in and get out the solution." }, { "start": 1678.6, "end": 1683.86, "text": " They say it's probably it's not always the simplest program not always the shortest program" }, { "start": 1683.86, "end": 1690.68, "text": " maybe who knows like I feel step one is the kind of the crucial issue here." }, { "start": 1690.68, "end": 1700.18, "text": " Okay so they say they make some claims here and about what this what this would bring" }, { "start": 1700.18, "end": 1704.64, "text": " the community we posit that the existence of human level arc solver would represent" }, { "start": 1704.64, "end": 1710.8, "text": " the ability to program and AI from demonstrations alone only requiring a handful of demonstrations" }, { "start": 1710.8, "end": 1717.8400000000001, "text": " to specify complex tasks to do a wide range of human relatable tasks of a kind that would" }, { "start": 1717.8400000000001, "end": 1724.48, "text": " normally require human level human like fluid intelligence." }, { "start": 1724.48, "end": 1728.64, "text": " As supporting evidence we note that human performance on psychometric intelligent test" }, { "start": 1728.64, "end": 1734.16, "text": " which are similar to our is predictive of success across all human cognitive tasks." }, { "start": 1734.16, "end": 1738.44, "text": " Further we posit that since an arc solver and human intelligence would be both founded" }, { "start": 1738.44, "end": 1743.24, "text": " on the same knowledge priors the scope of application of an arc solver would be closer" }, { "start": 1743.24, "end": 1748.5600000000002, "text": " to that of human cognition making it such a solver both practically valuable and easy" }, { "start": 1748.5600000000002, "end": 1755.44, "text": " to interact with and would produce behavior that is in line with human expectations." 
}, { "start": 1755.44, "end": 1762.72, "text": " Okay so they're making the same argument that anyone before has made but they condition" }, { "start": 1762.72, "end": 1767.88, "text": " it on some things and this is I think the conclusion of the entire article here of on" }, { "start": 1767.88, "end": 1772.6000000000001, "text": " the measure of intelligence because people had this hope and they say that here claims" }, { "start": 1772.6000000000001, "end": 1779.04, "text": " are highly speculative and may prove incorrect much like Newell's 1973 hopes that progress" }, { "start": 1779.04, "end": 1783.92, "text": " on chess playing would translate into meaningful progress and achieving a broad range of cognitive" }, { "start": 1783.92, "end": 1793, "text": " abilities especially if arc turns out to feature unforeseen vulnerabilities to unintelligent" }, { "start": 1793, "end": 1795.4, "text": " shortcuts." }, { "start": 1795.4, "end": 1802.44, "text": " This is the AI effect and basically means that whenever you think a task the solving" }, { "start": 1802.44, "end": 1808.6000000000001, "text": " of a task represents AI and then you actually see the solution then the solution turns out" }, { "start": 1808.6000000000001, "end": 1812.46, "text": " to be not AI in the eyes of the human." }, { "start": 1812.46, "end": 1817, "text": " So the human at first they would say oh this task really requires intelligence and then" }, { "start": 1817, "end": 1820.8, "text": " someone solves the task and they'll say oh that's not intelligence you kind of hacked" }, { "start": 1820.8, "end": 1826, "text": " your way to that and the expectation is that in this arc challenge there might be a hacky" }, { "start": 1826, "end": 1834, "text": " way to that but I mean the good question is when at what is there even a task like this" }, { "start": 1834, "end": 1840.42, "text": " arc challenge here could that is there even a possibility of a task where you wouldn't" }, { "start": 1840.42, "end": 1848.3600000000001, "text": " say that and I'm not so sure about this they seem to be more hopeful than I am but at least" }, { "start": 1848.3600000000001, "end": 1853.64, "text": " they say the arc challenge is founded on the same priors as a human has it gives you the" }, { "start": 1853.64, "end": 1859.88, "text": " same amount of experience as a human has right and therefore it is much more comparable to" }, { "start": 1859.88, "end": 1862.88, "text": " human intelligence." 
}, { "start": 1862.88, "end": 1870.92, "text": " They go over some weaknesses right here of criticizing their own thing generalization" }, { "start": 1870.92, "end": 1876.8000000000002, "text": " is not quantified so they have a measure of generalization in the previous chapter but" }, { "start": 1876.8000000000002, "end": 1882.6200000000001, "text": " they don't use it right here test validity is not established data set size and diversity" }, { "start": 1882.6200000000001, "end": 1892.3600000000001, "text": " may be limited and so on but I in my mind this I would not consider this as like an" }, { "start": 1892.36, "end": 1898.8799999999999, "text": " AGI task or anything like this I'm pretty sure the solution to this will come in a form" }, { "start": 1898.8799999999999, "end": 1904.8799999999999, "text": " again where people don't really think it's it exhibits intelligence but I do like the" }, { "start": 1904.8799999999999, "end": 1912.32, "text": " task as such and as a machine learner I am very excited to think about how machine learning" }, { "start": 1912.32, "end": 1920.84, "text": " can go about solving this task and especially with what we've seen from something like GPT-3" }, { "start": 1920.84, "end": 1926.3999999999999, "text": " it has exactly this kind of structure where you train on a giant data set blah blah blah" }, { "start": 1926.3999999999999, "end": 1932, "text": " you pre train your language model but then at inference time you input a bunch of these" }, { "start": 1932, "end": 1940.4399999999998, "text": " demonstration examples and you ask it for the next output so I feel that might be a" }, { "start": 1940.4399999999998, "end": 1948.84, "text": " good start for for doing it the question of course is what what then do you pre train" }, { "start": 1948.84, "end": 1955.56, "text": " this model on this GPT-3 for ARC what's the pre training data set for it and I guess that's" }, { "start": 1955.56, "end": 1962.32, "text": " going to be the challenge and probably going to require people to specifically program" }, { "start": 1962.32, "end": 1969.1, "text": " all of these priors into a data set generator for pre training so that would be my approach" }, { "start": 1969.1, "end": 1974.56, "text": " my approach would be write a data set generator for pre training and GPT-3 model to do these" }, { "start": 1974.56, "end": 1982.1599999999999, "text": " kind of tasks and in order to write the data set generator you'd have to basically program" }, { "start": 1982.1599999999999, "end": 1988.54, "text": " in all of these priors and that's not going to be easy because your best bet is to sort" }, { "start": 1988.54, "end": 1993.36, "text": " of put yourself into the shoes of Chalet and be like oh if I were to design a task what" }, { "start": 1993.36, "end": 1997.52, "text": " kind of things would I do and then try to capture that that's going to be your best" }, { "start": 1997.52, "end": 2004.48, "text": " bet your most honest bet with respect to the challenge is to try to as faithfully as possible" }, { "start": 2004.48, "end": 2011.04, "text": " implement something like an object Ness prior where cohesion and persistence are captured" }, { "start": 2011.04, "end": 2018.1, "text": " that would be the most scientifically sound approach to my approach alright so that was" }, { "start": 2018.1, "end": 2025.08, "text": " my take on the ARC data set if you have any comments I'm very excited to hear comments" }, { "start": 2025.08, "end": 2030.8, "text": " on this if you have 
already tried the ARC challenge have some insight I also welcome" }, { "start": 2030.8, "end": 2035.36, "text": " comments on that and with that I'll see you next time bye bye" } ]
bFn2xcGi1TQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Faster Neural Network Training with Data Echoing (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "brain", "pipeline", "bottleneck", "speed", "gpu", "tpu", "idle", "network", "distributed", "preprocessing", "augmentation" ]
CPUs are often bottlenecks in Machine Learning pipelines. Data fetching, loading, preprocessing and augmentation can be slow to a point where the GPUs are mostly idle. Data Echoing is a technique to re-use data that is already in the pipeline to reclaim this idle time and keep the GPUs busy at all times. https://arxiv.org/abs/1907.05550 Abstract: In the twilight of Moore's law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of the training pipeline, such as disk I/O and data preprocessing, do not run on accelerators. As accelerators continue to improve, these earlier stages will increasingly become the bottleneck. In this paper, we introduce "data echoing," which reduces the total computation used by earlier pipeline stages and speeds up training whenever computation upstream from accelerators dominates the training time. Data echoing reuses (or "echoes") intermediate outputs from earlier pipeline stages in order to reclaim idle capacity. We investigate the behavior of different data echoing algorithms on various workloads, for various amounts of echoing, and for various batch sizes. We find that in all settings, at least one data echoing algorithm can match the baseline's predictive performance using less upstream computation. We measured a factor of 3.25 decrease in wall-clock time for ResNet-50 on ImageNet when reading training data over a network. Authors: Dami Choi, Alexandre Passos, Christopher J. Shallue, George E. Dahl Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Faster Neural Network Training with Data Echoing by Dami Choi, Alexandre Passos, Christopher J. Shallue and George E. Dahl. So on a high level this paper basically says you should repeat data that's already in memory in order to speed up the entire process of neural network training. And it also says that this can speed up your wall time without hurting your performance too much. I have mixed feelings about it, so let's jump in. They make a point of saying that machine learning doesn't happen in just one step; it's not like sklearn.fit anymore, it is more of a pipeline. So what do we mean by this? If you want to train an ImageNet model, you have your data set somewhere. That could be in a database, it could be somewhere on the network. If you have something even larger than ImageNet, you'll probably store it on a central server, in an Amazon bucket somewhere, so this is in AWS. And the first thing you actually need to do is read that data set. Now, usually you're not going to have enough memory on a machine to just load the entire data set into memory, so this process here is streaming: it continuously streams data points, and once you have used a data point, you throw it away, because you need space for the next one. So the streaming is done continuously. The next stage is read and decode. That means you have to read the data from the network and actually bring it into a format where you can use it, usually something like a NumPy array or a TensorFlow tensor. You need to apply some shuffling, because you usually can't really trust the order the data is stored in; oftentimes there is a bias in the ordering, so you need some sort of a shuffle buffer here. Then you often want to apply some data augmentation. That means you have one image, and we know for these models, if this is your cat, that what helps is to basically make many, many different images from one image. This could be by cropping part of it and saying: well, if it was a cat before, the small upper-right part here is still a cat. So that's one such transformation; this is called data augmentation, and you're going to apply a whole bunch of these things. You can crop, you can rotate the image a bit, and it's still a cat. You can also change its luminance and jitter its colors a bit, you can horizontally flip it, and it'll still be a cat. That's basically how you make many data points from one data point, and we know that helps. Then you want to batch this data, so you put it into mini-batches. Since you've shuffled, the next time the same data point comes along it's going to be batched together with a different group of images, and of course augmented differently than before. And that basically means it's a different training batch for the model. So this entire pipeline is basically a way to take the data points we have and make a whole bunch of variations and various groupings and batchings of them, and we know that helps enormously with the generalization capability of your final models. And then here you apply your SGD update. That's usually where you forward propagate your data through your network, which here is f, and you'll get some y hat as an output.
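As a quick aside, to make those upstream stages concrete before we continue with the SGD step: here is a minimal sketch of such an input pipeline in tf.data. The file pattern, feature keys and sizes are illustrative stand-ins of mine, not the paper's actual code.

```python
import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

def decode(serialized):
    # Parse one serialized record into an image tensor and a label.
    feats = tf.io.parse_single_example(serialized, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.io.decode_jpeg(feats["image"], channels=3)
    image = tf.image.resize(image, [224, 224])
    return image, feats["label"]

def augment(image, label):
    # Make "many images from one image": random flip plus brightness jitter.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    return image, label

files = tf.io.gfile.glob("gs://my-bucket/train-*")   # hypothetical storage path
dataset = (
    tf.data.TFRecordDataset(files)                   # stream from network storage
    .map(decode, num_parallel_calls=AUTOTUNE)        # read and decode
    .shuffle(10_000)                                 # shuffle buffer
    .map(augment, num_parallel_calls=AUTOTUNE)       # data augmentation
    .batch(1024)                                     # mini-batches for the SGD step
    .prefetch(AUTOTUNE)                              # overlap upstream work with the accelerator
)
```

Everything in this chain runs on the CPU or the network; the training step that consumes `dataset` is what follows.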
And then you have your labels that also come through the pipeline, and you have some sort of loss function L that takes both as an input and gives you an output. And then you do your backpropagation: the backpropagation goes through your loss function, through your network, and updates the network parameters such that your network learns something. Now, this step on the right here is usually what we focus on when we do deep learning, and all of it is usually done on something like a GPU or a TPU, and these things are getting faster and faster. The point the paper makes is that the TPUs and GPUs of the world are getting faster, but this entire other thing right here is basically CPU land. Now, I know there is some data augmentation happening on the GPU nowadays and so on, but in essence you can think of a pipeline where the thing to the left happens on CPU and the thing to the right happens on TPU. And to make it worse, the speed increases continuously along the pipeline: here is the network reading, over here is the GPU SGD step, and this axis is speed; basically, the further to the right in your pipeline you go, the faster your hardware gets. And that means, since this is a continuous pipeline, that if I input something here it goes through the pipeline, and even if this is all running in parallel, the thing over here is going to idle. Since it is the fastest part of the pipeline, it is going to just idle a lot, because it can only consume things as fast as the earlier stages can produce them. Now, if you have some sort of a multi-GPU machine and just train ImageNet, like you just run the code, usually this is not the bottleneck; usually your GPUs here are at 100% capacity, so this paper is not for you. But if you are, let's say, a big company, have this network storage, have a big data set, have very expensive data augmentation (this can happen, for example, in NLP and so on), this can be quite your situation, where the earlier in the pipeline, the slower it is. And don't you just love these graphics? So here's time, and apparently it goes in both directions; or does it go like this? I think what they mean is just that time goes in this direction, and you have upstream and downstream: your upstream is your network reading or your pre-processing, and the downstream is the GPU. So (and these labels should be different; to correct this right here, this one is idle and this one is running) as your upstream processes images, right at the beginning your GPU is idle. But as soon as the upstream ships off the first batch of images, your GPU can run; now it's running. While you're doing that, your network is still reading new images, pre-processing them and so on, but it is too slow to deliver the next batch by the time the GPU is done. When the GPU is done, the upstream is still processing its batch, so the GPU is idle until here, where the upstream finally manages to deliver that batch, and then the GPU is running again. I think that would have been a much better graphic.
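To put toy numbers on that idle pattern (my own illustration, not figures from the paper):

```python
# Upstream needs t_up seconds to produce a batch; the accelerator needs
# t_down seconds to consume one. The accelerator idles whenever t_up > t_down,
# and echoing with factor e can reclaim at most a factor min(e, t_up / t_down)
# of wall time, assuming repeated data were as useful as fresh data.
t_up, t_down = 0.4, 0.1    # hypothetical seconds per batch
e = 2                      # echoing factor

idle_fraction = max(0.0, 1.0 - t_down / t_up)
max_speedup = min(e, t_up / t_down) if t_up > t_down else 1.0

print(f"accelerator idle {idle_fraction:.0%} of the time")      # 75%
print(f"echoing factor {e} buys at most a {max_speedup:.1f}x")  # 2.0x
```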
But you know. So their goal, basically, is the following: right here, for example after the batch, you scrap this connection, you take the output and you put it into a smaller buffer, and that buffer is a repeat buffer. What it does is simply repeat whatever is in the buffer until something new comes in. So a new data point comes in, and you just output that data point again, again, again, again; for the GPU it's going to feel like these are all new batches that continuously come in, but it's always the same one until the next data point arrives, and then you output that one again and again and again. Now, the actual factor here you can of course tune by hand, or you can just say: repeat until something else comes in. In this paper they have an explicit factor, where they say we repeat each data point four times, or three times, and so on. So this is data echoing: you basically echo the data point multiple times. And this can be done in various places, so they experiment with echoing in any of these places right here: after reading and decoding, after augmentation, and after batching; always before shuffling, I think, because if you have a shuffle buffer anyway, they say it makes sense that if you do the echoing, you put your shuffle buffer after the echoing. So those are the three locations they experiment with for echoing. Now, what could be the downside of something like this? The downside, of course, is that this SGD procedure right here relies on the incoming data being an IID sample from your data distribution; that's how we formulate SGD, there's always new data coming in. Now, if you just output the same data point all the time, that is like no new information, first of all; and second of all, it could bias the SGD update: because it sees the same information over and over, SGD is going to think that's the whole data set. So potentially it can take too many steps in a wrong direction that just happens to be the bias of this particular data point. So the IID assumption is violated. Now, why do they experiment with this in different locations? Because what you expect is that it hurts more or less the earlier you introduce it. If you introduce echoing right here, so if you echo your data until new data from the network comes in, it's still going to be shuffled differently, and it's still going to be augmented differently. Each time the data point comes out of the echo buffer, it is going to be shuffled, and it is going to be augmented in a different way than the last time the same data point came out. And because you've shuffled differently, it's going to be batched together with a different bunch of data points, and that means SGD gets new information. But if you go to the very last location, right after the batch, and insert the echo there, SGD just gets to see the same batch of data, augmented in the same way, all the time. So where exactly you echo is a trade-off: you have to trade off how much you violate the IID fresh-data assumption against where in your data pipeline the bottleneck is.
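The repeat buffer itself is simple; here is a minimal sketch of it as a plain Python generator (my own illustration, not the paper's implementation):

```python
from itertools import islice

def data_echo(upstream, echo_factor):
    """Emit each item from the slow upstream iterator echo_factor times
    before pulling the next one."""
    for item in upstream:
        for _ in range(echo_factor):
            yield item

# The downstream consumer sees each upstream batch twice in a row:
slow_upstream = iter(["batch0", "batch1", "batch2"])
print(list(islice(data_echo(slow_upstream, echo_factor=2), 6)))
# ['batch0', 'batch0', 'batch1', 'batch1', 'batch2', 'batch2']
```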
So if your bottleneck is in the data augmentation, it may make little sense to echo before that, because your bottleneck is the augmentation itself. That being said, if the bottleneck is that you don't have enough GPUs, then it probably doesn't make sense to data echo at all, though their experiments are somewhat wonky on this. But so, let's dive in. They make the following claims; let's just go through them really quickly. One: data echoing reduces the amount of upstream computation (think of network reading or augmentation) needed to reach a competitive out-of-sample error rate on various data sets and model architectures. Two: data echoing can provide a wall-time speedup in practice. Three: data echoing can support a wide range of echoing factors; the echoing factor is how often you repeat the data. Four: the effectiveness of data echoing depends on the insertion point in the training pipeline, which is what our hypothesis was. Five: data echoing can benefit from additional shuffling after echoing, but does not require it. And six: countering expectations, data echoing reaches the same final error rate as well-tuned baselines. So I can absolutely accept one through five, especially in an actual, practical, in-the-wild setting. But six? We'll see about six. So let's jump into their models. They train the following four models: a transformer on these two data sets, LM1B and Common Crawl, so I guess technically it's five models, on language modeling; a ResNet-32 on CIFAR-10; a ResNet-50 on ImageNet; and an SSD on COCO. Now here are the accuracies they get; sorry, this is the target. What they do is train these models, ask what accuracy they reach, and then set a target value. So for ResNet-50 on ImageNet, a very common number to reach is something like 76.5; if you look at, for example, the torchvision models, they reach something like this. And so they say: well, our target accuracy here is just a little bit below that. And then they just measure, and their measurement here is fresh data points: how many actual fresh training samples do we need to reach this target? And this is where it gets wonky, because, for example, take the 91% here on CIFAR-10: that is quite, quite low. And also the ResNet-50, I mean, this is standard, but still, ImageNet is much further along nowadays. I think the effectiveness of something like this has a lot to do with how competitive you want to get; maybe this is all just an effect of how much under par this target performance really is. And I would expect that, even though they say it doesn't hurt their performance in their experiments, it will at least hurt your performance in general if you try to get competitive, because these models aren't really competitive as of now, at least the ones I know, like the ResNets. But so, what do they do? They measure data echoing with an echoing factor of 2. That means incoming data is output twice in a row: every data point that comes in is emitted twice from the buffer, then the next data point is emitted twice, and so on. And what they measure, again, is the fresh examples read: how many fresh data points you need to achieve something. This is a good measurement because it is kind of independent of hardware.
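One way to picture the insertion points (a sketch under my own assumptions; the paper's actual implementation may differ) is as a repeat stage spliced into a tf.data pipeline: whatever comes after the splice is re-randomized for each repeat.

```python
import tensorflow as tf

def echo(dataset, e):
    # Emit every element e times in a row; assumes single-component elements.
    return dataset.flat_map(
        lambda x: tf.data.Dataset.from_tensors(x).repeat(e))

base = tf.data.Dataset.range(1000)        # stand-in for read-and-decode output
augment = lambda x: x * 1                 # stand-in augmentation

# Example echoing before augmentation: repeats are re-shuffled, re-augmented
# and re-batched, so SGD still sees varied batches.
example_echoed = echo(base, e=2).shuffle(100).map(augment).batch(32)

# Batch echoing: the echo sits after batching, so SGD sees the identical
# batch e times in a row.
batch_echoed = echo(base.shuffle(100).map(augment).batch(32), e=2)
```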
So if you're really in the situation where your GPU is twice as fast as the rest of your pipeline, then an echoing factor of 2 will speed up your training procedure by at most a factor of 2. All right: you have the baseline in red. Then you have batch echoing, which is where you echo at what we said is the worst possible time, right after batching. This might hurt your performance the most, but it also has the potential to be the fastest, if your augmentation, or your batching, is very expensive. You have example echoing after augmentation: that would mean the augmentation is very expensive, so you save the augmented data point and then emit it multiple times, but each time it is batched differently; it is shuffled and then batched with different other data points, because you have a shuffle buffer after it. And then you have example echoing before data augmentation: the same data point, emitted multiple times, will be augmented in different ways and basically lead to slightly different data points. The results here are pretty much what you could expect, in that the earlier you do the echoing, the more the echoing helps, as you can see here. For example, on this object detection task, the baseline needs this many fresh examples to reach the target accuracy. With batch echoing, you need fewer fresh training examples. So even though you kind of train on the same data twice, this helps you; it doesn't help you fully, because the dashed line here is where you'd be if a repeated data point helped you as much as a fresh data point. That is exactly half of the baseline, because the echoing factor is 2: if a repeated data point were as useful as a fresh data point, you'd be at the dashed line. As you can see, you're not at the dashed line, but at least it doesn't hurt. You might expect that it hurts, but it doesn't; it actually speeds things up, so the repeated data points at least have some utility. Again, this is only useful if you have this asymmetry in your pipeline. If your pipeline is actually symmetric and you do an echoing factor of 2, the wall-time plot would look like this for the baseline and then almost twice as high for batch echoing: even though it needs almost the same number of fresh examples, you echo each one twice, so it has to process each one twice and will take much longer. So again, this is useful if you have this asymmetry, and if the echoing factor is smaller than your asymmetry; otherwise you're simply wasting time repeating data points. Then, if you do example echoing here after augmentation, you need even fewer fresh data points. And if you do it before augmentation, and this is really surprising, you almost get the full benefit of fresh data points. Which is something you might half expect, because an augmented, newly shuffled data point is almost a new data point; but still, it's quite surprising that you almost reach the theoretically possible level.
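To make "some utility" quantitative, here is a back-of-envelope model of my own (not from the paper): suppose each repeated example is worth a fraction alpha of a fresh one. If the baseline needs B fresh examples, and a factor-e echoing run reaches the same target with F fresh examples, each seen e times, then matching progress means F * (1 + (e - 1) * alpha) = B.

```python
def repeat_utility(B, F, e):
    # Solve F * (1 + (e - 1) * alpha) = B for alpha.
    return (B / F - 1) / (e - 1)

# Hypothetical numbers: baseline needs 8M fresh examples, a factor-2
# echoing run needs 5.5M.
print(f"{repeat_utility(B=8e6, F=5.5e6, e=2):.2f}")   # 0.45
# alpha = 1 would put the run exactly on the dashed line (F = B / e);
# alpha = 0 would leave it at the baseline (F = B).
```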
Also, here on the ImageNet task there is actually an example where you can see that it hurts to do this batch echoing. The reason it could hurt is just that you violate this IID assumption: you basically have correlated data points. This is a big, big problem, for example, in reinforcement learning, where, just by the nature of running episodes and then feeding those episodes back into the training procedure, you have correlated data points, and here the correlation actually hurts performance compared to the baseline. But then if you go to example echoing, and especially example echoing before augmentation, you again get a speedup, which is pretty cool. Okay, so they do a bunch of other experiments, and I appreciate these experiments here, because they really show what's going on and how far you can push this. So here they have a plot: example echoing before augmentation can reduce training time for ResNet-50 on ImageNet. This is before augmentation, and the echoing factor describes how often you repeat each data point; it goes from two to five. And you can see that you basically get the speedup sort of for free. The dashed line, again, is where you'd be if a repeated data point were as useful as a fresh data point, and you can see right here that you are just above this dashed line. So this can help a lot. And this is the fresh examples read, and this is the wall time; in their particular situation, the wall time doesn't improve as much, but again, that very much depends on what the asymmetry in your pipeline is. Now, in these experiments I would actually appreciate something like what they do down here, where I would always like to see where it breaks: how far can you go with the echoing factor until it doesn't help anymore? Because this plot sort of tells me pretty much nothing. I want to see: where is the low point, where roughly is the optimal echoing factor, and what can you tell me about this optimal echoing factor? How can we determine it beforehand, and how does it connect to the different parts of your architecture? So if I had to point out a flaw in this paper, it would be that right here I would expect them to continue increasing the echoing factor until it breaks, sort of like they do down here. This, I believe, is for the transformer on LM1B. Here they have a batch size of 1024, which is their standard setting for the transformer, and you can see that the baseline uses this many, 1.5 times 10 to the 7, fresh examples to train until their target. If you increase the echoing factor to two, you basically need half as many fresh examples, as long as you echo each one twice. Again, it's very surprising how close you can get to the case where each repeated example acts like a perfectly fresh data point. But you can see, as you increase this echoing factor, and here is exactly what I said: at some point this hurts. At some point you get to where the non-IID-ness, the correlation of successive data points, actually hurts you. And they make a point of saying that this is, for example, dependent on batch size. Now, in this experiment over here they have a larger batch size, and here is again the baseline number of data points needed to reach the target. You can see it again goes down with the echoing factor, but where before you had an increase again, now it continues to decrease. Again, it would be interesting to see where it goes up here, and how the echoing factor at the low point, here the four, and here, I don't know what it's going to be, maybe the 16, depends on your batch size. And here is another problem.
And that's what I alluded to at the beginning: this performance dependence. Now, I have not read anything different in the paper, so I have to assume that the number of fresh examples to reach the target here still refers to the target they determined at the beginning, that 3.9 in the table. And that 3.9 was achieved with this batch size, with 1024. But we know, especially for language models, that larger batch sizes will lead to better performance, even if you need, let's say, more samples. So here you can see that the sample count here is 1.5, and here it's actually 4, because you increased the batch size. And that tells you something: 1.5 to 4, that's roughly a factor of 2.5. You go to four times the batch size, and you need about 2.5 times more fresh training samples to reach the same target accuracy. First of all, we know that larger batch sizes can reach higher target accuracies. So again, these results depend on the chosen target rather than on the maximum achievable performance, and to me that's kind of a shady area: to always ask how long it takes to reach that particular target. Because we know this model right here can reach a much higher target, but we don't know that about these models here; what is their performance in the limit? They try to address this with experiments, but I don't really believe them, maybe. And second, and this is already interesting: this ratio right here, this roughly 2.5 against the 4, must mean something. I go to a four times higher batch size, and I need about 2.5 times as many fresh training samples to reach the same target. That must somehow tell you something about the usefulness of a single data point versus the usefulness of a single step of SGD. Because I would expect, if each data point were maximally valuable, this to be times one: no matter how I batch them, I'd need the same number of fresh training samples. And if it were times four, that would basically mean it doesn't really matter how many training points I have in a batch, as long as I have enough, and the 1024 seems to be enough; it would just matter how many SGD steps I do. So basically what we're saying is: SGD isn't getting the most out of these data points, and this ratio, this 2.5, tells you something about the information content of an additional data point versus the usefulness of an additional step of SGD. And I would expect that to be intrinsically connected to where the low point of this echoing factor is, because that's exactly what echoing trades off: freshness of data points versus doing more steps on the same information. And for a paper, especially a paper by Google Brain, this is a connection that I would love to see investigated. But enough of the ranting. They do investigate other things, for example what happens if we just up the batch size. And you can see here, yeah, this is interesting: the baseline needs more fresh samples as you up the batch size. And at the beginning, this batch echoing, for example, doesn't help; it doesn't hurt, but it doesn't help. But as you go to higher and higher batch sizes, this batch echoing starts to help more and more.
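Working out the ratio from a moment ago explicitly (the 4096 batch size is my assumption from the "times four" remark; the 1.5 and 4 are the values read off the plots above, in units of 10^7):

```python
small = dict(batch=1024, fresh=1.5e7)
large = dict(batch=4096, fresh=4.0e7)

data_ratio = large["fresh"] / small["fresh"]
step_ratio = (large["fresh"] / large["batch"]) / (small["fresh"] / small["batch"])
print(f"{data_ratio:.2f}x the fresh data, {step_ratio:.2f}x the SGD steps")
# ~2.67x the data but only ~0.67x the steps: the bigger batch extracts more
# progress per step, yet less progress per individual example.
```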
Again, I believe this is connected to the usefulness of a single data point: at some point your batch size is just too large for the problem, and you'd rather do more steps; that's why this helps. But also, this model right here might have a higher ceiling accuracy, so the question is indeed whether the batch echoing model would actually fall back to the ceiling accuracy of one of these models over here. Yeah, in any case, their point is basically that as you increase the batch size, echoing tends to help more, relatively; maybe because of what I said. They say: as batch size increases, the performance of batch echoing relative to the baseline either stays the same or improves, while for example echoing, it either stays the same or gets worse. Dashed lines indicate the expected values if repeated examples were as useful as fresh examples. Yeah, so I believe there is an intrinsic connection here between the usefulness of more data and the usefulness of doing additional steps. And example echoing you can almost see as more data, because, especially here, you're going to do augmentation on top of it, and you see the non-augmented versus augmented ratio changes dramatically from here to here. Okay, final set of experiments. As you can tell, this is mostly an experimental paper, and it is always easy to criticize experimental papers, and rightfully so, because I would not trust this very much. But given that it comes from a big institution, and it is a very well-written paper, I would trust it more than I would a regular paper. And I would say: if you're in practice, this is certainly worth trying, absolutely. I just think that some of the things aren't researched enough; some of my questions aren't answered by this. So, they investigate buffer sizes. They now build shuffle buffers: we have batch echoing, but they say, ah, we can do batch echoing with shuffle buffers. So after the batch echoing (we have this state where we have the batching, and then the echoing, our echo buffer, where we output each data point multiple times) we add another buffer, which is a shuffle buffer. A shuffle buffer just collects data points and shuffles them around before outputting them. And that means that even though you output a point, say, five times, it might not come out five times in a row: it might come out once, then another data point that was already in the shuffle buffer comes out, and so on; let's just say that in total it comes out five times, but first shuffled together with a bunch of other data points. Of course this uses more memory, but it returns you to that more IID setting. And you can see here: as the buffer size increases, the performance gets closer and closer to the performance you would have with completely fresh data. So again, you're trading off freshness against doing multiple steps: repeating data points straight out versus repeating data points shuffled. And also here you have the same with example echoing.
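As a sketch of that combination (again mine, reusing the single-component tf.data echo from before): each element still appears e times in total, just interleaved with others, which moves the stream back toward IID as the buffer grows.

```python
import tensorflow as tf

def echo_then_shuffle(dataset, e, buffer_size):
    echoed = dataset.flat_map(
        lambda x: tf.data.Dataset.from_tensors(x).repeat(e))
    return echoed.shuffle(buffer_size)   # spreads the repeats apart

ds = echo_then_shuffle(tf.data.Dataset.range(6), e=2, buffer_size=8)
print(list(ds.as_numpy_iterator()))
# e.g. [0, 2, 1, 0, 3, 1, 4, 2, 5, 3, 4, 5]: each value twice, interleaved
```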
So if you apply the shuffle buffer to example echoing and you increase its size, you can get very, very close to the performance that you would get with fresh data. Which makes sense: if you increase the shuffle buffer to the size of the data set, then with example echoing you are, in the limit, exactly in the situation of fresh data. Right. So here is where it gets into the funky part. They actually measure the validation cross-entropy and the validation accuracy versus the number of fresh examples read, and here I want to concentrate on the ResNet-50 on ImageNet. As you can see, most of these models pretty much end up in the same place; it's just that the echoing models end up there faster. And this is, I mean, this is where it gets a bit confusing, honestly, because why do you have this super sharp drop here? Usually, and here it sort of speeds up in the middle, you see that, and then it kind of sharply declines; is this maybe because they drop the learning rate or something? Now, my main issue is that the performance here, even though this target is the same for everyone, is lower than the best reachable accuracy, and I'm just, this is just confusing. If this is really true, whoa; if this is really true, I think we have a lot to learn about SGD yet, and about how we're not actually doing SGD correctly, because it seems like the echo versions are almost better, or reach a better accuracy, than the baseline. I don't know, do they just cap it at the target performance? I don't think so; I think they let it run. They also have these curves right here where they say: this is the best we reach. And this is the ResNet-32 on CIFAR-10. Again, 91% on CIFAR-10 is just very, very low, and I'm almost thinking that, okay, this might help if you just throw something at the problem that we know is overpowered, because we can reach 99%, or at least you can reach something like 94% on CIFAR-10 easily, easily, with a network smaller than ResNet-32. Maybe this effect manifests if you actually have something that could reach higher, but for some reason you only aim this low. I'm not sure, but this is confusing. And if this is really true, yeah, if it's true, which I believe, I believe this paper, it might just be an effect of not reaching the actual ceiling. And again, look at this: the curves are just strange, right? You have the echoing before augmentation, and it seems like it's outperforming the fresh data points. I don't know, there's a little bell in my head that doesn't like this. If it's actually true, then, you know, that's cool, but yeah. So my main criticisms are a bit with the experimental methodology, for example where they increase the batch size but still aim for the same target accuracy, even though we know there is a higher ceiling if you increase the batch size for language models; and with the non-investigation of this connection, this connection right here. But all in all, it's a pretty cool paper. If I had a big company with these pipeline issues, I would absolutely implement this; it seems like a no-brainer and can help you tremendously. All right, that was it. Thank you for listening. If you're still here, subscribe, like, tell a friend. Bye bye.
[ { "start": 0, "end": 5.12, "text": " Hi there! Today we're looking at faster neural network training with data" }, { "start": 5.12, "end": 11.9, "text": " echoing by Damme Choi, Alexander Passo, Christopher J. Shalu and George E. Dahl." }, { "start": 11.9, "end": 17.34, "text": " So on a high level this paper basically says you should repeat data that's" }, { "start": 17.34, "end": 21.86, "text": " already in memory in order to speed up the entire process of neural network" }, { "start": 21.86, "end": 27.76, "text": " training. And it also says that this can speed up your wall time without hurting" }, { "start": 27.76, "end": 33.52, "text": " your performance too much. And I have mixed feelings about it. So let's jump in." }, { "start": 33.52, "end": 38.800000000000004, "text": " So they basically make a point of saying that machine learning doesn't happen in" }, { "start": 38.800000000000004, "end": 46.8, "text": " just one thing. It's not like sklearn.fit anymore. It is more of a pipeline. So" }, { "start": 46.8, "end": 51.6, "text": " what do we mean by this? If you think of something like you want to" }, { "start": 51.6, "end": 57.160000000000004, "text": " train an ImageNet model, what you want to do is you have like your data set" }, { "start": 57.16, "end": 63.279999999999994, "text": " somewhere. And that could be in a database, it could be in the network" }, { "start": 63.279999999999994, "end": 68.44, "text": " somewhere. So if you have even something larger than ImageNet, you'll probably" }, { "start": 68.44, "end": 72.16, "text": " store it on a central server on an Amazon bucket somewhere. So this is in" }, { "start": 72.16, "end": 77.03999999999999, "text": " AWS. And the first thing you actually need to do is you need to read that" }, { "start": 77.03999999999999, "end": 82.47999999999999, "text": " data set. Now usually you're not gonna have enough memory on a machine to just" }, { "start": 82.48, "end": 87.52000000000001, "text": " load in the entire data set into memory. So that means this process here is" }, { "start": 87.52000000000001, "end": 93.32000000000001, "text": " streaming. So this is continuously streaming data points. And once you have" }, { "start": 93.32000000000001, "end": 96.82000000000001, "text": " used the data points, you're gonna throw it away because you need space for the" }, { "start": 96.82000000000001, "end": 103.24000000000001, "text": " next one, right? And so the streaming is done continuously. The next process is" }, { "start": 103.24000000000001, "end": 107.68, "text": " read and decode. That means you have to read it from the network and" }, { "start": 107.68, "end": 111.52000000000001, "text": " actually bring it into a format where you can use it. Usually something like an" }, { "start": 111.52, "end": 117.75999999999999, "text": " umpire array or a tensorflow tensor. You need to apply some shuffling because" }, { "start": 117.75999999999999, "end": 123.88, "text": " usually the order, you can't really trust the order that it is stored in." }, { "start": 123.88, "end": 127.39999999999999, "text": " Oftentimes there is like a bias in the ordering, so you need some sort of a" }, { "start": 127.39999999999999, "end": 133.2, "text": " shuffle buffer here. Then often you want to apply some data augmentation to it." }, { "start": 133.2, "end": 139.35999999999999, "text": " That means that you have one image. 
And we know for these models, if this is your" }, { "start": 139.36, "end": 146.4, "text": " cat, we know for these models that what can help is to basically make many many" }, { "start": 146.4, "end": 151.60000000000002, "text": " different images from one image. So this could be by cropping part of it and" }, { "start": 151.60000000000002, "end": 156.8, "text": " saying, well, if it was a cat before, the small upper right part here is still a" }, { "start": 156.8, "end": 162.72000000000003, "text": " cat. So this is one update. It's called data augmentation. And you're gonna apply" }, { "start": 162.72000000000003, "end": 166.76000000000002, "text": " a whole bunch of these things. So you can crop, you can rotate the image a bit. It's" }, { "start": 166.76, "end": 171.79999999999998, "text": " still a cat, right? And you can also change its luminance and a bit of its" }, { "start": 171.79999999999998, "end": 176.6, "text": " colors. You can jitter the colors, you can horizontally flip it and it'll still be a" }, { "start": 176.6, "end": 181.56, "text": " cat. And that's basically how you make many data points from one data point. And" }, { "start": 181.56, "end": 187.2, "text": " we know that helps. Then what you want to do is you want to batch this data. So you" }, { "start": 187.2, "end": 193.04, "text": " want to put it into mini batches. Since you've shuffled here, that means when the" }, { "start": 193.04, "end": 196.72, "text": " next time the same data point comes along, it's going to be batched with a" }, { "start": 196.72, "end": 202.7, "text": " different group of images and of course augmented differently with a" }, { "start": 202.7, "end": 206.95999999999998, "text": " different group of images. And that basically means it's a different training" }, { "start": 206.95999999999998, "end": 212.48, "text": " batch for the model. So this entire pipeline is basically a way that we take" }, { "start": 212.48, "end": 217.2, "text": " data points that we have and we make a whole bunch of variations and various" }, { "start": 217.2, "end": 221.84, "text": " groupings and batchings of it. And we know that helps enormously with the" }, { "start": 221.84, "end": 227.48, "text": " generalization capability of your final models. And then here you do your apply" }, { "start": 227.48, "end": 233.08, "text": " your SGD update. So that's usually where you forward propagate your data through" }, { "start": 233.08, "end": 239.8, "text": " your network, which here is F. You'll get like some Y hat as an output. And then" }, { "start": 239.8, "end": 244.56, "text": " you have your labels that also come through the pipeline. And you have some" }, { "start": 244.56, "end": 250.72, "text": " sort of loss function L that takes both as an input and gives you an output. And" }, { "start": 250.72, "end": 256.8, "text": " then you do your back propagation. So the back propagation would go through your" }, { "start": 256.8, "end": 262.96, "text": " loss function through your network and update the network parameters such that" }, { "start": 262.96, "end": 268.32, "text": " your network learned something right now this step to the right here is usually" }, { "start": 268.32, "end": 275.48, "text": " what what we focus on when we do deep learning. The step on the right here, all" }, { "start": 275.48, "end": 282.84000000000003, "text": " of this, this can be done on a GPU or is usually done on something like a GPU or a" }, { "start": 282.84000000000003, "end": 291.72, "text": " TPU. Right. 
But in these things are getting faster and faster. The point the" }, { "start": 291.72, "end": 297.44, "text": " paper makes is that the TPUs and GPUs of the world are getting faster. But this" }, { "start": 297.44, "end": 303.16, "text": " entire other thing right here, this is basically CPU land. Now I know there is" }, { "start": 303.16, "end": 309.32000000000005, "text": " some data augmentation now happening on the GPU and so on. But in essence, you can" }, { "start": 309.32000000000005, "end": 313.36, "text": " think of a pipeline where the thing to the left is happening on CPU and the" }, { "start": 313.36, "end": 319.40000000000003, "text": " thing to the right is happening on TPU. And even even worse, let's say the speed" }, { "start": 319.40000000000003, "end": 326.36, "text": " is continuously is continuously increasing. So in your pipeline, the kind" }, { "start": 326.36, "end": 337.36, "text": " of speed would be so here is the network reading. And here over here is the GPU SGD" }, { "start": 337.36, "end": 344.52000000000004, "text": " step. And this is speed. Basically, the further to the right in your pipeline," }, { "start": 344.52000000000004, "end": 353.36, "text": " you go the faster your the faster your hardware gets. And that means that if if" }, { "start": 353.36, "end": 358.28000000000003, "text": " this since this is a continuous pipeline, right, that basically means that if I" }, { "start": 358.28000000000003, "end": 364.6, "text": " input something here, it goes through the pipeline. And even if this is all running" }, { "start": 364.6, "end": 370.08000000000004, "text": " in parallel, at this thing over here is going to idle this since this is the" }, { "start": 370.08000000000004, "end": 376.52000000000004, "text": " fastest part of the pipeline, it is going to just idle a lot. Right, it because" }, { "start": 376.52000000000004, "end": 380.64, "text": " because it can only consume things as fast as this thing can produce. Now if" }, { "start": 380.64, "end": 386.24, "text": " you if you have some sort of a multi GPU machine and just train image net, like" }, { "start": 386.24, "end": 392.12, "text": " you just run the code, usually your this is not the bottleneck. Usually, your GPU" }, { "start": 392.12, "end": 398.24, "text": " is here are at 100% capacity. So this paper is not for you. But if you are," }, { "start": 398.24, "end": 402.64, "text": " let's say a big company have this network storage, have a big data set, have" }, { "start": 402.64, "end": 407.76, "text": " very expensive data augmentation. This happens, for example, this can happen in" }, { "start": 407.76, "end": 415.56, "text": " NLP and so on. This can be quite your situation, where the earlier in the" }, { "start": 415.56, "end": 423.08, "text": " pipeline, the slower it is. And don't you just love these graphics? So here's" }, { "start": 423.08, "end": 431.56, "text": " here's time. And apparently, it goes in both directions. And so does it go like" }, { "start": 431.56, "end": 437.52, "text": " this? I think what they mean is just time goes in this direction. And you're" }, { "start": 437.52, "end": 444.35999999999996, "text": " here. And you're upstream. So your upstream is your network. This is your" }, { "start": 444.35999999999996, "end": 450.71999999999997, "text": " network reading or your pre processing and the downstream, this is the GPU. 
So as" }, { "start": 450.71999999999997, "end": 457.24, "text": " you are pre processing things, you you and this should be this should be" }, { "start": 457.64, "end": 464.12, "text": " different. It should mean okay, to correct this right here, this is idle." }, { "start": 464.12, "end": 470.92, "text": " And this is running. So as you're upstream processes images, right at the" }, { "start": 470.92, "end": 475.88, "text": " beginning, your GPU is idle. But then as soon as it ships off the first batch of" }, { "start": 475.88, "end": 480.84000000000003, "text": " images, your GPU can run now it's running. While you're doing that, your" }, { "start": 480.84000000000003, "end": 484.76, "text": " upstream your network is still reading new images, pre processing them and so" }, { "start": 484.76, "end": 491.16, "text": " on, but it cannot is too slow to insert a batch at the time that the GPU is done." }, { "start": 491.16, "end": 496.68, "text": " The time the GPU is done, it's still processing this batch. So the GPU is idle" }, { "start": 497.68, "end": 502.32000000000005, "text": " until here where it finally manages to process that batch and then the GPU is" }, { "start": 502.32000000000005, "end": 507.40000000000003, "text": " running again. I think that would have been a much better graphic. But you know," }, { "start": 507.88, "end": 515, "text": " so their goal basically is that what you'll have is right here, for example," }, { "start": 515, "end": 520.36, "text": " after the batch, what you'll do is you scrap this connection, you take this" }, { "start": 520.36, "end": 528.4, "text": " and you put it into a smaller buffer. And the buffer is a repeat buffer. So what it" }, { "start": 528.4, "end": 536.6, "text": " does is it simply will repeat the whatever you have in the buffer until" }, { "start": 536.6, "end": 541.12, "text": " something new comes in, right? So new data point comes in, you just output that" }, { "start": 541.12, "end": 547.6, "text": " data point again, again, again, again, the for the GPU, it's gonna feel like these" }, { "start": 547.6, "end": 552.6800000000001, "text": " are all new batches and they continuously come in. But it's always the same until" }, { "start": 552.6800000000001, "end": 557.28, "text": " the next data point comes in. And then you output that one again and again and" }, { "start": 557.28, "end": 562.32, "text": " again and again. Now the the actual factor here you can, of course, tune by" }, { "start": 562.32, "end": 567.48, "text": " hand or you can just say repeat until something else comes in. In this paper," }, { "start": 567.48, "end": 572.2, "text": " they have an explicit factor where they say we repeat each data point four times" }, { "start": 572.2, "end": 577.24, "text": " or three times or so on. So this is data echoing, you basically echo the data" }, { "start": 577.24, "end": 584.88, "text": " point multiple times. And this can be done in various places. So they" }, { "start": 584.88, "end": 589.88, "text": " experiment with echoing in any of these places right here. So the egg they" }, { "start": 589.88, "end": 596.52, "text": " experiment with it right here, after reading and decoding, after shuffling. No," }, { "start": 596.52, "end": 602.84, "text": " I think always before shuffling. 
Because if you if you have a shuffle buffer" }, { "start": 602.84, "end": 607.52, "text": " anyway, they say it makes sense that if you do the echoing, you you do your" }, { "start": 607.52, "end": 612.72, "text": " shuffle buffer after you're echoing. So here, then after augmentation and after" }, { "start": 612.72, "end": 620.5600000000001, "text": " batching. So they experiment with these three locations in in echoing. Now what" }, { "start": 620.5600000000001, "end": 625.44, "text": " could be the downturn of something like this, the downturn, of course, is that" }, { "start": 625.44, "end": 633.72, "text": " this SGD procedure right here, basically, it relies on the data incoming being an" }, { "start": 633.72, "end": 640.1600000000001, "text": " IID sample from your data distribution, right? That's that's how we formulate SGD" }, { "start": 640.1600000000001, "end": 644.72, "text": " is that there's always new data incoming. Now, if you just output the same data" }, { "start": 644.72, "end": 650.8800000000001, "text": " point all the time, that could that is like no new information, first of all," }, { "start": 650.88, "end": 657.96, "text": " and second of all, it could bias the SGD update, such that you it because it sees" }, { "start": 657.96, "end": 661.56, "text": " the same data, it doesn't it sees the same information over and over, is going" }, { "start": 661.56, "end": 666.04, "text": " to think that's the whole data set, right? So potentially, it can make too many" }, { "start": 666.04, "end": 671.88, "text": " steps into the wrong direction. That just happens to be the bias in this particular" }, { "start": 671.88, "end": 681.4, "text": " data point. So the IID assumption is is is invalid. Now, why do you experiment" }, { "start": 681.4, "end": 687.48, "text": " with this in different locations? Because what you expect is that it hurts more or" }, { "start": 687.48, "end": 692.48, "text": " it hurts less the earlier you introduce this. So if you introduce echoing right" }, { "start": 692.48, "end": 698.36, "text": " here, so if you echo your data until new data from the network comes in, it's" }, { "start": 698.36, "end": 702.76, "text": " still going to be shuffled differently, right? It's and it's still going to be" }, { "start": 702.76, "end": 707.48, "text": " augmented differently. So each time the data point comes out of the echo buffer," }, { "start": 707.48, "end": 712.5600000000001, "text": " it is going to be shuffled. And it is going to be augmented in a different way" }, { "start": 712.5600000000001, "end": 716.96, "text": " than the last time the same data point came out. And this is going to be batched" }, { "start": 716.96, "end": 720.6, "text": " together because you've shuffled differently, it's going to be batched" }, { "start": 720.6, "end": 725.6800000000001, "text": " together with a different bunch of data points. And that means SGD gets new" }, { "start": 725.68, "end": 730.88, "text": " information. But if you go on to the very last thing, where you just after the" }, { "start": 730.88, "end": 736.2399999999999, "text": " batch right here, where you input the echo, that means SGD just gets to see" }, { "start": 736.2399999999999, "end": 745.52, "text": " the same batch of data augmented in the same way all the time, right? So the of" }, { "start": 745.52, "end": 750.68, "text": " course, where you exactly have to echo, you have to trade this off. 
So you have" }, { "start": 750.68, "end": 757.1999999999999, "text": " to trade off the how much you basically violate the IID fresh data assumption" }, { "start": 757.1999999999999, "end": 762.4799999999999, "text": " against where in your data pipeline is the bottleneck. So if your bottleneck is" }, { "start": 762.4799999999999, "end": 768.4799999999999, "text": " in the data augmentation, it may make little sense to echo before that because" }, { "start": 768.4799999999999, "end": 772.88, "text": " your bottleneck is the data augmentation. And that being said, if the bottleneck is" }, { "start": 772.88, "end": 777.16, "text": " that you don't have enough GPUs, then it probably doesn't make sense to data" }, { "start": 777.16, "end": 785, "text": " echo at all, though their experiments are somehow wonky on this. But so let's dive" }, { "start": 785, "end": 791.1999999999999, "text": " in, they make the following claims. Let's just go through them really quick. Data" }, { "start": 791.1999999999999, "end": 796.52, "text": " echo reduces the amount of upstream that think of network reading or" }, { "start": 796.52, "end": 802.12, "text": " augmentation computation needed to reach a competitive out of sample error rate" }, { "start": 802.12, "end": 806.28, "text": " on various data sets and model architectures. Second, data echoing can" }, { "start": 806.28, "end": 811.24, "text": " provide a wall time speed up in practice. Third, data echoing can support a wide" }, { "start": 811.24, "end": 815.72, "text": " range of echoing factors. And that's the echoing factor is how often you repeat" }, { "start": 815.72, "end": 821.24, "text": " the data. Fourth, the effectiveness of data echoing depends on the intersection" }, { "start": 821.24, "end": 826.36, "text": " point in the training pipeline, sorry, in the insertion point. That's what" }, { "start": 826.36, "end": 832.12, "text": " our hypothesis was, right? Fifth, data echoing can benefit from additional" }, { "start": 832.12, "end": 837.92, "text": " shuffling after echoing, but does not require it. And six, countering" }, { "start": 837.92, "end": 843.2, "text": " expectations, data echoing reaches the same final error rate as well tuned" }, { "start": 843.2, "end": 851.2, "text": " baselines. So I am can absolutely accept one through five, especially in like an" }, { "start": 851.2, "end": 863.5600000000001, "text": " actual practical in the wild setting. But six, we'll see about six. So let's jump" }, { "start": 863.5600000000001, "end": 870.9200000000001, "text": " into their models. They, sorry about that, they train the following four models. So" }, { "start": 870.9200000000001, "end": 876.8000000000001, "text": " they train a transformer on these two data sets LM1B and common crawl. So I" }, { "start": 876.8, "end": 884.4, "text": " guess technically it's five models on language modeling. They train the ResNet" }, { "start": 884.4, "end": 891.5999999999999, "text": " 32 on CIFAR 10. They train the ResNet 50 on ImageNet and they train SD on Coco." }, { "start": 891.5999999999999, "end": 899.3599999999999, "text": " Now here is the accuracies they get and here is, sorry, this is the target. So" }, { "start": 899.3599999999999, "end": 903.9599999999999, "text": " what they do is they train these models and then they say, okay, what's the" }, { "start": 903.96, "end": 910.48, "text": " accuracy we reach? And then they set a target value. 
So on ResNet 50 on ImageNet," }, { "start": 910.48, "end": 918.6, "text": " a very common number to reach is something like 76.5. If you look at, for" }, { "start": 918.6, "end": 924.1600000000001, "text": " example, torch vision models, they reach something like this. And so they say," }, { "start": 924.1600000000001, "end": 930.2, "text": " well, our target accuracy here is just a little bit below that. So and then we" }, { "start": 930.2, "end": 935.84, "text": " just measure how many steps or how many their measurement here is fresh data" }, { "start": 935.84, "end": 940.8000000000001, "text": " points. So how many actual fresh training samples do we need to reach this target?" }, { "start": 940.8000000000001, "end": 949.08, "text": " And this is where it gets wonky because, for example, take the 91% here on CIFAR 10." }, { "start": 949.08, "end": 958.1600000000001, "text": " That is quite, quite low. And also the ResNet 50 is, I mean, this is standard," }, { "start": 958.16, "end": 965.3199999999999, "text": " but still ImageNet is much further nowadays. And I think the effectiveness of" }, { "start": 965.3199999999999, "end": 970.8, "text": " something like this has a lot to do with how competitive you want to get. Maybe" }, { "start": 970.8, "end": 976.76, "text": " this is all just an effect of how much under par your, this target performance" }, { "start": 976.76, "end": 984.6, "text": " really is. And I would expect that even though they say it doesn't hurt their" }, { "start": 984.6, "end": 989.9200000000001, "text": " performance in their experiments, I would at least expect it will hurt your" }, { "start": 989.9200000000001, "end": 997.5600000000001, "text": " performance in general if you try to get competitive. Because these things aren't," }, { "start": 997.5600000000001, "end": 1003.84, "text": " as of now, at least the ones I know, like the ResNets, aren't really" }, { "start": 1003.84, "end": 1012.6, "text": " competitive. But so what do they do? They measure data echoing with an echoing" }, { "start": 1012.6, "end": 1018.84, "text": " factor of 2. So that means data that's incoming is output twice in a row. And" }, { "start": 1018.84, "end": 1023.76, "text": " every data point that's coming in is just emitted twice from the buffer. And" }, { "start": 1023.76, "end": 1029.48, "text": " then the next data point is emitted twice and so on. And what they measure," }, { "start": 1029.48, "end": 1035.32, "text": " again, is the fresh examples read. So how many fresh data points do you need to" }, { "start": 1035.32, "end": 1038.52, "text": " achieve something? This is a good measurement because this is kind of" }, { "start": 1038.52, "end": 1046.16, "text": " independent of hardware. So if you're really in the situation where your GPU is" }, { "start": 1046.16, "end": 1053.56, "text": " twice as fast as the rest of your pipeline, then an echoing factor of 2 will" }, { "start": 1053.56, "end": 1061.92, "text": " speed up at most your training procedure by a factor of 2. All right, so you have" }, { "start": 1061.92, "end": 1068.04, "text": " the baseline in red. And then you have batch echoing, which is where you echo" }, { "start": 1068.04, "end": 1072.6, "text": " what we said at the worst possible time right after batching. So this might hurt" }, { "start": 1072.6, "end": 1079.8, "text": " your performance the most, but also it has the potential to be the fastest if" }, { "start": 1079.8, "end": 1085.1599999999999, "text": " maybe your augmentation is very expensive. 
Then, sorry, or your batching." }, { "start": 1085.1599999999999, "end": 1090.6399999999999, "text": " You have example echoing after augmentation. So that would mean the" }, { "start": 1090.6399999999999, "end": 1095.56, "text": " augmentation is very expensive. So you save the augmented data point. And then" }, { "start": 1095.56, "end": 1103.6399999999999, "text": " you emit it multiple times, but each time it is batched differently. So it is" }, { "start": 1103.6399999999999, "end": 1107.32, "text": " shuffled and then batched with different other data points. So you have a shuffle" }, { "start": 1107.32, "end": 1111.52, "text": " buffer after it. And then you have example echoing before data augmentation." }, { "start": 1111.52, "end": 1115.24, "text": " So that means the same data point emitted multiple times will be augmented in" }, { "start": 1115.24, "end": 1120.1599999999999, "text": " different ways and basically will lead to slightly different data points. So the" }, { "start": 1120.1599999999999, "end": 1125.1599999999999, "text": " results here are pretty much what you could expect in that the earlier you do" }, { "start": 1125.16, "end": 1132.4, "text": " the echoing, as you can see here, the more this echoing helps. So the number, if" }, { "start": 1132.4, "end": 1138.28, "text": " you, for example, this is the object segmentation task, the baseline needs" }, { "start": 1138.28, "end": 1144.0800000000002, "text": " this many fresh examples to reach this target accuracy. With batch echoing, not" }, { "start": 1144.0800000000002, "end": 1151.44, "text": " only do you, sorry, with batch echoing, you need less fresh training examples. So" }, { "start": 1151.44, "end": 1159.56, "text": " that means even though you kind of train on the same data twice, this" }, { "start": 1159.56, "end": 1164.8400000000001, "text": " helps you more, or this helps you. It doesn't help you fully because the dashed" }, { "start": 1164.8400000000001, "end": 1172.6000000000001, "text": " line here is the, if it would help you as much as a fresh data point, you'd be at" }, { "start": 1172.6000000000001, "end": 1176.2, "text": " the dashed line, right? This is exactly half of this because the echoing factor" }, { "start": 1176.2, "end": 1183.4, "text": " is two. So if a repeated data point was as useful as a fresh data point," }, { "start": 1183.4, "end": 1187.24, "text": " you'd be at the dashed line. As you can see right here, you're not at the dashed" }, { "start": 1187.24, "end": 1192.1200000000001, "text": " line, but at least it doesn't hurt. You might expect that it hurts, but it" }, { "start": 1192.1200000000001, "end": 1196.1200000000001, "text": " doesn't hurt. It actually speeds up. So the repeated data points at least have" }, { "start": 1196.1200000000001, "end": 1203.48, "text": " some utility. Again, this is only useful if you have this asymmetry" }, { "start": 1203.48, "end": 1207.28, "text": " in your pipeline. If your pipeline is actually symmetric and you do an echoing" }, { "start": 1207.28, "end": 1212.04, "text": " factor of two, the wall time here, the wall time plot would look this for the" }, { "start": 1212.04, "end": 1218.72, "text": " baseline and then almost twice as high for the batch echoing. Because even" }, { "start": 1218.72, "end": 1223.28, "text": " though it needs the same amount of fresh, or almost the same amount of fresh" }, { "start": 1223.28, "end": 1231.68, "text": " example, you echo each one twice. 
So it needs to process it twice so it'll" }, { "start": 1231.68, "end": 1236.88, "text": " take much longer. So again, this is useful if you have this asymmetry and if" }, { "start": 1236.88, "end": 1242.6000000000001, "text": " the echoing factor is kind of smaller than your asymmetry. Otherwise you're" }, { "start": 1242.6000000000001, "end": 1249.64, "text": " simply wasting time repeating data points. Then if you do example echoing" }, { "start": 1249.64, "end": 1254.72, "text": " here after augmentation, you use even less fresh data points. And if you do it" }, { "start": 1254.72, "end": 1260.44, "text": " before augmentation, this is really surprising. You almost get the benefit of" }, { "start": 1260.44, "end": 1266.1200000000001, "text": " fresh data points, which is something you might expect, right? Because an" }, { "start": 1266.1200000000001, "end": 1272.3600000000001, "text": " augmented newly shuffled data point is kind of almost a new data point. But" }, { "start": 1272.3600000000001, "end": 1278.96, "text": " still, it's quite surprising that you almost get to the level of the of the" }, { "start": 1278.96, "end": 1285.8400000000001, "text": " theoretical possible. And also here on the image net task. Now here is actually" }, { "start": 1285.84, "end": 1291.24, "text": " an example where you can see that it hurts to do this batch echoing. Because" }, { "start": 1291.24, "end": 1295.9199999999998, "text": " the reasons why it could hurt is just that you have you violate this IID" }, { "start": 1295.9199999999998, "end": 1300.84, "text": " assumption, you basically have correlated data points. This is a big, big problem," }, { "start": 1300.84, "end": 1307.12, "text": " for example, in reinforcement learning, where already by nature of you running" }, { "start": 1307.12, "end": 1311.76, "text": " episodes and then feeding the episodes back into the training procedure, you" }, { "start": 1311.76, "end": 1316.84, "text": " have correlated data points. And that hurts your performance here actually" }, { "start": 1316.84, "end": 1323.2, "text": " compared to the to the baseline. But then if you go to example echoing, and the" }, { "start": 1323.2, "end": 1329.76, "text": " example echoing before augmentation, again, you get a speed up, which is pretty" }, { "start": 1329.8, "end": 1336.4, "text": " cool. Okay, so they do a bunch of other experiments. And I appreciate these" }, { "start": 1336.4, "end": 1340.72, "text": " experiments here to really show what's going on. And until when can you push" }, { "start": 1340.72, "end": 1346.48, "text": " this? So here they have a plot of example echoing before augmentation can reduce" }, { "start": 1346.52, "end": 1352.88, "text": " training time for ResNet 50 on image net. So this is before augmentation. And the" }, { "start": 1352.92, "end": 1358.1200000000001, "text": " echoing factor describes how often you repeat each data point. So this goes from" }, { "start": 1358.1200000000001, "end": 1365.3600000000001, "text": " two to five. And you can see that basically you you get the speed up, you" }, { "start": 1365.36, "end": 1372.6799999999998, "text": " just sort of get it for free. As you can see, the dashed line again is as if if at" }, { "start": 1372.7199999999998, "end": 1378.28, "text": " repeated data point were as useful as a fresh data point, you'd be at the dashed" }, { "start": 1378.28, "end": 1385.52, "text": " line. And you can see right here that you are just above this dashed line. 
So this" }, { "start": 1385.8799999999999, "end": 1391.8, "text": " can help a lot. And so this is the fresh examples read and this is the wall time" }, { "start": 1391.8, "end": 1397.32, "text": " in their particular situation. In this case, it doesn't help as much. But again," }, { "start": 1397.3999999999999, "end": 1404.68, "text": " it if that very much depends on how the asymmetry in your pipeline is. Now, in" }, { "start": 1404.68, "end": 1410.28, "text": " these experiments, I would actually appreciate something like they do down" }, { "start": 1410.32, "end": 1417.24, "text": " here, where I would always like to see where it breaks. So how far can you go" }, { "start": 1417.24, "end": 1423.04, "text": " with the echoing factor until it doesn't help anymore? Because this sort of tells" }, { "start": 1423.04, "end": 1427.2, "text": " me pretty much nothing. I want to see where is the low point? Where's kind of" }, { "start": 1427.2, "end": 1433.24, "text": " the optimal echoing factor? And what can you tell me about this optimal echoing" }, { "start": 1433.24, "end": 1438.4, "text": " factor? How can we determine it sort of beforehand? Or how can you reason how" }, { "start": 1438.4, "end": 1442.4, "text": " does it connect to the different parts of your architecture? So if I had to point" }, { "start": 1442.4, "end": 1448, "text": " out a flaw in this paper, it would be that right here, I would expect the them" }, { "start": 1448, "end": 1455.4, "text": " to continue this echoing factor increase until it breaks, sort of like they do" }, { "start": 1455.44, "end": 1464.44, "text": " down here. This is for I believe this is for the transformer on LM 1B. Now here" }, { "start": 1464.44, "end": 1472.88, "text": " they have a batch size of 1024. And you can see, and this is the this is their" }, { "start": 1472.88, "end": 1477.16, "text": " standard setting for the transformer, the 1024 batch size, you can see that the" }, { "start": 1477.16, "end": 1485.52, "text": " baseline uses this many 1.5 times 10 to the seventh fresh examples to train" }, { "start": 1485.52, "end": 1491.64, "text": " until their target. If you increase the echoing factor by two, you basically need" }, { "start": 1491.64, "end": 1498.8400000000001, "text": " half as many fresh examples, as long as you echo each one twice. Again, very" }, { "start": 1498.8400000000001, "end": 1507.4, "text": " surprising the fact how close you can get to the as if each batch were a a" }, { "start": 1507.4, "end": 1514.2, "text": " perfect fresh data point. But you can see as you increase this echoing factor, and" }, { "start": 1514.2, "end": 1519.5600000000002, "text": " here is exactly what I said, right, you at some point, this hurts at some point," }, { "start": 1519.56, "end": 1525.32, "text": " you get to the point where the non IID ness, the correlation of date of" }, { "start": 1525.32, "end": 1530.76, "text": " successive data points will actually hurt you. And they make a point here of" }, { "start": 1530.76, "end": 1537.9199999999998, "text": " saying that this is, for example, dependent on batch size. Now in this" }, { "start": 1537.9199999999998, "end": 1544.44, "text": " experiment over here, they have a larger batch size. And here is again the the" }, { "start": 1544.44, "end": 1551, "text": " baseline number of data points to reach the target. And you can see again it goes" }, { "start": 1551, "end": 1558.52, "text": " down. 
But now with the echoing factor where before you had a you had an" }, { "start": 1558.52, "end": 1562.68, "text": " increase again, now it continues to decrease. Again, it will be interesting to" }, { "start": 1562.68, "end": 1568.74, "text": " see where it goes up here and how the number at the lowest like here the four" }, { "start": 1568.74, "end": 1573.64, "text": " and here the I don't know what it's gonna be maybe the 16, how this will" }, { "start": 1573.64, "end": 1579.0800000000002, "text": " kind of depend on your batch size. And here is another problem. And that's what" }, { "start": 1579.0800000000002, "end": 1584.2, "text": " I alluded to at the beginning, this this performance dependence. Now, I have not" }, { "start": 1584.2, "end": 1589.96, "text": " read anything differently in the paper. So I had to assume that they trained" }, { "start": 1589.96, "end": 1595.38, "text": " this here, this number of fresh examples to reach the target is still the target" }, { "start": 1595.38, "end": 1601.64, "text": " that they determined at the beginning. So it's that 3.9 in the table, that 3.9" }, { "start": 1601.64, "end": 1607.3600000000001, "text": " was achieved with this batch size with 1024. And we know especially in language" }, { "start": 1607.3600000000001, "end": 1614.2800000000002, "text": " models that larger batch sizes will lead to a better performance, even if you need" }, { "start": 1614.2800000000002, "end": 1619.4, "text": " let's say more samples. So here you can see that the samples here is 1.5 and" }, { "start": 1619.4, "end": 1625.48, "text": " here it's actually four, because you increase that batch size. So that will" }, { "start": 1625.48, "end": 1633, "text": " tell you something 1.5 and four, that is that is like a times. Okay, that's like" }, { "start": 1633, "end": 1641.4, "text": " an times 2.5. So you go with the batch size of times four, and you need 2.5 more" }, { "start": 1641.4, "end": 1647.2, "text": " more fresh training samples to reach the same target accuracy. First of all, we" }, { "start": 1647.2, "end": 1653, "text": " know that the larger batch sizes can reach higher target accuracies. So again," }, { "start": 1653, "end": 1658.84, "text": " these results, the dependence of them on the actual performance to the" }, { "start": 1658.84, "end": 1665.6, "text": " maximum achievable value, to me that's kind of a shady world here to always" }, { "start": 1665.6, "end": 1672.36, "text": " say, okay, how long does it take to reach that particular target? Because we" }, { "start": 1672.36, "end": 1677.8, "text": " know that this model right here can reach a much higher target, but we don't" }, { "start": 1677.8, "end": 1682.72, "text": " know this about these models here. What is their kind of performance in the limit?" }, { "start": 1682.72, "end": 1688.68, "text": " And they try to make these experiments, but I don't really believe them. Maybe" }, { "start": 1688.68, "end": 1695.68, "text": " yeah. And yes, and the second, right, that will that will be that will that is" }, { "start": 1695.68, "end": 1702.82, "text": " already interesting. So this ratio right here, this 2.5 to 4, this ratio must mean" }, { "start": 1702.82, "end": 1707.72, "text": " something, right? It's it's I go to a higher batch size, four times higher" }, { "start": 1707.72, "end": 1714.64, "text": " batch size, and I need 2.5 many more fresh training samples to reach the same" }, { "start": 1714.64, "end": 1719.48, "text": " target. 
That must somehow tell you something about the usefulness of a" }, { "start": 1719.48, "end": 1724.96, "text": " single data point versus a succession of data points, right? So it doesn't seem" }, { "start": 1724.96, "end": 1728.64, "text": " because I would expect if each data point was valuable, I would expect this" }, { "start": 1728.64, "end": 1738.48, "text": " to be times four. And if it were if it were times one, so if it were no, no speed" }, { "start": 1738.48, "end": 1744.8400000000001, "text": " up at all, sorry, not times four, if it would be times one, it would mean I'd" }, { "start": 1744.8400000000001, "end": 1748.88, "text": " need the same number of fresh training samples, right? No matter how I batch" }, { "start": 1748.88, "end": 1754.44, "text": " them. But it were times four. That means basically that it doesn't matter really" }, { "start": 1754.44, "end": 1759.48, "text": " how many training points I have in a batch as long as I have enough and the" }, { "start": 1759.48, "end": 1765.8, "text": " 1024 seems to be enough. It just it just matters how many you know SGD steps I do." }, { "start": 1765.8, "end": 1769.92, "text": " So basically, what we're saying SGD isn't getting the most out of these data" }, { "start": 1769.92, "end": 1774.52, "text": " points. And this ratio this 2.5, this this tells you something about the" }, { "start": 1774.52, "end": 1779.0800000000002, "text": " information content of a of an additional data point versus the" }, { "start": 1779.08, "end": 1784.76, "text": " usefulness content of an additional step of SGD. And I would expect that to" }, { "start": 1784.76, "end": 1790.76, "text": " depend to intrinsically be connected to the where the low point of this echoing" }, { "start": 1790.76, "end": 1794.08, "text": " factor is because that's exactly what the echoing does. It trades off" }, { "start": 1794.08, "end": 1800.4399999999998, "text": " freshness of data point versus doing more steps on the on the on the same" }, { "start": 1800.44, "end": 1809.44, "text": " information. And for a paper, especially paper by Google brain, I this this is a" }, { "start": 1809.44, "end": 1815.72, "text": " connection that I would love to see investigated. But enough of the ranting," }, { "start": 1815.72, "end": 1820.16, "text": " they do investigate other things they do investigate, for example, what happens if" }, { "start": 1820.16, "end": 1825.2, "text": " we just up the batch size. And you can see here, yeah, this is interesting in" }, { "start": 1825.2, "end": 1832.48, "text": " the baseline needs more fresh samples as you up the batch size. But and at the" }, { "start": 1832.48, "end": 1836.04, "text": " beginning, this batch echoing, for example, doesn't help doesn't hurt, but" }, { "start": 1836.04, "end": 1840.88, "text": " doesn't help. But as you go to higher and higher batch sizes, this batch echoing" }, { "start": 1840.88, "end": 1848.1200000000001, "text": " starts to help more and more. Again, I believe this is connected to the" }, { "start": 1848.1200000000001, "end": 1852.1200000000001, "text": " usefulness of the single data point, at some point, your batch size is just too" }, { "start": 1852.12, "end": 1858.32, "text": " large for the problem, you'd rather do more steps. And that's why this helps. But" }, { "start": 1858.32, "end": 1865.3999999999999, "text": " also, this model right here might have a higher ceiling accuracy. 
So indeed is" }, { "start": 1865.3999999999999, "end": 1869.9599999999998, "text": " the question whether this model right here has the same or whether this model" }, { "start": 1869.9599999999998, "end": 1874.2399999999998, "text": " right here, the batch echoing model would actually fall back to the ceiling" }, { "start": 1874.24, "end": 1882.88, "text": " accuracy of one of these models over here. Yeah, in any case, their point is" }, { "start": 1882.88, "end": 1888.64, "text": " basically that as you increase the batch size, this echoing tends to help more" }, { "start": 1888.64, "end": 1895.96, "text": " relatively. Because maybe it's because what I said, right, they say as batch" }, { "start": 1895.96, "end": 1899.36, "text": " size increases, the performance of batch echoing relative to the baseline stays" }, { "start": 1899.36, "end": 1905.8, "text": " either stays the same or improves. While for example, echoing it either stays the" }, { "start": 1905.8, "end": 1910.6799999999998, "text": " same or it gets sorry, while for example, echoing it either stays the same or gets" }, { "start": 1910.6799999999998, "end": 1916.52, "text": " worse. Dashed lines indicate the expected values if repeated examples were as" }, { "start": 1916.52, "end": 1920.52, "text": " useful as fresh examples. Yeah, so I believe there is an intrinsic connection" }, { "start": 1920.52, "end": 1927.9599999999998, "text": " here between the usefulness of more data and usefulness of doing additional steps." }, { "start": 1927.96, "end": 1931.8, "text": " And here the example echoing you can almost see it as more data because" }, { "start": 1931.8, "end": 1936.3600000000001, "text": " especially here you're going to do augmentation on top of it and you see" }, { "start": 1936.3600000000001, "end": 1942.3600000000001, "text": " the non augmented versus the augmented ratio changes dramatically from here to" }, { "start": 1942.3600000000001, "end": 1950.1200000000001, "text": " here. Okay, final set of experiments. As you can tell, this is more mostly an" }, { "start": 1950.1200000000001, "end": 1956, "text": " experimental paper. And it is always easy to criticize experimental papers and" }, { "start": 1956, "end": 1964.56, "text": " rightfully so because I would not trust this very much. But given that it comes" }, { "start": 1964.56, "end": 1971.72, "text": " from a big institution, and it is a very well written paper, I would trust it" }, { "start": 1971.72, "end": 1976.16, "text": " more than I would a regular paper. And I would say if you're in practice, this is" }, { "start": 1976.16, "end": 1983.12, "text": " certainly worth trying. Absolutely. I'm just I just think that some of the" }, { "start": 1983.12, "end": 1988.52, "text": " things aren't aren't researched, like some of my questions aren't answered of" }, { "start": 1988.52, "end": 1995.6399999999999, "text": " this. So they investigate sizes. So they now build shuffle buffers. So we have" }, { "start": 1995.6399999999999, "end": 2001.32, "text": " batch echoing, but they say ah, but we can do batch echoing with shuffle buffers." }, { "start": 2001.32, "end": 2004.84, "text": " So after the batch echoing, right, we have this state where we have the" }, { "start": 2004.84, "end": 2011.28, "text": " batching. And then we have the echoing. So this is our echo buffer, where we" }, { "start": 2011.28, "end": 2016.44, "text": " output the each data point multiple times. 
And then we have another buffer," }, { "start": 2016.44, "end": 2020.3999999999999, "text": " which is a shuffle buffer, that a shuffle buffer just collects data points and" }, { "start": 2020.3999999999999, "end": 2024.6399999999999, "text": " then shuffles them around before outputting them. And that means even" }, { "start": 2024.6399999999999, "end": 2031.44, "text": " though we you know, output this five times, it might not come out five times" }, { "start": 2031.44, "end": 2036.2, "text": " after each other, it might be that it comes out once, and then another data" }, { "start": 2036.2, "end": 2039.76, "text": " point that was already in the shuffle buffer comes out. And then it will just" }, { "start": 2039.76, "end": 2044.32, "text": " say that in total, it comes out five times. But it is first shuffled together" }, { "start": 2044.32, "end": 2050.92, "text": " with a bunch of other data points. Of course, this uses more memory, but it" }, { "start": 2050.92, "end": 2056.12, "text": " returns to that more IID setting. And you can see here as the buffer size" }, { "start": 2056.12, "end": 2061.52, "text": " increases, then the performance gets more and more to the performance that you" }, { "start": 2061.52, "end": 2066.68, "text": " would have with completely fresh data. Right. So again, trading off freshness" }, { "start": 2066.68, "end": 2075.48, "text": " and freshness and doing multiple steps with by by basically repeating," }, { "start": 2075.96, "end": 2082.12, "text": " repeating data points straight out versus repeating data points shuffled." }, { "start": 2083.3999999999996, "end": 2088.7599999999998, "text": " And also here you have the same with example echoing. So if you apply the" }, { "start": 2088.7599999999998, "end": 2094.3599999999997, "text": " shuffle buffer to example echoing and you increase its size, you can get very," }, { "start": 2094.36, "end": 2101.1600000000003, "text": " very, very close to the performance that you would get with fresh data, which of" }, { "start": 2101.1600000000003, "end": 2105.8, "text": " course, if you increase the shuffle buffer to the size of the data set, you" }, { "start": 2105.8, "end": 2109.96, "text": " are at the situation that's the limit you are at the situation of fresh data," }, { "start": 2110.2000000000003, "end": 2115.48, "text": " right? If you do example echoing. Right. So here is where it gets into the funky" }, { "start": 2115.48, "end": 2121.56, "text": " part, where they say we actually measure the validation cross entropy and the" }, { "start": 2121.56, "end": 2127.4, "text": " validation accuracy versus the number of fresh examples read. And here I want to" }, { "start": 2127.56, "end": 2133.48, "text": " concentrate on the ResNet 50 on ImageNet. And as you can see, most of these" }, { "start": 2133.56, "end": 2139.7999999999997, "text": " models, they pretty much end up in the same place here. It's just that the" }, { "start": 2139.88, "end": 2148.68, "text": " echoing models end up there faster. Right. And this this is, I mean, this is" }, { "start": 2148.68, "end": 2155.48, "text": " where it gets a bit confusing, honestly, because why do you have this super sharp" }, { "start": 2155.48, "end": 2162.2, "text": " thing here? Because usually and here it sort of speeds up in the middle, you see" }, { "start": 2162.2, "end": 2167.48, "text": " you see that and then it kind of sharply declines. 
Is this maybe because they" }, { "start": 2167.48, "end": 2173.8799999999997, "text": " drop the learning rate or something. Now, my main thing is that the performance" }, { "start": 2173.88, "end": 2181.4, "text": " here even though this target thing is lower than than the even though this" }, { "start": 2181.4, "end": 2185.6400000000003, "text": " target thing is the same for everyone, it is lower than the best reachable" }, { "start": 2185.6400000000003, "end": 2193.48, "text": " accuracy. And I'm I'm just this this is just confusing. If this is really true." }, { "start": 2194.2000000000003, "end": 2202.2000000000003, "text": " Whoa. If this is really true, I think we have a lot to learn about SGD yet and" }, { "start": 2202.2, "end": 2207.24, "text": " how we're not actually doing SGD correctly. And because it seems like" }, { "start": 2207.24, "end": 2214.68, "text": " almost the the echo versions are better or reach a better accuracy than the" }, { "start": 2214.68, "end": 2219.96, "text": " baseline. I don't know, do they just cap it at the performance? I don't think so." }, { "start": 2219.96, "end": 2225.64, "text": " I think they say they let it reach. They also have these things right here." }, { "start": 2225.64, "end": 2229.8799999999997, "text": " These these curves where they say this is the best we reach. And this is the" }, { "start": 2229.88, "end": 2239, "text": " ResNet 32 on C410. Again, 91% on C410 is just very, very low. And I'm almost" }, { "start": 2239, "end": 2244.6800000000003, "text": " thinking that, okay, this might help if you just throw something that we know is" }, { "start": 2244.6800000000003, "end": 2249.96, "text": " kind of overpowered because we can reach 99%. Or at least you can reach" }, { "start": 2249.96, "end": 2256.04, "text": " something like 94% on C410 easily, easily with a network smaller than ResNet 32." }, { "start": 2256.04, "end": 2260.7599999999998, "text": " Maybe this effect manifests if you if you have actually something that could" }, { "start": 2260.7599999999998, "end": 2268.2799999999997, "text": " reach higher, but for some reason you only reach this low. I'm not sure. But" }, { "start": 2268.2799999999997, "end": 2275.56, "text": " this is confusing. And if this is really true. Yeah, I would just if it's true," }, { "start": 2275.56, "end": 2280.36, "text": " which I believe I believe this paper, it might be just an effect of not reaching" }, { "start": 2280.36, "end": 2286.6, "text": " the actual ceiling. And again, look at this. This is just the curves are just" }, { "start": 2286.6, "end": 2293.48, "text": " strange, right? You have the echoing before augmentation, like it seems like" }, { "start": 2293.48, "end": 2301.48, "text": " it's outperforming the the fresh data points. I don't know, there's a little" }, { "start": 2301.48, "end": 2306.44, "text": " bell in my head that doesn't like this. If it's actually true, then you know," }, { "start": 2306.44, "end": 2310.92, "text": " that's cool. But yeah, so my main criticisms are a bit with the" }, { "start": 2310.92, "end": 2315.96, "text": " experimental methodology, for example, where they increase the batch size, but" }, { "start": 2315.96, "end": 2320.36, "text": " still reach the same target accuracy, even though we know that there is a" }, { "start": 2320.36, "end": 2324.68, "text": " higher ceiling if you increase the batch size for language models. My other" }, { "start": 2324.68, "end": 2331.8, "text": " criticism is the non investigation of this connection. 
This connection right" }, { "start": 2331.8, "end": 2336.84, "text": " here, maybe, but all in all, it's a pretty cool paper. If I had a big company" }, { "start": 2336.84, "end": 2341.32, "text": " with these pipeline issues, I would absolutely implement this. This seems" }, { "start": 2341.32, "end": 2348.52, "text": " like a no brainer to do this and can help you tremendously. Alright, that was" }, { "start": 2348.52, "end": 2352.52, "text": " it. Thank you for listening. If you're still here, subscribe, like, tell a" }, { "start": 2352.52, "end": 2361.8, "text": " friend. Bye bye." } ]
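The transcript above walks through the data echoing mechanism in words; a minimal sketch may make it concrete. The code below is not the paper's implementation (the paper works inside an input pipeline such as tf.data); the function names and the echo_factor parameter are illustrative assumptions, written as plain Python generators.

import random

def data_echo(stream, echo_factor=2):
    # Emit each upstream example echo_factor times in a row,
    # so the downstream SGD step is not left idle waiting for
    # the slower reading/augmentation stages.
    for example in stream:
        for _ in range(echo_factor):
            yield example

def shuffle_buffer(stream, buffer_size=1024, seed=0):
    # Optional buffer placed after echoing: it breaks up the
    # back-to-back repeats and moves the stream back toward IID.
    rng = random.Random(seed)
    buf = []
    for example in stream:
        buf.append(example)
        if len(buf) >= buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:
        yield buf.pop(rng.randrange(len(buf)))

Where this is spliced into the pipeline decides what gets repeated: before augmentation, each echoed copy is still augmented and batched differently, so repeats are almost as useful as fresh examples; after batching, SGD sees the identical batch echo_factor times, which saves the most upstream work but violates the fresh-data assumption the most.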
ifBI2jTaAEo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Celebrating 100k Subscribers! (w/ Channel Statistics)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper" ]
#yannickilcher #machinelearning #100k OUTLINE: 0:00 - 100k! 1:00 - Announcements & Thanks 3:55 - Channel Statistics Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Yay! 100k! Nice! Big celebration, we have just reached 100,000 subscribers. Now, truth be told, as of recording this video we actually don't have 100,000 subscribers yet. There are like 156 missing. So all I have to do is not get cancelled in the next two days or so. And this is harder than it seems. But I've managed so far; I think I can make it. So thank you, everyone who's been here for any amount of time. 100,000 of you have decided to click on the subscribe button, and I'm eternally grateful to every single one. I would have never, ever thought that a dude on YouTube talking for 45 minutes about research papers and stuff would get any attention at all, pun intended. But hey, it's come to this. So thank you all so much. This has been absolutely great, and I have no intention of stopping. Now, this video right here is supposed to be a little bit of an announcement video, and also I thought we'd look a little bit into the channel statistics, because I know some of you are interested. So what are the announcements? As I said, I have no intention of stopping; reaching 100k doesn't make a big difference in terms of content. In fact, I have lots of ideas for nice content, and probably more ideas than time to implement them. But there's some cool stuff coming up. Also, I will be hosting an ask-me-anything, probably on Sunday. It's gonna happen here on YouTube, so you'll see that pop up if you're around at that time. Next thing: merch. I thought it'd be funny to have a little bit of channel merch, and I don't have it ready yet, but we'll chat on Discord a little bit about what is going to be offered, because I do want your input on these kinds of things. So let's get some funny merch; I think that'll be cool. Speaking of Discord, special thanks to everyone who is there and participates, to everyone who has ever asked and everyone who has ever answered a question in the help channel, and to everyone who has participated in or even just listened to the paper discussions we host there. Special thanks to the regulars and to the moderators who keep everything going. This would absolutely not be possible if it were just myself. So huge thanks to everyone there. This community is just amazing. And we would not be at 100k right now if it weren't for the support that I'm getting from there. If you're not yet a Discord member and you do want to be more involved, the link is right there in the description. Everyone's welcome. As I said, next to the usual Discord chit-chat, we have regular paper discussions, and there are also some community projects. Currently, there is one called Homebrew NLP, where the goal is to build a framework that can run really large language models on a single machine. If you're interested in that, absolutely join and participate in the creation of that. Very cool. Okay, that being said, let's dive a little bit into the channel statistics. Now, I think due to the rules of AdSense, I'm not allowed to show you the exact numbers of revenue that come from ads. I'm not entirely sure that's actually the rule, but I have heard it from somewhere and I'd rather not get into trouble. Safe to say, it's not nearly a number you could live off of or anything like that. It did support, for example, the new camera that I've gotten, so you can enjoy excellent quality. Also, thanks of course to the Patreon and SubscribeStar supporters, and also the people who've sent me a bit of crypto.
This has also enabled me to get a new iPad instead of my old Surface tablet, which makes the creation of the paper reviews just a lot easier. So thanks a lot for that. So here I've pulled up statistics since January 2020. I have made numerous videos before that, but not nearly at the scale or frequency that I'm making them now. So the real video making started in the early days of 2020, when the first wave of the current global phenomenon hit and I suddenly found myself with a bit more time on my hands. At that time, I was watching a lot of videos by people like PewDiePie and Casey Neistat, and I have deep respect for these people who upload every single day. And I asked myself, how long could I keep this up? It turned out I could keep it up for about three to four months. So as you can see, YouTube is mostly a grind with a few intermittent spikes. I believe the first spike here is GPT-3, and the second spike is AlphaFold. You can also see the times I took a couple of breaks, namely here in late summer of 2020 and in early summer of this year. It's pretty cool how you can see all of this in the stats. Also, we've recently passed 4 million views, which is crazy. Interestingly, here you can see that while a lot of people appear to have watched the GPT-3 video, not a lot of people have watched it to the end. See the difference? Spike? No spike. Spike? No spike. Maybe that was a different video. Top videos: of course, the all-time favorite, Attention Is All You Need. See, I uploaded this in 2017, and it's drawn people ever since, which means I must have done something right. Now, people have told me to get a thumbnail going for this and things like that, but I'm not going to change a single thing. This video is doing well, people are watching it for a long time; not going to change a thing. Here you see other popular videos are AlphaFold and GPT-3. Also surprising is TransCoder, which a lot of people watch, but then they watch kind of none of it. So this might have been the big spike. I'm not sure if the thumbnail here is misleading and people expected coding content rather than an analysis of a research paper, or whether it's because the first part of this word is sort of politically overloaded and maybe people clicked on that, or the algorithm recommended it to people. I'm not sure, but it is what it is. Interestingly, click-through rate has been going steadily down. I'm not sure if that is to be expected as you grow. But maybe I should do a little bit more clickbait to get people to click more. When people search for this channel, the thing they search for most is my name, which is quite flattering. And then it is the titles of the videos they're interested in, such as Attention Is All You Need, GPT-3, AlphaFold or Vision Transformer, which was a cool video. If you remember, I reviewed that before it was clear who the authors were, and I sort of deanonymized the paper live, and yeah, I thought that was funny. So who are you? You are probably on YouTube mostly around 6pm Central European time. You're probably also subscribed to Two Minute Papers, Lex Fridman, Tesla, Machine Learning Street Talk and Sabine Hossenfelder, among other channels. Now, a specific shout-out to Machine Learning Street Talk: if you're not subscribed to that, I can highly recommend it. I'm part of it, not always, but a lot of the time, and we have super interesting discussions with people that I would have never guessed I could ever reach, talk to and ask questions.
So I think we have really cool guests and the conversations are often quite technical, so I think you will enjoy that. In terms of watch time, only about half the people are subscribed, which is surprising. That means 200k subscribers isn't far away. And 19 out of 20 of you are probably male, and a lot of you are between 25 and 34 years old. Now, I'm never sure whether those statistics only cover the people whose attributes YouTube actually knows because they've specified them somewhere, or whether it's what YouTube guesses about people. In the latter case, I'd guess the numbers would be seriously distorted, because the guessing would probably be based on something like your interests. It might be that if you're into a lot of technical subjects you're more likely to be male, but then that gets counted into the statistic here, and probably that statistic is then used again for training the algorithms. I'm not sure, so I'm not going to read too much into this right here. Also, you're quite likely to be from the United States or India, but really, viewers are distributed all over the world. Okay, I've actually figured it out. Yes, the giant spike was in fact the TransCoder video, and here you can see that the traffic source was mostly external. So in fact, the GPT-3 video was a much smaller spike, not much earlier than the TransCoder spike. So this was it for the channel statistics, for the celebration of 100k. Thank you so much to everyone who is here, to everyone who's helped and who's participated. I hope you still enjoy the content. I still read all the comments. If you have any feedback, any wishes or anything like this, let me know. I'm looking forward to what's to come, and have a great day. Bye bye.
[ { "start": 0, "end": 12.280000000000001, "text": " Yay! 100k! Nice! Big celebration, we have just reached 100,000 subscribers. Now truth" }, { "start": 12.280000000000001, "end": 17.96, "text": " be told as of recording of this videos, we actually don't have 100,000 subscribers yet." }, { "start": 17.96, "end": 25.02, "text": " There's like 156 missing. So all I have to do is not get cancelled in the next two days" }, { "start": 25.02, "end": 30.64, "text": " or so. And this is harder than it seems. But I've managed so far I think I can make it." }, { "start": 30.64, "end": 37.26, "text": " So thank you everyone who's been here for any amount of time. 100,000 of you have decided" }, { "start": 37.26, "end": 43.239999999999995, "text": " to click on the subscribe button and I'm eternally grateful to every single one. I would have" }, { "start": 43.239999999999995, "end": 51, "text": " never ever ever thought that a dude on YouTube talking for 45 minutes about research papers" }, { "start": 51, "end": 58.34, "text": " and stuff would get any attention at all pun intended. But hey, it's come to this. So thank" }, { "start": 58.34, "end": 63.8, "text": " you all so much. This has been absolutely great. I have no intention of stopping. Now" }, { "start": 63.8, "end": 69.03999999999999, "text": " this video right here is supposed to be a little bit of an announcement video. And also" }, { "start": 69.03999999999999, "end": 73.08, "text": " I thought we'd look a little bit into the channel statistics because I know some of" }, { "start": 73.08, "end": 78.2, "text": " you are interested. So what are the announcements? As I said, I have no intention of stopping" }, { "start": 78.2, "end": 83.2, "text": " reaching 100k doesn't make a big difference in terms of content. In fact, I have lots" }, { "start": 83.2, "end": 88.7, "text": " of ideas for nice content, and probably more ideas than time to implement them. But there's" }, { "start": 88.7, "end": 95.86, "text": " some cool stuff coming up. Also, I will be hosting and ask me anything on probably Sunday" }, { "start": 95.86, "end": 101.18, "text": " as gonna happen here on YouTube. So you'll see that pop up if you're around at that time." }, { "start": 101.18, "end": 106.32000000000001, "text": " Next thing, merch. I thought it'd be funny to have a little bit of channel merch, and" }, { "start": 106.32, "end": 111.1, "text": " I don't have it ready yet. But we'll chat on this court a little bit about what is going" }, { "start": 111.1, "end": 116.1, "text": " to be offered because I do want your inputs into these kinds of things. So let's get some" }, { "start": 116.1, "end": 121.97999999999999, "text": " funny merch. And I think that'll be cool. Speaking of discord, special thanks to everyone" }, { "start": 121.97999999999999, "end": 126.97999999999999, "text": " who is there who participates to everyone who has ever asked and to everyone who has" }, { "start": 126.97999999999999, "end": 132.54, "text": " ever answered a question in the help channel to everyone who has participated or even just" }, { "start": 132.54, "end": 137.66, "text": " listened to the paper discussions we host there is special thanks to the regulars and" }, { "start": 137.66, "end": 142.7, "text": " to the moderators who keep everything going. This would absolutely not be possible if it" }, { "start": 142.7, "end": 148.94, "text": " were just myself. So huge thanks to everyone there. This community is just amazing. 
And" }, { "start": 148.94, "end": 154.26, "text": " we will not be at 100k right now if it weren't for the support that I'm getting from there." }, { "start": 154.26, "end": 159.34, "text": " If you're not yet a discord member and you do want to be more involved, link is right" }, { "start": 159.34, "end": 164.06, "text": " there in the description. Everyone's welcome. As I said, next to the usual discord chit" }, { "start": 164.06, "end": 169.74, "text": " chat, we have regular paper discussions. And also there are some community projects. Currently," }, { "start": 169.74, "end": 174.7, "text": " there is one called homebrew NLP, where the goal is to build a framework that can run" }, { "start": 174.7, "end": 180.5, "text": " really large language models on a single machine. If you're interested in that, absolutely join" }, { "start": 180.5, "end": 185.42000000000002, "text": " and participate in creation of that. Very cool. Okay, that being said, let's dive a" }, { "start": 185.42, "end": 193.42, "text": " little bit into the channel statistics. Now I think due to the rules of AdSense, I'm not" }, { "start": 193.42, "end": 199.45999999999998, "text": " allowed to show you the exact numbers of revenue that come from ads, not entirely sure that's" }, { "start": 199.45999999999998, "end": 203.38, "text": " the rule actually, but I have heard it from somewhere and I'd rather not get into trouble." }, { "start": 203.38, "end": 208.77999999999997, "text": " Safe to say, it's not nearly a number where you could live off of this or anything like" }, { "start": 208.77999999999997, "end": 214.64, "text": " this. It did support for example, the new camera that I've gotten so you can enjoy an" }, { "start": 214.64, "end": 220.82, "text": " excellent quality. Also, thanks of course to the Patreon and subscribe star supporters" }, { "start": 220.82, "end": 225.54, "text": " and also the people who've sent me a bit of crypto. This has also enabled me to get a" }, { "start": 225.54, "end": 230.73999999999998, "text": " new iPad instead of my old Surface tablet, which makes the creation of the paper reviews" }, { "start": 230.73999999999998, "end": 236.42, "text": " just a lot easier. So thanks a lot for that. So here I've pulled up statistics since January" }, { "start": 236.42, "end": 244.1, "text": " 2020. I have made numerous videos before that, but not nearly at the scale or frequency that" }, { "start": 244.1, "end": 251.29999999999998, "text": " I'm making them now. So the real video making started in the early days of 2020, when the" }, { "start": 251.29999999999998, "end": 257.1, "text": " first wave of the current global phenomenon hit, and I suddenly found myself with a bit" }, { "start": 257.1, "end": 262.54, "text": " of more time on my hands. And at that time, I was watching a lot of videos by people like" }, { "start": 262.54, "end": 269.1, "text": " PewDiePie and Casey Neistat, and I deep respect for these people that upload every single" }, { "start": 269.1, "end": 274.38, "text": " day. And I asked myself, how long could I keep this up? And it turned out I could keep" }, { "start": 274.38, "end": 280.42, "text": " it up for about three to four months. So as you can see, YouTube is mostly a grind with" }, { "start": 280.42, "end": 286.96000000000004, "text": " a few intermittent spikes. I believe the first spike here is GPT three. And the second spike" }, { "start": 286.96000000000004, "end": 292.38, "text": " is alpha fold. 
You can also see the times I took a couple of breaks namely here in late" }, { "start": 292.38, "end": 296.70000000000005, "text": " summer of 2020, and in early summer of this year. It's pretty cool how you can see all" }, { "start": 296.7, "end": 303.62, "text": " of this in the stats. Also, we've recently passed 4 million views, which is crazy. Interestingly," }, { "start": 303.62, "end": 308.94, "text": " here you can see while a lot of people appear to have watched the GPT three video, not a" }, { "start": 308.94, "end": 316.53999999999996, "text": " lot of people have watched it to the end. See the difference? Spike? No spike. Spike?" }, { "start": 316.53999999999996, "end": 324.34, "text": " No spike. Maybe that was a different video. Top videos, of course, the all time favorite" }, { "start": 324.34, "end": 331.09999999999997, "text": " attention is all you need. See, I've uploaded this in 2017. And it's drawn people ever since," }, { "start": 331.09999999999997, "end": 335.09999999999997, "text": " which means I must have done something right. Now people have told me to get a thumbnail" }, { "start": 335.09999999999997, "end": 339.21999999999997, "text": " for this going or anything like this. But I'm not I'm not going to change a single thing" }, { "start": 339.21999999999997, "end": 343.85999999999996, "text": " about this video is doing well. People are watching it for a long time, not going to" }, { "start": 343.85999999999996, "end": 349.62, "text": " change a thing. Here you see other popular videos are alpha fold and GPT three. Now also" }, { "start": 349.62, "end": 354.66, "text": " surprising is trans coder, which a lot of people watch, but then they watch kind of" }, { "start": 354.66, "end": 359.82, "text": " none of it. So this might have been the big spike. I'm not sure if the thumbnail here" }, { "start": 359.82, "end": 365.38, "text": " is misleading and people expected coding content rather than an analysis of a research paper," }, { "start": 365.38, "end": 370.16, "text": " or it's because the first part of this word is sort of politically overloaded and maybe" }, { "start": 370.16, "end": 376.24, "text": " people clicked on that or the algorithm recommended that to people. I'm not sure but it is what" }, { "start": 376.24, "end": 382.3, "text": " it is. Interestingly, click through rate has been going steadily down. I'm not sure if" }, { "start": 382.3, "end": 388.02, "text": " that is to be expected as you grow, I guess. I'm not sure. But maybe I should do a little" }, { "start": 388.02, "end": 393.16, "text": " bit more clickbait to get people to click more. When people search for this channel," }, { "start": 393.16, "end": 398.72, "text": " the most thing they search is my name, which is quite flattering. And then it is the titles" }, { "start": 398.72, "end": 403.82, "text": " of the videos they're interested in such as attention is all you need GPT three, alpha" }, { "start": 403.82, "end": 409.2, "text": " fold or vision transformer, which was a cool video. If you remember, I reviewed that before" }, { "start": 409.2, "end": 416.74, "text": " it was clear who the authors were and I sort of deanonymize the paper live and yeah, I" }, { "start": 416.74, "end": 425.21999999999997, "text": " thought that was funny. So who are you? 
You are probably on YouTube mostly around 6pm" }, { "start": 425.21999999999997, "end": 431.46, "text": " in Central Europe. You're probably also subscribed to Two Minute Papers, Lex Fridman, Tesla," }, { "start": 431.46, "end": 437.09999999999997, "text": " ML Street Talk and Sabine Hossenfelder, among other channels. Now a specific shout-out" }, { "start": 437.09999999999997, "end": 441.62, "text": " to ML Street Talk. If you're not subscribed to that, I can highly recommend it. I'm part" }, { "start": 441.62, "end": 446.65999999999997, "text": " of it, not always but a lot of the time, and we have super duper interesting discussions with" }, { "start": 446.65999999999997, "end": 453.26, "text": " people that I would have never guessed I could ever reach and talk to and ask them questions." }, { "start": 453.26, "end": 458.29999999999995, "text": " So I think we have really cool guests and the conversations are often quite technical." }, { "start": 458.3, "end": 465.78000000000003, "text": " So I think you will enjoy that. In terms of watch time, only about half the people are" }, { "start": 465.78000000000003, "end": 474.46000000000004, "text": " subscribed, which is surprising. That means 200k subscribers isn't far away. And 19 out" }, { "start": 474.46000000000004, "end": 482.26, "text": " of 20 of you are probably male, and a lot of you are between 25 and 34 years old. Now I'm" }, { "start": 482.26, "end": 487.5, "text": " never sure if that is just the statistics of the people where YouTube knows what they" }, { "start": 487.5, "end": 492.78, "text": " are because they've specified it somewhere, or whether it's what YouTube guesses about people, in" }, { "start": 492.78, "end": 498.38, "text": " which case I guess it would be seriously distorted, because the guessing would probably" }, { "start": 498.38, "end": 503.1, "text": " be based on something like your interests, which might be that if you're into a lot of" }, { "start": 503.1, "end": 507.7, "text": " technical subjects, you're more likely to be male. But then you count that into the statistic" }, { "start": 507.7, "end": 512.84, "text": " here, and probably that statistic is then used again for training the algorithms. I'm not" }, { "start": 512.84, "end": 517.42, "text": " sure, so I'm not going to read too much into this thing right here. Also, you're quite" }, { "start": 517.42, "end": 524.9799999999999, "text": " likely to be from the United States or India, but really the geographies are distributed" }, { "start": 524.9799999999999, "end": 529.66, "text": " all over the world. Okay, I've actually figured it out. Yes, the giant spike was in" }, { "start": 529.66, "end": 536.86, "text": " fact the TransCoder video. And here you can see that the traffic source was mostly external." }, { "start": 536.86, "end": 544.9399999999999, "text": " So in fact, the GPT-3 video was a much smaller spike, not much earlier than the TransCoder" }, { "start": 544.94, "end": 551.6, "text": " spike. So this was it for the channel statistics for the celebration of 100k. Thank you so" }, { "start": 551.6, "end": 557.98, "text": " much to everyone who is here, to everyone who's helped and who's participated. I hope you" }, { "start": 557.98, "end": 562.58, "text": " still enjoy the content. I still read all the comments. If you have any feedback, any" }, { "start": 562.58, "end": 567.6600000000001, "text": " wishes or anything like this, let me know.
I'm looking forward to what's to come and" }, { "start": 567.66, "end": 578.42, "text": " have a great day. Bye bye." } ]
RXwZKzczkF8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] AI Threatens Biological Arms Race
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gtc", "gtc22", "nvidia", "jensen huang", "3090", "rtx 3090", "ithaca", "deepmind", "deep mind", "deepmind greek text", "deepmind ithaca", "ml news", "mlnews", "ai news", "kilcher news", "drug discovery", "ai drug discovery", "ai drug development", "yoshua bengio", "joshua bengio", "yosha bengio", "bengio knight", "gary marcus", "deep learning wall", "gary marcus deep learning", "pig grunts", "ai animal communication", "meta ai" ]
#mlnews #gtc22 #ithaca GTC Registration Link: https://ykilcher.com/gtc Your regular updates on what's going on in the ML world! OUTLINE: 0:00 - Intro 0:20 - Register to Nvidia GTC and win a 3090! 4:15 - DeepMind's Ithaca deciphers Lost Ancient Texts 6:45 - Drug discovery model turns toxic 10:00 - Gary Marcus: Deep Learning is hitting a wall 19:40 - GopherCite: Backing up answers with citations 22:40 - Yoshua Bengio appointed knight of the legion of honour 23:00 - Meta AI tags parody account of Yoshua Bengio 23:40 - Building games using just natural language 24:55 - YOU.com adds writing assistant 25:45 - Horace He: How to brrr 26:35 - Karpathy: Reproducing Yann LeCun's 1989 paper 27:50 - Pig grunt emotion classifier 28:20 - AI annotates protein domain functions 29:40 - Atwood & Carmack: 10k self-driving car bet 30:50 - Helpful Things References: Register to GTC and win a 3090! https://twitter.com/NVIDIAEU/status/1501881813651836930 https://www.nvidia.com/gtc/keynote/?ncid=so-twit-533413&=&linkId=100000114410590 https://www.nvidia.com/gtc/?ncid=ref-inpa-330612 https://www.nvidia.com/gtc/keynote/ https://www.nvidia.com/gtc/training/ https://developer.nvidia.com/nvidia-omniverse-platform DeepMind deciphers Lost Ancient Texts https://deepmind.com/blog/article/Predicting-the-past-with-Ithaca https://www.nature.com/articles/s41586-022-04448-z https://github.com/deepmind/ithaca https://ithaca.deepmind.com/?job=eyJyZXF1ZXN0SUQiOiI1N2I4MWFjNTIxNGM3NDBiMjc3YzA1YzFiOTYwYzI0NCIsImF0dHJpYnV0aW9uIjp0cnVlLCJyZXN0b3JhdGlvbiI6dHJ1ZX0%3D Drug discovery model turns toxic https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx https://www.nature.com/articles/s42256-022-00465-9.pdf?utm_source=pocket_mylist Gary Marcus: Deep Learning is hitting a wall https://nautil.us/deep-learning-is-hitting-a-wall-14467/ https://www.youtube.com/watch?v=fVkXE330Bh0&t=4437s GopherCite: Backing up answers with citations https://deepmind.com/research/publications/2022/GopherCite-Teaching-Language-Models-To-Support-Answers-With-Verified-Quotes Yoshua Bengio appointed knight of the legion of honour https://mila.quebec/en/professor-yoshua-bengio-appointed-knight-of-the-legion-of-honour-by-france/ Meta AI tags parody account https://twitter.com/MetaAI/status/1504575140532613125 Building games using just natural language https://andrewmayneblog.wordpress.com/2022/03/17/building-games-and-apps-entirely-through-natural-language-using-openais-davinci-code-model/ YOU.com adds writing assistant https://you.com/search?q=how%20to%20write%20well Horace He: How to brrr https://horace.io/brrr_intro.html Karpathy: Reproducing Yann LeCun's 1989 paper https://karpathy.github.io/2022/03/14/lecun1989/ Pig grunt emotion classifier https://science.ku.dk/english/press/news/2022/pig-grunts-reveal-their-emotions/?utm_source=pocket_mylist AI annotates protein domain functions https://ai.googleblog.com/2022/03/using-deep-learning-to-annotate-protein.html?utm_source=pocket_mylist https://google-research.github.io/proteinfer/ Atwood & Carmack: 10k self-driving car bet https://blog.codinghorror.com/the-2030-self-driving-car-bet/?utm_source=pocket_mylist Helpful Things https://github.com/recognai/rubrix https://twitter.com/taiyasaki/status/1501288630697877504 https://github.com/mosaicml/composer?src=twitter https://mujoco.org/ https://mujoco.readthedocs.io/en/latest/changelog.html https://github.com/deepmind/mctx?utm_source=pocket_mylist https://padl.ai/ https://github.com/LaihoE/did-it-spill 
https://pytorch.org/blog/pytorch-1.11-released/ Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
DeepMind uses deep learning to restore ancient Greek texts. A drug discovery system has been abused to create thousands and thousands of super toxic compounds. And Gary Marcus claims deep learning is hitting a wall. Welcome to ML News. It's Monday. The GTC conference goes into its next iteration. Now, GTC is Nvidia's company conference, and like all of the big companies, they present all of their newest stuff there. But they also have a host of external speakers and all kinds of people who just give education and talks about how they use deep learning for various things. All of it is obviously Nvidia-themed, but I can promise you the talks are interesting by themselves as well. The highlight of the conference is obviously the keynote by Jensen Huang, and depending on when you're watching this video, the conference is probably going on right now. And the best part is, if you use my link, that's ykilcher.com/gtc, and you use that to sign up for the conference, you can win a 3090 that has been hand-signed by Jensen Huang. So the same person that is giving the keynote will have signed your GPU if you win it. Now this is pretty cool. All you have to do is sign up using my link and then attend at least one session, and why not attend the keynote? The keynote will go into all of the upcoming things from Nvidia. For example, is there going to be something like a 4090? What does it look like? Why do they increase the first digit of 3090 and not just make it the 3091? All the biggest questions of humanity. Now, other than new architectures coming up, there will also be a lot of talks on the topics of accelerated computing, autonomous driving, anything to do with computer vision, rendering and cybersecurity. Nvidia hardware now powers almost all deep learning advances, apart from some specialized vendors, so this is definitely a good place to look. Another thing I want to highlight is the Nvidia Omniverse platform, which is a high-performance and really good simulation, physics and rendering engine. This includes Pixar's Universal Scene Description technology and can be used to do accurate renderings. And since synthetic data is such a big deal in recent times, this could really be something to accelerate your research if you are into simulated data transferring to the real world. It's pretty cool and a lot of things can be done with it. And no, the Omniverse isn't the metaverse per se, but there is a session you can attend at GTC that talks about the metaverse and how to build virtual connected worlds. And as you can see, one of the speakers is the VP of Omniverse, so in the end everything's somehow going to come together. There are even sessions called "Connect with the Experts" where you get one-on-one time with experts in a certain area, for example GPU performance analysis and optimization. This is first come, first served. So, as I said, besides the keynote there is an entire plethora of sessions that you can attend. These go from building large language models, to next-generation rendering, to using AI for cybersecurity, or understanding how the newest technologies can help your business. There are also more specialized tracks, such as ones focused on health care, autonomous driving and other areas. Registration is free, and you can put together your own little calendar that reminds you whenever these sessions are coming up. Again, use my link to sign up in order to win a 3090. There's one caveat: you need to be in EMEA, which is Europe, the Middle East or Africa, in order to qualify for the 3090 raffle.
However, I've decided that anyone living outside of these areas can also participate in another raffle that I sponsor, and that will just get you some merch. So inside EMEA you can participate for the 3090; outside EMEA you can participate for the merch. Now, if you are in either bucket and you want to be in the other bucket, I'm sure we're going to do stuff in the future where you can win to your heart's content. But for now, this seems like the fairest allocation of resources. And remember, you have to attend a session at GTC in order to qualify for the 3090. DeepMind has released a new blog post called Predicting the Past with Ithaca. This is a system that restores ancient texts, namely ancient texts from the Greeks. Throughout the years, a lot of these inscriptions in stone have gone missing or have been damaged, and therefore historians need to tease out what things could mean. Now, this is obviously a good application for something like a language model. So what Ithaca does is take in whatever is undamaged, plus a few hints of where it needs to fill in missing characters, and try to reconstruct these things. Not only will it give an output that restores the missing pieces of text, it will also determine a probability distribution over the geographical origins of this piece of text, as well as a chronological attribution, meaning it will estimate when the text was written. Now, it's interesting to me that, as you can see right here, the input is just plain text. I would have guessed that they would use some sort of computer-visiony things as well, as maybe the Greeks would have written down some stuff in certain ways or in a certain order, but I'm not too educated in ancient Greek, so this might not have been the case after all. What is cool, though, is that the blog post goes into a lot of detail, not only about the system itself and how good it is, which it undoubtedly is, but about how the combination of humans and machines together can outperform either alone. They talk a lot about how to build tools so that historians can effectively interface with the system, and about how it has really accelerated their research. Now, this isn't only good for ancient Greek texts: the more we learn about how we can use AI to accelerate other fields, the better the success rates for all of science. This goes along with an open-access paper in Nature that you can read, the code is online, and you can try it out for yourself. They even have a website with a little demo application where you can try it out yourself. And just in case you happen to have some ancient Greek block lying around with some damage in it, just enter it here and it will predict the restoration. Overall, I think it's a pretty cool trend what DeepMind is doing, interfacing with lots of experts in adjacent and even non-adjacent fields and using AI to come up with accelerations in those fields. I think it's a neat application and it benefits everyone.
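To make the shape of that interface concrete, here is a tiny toy sketch of what an Ithaca-style call could look like. Everything in it is a hypothetical stand-in: the "model" just fills damaged slots with the most frequent letter, while the real system is a transformer that predicts a full distribution per slot, plus region and date heads.

```python
from collections import Counter

# Toy sketch of an Ithaca-style interface (all names hypothetical).
# '-' marks a damaged character; the real model is a transformer trained
# on ancient Greek inscriptions, not this frequency heuristic.
def restore(text: str) -> str:
    # Stub "model": fill every damaged slot with the most frequent letter
    # of the undamaged text. Ithaca predicts a distribution per slot instead.
    filler = Counter(c for c in text if c not in "- ").most_common(1)[0][0]
    return text.replace("-", filler)

def attribute(text: str) -> dict:
    # Ithaca additionally outputs distributions over geographic origin and
    # over date ranges; these numbers are fixed placeholders.
    return {"region": {"Attica": 0.7, "Ionia": 0.3},
            "date": {"440-430 BC": 0.6, "430-420 BC": 0.4}}

damaged = "εδοξεν τηι βουληι και τωι δ-μωι"
print(restore(damaged))   # the stub guesses a letter for δ-μωι
print(attribute(damaged))
```

The point of the sketch is only the in-and-out shape: damaged text in, restored text plus geographic and chronological distributions out.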
The Verge writes: AI suggested 40,000 new possible chemical weapons in just six hours. That is an interview with the author of this commentary here, titled Dual use of artificial-intelligence-powered drug discovery. So what has happened here is that there is a lot of research in drug discovery, and obviously in AI-accelerated drug discovery, and the mission there is to come up with compounds that achieve some sort of effect while also not being toxic. It's a good property to have, not being toxic. What is often done is that there are toxicity data sets, that is, explicitly labeled substances and how toxic they are, and researchers can essentially take those data sets and train an auxiliary classifier that helps their method avoid toxicity. So neural network A will try to come up with new compounds, and then neural network B will reduce the likelihood of the ones that are really toxic. You can imagine it almost like a little regularizer, or a loss component, for the generative model of new compounds. Now, all that these researchers did is simply flip the sign, essentially, in front of that auxiliary classifier. So instead of steering toward new compounds that are less toxic, the new compounds become more toxic. And what's interesting is that they observe that this system will immediately give them lots of substances that have been used for chemical warfare, and also a couple of substances that are more toxic than the nerve agent VX, a very lethal compound: in very, very small doses it paralyzes your lungs and you're dead. So this is quite concerning because of how easy this essentially is to do. If you are a little bit into drug discovery and you can handle a bit of machine learning, this is relatively simple to pull off. The harder part is to actually synthesize those molecules, although that is also not too difficult, as the article alludes. The article is deliberately kept light on detail in order not to just, you know, spell out exactly how to do it, but it is implied that anyone with a bit of knowledge of the topic could go about doing this. And this comes back to what I've been saying for a while. I didn't invent this opinion, but I've always said that any technology can be used for good and for bad, with a few tiny exceptions: the goodness or badness of a technology are almost two sides of the same coin. And this lays it pretty bare. Essentially any method that we have to make AI technologies somehow more beneficial, less toxic, more truthful, more reliable, anything like this, any method like this that is usually hailed: if you just flip a sign on something, if you flip one bit in the objective, you can achieve the exact opposite. There are very few techniques where you cannot directly derive a more quote-unquote evil method from a quote-unquote good method. To me, this raises a set of important questions, and I think it requires us to rethink a little bit how we deal with AI safety and with undesirable consequences of research. But if you have an opinion, let me know in the comments.
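To see just how small the change is, here is a minimal sketch of the setup. Every module is a stand-in (nothing here is the authors' actual model, and nothing is chemistry-specific); the point is that the benign and the malicious objective differ by a single sign.

```python
import torch

generator = torch.nn.Linear(16, 32)     # stand-in for a molecule generator
toxicity_head = torch.nn.Linear(32, 1)  # stand-in for the pretrained toxicity classifier

def loss(z: torch.Tensor, direction: float = -1.0) -> torch.Tensor:
    x = generator(z)                                   # "compound" embedding
    gen_loss = x.pow(2).mean()                         # stand-in generative objective
    toxicity = torch.sigmoid(toxicity_head(x)).mean()  # predicted toxicity in [0, 1]
    # direction = -1.0: toxicity is penalized (the intended, benign setup)
    # direction = +1.0: toxicity is rewarded (the flipped sign described above)
    return gen_loss - direction * toxicity

z = torch.randn(8, 16)
print(loss(z, direction=-1.0))  # benign: steer away from toxic compounds
print(loss(z, direction=+1.0))  # flipped: steer toward toxic compounds
```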
Gary Marcus writes in Nautilus: Deep learning is hitting a wall. This is an essay, an opinion piece essentially, by Gary Marcus, who is a longtime AI researcher, author and public persona. If you've been in AI for a while, you've certainly heard of him. He's usually pitched as a bit of an antagonist to the current paradigm of just doing deep learning and scaling it up big, and this article lays out some of his arguments, but also ends on an optimistic note about the future of deep learning and its combination with symbolic methods. The core story thread of the article is Gary Marcus recalling people like Geoffrey Hinton once being very much in favor of symbolic methods, and of combining symbolic methods with neural networks, let's say back in the day. Symbolic methods, as opposed to continuous or distributed methods, are methods where you can explicitly manipulate discrete symbols. The extreme version of this would be things like logical systems or expert systems. Now, these can get quite complicated, in that you can have symbols which are themselves functions over other symbols, symbols that represent abstract concepts, and very complicated parameterized manipulation of those symbols.
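If you have never touched such a system, here is the flavor in a few lines. This is my own toy illustration, not anything from the article: expressions are plain data structures, and hand-written rewrite rules transform them, with no learning involved anywhere.

```python
# Explicit symbol manipulation: expressions are nested tuples, and
# hand-written rewrite rules simplify them. No parameters, no training.
def simplify(expr):
    if not isinstance(expr, tuple):
        return expr                       # a bare symbol or number
    op, a, b = expr
    a, b = simplify(a), simplify(b)
    if op == "+" and b == 0:              # rule: x + 0 -> x
        return a
    if op == "*" and b == 1:              # rule: x * 1 -> x
        return a
    return (op, a, b)

print(simplify(("+", ("*", "x", 1), 0)))  # -> 'x'
```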
If you go to the other extreme, which is currently very popular, the position is that continuous, distributed representation systems such as deep neural networks will essentially be able to do all of the AI tasks we could possibly want. Proponents of this view would say that if we just keep scaling up systems like GPT-3, then AGI will emerge. Now, what Marcus is ultimately pleading for here is that we need a synthesis of the two methods in order to progress in the field of AI. This in itself, I don't think, is that controversial. People, I think, are well aware that deep learning has some limitations, especially, let's call it, pure deep learning, just scaling up and feeding more data, and obviously some tasks are tackled way better by symbolic methods. However, this article has created quite a stir on social media, with lots of people commenting on it and getting into little fights about it, and I've been trying to understand what's going on. My conclusion is not so much that the content of the article is wrong, or that the call for a synthesis is out of the ordinary. Rather, the framing is such that Marcus tends to be quite critical of the recent advances in distributed systems, so in deep neural networks, while being, in my view, unreasonably bullish on symbolic methods and their appeal. As I said, the storyline follows the development of Geoffrey Hinton, who at one point apparently was more in favor of fusing symbolic methods with neural networks, and who then somehow transitioned into discarding symbolic methods more and more, saying that neural networks will essentially be able to do it all: to do reasoning, to do understanding, et cetera. Now, I think this itself is a bit of a one-sided framing of Hinton's views, but you can definitely see how Hinton is a strong advocate for neural and distributed systems doing these things. And I have various points to make right here. One of the fundamental observations is that we all know that for some tasks we need some kind of symbolic, logical reasoning; it can't all just be done latently, because, well, we observe ourselves, and we ourselves do symbolic logical reasoning. So point one is that even though we do symbolic reasoning, it is implemented in neural hardware. In fact, nowhere in the brain is there an explicit symbolic processor. So all the evidence we have is that even though symbolic manipulation might be going on in the brain, it is emergent from the underlying neurological structure. Now, does that mean we have to go the same route in deep learning, in that we train the neural structure to do the symbolic manipulations? Or does it mean we could take a shortcut and directly implement the symbolic manipulations by themselves? I don't know. I'm just saying the precedent is that everything in the brain, as far as we can see, is implemented using a neural, distributed architecture and not an explicit symbolic one. On the other hand, the brain obviously consists of super-duper specialized parts, all interacting in very sparse and structured manners, and the current deep learning systems that we have are essentially very fully connected, very homogeneous systems, which are also very unlike the brain. So the argument only counts about half. The next thing, and somewhat of an issue I have with symbolicists, or let's call them hybridists, attacking deep learning, is that they tend to be a little too dismissive of the abilities of deep learning. The example that often comes up is something like GPT-3. Now, obviously it's easy to go ahead and criticize GPT-3. It exhibits many failure cases, whether it's that it makes a really bad therapist or that it just invents facts out of thin air. But I don't think there was really a person in the world who wasn't at least a little bit surprised by just how much it can do. Of course, in hindsight you can always say, well, it's just a bigger version of GPT-2; it just kind of recites its training examples. And I agree, it does kind of recite and mash up its training examples; I personally think humans don't do that much more. But there are definitely emergent phenomena, for example the sheer ability to learn in context as well as it does, that emerge purely as a function of scale, and not because we built anything in explicitly. And I think when people are very bullish on neural methods, what they refer to is this emergence of functionality that we previously thought could only be explicitly implemented by a symbolic approach, and that arises if we just scale things up. Now, it is true that our ability to scale things up, especially the exponential scaling that we require for deep learning, has come to a bit of a halt, since it now takes entire giant companies to train one of those things, and it is not clear how we can scale that up 10x, 100x or 1000x more. But that doesn't necessarily dismiss the claim. Marcus also makes criticisms of the sort: if GPT-3 has all these failure modes, then be careful about wanting this in your self-driving car. And I think those miss a little bit what we're going for. GPT-3 is aimed at producing text as if it were found on the internet, and that's what you're getting. If people expect a truthful, factual or helpful answer out of GPT-3, that fundamentally mistakes what it was trained for. Now, if someone sat me in a car and said, this car was trained to drive like human drivers, and we filtered out all the human drivers that got into accidents, and it has really learned well how to replicate the human driving ability, then I'd be quite comfortable, because that's exactly what I want: I want the car to drive like a human would drive. There's much less of a mismatch between what the thing is trained for and what I'm using the thing for, and therefore I think at least half of the criticism leveraged here is not really applicable to something like self-driving cars. The other half is. Likewise, Marcus brings up the NetHack challenge as an example of how deep methods are still way behind symbolic methods, mentioning that in the NetHack challenge the symbolic methods way outperformed the learning methods. By the way, if you don't know, NetHack is this little game that is largely text-based, or at least ASCII-based, and you have to do exploration, long-term reasoning and so on. Now, what I find worth mentioning is that the symbolic methods that actually won are just handcrafted; they are, and I'm sure the neural methods to an extent are too. But the symbolic methods are just bots for the game: they just implement the game, they parse the messages, they list the items they have, they have heuristics for battle, for doing anything, essentially. Everything is hard-coded, as the sketch below caricatures.
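This is my own caricature of the flavor, not the winning entry: a priority list of hard-coded rules over the parsed game state.

```python
# Caricature of a handcrafted NetHack bot: every behavior is an explicit,
# hand-written rule over the parsed game state. Nothing here is learned,
# which is exactly why it doesn't transfer to any other environment.
def act(state: dict) -> str:
    if state["hp"] < 0.3 * state["max_hp"]:
        return "quaff potion" if "potion" in state["items"] else "pray"
    if state["monster_adjacent"]:
        return "attack"
    if state["hungry"] and "food ration" in state["items"]:
        return "eat food ration"
    return "explore"

state = {"hp": 5, "max_hp": 20, "items": ["food ration"],
         "monster_adjacent": False, "hungry": True}
print(act(state))  # -> 'pray'
```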
This is the Boston Dynamics of NetHack. And I think that kind of misses the point of why we're trying to get deep learning to do these types of things: deep learning methods are largely more general methods that we could apply to any sort of environment, and this just happens to be a very defined environment, the NetHack environment, where everything is super bounded and all the inputs are extremely expected and parsable. Deep learning has the potential to be much more generalizable and much more applicable to multiple things at the same time, whereas a bot like this you couldn't even transfer to a similar game. So I think that kind of criticism is a bit weak too. Now, the article by Marcus ends on a high note, saying "for the first time in 40 years, I finally feel some optimism about AI", recounting that after symbolic methods had been almost a little bit frowned upon by the community, they are making a resurgence, and hybrid approaches do seem to be a promising and interesting area for the future. And with that, I agree, and I think the article itself is a cool read. If you are interested in more of Marcus's arguments, and a little bit of the history as he sees it, please give it a read. DeepMind releases GopherCite, a language model that supports its answers with verified quotes. This is a language model that will go out and search for information as you query it, and it will, first of all, base its answers on these citations, but second of all, also be able to actually serve you the citations. Now, this is not the first system of its kind; there have been other attempts at doing this, and this is just the latest iteration. But it is an interesting approach. These language models do tend to hallucinate a bunch of facts, because there's always a conflicting interest between the language-model objective and, let's call it, factual consistency. If you go deeper, it is a mismatch between the model wanting to be grammatical, but also wanting to be good at reciting whatever is in the data, and sometimes that leads to hallucinated facts. This can be drastically reduced if you base whatever you produce on actual citations that exist somewhere. Now, this has advantages and disadvantages. The obvious advantage is that you'll be more accurate on some of these questions, and you'll be able to provide the user directly with the citation that you base your reasoning on. However, there are also things that don't work so well. What they discuss here is an example that asks: what does drinking Red Bull give you? The answer "wings" is counted as wrong because, while there is a citation, obviously drinking Red Bull doesn't give you wings. However, this is the type of argument that I also don't quite buy, because if I go to a human and ask them, you know, what does drinking Red Bull give you, they will either say diabetes or wings. I don't see why we place such a focus on evaluating these language models on factual truthfulness when we query them with questions that really imply not factual truthfulness, but truthfulness according to common lore, or to what advertisement tells us. For all intents and purposes, if a human gave you this answer, you would be happy if that was the question you asked. So these things being brought up as negative examples is kind of shady to me. What I can imagine it also doesn't do that well is give you answers where you need to synthesize multiple passages, multiple citations, although I'm pretty sure you could extend the system to pull in all kinds of citations; maybe it actually already does that. But the main focus really seems to be on going out, finding some citation that actually answers your question, and then serving you that. Another cool thing about these systems is that you don't need to encapsulate all their knowledge in their parameters at training time, so they can potentially even answer questions about topics they've never seen during training, simply by you providing them with more external sources that they can query at inference time. GopherCite was even able to answer questions about itself here. That's very cool.
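As a rough picture of that quote-then-answer pattern, here is a toy pipeline. Naive word overlap stands in for the actual retrieval and language model, and the document store is invented for the Red Bull example above.

```python
# Toy quote-then-answer pipeline in the spirit of GopherCite: retrieve a
# passage, then answer only with a verbatim quote from it, so that every
# claim can be checked against its source.
DOCS = [
    "Red Bull is an energy drink containing caffeine, taurine and sugar.",
    "The slogan claims that Red Bull gives you wings.",
]

def retrieve(question: str) -> str:
    words = set(question.lower().split())
    return max(DOCS, key=lambda d: len(words & set(d.lower().split())))

def answer(question: str) -> dict:
    quote = retrieve(question)
    return {"answer": quote, "verified_quote": quote}

print(answer("what does drinking red bull give you"))
```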
In other news, Mila writes that Professor Yoshua Bengio was appointed Knight of the Legion of Honour by France. This is one of the highest honors that France gives out. Bengio is Canadian, of course, but he fosters a lot of collaboration between France and Canada, and it's really cool to see him honored once more. Speaking of Yoshua Bengio, Meta AI has tweeted out a little clip and a little advertisement for a discussion between Yann LeCun and Yoshua Bengio that was moderated by Lex Fridman. They've tagged all the people on Twitter. Now, Yoshua Bengio is not on Twitter, and, you know, good for him, but they've just gone with the first result that popped up in the search, which is a parody account, Bored Bengio. I don't know why, but I just find this really funny. Please follow Bored Bengio on Twitter; if the account gets enough followers, we can maybe bully the real Bengio into also getting on Twitter. Andrew Mayne released a cool blog post titled Building games and apps entirely through natural language using OpenAI's code davinci model. This is essentially an exploration of OpenAI's Codex model, which can take in natural language and produce code, and Andrew has used it to build various games. It's pretty cool to see: for example, here is a minimal Legend of Zelda that was built using this input right here. That's it. That's the input. There are various other projects, such as a Wordle clone, a Matrix rain effect, tic-tac-toe, an image manipulation tool, and much more. What I find really interesting is that you can't yet really describe the application you want in natural language the way a non-programmer would. You still very much have to speak like a programmer; essentially, you have to write all the comments that go with your code, and the model will simply implement that stuff for you. This might be an artifact of how it's trained, and it could definitely help programmers in the future. However, it also shows we're not quite at the point yet where a non-programmer could sit down and use one of these models to build an application.
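To illustrate what "speaking like a programmer" means here: the prompt you send is essentially a pile of comments, and the model writes the implementation underneath them. The snippet below shows the interaction style; it is my illustration, not actual model output.

```python
# The prompt to a code model is essentially comments like the two below;
# the model continues with an implementation such as the function that follows.

# Draw a 3x3 tic-tac-toe board stored as a list of 9 cells.
# Empty cells are shown as their index so players know what to type.
def draw_board(board):
    cells = [c if c in "XO" else str(i) for i, c in enumerate(board)]
    for row in range(3):
        print(" | ".join(cells[3 * row: 3 * row + 3]))

draw_board(list("X O  O   "))  # 9 characters, one per cell
```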
The you.com search engine has added a little tool called YouWrite that helps you write stuff. You input whatever you want here, and you get out a text. And I thought, the title of this video will just be whatever YouWrite outputs. So we go to the article about the toxic compounds, copy the thing, paste it here; we want a title, our audience is YouTube, we want a tone that is persuasive. Let's go: "AI threatens biological arms race". Why not? Let it be the title. So if you want to try out YouWrite, go to you.com and search for "how to write well". Currently, you.com is in beta, so signups are free for now; I don't know for how long. Horace He has a blog post called Making Deep Learning Go Brrrr From First Principles, and yes, you have to pronounce it like so. The theme of the blog post is that lots of people either have superstitious ideas about how to accelerate deep learning, or they just know some tricks from somewhere, like: oh, just use this function instead of that other function, or in-place operations are better, or non-in-place operations are better. And this blog post goes into detail on how you can think about deep learning performance, by which I mean things going fast and things being efficient, from first principles, by thinking about how compute, memory, and transfer between accelerators and CPUs interact, and so on. It's a pretty good read, and if you're interested, I definitely recommend you check it out. Relatedly, Andrej Karpathy has released a new blog post in which he goes about recreating one famous paper by Yann LeCun from 1989 on handwritten digit recognition with convolutional neural networks. This is also very cool, because Karpathy implements the original model as faithfully as he can decipher from the original paper and tries to reproduce those results. I have to say, he gets pretty close. He then goes ahead and implements all of the things we've learned since about deep learning, about how to tweak architectures and so on, and he's able to bring down the validation loss by quite a bit: in the end, I think, over a 60% reduction in validation error, by applying all of the newer techniques and finally also scaling up the data set a bit. He draws some conclusions and ends with a bit of an outlook: instead of looking 30 years into the past, he looks 30 years into the future, trying to extrapolate a little bit what the world of deep learning and AI might look like then, looking back to now. It's a pretty cool read and a pretty cool project. Definitely recommend you check it out.
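For reference, a rough modern PyTorch rendering of a network at that 1989 scale looks like the sketch below. The layer sizes are approximate, and the original's sparse connectivity is not reproduced (plain Conv2d is fully connected across channels), so see Karpathy's post for the faithful version; the parameter count lands near the original's roughly ten thousand.

```python
import torch
import torch.nn as nn

# Approximate modern rendering of the 1989 LeCun digit net: 16x16 input,
# two stride-2 conv layers with tanh, a small hidden layer, 10 outputs.
net = nn.Sequential(
    nn.Conv2d(1, 12, kernel_size=5, stride=2, padding=2), nn.Tanh(),   # 16x16 -> 8x8
    nn.Conv2d(12, 12, kernel_size=5, stride=2, padding=2), nn.Tanh(),  # 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(12 * 4 * 4, 30), nn.Tanh(),
    nn.Linear(30, 10),
)

x = torch.randn(1, 1, 16, 16)  # one fake 16x16 grayscale digit
print(net(x).shape)            # torch.Size([1, 10])
print(sum(p.numel() for p in net.parameters()), "parameters")  # ~10k
```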
The University of Copenhagen has a press release about their paper Pig grunts reveal their emotions, about a system built on a data set of pig grunts annotated with whether the pigs are happy, surprised or anxious, which is then used to train a classifier for these things. All in all, this is a pretty cool application of deep learning. And it turns out short grunts are happy grunts. Who knew? I guess farmers knew all along, but, you know, who knew? The Google AI blog has a post about using deep learning to annotate the protein universe. Now, whereas systems like AlphaFold have generated a lot of buzz, there are a lot of different tasks in the macromolecule, or more specifically the protein, area of biology. The one tackled here is the question of what kind of function a protein has, and which domains within the protein exhibit those functions. So the paper is about recent advances by Google in building systems that annotate such sequences and proteins with their respective functions, pushing the state of the art by quite a bit. Interestingly enough, for that they use dilated convolutional networks, and they emphasize that a big part of making this research successful is to actually also care about the implementation and the architecture. But a big part is also data set preparation and really validating your approach, really making sure that what you do is effective and valid. It's a pretty cool read, and along with it goes a larger, interactive website post, a little bit like a Distill article, that contains some hands-on demonstrations where you can learn about the architecture, learn about the results, and explore a little bit by yourself. Jeff Atwood and John Carmack have made a bet. The bet is whether or not, by January 1, 2030, completely autonomous self-driving cars meeting the Level 5 fully-self-driving specification will be commercially available for passenger use in major cities. In this instance, John Carmack is for and Jeff Atwood is against. Now, I have to say, 2030 isn't that far away, and as Jeff Atwood points out, fully self-driving is a really hard problem. However, as other people point out, in some major cities you're already able to call something like a robo-taxi, which doesn't seem to be too far away from what's needed. But that might just appear so, because, again, there's a big gap between driving in controlled conditions, on terrain and roads that you know, where you have exact specifications of everything, and being able to handle most situations that a human driver would encounter, anywhere, at all times. That's a big difference. I'm not sure how this bet is going to turn out; that's why it's interesting. But I'm interested to hear your opinions in the comments. Alright, lastly, we'll get to some helpful things. Helpful things for this week: Rubrix is an open-source platform for data-centric NLP, mostly concerned with managing and annotating text data. Kubric is a scalable dataset generator for video and 3D data. Composer is a PyTorch library for efficient neural network training; it implements a lot of the recent advances in training speed-ups and gives you reproducible and accessible baselines for implementing your own very speedy training loops. MuJoCo is a physics simulation library, but I guess you already knew that. However, as we've reported, DeepMind essentially bought MuJoCo and is releasing it open source, and now they've implemented Python bindings, so you're just able to do pip install mujoco. We've been waiting for this for decades. Thank you. MCTX is Monte Carlo tree search in JAX. PADL, standing for Pipeline Abstractions for Deep Learning, is a deep learning library that, in its own words, makes working with deep learning models intuitive, simple and fun, and it is entirely compatible with the PyTorch and scientific Python ecosystem. did-it-spill is a library for PyTorch that checks whether any of your test samples were in the training set. Speaking of PyTorch: PyTorch released version 1.11 with the addition of TorchData and functorch. These things have been brewing for a while, and it's pretty cool to see them added to the library. TorchData is a library of functions that make it really easy to do various dataset loading, composing and transforming operations directly in the data-loading pipeline, whereas functorch is a library that adds composable function transforms to PyTorch, a little bit in the flavor of JAX. So definitely check out both. Alright, that was already it for the helpful things and for ML News. This episode is already way too long. Thank you for sticking around. Check out GTC, use the link, sign up, win some merch or a 3090, and I'll see you around. Thank you, bye bye.
[ { "start": 0, "end": 6.5600000000000005, "text": " DeepMind uses deep learning to restore ancient Greek texts. A drug discovery system has been" }, { "start": 6.5600000000000005, "end": 12.32, "text": " abused to create thousands and thousands of super toxic compounds. And Gary Marcus claims" }, { "start": 12.32, "end": 16.32, "text": " deep learning is hitting a wall. Welcome to ML News. It's Monday." }, { "start": 21.28, "end": 28.16, "text": " GTC conference goes into its next iteration. Now GTC is a company conference like all of the big" }, { "start": 28.16, "end": 33.12, "text": " companies, they present all of their newest stuff there. But they also have a host of external" }, { "start": 33.12, "end": 38.56, "text": " speakers and all kinds of people that just give education and talks about how they use deep learning" }, { "start": 38.56, "end": 44.56, "text": " for various things. Now all of it is obviously Nvidia themed. But I can promise you the talks" }, { "start": 44.56, "end": 49.44, "text": " are interesting by themselves as well. The highlight of the conference is obviously the" }, { "start": 49.44, "end": 54.32, "text": " keynote by Jensen Huang. And depending on when you're watching this video, the conference is" }, { "start": 54.32, "end": 60.96, "text": " going on probably right now. And the best part is if you use my link, that's by culture.com slash" }, { "start": 60.96, "end": 68.48, "text": " GTC and you use that to sign up for the conference, you can win a 3090 that has been hand signed by" }, { "start": 68.48, "end": 74.64, "text": " Jensen Huang. So the same person that is giving the keynote will have signed your GPU if you win" }, { "start": 74.64, "end": 79.36, "text": " it. Now this is pretty cool. All you have to do is sign up using my link and then attend at least" }, { "start": 79.36, "end": 84.8, "text": " one session and why not attend the keynote. The keynote will go into all of the upcoming things" }, { "start": 84.8, "end": 90.08, "text": " of Nvidia. For example, is there going to be something like a 4090? How does it look like?" }, { "start": 90.08, "end": 95.36, "text": " Why do they increase the first digit of 3090 and not just make it the 3091? All the biggest" }, { "start": 95.36, "end": 100.32, "text": " questions of humanity. Now other than new architectures coming up, there will also be a lot" }, { "start": 100.32, "end": 106.56, "text": " of talks on the topics of accelerated computing, autonomous driving, anything to do with computer" }, { "start": 106.56, "end": 113.28, "text": " vision, rendering, cybersecurity. Nvidia hardware now powers almost all of deep learning advances" }, { "start": 113.28, "end": 118.4, "text": " apart from some specialized vendors. So this is definitely a good place to look. Another thing I" }, { "start": 118.4, "end": 124.80000000000001, "text": " want to highlight is the Nvidia Omniverse platform, which is a high performance and really good" }, { "start": 124.80000000000001, "end": 130.64000000000001, "text": " simulation, physics and rendering engine. This includes Pixar's universal scene description" }, { "start": 130.64, "end": 136.88, "text": " technology and can be used to do accurate renderings. And since synthetic data is such a big" }, { "start": 136.88, "end": 142.64, "text": " deal in recent times, this could really be something to accelerate your research if you are into" }, { "start": 142.64, "end": 147.51999999999998, "text": " simulated data transferring to the real world. 
It's pretty cool and a lot of things can be done" }, { "start": 147.51999999999998, "end": 153.6, "text": " with it. And no, the Omniverse isn't the metaverse per se, but there is a session that you can attend" }, { "start": 153.6, "end": 160.32, "text": " in GTC that talks about the metaverse and how to build virtual connected worlds. And as you can see," }, { "start": 160.32, "end": 166.23999999999998, "text": " one of the speakers is the VP of Omniverse. So in the end, everything's somehow going to be together." }, { "start": 166.23999999999998, "end": 172.07999999999998, "text": " There are even sessions called connect with the experts where you get one on one time with experts" }, { "start": 172.07999999999998, "end": 177.51999999999998, "text": " in a certain area, for example, GPU performance analysis and optimization. This is first come," }, { "start": 177.51999999999998, "end": 183.68, "text": " first serve. So area, as I said, besides the keynote, there is an entire plethora of sessions" }, { "start": 183.68, "end": 189.92, "text": " that you can attend. These go from building large language models to next generation rendering," }, { "start": 189.92, "end": 196.39999999999998, "text": " to using AI for cybersecurity, or understanding how newest technologies can help your business." }, { "start": 196.39999999999998, "end": 201.67999999999998, "text": " There's also more specialized tracks such as focuses on health care, autonomous driving," }, { "start": 201.67999999999998, "end": 208, "text": " and other areas. Registration is free and you can put together your own little calendar that reminds" }, { "start": 208, "end": 213.44, "text": " you whenever these sessions are coming up. Again, use my link to sign up in order to win a 3090." }, { "start": 213.44, "end": 219.2, "text": " There's one caveat you need to be in EMEA, which is Europe, Middle East or Africa, in order to" }, { "start": 219.2, "end": 225.35999999999999, "text": " qualify for the 3090 raffle. However, I've decided that anyone living outside of these areas can also" }, { "start": 225.35999999999999, "end": 232, "text": " participate in another raffle that I sponsor. And that will just give you some some merch." }, { "start": 232, "end": 237.83999999999997, "text": " So inside EMEA, you can participate for the 3090 outside EMEA, you can participate for the merge." }, { "start": 237.83999999999997, "end": 242.16, "text": " Now, if you are in either bucket, and you want to be in the other bucket, I'm sure we're going to" }, { "start": 242.16, "end": 248.07999999999998, "text": " do stuff in the future where you can win to your heart's content. But for now, this seems the most" }, { "start": 248.08, "end": 254.56, "text": " fairest allocation of resources. And remember, you have to attend a session in GTC in order to qualify" }, { "start": 254.56, "end": 262.96000000000004, "text": " for the 3090. DeepMind has released a new blog post called predicting the past with Ithaca. Now," }, { "start": 262.96000000000004, "end": 269.76, "text": " this is a system that restores ancient texts, namely ancient texts from the Greeks. So throughout" }, { "start": 269.76, "end": 275.28000000000003, "text": " the years, a lot of these inscriptions in stone have gone missing, have been damaged. And therefore," }, { "start": 275.28, "end": 280.23999999999995, "text": " historians, they need to tease out what things could mean. 
Now, this is obviously a good" }, { "start": 280.23999999999995, "end": 286.23999999999995, "text": " application for something like a language model. So what Ithaca does is it takes in whatever is" }, { "start": 286.23999999999995, "end": 292.15999999999997, "text": " undamaged, and a few hints of where it needs to fill in missing characters. And it tries to" }, { "start": 292.15999999999997, "end": 298.15999999999997, "text": " reconstruct these things. Not only will it give an output that restores the missing pieces of text," }, { "start": 298.15999999999997, "end": 303.67999999999995, "text": " but it will also determine a probability distribution over the geographical origins" }, { "start": 303.68, "end": 309.28000000000003, "text": " of this piece of text, as well as a chronological attribution, meaning it will estimate when the" }, { "start": 309.28000000000003, "end": 314.48, "text": " text was written. Now, it's interesting to me, as you can see right here, the input is just plain" }, { "start": 314.48, "end": 320.88, "text": " text, I would have guessed that they would use some sort of computer visiony things as well," }, { "start": 320.88, "end": 327.44, "text": " as maybe the Greeks would have written down some stuff in certain ways in certain order, but I'm" }, { "start": 327.44, "end": 333.12, "text": " not too educated in ancient Greek. So this might not have been the case after all. What is cool," }, { "start": 333.12, "end": 338.48, "text": " though, is that the blog post goes into a lot of detail, not only about the system itself," }, { "start": 338.48, "end": 344.96, "text": " and how good it is, which it undoubtedly is, but how the combination of humans and machines together" }, { "start": 344.96, "end": 351.52, "text": " can outperform anyone alone. They talk a lot about how to build tools in order for historians to be" }, { "start": 351.52, "end": 356.56, "text": " able to effectively interface with the system, and that it has really accelerated their research. Now," }, { "start": 356.56, "end": 362.4, "text": " this isn't only good for ancient Greek texts, but the more we learn and how we can use AI in order" }, { "start": 362.4, "end": 368.08, "text": " to accelerate other fields, I think the better the success rates for all of science. This goes" }, { "start": 368.08, "end": 374.56, "text": " along with an open access paper in nature that you can read, the code is online, you can try it out" }, { "start": 374.56, "end": 380.56, "text": " for yourself. And they even have a website with a little demo application, or you can try it out" }, { "start": 380.56, "end": 386.56, "text": " yourself. And just in case you happen to have some ancient Greek block laying around with some damages" }, { "start": 386.56, "end": 391.59999999999997, "text": " in it, just enter it here, it will it will do it, it will predict it. Overall, I think it's a pretty" }, { "start": 391.6, "end": 398.16, "text": " cool trend what DeepMind is doing interfacing with lots of experts in adjacent and even non adjacent" }, { "start": 398.16, "end": 403.52000000000004, "text": " fields and using AI in order to come up with accelerations in those fields. I think it's a" }, { "start": 403.52000000000004, "end": 412, "text": " neat application and it benefits everyone. The verge writes AI suggested 40,000 new possible" }, { "start": 412, "end": 418.56, "text": " chemical weapons in just six hours. 
That is an interview with the author of this commentary" }, { "start": 418.56, "end": 423.44, "text": " here. It is called dual use of artificial intelligence powered drug discovery. So what" }, { "start": 423.44, "end": 428, "text": " has happened here is that there is a lot of research in drug discovery and AI accelerated" }, { "start": 428, "end": 432.56, "text": " drug discovery, obviously, and the mission there is to come up with compounds that achieve some" }, { "start": 432.56, "end": 438.16, "text": " sort of an effect while also not being toxic. It's a good property to have not being toxic." }, { "start": 438.16, "end": 444.88, "text": " And what often is done is that there are toxicity data sets, so explicitly labeled substances and" }, { "start": 444.88, "end": 449.44, "text": " how toxic they are. And what those people can do is they can essentially take those data sets" }, { "start": 449.44, "end": 456.08, "text": " and train a classifier, an auxiliary classifier that helps their method avoid toxicity. So neural" }, { "start": 456.08, "end": 461.28, "text": " network A will try to come up with new compounds. And then neural network B would just reduce the" }, { "start": 461.28, "end": 466.24, "text": " likelihood of the ones that are really toxic. So you can imagine almost like a little bit of" }, { "start": 466.24, "end": 472.24, "text": " a regularizer or a loss component for the generative model of new compounds. Now all that these" }, { "start": 472.24, "end": 478.96000000000004, "text": " researchers did is simply flip the sign essentially in front of that auxiliary classifier. So instead" }, { "start": 478.96000000000004, "end": 484.72, "text": " of coming up with new compounds that go less toxic, these new compounds go more toxic. And what's" }, { "start": 484.72, "end": 490.8, "text": " interesting is that they observe that this system will immediately give them lots of substances that" }, { "start": 490.8, "end": 496.24, "text": " have been used for doing chemical warfare. And also a couple of instances of substances that are" }, { "start": 496.24, "end": 503.76, "text": " more toxic than the nerve agent VX, which is very lethal compound in very, very small doses," }, { "start": 503.76, "end": 510.8, "text": " it paralyzes your lungs and you dead. So this is quite concerning because of the easiness of how" }, { "start": 510.8, "end": 516.16, "text": " that is to do essentially, if you are a little bit into drug discovery, and you can handle a bit of" }, { "start": 516.16, "end": 522.32, "text": " machine learning, this is relatively simple to do. The more hard part here is to actually synthesize" }, { "start": 522.32, "end": 528, "text": " those molecules, although that is also not too difficult as the article alludes. The article is" }, { "start": 528, "end": 534.5600000000001, "text": " necessarily kept not very detailed in order to not just, you know, throw out exactly how to do it." }, { "start": 534.5600000000001, "end": 540.24, "text": " But it is implied that anyone with a bit of knowledge of the topic could go about doing this." }, { "start": 540.24, "end": 545.6, "text": " And this comes back to what I've been saying for a while, I didn't invent this opinion, but I was" }, { "start": 545.6, "end": 552.08, "text": " always saying that any technology can be used for good and for bad with like a few tiny pieces of" }, { "start": 552.08, "end": 558.88, "text": " exception, the goodness or badness of the technology is almost two sides of the same coin. 
And this lays" }, { "start": 558.88, "end": 565.6, "text": " it pretty bare essentially any method that we have to make AI technologies somehow more beneficial," }, { "start": 565.6, "end": 572.64, "text": " less toxic, more truthful, more reliable, anything like this, any method like this that is usually" }, { "start": 572.64, "end": 578.48, "text": " hailed. If you usually just flip a sign on something you flip one bit in the objective," }, { "start": 578.48, "end": 583.84, "text": " you can achieve the exact opposite. There are very few techniques where you cannot directly derive" }, { "start": 583.84, "end": 590.48, "text": " a more quote unquote evil method from a quote unquote good method. Now to me, I think just raises" }, { "start": 590.48, "end": 596.5600000000001, "text": " a set of important questions. And I think it requires us to rethink a little bit how we deal" }, { "start": 596.5600000000001, "end": 601.9200000000001, "text": " with AI safety and with undesirable consequences of research. But if you have an opinion, let me" }, { "start": 601.92, "end": 609.68, "text": " know in the comments. Gary Marcus writes in Nautilus, deep learning is hitting a wall. This is an essay," }, { "start": 609.68, "end": 616.24, "text": " an opinion piece essentially by Gary Marcus, who is a longtime AI researcher and author and public" }, { "start": 616.24, "end": 621.92, "text": " persona. If you've been in AI for a while, you've certainly heard of him. He's usually pitched as a" }, { "start": 621.92, "end": 628.56, "text": " little bit of an antagonist to the current paradigm of just do deep learning and scale it up big. And" }, { "start": 628.56, "end": 633.76, "text": " this article right here lays out some of his arguments, but also ends on an optimistic note" }, { "start": 633.76, "end": 639.68, "text": " of the future of deep learning and its combination with symbolic methods. The core story thread of" }, { "start": 639.68, "end": 647.76, "text": " the article is Gary Marcus recalling people like Jeffrey Hinton being very pro symbolic methods and" }, { "start": 647.76, "end": 653.68, "text": " combining symbolic methods with neural networks, let's say back in the day. So symbolic methods" }, { "start": 653.68, "end": 660.7199999999999, "text": " contrary to continuous or distributed methods would be methods where you can explicitly manipulate" }, { "start": 660.7199999999999, "end": 668, "text": " discrete symbols. The extreme version of this would be things like logical systems or expert systems." }, { "start": 668, "end": 672.4, "text": " Now these can get quite complicated in that you can have symbols which themselves are functions" }, { "start": 672.4, "end": 678.0799999999999, "text": " over other symbols, symbols that represent abstract concepts and very complicated parameterized" }, { "start": 678.0799999999999, "end": 683.4399999999999, "text": " manipulation of those symbols. If you go to the other extreme, which is currently very popular," }, { "start": 683.44, "end": 689.6800000000001, "text": " it is that essentially continuous distributed representation systems such as deep neural" }, { "start": 689.6800000000001, "end": 695.5200000000001, "text": " networks will be able to do all of the AI tasks that we could possibly want. Proponents of this" }, { "start": 695.5200000000001, "end": 702.6400000000001, "text": " view would say that if we just keep scaling up systems like GPT-3 or so, then AGI will emerge." 
}, { "start": 702.6400000000001, "end": 708.32, "text": " Now what Marcus is pleading for here ultimately is that we need a synthesis of the two methods" }, { "start": 708.32, "end": 715.0400000000001, "text": " in order to progress in the field of AI. Now this in itself, I don't think is that controversial." }, { "start": 715.0400000000001, "end": 719.44, "text": " People I think are well aware that deep learning has some limitations, especially let's call it" }, { "start": 719.44, "end": 724.72, "text": " pure deep learning, just scaling up and feeding more data. And obviously some tasks are tackled" }, { "start": 724.72, "end": 730.48, "text": " way better by symbolic methods. However, this article has created quite a stir on social media," }, { "start": 730.48, "end": 735.36, "text": " lots of people commenting on it getting into a little bit of fights about it. And I've been" }, { "start": 735.36, "end": 740.72, "text": " trying to understand what's going on right here. So my conclusions are not as much as the content" }, { "start": 740.72, "end": 746.5600000000001, "text": " of the article is necessarily wrong, or the conclusions that we need the synthesis is out" }, { "start": 746.5600000000001, "end": 751.6800000000001, "text": " of the ordinary. However, the framing is such that Marcus tends to be quite critical of the" }, { "start": 751.6800000000001, "end": 758.4, "text": " recent advances in the distributed system. So in the deep neural networks, and what I think is" }, { "start": 758.4, "end": 765.92, "text": " unreasonably bullish on symbolic methods and their appeals. Now, as I said, the storyline goes very" }, { "start": 765.92, "end": 773.12, "text": " much with the development of Jeff Hinton, who at one point, apparently has been more pro fusing" }, { "start": 773.12, "end": 779.28, "text": " symbolic methods with neural networks, and then somehow has transitioned into discarding symbolic" }, { "start": 779.28, "end": 785.6, "text": " methods more and more saying that neural networks will essentially be able to do it all to do" }, { "start": 785.6, "end": 792.5600000000001, "text": " reasoning to do understanding, etc. Now, I think this itself is a little bit also of a one sided" }, { "start": 792.5600000000001, "end": 798, "text": " framing of Jeff Hinton's views. But you can definitely see how Jeff Hinton is a strong" }, { "start": 798, "end": 803.84, "text": " advocate for neural systems and for distributed systems doing these things. And I have various" }, { "start": 803.84, "end": 809.12, "text": " points to make right here. I think one of the fundamental questions is that obviously we all" }, { "start": 809.12, "end": 814.72, "text": " know that for some tasks, we need some kind of symbolic logical reasoning, it can't just all" }, { "start": 814.72, "end": 822, "text": " be done like latently and so on because well, we observe ourselves and we ourselves do symbolic" }, { "start": 822, "end": 828.64, "text": " logic reasoning. So point one is that even though we do symbolic reasoning, it is implemented in" }, { "start": 828.64, "end": 835.2, "text": " neural hardware. In fact, nowhere in the brain is there an explicit symbolic processor. So all the" }, { "start": 835.2, "end": 841.36, "text": " evidence we have is that even though symbolic manipulation might be going on in the brain," }, { "start": 841.36, "end": 847.12, "text": " it is emergent from the underlying neurological structure. 
Now does that mean we have to go the" }, { "start": 847.12, "end": 852.48, "text": " same route in deep learning in that we train the neurological structure to do the symbolic" }, { "start": 852.48, "end": 857.6800000000001, "text": " manipulations? Or does it mean we could take a shortcut and directly implement the symbolic" }, { "start": 857.6800000000001, "end": 863.6, "text": " manipulations by itself? I don't know. I'm just saying the precedent is that everything in the" }, { "start": 863.6, "end": 870.16, "text": " brain as far as we see is implemented using a neural distributed architecture and not an" }, { "start": 870.16, "end": 876.0799999999999, "text": " explicit symbolic one. On the other hand, the brain obviously consists of super duper specialized" }, { "start": 876.0799999999999, "end": 881.4399999999999, "text": " parts, all interacting in very sparse and structured manners. And the current deep learning" }, { "start": 881.4399999999999, "end": 887.1999999999999, "text": " systems that we have are essentially very fully connected, very homogeneous systems, which are" }, { "start": 887.1999999999999, "end": 893.4399999999999, "text": " also very unlike the brain. So the argument only counts about half. The next thing is and somewhat" }, { "start": 893.44, "end": 900.8000000000001, "text": " of an issue I have with symbolicists or let's call it hybridists attacking deep learning in that they" }, { "start": 900.8000000000001, "end": 906.1600000000001, "text": " tend to be a little bit too dismissive of the abilities of deep learning. And the example that" }, { "start": 906.1600000000001, "end": 911.36, "text": " often comes up is something like GPT-3. Now, obviously, it's easy to go ahead and criticize" }, { "start": 911.36, "end": 917.2, "text": " GPT-3. It exhibits many failure cases, whether it represents a really bad therapist, or it just" }, { "start": 917.2, "end": 922.8000000000001, "text": " invents facts out of thin air. But I think there wasn't really a person in the world that wasn't" }, { "start": 922.8, "end": 928.16, "text": " a little bit at least surprised by just how much it can do. Like, of course, in hindsight, you can" }, { "start": 928.16, "end": 934.16, "text": " always say, well, it's just a bigger version of GPT-2. Well, it just kind of recites its training" }, { "start": 934.16, "end": 939.68, "text": " examples. And I agree, it does it kind of recites and moshes its training examples. I personally" }, { "start": 939.68, "end": 945.4399999999999, "text": " think humans don't do that much more. But there are definitely emergent phenomena, for example," }, { "start": 945.4399999999999, "end": 952.0799999999999, "text": " the sheer ability to in context learn as well as it does, that emerge just purely out of a function" }, { "start": 952.08, "end": 957.76, "text": " of the scale, and not because we built anything explicitly in. And I think when people are very" }, { "start": 957.76, "end": 964.1600000000001, "text": " bullish on neural methods, what they refer to is this ability, this emergence of functionality that" }, { "start": 964.1600000000001, "end": 971.44, "text": " we previously thought could only be explicitly implemented by a symbolic approach. And that just" }, { "start": 971.44, "end": 977.36, "text": " arise if we scale things up. 
Now, it is true, our ability to scale things up, especially the" }, { "start": 977.36, "end": 983.36, "text": " exponential scaling that we require for deep learning, has come to a little bit of a stop, since" }, { "start": 983.36, "end": 988.96, "text": " now it takes entire giant companies to implement one of those things. And it is not clear how we" }, { "start": 988.96, "end": 994.88, "text": " can scale that up 10x, 100x, or 1000x more. But that doesn't necessarily dismiss the claim." }, { "start": 994.88, "end": 1001.76, "text": " Marcus also criticizes things like, if GPT-3 has all these failure modes, then, you know, be careful" }, { "start": 1001.76, "end": 1006.8000000000001, "text": " about wanting this in your self driving car. And I think those miss a little bit what we're going" }, { "start": 1006.8, "end": 1012.56, "text": " for. GPT-3 is aimed to produce text as if it were found on the internet. And that's what you're" }, { "start": 1012.56, "end": 1018.4799999999999, "text": " getting. If people expect to get a truthful or factual or helpful answer out of GPT-3," }, { "start": 1018.4799999999999, "end": 1024.56, "text": " that fundamentally misses what it was trained for. Now, if someone sat me in a car and said," }, { "start": 1024.56, "end": 1030.8, "text": " this car was trained on driving like human drivers, and we filtered out all the human" }, { "start": 1030.8, "end": 1036.1599999999999, "text": " drivers that got into accidents, and it has really learned well how to replicate the human" }, { "start": 1036.16, "end": 1041.68, "text": " driving ability, then I'd be quite comfortable, because that's exactly what I want. I want the" }, { "start": 1041.68, "end": 1047.6000000000001, "text": " car to drive like a human would drive. So there's much less of a mismatch between what the thing is" }, { "start": 1047.6000000000001, "end": 1053.2, "text": " trained for and what I'm using the thing for. And therefore, I think at least half of the" }, { "start": 1053.2, "end": 1058.96, "text": " criticism leveled here is not really applicable to something like self driving cars. The other" }, { "start": 1058.96, "end": 1065.8400000000001, "text": " half is. And likewise, Marcus brings up the NetHack challenge right here as an example for how" }, { "start": 1065.84, "end": 1071.04, "text": " deep methods are still way behind symbolic methods, mentioning that in the NetHack challenge," }, { "start": 1071.04, "end": 1076.56, "text": " the symbolic methods way outperformed the learning methods. By the way, if you don't know, NetHack is" }, { "start": 1076.56, "end": 1082.32, "text": " this little game that is largely text based, or at least ASCII based, and you have to do exploration," }, { "start": 1082.32, "end": 1087.52, "text": " you have to do long term reasoning and so on. Now what I find a little bit worth mentioning is that" }, { "start": 1087.52, "end": 1093.52, "text": " the symbolic methods that actually won are just handcrafted. They are, and I'm sure the neural" }, { "start": 1093.52, "end": 1099.2, "text": " methods to an extent are too. But the symbolic methods are just bots for the game, they just" }, { "start": 1099.2, "end": 1106, "text": " implement the game, they parse the messages, they list the items they have, they have heuristics for" }, { "start": 1106, "end": 1112.24, "text": " battle, for doing anything, essentially everything is hard coded. This is the Boston Dynamics of" }, { "start": 1112.24, "end": 1117.04, "text": " NetHack.
And I think that kind of misses the point of why we're trying to get deep learning to do" }, { "start": 1117.04, "end": 1122.08, "text": " these types of things. Because deep learning, they are largely more general methods that we could" }, { "start": 1122.08, "end": 1128.08, "text": " apply to any sort of environment. And this just happens to be like a very defined environment," }, { "start": 1128.08, "end": 1133.1999999999998, "text": " the NetHack environment, where everything is super bounded and all the inputs are extremely" }, { "start": 1133.1999999999998, "end": 1139.28, "text": " expected and parsable. Yet deep learning has the potential to be much more generalizable and much" }, { "start": 1139.28, "end": 1145.76, "text": " more applicable to multiple things at the same time. Whereas a bot like this, you can't transfer" }, { "start": 1145.76, "end": 1151.1999999999998, "text": " to even a similar game. So I think that kind of criticism is a bit weak too. Now the article by" }, { "start": 1151.2, "end": 1156.24, "text": " Marcus ends on a high note, saying for the first time in 40 years, I finally feel some optimism" }, { "start": 1156.24, "end": 1162.64, "text": " about AI, recounting that after the symbolic methods had been almost a little bit frowned upon" }, { "start": 1162.64, "end": 1168.24, "text": " by the community, they do make a resurgence, and hybrid approaches do seem to be a promising," }, { "start": 1168.24, "end": 1174.48, "text": " interesting area for the future. And with that, I agree. And I think the article itself is a cool" }, { "start": 1174.48, "end": 1179.28, "text": " read. If you are interested more in Marcus's arguments, and a little bit of the history as" }, { "start": 1179.28, "end": 1186.6399999999999, "text": " he sees it, please give it a read. DeepMind releases GopherCite, which is a language model" }, { "start": 1186.6399999999999, "end": 1192.56, "text": " that supports its answers with verified quotes. This is a language model that will go out and" }, { "start": 1192.56, "end": 1199.52, "text": " search for information as you query it. And it will first of all base its answers on these citations," }, { "start": 1199.52, "end": 1204.8799999999999, "text": " but second of all, also be able to actually serve you the citations. Now this is not the first system" }, { "start": 1204.88, "end": 1210.88, "text": " of its kind. There have been other attempts at doing this, and this is just the newest iteration." }, { "start": 1210.88, "end": 1215.92, "text": " But it is an interesting approach. These language models, they do tend to hallucinate a bunch of" }, { "start": 1215.92, "end": 1220.72, "text": " facts, because there's always a conflicting interest between the language model objective" }, { "start": 1220.72, "end": 1227.2, "text": " and, let's call it, factual consistency. And if you go deeper, that is a mismatch between" }, { "start": 1227.2, "end": 1234.64, "text": " the model wanting to be grammatical, but also kind of good at reciting whatever is in the data. And" }, { "start": 1234.64, "end": 1240.64, "text": " so sometimes that leads to hallucinated facts. And this can be drastically reduced if you base" }, { "start": 1240.64, "end": 1246.0800000000002, "text": " whatever you produce on actual citations that exist somewhere. Now this has advantages and" }, { "start": 1246.0800000000002, "end": 1250.88, "text": " disadvantages.
Obviously, the advantages: you'll be more accurate on some of these questions," }, { "start": 1251.44, "end": 1257.1200000000001, "text": " you'll be able to provide the user directly with the citation that you base your reasoning on." }, { "start": 1257.1200000000001, "end": 1261.92, "text": " However, there are also things that don't work so well. What they discuss here is an example that" }, { "start": 1261.92, "end": 1268.96, "text": " says, what does drinking Red Bull give you? And the answer being wings is wrong, because there is a" }, { "start": 1268.96, "end": 1274.3200000000002, "text": " citation, but obviously drinking Red Bull doesn't give you wings. However, this is the type of" }, { "start": 1274.3200000000002, "end": 1280, "text": " argument that I also don't quite buy. Because if I go to a human and I ask them, you know," }, { "start": 1280, "end": 1286.88, "text": " what does drinking Red Bull give you, they will either say diabetes or wings. I don't see why" }, { "start": 1286.88, "end": 1293.5200000000002, "text": " we place such a focus on evaluating these language models on, like, factual truthfulness, when we query" }, { "start": 1293.5200000000002, "end": 1300.24, "text": " them with questions that really imply not a factual truthfulness, but sort of the truthfulness" }, { "start": 1300.24, "end": 1306.24, "text": " according to common lore, or what advertisement tells us. I mean, for all intents and purposes," }, { "start": 1306.24, "end": 1311.44, "text": " if a human gave you this answer, you would be happy if that was the question that you asked." }, { "start": 1311.44, "end": 1316.72, "text": " So these things being brought up as negative examples are kind of shady to me. What I can" }, { "start": 1316.72, "end": 1323.44, "text": " imagine it also doesn't do that well is give you answers where you need to synthesize multiple" }, { "start": 1323.44, "end": 1328.56, "text": " passages, multiple pieces of citations, although I'm pretty sure you could extend the system to" }, { "start": 1328.56, "end": 1334.56, "text": " pull all kinds of citations, maybe it actually already does that. But the main focus really seems to be on" }, { "start": 1334.56, "end": 1339.2, "text": " going out, finding some citations that actually answer your questions, and then giving you that." }, { "start": 1339.2, "end": 1343.68, "text": " Another cool thing about these systems is that you don't need to encapsulate all their knowledge" }, { "start": 1343.68, "end": 1349.2, "text": " into their parameters at training time. So they can potentially even answer questions about topics" }, { "start": 1349.2, "end": 1354.4, "text": " they've never seen during training, simply by you providing them with more external sources that they" }, { "start": 1354.4, "end": 1361.44, "text": " can query at inference time. So GopherCite was here able to answer questions about itself. So" }, { "start": 1361.44, "end": 1370.0800000000002, "text": " that's very cool. In other news, Mila writes that Professor Yoshua Bengio was appointed Knight of the" }, { "start": 1370.08, "end": 1375.4399999999998, "text": " Legion of Honor by France. This is one of the highest honors that France gives out. Obviously," }, { "start": 1375.4399999999998, "end": 1381.04, "text": " Bengio is Canadian, but he fosters a lot of collaboration between France and Canada. And" }, { "start": 1381.04, "end": 1387.6799999999998, "text": " it's really cool to see him honored once more.
Speaking of Yoshua Bengio, Meta AI has tweeted out" }, { "start": 1387.6799999999998, "end": 1393.6799999999998, "text": " a little clip and a little advertisement for a discussion that was moderated by Lex Fridman" }, { "start": 1393.6799999999998, "end": 1399.9199999999998, "text": " between Yann LeCun and Yoshua Bengio. They've tagged all the people on Twitter. Now, Yoshua Bengio" }, { "start": 1399.92, "end": 1406.24, "text": " is not on Twitter. And you know, good for him. But they've just gone with the first result that" }, { "start": 1406.24, "end": 1413.44, "text": " popped up in the search, which is a parody account of a Bored Bengio. So I don't know why, but I just" }, { "start": 1413.44, "end": 1418.4, "text": " find this really funny. Please follow Bored Bengio on Twitter. If the account gets enough followers," }, { "start": 1418.4, "end": 1426.4, "text": " we can maybe bully the real Bengio to also get on Twitter. Andrew Mayne released a cool blog post" }, { "start": 1426.4, "end": 1432.5600000000002, "text": " titled Building games and apps entirely through natural language using OpenAI's code-davinci model." }, { "start": 1432.5600000000002, "end": 1439.8400000000001, "text": " So this is essentially an exploration of OpenAI's Codex model that can take in natural language and" }, { "start": 1439.8400000000001, "end": 1445.3600000000001, "text": " produce code. And Andrew has used this to build various games. And it's pretty cool to see, for" }, { "start": 1445.3600000000001, "end": 1451.8400000000001, "text": " example, here is a minimal Legend of Zelda that was built using this input right here. That's it." }, { "start": 1451.84, "end": 1457.6799999999998, "text": " That's the input. There are various other projects such as a Wordle clone, a Matrix rain effect," }, { "start": 1457.6799999999998, "end": 1464, "text": " tic-tac-toe, an image manipulation tool, and much more. What I find really interesting is that you" }, { "start": 1464, "end": 1470.48, "text": " can't really yet describe the application you want in natural language as a non-programmer would do." }, { "start": 1470.48, "end": 1475.52, "text": " But you still very much have to speak like a programmer. Essentially, you have to write all" }, { "start": 1475.52, "end": 1482, "text": " the comments that go with your code, and the model will simply implement that stuff for you. So this" }, { "start": 1482, "end": 1487.52, "text": " might be an artifact of how it's trained, and could definitely help programmers in the future. However," }, { "start": 1487.52, "end": 1492.96, "text": " it also shows we're not quite at the point yet where a non-programmer could sit down and use" }, { "start": 1492.96, "end": 1500.32, "text": " one of these models to build an application. The You search engine has added a little tool that's" }, { "start": 1500.32, "end": 1506.56, "text": " called YouWrite that helps you write stuff. So you input whatever you want here, and you'll get" }, { "start": 1506.56, "end": 1512.72, "text": " out a text, and I thought we'll just make the title of this video whatever YouWrite outputs." }, { "start": 1512.72, "end": 1520.3999999999999, "text": " So we'll go to the article about the toxic compounds. We're just going to kind of copy the thing here" }, { "start": 1520.4, "end": 1530.5600000000002, "text": " or paste it here. We want a title. Our audience is YouTube. We want a tone that is persuasive."
}, { "start": 1531.3600000000001, "end": 1538.5600000000002, "text": " Let's go AI threatens biological arms race. Why not? Why not? Let it be the title. So if you want" }, { "start": 1538.5600000000002, "end": 1545.2800000000002, "text": " to try out you write then go to you.com search for how to write well currently you is in beta. So" }, { "start": 1545.28, "end": 1552.8, "text": " signups are free for now. I don't know for how long more for us has a blog post called making" }, { "start": 1552.8, "end": 1558.48, "text": " deep learning go from first principles and yes, you have to pronounce like so the theme of the" }, { "start": 1558.48, "end": 1565.84, "text": " blog post is that lots of people have either superstitious ideas of how to accelerate deep" }, { "start": 1565.84, "end": 1571.76, "text": " learning or they just kind of know some tricks from somewhere like, oh, just use whatever function" }, { "start": 1571.76, "end": 1576.96, "text": " here instead of that other function or in place operations are better or non in place operations" }, { "start": 1576.96, "end": 1581.84, "text": " are better. And this blog post goes into details in how you can think about deep learning performance" }, { "start": 1581.84, "end": 1587.92, "text": " and by that I mean, like things going fast and things being efficient from first principles by" }, { "start": 1587.92, "end": 1594.8799999999999, "text": " thinking about how compute and memory and transfer between accelerators and CPUs interact and so on" }, { "start": 1594.8799999999999, "end": 1599.2, "text": " is a pretty good read. And if you're interested, I definitely recommend that you check it out." }, { "start": 1599.2, "end": 1606.72, "text": " Related Andre Karpat has released a new blog post in which he goes about recreating one famous paper" }, { "start": 1606.72, "end": 1613.92, "text": " of young Lecar from 1989 about handwritten digit recognition with convolutional neural networks." }, { "start": 1613.92, "end": 1619.44, "text": " This is also very cool because Karpat the implements the original model as much as he can" }, { "start": 1619.44, "end": 1625.1200000000001, "text": " decipher from the original paper and tries to reproduce those results. I have to say he does" }, { "start": 1625.12, "end": 1630.3999999999999, "text": " get pretty close and then he goes ahead and implements all of the things that we've learned" }, { "start": 1630.3999999999999, "end": 1637.36, "text": " so far about deep learning about how to tweak architectures and so on. And he's able to bring" }, { "start": 1637.36, "end": 1644.08, "text": " down the validation loss by quite a bit. So in the end, he gets I think over a 60% reduction in" }, { "start": 1644.08, "end": 1649.9199999999998, "text": " validation error by implementing all of the newer techniques and finally also scaling up the data" }, { "start": 1649.9199999999998, "end": 1654.56, "text": " sets a bit. He draws some conclusions and finally concludes with a bit of a final look at the" }, { "start": 1654.56, "end": 1660, "text": " data set. He concludes with a bit of an outlook instead of looking 30 years into the past looking" }, { "start": 1660, "end": 1666.08, "text": " 30 years into the future, trying to extrapolate a little bit of what the world of deep learning" }, { "start": 1666.08, "end": 1672.8, "text": " and AI might look like then looking back to now is a pretty cool read and a pretty cool project." 
}, { "start": 1672.8, "end": 1674.6399999999999, "text": " Definitely recommend you check it out." }, { "start": 1676.1599999999999, "end": 1680.8799999999999, "text": " University of Copenhagen has a press release about their paper called pick grunts reveal" }, { "start": 1680.88, "end": 1686.5600000000002, "text": " about a system that has a data set of pick grunts with annotations of whether pigs are happy or not" }, { "start": 1686.5600000000002, "end": 1692.3200000000002, "text": " or surprised or anxious and it develops a system to classify these things. So all in all this is a" }, { "start": 1692.3200000000002, "end": 1698.48, "text": " pretty cool application of deep learning. And it turns out short grunts are happy grunts. Who knew?" }, { "start": 1698.48, "end": 1705.92, "text": " I guess farmers knew all along but you know, who knew? Google AI blog has a post about using deep" }, { "start": 1705.92, "end": 1711.8400000000001, "text": " learning to annotate the protein universe. Now, whereas systems like alpha fold have generated a" }, { "start": 1711.8400000000001, "end": 1718.48, "text": " lot of buzz, there are a lot of different tasks in the macro molecules or more specifically the" }, { "start": 1718.48, "end": 1725.52, "text": " protein area of biology. The one tackled here is the question of what kind of function does a protein" }, { "start": 1725.52, "end": 1730.8000000000002, "text": " have and what domains within the protein exhibit those functions. So the paper is about recent" }, { "start": 1730.8, "end": 1736.96, "text": " advances by Google to build systems that would annotate such sequences and proteins with their" }, { "start": 1736.96, "end": 1742.08, "text": " respective functions and push the state of the art by quite a bit. Now for that they use interestingly" }, { "start": 1742.08, "end": 1748.48, "text": " enough dilated convolutional networks. And they emphasize that a big part of getting this research" }, { "start": 1748.48, "end": 1754.56, "text": " to be successful is to actually also care for the implementation and the architecture. But also there's" }, { "start": 1754.56, "end": 1760.56, "text": " a big part in data set preparation and really validating your approach really making sure that" }, { "start": 1760.56, "end": 1767.44, "text": " what you do is effective and valid is a pretty cool read and along with it goes a larger a little" }, { "start": 1767.44, "end": 1773.6799999999998, "text": " bit of a website blog post a little bit like a distill article that is interactive that you can" }, { "start": 1773.6799999999998, "end": 1778.8, "text": " read and that contains some hands on demonstrations where you can learn about the architecture," }, { "start": 1778.8, "end": 1787.36, "text": " learn about the results and explore a little bit by yourself. Jeff Atwood and John Carmack have made" }, { "start": 1787.36, "end": 1795.36, "text": " a bet. The bet is whether or not by January 1 2030 completely autonomous self driving cars" }, { "start": 1795.36, "end": 1802.32, "text": " meeting level five fully self driving specification will be commercially available for passenger use" }, { "start": 1802.32, "end": 1809.4399999999998, "text": " in major cities. In this instance, John Carmack is for and Jeff Atwood is against now I have to say" }, { "start": 1810.1599999999999, "end": 1816.6399999999999, "text": " 2030 isn't that far away. 
And as Jeff Atwood points out, fully self driving is a really hard" }, { "start": 1816.64, "end": 1822.3200000000002, "text": " problem. However, as other people point out, in some major cities you're already able" }, { "start": 1822.3200000000002, "end": 1827.76, "text": " to call something like a robotaxi, which doesn't seem to be too far away from what's needed. But" }, { "start": 1827.76, "end": 1834, "text": " that might just appear so, because again, the gap between driving in controlled conditions on terrain" }, { "start": 1834, "end": 1838.5600000000002, "text": " and roads that you know, where you have exact specifications of everything, and being able to" }, { "start": 1838.5600000000002, "end": 1844.0800000000002, "text": " handle most situations that a human driver would encounter anywhere at all times, that's a big" }, { "start": 1844.08, "end": 1848.24, "text": " difference. I'm not sure how this bet is going to turn out. That's why it's interesting. But" }, { "start": 1848.24, "end": 1856.48, "text": " I'm interested to hear your opinions in the comments. Alright, lastly, we'll get to some" }, { "start": 1856.48, "end": 1862.72, "text": " helpful things. Helpful things for this week: Rubrix is an open source platform for data-centric NLP," }, { "start": 1862.72, "end": 1869.84, "text": " mostly concerned with managing text data and annotating it. Kubric is a scalable data set" }, { "start": 1869.84, "end": 1877.84, "text": " generator for video and 3D data. Composer is a PyTorch library for efficient neural network" }, { "start": 1877.84, "end": 1882.56, "text": " training. They implement a lot of the recent advances in speed-ups of training, and give you" }, { "start": 1882.56, "end": 1888.3999999999999, "text": " reproducible and accessible baselines for you to implement your own very speedy training loops." }, { "start": 1888.3999999999999, "end": 1894.6399999999999, "text": " MuJoCo is a physics simulation library, but I guess you already knew that. However, as we've" }, { "start": 1894.64, "end": 1900.96, "text": " reported, DeepMind took over, essentially bought, MuJoCo, and is releasing it open source. And now" }, { "start": 1900.96, "end": 1906.8000000000002, "text": " they've implemented Python bindings. So you're just able to do pip install mujoco. We've been" }, { "start": 1906.8000000000002, "end": 1916.48, "text": " waiting for this for decades. Thank you. MCTX is Monte Carlo tree search in JAX. PADL, standing" }, { "start": 1916.48, "end": 1921.76, "text": " for Pipeline Abstractions for Deep Learning, is a deep learning library that in its own words makes" }, { "start": 1921.76, "end": 1927.92, "text": " working with deep learning models intuitive, simple and fun. And it is entirely cross compatible with" }, { "start": 1927.92, "end": 1935.28, "text": " the entire PyTorch and scientific Python ecosystem. Did-it-spill is a library for PyTorch that checks" }, { "start": 1935.28, "end": 1941.28, "text": " if you have any test samples that were in the training set. Speaking of PyTorch, PyTorch releases" }, { "start": 1941.28, "end": 1947.12, "text": " version 1.11 with the addition of TorchData and functorch. Now these things have been" }, { "start": 1947.12, "end": 1953.12, "text": " brewing for a while, but it's pretty cool to see them added to the library.
TorchData is a library," }, { "start": 1953.12, "end": 1958.8, "text": " a bunch of functions that make it really easy to do various data set loading, composing and" }, { "start": 1958.8, "end": 1963.4399999999998, "text": " transforming things directly in the data loading pipeline, whereas functorch is a library that" }, { "start": 1963.4399999999998, "end": 1969.1999999999998, "text": " adds composable function transforms to PyTorch, a little bit in the flavor of JAX. So definitely" }, { "start": 1969.1999999999998, "end": 1973.9199999999998, "text": " check out both. Alright, that was already it for the helpful things and ML News. This episode is" }, { "start": 1973.92, "end": 1979.68, "text": " already way too long. Thank you for sticking around. Check out GTC, use the link, sign up," }, { "start": 1979.68, "end": 2004.64, "text": " win some merch or a 3090, and I'll see you around. Thank you, bye bye." } ]
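To make the functorch mention above concrete: composable function transforms mean you can build things like per-sample gradients by stacking transforms, much as you would in JAX. A minimal sketch, assuming PyTorch 1.11 with the functorch package available; the loss function and shapes here are made up for illustration:

```python
import torch
from functorch import grad, vmap  # functorch ships alongside PyTorch 1.11

def loss(w, x):
    # toy scalar loss of weights w given one sample x
    return (x @ w).sin().sum()

w = torch.randn(3)
xs = torch.randn(5, 3)
# Compose transforms JAX-style: grad w.r.t. w, then vmap over the batch of samples.
per_sample_grads = vmap(grad(loss), in_dims=(None, 0))(w, xs)
print(per_sample_grads.shape)  # torch.Size([5, 3]), one gradient per sample
```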
pH2jZun8MoY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Involution: Inverting the Inherence of Convolution for Visual Recognition (Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "what is deep learning", "deep learning tutorial", "introduction to deep learning", "computer vision", "convolutional neural network", "convolutions alternative", "cnn attention", "self attention", "attention mechanism for vision", "weight sharing neural networks", "convolutions vision", "cnn vision", "involution vision", "image segmentation", "rednet", "resnet", "residual neural networks", "bytedance ai" ]
#involution #computervision #attention Convolutional Neural Networks (CNNs) have dominated computer vision for almost a decade by applying two fundamental principles: Spatial agnosticism and channel-specific computations. Involution aims to invert these principles and presents a spatial-specific computation, which is also channel-agnostic. The resulting Involution Operator and RedNet architecture are a compromise between classic Convolutions and the newer Local Self-Attention architectures and perform favorably in terms of computation accuracy tradeoff when compared to either. OUTLINE: 0:00 - Intro & Overview 3:00 - Principles of Convolution 10:50 - Towards spatial-specific computations 17:00 - The Involution Operator 20:00 - Comparison to Self-Attention 25:15 - Experimental Results 30:30 - Comments & Conclusion Paper: https://arxiv.org/abs/2103.06255 Code: https://github.com/d-li14/involution Abstract: Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at this https URL. Authors: Duo Li, Jie Hu, Changhu Wang, Xiangtai Li, Qi She, Lei Zhu, Tong Zhang, Qifeng Chen Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there! Today we're looking at Involution, inverting the inherence of convolution for visual recognition, by a number of researchers of the Hong Kong University of Science and Technology, ByteDance AI lab and Peking University. In this paper, on a high level, the researchers try to replace the good old convolution operator in CNNs by this new thing called an involution. In its essence, involution is about halfway between a convolution and a self-attention kind of operation. And it turns out that with some clever weight-sharing scheme you can achieve very good performance compared to CNNs and self-attention networks, while keeping the number of parameters and the computational cost relatively low. This I think is very much worth trying for anyone who does not operate on extremely large-scale problems. We'll get into that a bit more when we go into the experiments, but for now let's go through the paper, through what involution is, what it does, how it's different. If you like this, don't hesitate to share it out, it would help a lot. We're on the road to a hundred K subscribers and with every subscriber I get a subscriber. I stole that joke. They say here in the abstract: convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. AlexNet, ResNet, etc. Convolution, even though transformers are slowly taking over computer vision, convolutions are still very, very much used, and if you're not on a super large scale problem, a convolutional neural network is still very probably the best way to go if you have a computer vision problem. They say we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. They say we additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. A lot of statements in this paper are true, especially further down. A lot of the experiments are really cool, but it is a bit of an overstatement what they say right here. Their claim is that if you have a convolution, what you do is something that's spatial agnostic and channel specific, which means that in a convolutional neural network, when you have an image, let's say, with a bunch of pixels, these are now true pixels, not patches, and you run a convolutional layer over it, you run a convolutional kernel over it, you put the center of the kernel at some pixel, so the kernel will be something like a 3x3 kernel, you put that on the center here, so it overlaps here, you multiply element-wise, and then you aggregate. You can do that in multiple channels, but essentially you do that. Then after you've done that, you move the kernel one, let's say to the right, you shift it, so the center is here, you do the same thing again, and you shift it, you do the same thing again. It's spatial agnostic because it repeats the same computation over and over and over across the image, and it doesn't care where the computation is. It does the same computation, and that is the selling point of convolutional neural networks. They are translation invariant. It's a form of weight sharing, you share the weights across the locations, and therefore you don't really care where stuff is in the image.
The CNN will be able to recognize it just as well, and you don't need to learn over and over and over the same principle just because it's in different parts of the image. This is spatial agnostic. What does channel specific mean? For that we have to go into the multiple channels realm. If your image has multiple channels, let's say I'm going to draw a new image right here with a bunch of pixels, and it has multiple channels, that means you can imagine it sort of as a 3D tensor here, where each pixel is a column, and every column is a vector of a certain dimensionality. The original image has of course three channels, which is red, green, and blue, but if you have intermediate representations these channels can grow to sizes of hundreds of channels. The point of the channels is that every entry here is a number, and every number can capture one aspect of what's described in that particular pixel. Maybe the first channel is a corner, the second one is an edge, the third one is a blue pixel, the fourth one is probably a cat here, and so on. These are the different features in the channels. A convolution operator is channel specific, that means if you have the kernel... Now convolutional kernels aren't as easy as I drew them, they're in fact four dimensional tensors. They are four dimensional tensors, which makes it a little bit complicated for me to draw, honestly. However, you can imagine that you have one kernel like so, that has the same amount of channels as your image. Now you can still do the same operation. You can overlay your kernel on a part of the image, overlay it like so, and then you can do element-wise multiplication, and then you do a sum, you sum it all up. After you do this operation, you do a big sum over all the elements of whatever your kernel multiplied with your image, and that gives you one number. You do an all-reduce, one number gives you one number. So you do this, so this is one kernel, but you have another one right here. You do the same thing, and that gives you also one number. You have another kernel, I think you get the idea, you have another kernel here. You have many of those kernels per layer. If you've never looked at how the weights look when you instantiate these layers in a deep learning framework, I encourage you to do so. A convolutional layer will have weights that are of the size kernel size by kernel size, by input channels, by output channels. It's a 4D tensor, and this orange part here is just one of those sub tensors. In fact you have as many as you have output channels. That gives you, of course when you then go over all of these, that gives you the next layer. So that becomes in the next layer. This is the next layer representation, at the point where you overlaid the kernel in the last layer, that will become this column right here. So you have the orange thing in the first, the blue thing in the second channel, green thing in the third channel, and so on. I hope this is relatively clear. So you have in fact one convolutional kernel per output channel. So if you call the orange thing here a convolutional kernel, then you have one kernel per output channel. That means it's channel specific. This is a conscious choice and it makes sense when you think about it, because each output channel means something different. 
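A quick way to convince yourself of that four-dimensional weight shape is to instantiate such a layer and look at it. A minimal check in PyTorch; note that PyTorch happens to order the dimensions as output channels, input channels, kernel height, kernel width:

```python
import torch.nn as nn

# One full in_channels x k x k kernel per output channel, as described above.
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)
print(conv.weight.shape)                           # torch.Size([128, 64, 3, 3])
print(sum(p.numel() for p in conv.parameters()))   # 128*64*3*3 weights + 128 biases
```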
If my output channel means, is there a cat at this particular location, then I might want to aggregate the last layer's representation differently than if my output channel says, well, is this part of the sky, or is there a corner here, or something like this. So I want to aggregate the weights differently. That's why I have to have a different set of weights here, here, and here, because they mean different things. So it's spatial agnostic, because it does the same computation at every location. It's channel specific, because it does a different computation at each channel, even though it does it for all the locations equally. Now we're prepared to invert that. Those are the premises of convolution; we invert this. What we want to do is something spatial specific and channel agnostic. So the first thing here is the channel agnostic part. If you've seen my last video about MLP-Mixer, this is very much the same idea. The idea is just, hey, why do we have different things here? Why do I have different computations? Can't we just apply the same principle we apply to the spatial thing, where we say we just slide the same computation over the image, and that is generally fine. That's weight sharing, it's actually good. Why don't we just do this here? Why don't we aggregate the information in the same way for all the different channels? So you can do that. You can just have one kernel. So instead of having a number of kernels, one per output channel, the involution will come up with simply one kernel that it shares across all of the channels. They have a little picture down here. Just look at the last step right here. Wow, sorry, I crossed that out. Here, this is the kernel that they have. Sorry, it's not even by number of channels. It's actually, you just flatten this thing. So it's a k by k by 1 kernel, and you simply push that, put that over a location in the image, and then you share the computation across. So the image here, given that this is all in the same colors, it means that you just multiply, you broadcast. That's the word I was looking for. You broadcast the operation across the channels and then you aggregate after that. So you can see, what involution does is broadcast and then not reduce. You don't reduce at the end to a single number, but you keep the channels as they are. That's why you only need a k by k by 1, because you don't have the different computation for each output channel, and you don't reduce across the input channels. So you get away with a lot less parameters. That's even wrong here, it's just a k by k kernel. Now that's one part. The other part is, why don't we do something that's spatial specific. Now remember what spatial agnostic was. Spatial agnostic was, we slide the same kernel across the image. What they're saying in the first instance, and they said something, I don't know where it was in the paper exactly, is that what we could do, if we have an image and we do something spatial specific, is have a kernel that's just as big as the image. Then there's no more sliding across it. You simply multiply those things together, you broadcast it across these channels of the image, and there you go. Also something that MLP-Mixer does, they just say, whatever, we don't do slidey slidey anymore. They do weight sharing, but essentially you're trying to get rid of this sliding over. You have different weights for each location. That means that the computation actually differs depending on where stuff is in the image.
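To make the broadcast-then-don't-reduce idea from above concrete, here is a tiny sketch for a single neighborhood; the shapes are illustrative choices, not taken from the paper's code:

```python
import torch

patch = torch.randn(64, 3, 3)   # one neighbourhood: 64 channels, 3x3 spatial extent
kernel = torch.randn(1, 3, 3)   # a single k x k (x 1) kernel shared by every channel

# Broadcast the kernel across all channels, then reduce only over space.
# A convolution would additionally sum over the 64 channels and repeat this
# with a different kernel for every output channel.
out = (patch * kernel).sum(dim=(1, 2))
print(out.shape)  # torch.Size([64]), the channels are kept, not summed away
```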
We know that that is somewhat important, because usually the sky is up, and objects in these natural images that humans take might be more in the middle than anywhere else. Text goes from left to right. It's not all super translation and location invariant. It makes sense to have weights that are different for each position. But then they run into a problem. They say, we couldn't do that very well, because now we can't just input pictures of different resolutions. That's one problem. I think the other problem is that this might not work too well. So they come up with a different thing. They say, can't we make a compromise? They don't call it a compromise, they call it something different. But they say, look, can we come up with a scheme where we can retain a kernel that's approximately this size, like a small kernel, but it is different for each location. We still do the classic convolution way of doing things, in that we do these local aggregations across neighboring pixels. However, the kernel that we use here is different from the kernel that we use here, and that's different from the kernel that we use here. How could you make a computation where the kernel is always different? You do that by coming up with the kernel in a dynamic way. The authors here say, let's say we're at this pixel right here. We care about this neighborhood. How can we come up, on the fly, with a kernel for this particular pixel? Their answer is, let's just generate it from the pixel. This is the full involution diagram. We've now arrived at this. We are at this neighborhood, which is outlined here in this black scaffolding grid thing. The center pixel is the red pixel here. They say, we look at that pixel and all its channels. We use that pixel, and only that pixel, not the neighborhood, to come up with the kernel. They have a computation here, which of course is going to be a small neural network. This is a two-layer neural network that comes up with the kernel. You see, this is simply a reshape. You compute the kernel across the neighborhood from the pixel itself. That means that every single pixel here, unless it's the exact same pixel, so the exact same color in the first layer, or the exact same representation in the intermediate layers, every single location gets its own kernel for the convolution. The computation, I've already told you, is a small neural network. Specifically it's a bottleneck neural network. It takes the pixel representation as a vector, bottlenecks it, there is a non-linearity here, and then it expands it again to the size of the actual kernel. Then you use that kernel and you broadcast it, instead of having one kernel per input channel. Then you multiply, and then you don't reduce across the input channels. That alleviates you from having to have multiple kernels, one for each output channel. This is the whole involution pipeline. I would say there are multiple different concepts here. This coming up with the kernel on the fly is one concept. Then this broadcasting scheme is an entirely different concept. You could do both independently of each other, but they do them together. They do ablations further down, but it's two new things in one. The first thing here is that you might think of an attention mechanism as you look at that. It's a form of fast weights. The weights of the computation are computed on the fly from the data itself. That is exactly what an attention mechanism does. However, here you do it in a slightly different way.
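Putting the two pieces together, the kernel generated on the fly from the center pixel and the broadcast across channels, the operator can be sketched compactly in PyTorch. This is a minimal reading of the pipeline just described (stride 1 only, odd kernel sizes, and the reduction ratio and group count are illustrative defaults), not the authors' official implementation:

```python
import torch
import torch.nn as nn

class Involution2d(nn.Module):
    """Sketch: generate a K*K kernel per pixel, broadcast it over the channels
    of a group, and reduce over space only (stride 1, odd kernel size)."""
    def __init__(self, channels, kernel_size=7, groups=1, reduction=4):
        super().__init__()
        self.k, self.g = kernel_size, groups
        # two-layer bottleneck network: a pixel's channels -> its own kernel
        self.reduce = nn.Conv2d(channels, channels // reduction, 1)
        self.span = nn.Conv2d(channels // reduction, kernel_size ** 2 * groups, 1)
        self.relu = nn.ReLU()
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        # 1) one K*K kernel per location (and group), computed from that pixel alone
        weights = self.span(self.relu(self.reduce(x)))            # B, G*K*K, H, W
        weights = weights.view(b, self.g, 1, self.k ** 2, h, w)
        # 2) gather every pixel's K x K neighbourhood
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k ** 2, h, w)
        # 3) broadcast over the channels of each group, then sum over space only
        return (weights * patches).sum(dim=3).view(b, c, h, w)

x = torch.randn(2, 64, 32, 32)
print(Involution2d(64, kernel_size=7, groups=4)(x).shape)  # torch.Size([2, 64, 32, 32])
```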
They have a discussion about attention right here. They say there are a bunch of differences. In attention, what you'd have is, you don't only compute your weights from the actual location where you are, even in local self-attention. You actually compute your weights from more than just the pixel where you are, you compute them from the entire region you care about. That's the first thing. The second thing is that in self-attention you have the queries and the keys. You have your data, your neighborhood, let's say, and each of those things produces a query and a key. Everyone produces a query and a key. Then you do this sort of quadratic thing in order to determine how you should aggregate your information. In involution you simply don't produce keys. You only produce queries, if you will, or only keys, however you want to look at it. Then you don't do the quadratic thing; rather, you immediately interpret this as the weights of aggregation. You can, as they say, interpret this as the positional encodings already being present in these weights, because it's now specific to a position. Whereas in the attention literature, you'd have to supply positional encodings. In order for the algorithm to know that this is a different thing, that this here is a different thing from this thing here, you need to supply it with positional encodings. Not here, because the individual channels of this thing immediately refer to different positions. This neural network is very aware of what position is where, relative to the pixel you're considering. They say the success of involution explains in part why other people had lots of success with leaving away the keys and only using positional encodings together with the query. If I'm not mistaken, I think you could frame the LambdaNetworks into this category, where at some point they never do this attention. However, they rely heavily on positional encodings. However, you can learn those ahead of time, or statically. This is the connection to attention. The connection to attention is that the weights are constructed on the fly. However, here there's no quadratic interaction, there is no softmax, and so on. You construct the weights from the pixel in the center. To frame attention as a more complicated instantiation of our idea, that's a bit out there. The authors here say that attention is just a more complicated thing. The second thing I worry a bit about is that they say that this is position specific. They started out with saying that convolution is spatial agnostic, we want to do something spatial specific. This here is also spatial agnostic. If you get the same pixel at different locations in the image, this thing will produce the same weights, and the computation will be the same. In fact, you do this entire computation right here, and that is a spatially agnostic computation. The difference here is the same difference that you have between slow weights and fast weights. You simply construct the weights of the actual computation on the fly. However, the way you construct these weights remains position agnostic. The second thing is that the weight sharing is a bit of an independent thing. I get that the two work well together, but the broadcasting and weight sharing thing across the channels is arguably a much simpler idea on its own. It's a bit related to the fact that if you have a depthwise separable convolution and you simply share the weights across that, that's about what it boils down to. What does that give us? In fact, it gives us a lot.
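The no-keys, no-softmax point can be seen in a toy one-neighborhood example; random matrices stand in for all the learned parts here:

```python
import torch

torch.manual_seed(0)
d, n = 8, 9                   # channels, and a 3x3 neighbourhood flattened to 9 spots
neigh = torch.randn(n, d)     # the neighbourhood's features; index 4 is the centre
center = neigh[4]

# Local self-attention: weights are a quadratic function of queries AND keys.
Wq, Wk = torch.randn(d, d), torch.randn(d, d)
attn_w = torch.softmax((center @ Wq) @ (neigh @ Wk).T / d ** 0.5, dim=-1)

# Involution: weights come from the centre pixel alone. No keys, no softmax,
# and position is implicit: output unit i is always the weight for neighbour i.
Wgen = torch.randn(d, n)      # stands in for the small bottleneck network
inv_w = center @ Wgen

out_attn = attn_w @ neigh     # both aggregate the same neighbourhood
out_inv = inv_w @ neigh
```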
In this paper they do experiments, and they compare against, for example, ResNets and other networks with a similar number of parameters. I like these experiments here, in that you can see they always make sure that they have the lowest number of parameters among the things they compare with, yet they show that they still beat these models. They compare ResNet with the same number of layers. This is standalone ResNet, here is the axial ResNet. You can see that this outperforms on these tasks. This is ImageNet. They also have different things, such as this segmentation task. I think they have a picture down here, this segmentation task, where they perform better. This is the baseline, and you can see the involution network. I think the effect that you see right here, the fact that they are better in these numbers, is really cool. It's probably a bit due to the fact that they do this on-the-fly computation of weights, which is a more powerful idea than the static weights of a convolution. The lower number of parameters, I think, is more a result of their weight sharing. They tout here how they are on par with ResNet-101 regarding top-1 recognition accuracy, while saving 65% of storage and computation. I think that the saving of computation is more due to the weight sharing mechanism. I think they've just selected tasks, and they might be important tasks, where it was just the case that whether or not you share the weights probably doesn't matter. It doesn't hit you as hard, or is even beneficial, if you don't have enough data. Therefore, that's why they have less parameters. What you can also observe here is that the differences get continuously smaller as you move up the scale of the network. This is all on the same data set, but it would be interesting to see how this performs on a really large scale. My intuition is that, as you go larger and larger in scale, this approach is going to top out and lose out to the more general architectures like attention. It's a clown world now. In these regimes, and I would argue these are the regimes that a lot of practitioners care about, these and actually smaller regimes, this seems to perform reasonably well. You can see right here, the curves, when you compare compute to accuracy, are very favorable. Especially if you're in this region here, if you're in the low-resource region, it might be something that you want to try out. It remains to be seen how well this is pre-trainable and fine-tunable, but it's something you might want to try. It would also be interesting to see what happens if you only use parts of it, if we still do convolution, but we do this weight-sharing scheme. They also have a notion of grouping in the channels, as the attention mechanism has it. Sharing a single kernel across all channels obviously underperforms in accuracy. Considering channel redundancy of involution kernels, as long as the number of channels shared in a group is in an acceptable range, the channel-agnostic behavior will not only preserve the performance, but also reduce the parameter count and computational cost. This will also permit a larger kernel size under the same budget. It's the same reasoning as people introducing groups or different heads in multi-head attention. Try all of this stuff out, I think it's worth it. The code is available right here, I'll also put a link to that. That was it from me for this paper. I wish you a very pleasant day of the week. Bye bye.
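On the parameter-count point, a quick back-of-the-envelope comparison for a single 7x7 layer with 256 channels makes the weight-sharing savings concrete; the reduction ratio of 4 and 16 groups are assumptions picked for illustration, not necessarily the paper's exact configuration:

```python
k, c, g, r = 7, 256, 16, 4
conv_params = k * k * c * c                           # 3,211,264 static weights
inv_params = c * (c // r) + (c // r) * (k * k * g)    # 16,384 + 50,176 = 66,560
print(conv_params, inv_params, round(conv_params / inv_params))  # ~48x fewer
```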
[ { "start": 0, "end": 5.44, "text": " Hello there! Today we're looking at involution, inverting the inheritance of" }, { "start": 5.44, "end": 9.78, "text": " convolution for visual recognition by a number of researchers of the Hong Kong" }, { "start": 9.78, "end": 14.76, "text": " University of Science and Technology, ByteDance AI lab and Peking University." }, { "start": 14.76, "end": 21.48, "text": " In this paper on a high level the researchers try to replace the good old" }, { "start": 21.48, "end": 28.86, "text": " convolution operator in CNNs by this new thing called an involution. In its" }, { "start": 28.86, "end": 35.08, "text": " essence, involution is about halfway between a convolution and a self" }, { "start": 35.08, "end": 42.12, "text": " attention kind of operation. And it turns out that with some clever" }, { "start": 42.12, "end": 48.480000000000004, "text": " weight-sharing scheme you can achieve very good performance compared to CNNs" }, { "start": 48.480000000000004, "end": 53.28, "text": " and self-attention networks, while keeping the number of parameters and the" }, { "start": 53.28, "end": 60.480000000000004, "text": " computational cost relatively low. This I think is very much worth trying for" }, { "start": 60.480000000000004, "end": 67.2, "text": " anyone who does not operate on extremely large-scale problems." }, { "start": 67.2, "end": 71.56, "text": " We'll get into that a bit more when we go into the experiments, but for now" }, { "start": 71.56, "end": 76.8, "text": " let's go through the paper, through what involution is, what it does, how it's" }, { "start": 76.8, "end": 84.6, "text": " different. If you like this, don't hesitate to share it out, it" }, { "start": 84.6, "end": 89.75999999999999, "text": " would help a lot. We're on the road to a hundred K subscribers and with every" }, { "start": 89.75999999999999, "end": 97.08, "text": " subscriber I get a subscriber. I stole that joke. They say here in the" }, { "start": 97.08, "end": 101.03999999999999, "text": " abstract, convolution has been the core ingredient of modern neural networks" }, { "start": 101.03999999999999, "end": 105.32, "text": " triggering the surge of deep learning in vision." }, { "start": 105.32, "end": 112.16, "text": " AlexNet, ResNet, etc. Convolution, even though transformers are slowly taking" }, { "start": 112.16, "end": 119.44, "text": " over computer vision, convolutions are still very very much used and if you're" }, { "start": 119.44, "end": 124.52, "text": " not on a super large scale problem a convolutional neural network is still" }, { "start": 124.52, "end": 130.95999999999998, "text": " very probably the best way to go if you have a computer vision problem. They say" }, { "start": 130.96, "end": 136.56, "text": " we rethink the inherent principles of standard convolution for vision tasks," }, { "start": 136.56, "end": 142.28, "text": " specifically spatial agnostic and channel specific. Instead we present a" }, { "start": 142.28, "end": 146.68, "text": " novel atomic operation for deep neural networks by inverting the aforementioned" }, { "start": 146.68, "end": 152.68, "text": " design principles of convolution, coined an involution. They say we" }, { "start": 152.68, "end": 156.4, "text": " additionally demystify the recent popular self-attention operator and" }, { "start": 156.4, "end": 162.20000000000002, "text": " subsume it into our involution family as an over complicated instantiation." 
}, { "start": 162.20000000000002, "end": 171.6, "text": " A lot of statements in this paper are true, especially further" }, { "start": 171.6, "end": 176.28, "text": " down. A lot of the experiments are really cool, but it is a bit of an over" }, { "start": 176.28, "end": 183.64000000000001, "text": " statement what they say right here. Their claim is that if you have a" }, { "start": 183.64, "end": 188.44, "text": " convolution, what you do is something that's spatial agnostic and" }, { "start": 188.44, "end": 194.76, "text": " channel specific, which means that in a convolutional neural network when" }, { "start": 194.76, "end": 200.95999999999998, "text": " you have an image, let's say, with a bunch of pixels, these are now true pixels, not" }, { "start": 200.95999999999998, "end": 207, "text": " patches, and you run a convolutional layer over it, you run a convolutional" }, { "start": 207, "end": 213.96, "text": " kernel over it, you put the center of the kernel at some pixel, so the kernel" }, { "start": 213.96, "end": 219.8, "text": " will be something like a 3x3 kernel, you put that on the center here, so it" }, { "start": 219.8, "end": 224.88, "text": " overlaps here, you multiply element-wise, and then you aggregate. You can do" }, { "start": 224.88, "end": 228.56, "text": " that in multiple channels, but essentially you do that. Then after" }, { "start": 228.56, "end": 233.68, "text": " you've done that, you move the kernel one, let's say to the right, you" }, { "start": 233.68, "end": 239.24, "text": " shift it, so the center is here, you do the same thing again, and you shift it," }, { "start": 239.24, "end": 243.68, "text": " you do the same thing again. It's spatial agnostic because it repeats the" }, { "start": 243.68, "end": 249.72, "text": " same computation over and over and over across the image, and it doesn't care" }, { "start": 249.72, "end": 255.44, "text": " where the computation is. It does the same computation, and that is the" }, { "start": 255.44, "end": 259.4, "text": " selling point of convolutional neural networks. They are translation" }, { "start": 259.4, "end": 264.03999999999996, "text": " invariant. It's a form of weight sharing, you share the weights" }, { "start": 264.03999999999996, "end": 268.91999999999996, "text": " across the locations, and therefore you don't really care where stuff is in the" }, { "start": 268.91999999999996, "end": 274.03999999999996, "text": " image. The CNN will be able to recognize it just as well, and you don't" }, { "start": 274.03999999999996, "end": 279.26, "text": " need to learn over and over and over the same principle just because it's in" }, { "start": 279.26, "end": 284.56, "text": " different parts of the image. This is spatial agnostic. What does channel" }, { "start": 284.56, "end": 290.72, "text": " specific mean? For that we have to go into the multiple channels realm." }, { "start": 290.72, "end": 297.04, "text": " If your image has multiple channels, let's say I'm going to draw a new image" }, { "start": 297.04, "end": 303.92, "text": " right here with a bunch of pixels, and it has multiple channels, that means you can" }, { "start": 303.92, "end": 311.58, "text": " imagine it sort of as a 3D tensor here, where each pixel is a column, and every" }, { "start": 311.58, "end": 319.96, "text": " column is a vector of a certain dimensionality. 
The original image" }, { "start": 319.96, "end": 326.64, "text": " has of course three channels, which is red, green, and blue, but if you have" }, { "start": 326.64, "end": 332.12, "text": " intermediate representations these channels can grow to sizes of hundreds" }, { "start": 332.12, "end": 339.91999999999996, "text": " of channels. The point of the channels is that every entry here is a number, and" }, { "start": 339.92, "end": 346.16, "text": " every number can capture one aspect of what's described in that" }, { "start": 346.16, "end": 351.52000000000004, "text": " particular pixel. Maybe the first channel is a corner, the" }, { "start": 351.52000000000004, "end": 356.52000000000004, "text": " second one is an edge, the third one is a blue" }, { "start": 356.52000000000004, "end": 362.44, "text": " pixel, the fourth one is probably a cat here, and so on. These are the" }, { "start": 362.44, "end": 367.44, "text": " different features in the channels. A convolution operator is channel" }, { "start": 367.44, "end": 372.84, "text": " specific, that means if you have the kernel... Now convolutional kernels aren't" }, { "start": 372.84, "end": 379.24, "text": " as easy as I drew them, they're in fact four dimensional tensors." }, { "start": 379.24, "end": 386.48, "text": " They are four dimensional tensors, which makes it a little bit" }, { "start": 386.48, "end": 393.68, "text": " complicated for me to draw, honestly. However, you can imagine that you" }, { "start": 393.68, "end": 405.56, "text": " have one kernel like so, that has the same amount of channels as your image." }, { "start": 405.56, "end": 410.72, "text": " Now you can still do the same operation. You can overlay your" }, { "start": 410.72, "end": 419.8, "text": " kernel on a part of the image, overlay it like so, and then" }, { "start": 419.8, "end": 424.92, "text": " you can do element-wise multiplication, and then you do a sum, you sum it all up." }, { "start": 424.92, "end": 430.28000000000003, "text": " After you do this operation, you do a big sum over all the elements of" }, { "start": 430.28000000000003, "end": 438.48, "text": " whatever your kernel multiplied with your image, and that gives you one" }, { "start": 438.48, "end": 447.16, "text": " number. You do an all-reduce, one number gives you one number. So you do" }, { "start": 447.16, "end": 453.56, "text": " this, so this is one kernel, but you have another one right here." }, { "start": 455.64000000000004, "end": 462.84000000000003, "text": " You do the same thing, and that gives you also one number." }, { "start": 462.84000000000003, "end": 468.64000000000004, "text": " You have another kernel, I think you get the idea, you have another kernel" }, { "start": 468.64000000000004, "end": 473.88, "text": " here. You have many of those kernels per layer. If you've" }, { "start": 473.88, "end": 477.76, "text": " never looked at how the weights look when you instantiate these" }, { "start": 477.76, "end": 482.24, "text": " layers in a deep learning framework, I encourage you to do so. A" }, { "start": 482.24, "end": 488, "text": " convolutional layer will have weights that are of the size kernel size by" }, { "start": 488, "end": 497.48, "text": " kernel size, by input channels, by output channels. It's a 4D tensor, and this" }, { "start": 497.48, "end": 508.64000000000004, "text": " orange part here is just one of those sub tensors. In fact you have as many as" }, { "start": 508.64000000000004, "end": 515.04, "text": " you have output channels. 
That gives you, of course when you then go over" }, { "start": 515.04, "end": 527.48, "text": " all of these, that gives you the next layer. So that becomes in the next layer." }, { "start": 527.48, "end": 534.7199999999999, "text": " This is the next layer representation, at the point where you" }, { "start": 534.72, "end": 545.28, "text": " overlaid the kernel in the last layer, that will become this column right here." }, { "start": 546.48, "end": 553.24, "text": " So you have the orange thing in the first, the blue thing in the second" }, { "start": 553.24, "end": 558.2, "text": " channel, green thing in the third channel, and so on. I hope this is relatively clear." }, { "start": 558.2, "end": 565.6, "text": " So you have in fact one convolutional kernel per output channel. So if you" }, { "start": 565.6, "end": 568.84, "text": " call the orange thing here a convolutional kernel, then you have one" }, { "start": 568.84, "end": 578.9200000000001, "text": " kernel per output channel. That means it's channel specific. This is a" }, { "start": 578.9200000000001, "end": 584.6, "text": " conscious choice and it makes sense when you think about it, because each" }, { "start": 584.6, "end": 590.0400000000001, "text": " output channel means something different. If my output channel" }, { "start": 590.0400000000001, "end": 595.4, "text": " means is there a cat at this particular location, then I might want to" }, { "start": 595.4, "end": 600, "text": " aggregate the last layer's representation differently than if my" }, { "start": 600, "end": 608.16, "text": " output channel says, well is this part of the sky, or is there a corner here," }, { "start": 608.16, "end": 611.5600000000001, "text": " or something like this. So I want to aggregate the weights differently." }, { "start": 611.56, "end": 617.2399999999999, "text": " That's why I have to have a different set of weights here, here, and here, because" }, { "start": 617.2399999999999, "end": 624.3199999999999, "text": " they mean different things. So it's spatial agnostic, because it does the same" }, { "start": 624.3199999999999, "end": 628.1199999999999, "text": " computation at every location. It's channel specific, because it does a" }, { "start": 628.1199999999999, "end": 632.8399999999999, "text": " different computation at each channel, even though it does it for all the" }, { "start": 632.8399999999999, "end": 640.76, "text": " locations equally. Now we're prepared to invert that. So convolution" }, { "start": 640.76, "end": 647.96, "text": " promises we invert this. What we want to do is something spatial specific and" }, { "start": 647.96, "end": 657.24, "text": " channel agnostic. So the first thing here is the channel agnostic." }, { "start": 657.24, "end": 664.3199999999999, "text": " If you've seen my last video about MLP mixer, this is very much the same idea." }, { "start": 664.3199999999999, "end": 669.88, "text": " The idea is just of, hey why do we have different things here? Why do I have" }, { "start": 669.88, "end": 674.68, "text": " different computations? Can't we just apply the same principle we apply" }, { "start": 674.68, "end": 681.52, "text": " to the spatial thing, where we say we just slide the same computation" }, { "start": 681.52, "end": 686.6, "text": " over the image, and that is generally fine. That's weight sharing, it's actually" }, { "start": 686.6, "end": 691.24, "text": " good. Why don't we just do this here? 
Why don't we aggregate the information in" }, { "start": 691.24, "end": 697.64, "text": " the same way for all the different channels? So you can do that." }, { "start": 697.64, "end": 703.76, "text": " You can just have one kernel. So instead of having a number of output channels," }, { "start": 703.76, "end": 711.8, "text": " many kernel. So the involution will come up with simply one kernel that" }, { "start": 711.8, "end": 717.24, "text": " it shares across all of the channels. They have a little" }, { "start": 717.24, "end": 723.56, "text": " picture down here. Just look at the last step right here." }, { "start": 723.56, "end": 731.1999999999999, "text": " Wow sorry, I crossed that out. Here this is the kernel that they have." }, { "start": 731.1999999999999, "end": 736, "text": " Sorry, it's not even by number of channels. It's actually you" }, { "start": 736, "end": 743.5999999999999, "text": " just flatten this thing. So it's a k by k by 1 kernel and you simply" }, { "start": 743.5999999999999, "end": 751.4, "text": " push that, put that over a location in the image and then you share the" }, { "start": 751.4, "end": 756.72, "text": " computation across. So the image here, given that this is all in the same" }, { "start": 756.72, "end": 763.0799999999999, "text": " colors, it means that you just multiply, you broadcast. That's the word I was" }, { "start": 763.0799999999999, "end": 768.0799999999999, "text": " looking for. You broadcast the operation across the channels and then you" }, { "start": 768.0799999999999, "end": 774.24, "text": " aggregate after that. So you can see what involution does is broadcast and then" }, { "start": 774.24, "end": 780.96, "text": " not reduce. You don't reduce at the end to a single number, but you keep" }, { "start": 780.96, "end": 788.64, "text": " the channels as they are. That's why you only need a k by k by 1," }, { "start": 788.64, "end": 792.9200000000001, "text": " because you don't have the different computation for each output channel and" }, { "start": 792.9200000000001, "end": 799.2800000000001, "text": " you don't reduce across the input channels. So you get away with a lot" }, { "start": 799.2800000000001, "end": 807.6, "text": " less parameters. That's even wrong here. Just a k by k kernel. Now that's" }, { "start": 807.6, "end": 814.96, "text": " one part. The other part is why don't we do something that's spatial" }, { "start": 814.96, "end": 820.6800000000001, "text": " specific. Now remember what spatial agnostic was." }, { "start": 820.6800000000001, "end": 827.8000000000001, "text": " Spatial agnostic was we slide the same kernel across the image. What they're" }, { "start": 827.8000000000001, "end": 834.96, "text": " saying in first instance, they're saying things like, or they said something," }, { "start": 834.96, "end": 841.8000000000001, "text": " don't know where it was in the picture, but they say what we could do is," }, { "start": 841.8000000000001, "end": 849.0400000000001, "text": " if we have an image, and we do something spatial specific," }, { "start": 849.0400000000001, "end": 855.24, "text": " what that means is we could have a kernel that's just as big as the image." }, { "start": 855.24, "end": 862.76, "text": " Then no more sliding across it. It's simply you multiply those" }, { "start": 862.76, "end": 867.3199999999999, "text": " things together, you broadcast it across these channels of the image," }, { "start": 867.3199999999999, "end": 874.3199999999999, "text": " and there you go. 
Also something that that MLP mixer does," }, { "start": 874.3199999999999, "end": 882.04, "text": " they just say whatever, we don't do slidey slidey anymore." }, { "start": 882.04, "end": 887.58, "text": " They do weight sharing, but essentially you're trying to get rid of this sliding" }, { "start": 887.58, "end": 892.3, "text": " over. You have different weight for each location. That means that the" }, { "start": 892.3, "end": 897.3199999999999, "text": " computation actually differs from where stuff is in the image. We know that" }, { "start": 897.3199999999999, "end": 905.4799999999999, "text": " that is somewhat important, because usually the sky is up and objects" }, { "start": 905.4799999999999, "end": 910.7199999999999, "text": " in these natural images that humans take might be more in the middle than" }, { "start": 910.7199999999999, "end": 916.4799999999999, "text": " anywhere else. Text goes from left to right. It's not all super" }, { "start": 916.4799999999999, "end": 922.16, "text": " translation and location invariant. It makes sense to have weights that are" }, { "start": 922.16, "end": 927.16, "text": " different for each position. But then they run into a problem. They say we" }, { "start": 927.16, "end": 937.04, "text": " couldn't do that very well, because now we can't just input pictures of" }, { "start": 937.04, "end": 941.12, "text": " different resolutions. That's one problem. I think the other problem is" }, { "start": 941.12, "end": 947.28, "text": " that this might not work too well. They come up with a different thing." }, { "start": 947.28, "end": 953.1999999999999, "text": " They say can't we make a compromise? They don't call it a compromise." }, { "start": 953.1999999999999, "end": 958.76, "text": " They call it something different. But they say look, can we come up with a" }, { "start": 958.76, "end": 965.1999999999999, "text": " scheme where we can retain a kernel that's approximately this size, like a" }, { "start": 965.1999999999999, "end": 971.8399999999999, "text": " small kernel, but it is different for each location. We still do the" }, { "start": 971.84, "end": 978, "text": " classic convolution way of doing things, in that we do these local aggregations" }, { "start": 978, "end": 984.48, "text": " across neighboring pixels. However the kernel that we use here is different" }, { "start": 984.48, "end": 991.0400000000001, "text": " from the kernel that we use here. That's different from the kernel that we" }, { "start": 991.0400000000001, "end": 996.9200000000001, "text": " use here. How could you make a computation where the kernel is always" }, { "start": 996.92, "end": 1004.24, "text": " different? You do that by coming up with the kernel in a dynamic way." }, { "start": 1004.24, "end": 1009.3199999999999, "text": " The authors here say, let's say we're at this pixel right here. We care" }, { "start": 1009.3199999999999, "end": 1016, "text": " about this neighborhood. How can we come up on the fly with a kernel for this" }, { "start": 1016, "end": 1026.08, "text": " particular pixel? Their answer is, let's just generate it from the pixel." }, { "start": 1026.08, "end": 1031.84, "text": " This is the full involution diagram. We've now arrived at this. They are at" }, { "start": 1031.84, "end": 1037.08, "text": " this neighborhood, which is outlined here in this black scaffolding grid" }, { "start": 1037.08, "end": 1045.28, "text": " thing. The center pixel is the red pixel here. 
They say we look at that" }, { "start": 1045.28, "end": 1050.76, "text": " pixel and all its channels. We use that pixel and only that pixel. Not the" }, { "start": 1050.76, "end": 1056.64, "text": " neighborhood. We use that pixel to come up with the kernel. They have a" }, { "start": 1056.64, "end": 1060.92, "text": " computation here, which of course is going to be a small neural network." }, { "start": 1060.92, "end": 1067.32, "text": " This is a two-layer neural network that comes up with the kernel. You see this" }, { "start": 1067.32, "end": 1077.4, "text": " is simply a reshape. You compute the kernel" }, { "start": 1077.4, "end": 1084.0400000000002, "text": " across the neighborhood from the pixel itself. That means that every" }, { "start": 1084.0400000000002, "end": 1091.72, "text": " single pixel here, unless it's the exact same pixel, so the exact same color in" }, { "start": 1091.72, "end": 1095.88, "text": " the first layer, or the exact same representation in the intermediate" }, { "start": 1095.88, "end": 1102.92, "text": " layers, every single location gets its own kernel for the convolution. The" }, { "start": 1102.92, "end": 1108.76, "text": " computation I've already told you is a small neural network. Specifically it's" }, { "start": 1108.76, "end": 1117, "text": " a bottleneck neural network. It takes the pixel representation as a" }, { "start": 1117, "end": 1122.92, "text": " vector, bottlenecks it. There is a non-linearity here and then it expands" }, { "start": 1122.92, "end": 1129.5600000000002, "text": " it again to the size of the actual kernel. Then you use that kernel and" }, { "start": 1129.56, "end": 1136.76, "text": " you broadcast it instead of having one kernel per input channel. Then" }, { "start": 1136.76, "end": 1143.08, "text": " you multiply and then you don't reduce across the input channels." }, { "start": 1143.08, "end": 1149.6399999999999, "text": " That alleviates you from having to have" }, { "start": 1149.6399999999999, "end": 1156.1599999999999, "text": " multiple kernels, one for each output channel. This is the whole" }, { "start": 1156.16, "end": 1161.2, "text": " convolution pipeline. I would say there are multiple different" }, { "start": 1161.2, "end": 1166.92, "text": " concepts here. This coming up with the kernel on the fly is one concept." }, { "start": 1166.92, "end": 1171.3200000000002, "text": " Then this broadcasting scheme is an entirely different concept. You could do" }, { "start": 1171.3200000000002, "end": 1181.6000000000001, "text": " both independently of each other. They do them together. They do" }, { "start": 1181.6, "end": 1188.76, "text": " ablations further down, but it's two new things in one. The first" }, { "start": 1188.76, "end": 1196.24, "text": " thing here is that you might think of a tension mechanism as" }, { "start": 1196.24, "end": 1201.28, "text": " you look at that. It's a form of fast weights. The weights of the" }, { "start": 1201.28, "end": 1208.8799999999999, "text": " computation are computed on the fly from the data itself. That is exactly" }, { "start": 1208.88, "end": 1211.92, "text": " what an attention mechanism does. However, here you do it in a slightly" }, { "start": 1211.92, "end": 1219.0400000000002, "text": " different way. They say that they have a discussion about" }, { "start": 1219.0400000000002, "end": 1225.68, "text": " attention right here. They say there are a bunch of differences." 
}, { "start": 1225.68, "end": 1231.44, "text": " In attention what you'd have is you don't only compute your" }, { "start": 1231.44, "end": 1237.1200000000001, "text": " weights from the actual location where you are, even in local self-attention." }, { "start": 1237.12, "end": 1241.6799999999998, "text": " You actually compute your weights from more than just the pixel where you are." }, { "start": 1241.6799999999998, "end": 1246.4399999999998, "text": " You compute it from the entire region you care about. That's the first thing." }, { "start": 1246.4399999999998, "end": 1252.08, "text": " The second thing is that in self-attention you have the" }, { "start": 1252.08, "end": 1257.8799999999999, "text": " queries and the keys. You have your data, your neighborhood, let's say." }, { "start": 1257.88, "end": 1267.64, "text": " Each of those things produces a query and a key." }, { "start": 1267.64, "end": 1273.5200000000002, "text": " Everyone produces a query and a key. Then you do this sort of" }, { "start": 1273.5200000000002, "end": 1281.16, "text": " quadratic thing in order to determine how you should aggregate your" }, { "start": 1281.16, "end": 1286.5200000000002, "text": " information. In involution you simply don't produce keys. You" }, { "start": 1286.52, "end": 1291.08, "text": " only produce queries, if you will, or only keys, however you want to look at it." }, { "start": 1291.08, "end": 1298.08, "text": " Then you don't do the quadratic thing. Rather you immediately interpret" }, { "start": 1298.08, "end": 1304.68, "text": " this as the weights of aggregation. You can write this, and they say that," }, { "start": 1304.68, "end": 1310.96, "text": " you can interpret this as the positional encodings already" }, { "start": 1310.96, "end": 1317.16, "text": " being present in these weights, because it's now specific to a position." }, { "start": 1317.16, "end": 1323.3600000000001, "text": " Whereas in the attention literature you'd have to supply positional encodings." }, { "start": 1323.3600000000001, "end": 1328.3600000000001, "text": " In order for the algorithm to know that this is a different thing," }, { "start": 1328.3600000000001, "end": 1332.8400000000001, "text": " that this here is a different thing from this thing here, you need to" }, { "start": 1332.8400000000001, "end": 1337.6000000000001, "text": " supply it with positional encodings. Not here, because the individual" }, { "start": 1337.6, "end": 1343.3999999999999, "text": " channels of this thing immediately refer to different positions." }, { "start": 1343.3999999999999, "end": 1349.28, "text": " This neural network is very aware of what position is where relative" }, { "start": 1349.28, "end": 1354.6799999999998, "text": " to the pixel you're considering. They say the success of involution" }, { "start": 1354.6799999999998, "end": 1361.3999999999999, "text": " explains in part why other people had lots of success with leaving away the" }, { "start": 1361.3999999999999, "end": 1366.9199999999998, "text": " keys and only using positional encodings together with the query." }, { "start": 1366.92, "end": 1373.4, "text": " If I'm not mistaken, I think you could frame the lambda networks" }, { "start": 1373.4, "end": 1380.1200000000001, "text": " into this category, where at some point they never do this attention." }, { "start": 1380.1200000000001, "end": 1386.04, "text": " However they rely heavily on positional encodings." 
}, { "start": 1386.04, "end": 1392.4, "text": " However you can learn those ahead of time or statically." }, { "start": 1392.4, "end": 1397.44, "text": " This is the connection to attention." }, { "start": 1397.44, "end": 1400.76, "text": " The connection to attention is that the weights are constructed on the fly." }, { "start": 1400.76, "end": 1406.96, "text": " However here there's no quadratic interaction, there is no softmax and so on." }, { "start": 1406.96, "end": 1413, "text": " You construct the weights from the pixel in the center." }, { "start": 1413, "end": 1419.6000000000001, "text": " To frame attention as a more complicated instantiation of our idea," }, { "start": 1419.6, "end": 1425.3999999999999, "text": " that's a bit out there. The authors here say that attention is just a more complicated thing." }, { "start": 1425.3999999999999, "end": 1434.1599999999999, "text": " The second thing I worry a bit about is that they say that this is position specific." }, { "start": 1434.1599999999999, "end": 1440.1599999999999, "text": " They started out with saying that convolution is spatial agnostic." }, { "start": 1440.1599999999999, "end": 1445.52, "text": " We want to do something spatial specific." }, { "start": 1445.52, "end": 1451.12, "text": " This here is also spatial agnostic. If you get the same pixel at different locations in the image," }, { "start": 1451.12, "end": 1456.96, "text": " this thing will produce the same weights and the computation will be the same." }, { "start": 1456.96, "end": 1461.68, "text": " In fact you do this entire computation right here." }, { "start": 1461.68, "end": 1466.56, "text": " That is a spatially agnostic computation." }, { "start": 1466.56, "end": 1470.48, "text": " The difference here is the same difference that you have between slow weights and fast weights." }, { "start": 1470.48, "end": 1476.72, "text": " You simply construct the weights of the actual computation on the fly." }, { "start": 1476.72, "end": 1484, "text": " However the way you construct these weights remains position agnostic." }, { "start": 1484, "end": 1489.28, "text": " The second thing is that the weight sharing is a bit of an independent thing." }, { "start": 1489.28, "end": 1495.44, "text": " I get that the two work well together, but the broadcasting and weight sharing thing across the channels" }, { "start": 1495.44, "end": 1501.6000000000001, "text": " is almost a much simpler mention." }, { "start": 1501.6000000000001, "end": 1507.76, "text": " It's a bit related to the fact that if you have a depth separated convolution" }, { "start": 1507.76, "end": 1513.04, "text": " and you simply share the weights across that, that's about what it boils down to." }, { "start": 1513.04, "end": 1519.6000000000001, "text": " What does that give us? In fact it gives us a lot." }, { "start": 1519.6, "end": 1525.9199999999998, "text": " In this paper they do experiments and they compare against for example" }, { "start": 1525.9199999999998, "end": 1531.4399999999998, "text": " ResNets and other networks with similar number of parameters." }, { "start": 1531.4399999999998, "end": 1537.12, "text": " I like these experiments here in that you can see they always make sure that they have the lowest number of parameters" }, { "start": 1537.12, "end": 1541.6799999999998, "text": " among the things they compare with." }, { "start": 1541.6799999999998, "end": 1547.9199999999998, "text": " Yet they show that they still beat these models." 
}, { "start": 1547.92, "end": 1554.72, "text": " They compare ResNet with the same number of layers." }, { "start": 1554.72, "end": 1561.52, "text": " This is standalone ResNet." }, { "start": 1561.52, "end": 1568.24, "text": " Here is the axial ResNet." }, { "start": 1568.24, "end": 1574.3200000000002, "text": " You can see that this outperforms on these tabs." }, { "start": 1574.32, "end": 1580.6399999999999, "text": " This is ImageNet." }, { "start": 1580.6399999999999, "end": 1586.8, "text": " They also have different things such as this segmentation task." }, { "start": 1586.8, "end": 1591.6799999999998, "text": " I think they have a picture down here." }, { "start": 1591.6799999999998, "end": 1594.96, "text": " This segmentation task where they perform better." }, { "start": 1594.96, "end": 1600.32, "text": " This is the baseline and you can see the involution network." }, { "start": 1600.32, "end": 1604.8799999999999, "text": " I think the effect that you see right here." }, { "start": 1604.8799999999999, "end": 1609.6799999999998, "text": " The fact that they are better in this number is really cool." }, { "start": 1609.6799999999998, "end": 1615.6799999999998, "text": " It's probably a bit due to the fact that they do this on the fly computation of weights." }, { "start": 1615.6799999999998, "end": 1621.4399999999998, "text": " Which is a more powerful idea than the static weights of a convolution." }, { "start": 1621.4399999999998, "end": 1626.6399999999999, "text": " The lower number of parameters I think is more a result of their weight sharing." }, { "start": 1626.64, "end": 1634, "text": " They tout here how that they are on par with ResNet 101" }, { "start": 1634, "end": 1638.88, "text": " regarding the top one recognition accuracy." }, { "start": 1638.88, "end": 1644.5600000000002, "text": " While saving 65% of storage and computation." }, { "start": 1644.5600000000002, "end": 1650.0800000000002, "text": " I think that the saving of computation is more due to the weight sharing mechanism." }, { "start": 1650.08, "end": 1657.04, "text": " I think they've just selected tasks and they might be important tasks." }, { "start": 1657.04, "end": 1663.04, "text": " It was just the case that in these tasks whether or not you share the weights probably doesn't matter." }, { "start": 1663.04, "end": 1668, "text": " It doesn't hit you as hard or is even beneficial if you don't have enough data." }, { "start": 1668, "end": 1673.52, "text": " Therefore that's why they have less parameters." }, { "start": 1673.52, "end": 1680.8, "text": " What you can also observe here is that differences." }, { "start": 1680.8, "end": 1687.92, "text": " They get continuously smaller as you move up the scale of network." }, { "start": 1687.92, "end": 1694.48, "text": " This is all on the same data set but it would be interesting to see how this performs on a really large scale." }, { "start": 1694.48, "end": 1701.68, "text": " My intuition is that as you go larger and larger in scale." }, { "start": 1701.68, "end": 1708.48, "text": " This approach is going to top out and lose out to the more general architectures like attention." }, { "start": 1708.48, "end": 1714.8, "text": " It's a clown world now." }, { "start": 1714.8, "end": 1721.1200000000001, "text": " In these regimes and I would argue these are the regimes where a lot of practitioners care about." }, { "start": 1721.1200000000001, "end": 1726.3200000000002, "text": " These and actually smaller regimes." 
}, { "start": 1726.32, "end": 1732.08, "text": " This seems to perform reasonably well." }, { "start": 1732.08, "end": 1740.32, "text": " You can see right here the curves here when you compare compute to accuracy is very favorable." }, { "start": 1740.32, "end": 1746.72, "text": " Especially if you're in this region here." }, { "start": 1746.72, "end": 1753.28, "text": " If you're in the low resource region it might be something that you want to try out." }, { "start": 1753.28, "end": 1760.08, "text": " It remains to be seen how well this is pre-trainable and fine-tunable." }, { "start": 1760.08, "end": 1765.6, "text": " It's something you might want to try." }, { "start": 1765.6, "end": 1771.84, "text": " If you try to only use parts of it it would be interesting to see." }, { "start": 1771.84, "end": 1777.36, "text": " If we still do convolution but we do this weight sharing scheme." }, { "start": 1777.36, "end": 1785.52, "text": " They also have a notion of grouping in the channels." }, { "start": 1785.52, "end": 1792.24, "text": " As the attention mechanism has it." }, { "start": 1792.24, "end": 1797.6799999999998, "text": " Sharing a single kernel across all channels obviously underperforms in accuracy." }, { "start": 1797.6799999999998, "end": 1802.32, "text": " Considering channel redundancy of evolution kernels." }, { "start": 1802.32, "end": 1807.6, "text": " As long as the channels shared in a group to an acceptable range." }, { "start": 1807.6, "end": 1813.28, "text": " The channel agnostic behavior will not only preserve the performance." }, { "start": 1813.28, "end": 1818.3999999999999, "text": " But also reduce the parameter count and computational cost." }, { "start": 1818.3999999999999, "end": 1822.8, "text": " This will also permit the larger kernel size under the same budget." }, { "start": 1822.8, "end": 1830.6399999999999, "text": " It's the same reasoning as people introducing groups or different heads in multi-head attention." }, { "start": 1830.64, "end": 1834.88, "text": " Try all of this stuff out. I think it's worth it." }, { "start": 1834.88, "end": 1840.48, "text": " The code is available right here." }, { "start": 1840.48, "end": 1843.5200000000002, "text": " I'll also put a link to that." }, { "start": 1843.5200000000002, "end": 1845.68, "text": " That was it from me for this paper." }, { "start": 1845.68, "end": 1851.68, "text": " I wish you a very pleasant day of the week." }, { "start": 1851.68, "end": 1861.04, "text": " Bye bye." } ]
smxwT82o40Y
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Active Dendrites avoid catastrophic forgetting - Interview with the Authors
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "active dendrites", "neurons dendrites", "biological deep learning", "deep learning biology", "numenta", "numenta research", "numenta deep learning", "dendrites deep learning", "deep learning tutorial", "hierarchical temporal memory", "computational neuroscience", "reinforcement learning", "robotics", "multi task learning", "continuous learning", "continual learning", "permuted mnist" ]
#multitasklearning #biology #neuralnetworks This is an interview with the paper's authors: Abhiram Iyer, Karan Grewal, and Akash Velu! Paper Review Video: https://youtu.be/O_dJ31T01i8 Check out Zak's course on Graph Neural Networks (discount with this link): https://www.graphneuralnets.com/p/introduction-to-gnns?coupon_code=SUNGLASSES&affcode=999036_lzknae-d Catastrophic forgetting is a big problem in multi-task and continual learning. Gradients of different objectives tend to conflict, and new tasks tend to override past knowledge. In biological neural networks, each neuron carries a complex network of dendrites that mitigate such forgetting by recognizing the context of an input signal. This paper introduces Active Dendrites, which carries over the principle of context-sensitive gating by dendrites into the deep learning world. Various experiments show the benefit in combating catastrophic forgetting, while preserving sparsity and limited parameter counts. OUTLINE: 0:00 - Intro 0:55 - Sponsor: GNN Course 2:30 - How did the idea come to be? 7:05 - What roles do the different parts of the method play? 8:50 - What was missing in the paper review? 10:35 - Are biological concepts viable if we still have backprop? 11:50 - How many dendrites are necessary? 14:10 - Why is there a plateau in the sparsity plot? 20:50 - How does task difficulty play into the algorithm? 24:10 - Why are there different setups in the experiments? 30:00 - Is there a place for unsupervised pre-training? 32:50 - How can we apply the online prototyping to more difficult tasks? 37:00 - What did not work out during the project? 41:30 - How do you debug a project like this? 47:10 - How is this related to other architectures? 51:10 - What other things from neuroscience are to be included? 55:50 - Don't miss the awesome ending :) Paper: https://arxiv.org/abs/2201.00042 Blog: https://numenta.com/blog/2021/11/08/can-active-dendrites-mitigate-catastrophic-forgetting Link to the GNN course (with discount): https://www.graphneuralnets.com/p/introduction-to-gnns?coupon_code=SUNGLASSES&affcode=999036_lzknae-d Abstract: A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state of the art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows. First, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training.
Analysis on both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results on both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve. Authors: Abhiram Iyer, Karan Grewal, Akash Velu, Lucas Oliveira Souza, Jeremy Forest, Subutai Ahmad Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
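To give a sense of what "incorporates active dendrites and sparse representations into the standard deep learning framework" can look like in code, here is a minimal sketch of one such hidden layer in PyTorch, based on my reading of the paper. Each unit owns a set of dendritic segments; the segment that responds most strongly (in absolute value) to a context vector gates the unit's feedforward activation through a sigmoid, and a k-winner-take-all step then zeroes out all but the top-k units. Shapes, initialization, and the exact gating details are assumptions for illustration, not the authors' verbatim implementation:

```python
import torch
import torch.nn as nn

class ActiveDendritesLayer(nn.Module):
    """Sketch: linear units gated by dendritic segments that respond
    to a context vector, followed by k-winner-take-all sparsity."""
    def __init__(self, in_dim, out_dim, context_dim, num_segments, k):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)        # feedforward weights
        # One set of dendritic segment weights per output unit.
        self.segments = nn.Parameter(
            0.01 * torch.randn(out_dim, num_segments, context_dim))
        self.k = k

    def forward(self, x, context):
        y = self.linear(x)                               # (B, out_dim)
        # Response of every segment to the context: (B, out_dim, segments)
        seg = torch.einsum('osc,bc->bos', self.segments, context)
        # Keep the signed response of the strongest segment, so a segment
        # can either up-modulate (sigmoid > 0.5) or down-modulate a unit.
        idx = seg.abs().argmax(dim=2, keepdim=True)
        strongest = seg.gather(2, idx).squeeze(2)        # (B, out_dim)
        y = y * torch.sigmoid(strongest)                 # dendritic gating
        # k-winner-take-all: only the k most active units survive.
        kth = y.topk(self.k, dim=1).values[:, -1:]
        return y * (y >= kth).float()
```

In the multi-task RL experiments the context can simply be the environment-provided one-hot task encoding; in the continual-learning experiments it has to be inferred, which the interview returns to later.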
Hello, this is an interview with the authors of the paper on active dendrites. Now, if you haven't seen it, I've made a comprehensive paper review video on this paper, and I released that yesterday. If you watch this video as it comes out, which obviously you do, today I'm going to interview the authors, and we've all seen my review, so we'll be able to directly dive in. So if you haven't seen the review yet and you want to know what's in the paper, maybe that is a good place to start. The authors here were really helpful and really informative, answering all of my questions and concerns and even bringing up some new interesting insights. So I hope you learn something from this interview, or at least that it entertains you. And if you have any comments, please let me know in the comments below the video. I'll see you around. Bye bye. Hey there, today's sponsor is the course on introduction to graph neural networks. This is a course by my friend Zak Jost, who is an expert in graph neural networks, also runs the Welcome AI Overlords YouTube channel, has a very interesting blog, and does many other cool things. He's packed all his knowledge of graph neural networks into one course that will educate you on both the theoretical and the hands-on practical aspects of graph neural networks. Graph neural networks are really important. They're definitely one of the most interesting areas in deep learning right now; they're on the upswing, and they model data that has an underlying connected structure that is not really well fit for any of the classic formats like tables or images. They've also powered a lot of recent advances in scientific breakthroughs, such as AlphaFold protein structure predictions or better traffic predictions. So if you're interested in graph neural networks, I'll definitely recommend you check out that course. If you use my link, you'll get a 15% discount on the course. Enrollment is open right now and lasts until April 1 or until spaces run out. The course is a six-week course. It's cohort-based, you'll get access to a Discord community of other students, and you'll get all the materials and hands-on experience. All right, let's get into the video now. See ya. Hi everyone, today I'm here with the three joint first authors of the paper on active dendrites: Abhi, Karan and Akash. And I'm very, very happy to have you all here. This paper covers many areas: it covers biology, it covers neural networks, it covers different kinds of architectures. It's very cool that you all are here and are able to answer my questions. Welcome, all of you. Yeah, thanks, Yannic. Thanks for having us. Thanks for having us. It's a very interesting paper. So I saw this paper and I was intrigued, because a lot of people say they do biologically inspired things, but it's not often that someone really goes and says, look, here's what's missing, let's build it in, and then it actually leads to something that works. And the hypotheses you pose in your paper on what should happen are actually confirmed at the end. This is, I think, a very good story arc for a paper and a really nice thing to write up. So how did this come to be? How did you get the idea of bringing these two distant, not too distant, but these two distant fields together, of neurobiology and deep learning?
Well, at Numenta, one of the things we're interested in is continual learning and learning multiple tasks, more generally speaking. A lot of neural networks and deep learning today focuses on trying to solve a single task, so we said, well, how is biology enabling the ability to solve multiple things in sequence, or at the same time, learning different things? There's been a lot of work out there on active dendrites, and it's not exactly clear what their role is. But a little while back, we speculated that, hey, they might actually be helping at the neural level to allow for continual learning. And so if we can build this idea into deep learning, then there might be some prospect there for addressing problems like continual learning and multitask learning. So is it fair to say that it grew out of sort of a need to solve a task? I think it grew out of the need to solve multiple tasks in sequence, either learning them together or in sequence continuously. To add on to what Karan was saying: we believe that active dendrites can really aid in achieving these specialized neural circuits, and we can apply these ideas directly to any neural network and show some competitive performance on various benchmarks that involve continual learning setups. So I guess the purpose of this project, if you were to just summarize it very briefly, is that we just wanted to show a proof of concept for a new idea that can allow deep learning to work in more dynamic environments and scenarios. To add on to what Karan and Abhi said: at a higher level, I think we were examining where a lot of modern deep networks fail, and that's in these streaming task settings and multitask settings. The inspiration for our solution was directed towards biology and biological neurons, which is a lot of what Numenta focuses on. And I think quite nicely we found these existing benchmarks and existing tasks that show that typical deep learning networks fail in these scenarios, and we were able to build in these biologically inspired neurons to improve the performance in such dynamic settings, using the fact that we believe active dendrites in biology do this kind of context-dependent adaptation across multiple tasks. What I found interesting is that even though you targeted it a little bit towards multilayer perceptrons, in principle this active dendrites architecture is pluggable almost anywhere. So you could always imagine some sort of context-dependent signal that gets routed in and modulates the signal that exists. So I think what I'm trying to find out is: there are a number of things happening in this model. There is, first of all, the modulation itself, which is not really a common concept, at least in classical deep learning; we always have weighted sums, and we rarely have the situation where two parts of the signal are multiplied together, or one modulates the other; it happens a little bit in LSTMs and so on. The second one is the recognition of a context and being context-dependent. And then a third thing is this sparsity. Now, you have sort of combined all of them. Is there one thing that you think is specifically important, or is it the combination of things that really makes the difference? You have some ablations in the paper. What can you say about this? I think it's the combination of all these things acting together.
So it's the dendrites, which are up-modulating and down-modulating certain neurons to determine which sub-network should be invoked, and then it's the sparsity on top of that, which is ensuring that a large portion of the network is essentially not performing or learning a certain task. And it's those two things together which really get at this idea of using specialized sub-networks for different things. So I wouldn't say any one thing stands out more than the others. So let's get into the paper itself. You've seen my review of it. With respect to just framing the problem, and maybe framing the architecture as such, do you think I have captured what you've tried to say? Do you think I've left something important out, or have put emphasis, or not put emphasis, on something that you would like emphasized, when it comes to what the architecture is, what it does, and how it works? I think your explanations of the architecture, at least, were very good. It definitely does capture what we were trying to say. And the whole point, to reiterate, is that the same model with the same principles should work on completely separate areas: one is multitask reinforcement learning, the other one is continual learning with permuted MNIST. And I think you touched upon that idea too. Towards the beginning of your review, you compared the typical weighted linear sum neuron with the active dendrites neuron, and I think our motivation in coming up with this architecture was: how can we incorporate a lot of these properties into active dendrites, with dendritic segments being able to either up-modulate or down-modulate certain neurons, in a way that didn't completely depart from normal back-propagation-trainable networks? So this architecture brings in that flavor of having dendrites influence certain neurons, but does so in a way that mathematically allows for back-propagation to train the networks, and I think you touched on that pretty well as well. Do you think it's valid to bring in biological concepts even though we train with back-propagation? Because it's very evident that at least pure, correct back-propagation isn't happening in the brain. Do you think it's still valid to bring in the concepts, and maybe the brain is doing something like backprop? Or do you think we're just taking inspiration from biology in order to solve some of our problems? I think it's more so the latter. Of course, the most accurate biological neural network would likely not use back-propagation, right? But this is one area where the goal was: can we make deep learning just a little bit more plausible? And in doing so, can we make it a little bit more dynamic? So we're not necessarily here to remove backprop entirely and say that's the best way that the dendrites in this architecture can work, although certainly that is how it works in biology. The point was: can we just augment traditional deep neural nets to work in more dynamic scenarios? Now, I had some criticisms with respect to some details of your architecture. For example, you often choose the number of dendritic segments to match the number of tasks that you have, which obviously, if I was a researcher, I would do the same.
But can you say maybe something about how this is in the brain? Like, what numbers are we talking about? How many of these sub-networks that are composed of distal dendrites are there, approximately? Do you have an idea? And what can you say about how many we should build into a problem where we maybe don't know how many tasks we expect? From what I recall, probably on the order of hundreds or thousands of individual dendrite segments for each individual neuron; actually, it might even be more than that, the actual numbers escape me. But regarding what you said earlier about having the number of tasks be equal to the number of segments: even though in a lot of the experiments we report here we do set the number of dendrites to the number of tasks, we found that we actually don't need that many. We have further studies which show that we can keep the architecture fixed and increase the number of tasks we're doing (I'm talking about continual learning here, because for multitask we're focused on ten specifically), and the performance actually doesn't change by much. So that shows that as we're increasing the number of dendrite segments, we actually end up overparameterizing the network quite a bit, which we don't need to do. Yeah, so this is the plot on the left right here: you just increase the number of dendritic segments, the top line is learning ten tasks, and it doesn't get noticeably worse, which I find to be a very cool property. I don't want to have to set the parameter very specifically; I can just set it too high and it doesn't hurt, which is cool. Which leads me to the plot on the right, where you discuss the sparsity. I'm going to guess that's the sparsity parameter, so that's the thing that ultimately controls k. And I find it peculiar, not that there is an optimal setting, which I would expect, because it's a value I have to set between 0 and 1, so there's going to be some optimum in between, but that there's this two-bump thing going on. So what's going on there? Why is it really good at low density, meaning high sparsity, then there's this plateau, and then it just crashes down? I always think in terms of sparsity, so I'm converting from density to sparsity: if it's too sparse, there's not enough signal going through, and that's why, as you increase the amount of signal that you're allowing through, as you're increasing the capacity of your representation, you're going to get an increase in performance. But then, if you're using up too many units to create that representation, you're going to get more interference, and as you have more interference, you forget more: more and more network parameters are overwritten as you move on to subsequent tasks, and so you get a drop in accuracy. And towards the end, you notice that it does fall drastically. Honestly, I haven't thought too much about why that happens, although it is a pretty monotonic fall; I guess in that upper curve there's a slight bump, but that could just be due to seeding or something like that. Yeah, I was more referring to the plateau itself, right?
There's this plateau, and I know that there could be almost two modes of using the sparsity: in one mode, I have entire sub-networks that do the job, and in the other mode, I have a shared network, yet I have separate things that just kind of track which task I'm on, which would sort of correspond to what the baseline is doing, right? When people say, well, the baseline has access to the task ID, it can just allocate some units. It's maybe not a perfect analogy, but I was just wondering; it was just interesting to see that there's this type of plateau. Yeah, that's something I guess we haven't gone too deep into, but this might just be a property of sparse representations and how much overlap there is as you increase the sparsity level; it could just be something to do with that. So in your paper you make really sure, which I appreciate, that you always have the same amount of, let's say, trainable parameters in your architectures, and you show that by arranging them correctly, you can achieve a better result. You always use this name of non-zero parameters, right? Is there a difference? Are there large swaths of zero parameters in one of these architectures? Yeah, so this is something that we control for. In the beginning, this is why we mentioned the idea of weight sparsity: when we're actually creating the architecture from scratch, we decide that some layers have an X percent sparsity level applied to them, and what that really means is that X percent of the parameters are zero throughout the entire course of training, even towards the end. So that's why we express everything in non-zero parameters. The MLPs, for instance, at least in reinforcement learning, are trained with no weight sparsity, so they're completely dense; there are no zeros anywhere in the layers. And in your architecture, you sort of modulate the amount of sparsity, and that is on top of modulating the k parameter of the k-winner-take-all layers. Yeah, there are two aspects to the sparsity. One is activation sparsity, which is, when you have a hidden state vector, how many neurons remain non-zero after the activation is applied, which is a k-winner activation. The second aspect of sparsity is weight sparsity, which is how connected subsequent layers in the network are: if a lot of the units in the weight matrix are zero, then this models the fact that subsequent layers in the network are not very connected, they're sparsely connected. To answer your question on that: weight sparsity, at least, is not something we modulate; it's a fixed percentage that we find, and this can be set either through fine-tuning or just experimentation. Okay, I might have just over-read that, but I recall that in the introduction you say both the weights and the activations are sparse, but then I think the winner-take-all really focuses on the activations themselves. Have you experimented with setting something other than k to a number or a percentage, setting maybe a threshold for sparsity or something like this, where whenever a signal is strong enough, it is let through?
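To make that question concrete, here is a rough sketch contrasting the fixed-k winner-take-all used in the paper with the hypothetical threshold gate being asked about. `tau` is an invented hyperparameter; as the authors explain next, the paper only implements the fixed-k variant:

```python
import torch

def kwta(y, k):
    """Fixed-k winner-take-all: exactly the k strongest units survive."""
    kth = y.topk(k, dim=1).values[:, -1:]   # k-th largest activation per row
    return y * (y >= kth).float()

def threshold_gate(y, tau):
    """Hypothetical alternative: any unit whose activation exceeds tau is
    let through, so the number of active units varies per input."""
    return y * (y > tau).float()
```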
We haven't done anything like that, but we could. And there is a chance that it could work out pretty well if we have a fixed threshold. But one potential downside is that if you have too many units whose activation crosses the threshold, you're going to get more interference when you train. Or, if not enough neurons' activations cross the threshold, you're going to get that phenomenon which you're showing on the screen right now on the left side, where you have a drop in accuracy because your representations don't have enough capacity. So that's why we opted to go for a fixed value of k. But even then, I think one of your critiques was: now we have another hyperparameter k that we're choosing. In the other case, our hyperparameter would just be the threshold value, right? Obviously, yeah. So to me, this continual learning setup is very cool, and you can generate data very easily using this permuted MNIST. But there is a bit of an issue that I have, and that is that with permuted MNIST, all the tasks are the same difficulty, right? They're essentially the same task, it's just permuted, so I need to learn a different function. So this would be the permutation identity, and then the pixels are permuted somehow. So all the tasks are kind of the same, which warrants a static network architecture, and every context vector is kind of the same length, and all the dendrites can sort of specialize in each of their little task recognitions. What would change here? Is this a drastic requirement of your architecture, or what do you think would happen if many of the tasks were wildly different from each other? You have this a little bit in the robot example. So what can you tell about when tasks are very different in their difficulty, maybe in their amount of training data? How do these things influence an architecture that's targeted towards continual learning? In our case, I think there might actually be similarities between different tasks. For example, in permuted MNIST, certain pixels are more likely to be white and certain pixels are more likely to be black, depending on the permutation. So two different permutations could have more overlap in terms of which pixels are white and which are black, or they could be totally separate. And if the permutations are more similar, then we could expect that the sub-networks selected by the dendrites are likely to overlap more in which neurons become active, since there's probably a lot of similar computation going on. But of course, in that case, the difficulty doesn't really change at all. To kind of add on to that, I think a lot of it depends on the quality of the context signal, because ultimately that's the part of the network that indicates to the active dendrites what kind of task you're solving, how similar it is to previous tasks you might have seen, and things like that.
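For readers unfamiliar with the benchmark being discussed, here is a small sketch of how permuted MNIST tasks are typically generated; the helper name and the seeding scheme are illustrative assumptions.

```python
import torch

def make_permuted_task(images: torch.Tensor, seed: int) -> torch.Tensor:
    """Apply one fixed pixel permutation to a batch of flattened MNIST images.

    Sketch of the permuted-MNIST setup: each task is defined by its own
    fixed permutation of the 784 input pixels, so every task has the same
    difficulty but requires a different input-to-label mapping.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(images.shape[-1], generator=g)
    return images[..., perm]

# Example: ten tasks over the same batch of flattened 28x28 images.
batch = torch.rand(32, 784)
tasks = [make_permuted_task(batch, seed=t) for t in range(10)]
```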
So I think that in this permuted MNIST case, the way we're computing the context does allow for this property that Karan just mentioned, where if there's some overlap in the input space, then the context signal will reflect it and perhaps allow for overlapping sub-networks to emerge. Whereas if you have wildly different tasks, which is something we see more in the robotics environment, then these context signals can differ more and indicate that the sub-networks must not overlap. I think it would be really interesting, and we've talked about this before, to try a similar setup in a continual robotics learning case, where you have a streaming set of robotics tasks. I think that would probably be a super interesting study to do, and something that hopefully we will try at some point in the future. So I had some observations with respect to your experimental setup. It's very cool that you do two different things, but there are also noticeable differences in how you implement the two tasks. In the first task, you give the task ID directly; in the second, you do this prototyping approach, which is a more advanced approach. Can you tell a little bit about why? Because I could also imagine you just give the task ID in the second task, or do the prototyping in the first task. Is there a research-process reason? Did you find that some things did or didn't work? How did it come about that, all of a sudden, in the new task we're introduced to this new way of detecting the context? I think in the context of the multitask reinforcement learning setup, the environment itself gives the task ID. And the concept of multitask learning is more focused on: if you have different tasks which may conflict with one another, in terms of the types of behavior you have to do or the types of predictions, how can you mathematically still optimize your joint objective function and still perform well on all the tasks? The problem shifts from trying to infer what task you're doing to: you know what tasks you're doing and you want to do all of them, so how can we optimize this joint objective? The way we use this one-hot task encoding is in line with past works that deal with multitask learning and multitask reinforcement learning, where this one-hot task encoding is provided. I do agree that the one-hot encoding is quite convenient and a little bit arbitrary; you could probably use a denser representation for each task or try to infer it. But for the purposes of our experiments, this one-hot encoding seemed simple, as it was environment-provided, and the point of the multitask setup was, again, to try to show that this network architecture prevents conflicting updates across tasks and avoids these interfering updates from occurring. For continual learning, the setup of the problem itself is a little bit bigger, in that you're not always provided with the task IDs and you have to infer them on the fly, which I think Karan can talk a little bit more about.
Yeah, in continual learning, there are a couple of other recent papers that have come out in the last few years, and they're not providing task IDs; the model actually needs to infer the task ID as it does some sort of modulation, or whatever their technique is. So we thought that makes the problem a bit more challenging and a bit more interesting. Since we are working on continual learning and comparing to some of these other methods, let's also try to infer what the task should be. So if I hear this correctly, it's very much inspired by the environment itself, by what the problem is supposed to be. Because if I see something like this, I always have the vague suspicion that people tried something and it didn't work, and it's like, well, let's try something else. But I don't want to infer that, so it's always good to hear, okay, this really came about through the environment. And it would be equally cool if it were the other thing; I'm just always interested to hear, so I can adjust my priors. Just to add really quickly: I think in the reinforcement learning setup as well, because the state space is shared across all the tasks, it's hard to infer from the states what task you might be doing if you weren't given such an ID. And the only information you would have is the reward signal, and that might not be enough to infer what the task is. So giving a task ID is part of the solution. Given that it comes at the end, right? Yeah, it's like you do something, then you get a reward, and then you find out what task you just did. Okay, I agree with you; that's really not helpful at all. Also, I think one thing to add here is that we did try a couple of things. I think this is something you pointed out in your intro: the task IDs that we're using are one-hot encoded, at least for multitask RL. And that means that all these tasks are entirely orthogonal to each other; it really doesn't reflect how similar one task is to another, nor how different one task might be from another. So one thing that we were experimenting with, and I think we mention briefly in the paper, is that we tried having an embedding layer that effectively embeds this one-hot encoding into some higher-dimensional representation, and using that instead of the one-hot encoding as a context. What we eventually found was that using the embedding or not produced fairly similar results, so we just decided to remove it for simplicity's sake. But one thing to note is that using the embedding allows you to represent contexts that are a little bit more nuanced, in the sense that, since the embedding is trained via end-to-end backprop, any task that is similar to another task would have a shared representation in that higher-dimensional embedding, and tasks that are really separate from each other would likewise correspond to large distances apart in that higher-dimensional space. The one-hot encoding is entirely orthogonal across tasks, but it still worked out pretty well compared to the embedding. And if it gets more complicated, I think you could put entire sub-neural-networks in place of even that embedding layer; you could have non-linearities inferring a more complicated task embedding or task relations. It is interesting, though, with respect to the context itself, to learn all of these things through backprop.
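A tiny sketch may help contrast the two context choices just discussed, the environment-provided one-hot task ID versus a learned embedding; the dimensions and names here are made up for illustration.

```python
import torch
import torch.nn as nn

num_tasks, context_dim = 10, 32
task_id = torch.tensor([3])

# One-hot context: every pair of tasks is orthogonal, so the context
# says nothing about how similar or different two tasks are.
onehot_context = nn.functional.one_hot(task_id, num_tasks).float()

# Learned embedding: trained end-to-end, so related tasks can end up with
# nearby context vectors (the authors report fairly similar results either way).
embed = nn.Embedding(num_tasks, context_dim)
embedded_context = embed(task_id)
```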
And my question, I think I brought this up, is: would this be a candidate for unsupervised pre-training? You could maybe collect episodes or something in your multitask RL and then decide, based on that, how to structure your dendritic segments in order to recognize the context, maybe with some sort of contrastive objective. I just blurt these things out when I do the reviews, right? I never know if they're entirely stupid or if people have thought about them or discarded them. Is that something that is a candidate? I don't think it's something that we considered. But an interesting thing to note is that if we did use this as some kind of unsupervised pre-training tactic, then when you're actually fine-tuning the network, your context vectors are different. So I think that would be the most important nuance to investigate. I personally don't know how well it would work if we trained on one set of contexts during the unsupervised portion and then used a totally different set of contexts during the fine-tuning procedure. I would imagine that doesn't work well. To add on to that: when I heard you say that in your review, it was quite interesting. From the perspective of reinforcement learning at a high level, I don't know if this would work out, but it would be quite cool to see if you can train these dendritic segments to recognize different contexts and maybe guide exploration in different ways based on the context, in an unsupervised manner, and maybe do different things in different contexts as an exploration strategy. I think that'd be super cool. Again, the challenge there would be to come up with a clever way of generating contexts in an unsupervised way; that's still the open question: how do you come up with context signals in an unsupervised manner? A contrastive approach might be cool there. And given these contexts, how do you train these active dendrites to modulate neurons to do what you want them to do? Thinking about that through the lens of exploration in RL could be quite interesting. Yeah, you could even prepare for contexts that you hadn't considered before, maybe new instructions in a familiar environment or something like this. You have this notion of prototyping to recognize the context, which I found very interesting, because it's almost an unsupervised online method: as the data streams in, you create these new prototypes and so on. And sure, there are some hyperparameters, but I think my main concern is that just taking the average of the samples as they come in is going to work for something very simple, like permuted MNIST, but it gets to its limits very quickly. If I think about ImageNet classification or so, it is quite limited. How can this idea be extended to, let's say, arbitrary complexity? What would I have to do with this online prototyping approach to make it usable for more complex problems? Look, I think you're absolutely right that this technique only works for something like permuted MNIST, where you get really good task separation just by averaging the examples from a single task. That's why it works so well here. We actually evaluated how well this clustering procedure works, and it works pretty well.
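Before the answer continues, here is a hedged sketch of the kind of online prototyping being discussed: a running mean per inferred task, with a distance threshold deciding when a new prototype is spawned. The class, the plain Euclidean distance, and the thresholding rule are simplifying assumptions, not the exact procedure from the paper.

```python
import torch

class PrototypeContext:
    """Sketch of online prototyping: keep a running mean per inferred task
    and return the nearest prototype as the context vector."""

    def __init__(self, threshold: float):
        self.prototypes: list[torch.Tensor] = []
        self.counts: list[int] = []
        self.threshold = threshold  # distance above which a new prototype is spawned

    def infer(self, x: torch.Tensor) -> torch.Tensor:
        if not self.prototypes:
            self.prototypes.append(x.clone())
            self.counts.append(1)
            return self.prototypes[0]
        dists = torch.stack([torch.norm(x - p) for p in self.prototypes])
        i = int(torch.argmin(dists))
        if dists[i] > self.threshold:  # looks like a new task: start a new prototype
            self.prototypes.append(x.clone())
            self.counts.append(1)
            return self.prototypes[-1]
        # Otherwise fold the sample into the running mean for that prototype.
        self.counts[i] += 1
        self.prototypes[i] += (x - self.prototypes[i]) / self.counts[i]
        return self.prototypes[i]
```

As the conversation notes, plain averaging in pixel space gives good task separation on permuted MNIST but would likely need a learned embedding space for harder domains.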
It's not misclassifying things when it's clustering the prototypes. But if we want something more general that can apply to other domains, like ImageNet, as you mentioned, I think something along the lines of self-supervised learning might help there. That way, you're trying to build a context vector that is going to provide sufficiently good task separation, and it's not as simple as just averaging. Does that get at your question? Yeah, absolutely. And I think also in the meta-learning literature there are prototyping methods that process the raw input into an embedding space and then do clustering similar to what we're doing here. So that would be a quite simple approach, similar in flavor to this one, that embeds the raw input, like an ImageNet image, into some more clusterable space. Another thing I noticed, and this is a minor thing: here you feed the context signal into both of your layers, while in the experiment before, and you draw this very accurately, you feed the context signal into only one of the layers, so it doesn't go in here. Is there a particular reason behind that choice? Yeah, there's a bit of background regarding this. I want to say first that the continual learning and reinforcement learning projects started out as separate areas within Numenta, and the goal was really to see if the same principles and the same model could work equally well in both of these areas. While we did modulate both layers in continual learning, the intuition for not doing so in reinforcement learning was a bit different: it was that the first layer should contain all the shared information the model needs, which you could really do without activating any specific sub-networks, and that the second layer would then activate the context-dependent sub-networks for each task. But you're absolutely right that we could have tried in-depth experiments where we modulated both layers in the RL setup. I think we started doing that at the beginning of this project and found it worked reasonably well, but because of the time and compute constraints of running each of these RL experiments, we decided to stick with the original plan, pick a few key experiments and key architectures to run, and leave the ablations to the continual learning experiments, which are significantly faster to run. But you are absolutely right; we just went off of our intuition on this one. That's just my reviewer-two side popping up, like, hey! But it's good. It's even interesting to see that this is kind of a convergence of projects. Could you tell us a little bit more about the research process? You already talked about how this came to be, but what was the process of researching it like? It's kind of a new thing, right? You propose a new architecture; the tasks are, let's say, not that mainstream. People work on them, but they're not super mainstream. Was it smooth sailing from beginning to end, stepwise improvement? Or were there points that just didn't work at all for a long time, or entire avenues that you discarded and that didn't end up working out? I don't know what you can or want to disclose, but it's always interesting to hear what didn't work out during a project. I can start off.
When we first tried implementing some of these ideas behind dendrites, and you noticed that we talk about this, picking the maximum dendritic activation and using that to modulate, it was actually through trial and error, on an initial toy task before we were working on continual learning, that we realized: hey, we can't turn things off, we can only turn them on, because we're picking the maximum value. So how do you get something that's super sparse, where we actually want to turn things off? So we went back and decided to pick not just the maximum, but the maximum by magnitude while keeping the sign. So if something's really negative, we're picking that. There's a whole appendix section with the details of how we actually implement this. So that came through a bit of trial and error. And then also, going back to the prototype: for a while we were thinking, how can we get something that really provides sufficient task differentiation? We tried a bunch of different things. As Abhi mentioned, he had a linear embedding which was created from his context; we also had one for continual learning, but that didn't really work too well either. And we ended up converging on something that's really dumb and simple for permuted MNIST that ended up working out. Yeah, actually, building off of what Karan was saying: if you go to figure 11, I think you had some points there as well. It's a visualization, if I remember correctly. Yeah, this one, 11. So if you notice, we use the exact same gating technique for both continual learning and multitask reinforcement learning, and that's the absolute max gating: you're picking not only the absolute max, but you're retaining the sign. The initial intuition for doing this was, as Karan just said, that you want to give each neuron the ability to either turn on or turn off. And it's very interesting, because if you look at the results in multitask RL, you can see that for neuron B, at least, you see some negative activations, those red squares. That's effectively the neuron being told to turn off; it's the exact opposite of a strongly positive activation. Something interesting to note is that, at least for the two neurons that we've shown for continual learning on the right-hand side, you don't really see that happening: either the neuron doesn't receive high magnitudes of activation, or it receives really high magnitudes, but they're all positive. So even in the multitask RL part, we were trying to understand whether max gating would work better than absolute max gating, in the sense of: do we want to discard the sign or keep it? In the beginning, there was a lot of trial and error. In multitask RL, too, we spent a good amount of time understanding the right sparsity levels to apply for the weight sparsity in the feed-forward layers. What we saw, I think, is also pretty intuitive: if you increase your sparsity to a really high level, there's just not enough information in the network to keep training, and your accuracy plummets. But something interesting to note is that there's always a sweet spot for sparsity, and once you reach it, that's when the accuracy is best. How do you debug these things?
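Before the debugging discussion continues, here is a small sketch of the absolute max gating just described: per neuron, select the dendritic activation with the largest magnitude, keep its sign, and pass it through a sigmoid to modulate the neuron. Shapes and names are illustrative; the paper's appendix has the authoritative details.

```python
import torch

def absolute_max_gate(dendrite_activations: torch.Tensor) -> torch.Tensor:
    """Pick, per neuron, the dendritic activation with the largest magnitude
    while keeping its sign, then squash it with a sigmoid.

    dendrite_activations: (batch, num_neurons, num_segments), each entry the
    inner product of one dendritic segment's weights with the context vector.
    """
    idx = dendrite_activations.abs().argmax(dim=-1, keepdim=True)
    selected = dendrite_activations.gather(-1, idx).squeeze(-1)
    # A strongly negative winner pushes the sigmoid toward zero, so the
    # context can turn a neuron off as well as on.
    return torch.sigmoid(selected)

# The gate multiplies the feed-forward activation of each neuron:
batch, neurons, segments = 4, 16, 10
gates = absolute_max_gate(torch.randn(batch, neurons, segments))
modulated = torch.randn(batch, neurons) * gates
```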
What is your main method? Is it mainly setting a parameter and then running things? Are there good ways to peek inside and see what's happening? What are the things you look at to debug something like this: oh, we are not sparse enough, or we're too sparse, or we don't turn off neurons, or something like this? I think diagrams like the one you have on your screen are a perfect example: visualizations of how the dendrites are behaving. Here you see that, in both cases, after learning, different segments are responding to different task contexts. But there were cases early on where these diagrams looked like just horizontal bars: the same segment was winning all the time. So we realized, okay, this is not right; we don't want the same segment to always win. That helps in identifying why the network is failing. So you would look at these things even during your research process; it's not just something that you made after the fact to demonstrate to the readers. Oh, yeah, this was a very helpful tool for debugging. Cool, that's really interesting to hear. A lot of the architecture decisions that were made in continual learning were used in multitask RL, simply because each multitask experiment took easily 25-plus hours to run. So it was really hard to change a parameter, observe how the results and visualizations looked, and then iterate from there. A lot of the intuitions that we got for RL came from the continual learning experiments, so that was nice. Did you ever compare these things to a baseline? It's not too easy to compare, but there is the danger with such visualizations that you over-interpret them. I think I said: couldn't the difference between the top and the bottom just be that one is at initialization and one is trained, and have not much to do with sparsity? Did you ever compare this to something that isn't explicitly sparse, or anything like this? Is there something you can say as a reference point? Yeah, there are two things to note there. The first is that, at least for this visualization, the activations are normalized with respect to when they were trained. I think you mentioned this in your intro as well: you said, could it potentially be that you have really high activations in the beginning and the area that you've circled there in purple just gets dimmed down? The important thing to note is that they're all normalized, and the range of values between the highest and lowest activated neurons is much larger after training than before training. To address the second point, which I think is regarding figure 10, if you scroll up: why don't we have a baseline for this? Is it really the active dendrites networks that are creating these hyper-sparse sub-networks? There, you're absolutely right: we should have had a diagram here that also showed how this would look in a baseline MLP. That's something we could definitely include. I mean, I totally believe you that it's very sparse; it's just not obvious from a diagram like this what I should expect. But cool. There is one other thing. By the way, I have mad respect for you for including the graph on the right.
Like, mad respect. More than 90% of researchers would leave out something like this, specifically because no one would notice it's missing, right? Maybe someone comes to you, but no one would seriously miss adding the SI baseline to both of these plots. And on the left, you beat it very clearly. So huge respect for including that; I think it's to be commended and highlighted. I think when we present a new architecture like this, we really want to show the community that, hey, we can do things like continual learning with our more biologically inspired ideas, and it's competitive with what's already out there. So even if we're not beating the state of the art, I think that's perfectly fine, even though nowadays a lot of machine learning has turned into this competition of getting the best numbers, and if you don't have the best numbers, apparently that means you won't be able to publish anymore. To add on to that, the purpose of this paper is really something we all said in the beginning: we want to show a proof of concept for this completely novel architecture, where the goal is not state-of-the-art accuracy on either of these benchmarks. It's really about the promise of something new, something that I think deep learning has been missing for the past ten years or so. So yeah, it's exciting. And the last thing maybe we can get into is this comparison to other networks, because you very clearly address this in a paragraph, I think I even have a transformer diagram somewhere, saying: isn't this just equivalent to a bigger network? And I tried myself to come up with some way I could do the multiplication in an MLP, and I'm fairly convinced there isn't one. But there is clearly a connection to LSTMs, which do modulate things with forget gates and so on. They even have sigmoids, right? So they can model this on-or-off behavior, and also sparsity to an extent. And I also think that a two-layer transformer could conceivably model the interaction right here. Did you explore at all the connections of this active dendrites framework to other models? Is there something you can say about that? I definitely think those are great observations, by the way: the relationship between attention in transformers, the gating in LSTMs and GRUs, and what we're doing here. In our research process, we definitely thought a lot about how this gating mechanism could be related to things like multi-headed attention, where you're doing a similar thing: matching keys and queries as vectors with an inner product and then using that to decide which parts of a sequence to weight when you're considering a certain position. I think the similarity is that, in the specific instance of attention, you are using learned weights to match a given input.
So, for example, in our active dendrites, you're matching the context with the set of dendritic segments, and in attention, you're matching the query vector with a set of keys. I think the key difference is the purpose for which it's done. Here, in active dendrites, you're looking at a specific neuron and saying: given the context, is this neuron relevant? In transformers, you're saying: here's a position; what context around me, in terms of the sentence, for example, is relevant for me, and how can I weight certain aspects of it? So the interpretation of the focus is a little bit flipped. Shifting to the LSTM aspect: as a mechanism, it's quite similar, in that the LSTM can actually turn certain units on or off to carry forward in time, and that's exactly what's done here. I think the difference is that we focus more on the sparsity aspect. In LSTMs, you're doing a weighted sum between what's in the past and what's current and saying, okay, let's pass this forward; there's no aspect of using this to enforce a level of sparsity. Here, we're saying: let's turn off certain things in order to remain sparse, and pass this information forward. So there's definitely a relationship there; the interpretation is similar, but a little bit different. And in all of these things, to highlight: LSTMs and transformers are all trained, let's say, with backprop, and all the parameters are trained. So you'd still run into the same problems where, if you do continual learning, tasks would interfere with each other, no matter how well they can implement the multiplication. So that's definitely a difference. So in your outlook section, I haven't mentioned this in the video, you discuss what to do next, and you mention, for example, that you want to investigate the combination of RL and continual learning and so on. You also mentioned neuroscience a little bit: what would be the next big things from neuroscience to include in deep learning architectures that aren't really being done by other people yet? Is there something where you could say: that's not really in our deep networks yet, but if we had it, it would be amazing? This is a very small point, but the dendrites that we're modeling right now can be considered the basal dendrites. I think you went over this briefly in your intro. The basal dendrites are responsible for receiving this context and depolarizing the main cell to either fire or not, if that context was recognized. Something that we haven't looked into, which could be potentially interesting, is modeling apical dendrites. The apical dendrites receive feedback from other cells, which also biases the soma to fire or not. I think that could be another potentially interesting way to gate each individual neuron. Standard deep learning doesn't do any of this; it only considers the proximal dendrites, which are mimicked by the simple linear weighted sum that determines whether the neuron fires. But if we can bring in the neuroscience background on the other kinds of dendrites too, like apical dendrites, it could make for a very powerful architecture for dynamic scenarios.
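To sum up the attention analogy drawn a little earlier in code form, here is a toy side-by-side with made-up dimensions: attention matches one query against a set of keys to weight positions, while the dendritic gate matches one context against a neuron's segments to decide whether that neuron is relevant.

```python
import torch

# Attention-style matching: one query against a set of keys, softmax-weighted.
q = torch.randn(1, 32)
K = torch.randn(8, 32)
attn_weights = torch.softmax(q @ K.T / 32 ** 0.5, dim=-1)  # which positions matter

# Dendrite-style matching: one context against a neuron's segments, winner only.
c = torch.randn(32)
segments = torch.randn(10, 32)
acts = segments @ c
gate = torch.sigmoid(acts[acts.abs().argmax()])  # is this neuron relevant at all?
```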
The issue of top-down feedback or lateral inhibition or anything like this: a lot of people talk about it, but I haven't yet seen anyone successfully bring it into a deep network and actually do something useful with it. So definitely, beyond dendrites, mechanisms like these would be super helpful. I think another aspect, a little bit different from what Abhi just said, that would be quite interesting is the local learning rules present in biological neurons and how they might relate to unsupervised learning in machine learning. A lot of unsupervised learning objectives are addenda to the loss function that we think might be useful, and the signal just flows through the whole network. I might be wrong, but I don't think there's a lot of research on figuring out which parts of the network could focus on certain things in an unsupervised way, which might be better done in biological networks. Thinking about that, and drawing inspiration from what local learning rules could do in an unsupervised way to improve performance in modern deep learning, would be super cool. Cool. Do you have anything to add, anything people should know, that we haven't talked about yet about the paper? People can get started with your code, which is online; I've seen that, which is very cool. Anything you want to get out there to the viewers? The take-home message is that the brain is able to do a lot of different things, and it's using different neural circuits to do them, but neural networks, as they were designed decades ago, are really just optimizing for one thing. They're great function approximators, but you don't just want to approximate one function; you want to be able to approximate multiple functions. We're trying to show that there are ways we can get neural networks to have different sub-networks, different neural circuits, that are able to act as different function approximators. If we can do that, then neural networks will be able to operate in more dynamic, changing scenarios. I think that's really exciting, because the world is constantly changing, but a lot of the applications for deep learning right now operate in environments that are static. If we can get to that, then that's great. Cool. Well, Akash, Karan, Abhi, thank you very much for being here today. This was great fun and I learned a lot. Yeah, thanks, Yannic. Now you're influencing my fashion. Nice. I'll join the show. Thanks so much for being here. Yeah, I hope you continue this, because it's really cool, and I think we're missing it in deep learning. Thanks, Yannic. That was a lot of fun. It was a pleasure. Thanks for having us. Thanks for having me.
[ { "start": 0, "end": 10.64, "text": " Hello, this is an interview with the authors of the paper on active dendrites. Now, if" }, { "start": 10.64, "end": 16.94, "text": " you haven't seen it, I've made a comprehensive paper review video on this paper and I released" }, { "start": 16.94, "end": 22.580000000000002, "text": " that yesterday. If you watch this video as it comes out, which obviously you do today," }, { "start": 22.580000000000002, "end": 28.18, "text": " I'm going to interview the authors and we've all seen my review. So we'll be able to directly" }, { "start": 28.18, "end": 32.68, "text": " dive in. So if you haven't seen the review yet, and you want to know what's in the paper," }, { "start": 32.68, "end": 38.92, "text": " maybe that is a good place to start. The authors here were really helpful and really informative" }, { "start": 38.92, "end": 44.28, "text": " answering all of my questions and concerns that I had and even bringing up some new interesting" }, { "start": 44.28, "end": 50.08, "text": " insights. So I hope you learn something from this interview or at least that it entertains" }, { "start": 50.08, "end": 55.6, "text": " you. And if you have any comments, please let me know in the comments below the video." }, { "start": 55.6, "end": 61.08, "text": " I'll see you around. Bye bye. Hey there, today's sponsor is the course on introduction to graph" }, { "start": 61.08, "end": 66.48, "text": " neural networks. This is a course by my friend Zach Jost, who is an expert in graph neural" }, { "start": 66.48, "end": 73.08, "text": " networks, and also runs the welcome AI overlords YouTube channel has a very interesting blog" }, { "start": 73.08, "end": 78.08, "text": " and does many other cool things. He's packed all his knowledge of graph neural networks" }, { "start": 78.08, "end": 84.52000000000001, "text": " into one course that will educate you on both the theoretical and hands on practical aspect" }, { "start": 84.52, "end": 89.3, "text": " on graph neural networks. Graph neural networks are really important. They're definitely one" }, { "start": 89.3, "end": 94.52, "text": " of the most interesting areas in deep learning right now they're on the upswing, they model" }, { "start": 94.52, "end": 101.46, "text": " data that has an underlying structure that is connected that is not really well fit for" }, { "start": 101.46, "end": 107.66, "text": " any of the classic formats like tables or images. They've also powered a lot of recent" }, { "start": 107.66, "end": 113.64, "text": " advances in scientific breakthroughs, such as alpha fold protein structure predictions," }, { "start": 113.64, "end": 118.76, "text": " or better traffic predictions. So if you're interested in graph neural network, I'll definitely" }, { "start": 118.76, "end": 125.12, "text": " recommend you check out that course. If you use my link, you'll get a 15% discount on" }, { "start": 125.12, "end": 131.92000000000002, "text": " the course enrollment is open right now and lasts until April 1 or until spaces run out." }, { "start": 131.92000000000002, "end": 137.12, "text": " The course is a six weeks course. It's cohort based, you'll get access to a community to" }, { "start": 137.12, "end": 143.36, "text": " discord community of other students, and you'll get all the materials and hands on experience." }, { "start": 143.36, "end": 148.12, "text": " All right, let's get into the video now. See ya." 
}, { "start": 148.12, "end": 153.60000000000002, "text": " Hi everyone, today I'm here with the three joint first authors of the paper on active" }, { "start": 153.60000000000002, "end": 160.60000000000002, "text": " dendrites, Abhi, Karan and Akash. And I'm very, very happy to have you all here. This paper" }, { "start": 160.60000000000002, "end": 166.64000000000001, "text": " covers many areas, it covers biology, it covers neural networks, it covers kind of different" }, { "start": 166.64000000000001, "end": 172.88000000000002, "text": " architectures of stuff. It's very cool that you all sort of are here and are able to sort" }, { "start": 172.88, "end": 176.6, "text": " of answer my questions. Welcome, all of you." }, { "start": 176.6, "end": 180.79999999999998, "text": " Yeah, thanks, Janek. Thanks for having us." }, { "start": 180.79999999999998, "end": 181.79999999999998, "text": " Thanks for having us." }, { "start": 181.79999999999998, "end": 188.07999999999998, "text": " It's very interesting paper. So I saw this paper and I was intrigued because it's not" }, { "start": 188.07999999999998, "end": 195.72, "text": " often that a lot of people say they do biologically inspired things. But it's not often that someone" }, { "start": 195.72, "end": 200.88, "text": " really goes and says, look, you know, here's what's missing, let's build it in. And then" }, { "start": 200.88, "end": 208.32, "text": " it actually leads to something that works. And that is, you know, the hypothesis in your" }, { "start": 208.32, "end": 214.28, "text": " paper, the hypothesis you pose on what should happen are actually confirmed at the end." }, { "start": 214.28, "end": 219.84, "text": " And this is, I think, a very good story arc for a paper and a really nice thing to write" }, { "start": 219.84, "end": 228.38, "text": " up. So is this, how did this come to be? How did you get the idea of bringing these very" }, { "start": 228.38, "end": 234.5, "text": " two distant, not too distant, but these two distant fields together of sort of neurobiology" }, { "start": 234.5, "end": 236.12, "text": " and deep learning?" }, { "start": 236.12, "end": 241.44, "text": " Well, at Numenta, we're interested, one of the things we're interested in is in continual" }, { "start": 241.44, "end": 247.64, "text": " learning and learning multiple tasks, more generally speaking. And so, you know, we're" }, { "start": 247.64, "end": 253.76, "text": " looking at, but a lot of neural networks and deep learning today focuses on trying to solve" }, { "start": 253.76, "end": 260.24, "text": " a single task. So we said, well, you know, how is biology enabling the ability to solve" }, { "start": 260.24, "end": 264.76, "text": " multiple things in sequence or, you know, at the same time, learning different things?" }, { "start": 264.76, "end": 271.36, "text": " And so, you know, there's been a lot of work out there on active dendrites. And so, and" }, { "start": 271.36, "end": 277.36, "text": " it's not exactly clear what their role was. But a little while back, we speculated that," }, { "start": 277.36, "end": 285.48, "text": " hey, they might actually be helping at the neural level to allow for continual learning." }, { "start": 285.48, "end": 294.2, "text": " And so if we can build this idea into deep learning, then there might be some prospect" }, { "start": 294.2, "end": 297.96000000000004, "text": " there for addressing problems like continual learning and multitask learning." 
}, { "start": 297.96000000000004, "end": 302.40000000000003, "text": " So is it fair to say that it grew out of sort of a need to solve a task?" }, { "start": 302.4, "end": 310.4, "text": " I think it grew out of the need to solve multiple tasks in sequence, either learning them together" }, { "start": 310.4, "end": 317.32, "text": " or in sequence continuously. To add on to what Karan was saying is that we believe that" }, { "start": 317.32, "end": 322.91999999999996, "text": " active dendrites can really aid in achieving these specialized neural circuits. And we" }, { "start": 322.91999999999996, "end": 327.59999999999997, "text": " can apply these ideas directly to any neural network and show some competitive performance" }, { "start": 327.6, "end": 333.6, "text": " on various benchmarks that involve continual learning setups. So I guess the purpose of" }, { "start": 333.6, "end": 338.48, "text": " this project, if you were to just summarize it very briefly, is we just want to show a" }, { "start": 338.48, "end": 344.72, "text": " proof of concept for a new idea that can allow deep learning to work in more dynamic environments" }, { "start": 344.72, "end": 349.76000000000005, "text": " and scenarios. To kind of add on to what Karan and Abhi" }, { "start": 349.76000000000005, "end": 355.8, "text": " said. So at a higher level, I think we were kind of examining where a lot of modern deep" }, { "start": 355.8, "end": 362.16, "text": " networks fail, and that's in these streaming task settings and multitask settings. And" }, { "start": 362.16, "end": 368.64, "text": " the kind of inspiration for our solution was directed towards biology and biological neurons," }, { "start": 368.64, "end": 375.88, "text": " which is a lot of what Numentos focuses on. And I think quite nicely we found these existing" }, { "start": 375.88, "end": 381.04, "text": " benchmarks and existing tasks that show that typical deep learning networks fail in these" }, { "start": 381.04, "end": 387, "text": " scenarios. And we were able to build in these biologically inspired neurons to improve the" }, { "start": 387, "end": 392.52000000000004, "text": " performance in such dynamic settings by using the fact that we believe active dendrites" }, { "start": 392.52000000000004, "end": 402.16, "text": " in biology kind of do this kind of context dependent adaptation in multiple tasks." }, { "start": 402.16, "end": 406.72, "text": " What I found interesting is that even though you targeted a little bit towards multilayer" }, { "start": 406.72, "end": 414.92, "text": " perceptrons, in principle, this active dendrites architecture is sort of pluggable almost anywhere." }, { "start": 414.92, "end": 420.48, "text": " So you could always imagine some sort of a context dependent signal that gets routed" }, { "start": 420.48, "end": 429.20000000000005, "text": " in and modulates the signal that exists. So I think what I'm trying to find out is there" }, { "start": 429.20000000000005, "end": 435.16, "text": " are a number of things happening in this model. 
There is first of all the modulation itself," }, { "start": 435.16, "end": 440.72, "text": " which is a relatively it's not really a known concept, at least in classical deep learning," }, { "start": 440.72, "end": 447.72, "text": " we always have weighted sums, we rarely have the situation where two parts of the signal" }, { "start": 447.72, "end": 452.92, "text": " are multiplied together, or one modulates the other, it happens a little bit in LSTM" }, { "start": 452.92, "end": 462.40000000000003, "text": " and so on. The other one is the sort of recognition of a context and, you know, being context" }, { "start": 462.4, "end": 471.03999999999996, "text": " dependent. And then a third thing is this, this sparsity. Now, you have sort of combined" }, { "start": 471.03999999999996, "end": 477.52, "text": " all of them. Is there one thing that you think is specifically important? Or is it sort of" }, { "start": 477.52, "end": 482.32, "text": " the combination of things that is really what makes the difference? You have some ablations" }, { "start": 482.32, "end": 485.32, "text": " in the paper. What can you say about this?" }, { "start": 485.32, "end": 489, "text": " I think it's the combination of all these things acting together. So it's the it's" }, { "start": 489, "end": 492.92, "text": " the it's the dendrites, which are, you know, up modulating and down modulating certain" }, { "start": 492.92, "end": 499.08, "text": " neurons to determine which ones should become which which to determine which sub network" }, { "start": 499.08, "end": 503.04, "text": " should be invoked. And then it's as far as you on top of that, which is ensuring that," }, { "start": 503.04, "end": 508.96, "text": " you know, a large portion of the network is essentially not performing or learning a certain" }, { "start": 508.96, "end": 517.12, "text": " task. And it's those two things together, which, which, which really gets at this idea" }, { "start": 517.12, "end": 522.72, "text": " of using specialized sub networks for different things. So I wouldn't say any any one one" }, { "start": 522.72, "end": 526.12, "text": " thing that stands out more than the others." }, { "start": 526.12, "end": 532.12, "text": " So when we get let's get into the paper itself, you've seen my review of it, with respect" }, { "start": 532.12, "end": 537.8, "text": " to just framing the problem and maybe framing the architecture as such, is there do you" }, { "start": 537.8, "end": 543.64, "text": " think I have captured what you've tried to say? Do you think I've left something important" }, { "start": 543.64, "end": 549.52, "text": " out or have put emphasis on or have not put emphasis on something that you would like" }, { "start": 549.52, "end": 553.52, "text": " to put emphasis on when it comes to like, what the architecture is, what it does and" }, { "start": 553.52, "end": 559.12, "text": " how it works?" }, { "start": 559.12, "end": 563.12, "text": " I think your explanations for the architecture, at least we're very good. I think it does" }, { "start": 563.12, "end": 567.98, "text": " definitely does capture what we were trying to trying to say. And the whole point to kind" }, { "start": 567.98, "end": 573.24, "text": " of reiterate is that the same model with the same principles should work on completely" }, { "start": 573.24, "end": 578.28, "text": " separate areas. One is the multitask reinforcement learning. The other one is continual learning" }, { "start": 578.28, "end": 583.4, "text": " with permuted MNIST. 
And I think you touched upon that idea too. So yeah," }, { "start": 583.4, "end": 588.36, "text": " I think that the kind of motivation that I think you in towards the beginning of your" }, { "start": 588.36, "end": 594.6, "text": " review, you showed you kind of compared the typical weighted linear sum neuron with the" }, { "start": 594.6, "end": 600.04, "text": " active dendrites neuron. And I think our motivation in coming up with this architecture was how" }, { "start": 600.04, "end": 606.3199999999999, "text": " can we incorporate a lot of these properties into active dendrites with having dendritic" }, { "start": 606.3199999999999, "end": 611.12, "text": " segments being able to either up modulate or down modulate certain neurons in a way" }, { "start": 611.12, "end": 618.24, "text": " that didn't completely change from normal back propagation trainable networks. So this" }, { "start": 618.24, "end": 624.48, "text": " architecture kind of brings in that flavor of having dendrites influence certain neurons," }, { "start": 624.48, "end": 629.9599999999999, "text": " but does so in a way that mathematically allows for back propagation to train the networks" }, { "start": 629.96, "end": 633.64, "text": " and I think you touched on that pretty well as well." }, { "start": 633.64, "end": 639.2, "text": " Do you think it's valid to sort of bring in biological concepts even though we train with" }, { "start": 639.2, "end": 647, "text": " back propagation? Because it's very evident that at least pure like correct back propagation" }, { "start": 647, "end": 652, "text": " isn't happening in the brain. Do you think it's still valid to bring in the concepts" }, { "start": 652, "end": 657.5400000000001, "text": " and maybe the brain is doing something like backprop? Or do you think we're sort of just" }, { "start": 657.54, "end": 666.48, "text": " kind of taking inspiration from biology in order to solve some of our problems?" }, { "start": 666.48, "end": 674.28, "text": " I think it's more so the latter. Of course, the most accurate biological neural network" }, { "start": 674.28, "end": 681.68, "text": " would likely not use back propagation, right? But this is one area where I think the goal" }, { "start": 681.68, "end": 686.8399999999999, "text": " was can we make deep learning just a little bit more plausible? And in doing so, can we" }, { "start": 686.84, "end": 695.52, "text": " make it a little bit more dynamic? So we're not necessarily here to remove backprop entirely" }, { "start": 695.52, "end": 700.88, "text": " and say that that's the best way that the dendrites in this architecture can work. Although" }, { "start": 700.88, "end": 707.0600000000001, "text": " certainly that is how it works in biology. The point was, can we just augment traditional" }, { "start": 707.0600000000001, "end": 712.2, "text": " deep neural nets to work in more dynamic scenarios?" }, { "start": 712.2, "end": 718.08, "text": " Now I had some criticisms with respect to just like that details of your architecture." }, { "start": 718.08, "end": 724.3000000000001, "text": " For example, you always or you often choose the number of dendritic segments to match" }, { "start": 724.3000000000001, "end": 732.0400000000001, "text": " the number of tasks that you have, which obviously, if I was a researcher, I would do the same." }, { "start": 732.0400000000001, "end": 737.6800000000001, "text": " But can you say maybe something about how this is in the brain? 
Like what numbers are" }, { "start": 737.68, "end": 745.16, "text": " we talking about? How many of these sub networks that are composed of distal dendrites? How" }, { "start": 745.16, "end": 752, "text": " many are there approximately? Do you know? Do you have an idea? And what can you say" }, { "start": 752, "end": 757.1999999999999, "text": " about how many we should build into a problem where we maybe don't know how many tasks" }, { "start": 757.1999999999999, "end": 761.4799999999999, "text": " we expect?" }, { "start": 761.48, "end": 767.84, "text": " From what I recall, probably in the order of hundreds or thousands of individual dendrite" }, { "start": 767.84, "end": 775.02, "text": " segments for each individual neuron, actually, it might even be more than that. The actual" }, { "start": 775.02, "end": 781.6800000000001, "text": " numbers escape me. But regarding what you said earlier about having the number of tasks" }, { "start": 781.6800000000001, "end": 789.2, "text": " be equal to the number of segments here, we found that actually, even though in a lot" }, { "start": 789.2, "end": 795.48, "text": " of the experiments we report here, we do set the number of dendrites to the number of tasks." }, { "start": 795.48, "end": 801.76, "text": " We found that we actually don't need to have that many. And we actually have further studies" }, { "start": 801.76, "end": 806.44, "text": " which show that we can actually keep the architecture fixed and increase the number of tasks we're" }, { "start": 806.44, "end": 810.26, "text": " doing. I'm talking about continual learning here because for multitask, we're focused" }, { "start": 810.26, "end": 816.0600000000001, "text": " on 10 specifically. We can increase the number of tasks and the performance actually doesn't" }, { "start": 816.06, "end": 822.92, "text": " change by much. So that shows that as we're increasing the number of dendrite segments," }, { "start": 822.92, "end": 826.1199999999999, "text": " we actually end up overparameterizing the network quite a bit, which we don't need to" }, { "start": 826.1199999999999, "end": 827.1199999999999, "text": " do." }, { "start": 827.1199999999999, "end": 831.92, "text": " Yeah. So this is the plot on the left right here. You just increase the number of dendritic" }, { "start": 831.92, "end": 837.5799999999999, "text": " segments and the top line is learning 10 tasks. And it doesn't get noticeably worse, which" }, { "start": 837.5799999999999, "end": 844.28, "text": " I find to be a very cool property. I don't want to have to set the parameter very specifically." }, { "start": 844.28, "end": 849.3, "text": " I can just set it too high and it doesn't hurt, which is cool. Which leads me to the" }, { "start": 849.3, "end": 855.56, "text": " plot on the right where you discuss the sparsity. I'm going to guess that's the sparsity parameter." }, { "start": 855.56, "end": 862.0799999999999, "text": " So that's the thing that ultimately controls k. And I find it peculiar, not that there" }, { "start": 862.0799999999999, "end": 866.64, "text": " is an optimal setting, which I would expect because that I can't set high that I have" }, { "start": 866.64, "end": 872.76, "text": " to set between 0 and 1. So there's going to be some optimum in between. But there's this" }, { "start": 872.76, "end": 879.96, "text": " two bump thing going on. So what's going on there? 
Why is it like really good at lows," }, { "start": 879.96, "end": 885.64, "text": " like high sparsity, and then there's like this plateau, and then it just flat like crashes" }, { "start": 885.64, "end": 888.64, "text": " down." }, { "start": 888.64, "end": 897.6, "text": " I think there in the beginning, you know, if you have if you have too much. So yeah," }, { "start": 897.6, "end": 901, "text": " I always think in terms of sparsity, so I'm converting from density to sparsity. So if" }, { "start": 901, "end": 905.08, "text": " you have if it's too sparse, right, there's not enough signal going through. And that's" }, { "start": 905.08, "end": 908.16, "text": " why, you know, as you as you increase the amount of signal that you're allowing through" }, { "start": 908.16, "end": 912.44, "text": " as you're increasing the capacity of your representation, then you're going to get you're" }, { "start": 912.44, "end": 916.44, "text": " going to get an increase in performance. But then if you have if you're using up too many" }, { "start": 916.44, "end": 921.96, "text": " units to to create that, to create that representation, then you're going to get more interference," }, { "start": 921.96, "end": 924.88, "text": " right. And as you have more interference, you're going to you're going to you're going" }, { "start": 924.88, "end": 928.8, "text": " to forget more and more network parameters are overwritten as you move on to subsequent" }, { "start": 928.8, "end": 935.92, "text": " tasks. And so you get a drop in accuracy. And towards the end, so you know, you notice" }, { "start": 935.92, "end": 942.64, "text": " that it does fall drastically. Honestly, I haven't thought too much about why that happens." }, { "start": 942.64, "end": 947.24, "text": " Although it is it is a pretty, pretty monotonic fall, even though I guess in that in that" }, { "start": 947.24, "end": 952.4, "text": " upper curve, there's a slight bump with that could just be due to seeding or something" }, { "start": 952.4, "end": 953.4, "text": " like that. But yeah," }, { "start": 953.4, "end": 959, "text": " Yeah, I was more referring to like the plateau itself, right? There's there's this plateau" }, { "start": 959, "end": 964.12, "text": " kind of, and I I know, I know that there could be almost like two two modes of using the" }, { "start": 964.12, "end": 968.84, "text": " sparsity in one mode, I have entire sub networks that do the job. And in the other mode, I" }, { "start": 968.84, "end": 974.84, "text": " have like a shared network. Yet I have like separate things that just kind of like track," }, { "start": 974.84, "end": 980.48, "text": " track which task I'm on, which would sort of correspond to what the baseline is doing," }, { "start": 980.48, "end": 985.08, "text": " right? When people say, well, the baseline has access to the task to it can just allocate" }, { "start": 985.08, "end": 992.32, "text": " some units. No, it's maybe not a perfect analogy. But I was just wondering, it was just interesting" }, { "start": 992.32, "end": 995.48, "text": " to see that there's this kind of this type of plateau." }, { "start": 995.48, "end": 1001.6800000000001, "text": " Yeah, that's that's something I guess, we haven't gone too deep into. 
But this might," }, { "start": 1001.6800000000001, "end": 1006.04, "text": " this might just be a property of sparse representations and how and how much overlap there is as you" }, { "start": 1006.04, "end": 1013.04, "text": " as you as you increase the sparsity level, it could just be something to do with that." }, { "start": 1013.04, "end": 1018.0799999999999, "text": " So in your paper, you make really, which I appreciate you make really sure that you sort" }, { "start": 1018.0799999999999, "end": 1023.8, "text": " of always have the same amount of let's say trainable parameters in your architectures." }, { "start": 1023.8, "end": 1029.24, "text": " And you show that by arranging them correctly, you can you can achieve a better result. You" }, { "start": 1029.24, "end": 1036.52, "text": " always use this name of non zero parameters, right? Is there like, is there a difference?" }, { "start": 1036.52, "end": 1042.56, "text": " Are there large swaths of zero parameters in one or the one of these architectures?" }, { "start": 1042.56, "end": 1047.52, "text": " Yeah, so this is something that we control for. In the beginning, this is why we mentioned" }, { "start": 1047.52, "end": 1052.22, "text": " the idea of weight sparsity. So in the beginning, when when we're actually creating the architecture" }, { "start": 1052.22, "end": 1058.8, "text": " from scratch, we decide that some layers have an X percent sparsity level applied to it." }, { "start": 1058.8, "end": 1062.6399999999999, "text": " And what that really means is that X percent of the parameters are zero throughout the" }, { "start": 1062.6399999999999, "end": 1069.04, "text": " entire part of training, and even towards the end. So that's why we express everything" }, { "start": 1069.04, "end": 1074.6399999999999, "text": " in non zero parameters. So the MLPs, for instance, at least in reinforcement learning, are trained" }, { "start": 1074.6399999999999, "end": 1080.56, "text": " with no weight sparsity. So it's completely dense. There are no zeros anywhere in the" }, { "start": 1080.56, "end": 1084.18, "text": " in the layers." }, { "start": 1084.18, "end": 1089.3600000000001, "text": " And then the your your architecture, you sort of modulate the amount of sparsity. And that" }, { "start": 1089.3600000000001, "end": 1095.6000000000001, "text": " is on top of modulating the K parameter of the K winner takes all layers." }, { "start": 1095.6000000000001, "end": 1101.44, "text": " Yeah, there's two aspects to the sparsity. So one is activation sparsity, which is like," }, { "start": 1101.44, "end": 1106.3600000000001, "text": " at a hidden, like when you have a hidden state vector, how many neurons remain non zero after" }, { "start": 1106.3600000000001, "end": 1111.24, "text": " the activation is applied, which is a K winner activation. And then the second aspect of" }, { "start": 1111.24, "end": 1117.88, "text": " sparsity is weight sparsity, which is how connected are subsequent layers in the network." }, { "start": 1117.88, "end": 1123.68, "text": " So if a lot of the units in the weight matrix are zero, then this models the fact that subsequent" }, { "start": 1123.68, "end": 1128.4, "text": " layers in the network are not very connected, they're sparsely connected." }, { "start": 1128.4, "end": 1133, "text": " To I guess answer your question again on that is, it's not something with weight sparsity," }, { "start": 1133, "end": 1136.92, "text": " at least it's something that it's not something we modulate, it's fixed. 
{ "start": 1143.3600000000001, "end": 1150.88, "text": " Okay, because I think I might have just over-read that. But I recall" }, { "start": 1150.88, "end": 1156, "text": " that in the introduction you say that both the weights" }, { "start": 1156, "end": 1162.4, "text": " and the activations are sparse, but then I think the winner-takes-all really" }, { "start": 1162.4, "end": 1169.76, "text": " focuses on the activations themselves. Have you experimented with setting" }, { "start": 1169.76, "end": 1176.02, "text": " K to something other than a number or a percentage, maybe setting a threshold for sparsity or" }, { "start": 1176.02, "end": 1188.4, "text": " something like this, where whenever a signal is strong enough, it is let through?" }, { "start": 1188.4, "end": 1194.0400000000002, "text": " We haven't done anything like that, but we could do that. And you know," }, { "start": 1194.0400000000002, "end": 1199.72, "text": " there is a chance that it could work out pretty well if we have a fixed threshold. But" }, { "start": 1199.72, "end": 1205.72, "text": " one potential downside there is that if you have too many signals" }, { "start": 1205.72, "end": 1210.2, "text": " that cross the threshold, too many units whose activation crosses the threshold, you're going" }, { "start": 1210.2, "end": 1215.52, "text": " to get more interference when you train. Or if you have not enough neurons whose activation" }, { "start": 1215.52, "end": 1219.76, "text": " crosses the threshold, you're going to get that phenomenon which" }, { "start": 1219.76, "end": 1224.24, "text": " you're showing on the screen right now on the left side, where you have a drop in accuracy" }, { "start": 1224.24, "end": 1229.92, "text": " because your representations don't have enough capacity. So that's why we opted to go" }, { "start": 1229.92, "end": 1236.8, "text": " for a fixed value of K. But even if we did have a threshold," }, { "start": 1236.8, "end": 1240.42, "text": " I think one of your critiques here was, you know, now we have another hyperparameter" }, { "start": 1240.42, "end": 1244.68, "text": " K that we're choosing. In the other case, our hyperparameter" }, { "start": 1244.68, "end": 1251.8, "text": " would just be the threshold value there, right? Obviously, yeah. Yeah. So to me, this" }, { "start": 1251.8, "end": 1256.28, "text": " continual learning setup is very cool, and you can generate data very easily using this" }, { "start": 1256.28, "end": 1264.1200000000001, "text": " permuted MNIST. But there is a bit of an issue that I have, and that is that if I use permuted" }, { "start": 1264.1200000000001, "end": 1268.72, "text": " MNIST, all the tasks are the same difficulty," }, { "start": 1268.72, "end": 1274.0800000000002, "text": " right? They're essentially the same task, it's just permuted. So I need to learn," }, { "start": 1274.08, "end": 1278.32, "text": " yes, I need to learn a different function. So this would be the identity permutation." },
}, { "start": 1278.32, "end": 1283.56, "text": " And then the pixels are permuted somehow, right? So all the tasks are kind of the same," }, { "start": 1283.56, "end": 1289.24, "text": " right? Which warrants a static network architecture and every context vector is kind of the same" }, { "start": 1289.24, "end": 1294.24, "text": " length, right? And all the dendrites, they can they can sort of specialize in each of" }, { "start": 1294.24, "end": 1300.36, "text": " their little task recognition. What would change here? Or is it is this a drastic requirement" }, { "start": 1300.36, "end": 1306.7199999999998, "text": " to your architecture? Or do you think if many of the tasks were wildly different from each" }, { "start": 1306.7199999999998, "end": 1312.8799999999999, "text": " other, and you have this a little bit in the robot example, so what can you tell about" }, { "start": 1312.8799999999999, "end": 1319.4799999999998, "text": " when tasks are very different in their difficulty, maybe in their amount of training data, like" }, { "start": 1319.4799999999998, "end": 1326.9599999999998, "text": " how do these things influence an architecture that's targeted towards continual learning?" }, { "start": 1326.96, "end": 1334.2, "text": " In our case, I think there might actually be similarities between different tasks. And" }, { "start": 1334.2, "end": 1340.64, "text": " so like, you know, for example, in this case, in permuted MNIST, right, there's a certain" }, { "start": 1340.64, "end": 1344.8, "text": " certain pixels are more likely to be white. And certain pixels are more likely to be black," }, { "start": 1344.8, "end": 1348.96, "text": " depending on the permutation. So maybe, you know, two different permutations could have" }, { "start": 1348.96, "end": 1353.16, "text": " more overlap in terms of which pixels are white, which pixels are black, or they could" }, { "start": 1353.16, "end": 1358.6000000000001, "text": " be totally separate. And if they're more, if they're more similar, if the permutations" }, { "start": 1358.6000000000001, "end": 1364.0800000000002, "text": " are more similar, then we could expect that the the sub networks that are selected by" }, { "start": 1364.0800000000002, "end": 1368.78, "text": " the dendrites will probably have more are likely to overlap more in which neurons become" }, { "start": 1368.78, "end": 1373.16, "text": " active, since there's a lot of there's probably a lot of similar computation going on. But" }, { "start": 1373.16, "end": 1380.24, "text": " of course, you know, in that case, difficulty doesn't really change at all." }, { "start": 1380.24, "end": 1386.36, "text": " I think to kind of add on to that, I think a lot of it depends on the quality of the" }, { "start": 1386.36, "end": 1391.56, "text": " context signal. Because ultimately, that's the part of the network that indicates to" }, { "start": 1391.56, "end": 1396, "text": " the active dendrites, what kind of task you're solving, how similar is it to previous tasks" }, { "start": 1396, "end": 1400.6, "text": " you might have seen and things like that. 
{ "start": 1380.24, "end": 1386.36, "text": " I think, to kind of add on to that, a lot of it depends on the quality of the" }, { "start": 1386.36, "end": 1391.56, "text": " context signal, because ultimately that's the part of the network that indicates to" }, { "start": 1391.56, "end": 1396, "text": " the active dendrites what kind of task you're solving, how similar it is to previous tasks" }, { "start": 1396, "end": 1400.6, "text": " you might have seen, and things like that. So I think that in this permuted MNIST" }, { "start": 1400.6, "end": 1404.64, "text": " case, the way we're computing the context does allow for this property that Karan just" }, { "start": 1404.64, "end": 1409.72, "text": " mentioned, where if there's some overlap in the input space, then the context signal" }, { "start": 1409.72, "end": 1415.4, "text": " will reflect this, and perhaps allow for overlapping sub-networks to emerge." }, { "start": 1415.4, "end": 1418.56, "text": " Whereas if you have wildly different tasks, which is something we see more in the" }, { "start": 1418.56, "end": 1426.92, "text": " robotics environment, then these context signals can differ more and indicate that the" }, { "start": 1426.92, "end": 1431.72, "text": " sub-networks must not overlap. I think it would be really interesting, and" }, { "start": 1431.72, "end": 1436.56, "text": " we've talked about this before, to try a similar setup in a continual robotics-learning" }, { "start": 1436.56, "end": 1441.36, "text": " case, where you have a streaming set of robotics tasks. And I think that would probably" }, { "start": 1441.36, "end": 1448.3999999999999, "text": " be a super interesting study to do, and something that hopefully we will try at some point in" }, { "start": 1448.3999999999999, "end": 1451.04, "text": " the future." }, { "start": 1451.04, "end": 1456.84, "text": " So I had some observations with respect to your experimental setup. It's very cool" }, { "start": 1456.84, "end": 1462.84, "text": " that you do two different things, but there are also noticeable differences in how you" }, { "start": 1462.84, "end": 1469.48, "text": " implement the two different tasks, right? In the first task, you give the task ID directly;" }, { "start": 1469.48, "end": 1474.52, "text": " in the second task, you do this prototyping approach, which is a more advanced" }, { "start": 1474.52, "end": 1482.08, "text": " approach. Can you tell us a little bit about whether there is a reason for that? Because" }, { "start": 1482.08, "end": 1487.4399999999998, "text": " I could also imagine you just give me the task ID in the second task, or you do the prototyping" }, { "start": 1487.44, "end": 1493.2, "text": " in the first task. Is there a research-process reason? Like, did you find that some" }, { "start": 1493.2, "end": 1499.04, "text": " things did work or didn't work? How did this come about, that all of a sudden in" }, { "start": 1499.04, "end": 1505.3200000000002, "text": " the new task we're introduced to this new way of detecting the context?" }, { "start": 1505.3200000000002, "end": 1511.04, "text": " I think in the context of the multi-agent, sorry, the multitask reinforcement setup," }, { "start": 1511.04, "end": 1516.68, "text": " the environment setup itself gives the task ID. And I think that the concept of multitask" }, { "start": 1516.68, "end": 1521.5600000000002, "text": " learning itself is more focused on: if you have different tasks, which may conflict with" }, { "start": 1521.5600000000002, "end": 1525.8, "text": " one another in terms of the types of behavior you have to do, or the types of predictions," }, { "start": 1525.8, "end": 1531.3600000000001, "text": " how can you mathematically still optimize your joint objective function" }, { "start": 1531.3600000000001, "end": 1535.4, "text": " and still be able to perform well on all the tasks?
The problem shifts not so much" }, { "start": 1535.4, "end": 1539.96, "text": " from trying to infer what task you're doing, to more: you know what task you're doing" }, { "start": 1539.96, "end": 1544.96, "text": " and you want to try to do all of them, so how can we optimize this joint objective?" }, { "start": 1544.96, "end": 1549.32, "text": " The way we use this one-hot task encoding is in line with past works that" }, { "start": 1549.32, "end": 1553.32, "text": " deal with multitask learning and multitask reinforcement learning, where you have this" }, { "start": 1553.32, "end": 1557.8400000000001, "text": " one-hot task encoding that is provided. I do agree that the one-hot encoding" }, { "start": 1557.8400000000001, "end": 1563.16, "text": " is quite convenient and a little bit arbitrary; you could probably use a denser representation" }, { "start": 1563.16, "end": 1569.04, "text": " for each task, or try to infer it. But I think for the purposes of our experiments, this" }, { "start": 1569.04, "end": 1574.92, "text": " one-hot encoding seemed simple, as it was environment-provided, and the point of the" }, { "start": 1574.92, "end": 1582.2, "text": " multitask setup was, again, to try to show that this network architecture prevents" }, { "start": 1582.2, "end": 1588.96, "text": " conflicting updates across tasks and avoids these interfering updates from" }, { "start": 1588.96, "end": 1594.72, "text": " occurring. I think for continual learning, the setup of the problem" }, { "start": 1594.72, "end": 1600.28, "text": " itself is a little bit bigger, in that you're not always provided with the" }, { "start": 1600.28, "end": 1604.68, "text": " task IDs and you have to infer this on the fly, which again, I think Karan can talk a" }, { "start": 1604.68, "end": 1605.68, "text": " little bit more about." }, { "start": 1605.68, "end": 1610.68, "text": " Yeah, in continual learning, there are a couple of other recent papers that have come out in" }, { "start": 1610.68, "end": 1616.28, "text": " the last couple of years, and they're not providing task IDs; the model actually needs to infer" }, { "start": 1616.28, "end": 1623.8, "text": " the task ID as it does some sort of modulation, or whatever their technique is. So we thought" }, { "start": 1623.8, "end": 1627.44, "text": " that makes the problem a bit more challenging, a bit more interesting. So since we are working" }, { "start": 1627.44, "end": 1632, "text": " on continual learning and comparing to some of these other methods, let's also try to" }, { "start": 1632, "end": 1636.76, "text": " infer what the task should be." }, { "start": 1636.76, "end": 1642.64, "text": " So if I hear this correctly, it's very much inspired by the environment itself, like what" }, { "start": 1642.64, "end": 1648.44, "text": " the problem is supposed to be. Because if I see something like this, I always have the" }, { "start": 1648.44, "end": 1653.64, "text": " vague suspicion that people tried something and it didn't work, and it's like, well, let's" }, { "start": 1653.64, "end": 1658.92, "text": " try something else. But there's also, I mean, I don't want to infer that. So it's always" }, { "start": 1658.92, "end": 1665.3200000000002, "text": " good to hear, like, okay, this really came about through the environment. And I mean," }, { "start": 1665.3200000000002, "end": 1670.68, "text": " it would be equally cool if it was the other thing.
But I'm just always interested to hear," }, { "start": 1670.68, "end": 1673.88, "text": " so I can adjust my priors." }, { "start": 1673.88, "end": 1678.2, "text": " Just to add really quickly, I think" }, { "start": 1678.2, "end": 1684.04, "text": " in the reinforcement learning setup as well, because the state space is shared across all" }, { "start": 1684.04, "end": 1688.0800000000002, "text": " the tasks, it's essentially hard to infer from the states what task you might" }, { "start": 1688.08, "end": 1691.76, "text": " be doing if you weren't given such an ID. And the only information you would have is" }, { "start": 1691.76, "end": 1699.9199999999998, "text": " the reward signal, and that might not be enough to infer what the task is. So giving a task" }, { "start": 1699.9199999999998, "end": 1700.9199999999998, "text": " ID is part of the solution." }, { "start": 1700.9199999999998, "end": 1703.12, "text": " Given that it's at the end, right?" }, { "start": 1703.12, "end": 1704.12, "text": " Yeah." }, { "start": 1704.12, "end": 1709.24, "text": " It's like, you do something and then you get a reward, and then you find out what task you" }, { "start": 1709.24, "end": 1715.28, "text": " just did. Okay, I agree with you. That's really not helpful at all." }, { "start": 1715.28, "end": 1719.32, "text": " Also, I think one thing to add here is that we did try a couple of things. I think this is something" }, { "start": 1719.32, "end": 1723.76, "text": " you pointed out in your intro: the task IDs that we're using are one-hot encoded, right?" }, { "start": 1723.76, "end": 1728.6, "text": " At least for multitask RL. And that means that all these tasks are entirely orthogonal" }, { "start": 1728.6, "end": 1733.6, "text": " to each other, and it really doesn't reflect how similar one task is to another. And it" }, { "start": 1733.6, "end": 1737.8, "text": " really doesn't reflect how different one task might be from another, either. So one thing" }, { "start": 1737.8, "end": 1742.24, "text": " that we were experimenting with, which I think we mention briefly in the paper, is that we" }, { "start": 1742.24, "end": 1746.92, "text": " tried having an embedding layer that effectively embeds this one-hot encoding into some other" }, { "start": 1746.92, "end": 1752.96, "text": " higher-dimensional representation, and using this instead of the one-hot encoding as a context." }, { "start": 1752.96, "end": 1758.6, "text": " And I think what we eventually found was that using the embedding or not using the embedding" }, { "start": 1758.6, "end": 1765.08, "text": " produced fairly similar results, so we just decided to remove it for simplicity's sake." }, { "start": 1765.08, "end": 1769.52, "text": " But one thing to note is that using the embedding allows you to represent contexts, I think," }, { "start": 1769.52, "end": 1775.28, "text": " that are a little bit more nuanced, in the sense that the embedding, since it's trained" }, { "start": 1775.28, "end": 1782.04, "text": " via end-to-end backprop, any task that is similar to another task would have a shared" }, { "start": 1782.04, "end": 1785.8, "text": " representation in that higher-dimensional embedding, and ones that are really separate" }, { "start": 1785.8, "end": 1791.08, "text": " from each other would likewise correspond to huge distances apart in that higher-dimensional" }, { "start": 1791.08, "end": 1797.8, "text": " space. The one-hot encoding, on the other hand, is entirely orthogonal across tasks, but it still worked" }, { "start": 1797.8, "end": 1802.8, "text": " out pretty well compared to the embedding." },
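A hedged sketch of the two context choices just described, a one-hot task ID versus a learned embedding of it. The names are hypothetical; the interview only reports that both variants performed similarly.

```python
import torch
import torch.nn as nn

num_tasks, context_dim = 10, 32

# Variant 1: one-hot context; every pair of tasks is exactly orthogonal.
def one_hot_context(task_id: int) -> torch.Tensor:
    return nn.functional.one_hot(torch.tensor(task_id), num_tasks).float()

# Variant 2: learned embedding, trained end-to-end with the rest of the network,
# so distances between rows can come to reflect task similarity.
task_embedding = nn.Embedding(num_tasks, context_dim)
ctx = task_embedding(torch.tensor(3))
```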
{ "start": 1802.8, "end": 1809.48, "text": " And if it gets more complicated, I think you could put entire sub-neural networks there; instead" }, { "start": 1809.48, "end": 1817.04, "text": " of even that embedding layer, you could have non-linearities inferring more complicated" }, { "start": 1817.04, "end": 1826.56, "text": " task embeddings or task relations. It is interesting, though, with respect to the context itself," }, { "start": 1826.56, "end": 1833.2, "text": " to learn these things, all of this, through backprop. And my question, I think I brought" }, { "start": 1833.2, "end": 1839.12, "text": " this up, is: would this be a candidate for maybe unsupervised pre-training, where you" }, { "start": 1839.12, "end": 1843.9199999999998, "text": " maybe collect episodes or something in your multitask RL and then just sort of" }, { "start": 1843.9199999999998, "end": 1848.72, "text": " decide based on this, you know, how do we structure our dendritic segments in order" }, { "start": 1848.72, "end": 1854.44, "text": " to recognize the context, maybe with some sort of contrastive objective or anything like that?" }, { "start": 1854.44, "end": 1858.6000000000001, "text": " Is this something that came up? I just blurt these things out when I do the reviews, right?" }, { "start": 1858.6000000000001, "end": 1863.4, "text": " I never know if they're entirely stupid or if people have thought about it or discarded" }, { "start": 1863.4, "end": 1866.4, "text": " it. Is that something that is a candidate?" }, { "start": 1866.4, "end": 1871, "text": " I don't think it's something that we considered. But an interesting thing to note is that if" }, { "start": 1871, "end": 1874.8400000000001, "text": " we did use this for some kind of unsupervised pre-training tactic, when you're" }, { "start": 1874.8400000000001, "end": 1879.3200000000002, "text": " actually fine-tuning the network, your context vectors are different. So that's something" }, { "start": 1879.3200000000002, "end": 1884.42, "text": " I think would be the most important nuance to investigate. I personally don't" }, { "start": 1884.42, "end": 1888.3600000000001, "text": " know how well that would work if we trained on a set of contexts that are different during" }, { "start": 1888.3600000000001, "end": 1893.16, "text": " the unsupervised portion and then used a totally different set of contexts during the fine-tuning" }, { "start": 1893.16, "end": 1899.8400000000001, "text": " procedure. I would imagine that doesn't work well. So yeah." }, { "start": 1899.8400000000001, "end": 1904.2, "text": " To add on to that, when I heard you say that in your review," }, { "start": 1904.2, "end": 1908.24, "text": " it was quite interesting. I think from the perspective of reinforcement learning at a" }, { "start": 1908.24, "end": 1912.3600000000001, "text": " high level, I don't know if this will work out, but it would be quite cool to see if" }, { "start": 1912.36, "end": 1916, "text": " you can train these dendritic segments to either produce...
If you can train them to" }, { "start": 1916, "end": 1920.24, "text": " recognize different contexts and maybe guide exploration in different ways based on the" }, { "start": 1920.24, "end": 1925.76, "text": " context in an unsupervised manner and maybe do different things in different contexts" }, { "start": 1925.76, "end": 1929.8799999999999, "text": " as an exploration strategy, I think that'd be super cool. Again, I think the challenge" }, { "start": 1929.8799999999999, "end": 1934.76, "text": " there would be to come up with a clever way of generating contexts in an unsupervised" }, { "start": 1934.76, "end": 1940.84, "text": " way. So I think that would be an interesting area of investigation. It's still like, how" }, { "start": 1940.84, "end": 1944.9599999999998, "text": " do you come up with context signals in an unsupervised manner? A contrastive approach" }, { "start": 1944.9599999999998, "end": 1949.48, "text": " might be cool there. And given these contexts, how do you train these active dendrites to" }, { "start": 1949.48, "end": 1955.3999999999999, "text": " modulate neurons to do what you want it to do? And I think thinking about that in the" }, { "start": 1955.3999999999999, "end": 1959, "text": " lens of exploration in RL could be quite interesting." }, { "start": 1959, "end": 1967.1799999999998, "text": " Yeah. You could sort of even prepare for contexts that you hadn't considered before, maybe new" }, { "start": 1967.18, "end": 1973.96, "text": " instructions in a familiar environment or something like this. You have this notion" }, { "start": 1973.96, "end": 1980.3600000000001, "text": " of prototyping to recognize the context, which I found very interesting because it's kind" }, { "start": 1980.3600000000001, "end": 1986.3, "text": " of like an unsupervised online way even, as the data streams in, you create these new" }, { "start": 1986.3, "end": 1989.92, "text": " prototypes and so on. And sure, there are some hyperparameters, but I think my main" }, { "start": 1989.92, "end": 1996.5600000000002, "text": " concern is that just taking the average of the samples as they come in right here, it's" }, { "start": 1996.56, "end": 2003.72, "text": " going to work for something very simple, like permuted MNIST or so. But this gets to its" }, { "start": 2003.72, "end": 2011.52, "text": " limits very quickly, right? If I think about ImageNet classification or so, it is quite" }, { "start": 2011.52, "end": 2020.12, "text": " limited. How can this idea be extended to, let's say, arbitrary complexity? Like, what" }, { "start": 2020.12, "end": 2029.12, "text": " would I have to do with this online prototyping approach to make it usable for more complex" }, { "start": 2029.12, "end": 2030.12, "text": " problems?" }, { "start": 2030.12, "end": 2034.4599999999998, "text": " Hey, look, I think you're absolutely right that this technique only works for something" }, { "start": 2034.4599999999998, "end": 2039.6799999999998, "text": " like permuted MNIST, where you get really good task separation through just averaging" }, { "start": 2039.6799999999998, "end": 2044.6799999999998, "text": " the examples from a single task. And that's why it works so well here, right? We actually" }, { "start": 2044.68, "end": 2051.08, "text": " evaluated how well this clustering procedure works, and it works pretty well. It's not" }, { "start": 2051.08, "end": 2055.6, "text": " misclassifying things when it's clustering the prototypes. 
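The online prototyping just evaluated can be sketched as a running mean per inferred task, with a distance threshold deciding when to spawn a new prototype. The threshold logic, the default value, and all names below are my assumptions, not the paper's exact procedure.

```python
import torch

class PrototypeContext:
    """Infer a context vector by clustering inputs online into prototypes."""

    def __init__(self, threshold: float = 10.0):   # threshold default is made up
        self.threshold = threshold
        self.prototypes = []                       # list of (running mean, count)

    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        if self.prototypes:
            dists = torch.stack([torch.dist(x, mean) for mean, _ in self.prototypes])
            i = int(dists.argmin())
            if dists[i] < self.threshold:          # close enough: same task, update its mean
                mean, n = self.prototypes[i]
                self.prototypes[i] = (mean + (x - mean) / (n + 1), n + 1)
                return self.prototypes[i][0]
        self.prototypes.append((x.clone(), 1))     # far from everything: treat as a new task
        return x
```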
But if we want something that's" }, { "start": 2055.6, "end": 2063.48, "text": " a bit more general and can apply to other domains, like ImageNet, as you mentioned," }, { "start": 2063.48, "end": 2070.04, "text": " I think something along the lines of self-supervised learning might help there. That way, you're" }, { "start": 2070.04, "end": 2077.36, "text": " trying to build a context vector that is going to provide you sufficiently good task separation," }, { "start": 2077.36, "end": 2084.08, "text": " and it's not as simple as just averaging. Does that get at your question?" }, { "start": 2084.08, "end": 2087.64, "text": " Yeah, no, absolutely." }, { "start": 2087.64, "end": 2093.52, "text": " And I think also in meta-learning literature, there are prototyping methods that maybe process" }, { "start": 2093.52, "end": 2097.8, "text": " the raw input into an embedding space and then do clustering similar to what we're doing" }, { "start": 2097.8, "end": 2103.52, "text": " there. So I think that would be a quite simple approach that is similar in flavor to this" }, { "start": 2103.52, "end": 2109.6800000000003, "text": " one, but kind of embeds the raw input, like an ImageNet input, into some better clusterable" }, { "start": 2109.6800000000003, "end": 2115.5600000000004, "text": " space." }, { "start": 2115.5600000000004, "end": 2121.1600000000003, "text": " Another thing I noticed, and this is a minor thing, but here you feed the context signal" }, { "start": 2121.16, "end": 2128.56, "text": " into both of your layers. And in the experiment before here, you draw this very accurately." }, { "start": 2128.56, "end": 2133.6, "text": " You feed the context signal into only one of the layers, so it doesn't go in here. Is" }, { "start": 2133.6, "end": 2137.56, "text": " there a particular reason behind the choice of this?" }, { "start": 2137.56, "end": 2143.64, "text": " Yeah, so there's a bit of background regarding this. I want to say first that the continual" }, { "start": 2143.64, "end": 2150.52, "text": " learning and reinforcement learning projects started out as separate areas within Numenta." }, { "start": 2150.52, "end": 2153.96, "text": " And the goal for this was really to see if the same principles of the same model could" }, { "start": 2153.96, "end": 2159.08, "text": " work equally in both of these areas. So while we did modulate both the layers in continual" }, { "start": 2159.08, "end": 2163.72, "text": " learning, the intuition for not doing so in reinforcement learning was a bit different." }, { "start": 2163.72, "end": 2169.36, "text": " It was that the first layer should contain all the shared information the model needs," }, { "start": 2169.36, "end": 2173.32, "text": " and you could really do this without activating any specific sub-networks, and that the second" }, { "start": 2173.32, "end": 2179.56, "text": " layer would then activate the context-dependent sub-networks for each task. But you're absolutely" }, { "start": 2179.56, "end": 2183.72, "text": " right that we could have tried doing in-depth experiments where we modulated both layers" }, { "start": 2183.72, "end": 2189.16, "text": " for the RL setup. I think we started doing that at the beginning of this project, but" }, { "start": 2189.16, "end": 2193.48, "text": " we found it worked reasonably well. 
But because of the time and computing constraints of running" }, { "start": 2193.48, "end": 2198.56, "text": " each of these RL experiments, we decided to stick with the original plan and really pick" }, { "start": 2198.56, "end": 2203.7999999999997, "text": " a few key experiments and key architectures to run, and leave the ablations for" }, { "start": 2203.7999999999997, "end": 2208.56, "text": " the continual learning experiments, which are significantly faster to run. But" }, { "start": 2208.56, "end": 2215.7599999999998, "text": " you are absolutely right, though. We just went off of our intuition on this one." }, { "start": 2215.7599999999998, "end": 2223.04, "text": " It's just my reviewer two popping up, like, hey! But it's good. It's even interesting" }, { "start": 2223.04, "end": 2228.2799999999997, "text": " to see that this is kind of a convergence of projects. Could you tell us a little bit" }, { "start": 2228.2799999999997, "end": 2234.7999999999997, "text": " more about just the research process? You already talked about how this came to be," }, { "start": 2234.8, "end": 2241.5600000000004, "text": " but the process of researching this, it's kind of a new thing, right? You propose a" }, { "start": 2241.5600000000004, "end": 2248.28, "text": " new architecture. The tasks are, let's say, not that mainstream. People work on them," }, { "start": 2248.28, "end": 2255.84, "text": " but they're not super mainstream. Was it smooth sailing from beginning to end, like stepwise" }, { "start": 2255.84, "end": 2261.44, "text": " improvement? Or were there points that just didn't work at all for a long time? Or are" }, { "start": 2261.44, "end": 2270.28, "text": " there entire avenues that you discarded that didn't end up working out? Could you let" }, { "start": 2270.28, "end": 2275.84, "text": " other people know? I don't know what you can or want to disclose, but it's always interesting" }, { "start": 2275.84, "end": 2280.44, "text": " to hear what also didn't work out during a project." }, { "start": 2280.44, "end": 2287.64, "text": " I can start off. When we first tried implementing some of these ideas behind dendrites, you" }, { "start": 2287.64, "end": 2296.24, "text": " noticed that we talk about this, that we're picking the maximum dendritic activation and" }, { "start": 2296.24, "end": 2300, "text": " using that to modulate. We were just working on an initial toy task back then;" }, { "start": 2300, "end": 2306.08, "text": " we weren't working on continual learning yet. And it was through the process of trial and error" }, { "start": 2306.08, "end": 2311.08, "text": " that we realized: hey, we actually can't turn things" }, { "start": 2311.08, "end": 2315.04, "text": " off. We can only turn them on, because you are picking the maximum value, right? So how" }, { "start": 2315.04, "end": 2318.7599999999998, "text": " do you get something that's super sparse? We actually want to turn things off. So" }, { "start": 2318.7599999999998, "end": 2323.6, "text": " we're like, oh, okay, let's go back and not just pick the maximum, but pick" }, { "start": 2323.6, "end": 2328.88, "text": " the maximum and keep the sign. So if something's really negative, we're picking that. And so" }, { "start": 2328.88, "end": 2333.64, "text": " there's a whole appendix section" }, { "start": 2333.64, "end": 2336.96, "text": " with the details of how we're actually implementing this.
So it was through a bit of trial and error." }, { "start": 2336.96, "end": 2343.2799999999997, "text": " And then also, going back to the prototype: for a while we were thinking, well, how can" }, { "start": 2343.28, "end": 2347.88, "text": " we get something that really provides sufficient task differentiation? So we tried a bunch" }, { "start": 2347.88, "end": 2355.52, "text": " of different things. Just like Avi mentioned, he had a linear embedding, which was created" }, { "start": 2355.52, "end": 2360.2000000000003, "text": " from his context. We also had one for continual learning, but that didn't really work too" }, { "start": 2360.2000000000003, "end": 2364.7000000000003, "text": " well either. And we ended up converging on something that's really dumb and simple" }, { "start": 2364.7000000000003, "end": 2370.76, "text": " for permuted MNIST that ended up working out. Yeah." }, { "start": 2370.76, "end": 2375, "text": " There's actually, just based off of what Karan was saying, if you go to figure 11, I think" }, { "start": 2375, "end": 2382.1200000000003, "text": " you had some points there as well. It's a visualization, if I remember correctly. Yeah," }, { "start": 2382.1200000000003, "end": 2388.1600000000003, "text": " this one. 11. Yeah. So if you notice, we use the exact same gating technique for both continual" }, { "start": 2388.1600000000003, "end": 2393.88, "text": " learning and multitask reinforcement learning. And that's the absolute max gating: you're" }, { "start": 2393.88, "end": 2398.98, "text": " picking not only the absolute max, but you're retaining the sign. And what you'll notice" }, { "start": 2398.98, "end": 2403.06, "text": " is that the initial intuition for doing this was, as Karan just said, that you want" }, { "start": 2403.06, "end": 2409.16, "text": " to give each neuron the ability to either turn on or turn off. And it's very interesting," }, { "start": 2409.16, "end": 2414.12, "text": " because if you look at the results in multitask RL, you can see that for neuron B at least," }, { "start": 2414.12, "end": 2418.88, "text": " you see some negative activations, those red squares that you see. So that's effectively" }, { "start": 2418.88, "end": 2427.32, "text": " the neuron being told to turn off. It's the exact opposite of a strongly positive activation." }, { "start": 2427.32, "end": 2430.28, "text": " I think something that's very interesting to see is that, at least for the two neurons" }, { "start": 2430.28, "end": 2434.4, "text": " we've shown for continual learning on the right-hand side, you don't really see that" }, { "start": 2434.4, "end": 2439.6400000000003, "text": " happening. Either the neuron doesn't receive high magnitudes of activation, or it" }, { "start": 2439.6400000000003, "end": 2444.4, "text": " receives really high magnitudes, but it's all positive. So, something interesting to" }, { "start": 2444.4, "end": 2450.1600000000003, "text": " note: even in the multitask RL part, we were trying to understand" }, { "start": 2450.1600000000003, "end": 2455, "text": " whether max gating would work better than absolute max gating, in the sense of: do we want to" }, { "start": 2455, "end": 2462.24, "text": " discard the sign or keep the sign? In the beginning, there was a lot of trial and error" }, { "start": 2462.24, "end": 2468.2, "text": " process." },
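The absolute max gating just described, picking the dendritic response with the largest magnitude, keeping its sign, and squashing it through a sigmoid to modulate the neuron, can be sketched as follows. This is a minimal reading of the mechanism as explained in the conversation, not Numenta's exact code.

```python
import torch

def active_dendrites_gate(feedforward: torch.Tensor,
                          context: torch.Tensor,
                          segments: torch.Tensor) -> torch.Tensor:
    """
    feedforward: (batch, units)              output of the ordinary weighted sum
    context:     (batch, dim)                task/context signal
    segments:    (units, num_segments, dim)  dendritic segment weights
    """
    # Each segment matches the context with an inner product.
    acts = torch.einsum("bd,usd->bus", context, segments)   # (batch, units, segments)
    # Absolute max gating: take the strongest-magnitude response but KEEP its sign,
    # so a very negative segment can actively turn a neuron off.
    idx = acts.abs().argmax(dim=-1, keepdim=True)
    strongest = acts.gather(-1, idx).squeeze(-1)             # (batch, units)
    return feedforward * torch.sigmoid(strongest)

ff = torch.randn(4, 16)
ctx = torch.randn(4, 32)
seg = torch.randn(16, 8, 32)    # 16 units, 8 segments each
out = active_dendrites_gate(ff, ctx, seg)
```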
{ "start": 2468.2, "end": 2475.16, "text": " In multitask RL too, we spent a good amount of time understanding what the right sparsity levels were to apply for the weight sparsity in the feed-forward layers." }, { "start": 2475.16, "end": 2480.6, "text": " What we saw, I think, is also pretty intuitive. If you really increase your sparsity level" }, { "start": 2480.6, "end": 2485.08, "text": " to a really high sparsity, there's just not enough information in the network to keep" }, { "start": 2485.08, "end": 2489.16, "text": " training, and your accuracy plummets. But something that's interesting to note is that" }, { "start": 2489.16, "end": 2494.64, "text": " there's always a sweet spot for sparsity. Once you reach there, that's when the accuracy" }, { "start": 2494.64, "end": 2498.7599999999998, "text": " is the best." }, { "start": 2498.7599999999998, "end": 2503.3199999999997, "text": " How do you debug these things? What is your main method? Is your main method mainly setting" }, { "start": 2503.3199999999997, "end": 2510.36, "text": " a parameter and then running things? Are there good ways to peek inside at what's" }, { "start": 2510.36, "end": 2515.28, "text": " happening? What are things that you look at to debug something like this? Like, oh, we" }, { "start": 2515.28, "end": 2519.6800000000003, "text": " are not sparse enough, or we're too sparse, or we don't turn off neurons, or something" }, { "start": 2519.6800000000003, "end": 2520.6800000000003, "text": " like this." }, { "start": 2520.6800000000003, "end": 2525.8, "text": " I think diagrams like this, which you have on your screen, are a perfect example: visualizations" }, { "start": 2525.8, "end": 2532.2000000000003, "text": " of how the dendrites are behaving. Here you" }, { "start": 2532.2000000000003, "end": 2537.4, "text": " have, in both cases after learning, different segments responding to different task" }, { "start": 2537.4, "end": 2547.12, "text": " contexts. But there were cases early on where these diagrams looked like" }, { "start": 2547.12, "end": 2552.7200000000003, "text": " just horizontal bars. So you have the same segment that's just winning all the time." }, { "start": 2552.7200000000003, "end": 2556.48, "text": " So we realized, okay, well, this is not right. We don't want the same segment to always win." }, { "start": 2556.48, "end": 2561.56, "text": " So that helps in identifying, okay, this is why the network is failing." }, { "start": 2561.56, "end": 2566.8, "text": " So you would look at these things even during your research process. It's not just something" }, { "start": 2566.8, "end": 2571.88, "text": " that you made after the fact just to demonstrate to the readers." }, { "start": 2571.88, "end": 2575.5600000000004, "text": " Yeah. Oh, yeah. This was a very helpful tool for debugging." }, { "start": 2575.5600000000004, "end": 2579.52, "text": " Cool. I mean, that's really interesting to hear." }, { "start": 2579.52, "end": 2585.7200000000003, "text": " A lot of the architecture decisions that were made in continual learning were used in multitask" }, { "start": 2585.7200000000003, "end": 2593.6000000000004, "text": " RL, simply because each multitask experiment took 25-plus hours to run, easily.
So it was" }, { "start": 2593.6, "end": 2598.92, "text": " really hard to change a parameter, observe how the results and visualizations looked," }, { "start": 2598.92, "end": 2603.04, "text": " and then edit from there. So a lot of the intuitions that we got in RL came" }, { "start": 2603.04, "end": 2609.12, "text": " from our continual learning experiments. So that was nice." }, { "start": 2609.12, "end": 2615.7599999999998, "text": " Did you ever compare these things to, well, it's not too easy to compare, but sort of" }, { "start": 2615.7599999999998, "end": 2620.64, "text": " a baseline? Because there is the danger with these things that you kind of interpret. I" }, { "start": 2620.64, "end": 2625.72, "text": " think I said, well, couldn't the difference between the top and the" }, { "start": 2625.72, "end": 2631.72, "text": " bottom just be that one is at initialization and one is trained, and maybe it has not much to do" }, { "start": 2631.72, "end": 2637.2799999999997, "text": " with sparsity? Did you ever compare this to something that isn't explicitly sparse, or" }, { "start": 2637.2799999999997, "end": 2642.48, "text": " anything like this? Is there something you can say as a reference point?" }, { "start": 2642.48, "end": 2647.3199999999997, "text": " Yeah. So there's two things to note there. The first is that, at least for this visualization," }, { "start": 2647.32, "end": 2652.7200000000003, "text": " the activations are normalized with respect to when they were trained. So I think you" }, { "start": 2652.7200000000003, "end": 2657, "text": " mentioned this in your intro as well. You asked whether it could potentially be that you" }, { "start": 2657, "end": 2660.6800000000003, "text": " have really high activations in the beginning, and the area that you've circled there in" }, { "start": 2660.6800000000003, "end": 2665.52, "text": " purple just sort of gets dimmed down. And I think the important thing to note is" }, { "start": 2665.52, "end": 2671.52, "text": " they're all normalized. So the gap in values between the highest-activated neurons" }, { "start": 2671.52, "end": 2676.6400000000003, "text": " and the lowest-activated neurons is much bigger after training than before training. But to" }, { "start": 2676.64, "end": 2683.52, "text": " address the second point, I think that's regarding figure 10, if you scroll up. And that was:" }, { "start": 2683.52, "end": 2688.64, "text": " why don't we have a baseline for this? Is it really the active-dendrites networks" }, { "start": 2688.64, "end": 2694.12, "text": " that are creating these hyper-sparse sub-networks? And to that, you're absolutely right. We should" }, { "start": 2694.12, "end": 2699.8399999999997, "text": " have had a nice diagram here that also showed how this would look in a baseline MLP. You're" }, { "start": 2699.8399999999997, "end": 2703.44, "text": " absolutely right. That's something that we could definitely include." }, { "start": 2703.44, "end": 2708.08, "text": " I mean, I totally believe you that it's very sparse. It's just that it's" }, { "start": 2708.08, "end": 2712.64, "text": " not obvious from a diagram like this. Like, what should I expect?" }, { "start": 2712.64, "end": 2722.84, "text": " Yeah, but cool. There is one other thing, by the way: I have" }, { "start": 2722.84, "end": 2730.44, "text": " mad respect for you for including the graph on the right.
" }, { "start": 2730.44, "end": 2737.08, "text": " 90-plus percent of researchers would leave something like this away, specifically because no one would" }, { "start": 2737.08, "end": 2742.6, "text": " notice if you left it out, right? No one comes to you and says, well, okay," }, { "start": 2742.6, "end": 2748.84, "text": " maybe someone comes to you, but no one would seriously miss adding the SI to both" }, { "start": 2748.84, "end": 2754.48, "text": " of these things. And on the left, you beat them very clearly. So, you" }, { "start": 2754.48, "end": 2759.84, "text": " know, huge respect for including that. It's, I think, to be commended" }, { "start": 2759.84, "end": 2766.4, "text": " and to be highlighted. I think, you know, when we present a new architecture like this," }, { "start": 2766.4, "end": 2771.08, "text": " we really want to show the community that, hey, we can do things like continual" }, { "start": 2771.08, "end": 2778.92, "text": " learning with our more biologically inspired ideas, and it's competitive with what's already" }, { "start": 2778.92, "end": 2783, "text": " out there, right? So even if we're not beating the state of the art, I think that's" }, { "start": 2783, "end": 2787.1600000000003, "text": " perfectly fine. Even though, you know, nowadays a lot of machine learning has turned into" }, { "start": 2787.16, "end": 2791.2799999999997, "text": " this competition of getting the best numbers, and if you don't have the" }, { "start": 2791.2799999999997, "end": 2794.8799999999997, "text": " best numbers, apparently that means you won't be able to publish anymore. So" }, { "start": 2794.8799999999997, "end": 2801.3999999999996, "text": " yeah, to add on to that, I think the purpose of this paper is really something that" }, { "start": 2801.3999999999996, "end": 2806.3999999999996, "text": " we all said in the beginning: we really want to show a proof of concept" }, { "start": 2806.3999999999996, "end": 2810.3199999999997, "text": " for this completely novel architecture, where the goal is really not to get state-of-the-" }, { "start": 2810.3199999999997, "end": 2814.72, "text": " art accuracy on either of these benchmarks. It's really about the promise of something" }, { "start": 2814.72, "end": 2819.04, "text": " new, something I think deep learning has been missing for the past, what, 10" }, { "start": 2819.04, "end": 2824.24, "text": " years or so. So yeah, it's exciting." }, { "start": 2824.24, "end": 2829.8399999999997, "text": " And the last thing maybe we can get into is this comparison to other networks," }, { "start": 2829.8399999999997, "end": 2837.24, "text": " because you very clearly address this in a paragraph. And I think, wait, I" }, { "start": 2837.24, "end": 2842.08, "text": " have even a transformer diagram somewhere; you clearly address this in a paragraph saying," }, { "start": 2842.08, "end": 2847.96, "text": " isn't this just equivalent to a bigger network? And I tried myself also" }, { "start": 2847.96, "end": 2853.36, "text": " to come up with, you know, is there some way I could do the multiplication in an MLP?" }, { "start": 2853.36, "end": 2859.56, "text": " And I'm fairly convinced there isn't. But there is a connection clearly to LSTMs," }, { "start": 2859.56, "end": 2864.88, "text": " which do modulate things with forget gates and so on.
They even have sigmoids," }, { "start": 2864.88, "end": 2873.12, "text": " right? So they can model this on-or-off, and also sparsity to an extent." }, { "start": 2873.12, "end": 2878.08, "text": " And I also think that a transformer, like a two-layer transformer, could" }, { "start": 2878.08, "end": 2884.36, "text": " conceivably model the interaction right here. Did you explore at all" }, { "start": 2884.36, "end": 2890.84, "text": " the connections of this active dendrites framework to other models? Is there" }, { "start": 2890.84, "end": 2893.44, "text": " something you can say about that?" }, { "start": 2893.44, "end": 2897.48, "text": " I definitely think that these are great observations, by the way. The kind of relationship" }, { "start": 2897.48, "end": 2903.56, "text": " between attention in transformers and the gating in LSTMs and GRUs: there's definitely" }, { "start": 2903.56, "end": 2908.62, "text": " a relationship between those mechanisms and what we're doing here. In our research" }, { "start": 2908.62, "end": 2913.56, "text": " process, we definitely thought a lot about how this gating mechanism could be related" }, { "start": 2913.56, "end": 2917.04, "text": " to things like multi-headed attention, where basically you're doing a similar thing:" }, { "start": 2917.04, "end": 2921.94, "text": " you're matching keys and queries as vectors with an inner product, and then using" }, { "start": 2921.94, "end": 2926.36, "text": " that as a way to see what parts of a sequence, for example, to weight when you're considering" }, { "start": 2926.36, "end": 2934, "text": " a certain position. I think the similarity is that" }, { "start": 2934, "end": 2942.16, "text": " in the specific instance of attention, you are using learned weights to match a given" }, { "start": 2942.16, "end": 2947.64, "text": " input. So for example, in our active dendrites, you're matching the context with the set of" }, { "start": 2947.64, "end": 2952.7999999999997, "text": " dendritic segments, and in attention, you're matching the query vector with a set" }, { "start": 2952.7999999999997, "end": 2959.68, "text": " of keys. I think that the key difference is the purpose for which it's done. Here," }, { "start": 2959.68, "end": 2963.4, "text": " in active dendrites, you're looking at a specific neuron and you're saying, okay, given the" }, { "start": 2963.4, "end": 2969.7999999999997, "text": " context, is this neuron relevant? In transformers, you're saying, okay, here's a position. What" }, { "start": 2969.7999999999997, "end": 2974.7999999999997, "text": " context around me, in terms of the sentence, for example, is relevant for me? And how can" }, { "start": 2974.8, "end": 2981.5600000000004, "text": " I weight certain aspects of it? So I think it's a little bit flipped in the interpretation" }, { "start": 2981.5600000000004, "end": 2988.96, "text": " of the focus. Shifting to the LSTM aspect: I think as a mechanism it's quite" }, { "start": 2988.96, "end": 2994.96, "text": " similar, in that the LSTM can actually turn off or turn on certain units" }, { "start": 2994.96, "end": 3001.52, "text": " to carry forward in time. And yeah, exactly, that's what's done here. I think the difference" }, { "start": 3001.52, "end": 3006.84, "text": " is that we now focus more on the sparsity aspect of it. In LSTMs, you're doing a weighted
In LSTMs, you're doing like a weighted" }, { "start": 3006.84, "end": 3010.7599999999998, "text": " sum between what's in the past and what's current and saying, okay, let's pass this" }, { "start": 3010.7599999999998, "end": 3017.36, "text": " forward. And there's no aspect of like using this to enforce a level of sparsity. Here," }, { "start": 3017.36, "end": 3021.62, "text": " we're saying, okay, let's turn off certain things and do that in order to remain sparse" }, { "start": 3021.62, "end": 3026.12, "text": " and pass forward this information. So there's definitely a relationship there. I think the" }, { "start": 3026.12, "end": 3033.2799999999997, "text": " interpretation is similar, but a little bit different. And I think in all of these things," }, { "start": 3033.2799999999997, "end": 3040, "text": " again, to highlight, LSTMs and transformers, they're all trained, let's say, with back" }, { "start": 3040, "end": 3046.12, "text": " prop, and all the parameters are trained. So still, you'd run into the same problems" }, { "start": 3046.12, "end": 3050.8399999999997, "text": " where if you do discontinue learning, tasks would interfere with each other, no matter" }, { "start": 3050.84, "end": 3058.6400000000003, "text": " how much they can implement the multiplication. So that's definitely a difference. So in your" }, { "start": 3058.6400000000003, "end": 3062.2400000000002, "text": " outlook section, I haven't mentioned this in the video, but you discuss sort of what" }, { "start": 3062.2400000000002, "end": 3069.84, "text": " to do next. And you mentioned a lot of like, oh, yeah, we want to investigate maybe the" }, { "start": 3069.84, "end": 3078.84, "text": " combination of RL and continual learning and so on. Is there something that's here? Is" }, { "start": 3078.84, "end": 3087.2400000000002, "text": " there? Yeah, you said, you mentioned neuroscience a little bit, what would be sort of the next" }, { "start": 3087.2400000000002, "end": 3095.52, "text": " big things from neuroscience to include in deep learning architectures that aren't yet" }, { "start": 3095.52, "end": 3101.2000000000003, "text": " really done by other people? Like, is there something where, you know, you could say," }, { "start": 3101.2000000000003, "end": 3107.08, "text": " well, if we had that, that's not really in our deep networks yet. But if we had that," }, { "start": 3107.08, "end": 3117.12, "text": " that would be like, amazing. I think this is a very small point. But the" }, { "start": 3117.12, "end": 3121.2599999999998, "text": " dendrites that we're sort of modeling right now are, they can be considered the basal" }, { "start": 3121.2599999999998, "end": 3125.5, "text": " dendrites. I think you went over this briefly in your intro. And the basal dendrites are" }, { "start": 3125.5, "end": 3130.7599999999998, "text": " responsible for receiving this context and depolarizing the main cell to either fire" }, { "start": 3130.7599999999998, "end": 3135.72, "text": " or not, if that context was recognized. Something that we haven't looked into, which could be" }, { "start": 3135.72, "end": 3140.24, "text": " potentially interesting is modeling apical dendrites. And the apical dendrites receive" }, { "start": 3140.24, "end": 3149.04, "text": " feedback from other cells that also biases the soma to fire or not. I think that could" }, { "start": 3149.04, "end": 3155.7999999999997, "text": " be a potentially interesting way to also gate each individual neuron. 
I think standard deep" }, { "start": 3155.7999999999997, "end": 3159.9599999999996, "text": " learning doesn't do any of this anyway. It only considers the proximal dendrites, which" }, { "start": 3159.96, "end": 3166.48, "text": " are mimicked by the simple linear weighted sum that determines if the neuron fires. But" }, { "start": 3166.48, "end": 3170.92, "text": " if we can gather all this other neuroscience background from all the other kinds of dendrites" }, { "start": 3170.92, "end": 3174.84, "text": " too, like apical dendrites, it could be a potentially very interesting architecture," }, { "start": 3174.84, "end": 3180.6, "text": " like a very powerful one for dynamic scenarios." }, { "start": 3180.6, "end": 3186.96, "text": " The issue of top-down feedback or lateral inhibition or anything like this: a lot of" }, { "start": 3186.96, "end": 3193.48, "text": " people talk about it, but I haven't yet seen anyone successfully bring it into a deep network" }, { "start": 3193.48, "end": 3200.4, "text": " and actually do something useful with it. So definitely, beyond dendrites, just mechanisms" }, { "start": 3200.4, "end": 3203.76, "text": " like this would be super helpful." }, { "start": 3203.76, "end": 3208.12, "text": " I think another aspect, which is a little bit different from what Avi just said," }, { "start": 3208.12, "end": 3214.04, "text": " that would be quite interesting, is the local learning rule aspects that are present in" }, { "start": 3214.04, "end": 3218.32, "text": " biological neurons, and how they might relate to unsupervised learning in conventional machine" }, { "start": 3218.32, "end": 3223.12, "text": " learning. A lot of the unsupervised learning objectives are addendums to the loss" }, { "start": 3223.12, "end": 3229.2799999999997, "text": " function that we think might be useful, and it just flows through the network. I might" }, { "start": 3229.2799999999997, "end": 3232.14, "text": " be wrong, but I don't think there's a lot of research into figuring out which parts" }, { "start": 3232.14, "end": 3236.9, "text": " of the network could focus on certain things in an unsupervised way, which might be better" }, { "start": 3236.9, "end": 3243.8, "text": " done in biological networks. I think thinking about that, and getting inspiration to see" }, { "start": 3243.8, "end": 3249.92, "text": " how local learning rules applied in an unsupervised way could improve performance in modern deep" }, { "start": 3249.92, "end": 3252.6800000000003, "text": " learning, would be super cool." }, { "start": 3252.6800000000003, "end": 3260.2400000000002, "text": " Cool. Do you have anything to add, anything people should know, or that we haven't talked" }, { "start": 3260.2400000000002, "end": 3265.36, "text": " about yet about the paper? People can get started with your code, which is online; I've" }, { "start": 3265.36, "end": 3273.48, "text": " seen that, which is very cool. Anything you want to get out there to the viewers?" }, { "start": 3273.48, "end": 3284.2, "text": " The take-home message, or what we want it to be, is that the brain is able to do" }, { "start": 3284.2, "end": 3288.7400000000002, "text": " a lot of different things, and it's using different neural circuits to do it. But neural networks," }, { "start": 3288.7400000000002, "end": 3292.72, "text": " as they were designed decades ago, are really just optimizing for one thing.
They're" }, { "start": 3292.72, "end": 3296.2400000000002, "text": " great function approximators, but you don't just want to approximate one function. You" }, { "start": 3296.2400000000002, "end": 3302.88, "text": " want to be able to approximate multiple functions. We're trying to show that, hey, there are" }, { "start": 3302.88, "end": 3309.88, "text": " ways where we can get neural networks to actually have different sub-networks, different neural" }, { "start": 3309.88, "end": 3318.28, "text": " circuits that are able to be different function approximators. If we can do that, then neural" }, { "start": 3318.28, "end": 3325.32, "text": " networks will be able to operate in more dynamic, changing scenarios. I think that's really" }, { "start": 3325.32, "end": 3331.36, "text": " exciting because the world is constantly changing, but a lot of the applications for deep learning" }, { "start": 3331.36, "end": 3338.04, "text": " right now are the environments that they operate in, are static. If we can get to that, then" }, { "start": 3338.04, "end": 3341.04, "text": " that's great." }, { "start": 3341.04, "end": 3349.6800000000003, "text": " Cool. Well, Akash, Karen, Avi, thank you very much for being here today. This was great" }, { "start": 3349.6800000000003, "end": 3351.6800000000003, "text": " fun and I learned a lot." }, { "start": 3351.6800000000003, "end": 3356.28, "text": " Yeah, thanks, Yannick. Now you're influencing my fashion." }, { "start": 3356.28, "end": 3357.28, "text": " Nice." }, { "start": 3357.28, "end": 3364.1200000000003, "text": " I'll join the show." }, { "start": 3364.1200000000003, "end": 3368.88, "text": " Thanks so much for being here. Yeah, I hope you continue this because it's really cool" }, { "start": 3368.88, "end": 3372.32, "text": " and I think we're missing it in deep learning." }, { "start": 3372.32, "end": 3373.32, "text": " Thanks, Yannick. That was a lot of fun." }, { "start": 3373.32, "end": 3374.32, "text": " It was a pleasure." }, { "start": 3374.32, "end": 3375.32, "text": " Thanks for having us." }, { "start": 3375.32, "end": 3390.32, "text": " Thanks for having me." } ]
rd3R_G6_UfY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Full Self-Driving is HARD! Analyzing Elon Musk re: Tesla Autopilot on Lex Fridman's Podcast
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "lex fridman", "elon musk", "elon", "musk", "tesla fsd", "when will fsd ship", "when will fsd be ready", "tesla fsd release", "tesla fsd release date", "how does tesla autopilot work", "does tesla use neural networks", "andrej karpathy", "self driving", "tesla self driving", "how good is tesla fsd", "how safe is tesla", "vector space", "podcast", "analysis", "elon musk self-driving", "how good is tesla autopilot" ]
#tesla #fsd #elon Watch the original podcast: https://www.youtube.com/watch?v=DxREm3s1scA An analysis of Elon's appearance on Lex Fridman. Very interesting conversation and a good overview of past, current, and future versions of Tesla's Autopilot system. OUTLINE: 0:00 - Intro 0:40 - Tesla Autopilot: How hard is it? 9:05 - Building an accurate understanding of the world 16:25 - History of Tesla's neural network stack 26:00 - When is full self-driving ready? 29:55 - FSD 11: Less code, more neural networks 37:00 - Auto-labelling is essential 39:05 - Tesla Bot & Discussion Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hey, how's everyone doing today? We're going to analyze Elon Musk's appearance on the Lex Fridman podcast. Specifically, we're going to look at the part where Elon talks about the Tesla autopilot and to a certain degree, also the Tesla bot. We've previously analyzed the talk by Andrej Karpathy about what kind of architectures and so on go into the Tesla self-driving system. And this naturally progresses over time. So Elon's going to drop some more hints here. What exactly is going on under the hood? We're going to dive right in. Let me know if you enjoy talk analysis or not. Who knows? All I know is that whenever you put Elon Musk on something, you get insanely many clicks. So thank you for that. Autopilot. Tesla autopilot. I love how they go like autopilot and then both are like, yeah, as if they're saying like, yeah, like that's ever going to work. As you might know, autopilot is a bit behind schedule. It's been promised again and again and again, especially the full self-driving sort of autopilot. But there also has been insanely much progress. Like no one else is pushing it like that. People have told me, you know, other car companies are doing it as well. Yeah, but no one's kind of pushing it quite like that. And sure, there are some risks to go along with rolling out alpha and beta versions just to users. But I mean, come on. And so there is a natural skepticism. When I first drove a Tesla with the initial system based on Mobileye, I thought there's no way. So first, when I got in, I thought there's no way this car could maintain, like stay in the lane and create a comfortable experience. OK, so I didn't know that the first system was based on Mobileye, which is interesting because at one point during my PhD, we got a visit from a researcher who also worked on Mobileye. I won't name the researcher here because I might be about to tell some stuff that would get them into trouble. But they showed us a video of themselves in a car. I remember this vividly. And the car was just kind of opened. The whole dashboard was opened. All the cables were like hanging out and going into some laptop that was just kind of dangling in sort of the middle of the car, you know, where the stick goes, I don't know what you call that thing in English. It was like a super unstable setup and, you know, cables flying around everywhere. And then the camera kind of pans up and you can see that car is on the highway, like middle of the highway. Car is here, car is here, and it's just driving itself. You see the steering wheel, no hands on it. And it was insane. Like when I saw this, I never expected technology to be this far already. And yes, I know in the 70s and 80s, people have done self-driving on highways. But still, for someone to trust the system enough to essentially sit there and let the system steer the car based on nothing but cameras was insane. This system is just the beginning, like the baseline for the Tesla system. I didn't know that. And I thought it was an interesting story to tell. I was already super impressed by the Mobileye system. Yet, as you will see, this has been surpassed a lot. What are some insights you've gained over those five, six years of autopilot about the problem of autonomous driving? So you leaped in having some sort of first principles kinds of intuitions, but nobody knows how difficult the problem is. I thought the self-driving problem would be hard, but it was harder than I thought. It's not like I thought it would be easy.
I thought it would be very hard, but it was actually way harder than even that. So what it comes down to at the end of the day is, to solve self-driving, you have to solve... You basically need to recreate what humans do to drive, which is humans drive with optical sensors, eyes, and biological neural nets. And so in order to... That's how the entire road system is designed to work: with basically passive optical and neural nets, biologically. And now that we need to... So actually for full self-driving to work, we have to recreate that in digital form. So we have to... So the argument here is, I guess: if you want to solve the self-driving problem, you need to essentially do what humans do. And I'm not exactly buying this argument: just because humans drive only with vision, and especially just because humans have neural networks, doesn't mean we also must use neural networks. That seems a bit shady, but there is a point to it, right? The whole road system and cars and whatnot are designed around human capabilities, and vision and audio and stuff like this. And therefore, yes, it's good to drive if you have like a radar and a lidar and whatnot, those are additional sensors, but you're not going to get around building in the human sensors as well. So a car that just drives mainly on radar or lidar is probably good at avoiding obstacles that are just on the road somewhere, but it's not going to be able to see any signs. It's not going to be able to sort of make sense of the world visually, understand what's going on, and things like this. And if something's speeding along, coming along, and you can anticipate it by vision, that's probably a lot better than having to somehow detect it on the radar. So I think that's a fair point right here. But humans have neural networks, therefore we must have neural networks? I'm not super sure that's valid. How much game theoretic kind of stuff needs to be involved at a four-way stop sign? As humans, when we drive, our actions affect the world. It changes how others behave. Most of the time, when driving, you're usually just responding to the scene as opposed to really asserting yourself in the scene. Do you think... I think these sort of control logic conundrums are not the hard part. What do you think is the hard part in this whole beautiful complex problem? So it's a lot of freaking software, man. A lot of smart lines of code. For sure, in order to have... Create an accurate vector space. So like you're coming from image space, which is... So I think Elon's gonna make the point here that... Lex's concern is that there's a lot of game theoretic stuff, and he mentions the four-way crossroads, where you sort of have to communicate who goes first and who goes last, and so on. And Elon says that that's not the big problem in self-driving. He's gonna make the point that once you do have an accurate representation of the world, once you know where every car is and so on, and what every sign means, you can figure this stuff out easily. And I think I agree. At least the number of situations you can broadly cover with programming heuristics is sort of countable, and I would guess that that would work. Though I'm not super sure if that goes all the way, because there is game theoretic stuff. Like you can, you know, change a lane based on the fact that you know, kind of game theoretically, that other people won't sort of cut you off while you do it, because they'd crash their car and so on.
Which you can't just know by looking at their speeds and the positions of the cars. Sort of the anticipation of how everyone else is going to react in certain situations is, I think, a big part of driving and also a big part of sort of predicting dangers. So I'm not super sure if you can just hard code all of that. But I think it's fair to say that, you know, the perception problem is conceptually the harder problem, because for the perception problem there isn't even an approach with regular programming, right? You have to sort of learn it. And yes, if you make a mistake in the perception problem, that's going to have vast downstream effects. So I do agree here that the self-driving problem might, at least at this time, largely be a computer vision, or let's say, not only vision, but sort of world understanding perception problem. After that, it becomes sort of easier. Once you have an accurate vector space, the control problem is similar to that of a video game, like a Grand Theft Auto or Cyberpunk. Oh, yeah. Yes, I want my traffic management system. I want my self-driving system to be the one from Cyberpunk, please. Lord help us, please. Yeah, I mean, point taken, right? What Elon calls vector space right here, I guess you'd sort of call a scene understanding, a scene graph, you know, anything like this. Essentially: where are the objects in the scene, sort of what's their position, their momentum, I guess, you know, where are the signs, what do they mean, where are the traffic lights, all of this kind of stuff. Once you have that, the problem of sort of planning ahead what you should do becomes probably relatively easy, at least compared to that perception problem. Like when's the last time you looked right and left, you know, or rearward, or even diagonally, you know, forward to actually refresh your vector space. So you're glancing around, and what your mind is doing is trying to distill the relevant vectors, basically objects with a position and motion, and then editing that down to the least amount that's necessary for you to drive. The human mind does seem to be able to edit it down, or compress it even further, into things like concepts. So it's like it goes beyond... the human mind seems to go sometimes beyond vector space, to sort of a space of concepts, to where you'll see a thing and it's no longer represented spatially somehow. It's almost like a concept that you should be aware of. Like if this is a school zone, you'll remember that as a concept, which is a... That's a really good point. So Elon made the point essentially that what your brain is doing, and therefore what, you know, the AI should be doing, is take all that information and build what Elon calls this vector space, which is, as he said, sort of objects and their motions. But Lex goes a step further and says, well, you also know sort of that this is a school zone. And in a school zone, not only should I be driving slower, but there might be children around. So I need to be sort of careful. I, in fact, adapt my attention and my vision to different things than if it's, say, a highway. And I think that is, as of yet, probably not considered by these AI systems. I'm pretty sure the input feed is all the same, no matter whether it's a school zone or whether it is a highway. Of course, there's a difference: us humans have limited amounts of attention, and Elon just pointed out sort of all the ways in which your visual system is screwed up, like blind spots and yada, yada, yada.
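By the way, to pin down this "vector space" idea before we go on: you can think of it as the scene boiled down to a short list of typed object records instead of pixels. Here is a minimal sketch of what such a representation could look like; the SceneObject name, the fields, and the values are purely my own illustrative assumptions, not Tesla's actual format.

```python
# Toy sketch of a "vector space" scene: objects with position and motion,
# plus concept-level attributes like Lex's school zone. All fields here are
# assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    kind: str            # "car", "pedestrian", "traffic_light", "zone", ...
    position: tuple      # (x, y, z) in meters, ego-centric
    velocity: tuple      # (vx, vy, vz) in meters per second
    attributes: dict = field(default_factory=dict)

scene = [
    SceneObject("car", (12.0, -1.5, 0.0), (8.0, 0.0, 0.0)),
    SceneObject("traffic_light", (30.0, 3.0, 5.0), (0.0, 0.0, 0.0),
                {"state": "red"}),
    # Lex's point: some things are concepts, not just geometry
    SceneObject("zone", (0.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                {"concept": "school_zone"}),
]
print(len(scene), "objects in the vector space")
```

Anyway, back to the question of attention.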
That limited attention might be the reason why we have to sort of focus our attention on different things, you know, depending on where we are. So it could be that the machines are just, you know, they don't care. They can always pay attention to everything. And therefore, this is not a concern to them. I'm not entirely convinced by this. The sort of guiding of attention and the top down feedback loop to the lower systems is, I think, as of yet completely missing from the AI systems. I'm not sure actually. Maybe they do sort of feed, let's say they know they're in a school zone. They know, you know, the speed limit is such and such, or there's a construction site. Maybe they feed sort of embeddings of this stuff into sort of the vision networks. And the vision networks might be able to adjust sort of their attention patterns. Though they probably don't use attention; they probably use conv nets or so. But it would be interesting to see if that was happening. I would be very surprised if it was though. So not sure. This might be a fundamental limitation. It might be that without this, the driving problem is essentially unsolvable, or there's major hurdles that can't be overcome. It could also be that just, you know, the machines can always pay attention to everything and therefore it just doesn't matter. You saw that there were some kids about to cross the road in front of the truck. Now you can no longer see the kids, but you would now know, okay, those kids are probably going to pass by the truck and cross the road, even though you cannot see them. So you have to have, um, memory. You need to remember that there were kids there and you need to have some forward prediction of what their position will be. It's a really hard problem. I mean, yeah, exactly. So they're going to talk about occlusions here, uh, detecting occluded objects and so on. But I think Elon's point is bigger than that. You need to have a forward predicting model in order to, you know, solve the self driving problem to a realistic degree. And here I would, you know, challenge a little bit the statement that once you have the vector space, the problem is sort of, you know, not that hard. I think this particular part of the remaining problem is actually quite hard in itself, because it's not like you can just calculate the Nash equilibrium of self driving and then assume that everyone's acting rationally. You have to sort of take into account all the human factors right here and how you expect other humans to act, be that pedestrians or other drivers or anything like this. Yeah, I think this is another area, this sort of forward prediction, where neural nets, or machine learning in general, are going to make a big difference. And then as I said, I'd be wondering if there is sort of a top down feedback loop, such that as you're predicting forward, you're going to change sort of the perception pipeline on the fly, or not. But like, let's say you're parked at a light and, to use the pedestrian example, you saw that people were waiting to cross the road and you can't quite see them because of an occlusion. But they might wait for a minute before the light changes for them to cross the road. You still need to remember that that's where they were and that they're probably going to cross the road type of thing.
So even if that exceeds your time-based memory, it should not exceed your space memory. And I just think the data engine side of that, so getting the data to learn all of the concepts that you're saying now, is an incredible process. It's this iterative process of just... And I just think... So what he said right there, I think, is quite important as well. You can probably understand it in this context: let's say you did reinforcement learning on this thing. Typically in reinforcement learning, we have a finite window where you can go back over time and still be able to do backpropagation, especially if you're at a high frame rate like these systems operate at right here. That's not going to be a long time. It's not going to be a minute of real time. And therefore, yes, if you need to learn to remember something, like there are pedestrians right there and they're still there a minute later because all the lights were red, that is going to be quite a bit of a problem and a challenge in itself. Sort of learning to remember things is a long-standing challenge in reinforcement learning. And you'd probably be better off sort of encoding all the objects in this, what Elon calls the vector space. So understanding the scene and then explicitly representing each object that's there, rather than having the neural networks learn everything from perception. I think the data engine side of that, so getting the data to learn all the concepts that you're saying now, is an incredible process. It's this iterative process of just... This is HydraNet, many... HydraNet. We're changing the name to something else. Okay. I'm sure it'll be equally as Rick and Morty-like... There's a lot of... Yeah. We've re-architected the neural net in the cars so many times. It's crazy. Oh, so every time there's a new major version, you'll rename it to something more ridiculous or memorable and beautiful? Sorry. Not ridiculous, of course. If you see the full array of neural nets that are operating in the cars, it boggles the mind. There's so many layers, it's crazy. What is he actually saying here? It's hard to decipher Elon, because obviously he's not a deep learning engineer, so he probably gets the pitch from Andrej and some diagrams or something like this. But as of now, we don't know if there really are that many neural nets; it seems unlikely, because if it's mind-bogglingly many, you'd have to sort of train all of them. I couldn't really imagine how you'd put mind-bogglingly many neural networks into a system like this. I'm going to guess that they have a couple, and these are just kind of big and complicated. And that's exactly what we saw in Karpathy's talk when he explained how they go vision only and so on. If you haven't seen this, watch my analysis of that. He's about to explain a bit more in depth what's going on.
We started off with simple neural nets that were basically image recognition on a single frame from a single camera and then trying to knit those together with C. I should say we're primarily running C here, because C++ is too much overhead and we have our own C compiler. So to get maximum performance, we actually wrote our own C compiler and are continuing to optimize our C compiler for maximum efficiency. In fact, we've just recently done a new rev on the C compiler that will compile directly to our autopilot hardware. So you want to compile the whole thing down? I mean, he's going to talk about two things kind of interleaved right here that on the surface have not too much to do with each other. So apparently there is a C compiler that compiles directly to the hardware, which makes sense, right? These cars have the property that you have to be super duper efficient and power saving and whatnot. And running Python on top of that, the overhead of that might just be too much. You can in fact save a lot of energy, a lot of time and so on by building a compiler that uses the hardware as optimally as possible. Now that being said, this has little to do with how you build the neural network system, other than that the neural networks will be faster if you compile them down correctly. And so there's actually a lot of work done by some very talented software engineers at Tesla at a very foundational level to improve the efficiency of compute and how we use the trip accelerators, which are basically doing matrix math, dot products, like a bazillion dot products. And it's like, what are neural nets? It's like, compute-wise, like 99% dot products. So yeah, I mean, he's obviously correct right here, though it has to be said, you know, for anyone who's listening to this: your neural network isn't slow because you don't have the right compiler. It is true that if you do it correctly, compile your network down to a format that is optimal for some hardware, run it with, you know, the correct libraries, and set up everything correctly, you can probably get, if you did it terribly wrong and then you do it terribly right, up to a 10x speedup, I would guess. Maybe, you know, a 5x, 10x speedup, something like this, best case. However, usually the first thing you should investigate is whether or not the architecture you're using is the correct one. You can get like many, many more times of a speedup by simply changing the architecture to something more appropriate. So Elon says this here because obviously this is the last step, and, you know, they need to get every millisecond they can out of these systems. But just for most people listening, this is sort of the sugar, the icing on the cake. You should first care about the cake and try to make your architecture, you know, more optimal; maybe use fewer layers or anything like this, change from this operation to that operation, analyze your bottlenecks. And only once you have everything through and you have the exact model you want, then you can care about doing all the engineering things. One of the things we're moving towards now is no post processing of the image through the image signal processor. So like, what happens for almost all cameras is that there's a lot of post processing done in order to make pictures look pretty. And so we don't care about pictures looking pretty. We just want the data. So we're moving to just raw photon counts. So the system will... like, the image that the computer sees is actually much more than what you'd see if you represented it on a camera. It's got much more data. And even in very low light conditions, you can see that there's a small photon count difference between, you know, this spot here and that spot there, which means that it can see in the dark incredibly well, because it can detect these tiny differences in photon counts. That's much better than you could possibly imagine. So I mean, that is, again, a third issue next to the C compiler.
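Quick sanity check on the "99% dot products" line, by the way, since it sounds like an exaggeration but really isn't. Here is a toy count for a single dense layer; the width of 512 is an arbitrary assumption of mine, nothing Tesla-specific:

```python
# Back-of-the-envelope: a dense layer is a matrix multiply (all
# multiply-accumulates, i.e. dot products) plus a comparatively tiny ReLU.
import numpy as np

d = 512
x = np.random.randn(1, d)      # activations
W = np.random.randn(d, d)      # weights
y = np.maximum(x @ W, 0.0)     # the whole layer

matmul_flops = 2 * d * d       # one multiply and one add per weight
relu_flops = d                 # one comparison per output
share = matmul_flops / (matmul_flops + relu_flops)
print(f"dot-product share of compute: {share:.2%}")  # ~99.9%
```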
And as for the raw photon counts, what he's essentially saying is that if you remove the post processing within the camera sensors that are usually built into, let's say, cameras that you could buy on the market, then you get the raw data. And since you don't have to look at the pictures, the raw data is much more useful than the post-processed data, since it's a machine anyway that analyzes the signal, and therefore you might as well make it machine friendly. I think it is a good lesson for maybe other fields as well to think about, you know, what parts of the pipeline are just there because humans are involved, and try to remove those. But, you know, it doesn't really answer what's the deal with the neural networks, which I think was the original question here. And then we also save 13 milliseconds on latency. So from removing the post processing on the image? Yes. Yeah. It's like, because we've got eight cameras and then there's roughly, I don't know, one and a half milliseconds or so, maybe one point six milliseconds of latency for each camera. And so basically bypassing the image processor gets us back 13 milliseconds of latency, which is important. Yeah, I think, you know, besides getting the raw data, this is also, again, them needing to squeeze out sort of the last mile here, or the last milliseconds here. And this is another thing they can practically do. So getting rid of jitter is extremely important. And that affects your control decisions and all those kinds of things. OK. Yeah, the car is going to fundamentally maneuver better with lower jitter. The cars will maneuver with superhuman ability and reaction time much faster than a human. I mean, I think over time, the autopilot full self driving will be capable of maneuvers that are far more than what James Bond could do in the best movie type of thing. That's exactly what I was imagining in my mind, as you said. It's like impossible maneuvers that a human couldn't do. Well, OK, it's two things: impossible maneuvers are impossible, and things that humans could do are things that humans could do. I have no doubt that at one point in the near future, self driving cars will be able to do things that humans couldn't do. The question is more, are there going to be things that humans do that the cars couldn't do? Right. Or can't do? Because that's the actual gap you're trying to close. You know, look at Boston Dynamics or so. If you hard code stuff and you have extremely, extremely good sensors and actuators, you can do many things that humans couldn't do. But on the other hand, it's the things that humans can do that the machines can't. Those are the problem. Well, let me ask, sort of looking back the six years, looking out into the future, based on your current understanding: how hard do you think this full self driving problem is? When do you think Tesla will solve level four FSD? I think Elon gets asked this question every year and every year he says next year. So I mean, it's looking quite likely that it will be next year. This is the thing with Elon Musk: he always promises things like next year, or in ridiculously short amounts of time. And I wonder how long it's going to take for people to just, you know, stop believing him. I guess many people already did. But it's still, you know, a thing to consider: on one hand, obviously, if you do it too much, then people are simply going to say, oh, well, probably in five years if he says next year.
But on the other hand, he's also able to sort of... it's a motivating thing. It's a cool thing. It drives momentum. And that itself accelerates the development of these things, people being ready to just flip on a beta version and so on. It's a bit insane. But I do think his optimism, and a little bit of salesmanship, also has a lot of benefits besides the obvious negatives. So the interventions, you know, per million miles have been dropping dramatically. And at some point, and that trend looks like it happens next year, the probability of an accident on FSD is less than that of the average human, and then significantly less than that of the average human. So it certainly appears like we will get there next year. There's a lot of hedging going on here. But you know, this is actually a nice method, I think, of making these types of predictions: you see that the rate of disengagement is dropping at a certain speed, you can extrapolate maybe a little bit and say, look, you know, here's going to be the sort of threshold where we're better than a human. I think that's quite a sober analysis if done correctly. And I also think, you know, it's obviously good to be skeptical of fully self driving systems, but on the other hand, you also have to think: if they're a lot better than humans, it makes total sense, right? It also makes total sense to have them and not engage them all the time, right? There might still be situations you want to drive yourself. The question is a little bit, can you just continue the trend? Or is it sort of, okay, you solved the easy problems, and that is what makes the rates of disengagement go down now, but now come the more and more hard problems, and it gets exponentially harder to continue that trend, in which case we're not going to be there for a long time. Then there's going to be a case of, okay, we'll now have to prove this to regulators and prove it, you know... and we want a standard that is not just equivalent to a human, but much better than the average human. I think it's got to be at least two or three times higher safety than a human. Probably more like 10, knowing, you know, regulators and how the public perceives these types of things. Of course, right now they're cool, but then it's really easy to publicize a few accidents, the few stupid accidents that happen. If you build machine learning systems for the real world, they are going to make stupid mistakes. It doesn't matter how accurate they are on average, they're going to make stupid mistakes that a human would never do, and people are just going to point at it and never forget that one instance. And I think it's pretty easy to sort of scare people publicizing those kinds of things. And therefore, yeah, you have to be like massively better than humans. I agree here. There is some fundamental leap that really deserves the 11. I mean, that's a pretty cool number. Yeah. 11 would be a single stack for all, one stack to rule them all. But there are just some really fundamental neural net architecture changes that will allow for much more capability, but at first they're going to have issues. So we have this working on like sort of alpha software and it's good, but it's basically taking a whole bunch of C++ code and deleting a massive amount of C++ code and replacing it with a neural net. And Andrej makes this point a lot, which is like, neural nets are kind of eating software. So it's interesting what Elon says right here.
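A quick aside before the version 11 discussion continues: that trend-extrapolation argument is easy to sketch. Here is a minimal version with completely made-up numbers; the decay rate, the human baseline, and the exponential-trend assumption are all mine, purely for illustration.

```python
# Sketch of the disengagement-trend extrapolation (all numbers are fake).
import numpy as np

months = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
interventions = np.array([400.0, 250.0, 160.0, 100.0, 63.0])  # per 1M miles

# an exponential decay is a straight line in log space
slope, intercept = np.polyfit(months, np.log(interventions), 1)

human_baseline = 20.0  # fake stand-in for the average-human accident rate
crossover = (np.log(human_baseline) - intercept) / slope
print(f"trend crosses the human baseline after ~{crossover:.0f} months")
```

Of course, this only says something if the trend keeps holding, which is exactly the easy-problems-first caveat from above. Anyway, back to version 11.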
This upcoming version 11 of the Tesla software seems to have kind of a rewrite in what he calls the creation of the vector space. And specifically, he says you replace a whole bunch of C and C++ code with neural networks. And I guess what that means is that they used to have certain heuristics for what he calls creating the vector space, right? And remember, creating the vector space means seeing and understanding. So what objects exist? Where are they? How are they moving? And so on. And you want to get that out of your cameras and whatever other sensors you have. So it seems like until now, they had a bunch of neural networks that would do, you know, their stuff. I can imagine they had maybe single frame neural networks, or kind of short-frames-one-after-another neural networks, that would sort of recognize and bounding-box the objects in the image. And then they would use sort of a heuristic algorithm that they wrote themselves to stitch that together over time. Maybe they used algorithms to do some kind of inferences, like what he mentioned with the object tracking, and so on. And it seems to be that what they want to do is just end to end train one big neural network that just does it all. You input all of the sensor data, let's say from, you know, not only just right now, but from the recent past; you just input it all in there, and the neural network will spit out this finished vector space, this finished scene understanding graph. And this, obviously, you can see where it comes from. This has been the story of deep learning so far: replacing more and more classical heuristics with an end to end learning system. And it also matches exactly with what Elon is saying, namely that right now it doesn't seem to work quite well yet, but in time it will get there. And again, this has been the story of deep learning in pretty much everything we've tackled since the beginning of deep learning: end to end systems ultimately came to beat the heuristic systems, but it takes time, it takes work, it takes data, and obviously massive amounts of compute. You know, over time, there's like less and less conventional software, more and more neural net, which is still software, it still comes out as lines of software, but it's more neural net stuff and less, you know, heuristics, basically. It's more matrix-based stuff and less heuristics-based stuff. So by the way, the reason why this is the case, the reason why it works to replace heuristics with neural networks, with data driven systems, is that the world is always more complicated than you can encode in any heuristic. That's why we use machine learning in the first place: because we can't just program the algorithms that do image recognition, or speech recognition, or whatnot. So the only representation of this really complex world, like the actual underlying world that is so complicated, is the data. And therefore, our best chance to create systems that deal well with the world as such is systems that actually learn from data from the real world. And that's why it often works to replace the heuristics with data driven systems, if you have the data and if you have the compute, which Tesla obviously does.
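To make the "replace the stitching code with a neural net" idea concrete, here is a tiny, hedged sketch of what such an end-to-end module could look like: multi-camera frames over a short time window in, a fixed set of object vectors out. This is a toy in PyTorch; it is not Tesla's actual architecture, and every layer choice and dimension below is my own assumption.

```python
# Toy sketch (NOT Tesla's architecture): one network that maps a short history
# of multi-camera frames directly to a set of object vectors (the "vector
# space"), instead of per-frame nets plus hand-written stitching code.
import torch
import torch.nn as nn

class ToyVectorSpaceNet(nn.Module):
    def __init__(self, n_cameras=8, n_slots=32, obj_dim=8):
        super().__init__()
        # shared per-camera, per-frame image encoder
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fuse = nn.Linear(32 * n_cameras, 256)        # fuse all cameras
        self.memory = nn.GRU(256, 256, batch_first=True)  # temporal context
        self.head = nn.Linear(256, n_slots * obj_dim)     # object "slots"
        self.n_slots, self.obj_dim = n_slots, obj_dim

    def forward(self, frames):
        # frames: (batch, time, cameras, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 2))  # encode every single frame
        fused = torch.relu(self.fuse(feats.view(b, t, -1)))
        out, _ = self.memory(fused)                 # can carry occluded objects
        slots = self.head(out[:, -1])               # latest scene estimate
        # each slot could encode, e.g., (x, y, vx, vy, class logits, ...)
        return slots.view(b, self.n_slots, self.obj_dim)

net = ToyVectorSpaceNet()
print(net(torch.randn(1, 4, 8, 3, 64, 96)).shape)  # torch.Size([1, 32, 8])
```

The recurrent part is where the earlier memory discussion comes in: an occluded pedestrian can, in principle, stay in the predicted vector space even when no current frame shows them.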
And it does a pretty good job of it, but we need another layer of neural nets on top of that to take the giant bag of points and distill that down to vector space in the neural net part of the software as opposed to the heuristics part of the software. So the translation of this is probably, if I understand Elon correctly, what they were doing so far is sort of semantic segmentation or pixel based pixel labeling. I can also imagine that they estimated things like depth maps and so on just from pixels. But then, as I said before, it was heuristics, it was sort of classical algorithms. And these aren't, I mean, classical, these are advanced algorithms, right, that take point clouds that take sort of segmentation maps and depth maps and all of that and turn them into objects. These are mostly heuristic based but very sophisticated algorithms. But it is clearly a good or a, let's say a modern move to ditch all of that and also teach the neural networks to just handle it until you have the semantic result that you want, namely the space of objects, the scene understanding graph. It's really outputting proper vectors to the CC++ control code, as opposed to the sort of constructing the vectors in C. We've done, I think, quite a good job of, but it's kind of hitting a local maximum on how well the C can do this. So this is really a big deal. And just all of the networks in the car need to... By the way, whenever you hear him talk about C and C++ code, just replace that with human authored code, right? The difference isn't necessarily the language you use, the difference is more like who writes the code. And when he says C and C++, it's humans, very smart humans, but still humans that write the code out of their thinking. And whenever he says neural networks, it's some sort of a data-driven systems, which obviously human author in the first place, but probably also is as well implemented in C and C++. The training, the amount of work done with... We've written all this custom software for training and labeling and to do auto labeling. Auto labeling is essential, especially when you've got surround video. It's very difficult to label surround video from scratch. It's extremely difficult. Like a human's such a long time to even label one video clip, like several hours. Or the auto label it, basically we just apply a heavy duty, like a lot of compute to the video clips to pre-assign and guess what all the things are that are going on in the surround video. And then there's like correcting it. Yeah. And then all the human has to do is like tweet, like say, adjust what is incorrect. This is like increase this productivity by effect a hundred or more. Yeah. So you've presented that... I mean, we've discussed this in the last video that I did about Karpotty's talk. And this to me is, I think too few people are currently doing something like this. Essentially it's active learning, right? It's sort of, if you're not sure about something, ask the human. It has a slight twist on it in that they probably always ask the human, but they suggest a label which is super powerful, especially in something like semantic segmentation where you need to annotate every pixel or you need to place bounding boxes around many objects. It's really different if you simply have to check and adjust a little bit versus if, you know, there's a data point and you have to place the labels yourself. I think we're going to see quite a bit more of that in sort of the near future. 
A lot of people are already doing something like this, but I think still too few are. It's not quite in Tesla's primary mission direction of accelerating sustainable energy, but it is an extremely useful thing that we can do for the world, which is to make a useful humanoid robot that is capable of interacting with the world. All right. The rest of the talk about AI is about the Tesla bot, which is a bit more far-fetched, I have to say. The Tesla bot, just on its face, is way more complicated than a car, especially if it is supposed to not only, you know, be on the factory floor, in which case you'd just build like a robot arm, right, those are like the most useful things on a factory floor, but actually sort of interact with humans, or in a human way navigate not only unknown terrain, but also, potentially, society. I mean, this is just futurism at this point, and there's really nothing we can legitimately say about what's possible, what's not possible, and where this is going. And obviously, we don't have a prototype. We just have like a human in a suit to demonstrate the Tesla bot. So I will not comment much further on that. With respect to the Tesla full self driving system, I would say that, obviously, you know, for Elon Musk there's always kind of lovers and haters, and I think you can acknowledge both sides. He is a bit of a salesperson. He sells these things very well. He always promises, you know, next year we'll be ready, next year we'll be ready. And then they never are, or he overpromises massively on, you know, how much cost you can save, and yada, yada, yada. But then on the other hand, he also delivers a lot more than other people deliver. Maybe that's just because of a little bit of recklessness, but also the sort of optimism and momentum that he's able to come up with and drive. And all of that together, I think, just makes for like an interesting person. And I think the advances themselves are remarkable. Even if you say other car companies are on track as well and whatnot, Tesla has done more than all other car companies together for the adoption of electric vehicles. Yes, you can debate whether or not that in itself is a good thing. But just to say that it's not only salesmanship, there are also results. And I have no doubt that in the near future, we will see self driving cars. Sure, they're not going to be accident free, but I believe they will be much, much better than humans. And the question is simply: is this next year, in two years, in five years? I cannot tell you, but I'm excited to see. I hope you liked this talk analysis, interview analysis. If you want more of these things, let me know. Otherwise, let me know what you think in the comments, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 1.32, "text": " Hey, how's everyone doing today?" }, { "start": 1.32, "end": 6.16, "text": " We're going to analyze Elon Musk's appearance on the Lex Friedman podcast." }, { "start": 6.24, "end": 10.120000000000001, "text": " Specifically, we're going to look at the part where Elon talks about the Tesla" }, { "start": 10.120000000000001, "end": 13.4, "text": " autopilot and to a certain degree, also the Tesla bot." }, { "start": 13.52, "end": 18.36, "text": " We've previously analyzed the talk by Andrej Karpati about what kind of" }, { "start": 18.36, "end": 22.48, "text": " architectures and so on goes into the Tesla self-driving system." }, { "start": 22.52, "end": 25.12, "text": " And this naturally progresses over time." }, { "start": 25.240000000000002, "end": 28.080000000000002, "text": " So Elon's going to drop some more hints here." }, { "start": 28.08, "end": 30.68, "text": " What exactly is going on under the hood?" }, { "start": 30.72, "end": 31.68, "text": " We're going to dive right in." }, { "start": 31.68, "end": 35.56, "text": " Let me know if you enjoy talk analysis or not." }, { "start": 36.2, "end": 39.64, "text": " Who knows? All I know is that whenever you put Elon Musk on something," }, { "start": 39.64, "end": 41.36, "text": " you get insanely many clicks." }, { "start": 41.36, "end": 43.4, "text": " So thank you for that." }, { "start": 43.4, "end": 45.16, "text": " Autopilot." }, { "start": 45.16, "end": 46.64, "text": " Tesla autopilot." }, { "start": 49.16, "end": 53.08, "text": " I love how they go like autopilot and then both are like, yeah," }, { "start": 53.4, "end": 56.72, "text": " as if they're saying like, yeah, like, like, like that's ever going to work." }, { "start": 56.72, "end": 60.16, "text": " As you might know, autopilot is a bit behind schedule." }, { "start": 60.32, "end": 63.8, "text": " It's been promised again and again and again, especially the full" }, { "start": 63.8, "end": 66.16, "text": " self-driving sort of autopilot." }, { "start": 66.16, "end": 69.48, "text": " But there also has been insanely much progress." }, { "start": 69.48, "end": 71.56, "text": " Like no one is pushing that." }, { "start": 71.56, "end": 74.8, "text": " People have told me, you know, other car companies are doing it as well." }, { "start": 74.84, "end": 78.48, "text": " Yeah, but no one's kind of pushing it quite like that." }, { "start": 78.52, "end": 81.92, "text": " And sure, there are some risks to to go along with rolling out" }, { "start": 81.92, "end": 84.08, "text": " alpha and beta versions just to users." }, { "start": 84.08, "end": 85.64, "text": " But I mean, come on." }, { "start": 85.64, "end": 87.36, "text": " And so there is a natural skepticism." }, { "start": 87.36, "end": 91.88, "text": " When I first drove a Tesla with the initial system based on Mobileye," }, { "start": 92.36, "end": 94.56, "text": " I thought there's no way." }, { "start": 94.56, "end": 98.6, "text": " So first, when I got in, I thought there's no way this car could maintain" }, { "start": 100.44, "end": 102.92, "text": " like stay in the lane and create a comfortable experience." }, { "start": 103.8, "end": 108.04, "text": " OK, so I didn't know that the first system was based on on Mobileye," }, { "start": 108.04, "end": 111.32, "text": " which is interesting because at one point during my PhD," }, { "start": 111.32, "end": 115.6, "text": " we got visit from a researcher who also worked on Mobileye." 
}, { "start": 115.6, "end": 120.88, "text": " I won't name the researcher here because I might be about to tell some stuff" }, { "start": 120.88, "end": 122.64, "text": " that would get them into trouble." }, { "start": 122.64, "end": 127.91999999999999, "text": " But they showed us a video of themselves in a car." }, { "start": 128, "end": 129.51999999999998, "text": " I remember this vividly." }, { "start": 129.51999999999998, "end": 132.04, "text": " And the car was just kind of opened." }, { "start": 132.04, "end": 133.44, "text": " The whole dashboard was opened." }, { "start": 133.44, "end": 137.16, "text": " All the cables were like hanging out and going into some laptop" }, { "start": 137.16, "end": 140.68, "text": " that was just kind of dangling on sort of the the middle of the car," }, { "start": 140.68, "end": 143.35999999999999, "text": " you know, where the stick, I don't know what what you call that stuff in." }, { "start": 143.36, "end": 146.88000000000002, "text": " In English, it was like a super instable setup and, you know," }, { "start": 146.88000000000002, "end": 149.44000000000003, "text": " a cable flying around everywhere." }, { "start": 149.44000000000003, "end": 154.24, "text": " And then the camera kind of pans up and you can see that car is on the highway," }, { "start": 154.24, "end": 155.76000000000002, "text": " like middle of the highway." }, { "start": 155.76000000000002, "end": 159.44000000000003, "text": " Car is here, car is here and just driving itself." }, { "start": 159.44000000000003, "end": 161.76000000000002, "text": " You see the steering wheel, no hands on it." }, { "start": 161.76000000000002, "end": 163.4, "text": " And it was insane." }, { "start": 163.4, "end": 168.20000000000002, "text": " Like when I when I saw this, I never expected technology to be this far already." }, { "start": 168.20000000000002, "end": 171.36, "text": " And yes, I know in the 70s and 80s," }, { "start": 171.36, "end": 173.76000000000002, "text": " people have done self-driving on highways." }, { "start": 173.76000000000002, "end": 178.68, "text": " But still, for someone to trust the system enough to essentially sit there" }, { "start": 178.68, "end": 184.4, "text": " and let the system steer the car based on nothing but cameras was insane." }, { "start": 184.4, "end": 188.8, "text": " This system is just the beginning, like the baseline for the Tesla system." }, { "start": 188.8, "end": 189.8, "text": " I didn't know that." }, { "start": 189.8, "end": 192.44000000000003, "text": " And I thought it was an interesting story to tell." }, { "start": 192.44000000000003, "end": 195.20000000000002, "text": " I was already super impressed by the Mobilize system." }, { "start": 195.20000000000002, "end": 198.72000000000003, "text": " Yet, as you will see, this has been surpassed a lot." }, { "start": 198.72, "end": 204.96, "text": " What are some insights you've gained over those five, six years of autopilot" }, { "start": 204.96, "end": 207.84, "text": " about the problem of autonomous driving?" }, { "start": 207.84, "end": 214.32, "text": " So you leaped in having some sort of first principles kinds of intuitions," }, { "start": 214.32, "end": 219.12, "text": " but nobody knows how difficult the problem is." }, { "start": 219.12, "end": 220.88, "text": " I thought the self-driving problem would be hard," }, { "start": 220.88, "end": 222.56, "text": " but it was harder than I thought." 
}, { "start": 222.56, "end": 223.68, "text": " It's not like I thought it would be easy." }, { "start": 223.68, "end": 227.84, "text": " I thought it would be very hard, but it was actually way harder than even that." }, { "start": 227.84, "end": 232.72, "text": " So what it comes down to at the end of the day is to solve self-driving," }, { "start": 232.72, "end": 234.72, "text": " you have to solve..." }, { "start": 236.72, "end": 242.72, "text": " You basically need to recreate what humans do to drive," }, { "start": 242.72, "end": 247.76, "text": " which is humans drive with optical sensors, eyes, and biological neural nets." }, { "start": 247.76, "end": 250.32, "text": " And so in order to..." }, { "start": 250.32, "end": 253.12, "text": " That's how the entire road system is designed to work," }, { "start": 253.12, "end": 260.32, "text": " with basically passive optical and neural nets, biologically." }, { "start": 260.32, "end": 261.92, "text": " And now that we need to..." }, { "start": 261.92, "end": 266.96, "text": " So actually for full self-driving to work, we have to recreate that in digital form." }, { "start": 266.96, "end": 268.88, "text": " So we have to..." }, { "start": 268.88, "end": 274.24, "text": " So the argument here is, I guess, if you want to solve the self-driving problem," }, { "start": 274.24, "end": 276.64, "text": " you need to essentially do what humans do." }, { "start": 276.64, "end": 278.96, "text": " And I'm not exactly buying this argument," }, { "start": 278.96, "end": 281.92, "text": " just because humans only drive with vision," }, { "start": 281.92, "end": 285.44, "text": " especially just because humans have neural networks." }, { "start": 285.44, "end": 287.68, "text": " We also must use neural networks." }, { "start": 287.68, "end": 290.8, "text": " That seems a bit shady, but there is a point to it, right?" }, { "start": 290.8, "end": 293.84000000000003, "text": " That the whole road system and cars and whatnot" }, { "start": 293.84000000000003, "end": 298.64, "text": " are designed around human capabilities and vision and audio and stuff like this." }, { "start": 298.64, "end": 304.16, "text": " And therefore, yes, it's good to drive if you have like a radar and a lidar and whatnot," }, { "start": 304.16, "end": 305.84000000000003, "text": " that's additional sensors," }, { "start": 305.84000000000003, "end": 310.24, "text": " but you're not going to get around building in the human sensors as well." }, { "start": 310.24, "end": 314, "text": " So a car that just drives mainly on radar or lidar" }, { "start": 314, "end": 319.28000000000003, "text": " is probably good at avoiding obstacles that are just on the road somewhere," }, { "start": 319.28000000000003, "end": 321.68, "text": " but it's not going to be able to see any signs." }, { "start": 321.68, "end": 326.08, "text": " It's not going to be able to sort of make sense of the world visually," }, { "start": 326.08, "end": 328.64, "text": " understand what's going on and things like this," }, { "start": 328.64, "end": 332.08, "text": " which if something's speeding along, coming along," }, { "start": 332.08, "end": 334.32, "text": " and you can anticipate it by vision," }, { "start": 334.32, "end": 340.4, "text": " it's probably a lot better than you having to somehow detect it on the radar." }, { "start": 340.4, "end": 342.15999999999997, "text": " So I think that's a fair point right here." 
}, { "start": 342.15999999999997, "end": 346.48, "text": " But humans having neural network, therefore, we must have neural network." }, { "start": 346.48, "end": 349.68, "text": " I'm not super sure that's valid." }, { "start": 349.68, "end": 355.52, "text": " How much game theoretic kind of stuff needs to be involved at a four-way stop sign?" }, { "start": 357.12, "end": 360.64, "text": " As humans, when we drive, our actions affect the world." }, { "start": 362, "end": 363.92, "text": " It changes how others behave." }, { "start": 363.92, "end": 370, "text": " Most of the time, when driving, you're usually just responding to the scene" }, { "start": 370.64000000000004, "end": 374.48, "text": " as opposed to really asserting yourself in the scene." }, { "start": 374.48, "end": 374.88, "text": " Do you think..." }, { "start": 376.24, "end": 382.24, "text": " I think these sort of control logic conundrums are not the hard part." }, { "start": 388.56, "end": 393.68, "text": " What do you think is the hard part in this whole beautiful complex problem?" }, { "start": 393.68, "end": 395.44, "text": " So it's a lot of freaking software, man." }, { "start": 396.08, "end": 397.28000000000003, "text": " A lot of smart lines of code." }, { "start": 400.40000000000003, "end": 402.40000000000003, "text": " For sure, in order to have..." }, { "start": 404.64, "end": 406.40000000000003, "text": " Create an accurate vector space." }, { "start": 407.12, "end": 412.32, "text": " So like you're coming from image space, which is..." }, { "start": 412.32, "end": 415.04, "text": " So I think Elon's gonna make the point here that..." }, { "start": 415.76, "end": 419.84000000000003, "text": " What Lex's concern is that there's a lot of game theoretic stuff." }, { "start": 419.84000000000003, "end": 423.04, "text": " And he mentions the four-way crossroads." }, { "start": 423.04, "end": 427.44, "text": " And then you sort of have to communicate who goes first, who goes last, and so on." }, { "start": 427.44, "end": 431.76000000000005, "text": " And Elon says that that's not the big problem in self-driving." }, { "start": 431.76000000000005, "end": 436.24, "text": " He's gonna make the point that once you do have an accurate representation of the world," }, { "start": 436.24, "end": 439.92, "text": " once you know where every car is and so on, what every sign means," }, { "start": 439.92, "end": 442.48, "text": " that you can figure this stuff out easily." }, { "start": 442.48, "end": 444.08000000000004, "text": " And I think I agree." }, { "start": 444.08000000000004, "end": 448.88, "text": " At least the number of situations you can broadly cover with programming heuristics" }, { "start": 448.88, "end": 450.40000000000003, "text": " is sort of countable." }, { "start": 450.4, "end": 452.96, "text": " And I would guess that that would work." }, { "start": 452.96, "end": 455.59999999999997, "text": " Though I'm not super sure if that goes all the way." }, { "start": 455.59999999999997, "end": 457.28, "text": " Because there is game theoretic stuff." }, { "start": 457.28, "end": 462.32, "text": " Like you can, you know, change a lane based on the fact that you know," }, { "start": 462.32, "end": 466.96, "text": " kind of game theoretically, that other people won't sort of cut you off while you do it," }, { "start": 466.96, "end": 469.2, "text": " because they'd crash their car and so on." 
}, { "start": 469.2, "end": 474.56, "text": " Which you can't just know by looking at their speeds and the positions of the cars." }, { "start": 474.56, "end": 479.76, "text": " Sort of the anticipation of how everyone else is going to react in certain situations" }, { "start": 479.76, "end": 484.88, "text": " is, I think, a big part of driving and also a big part of sort of predicting dangers." }, { "start": 484.88, "end": 488.64, "text": " So I'm not super sure if you can just hard code all of that." }, { "start": 488.64, "end": 494.64, "text": " But I think saying that, you know, the perception problem is conceptually the harder problem." }, { "start": 494.64, "end": 499.92, "text": " Because for the perception problem, there isn't even an approach with regular programming, right?" }, { "start": 499.92, "end": 501.36, "text": " You have to sort of learn it then." }, { "start": 501.36, "end": 504.15999999999997, "text": " Yes, if you make a mistake in the perception problem," }, { "start": 504.15999999999997, "end": 506.56, "text": " that's going to have vast downstream effects." }, { "start": 506.56, "end": 513.36, "text": " So I do agree here that probably the self-driving problem might at least at this time," }, { "start": 513.36, "end": 518, "text": " largely be a computer vision, or let's say, not only vision," }, { "start": 518, "end": 521.6, "text": " but sort of world understanding perception problem." }, { "start": 521.6, "end": 524.56, "text": " After that, it becomes sort of easier." }, { "start": 524.56, "end": 527.76, "text": " Once you have an accurate vector space," }, { "start": 528.96, "end": 534.08, "text": " the control problem is similar to that of a video game, like a Grand Theft Auto or Cyberpunk." }, { "start": 534.08, "end": 536.1600000000001, "text": " Oh, yeah." }, { "start": 536.1600000000001, "end": 539.5200000000001, "text": " Yes, I want my traffic management system." }, { "start": 539.5200000000001, "end": 544.48, "text": " I want my self-driving system to be the one from cyberpunk, please." }, { "start": 547.84, "end": 549.44, "text": " Lord help us, please." }, { "start": 550.48, "end": 552.48, "text": " Yeah, I mean, point taken, right?" }, { "start": 552.48, "end": 557.76, "text": " What Elon calls vector space right here, I guess you'd sort of call a scene understanding," }, { "start": 557.76, "end": 560.48, "text": " a scene graph, you know, anything like this." }, { "start": 560.48, "end": 566, "text": " Essentially, where are the objects in the scene, sort of what's their position," }, { "start": 566, "end": 569.9200000000001, "text": " their momentum, I guess, you know, where are the signs, what do they mean," }, { "start": 569.9200000000001, "end": 572.48, "text": " where are the traffic lights, all of this kind of stuff." }, { "start": 572.48, "end": 577.6, "text": " Once you have that, the problem of sort of planning ahead what you should do" }, { "start": 577.6, "end": 582.16, "text": " becomes probably relatively easy, at least compared to that perception problem." }, { "start": 582.16, "end": 585.76, "text": " Like when's the last time you looked right and left, you know, or and rearward," }, { "start": 585.76, "end": 591.28, "text": " or even diagonally, you know, forward to actually refresh your vector space." 
}, { "start": 591.92, "end": 596.48, "text": " So you're glancing around and what your mind is doing is trying to distill" }, { "start": 597.52, "end": 601.84, "text": " the relevant vectors, basically objects with a position and motion." }, { "start": 603.6, "end": 610.24, "text": " And then editing that down to the least amount that's necessary for you to drive." }, { "start": 610.24, "end": 616.08, "text": " It does seem to be able to edit it down or compress it even further into things like concepts." }, { "start": 616.08, "end": 621.12, "text": " So it's not, it's like it goes beyond, the human mind seems to go sometimes beyond vector space," }, { "start": 621.76, "end": 625.36, "text": " to sort of space of concepts, to where you'll see a thing," }, { "start": 625.36, "end": 627.76, "text": " it's no longer represented spatially somehow." }, { "start": 627.76, "end": 630.24, "text": " It's almost like a concept that you should be aware of." }, { "start": 630.24, "end": 635.6, "text": " Like if this is a school zone, you'll remember that as a concept, which is a..." }, { "start": 636.64, "end": 638.16, "text": " That's a really good point." }, { "start": 638.16, "end": 644.0799999999999, "text": " So Elon made the point essentially that what your brain is doing and therefore what," }, { "start": 644.0799999999999, "end": 649.04, "text": " you know, the AI should be doing is take all that information and build what Elon calls" }, { "start": 649.04, "end": 653.68, "text": " this vector space, which is, as he said, sort of objects and their motions." }, { "start": 653.68, "end": 658.88, "text": " But Lex goes a step further and says, well, you also know sort of that this is a school zone." }, { "start": 658.88, "end": 664.3199999999999, "text": " And in a school zone, not only should I be driving slower, but there might be children around." }, { "start": 664.3199999999999, "end": 666.3199999999999, "text": " So I need to be sort of careful." }, { "start": 666.32, "end": 672.96, "text": " I in fact, adapt my attention and my vision on different things than if something like," }, { "start": 672.96, "end": 674.48, "text": " then if it's a highway." }, { "start": 674.48, "end": 680.24, "text": " And I think that is as of yet, probably not considered by these AI systems." }, { "start": 680.24, "end": 687.2800000000001, "text": " I'm pretty sure they, the input feed is all the same, no matter whether it's a school zone" }, { "start": 687.2800000000001, "end": 689.36, "text": " or whether it is a highway." }, { "start": 689.36, "end": 691.6, "text": " Of course, there's different things." }, { "start": 691.6, "end": 695.5200000000001, "text": " Us humans have limited amounts of attention and Elon just pointed out," }, { "start": 695.52, "end": 701.4399999999999, "text": " sort of all the ways in which your system is screwed up like blind spots and yada, yada, yada." }, { "start": 701.4399999999999, "end": 707.6, "text": " And that might be the reason why we have to sort of focus our attention on different things." }, { "start": 707.6, "end": 709.4399999999999, "text": " And, you know, depending on where we are." }, { "start": 709.4399999999999, "end": 713.1999999999999, "text": " So it could be that the machines are just, you know, they don't care." }, { "start": 713.1999999999999, "end": 715.4399999999999, "text": " They can always pay attention to everything." }, { "start": 715.4399999999999, "end": 718, "text": " And therefore, this is not a concern to them." 
}, { "start": 718, "end": 720.24, "text": " I'm not entirely convinced by this." }, { "start": 720.24, "end": 726.16, "text": " The sort of guiding of attention and sort of the top down feedback loop to the lower systems," }, { "start": 726.16, "end": 730.32, "text": " I think is as of yet, completely missing from the AI systems." }, { "start": 730.32, "end": 731.52, "text": " I'm not sure actually." }, { "start": 731.52, "end": 735.92, "text": " Maybe they do sort of feed, let's say they know they're in a school zone." }, { "start": 735.92, "end": 739.76, "text": " They know, you know, the speed limit is such and such and, or there's a construction site." }, { "start": 739.76, "end": 745.36, "text": " Maybe they feed sort of embeddings of this stuff into sort of the vision networks." }, { "start": 745.36, "end": 750.64, "text": " And the vision networks might be able to adjust sort of their attention patterns." }, { "start": 750.64, "end": 752.5600000000001, "text": " Not that probably they don't use attention." }, { "start": 752.5600000000001, "end": 754.64, "text": " They probably use con nets or so." }, { "start": 754.64, "end": 757.6800000000001, "text": " But it would be interesting to see if that was happening." }, { "start": 757.6800000000001, "end": 759.76, "text": " I would be very surprised if it was though." }, { "start": 759.76, "end": 761.28, "text": " So not sure." }, { "start": 761.28, "end": 762.88, "text": " This might be a fundamental limitation." }, { "start": 762.88, "end": 768.24, "text": " It might be that without this, the driving problem is essentially unsolvable or, or there's," }, { "start": 768.24, "end": 770.72, "text": " there's major hurdles that can't be overcome." }, { "start": 770.72, "end": 774.48, "text": " It could also be that just, you know, the machines can always pay attention to everything." }, { "start": 774.48, "end": 776.5600000000001, "text": " And therefore it just doesn't matter." }, { "start": 776.5600000000001, "end": 780.88, "text": " You saw that there were some kids about to cross the road in front of the truck." }, { "start": 780.88, "end": 785.36, "text": " Now you can no longer see the kids, but you, you need to be able, but you would now know," }, { "start": 785.36, "end": 790.16, "text": " okay, those kids are probably going to pass by the truck and cross the road, even though" }, { "start": 790.16, "end": 791.28, "text": " you cannot see them." }, { "start": 791.28, "end": 798.4, "text": " So you have to have, um, memory, uh, you have to need to remember that there were kids there" }, { "start": 798.4, "end": 803.6, "text": " and you need to have some forward prediction of what their position will be." }, { "start": 803.6, "end": 805.28, "text": " It's a really hard problem." }, { "start": 805.28, "end": 806.88, "text": " I mean, yeah, exactly." }, { "start": 806.88, "end": 812.24, "text": " So they're going to talk about occlusions here, occlusions, uh, detecting occluded objects" }, { "start": 812.24, "end": 813.2, "text": " and so on." }, { "start": 813.2, "end": 816, "text": " But I think Elon's point is bigger than that." }, { "start": 816, "end": 820.88, "text": " You need to have a forward predicting model in order to do the self driving, you know," }, { "start": 820.88, "end": 824.24, "text": " solve the self driving problem to a realistic degree." 
}, { "start": 824.24, "end": 828.4, "text": " And here I would, you know, challenge zero to your statement that once you have the vector" }, { "start": 828.4, "end": 831.28, "text": " space, the problem is sort of, you know, not that hard." }, { "start": 831.28, "end": 836.24, "text": " I think this particular part of the remaining problem is actually quite hard in itself because" }, { "start": 836.24, "end": 841.12, "text": " it's not like you can just calculate the Nash equilibrium of self driving and then assume" }, { "start": 841.12, "end": 843.04, "text": " that everyone's acting rationally." }, { "start": 843.04, "end": 848.9599999999999, "text": " You have to sort of take into account all the human factors right here and how you expect" }, { "start": 848.9599999999999, "end": 854.9599999999999, "text": " other humans to act, be that pedestrians or other drivers or anything like this." }, { "start": 854.9599999999999, "end": 860, "text": " Yeah, I think this is another area, this sort of forward prediction where neuro-sensory" }, { "start": 860, "end": 865.6, "text": " prediction where neural net or in general machine learning is going to make a big difference." }, { "start": 865.6, "end": 871.12, "text": " And then as I said, I'd be wondering if there is sort of a top down feedback loop that as" }, { "start": 871.12, "end": 875.84, "text": " you're predicting forward, you're going to change sort of the perception pipeline on" }, { "start": 875.84, "end": 878.24, "text": " the fly or not." }, { "start": 878.24, "end": 883.84, "text": " But like, let's say you, you're parked at a light and you, and you saw, you use a pedestrian" }, { "start": 883.84, "end": 889.52, "text": " example that people were waiting to cross the, across the road and you can't, you can't" }, { "start": 889.52, "end": 892.56, "text": " quite see them because of an occlusion." }, { "start": 892.56, "end": 896.8, "text": " But they might wait for a minute before the light changes for them to cross the road." }, { "start": 896.8, "end": 901.9399999999999, "text": " You still need to remember that that's where they were and that they're probably going" }, { "start": 901.9399999999999, "end": 904.24, "text": " to cross the road type of thing." }, { "start": 904.24, "end": 911.8, "text": " So even if that exceeds your time-based memory, it should not exceed your space memory." }, { "start": 911.8, "end": 917.04, "text": " And I just think the data engine side of that, so getting the data to learn all of the concepts" }, { "start": 917.04, "end": 919.8399999999999, "text": " that you're saying now is an incredible process." }, { "start": 919.8399999999999, "end": 921.8399999999999, "text": " It's this iterative process of just..." }, { "start": 921.8399999999999, "end": 923.64, "text": " And I just think..." }, { "start": 923.64, "end": 927.9599999999999, "text": " So what he said right there, I think is quite important as well." }, { "start": 927.9599999999999, "end": 930.36, "text": " You know, you can probably understand it in the concept." 
}, { "start": 930.36, "end": 935.36, "text": " If you do reinforcement learning, let's say you did reinforcement learning in this thing," }, { "start": 935.36, "end": 940.5799999999999, "text": " typically in reinforcement learning, we have a finite amount of time where you can go back" }, { "start": 940.5799999999999, "end": 945.48, "text": " over time and still be able to do back propagation, especially if you're at like a high frame" }, { "start": 945.48, "end": 948.9200000000001, "text": " rate like these systems operate right here." }, { "start": 948.9200000000001, "end": 950.6, "text": " That's not going to be a long time." }, { "start": 950.6, "end": 953.28, "text": " It's not going to be a minute of real time." }, { "start": 953.28, "end": 958.36, "text": " And therefore, yes, if you need to learn to remember something like there are pedestrians" }, { "start": 958.36, "end": 962.28, "text": " right there and they're still there a minute later because all the lights were red, that" }, { "start": 962.28, "end": 966.64, "text": " is going to be quite a bit of a problem and a challenge in itself." }, { "start": 966.64, "end": 971.28, "text": " Sort of learning to remember things is a long-standing challenge in reinforcement learning." }, { "start": 971.28, "end": 977, "text": " And you probably be better off sort of coding all the objects in this, what Elon calls the" }, { "start": 977, "end": 978.4399999999999, "text": " vector space." }, { "start": 978.4399999999999, "end": 983.92, "text": " So understand the scene and then explicitly representing each object that's there rather" }, { "start": 983.92, "end": 987.04, "text": " than having the neural networks learn everything from perception." }, { "start": 987.04, "end": 992.52, "text": " I think the data engine side of that, so getting the data to learn all the concepts that you're" }, { "start": 992.52, "end": 995.0799999999999, "text": " saying now is an incredible process." }, { "start": 995.0799999999999, "end": 997.6, "text": " It's this iterative process of just..." }, { "start": 997.6, "end": 999.6, "text": " This is HydroNet, many..." }, { "start": 999.6, "end": 1001.6, "text": " HydroNet." }, { "start": 1001.6, "end": 1004.28, "text": " We're changing the name to something else." }, { "start": 1004.28, "end": 1005.28, "text": " Okay." }, { "start": 1005.28, "end": 1008.64, "text": " I'm sure it'll be equally as Rick and Morty like..." }, { "start": 1008.64, "end": 1009.64, "text": " There's a lot of..." }, { "start": 1009.64, "end": 1010.64, "text": " Yeah." }, { "start": 1010.64, "end": 1015.52, "text": " We've re-architected the neural net in the cars so many times." }, { "start": 1015.52, "end": 1016.52, "text": " It's crazy." }, { "start": 1016.52, "end": 1020.6, "text": " Oh, so every time there's a new major version, you'll rename it to something more ridiculous" }, { "start": 1020.6, "end": 1023.44, "text": " or memorable and beautiful?" }, { "start": 1023.44, "end": 1024.44, "text": " Sorry." }, { "start": 1024.44, "end": 1027.16, "text": " Not ridiculous, of course." }, { "start": 1027.16, "end": 1033.76, "text": " If you see the full array of neural nets that are operating in the cars, it boggles the" }, { "start": 1033.76, "end": 1034.76, "text": " mind." }, { "start": 1034.76, "end": 1040.72, "text": " There's so many layers, it's crazy." }, { "start": 1040.72, "end": 1044.16, "text": " What is he actually saying here?" 
}, { "start": 1044.16, "end": 1050.0800000000002, "text": " It's hard to decipher Elon because obviously he's not a deep learning engineer, so he sort" }, { "start": 1050.08, "end": 1057.72, "text": " of probably gets the pitch from Andre and some diagrams or something like this." }, { "start": 1057.72, "end": 1062.1599999999999, "text": " But as of now, we don't know if there are many neural nets, but it's unlikely because" }, { "start": 1062.1599999999999, "end": 1068.04, "text": " he says it's mind bogglingly many and you'd have to sort of train all of them." }, { "start": 1068.04, "end": 1073.6, "text": " I couldn't really imagine how you'd put mind bogglingly many neural networks into a system" }, { "start": 1073.6, "end": 1074.6, "text": " like this." }, { "start": 1074.6, "end": 1080.6399999999999, "text": " I'm going to guess that they have a couple and these are just kind of big and complicated." }, { "start": 1080.6399999999999, "end": 1086.1999999999998, "text": " And that's exactly what we saw in Karpati's talk when he explained how they go vision" }, { "start": 1086.1999999999998, "end": 1087.52, "text": " only and so on." }, { "start": 1087.52, "end": 1090.4399999999998, "text": " If you haven't seen this, watch my analysis of that." }, { "start": 1090.4399999999998, "end": 1094.26, "text": " He's about to explain a bit more in depth of what's going on." }, { "start": 1094.26, "end": 1102.9599999999998, "text": " We started off with simple neural nets that were basically image recognition on a single" }, { "start": 1102.96, "end": 1114.16, "text": " frame from a single camera and then trying to knit those together with C. I should say" }, { "start": 1114.16, "end": 1119.8400000000001, "text": " we're primarily running C here because C++ is too much overhead and we have our own C" }, { "start": 1119.8400000000001, "end": 1121.08, "text": " compiler." }, { "start": 1121.08, "end": 1125.72, "text": " So to get maximum performance, we actually wrote our own C compiler and are continuing" }, { "start": 1125.72, "end": 1128.96, "text": " to optimize our C compiler for maximum efficiency." }, { "start": 1128.96, "end": 1134.48, "text": " In fact, we've just recently done a new rev on a C compiler that will compile directly" }, { "start": 1134.48, "end": 1135.88, "text": " to our autopilot hardware." }, { "start": 1135.88, "end": 1138.92, "text": " So you want to compile the whole thing down?" }, { "start": 1138.92, "end": 1143.52, "text": " I mean, he's going to talk about two things kind of interleaved right here that have on" }, { "start": 1143.52, "end": 1146.8, "text": " the surface not too much to do with each other." }, { "start": 1146.8, "end": 1152.4, "text": " So apparently there is a C compiler that compiles directly to the hardware, which makes sense," }, { "start": 1152.4, "end": 1153.4, "text": " right?" }, { "start": 1153.4, "end": 1156.8400000000001, "text": " These cars have the property that you have to be super duper efficient and power saving" }, { "start": 1156.8400000000001, "end": 1157.96, "text": " and whatnot." }, { "start": 1157.96, "end": 1164.28, "text": " And running Python on top of that, the overhead of that might just be too much." }, { "start": 1164.28, "end": 1170.66, "text": " You can in fact save a lot of energy, a lot of time and so on by building a compiler that" }, { "start": 1170.66, "end": 1173.88, "text": " uses the hardware as optimally as possible." 
}, { "start": 1173.88, "end": 1180.32, "text": " Now that being said, this has little to do with how you build the neural network system" }, { "start": 1180.32, "end": 1187.04, "text": " other than the neural networks will be faster if you compile them down correctly." }, { "start": 1187.04, "end": 1191.92, "text": " And so there's actually a lot of work done by some very talented software engineers at" }, { "start": 1191.92, "end": 1200.44, "text": " Tesla at a very foundational level to improve the efficiency of compute and how we use the" }, { "start": 1200.44, "end": 1208.8799999999999, "text": " trip accelerators, which are basically doing matrix math dot products like a bazillion" }, { "start": 1208.8799999999999, "end": 1209.8799999999999, "text": " dot products." }, { "start": 1209.88, "end": 1217.3200000000002, "text": " And it's like what are neural nets, it's like compute wise like 99% dot products." }, { "start": 1217.3200000000002, "end": 1224.3600000000001, "text": " So yeah, I mean, he's obviously correct right here, though it has to be said, you know," }, { "start": 1224.3600000000001, "end": 1230.3600000000001, "text": " for anyone who's listening to this, your neural network isn't slow because you don't have" }, { "start": 1230.3600000000001, "end": 1231.3600000000001, "text": " the right compiler." }, { "start": 1231.3600000000001, "end": 1236.5200000000002, "text": " It is true that if you do it correctly, you compile your network down to like a format" }, { "start": 1236.52, "end": 1240.72, "text": " that is optimal for some hardware and you run it with you know, the correct libraries" }, { "start": 1240.72, "end": 1245.72, "text": " and and you set up everything correctly, you can probably get like maybe if you if you" }, { "start": 1245.72, "end": 1251.32, "text": " did if you did it terribly wrong, and then you do it terribly right, you can get up to" }, { "start": 1251.32, "end": 1258.16, "text": " a 10x speed up I would guess maybe you know, 5x 10x speed up something like this best case." }, { "start": 1258.16, "end": 1262.96, "text": " However, usually, usually, the first thing you should investigate is whether or not the" }, { "start": 1262.96, "end": 1266.06, "text": " architecture you're using is the correct one." }, { "start": 1266.06, "end": 1271.6399999999999, "text": " You can get like many, many more times a speed up by simply changing the architecture to" }, { "start": 1271.6399999999999, "end": 1273.44, "text": " something more appropriate." }, { "start": 1273.44, "end": 1277.3999999999999, "text": " So Elon says this here, because obviously, this is the last step." }, { "start": 1277.3999999999999, "end": 1282.28, "text": " And you know, they need to they need to get every, every millisecond they can out of these" }, { "start": 1282.28, "end": 1283.3799999999999, "text": " systems." }, { "start": 1283.3799999999999, "end": 1289.48, "text": " But just for most people listening, this is sort of the the sugar, the icing on the cake," }, { "start": 1289.48, "end": 1295.52, "text": " you should first care about the cake and try to make your architecture, you know, more" }, { "start": 1295.52, "end": 1300.84, "text": " optimal, maybe use less layers or anything like this change from this operation to that" }, { "start": 1300.84, "end": 1303.4, "text": " operation analyze your bottlenecks." 
}, { "start": 1303.4, "end": 1307.6, "text": " And only once you have everything through and you have the exact model you want, then" }, { "start": 1307.6, "end": 1311.72, "text": " you can care about doing all the engineering things." }, { "start": 1311.72, "end": 1318.68, "text": " One of the things we're moving towards now is no post processing of the image through" }, { "start": 1318.68, "end": 1322.8, "text": " the image signal processor." }, { "start": 1322.8, "end": 1332.44, "text": " So like, what happens for cameras is that almost all cameras is they there's a lot of" }, { "start": 1332.44, "end": 1336.6, "text": " post processing done in order to make pictures look pretty." }, { "start": 1336.6, "end": 1339.76, "text": " And so we don't care about pictures looking pretty." }, { "start": 1339.76, "end": 1341.52, "text": " We just want the data." }, { "start": 1341.52, "end": 1344.9199999999998, "text": " So we're moving just roll photon counts." }, { "start": 1344.9199999999998, "end": 1352.48, "text": " So the system will like the image that that the computer sees is actually much more than" }, { "start": 1352.48, "end": 1355.1200000000001, "text": " what you'd see if you represented on a camera." }, { "start": 1355.1200000000001, "end": 1357.08, "text": " It's got much more data." }, { "start": 1357.08, "end": 1360.64, "text": " And even in very low light conditions, you can see that there's a small photon count" }, { "start": 1360.64, "end": 1366.16, "text": " difference between, you know, this spot here and that spot there, which means that so it" }, { "start": 1366.16, "end": 1371.48, "text": " can see in the dark incredibly well, because it can detect these tiny differences in photon" }, { "start": 1371.48, "end": 1372.48, "text": " counts." }, { "start": 1372.48, "end": 1376.92, "text": " That's much better than you could possibly imagine." }, { "start": 1376.92, "end": 1384.16, "text": " So I mean, that is, again, like that is a third issue next to the the C compiler." }, { "start": 1384.16, "end": 1388.96, "text": " And what the neural networks do is essentially saying that if you remove the post processing" }, { "start": 1388.96, "end": 1394.3200000000002, "text": " within the camera sensors that are usually built into, let's say cameras that you could" }, { "start": 1394.3200000000002, "end": 1397.88, "text": " buy on the market, then you get the raw data." }, { "start": 1397.88, "end": 1401.64, "text": " And since you don't have to look at the pictures, the raw data is much more useful than the" }, { "start": 1401.64, "end": 1406.3600000000001, "text": " post process data, since it's a machine anyway, that analyzes the signal." }, { "start": 1406.36, "end": 1409.28, "text": " And therefore, you might as well make it machine friendly." }, { "start": 1409.28, "end": 1414.04, "text": " I think it is a good lesson for maybe other fields as well to think about, you know, what" }, { "start": 1414.04, "end": 1419.3, "text": " parts of the pipeline are just there to make it, you know, because because humans are involved" }, { "start": 1419.3, "end": 1421.12, "text": " and try to remove those." }, { "start": 1421.12, "end": 1426.8799999999999, "text": " But you know, it doesn't really add to what's the what's the deal with the neural networks," }, { "start": 1426.8799999999999, "end": 1430.52, "text": " which I think was the original question here." }, { "start": 1430.52, "end": 1436.16, "text": " And then we also save 13 milliseconds on latency." 
}, { "start": 1436.16, "end": 1440.12, "text": " So from removing the post processing an image?" }, { "start": 1440.12, "end": 1441.12, "text": " Yes." }, { "start": 1441.12, "end": 1442.12, "text": " Yeah." }, { "start": 1442.12, "end": 1448.52, "text": " It's like because we've got eight cameras and then there's roughly, I don't know, one" }, { "start": 1448.52, "end": 1455.08, "text": " and a half milliseconds or so, maybe one point six milliseconds of latency for each camera." }, { "start": 1455.08, "end": 1466.32, "text": " And so like going to just basically bypassing the image processor gets us back 13 milliseconds" }, { "start": 1466.32, "end": 1468.6, "text": " of latency, which is important." }, { "start": 1468.6, "end": 1474.82, "text": " Yeah, I think this, you know, besides getting the raw data, this is also again, they need" }, { "start": 1474.82, "end": 1478.8799999999999, "text": " to squeeze out sort of the last mile here or the last milliseconds here." }, { "start": 1478.8799999999999, "end": 1482.48, "text": " And this is another thing they they can practically do." }, { "start": 1482.48, "end": 1485.32, "text": " So getting rid of jitter is extremely important." }, { "start": 1485.32, "end": 1488.48, "text": " And that affects your control decisions and all those kinds of things." }, { "start": 1488.48, "end": 1489.48, "text": " OK." }, { "start": 1489.48, "end": 1495.64, "text": " Yeah, the cars is going to fundamentally maneuver better with lower jitter." }, { "start": 1495.64, "end": 1501.32, "text": " The cars will maneuver with superhuman ability and reaction time much faster than a human." }, { "start": 1501.32, "end": 1507.28, "text": " I mean, I think over time, the autopilot full self driving will be capable of maneuvers" }, { "start": 1507.28, "end": 1517.44, "text": " that are far more than what James Bond could do in the best movie type of thing." }, { "start": 1517.44, "end": 1521.32, "text": " That's exactly what I was imagining in my mind, as you said." }, { "start": 1521.32, "end": 1524.92, "text": " It's like impossible maneuvers that a human couldn't do." }, { "start": 1524.92, "end": 1528.8799999999999, "text": " Well, OK, it's two things." }, { "start": 1528.8799999999999, "end": 1533.04, "text": " Impossible maneuvers are impossible and things that humans could do are things that humans" }, { "start": 1533.04, "end": 1534.04, "text": " could do." }, { "start": 1534.04, "end": 1538.44, "text": " I have no doubt that at one point in the near future, self driving cars will be able to" }, { "start": 1538.44, "end": 1540.92, "text": " do things that humans couldn't do." }, { "start": 1540.92, "end": 1546.92, "text": " The question is more, are there going to be things that humans do that the cars couldn't" }, { "start": 1546.92, "end": 1547.92, "text": " do?" }, { "start": 1547.92, "end": 1548.92, "text": " Right." }, { "start": 1548.92, "end": 1549.92, "text": " Or can't do?" }, { "start": 1549.92, "end": 1550.92, "text": " Because that's the actual gap you're trying to close." }, { "start": 1550.92, "end": 1552.96, "text": " You know, look at Boston Dynamics or so." }, { "start": 1552.96, "end": 1557.92, "text": " If you hard code stuff and you have extremely, extremely good sensors and actuators, you" }, { "start": 1557.92, "end": 1561.08, "text": " can do many things that humans couldn't do." }, { "start": 1561.08, "end": 1566.1999999999998, "text": " But on the other hand, it's the things that humans can do that the machines can't." 
}, { "start": 1566.1999999999998, "end": 1567.1999999999998, "text": " Those are the problem." }, { "start": 1567.1999999999998, "end": 1573.3999999999999, "text": " Well, let me ask sort of looking back the six years, looking out into the future, based" }, { "start": 1573.3999999999999, "end": 1578.48, "text": " on your current understanding, how hard do you think this full self driving problem," }, { "start": 1578.48, "end": 1583.48, "text": " when do you think Tesla will solve level four FSD?" }, { "start": 1583.48, "end": 1589.1599999999999, "text": " I think Elon gets asked this question every year and every year he says next year." }, { "start": 1589.16, "end": 1597.4, "text": " So I mean, it's looking quite likely that it will be next year." }, { "start": 1597.4, "end": 1602.96, "text": " This is the thing with Elon Musk, he always promises things like next year or on ridiculously" }, { "start": 1602.96, "end": 1604.68, "text": " short amounts of time." }, { "start": 1604.68, "end": 1609.2, "text": " And I wonder how long it's going to take for people to just, you know, stop believing him." }, { "start": 1609.2, "end": 1611.44, "text": " I guess many people already did." }, { "start": 1611.44, "end": 1616.5400000000002, "text": " But it's still, you know, a thing to consider that on one hand, obviously, if you do it" }, { "start": 1616.54, "end": 1622.04, "text": " too much, then people are simply going to say, oh, well, probably in five years if he" }, { "start": 1622.04, "end": 1623.28, "text": " says next year." }, { "start": 1623.28, "end": 1627.78, "text": " But on the other hand, he's also able to sort of it's a motivating thing." }, { "start": 1627.78, "end": 1629.24, "text": " It's a cool thing." }, { "start": 1629.24, "end": 1631, "text": " It drives momentum." }, { "start": 1631, "end": 1636.24, "text": " And that itself accelerates the development of these things, people being ready to just" }, { "start": 1636.24, "end": 1638.28, "text": " flip on a beta version and so on." }, { "start": 1638.28, "end": 1639.28, "text": " It's a bit insane." }, { "start": 1639.28, "end": 1644.44, "text": " But I do think his optimism and a little bit salesmanship also a lot of benefits besides" }, { "start": 1644.44, "end": 1647.16, "text": " the obvious negatives." }, { "start": 1647.16, "end": 1652.68, "text": " So the interventions, you know, per million miles has been dropping dramatically at some" }, { "start": 1652.68, "end": 1655.2, "text": " point." }, { "start": 1655.2, "end": 1662.8, "text": " And that trend looks like it happens next year is that the probability of an accident" }, { "start": 1662.8, "end": 1669.64, "text": " on FSD is less than that of the average human and then significantly less than that of the" }, { "start": 1669.64, "end": 1671.8400000000001, "text": " average human." }, { "start": 1671.84, "end": 1677.4399999999998, "text": " So it certainly appears like we will get there next year." }, { "start": 1677.4399999999998, "end": 1680.1599999999999, "text": " There's a lot of hedging going on here." 
}, { "start": 1680.1599999999999, "end": 1685.48, "text": " But you know, you can this is this is actually a nice method, I think, of making these types" }, { "start": 1685.48, "end": 1691.28, "text": " of predictions, you see that the rate of disengagement is dropping at a certain speed, you can extrapolate" }, { "start": 1691.28, "end": 1695.8799999999999, "text": " maybe a little bit and say, look, you know, here's going to be the sort of threshold where" }, { "start": 1695.8799999999999, "end": 1697.12, "text": " we're better than a human." }, { "start": 1697.12, "end": 1700.3799999999999, "text": " I think that's a quite a sober analysis if done correctly." }, { "start": 1700.38, "end": 1704.64, "text": " And I also think people who are, you know, it's obviously good to be skeptical of fully" }, { "start": 1704.64, "end": 1706.4, "text": " self driving systems." }, { "start": 1706.4, "end": 1711.0400000000002, "text": " But on the other hand, you also have to think if they're a lot better than humans, it makes" }, { "start": 1711.0400000000002, "end": 1712.0400000000002, "text": " makes total sense, right?" }, { "start": 1712.0400000000002, "end": 1716.7, "text": " It also makes total sense to have them and not engage them all the time, right?" }, { "start": 1716.7, "end": 1719.4, "text": " There might still be situations you want to drive yourself." }, { "start": 1719.4, "end": 1722.7600000000002, "text": " The question is a little bit, can you just continue the trend?" }, { "start": 1722.7600000000002, "end": 1725.96, "text": " Or is there a sort of an okay, you solve the easy problems." }, { "start": 1725.96, "end": 1729.7600000000002, "text": " And that is what makes the rates of disengagement go down now." }, { "start": 1729.76, "end": 1734.32, "text": " But now come the more and more hard problems and sort of it gets exponentially harder to" }, { "start": 1734.32, "end": 1738.96, "text": " continue that trend, in which case, we're not going to be there for a long time." }, { "start": 1738.96, "end": 1741.8799999999999, "text": " Then there's going to be a case of, okay, we'll not have to prove this to regulators" }, { "start": 1741.8799999999999, "end": 1748.4, "text": " and prove it to you know, and we want a standard that is not just equivalent to a human, but" }, { "start": 1748.4, "end": 1751.72, "text": " much better than the average human, I think it's got to be at least two or three times" }, { "start": 1751.72, "end": 1754.68, "text": " higher safety than a human." }, { "start": 1754.68, "end": 1761.28, "text": " Probably more like 10, like knowing, you know, regulators and how the public perceives these" }, { "start": 1761.28, "end": 1762.3600000000001, "text": " types of things." }, { "start": 1762.3600000000001, "end": 1767.5800000000002, "text": " Of course, right now they're cool, but then it's really easy to publicize in a few accidents" }, { "start": 1767.5800000000002, "end": 1772.04, "text": " that few stupid accidents that happen if you build machine learning systems for the real" }, { "start": 1772.04, "end": 1775.16, "text": " world, they are going to make stupid mistakes." }, { "start": 1775.16, "end": 1779.8, "text": " It doesn't matter how accurate they are on average, they're going to make stupid mistakes" }, { "start": 1779.8, "end": 1784.6399999999999, "text": " that a human would never do and people are just going to point at it and never forget" }, { "start": 1784.6399999999999, "end": 1786.08, "text": " that one instance." 
}, { "start": 1786.08, "end": 1790.72, "text": " And I think it's pretty easy to sort of scare people publicizing those kinds of things." }, { "start": 1790.72, "end": 1794.48, "text": " And therefore, yeah, you have to be like massively better than humans." }, { "start": 1794.48, "end": 1796.6, "text": " I agree here." }, { "start": 1796.6, "end": 1800.52, "text": " There is some fundamental leap that really deserves the 11." }, { "start": 1800.52, "end": 1802.3999999999999, "text": " I mean, that's a pretty cool number." }, { "start": 1802.3999999999999, "end": 1803.3999999999999, "text": " Yeah." }, { "start": 1803.4, "end": 1813.52, "text": " 11 would be a single stack for all, one stack to rule them all." }, { "start": 1813.52, "end": 1821.1200000000001, "text": " But there are just some really fundamental neural net architecture changes that will" }, { "start": 1821.1200000000001, "end": 1828.0800000000002, "text": " allow for much more capability, but at first they're going to have issues." }, { "start": 1828.08, "end": 1836.6399999999999, "text": " So we have this working on like sort of alpha software and it's good, but it's basically" }, { "start": 1836.6399999999999, "end": 1842.6, "text": " taking a whole bunch of C++ code and deleting a massive amount of C++ code and replacing" }, { "start": 1842.6, "end": 1843.6, "text": " it with a neural net." }, { "start": 1843.6, "end": 1849.32, "text": " And Andrei makes this point a lot, which is like neural nets are kind of eating software." }, { "start": 1849.32, "end": 1851.6399999999999, "text": " So it's interesting what Elon says right here." }, { "start": 1851.6399999999999, "end": 1857.58, "text": " This upcoming version 11 of the Tesla software seems to have kind of a rewrite in what he" }, { "start": 1857.58, "end": 1860.4399999999998, "text": " calls the creation of the vector space." }, { "start": 1860.4399999999998, "end": 1866.24, "text": " And specifically, he says you replace a whole bunch of C and C++ code with neural networks." }, { "start": 1866.24, "end": 1872.36, "text": " And I guess what that means is that they used to have certain heuristics for what he calls" }, { "start": 1872.36, "end": 1874.52, "text": " creating the vector space, right?" }, { "start": 1874.52, "end": 1877.6399999999999, "text": " And remember, creating the vector space means seeing and understanding." }, { "start": 1877.6399999999999, "end": 1879.6799999999998, "text": " So what objects exist?" }, { "start": 1879.6799999999998, "end": 1880.6799999999998, "text": " Where are they?" }, { "start": 1880.6799999999998, "end": 1881.6799999999998, "text": " How are they moving?" }, { "start": 1881.6799999999998, "end": 1882.6799999999998, "text": " And so on." }, { "start": 1882.6799999999998, "end": 1887.22, "text": " And you want to get that out of your cameras and whatever other sensors you have." }, { "start": 1887.22, "end": 1893.24, "text": " So it seems like until now, they had a bunch of neural networks that would do, you know," }, { "start": 1893.24, "end": 1894.24, "text": " their stuff." }, { "start": 1894.24, "end": 1899, "text": " I can imagine they had maybe single frame neural networks or kind of short frames, one" }, { "start": 1899, "end": 1903.4, "text": " after another neural networks that would recognize sort of bounding boxing the objects in the" }, { "start": 1903.4, "end": 1904.4, "text": " image." 
}, { "start": 1904.4, "end": 1908.3600000000001, "text": " And then they would use sort of an algorithm heuristic algorithm that they wrote themselves" }, { "start": 1908.3600000000001, "end": 1910.82, "text": " to stitch that together over time." }, { "start": 1910.82, "end": 1915.76, "text": " Maybe they use algorithms to do some kind of inferences like what he mentioned with" }, { "start": 1915.76, "end": 1917.96, "text": " the object tracking, and so on." }, { "start": 1917.96, "end": 1922.76, "text": " And it seems to be that what they want to do is just end to end train one big neural" }, { "start": 1922.76, "end": 1924.76, "text": " network that just does it all." }, { "start": 1924.76, "end": 1930.36, "text": " You input all of the sensor data, let's say from, you know, not only just right now, but" }, { "start": 1930.36, "end": 1933.92, "text": " you know, from the from the recent past, you just input it all in there." }, { "start": 1933.92, "end": 1939.04, "text": " And the neural network will spit out this finished vector space, this finished scene" }, { "start": 1939.04, "end": 1940.32, "text": " understanding graph." }, { "start": 1940.32, "end": 1942.24, "text": " And this obviously you can see where it comes from." }, { "start": 1942.24, "end": 1948.08, "text": " This has been the story of deep learning so far, replacing more and more classical heuristics" }, { "start": 1948.08, "end": 1950.32, "text": " with an end to end learning system." }, { "start": 1950.32, "end": 1955.24, "text": " And it also matches exactly with what Elon is saying, namely that right now, it doesn't" }, { "start": 1955.24, "end": 1960.26, "text": " seem to work quite well yet, but in time, it will get there." }, { "start": 1960.26, "end": 1965.46, "text": " And again, this has been the story of deep learning in pretty much everything we've tackled" }, { "start": 1965.46, "end": 1968.36, "text": " since the beginning of deep learning." }, { "start": 1968.36, "end": 1973.9199999999998, "text": " End to end systems ultimately came to be the heuristic systems, but it takes time, it takes" }, { "start": 1973.9199999999998, "end": 1977.84, "text": " work, it takes data, obviously massive amounts of compute." }, { "start": 1977.84, "end": 1981.8, "text": " You know, over time, there's like, less and less conventional software, more and more" }, { "start": 1981.8, "end": 1986.76, "text": " neural net, which is still software, but it's, you know, still comes out the lines of software," }, { "start": 1986.76, "end": 1997.04, "text": " but it's more more neural net stuff, and less, you know, heuristics, basically." }, { "start": 1997.04, "end": 2007.04, "text": " If you're more more more matrix based stuff, and less heuristics based stuff." }, { "start": 2007.04, "end": 2013.28, "text": " So by the way, the reason why this is the case, the reason why it works to replace heuristics" }, { "start": 2013.28, "end": 2018.44, "text": " with neural networks with data driven systems is that the world is always more complicated" }, { "start": 2018.44, "end": 2021.1, "text": " than you can encode in any heuristic." }, { "start": 2021.1, "end": 2025.18, "text": " That's why we use machine learning in the first place, because we can't just program" }, { "start": 2025.18, "end": 2029.96, "text": " the algorithms that do image recognition, or speech recognition or whatnot." 
}, { "start": 2029.96, "end": 2035.0800000000002, "text": " So the only representation of this really complex world, like the actual underlying" }, { "start": 2035.0800000000002, "end": 2038.48, "text": " world that is so complicated is the data." }, { "start": 2038.48, "end": 2044.88, "text": " And therefore, our best chance to create systems that deal well with the world as such is systems" }, { "start": 2044.88, "end": 2047.96, "text": " that actually learn from data from the real world." }, { "start": 2047.96, "end": 2053.44, "text": " And that's why it often works to replace the heuristics with data driven systems." }, { "start": 2053.44, "end": 2057.7200000000003, "text": " If you have the data, and if you have the compute, which Tesla obviously does." }, { "start": 2057.7200000000003, "end": 2060.16, "text": " We call it the giant bag of points." }, { "start": 2060.16, "end": 2065.6, "text": " And it's like, so you go to pixel and something associated with that pixel, like this pixel" }, { "start": 2065.6, "end": 2069.2000000000003, "text": " is probably car, the pixel is probably lane line." }, { "start": 2069.2000000000003, "end": 2079.2400000000002, "text": " Then you've got to assemble this giant bag of points in the C code and turn it into vectors." }, { "start": 2079.24, "end": 2087.04, "text": " And it does a pretty good job of it, but we need another layer of neural nets on top of" }, { "start": 2087.04, "end": 2095.7999999999997, "text": " that to take the giant bag of points and distill that down to vector space in the neural net" }, { "start": 2095.7999999999997, "end": 2100.8799999999997, "text": " part of the software as opposed to the heuristics part of the software." }, { "start": 2100.8799999999997, "end": 2105.7999999999997, "text": " So the translation of this is probably, if I understand Elon correctly, what they were" }, { "start": 2105.8, "end": 2111.52, "text": " doing so far is sort of semantic segmentation or pixel based pixel labeling." }, { "start": 2111.52, "end": 2116.6000000000004, "text": " I can also imagine that they estimated things like depth maps and so on just from pixels." }, { "start": 2116.6000000000004, "end": 2121.76, "text": " But then, as I said before, it was heuristics, it was sort of classical algorithms." }, { "start": 2121.76, "end": 2126.1000000000004, "text": " And these aren't, I mean, classical, these are advanced algorithms, right, that take" }, { "start": 2126.1000000000004, "end": 2131, "text": " point clouds that take sort of segmentation maps and depth maps and all of that and turn" }, { "start": 2131, "end": 2133.0600000000004, "text": " them into objects." }, { "start": 2133.06, "end": 2137.2, "text": " These are mostly heuristic based but very sophisticated algorithms." }, { "start": 2137.2, "end": 2143.44, "text": " But it is clearly a good or a, let's say a modern move to ditch all of that and also" }, { "start": 2143.44, "end": 2150.04, "text": " teach the neural networks to just handle it until you have the semantic result that you" }, { "start": 2150.04, "end": 2154.08, "text": " want, namely the space of objects, the scene understanding graph." }, { "start": 2154.08, "end": 2163, "text": " It's really outputting proper vectors to the CC++ control code, as opposed to the" }, { "start": 2163, "end": 2171, "text": " sort of constructing the vectors in C." 
}, { "start": 2171, "end": 2178.32, "text": " We've done, I think, quite a good job of, but it's kind of hitting a local maximum on" }, { "start": 2178.32, "end": 2182.08, "text": " how well the C can do this." }, { "start": 2182.08, "end": 2185.44, "text": " So this is really a big deal." }, { "start": 2185.44, "end": 2187.64, "text": " And just all of the networks in the car need to..." }, { "start": 2187.64, "end": 2193.52, "text": " By the way, whenever you hear him talk about C and C++ code, just replace that with human" }, { "start": 2193.52, "end": 2194.92, "text": " authored code, right?" }, { "start": 2194.92, "end": 2199.24, "text": " The difference isn't necessarily the language you use, the difference is more like who writes" }, { "start": 2199.24, "end": 2200.24, "text": " the code." }, { "start": 2200.24, "end": 2205.3199999999997, "text": " And when he says C and C++, it's humans, very smart humans, but still humans that write" }, { "start": 2205.3199999999997, "end": 2207.72, "text": " the code out of their thinking." }, { "start": 2207.72, "end": 2212.48, "text": " And whenever he says neural networks, it's some sort of a data-driven systems, which" }, { "start": 2212.48, "end": 2217.96, "text": " obviously human author in the first place, but probably also is as well implemented in" }, { "start": 2217.96, "end": 2220.2400000000002, "text": " C and C++." }, { "start": 2220.2400000000002, "end": 2222.2400000000002, "text": " The training, the amount of work done with..." }, { "start": 2222.2400000000002, "end": 2228.36, "text": " We've written all this custom software for training and labeling and to do auto labeling." }, { "start": 2228.36, "end": 2233.76, "text": " Auto labeling is essential, especially when you've got surround video." }, { "start": 2233.76, "end": 2238.44, "text": " It's very difficult to label surround video from scratch." }, { "start": 2238.44, "end": 2241.88, "text": " It's extremely difficult." }, { "start": 2241.88, "end": 2247.2000000000003, "text": " Like a human's such a long time to even label one video clip, like several hours." }, { "start": 2247.2000000000003, "end": 2255.6, "text": " Or the auto label it, basically we just apply a heavy duty, like a lot of compute to the" }, { "start": 2255.6, "end": 2261.8, "text": " video clips to pre-assign and guess what all the things are that are going on in the surround" }, { "start": 2261.8, "end": 2262.8, "text": " video." }, { "start": 2262.8, "end": 2263.8, "text": " And then there's like correcting it." }, { "start": 2263.8, "end": 2264.8, "text": " Yeah." }, { "start": 2264.8, "end": 2269.7200000000003, "text": " And then all the human has to do is like tweet, like say, adjust what is incorrect." }, { "start": 2269.72, "end": 2274.3999999999996, "text": " This is like increase this productivity by effect a hundred or more." }, { "start": 2274.3999999999996, "end": 2275.3999999999996, "text": " Yeah." }, { "start": 2275.3999999999996, "end": 2276.3999999999996, "text": " So you've presented that..." }, { "start": 2276.3999999999996, "end": 2282.3999999999996, "text": " I mean, we've discussed this in the last video that I did about Karpotty's talk." }, { "start": 2282.3999999999996, "end": 2288.64, "text": " And this to me is, I think too few people are currently doing something like this." }, { "start": 2288.64, "end": 2290.24, "text": " Essentially it's active learning, right?" 
}, { "start": 2290.24, "end": 2293.4199999999996, "text": " It's sort of, if you're not sure about something, ask the human." }, { "start": 2293.4199999999996, "end": 2299.64, "text": " It has a slight twist on it in that they probably always ask the human, but they suggest a label" }, { "start": 2299.64, "end": 2305.52, "text": " which is super powerful, especially in something like semantic segmentation where you need" }, { "start": 2305.52, "end": 2310.3199999999997, "text": " to annotate every pixel or you need to place bounding boxes around many objects." }, { "start": 2310.3199999999997, "end": 2314.8399999999997, "text": " It's really different if you simply have to check and adjust a little bit versus if, you" }, { "start": 2314.8399999999997, "end": 2319, "text": " know, there's a data point and you have to place the labels yourself." }, { "start": 2319, "end": 2323.24, "text": " I think we're going to see quite a bit more of that in sort of the near future." }, { "start": 2323.24, "end": 2328.2, "text": " A lot of people are already doing something like this, but I think still too few are." }, { "start": 2328.2, "end": 2333.48, "text": " It's not quite in Tesla's primary mission direction of accelerating sustainable energy," }, { "start": 2333.48, "end": 2338.74, "text": " but it is an extremely useful thing that we can do for the world, which is to make a useful" }, { "start": 2338.74, "end": 2343.72, "text": " humanoid robot that is capable of interacting with the world." }, { "start": 2343.72, "end": 2344.72, "text": " All right." }, { "start": 2344.72, "end": 2350.54, "text": " The rest of them talking about AI is talking about the Tesla bot, which is a bit more far" }, { "start": 2350.54, "end": 2352.3999999999996, "text": " fetched I have to say." }, { "start": 2352.4, "end": 2359.36, "text": " The Tesla bot just on its face is way more complicated than a car, especially if it is" }, { "start": 2359.36, "end": 2364.08, "text": " supposed to not only, you know, be on the factory floor in which case they just build" }, { "start": 2364.08, "end": 2366.14, "text": " like a robot arm, right?" }, { "start": 2366.14, "end": 2369.54, "text": " These are like the most useful things in a factory on a factory floor." }, { "start": 2369.54, "end": 2374.96, "text": " But if it's actually to sort of interact with humans or in a human way navigate not only" }, { "start": 2374.96, "end": 2378.28, "text": " unknown terrain, but also society potentially." }, { "start": 2378.28, "end": 2383.2400000000002, "text": " I mean, this is just this is just futurism at this point and that there's really nothing" }, { "start": 2383.2400000000002, "end": 2389.32, "text": " we can legitimately say about what's possible, what's not possible, where this is." }, { "start": 2389.32, "end": 2392.52, "text": " And obviously they like we don't we don't have a prototype." }, { "start": 2392.52, "end": 2396.88, "text": " We just have like a human in a suit to demonstrate the Tesla bot." }, { "start": 2396.88, "end": 2403.5600000000004, "text": " So I will not comment much further on that with respect to the Tesla fully self driving" }, { "start": 2403.5600000000004, "end": 2404.5600000000004, "text": " system." }, { "start": 2404.56, "end": 2409.7599999999998, "text": " I would say that obviously, you know, for Elon Musk, there's always kind of lovers and" }, { "start": 2409.7599999999998, "end": 2413.48, "text": " haters and I think you can acknowledge both sides." 
}, { "start": 2413.48, "end": 2416, "text": " He is a bit of a salesperson." }, { "start": 2416, "end": 2418.48, "text": " He sells these things very well." }, { "start": 2418.48, "end": 2423.24, "text": " He always promises, you know, next year we'll be ready, next year we'll be ready." }, { "start": 2423.24, "end": 2429.12, "text": " And then they never are or he over promises massively on you know, how much cost you can" }, { "start": 2429.12, "end": 2430.92, "text": " save and yada, yada, yada." }, { "start": 2430.92, "end": 2437.52, "text": " But then on the other hand, he also delivers a lot more than other people deliver." }, { "start": 2437.52, "end": 2442.38, "text": " Maybe that's just because a little bit of recklessness, but also the sort of optimism" }, { "start": 2442.38, "end": 2446.38, "text": " and momentum that he's able to to to come up and drive." }, { "start": 2446.38, "end": 2450.96, "text": " And all of that together, I think just makes for like an interesting person." }, { "start": 2450.96, "end": 2455.26, "text": " And I think the advances itself are remarkable." }, { "start": 2455.26, "end": 2459.96, "text": " Even if you say other car companies are on the track and whatnot, Tesla has done more" }, { "start": 2459.96, "end": 2465.12, "text": " than all other car companies together for the adoption of electric vehicles." }, { "start": 2465.12, "end": 2468.48, "text": " Yes, you can debate whether or not that in itself is a good thing." }, { "start": 2468.48, "end": 2473.56, "text": " But just to say that it's not only salesmanship, there are also results." }, { "start": 2473.56, "end": 2477.96, "text": " And I have no doubt that in the near future, we will see self driving cars." }, { "start": 2477.96, "end": 2482.76, "text": " Sure, they're not going to be accident free, but I believe they will be much, much better" }, { "start": 2482.76, "end": 2483.76, "text": " than humans." }, { "start": 2483.76, "end": 2487.84, "text": " And the question is simply is this next year in two years in five years?" }, { "start": 2487.84, "end": 2490.44, "text": " I cannot tell you, but I'm excited to see." }, { "start": 2490.44, "end": 2494.32, "text": " I hope you like this talk analysis interview analysis." }, { "start": 2494.32, "end": 2496.44, "text": " If you want more of these things, let me know." }, { "start": 2496.44, "end": 2500.7200000000003, "text": " Otherwise, let me know what you think in the comments and I'll see you next time." }, { "start": 2500.72, "end": 2524.12, "text": " Bye bye." } ]
-_2AF9Lhweo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Linformer: Self-Attention with Linear Complexity (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "facebook", "linear", "quadratic", "transformer", "attention", "self-attention", "multi-head attention", "t2t", "vasvani", "bert", "devlin", "roberta", "glue", "language modeling", "perplexity", "dot product", "johnson", "lindenstrauss", "random projection" ]
Transformers are notoriously resource-intensive because their self-attention mechanism requires memory and compute that grow quadratically with the length of the input sequence. The Linformer Model gets around that by using the fact that often, the actual information in the attention matrix is of lower rank and can be approximated. OUTLINE: 0:00 - Intro & Overview 1:40 - The Complexity of Self-Attention 4:50 - Embedding Dimension & Multiple Heads 8:45 - Formal Attention 10:30 - Empirical Investigation into RoBERTa 20:00 - Theorem: Self-Attention is Low Rank 28:10 - Linear Self-Attention Method 36:15 - Theorem: Linear Self-Attention 44:10 - Language Modeling 46:40 - NLP Benchmarks 47:50 - Compute Time & Memory Gains 48:20 - Broader Impact Statement 49:55 - Conclusion Paper: https://arxiv.org/abs/2006.04768 Abstract: Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses O(n2) time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from O(n2) to O(n) in both time and space. The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient. Authors: Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're going to look at Linformer: Self-Attention with Linear Complexity by Sinong Wang, Belinda Li, Madian Khabsa, Han Fang and Hao Ma of Facebook AI. So on a high level this paper observes that often, the way we build transformers, the self-attention matrix is low rank and can be approximated by first projecting the signal to a lower dimensional space and then performing these inner products that are responsible for attention in there. And thereby you save a lot of the complexity of multiplying full sequence length by full sequence length matrices, but instead do these operations in the lower dimensional space. And they achieve a linear scaling of the transformer attention, and we'll figure out how that is. As always, if you like content like this, consider subscribing, sharing, liking and commenting if you feel like it. Okay let's dive in. They say large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. Okay, so if you don't know what a transformer model is, you can watch my video on the paper Attention Is All You Need. That was sort of the beginning of these transformers and it introduces the attention mechanism that we're going to look at today. If you don't know what an attention mechanism is, you're not going to have a fun time in this paper. They say however training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the transformer uses n squared time and space with respect to the sequence length. Now why is that? So really shortly, to recap the attention mechanism: these transformers, at the most basic, let's say they transform one sequence into another. So here we have five tokens and the next layer will output five tokens. Okay, so five tokens in, five tokens out, and the question is how do you route information between these five tokens from the first layer to produce the next layer. In a feed-forward network you would simply connect everything to everything and sort of learn the weights of these connections. That's not what we do here. In a convolutional network you would simply connect each node to its immediate neighbors, like this. But this is also not what we do here. What we do here is we route the information according to the information itself. So according to the incoming information right here, we route the information that goes out, and we do that by expressing queries and keys. So this incoming information is transformed first of all into what are called keys. Now keys are simply vectors, so each node is going to expose a vector right here, and so will each node in the higher layer. Now these are produced from the same information down here, but I'm going to draw it conceptually on the higher layer. So each node here is going to expose a query, which is sort of calling for what kind of information it wants from the lower layer, and the key is sort of exposing what type of information this node contains right now. Now the information is simply routed by looking at the inner products of the keys and the queries. So this information right here would probably be routed to this node right here, whereas this one would probably be routed here. This one would be routed here. In fact this is a soft assignment, so it's not a hard routing, it's a soft routing.
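To make the routing just described concrete, here is a minimal numpy sketch of single-head dot-product attention. It is a toy illustration of the mechanism explained in the video, not code from the paper, and all sizes are made up for the example; the n-by-n weight matrix it builds is exactly the object whose n squared cost the paper attacks.

```python
# A minimal numpy sketch of the routing described above: every query takes an
# inner product with every key, the softmax turns that into a soft assignment,
# and the values are mixed accordingly.
import numpy as np

n, d = 5, 16                                   # sequence length, embedding dim
rng = np.random.default_rng(0)
Q = rng.normal(size=(n, d))                    # one query per output token
K = rng.normal(size=(n, d))                    # one key per input token
V = rng.normal(size=(n, d))                    # the information being routed

scores = Q @ K.T / np.sqrt(d)                  # n x n: every query vs every key
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax: soft routing
out = weights @ V                              # each output is a mix of values
print(weights.shape, out.shape)                # (5, 5) (5, 16)
```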
Everything is routed to everything with different weights, but the majority goes to the place where the inner product is high, and this one is again routed here. So that's the attention mechanism. In order to do this, we need to compute the inner product of every single one of these queries with every single one of these keys, and if our sequence is of length n, that requires n squared operations. Now, there is another parameter to pay attention to: these vectors have a certain dimension, which we'll call d, the embedding dimension of the vectors. In modern transformers, you can think of n as something like 512 tokens going into the transformer, and the hidden dimension is in the same order of magnitude, so you can imagine that to be something like 512 as well. Now if you think of these matrices: if you multiply the keys by the queries, the keys are n by d and the queries are d by n. Since n and d are the same size in this case, this matrix is of rank 512. It doesn't have to be, but it's a pretty good bet that it is (maybe it's approximately lower rank). But this isn't actually how modern transformers work, because usually we have multi-head attention, which means we split this inner dimension: we split these vectors into many lower-dimensional vectors and then run the attention mechanism on those. That way you don't have only one attention mechanism, you have multiple attention heads, so you can route different kinds of information. In a modern transformer you might split this into up to 16 different heads, but here let's say we split it into four sub-vectors, each of 128 dimensions. Now this product is only computed on the lower-dimensional vectors, so all of a sudden you no longer have n by d, but something like n by d over four: n is still 512, but the inner dimension is now 128, so the rank of this matrix is going to be 128. Mind you, the thing that comes out is still a 512 by 512 matrix, but it is of rank 128, and that means that even though this matrix contains vectors of size 512, they could be represented accurately in just 128 dimensions. These 512 dimensions only contain information that is 128-dimensional in nature; it's just distributed over 512 dimensions, and most of them are redundant. So in fact, in these modern transformers, this matrix here is low rank, and that's what this paper exploits: we could approximate it with 128 dimensions. Okay, this is our starting point.
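To make that concrete, here is a tiny NumPy sketch (my own illustration, not code from the paper; the sizes match the numbers I just used):

```python
import numpy as np

n, d, heads = 512, 512, 4     # sequence length, model dim, number of heads
d_head = d // heads           # 128 dimensions per head

rng = np.random.default_rng(0)
Q = rng.normal(size=(n, d_head))   # per-head queries
K = rng.normal(size=(n, d_head))   # per-head keys

# the attention logits form an n-by-n matrix: O(n^2) entries to compute and store
A = Q @ K.T
print(A.shape)                     # (512, 512)

# but A is a product with inner dimension d_head, so its rank is at most 128
print(np.linalg.matrix_rank(A))    # 128
```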
They go on and say: in this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from n squared to n in both time and space. The resulting linear transformer, the Linformer, performs on par with standard transformer models while being much more memory- and time-efficient. Alright, so let's dive into their formulation. This is how they write the attention mechanism: right here, the attention has queries and keys, as you can see. Now, these W matrices you can largely ignore: they are simply d by d matrices that apply a linear transformation to the queries and keys, and you can overlook them for the arguments in this paper. So these are the keys and queries we talked about, and the values here are the actual information that's being routed. What we want to do is compute this product between queries and keys, scale it appropriately, run it through a softmax operation (that means we normalize it such that the distribution sums to one), and then route the information according to that distribution. That's how they formulate an attention mechanism. Now notice something: the thing in here is what they call the matrix A, and this is what I've demonstrated to be low rank. But the actual thing that needs to be low rank for their paper to hold is the matrix P, which is different, because it comes after the softmax. If the matrix P is low rank, then you have a legitimate claim of approximating this routing via a low-rank matrix. However, if P is not low rank, you don't. Alright. Now, the first thing they're going to show is that this is in fact low rank: self-attention is low rank. For that, they make an empirical investigation into RoBERTa. RoBERTa is a model based on BERT (I have made videos on both BERT and RoBERTa, I believe, if you want to go look those up), and it is one of these transformer models. They take two data sets, Wiki103 and IMDB, run them through the model, and look at this P matrix, that is, at how this information-routing matrix is built, and then they calculate its eigenvalues. By looking at the eigenvalues, you can assess the rank of a matrix, broadly speaking. If you list the eigenvalues in order of their size, then a matrix with high rank has a slowly decaying profile: as you go to the next and next eigenvalue, they drop gradually, like an ordered set of uniformly distributed numbers, and no particular dimension carries much more information than any other. However, if the matrix is approximately low rank, the profile looks different: most of the information is concentrated in very few dimensions, the ones with very high eigenvalues, and most dimensions carry almost no information. What you see in the figure is the cumulative sum of these eigenvalues. If the matrix were high rank, you would expect a curve that rises slanted but not steeply; if it is very low rank, you would expect a curve that shoots up into the corner right away. They show that the general shape here has this kind of kink to it, as you can see. Also notice that the axis starts at 0.4, so the curve actually comes from further down, goes up, and then flattens out like this.
So I feel they have a legitimate claim here that these matrices are approximately low rank. They also look at how this develops inside the layers (I don't actually know at which layer the previous plot is, or whether it's aggregated over all layers): they always look at the 128th eigenvalue, and they discover that as you go deeper into the network, this cumulative eigenvalue gets higher and higher. That means the network puts more and more information into fewer and fewer dimensions in this routing as you go up the layers; the spectrum gets more and more skewed, more and more into that corner, so their claim appears to become more and more true. Now, I have thought about this a little, and I've tried it out a bit myself, so I invite you to follow me here shortly. Right here I have a matrix that is just a random Gaussian matrix of size 512 by 512. If we look at its spectrum (I have this svd function that simply gives me the singular values), you can see that it falls off uniformly, and that results in a pretty much flat, or slowly ascending, cumulative-sum curve. Now, if we actually have a low-rank matrix, this looks different: it has the typical kink in it. We can demonstrate that by making a lower-dimensional matrix, say 512 by 128 (mind that its spectrum only goes to 128, because we only get back 128 singular values), and constructing from it a matrix that is actually 512 by 512 but only of rank 128. You can see that at the 128th singular value, the cumulative sum snaps right to one, which is sort of like what they have. Okay, so we've seen the difference between a higher-rank matrix and a low-rank matrix in this cumulative-sum plot. Now I want to go back to the original matrices. Of course, the routing matrices they look at are not Gaussian; they are not distributed with mean zero and a nice variance. They are the result of a softmax operation, and in particular that means they are all positive, so their mean is not zero. If you take a data set whose mean is not zero and calculate the eigenvalues, or in this case the principal components, you will find that the first one is very strong, because it must account for the fact that the mean is not at the center, or the first few will be like this. So maybe we can replicate this right here. Let's first take the absolute value of M: not much of a change, but you already see that the axis doesn't start at zero (let me fix the axis limits so you can see it), so you already get this sort of kink. Now let's put it through the softmax: that also gives you this kink. You might think this kink looks a lot smaller than the other one, but if we simply modify the standard deviation of this random matrix, you can see that the spectrum immediately changes, because of the interaction between the softmax and the standard deviation. If I only changed the standard deviation of the plain M matrix (we can actually try this), it wouldn't do much; it would look pretty much the same, just differently scaled. But in interaction with the softmax, it changes the spectrum dramatically. And as you know, these transformers always have layer normalization and so on, so if these entries are roughly Gaussian, the standard deviation before the softmax is probably a lot smaller than one. So let's go with something smaller than one, run it, and you can see that the kink immediately appears. Now, it's not the same as the real attention spectrum, because this one is a lot smoother, but still: I feel this might not actually be a result of this being an attention mechanism; it might simply be a result of applying a softmax. That doesn't change the fact that it is approximately a low-rank matrix, and everything they say holds, but maybe one should also look into why exactly this happens. In any case, it is approximately low rank, and they've demonstrated this empirically.
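If you want to replay the experiment, here is a small NumPy sketch of what is shown on screen (my own reconstruction, not the original notebook):

```python
import numpy as np

def cum_spectrum(A):
    """Normalized cumulative sum of singular values, as in the paper's plots."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.cumsum(s) / s.sum()

def softmax_rows(X):
    E = np.exp(X - X.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
M = rng.normal(size=(512, 512))

print(cum_spectrum(M)[127])        # plain Gaussian: slowly ascending, well below 1

low = rng.normal(size=(512, 128)) @ rng.normal(size=(128, 512))
print(cum_spectrum(low)[127])      # rank-128 matrix: already 1.0 at the 128th value

# interaction of the softmax with a small pre-softmax standard deviation:
# the kink appears even though M itself is not low rank
print(cum_spectrum(softmax_rows(0.1 * M))[127])
```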
Now they go on to their first theorem: below, we provide a theoretical analysis of the above spectrum results. Theorem 1 says that self-attention is low rank, and we'll just glance at it for now. It says: for any of these query, key and value matrices (and the W matrices, which again you can ignore), and for any column vector w of the matrix VW, which is the information that needs to be routed, there exists a low-rank matrix P tilde. This P tilde is going to be their low-rank stand-in for the P matrix: you can see it is still n by n, but it is low rank, in fact of the order of the logarithm of the rank that the full matrix could have (as we've already seen, the full matrix doesn't actually reach full rank, but okay). And this is the type of guarantee you get. What do we see here? It says that this distance is smaller than the norm of one of these routed vectors times an error coefficient epsilon, and that this holds with high probability. So the entire formula basically says: this norm is small. And what is that norm? It is the distance between two things: the information routed with P tilde, and the information routed with the original P. In other words, if I route my information using the approximation P tilde, I won't end up too far from where I would have ended up had I routed my information using the original P matrix. That's what the theorem says. Note that they don't say how to construct P tilde; they simply assert that such a low-rank matrix exists.
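Written out, and glossing over the exact constants, the guarantee has roughly this shape: for any column vector $w$ of the value matrix, there exists a $\tilde{P} \in \mathbb{R}^{n \times n}$ with

$$\operatorname{rank}(\tilde{P}) = \Theta(\log n) \quad \text{and} \quad \Pr\Big(\lVert \tilde{P} w - P w \rVert < \varepsilon \, \lVert P w \rVert\Big) > 1 - o(1),$$

which is exactly the statement that routing with $\tilde{P}$ stays close to routing with $P$.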
The proof of this is sort of worth looking at. It uses the Johnson-Lindenstrauss lemma, or JL for short, and they're going to get the theorem out of the JL. Now, the Johnson-Lindenstrauss lemma in its classic form says something like this: if I have data in a high-dimensional space, here a three-dimensional space, and I project it with a certain kind of projection matrix (the JL gives conditions on what these projections can be, but, for example, a randomly sampled matrix with zero-mean Gaussian entries of variance one over k, where k is the dimension you project into, does the trick) down to a lower dimension, here dimension two, then the projected data is related to the original data in that the distances between the points will not be distorted too much. The pairwise distances are approximately preserved through the projection. That's the Johnson-Lindenstrauss lemma. Now, notice: there is no reference anywhere to whether this data is or isn't low rank. It's simply high-dimensional data projected to a lower dimension, with distances approximately preserved. And in this theorem (I've looked at it for a while now), they simply define the P matrix via the attention mechanism. You can see the matrix A we've discussed before, which actually is low rank, but we don't yet know whether its softmax is. They write P as the exponential of each entry of A divided by this diagonal matrix (in the softmax, of course, you have the exponential of each entry divided by the sum of the entries, and they write that as a product of two matrices), but ultimately this is just some matrix. All they do then is take this P matrix and apply the Johnson-Lindenstrauss lemma via the projection matrix R, where R has Gaussian entries as I said, the special type of projection the JL addresses. The lemma then says: if you project with R in this manner and obtain P tilde, and use P tilde instead of P, the result is going to be very close. In fact, you can reformulate the JL into different variants, which give you statements like this one: the distance between the projected version and the unprojected version is at most epsilon times the norm of the unprojected version, and that is equivalent to saying the distances are preserved. But notice: nowhere in this theorem does the fact appear that this is self-attention, and nowhere does the fact appear that the inner matrix A is low rank, or even that A exists. You could do this with any matrix P. The JL doesn't concern itself with the nature of P: any sort of high-dimensional data can be projected to low-dimensional data, and the statement holds if you choose the projection correctly, which they do right here. So to claim that this theorem proves that self-attention is low rank is, to me, a statement that is not quite warranted. The theorem should rather read something like 'the Johnson-Lindenstrauss lemma exists'. Convince me otherwise, but that's how it looks to me.
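As a quick sanity check on the lemma itself (again my own sketch, not from the paper): a random Gaussian projection from 512 down to 128 dimensions approximately preserves pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 512, 128                                   # original and projected dimension

X = rng.normal(size=(10, n))                      # 10 data points in n dimensions
R = rng.normal(scale=1/np.sqrt(k), size=(n, k))   # JL projection, entries N(0, 1/k)
Y = X @ R                                         # the same 10 points in k dimensions

d_orig = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
d_proj = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)

off = ~np.eye(10, dtype=bool)                     # skip the zero self-distances
print(np.abs(d_proj[off] / d_orig[off] - 1).max())  # modest relative distortion
```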
So they go with it. They say: given the low-rank property of the context mapping matrix P (and again, I disagree that this has been shown, except empirically), one straightforward idea is to use singular value decomposition to approximate P with a low-rank matrix P_low, as follows. So what you could do is simply learn these low-rank factors and approximate P through them: you decompose P as such, and then you have these cheaper inner products in dimension k. But, they say, this approach requires performing an SVD for each self-attention matrix, which adds additional complexity; therefore, we propose another approach for a low-rank approximation that avoids this added complexity. Okay, so now they come up with their model, and it goes as follows. On the left you see a classic attention mechanism with its projections built in. What they propose is: let's project the matrix K using one of these random projections, and then do the attention routing, where you multiply K and Q, put the result through the softmax, and use it to route the values. If we build in this projection matrix that takes K to a lower dimension, the inner products are no longer as expensive. Now, the important part to see here is which dimension this projection acts on. The first thing you might think is that you project the hidden dimension d down to a lower dimension, and that is not the case here. You actually project the n. Conceptually (forget about this W matrix for a moment): you have this n by d matrix, the keys, where n is the sequence length and d is the dimension, and you project it with a matrix that is k by n. You reduce the sequence length. You can see from this matrix why that might work: because n is much larger than d, the matrix can be at most of rank d, so you should not lose too much. You should be able to preserve the information if you project the n down to a k, as long as k is still larger than d, or approximately in the same order of magnitude, and you do it in a smart way. So conceptually: if we have our five-token sequence, and the next layer produces five tokens again, we first say: we know the information we want is not five-dimensional, it's actually two-dimensional (let's say this inner dimension d is two as well, so each token exposes a two-dimensional vector). So we first project the sequence of length five down to a sequence of length two, and we do that in a random manner: a random Gaussian matrix assigns the weights that mix the five into the two. Again, the JL works for any sort of data, but in my argumentation: if you believe this is of rank two, then you shouldn't lose too much information by projecting it to a sequence of length two. And now we run the attention mechanism on that: we expose the keys from the projected sequence and the queries up here, and instead of routing five things to five things, you only route five things to two things. So instead of O of n squared, you now have O of n times k. This is the idea: you project the sequence length, and it rests on the fact that the sequence length is much larger than the dimensionality, so you can preserve the information if you project in a smart way. They build this in the following fashion. Where the attention before was between the queries and the keys directly, they now build in this projection matrix that projects the keys into a lower-dimensional sequence, such that the result is an n by k attention matrix: you don't need to route n by n things, only n by k things, so the routing table in here is now n by k.
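Putting the pieces together, a minimal single-head sketch of the scheme might look like this (my own illustration: the learned W projections are left out, and E and F are taken as fixed random JL-style matrices, which is how I read the paper here):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

n, d, k = 512, 64, 128          # sequence length, head dimension, projected length
rng = np.random.default_rng(0)

Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))

# project the *sequence* axis of keys and values from n down to k
E = rng.normal(scale=1/np.sqrt(k), size=(k, n))
F = rng.normal(scale=1/np.sqrt(k), size=(k, n))

P = softmax(Q @ (E @ K).T / np.sqrt(d))   # routing table is n-by-k, not n-by-n
out = P @ (F @ V)                          # route the down-projected values back up
print(P.shape, out.shape)                  # (512, 128) (512, 64)
```

Both big matrix products now cost on the order of n times k times d, instead of n squared times d.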
Now, the next layer actually needs to produce a sequence of length five again; we always transform a sequence of length five into a sequence of length five. But now this n corresponds to the next layer, and this k corresponds to the down-projected sequence of the last layer, and for that to fit, we of course also need to down-project the information that we're routing: if we down-project the routing table, we must also down-project the routed values. We do this with a similar matrix F, also sampled in this special way, and that gives us a k by d matrix: we have projected the sequence to length k. If we multiply these two things, we of course get out an n by d matrix, the signal for the next layer. So an n by d signal comes in down here, it's projected down to sequence length k, it's routed back up to sequence length n, and you again have an n by d matrix. Cool. So that's how they do it, and they build this into the transformer. Now, as I understand it, these projection matrices are not learned: they are built up in this JL-prescribed way and fixed once, and that's that, at least as I understand it, so there are no additional learnable parameters. Here they have a demonstration where they increase the sequence length, and you can see the batch size decreases, but that's just to keep the total number of flops the same. As the sequence length increases, the standard transformer's inference time goes up (and mind, this is not a linear scale, it's a log-2 scale), so it rises with the sequence length, and it should rise quadratically. You can also see that the Linformer stays fairly constant for the same k. Of course, as you increase the Linformer's k, the inference time goes up, because it now depends on n times k rather than on n times n. So let's look a bit further at how you have to choose that k. In the first theorem, there was already a hint: you had to choose k on the order of 5 log n, and that is a problem. With a log n in there, O of n k equals O of n log n; that's not linear, that's actually the same as the Reformer. But they want to get to a linear place, and Theorem 2 shows how you can make self-attention truly linear. There, you get to choose k as the minimum of two terms, and one of them is independent of n. That means that as n grows, the minimum is eventually the term on the left, which depends only on d: you have a d log d in there. And that makes sense, because at the very beginning we said that d is actually much smaller than n, which means the information contained in these matrices has rank at most d. So if we down-project to k, we should adjust k to what d is: if we choose k to be about the same order as d, we are guaranteed to not lose too much information.
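In symbols, and glossing over the paper's exact constants, the takeaway of Theorem 2 is roughly

$$k = \Theta\!\left(\frac{d \log d}{\varepsilon^2}\right) \text{ for large } n \quad\Longrightarrow\quad O(nk) = O\!\left(n \cdot \frac{d \log d}{\varepsilon^2}\right),$$

which is linear in $n$ for fixed $d$ and error tolerance $\varepsilon$.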
So now we choose k according to d instead of according to n, and therefore the computation is linear in n: n times k is like n times d log d, so linear in n and polynomial in d. How do we get there? The first thing they do is make these Johnson-Lindenstrauss-style statements again, but instead of the general statement, they plug in their actual modified attention mechanism. They get a bound on a distance that says: if I route my information (this is the information to be routed) using the original softmax, where this inside is the matrix A of the original attention mechanism, I won't be too far from routing my information using the modified attention mechanism. Now, the mathematically tricky part here, I believe, is exactly the softmax, which is what I alluded to earlier. The softmax is the tricky part because if the softmax weren't here, this would simply be a projection down and a projection up, and the JL lemma would almost apply as written; you wouldn't have to do anything. But the question is: if the thing inside the softmax is low rank, can you claim that the softmax of it is also low rank? It's not entirely clear. Oh yes, we've actually done this: we've taken the softmax of a low-rank matrix. We've already seen the low-rank matrix itself and how its cumulative spectrum immediately snaps to the upper axis after 128. If we do the same thing for the softmax of that matrix (we probably have to take away the first few dimensions, so let's start looking from dimension 100), okay: same thing. That's pretty good; I did not expect that. Hi there, this is Yannic from the future. I've realized I've been an idiot in how I constructed these low-rank matrices right here, namely by multiplying a thin slice of M with its own transpose. A better way to do it is to construct two independent 128-dimensional matrices, like two sub-slices of M, multiply those together, and look at the SVD. And as you can see, the softmax of this is now not super low rank anymore. It's still low rank, but it's not hard low rank: if I look at the matrix without the softmax, it has a sharp drop at 128, which tells us it really is of rank 128, which we already knew. But if we introduce the softmax, that drop vanishes: the result is no longer exactly 128-dimensional, only approximately low rank. Alright, back to Yannic in the past, who is wholly surprised that multiplying a slice of M with its own transpose gave back essentially the same picture.
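Here is the corrected construction as a sketch (again my reconstruction, not the original notebook):

```python
import numpy as np

def softmax_rows(X):
    E = np.exp(X - X.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# two *independent* factors, not a slice multiplied with its own transpose;
# divide by sqrt(128) so the entries have roughly unit variance
A = rng.normal(size=(512, 128)) @ rng.normal(size=(128, 512)) / np.sqrt(128)

s = np.linalg.svd(A, compute_uv=False)
print(s[126:130])   # hard cliff: singular values collapse to ~0 after the 128th

s2 = np.linalg.svd(softmax_rows(A), compute_uv=False)
print(s2[126:130])  # no hard cliff anymore: only approximately low rank
```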
Okay, but the mathematical difficulty still remains, and here is their answer. They have a first version where they pretty much plug everything into the JL again, and they get out that k needs to be of order log n. But, they say, this result does not utilize the low-rank property of matrix A, and the resulting k has a dependency on the sequence length n. Then, in the appendix, they finally go through the math to show that if they choose E and F like this, they can pull the projection out and show that k is independent of n. I think the main step in this proof is step B, where they use the fact that the exponential function is Lipschitz continuous in a compact region: then we can choose a small enough delta such that, as you can see here, this directly relates the projection matrix inside the exponential function to the projection matrix outside of it. You can basically say: if I project first and then apply the exponential function, that's not too different from first applying the exponential function and then projecting. That's the catch here. Note that they only do this for the exponential function, not for the actual softmax; throughout, both here and in their statements, it's the exponential function, while the softmax is the exponential function divided by the sum of the exponential functions. But I believe this generalizes straightforwardly. Alright: for given choices of delta and k, they have shown that the Linformer can in fact do in a linear fashion what a transformer does in a quadratic fashion, and it's not too far off. That's their point right here. Now to the results on these benchmarks; sorry, let's first go to the perplexities in language modeling. They show right here that they can pretty much keep up with the standard transformer, as you can see. Now, remember that the computation is n times k, so a Linformer with a k of 256 (instead of n by n, it's n times k) won't save you too much in that case, but it's also not too surprising that you get the same performance, because the standard transformer is probably distributed over more than two heads, so the information necessarily has a lower dimensionality than 256 anyway. One thing I want to draw attention to, though: you can see that training isn't really done yet here, and the standard transformer sort of surpasses all of these models towards the end. I wonder what happens after that; I wouldn't be surprised if they end up at the same place, but I also wonder if they diverge even more right after this. They also compare with a higher sequence length, and there the standard transformer outperforms the Linformer, but of course the point is that the Linformer is much, much faster and can still roughly keep up. Also, about the scale of the perplexity axis: these are percentage points of perplexity, and I can't actually tell whether that matters or not. I think in the original transformer paper the perplexities hovered between three-point-something and five-point-something, so these might actually be significant differences; I'm not sure. They also investigate different methods of sharing the weights of these projections, and they don't seem to find real differences, but I don't want to go into that because this video is already really long. Then they look at what happens if they increase the sequence length that goes into the Linformer, and you can see that the Linformer can deal with higher sequence lengths and arrive at the same perplexities; though again, I don't know how different these really are, and the scale here is larger than before. So how does this fare on the benchmarks where you first pre-train a transformer with language modeling and then use it for certain NLP tasks? Here you can see that the Linformer is on par with the original transformer in some of these tasks, but you can also see a pattern of pretty wild results.
Sometimes the Linformer will be better than the original, but then variants of the Linformer will be worse, sometimes even worse than the original, and sometimes better again. Sometimes this Linformer variant is good, and sometimes the original model is the best. This points to the general claim that the Linformer doesn't destroy your gains, but it's also not a better model; it's simply a faster model that, in some tasks, can keep up with the original model. And they show, and this is the real deal here, that as you go up in sequence length, the performance gains and the memory gains you get from the Linformer are dramatic. The longer you go, and the lower the dimension you project to, the bigger these gains, but of course, the more performance you potentially lose. Hello again, Yannic from the future; just wanted to draw your attention to this beautiful broader impact statement in this paper. It says: our work focuses on making transformers more efficient. Everything cool. Potential positive impacts of efficient transformers. That's pretty cool. It also has potential impact on training transformers on images, since we can support very long sequences. Very cool. Furthermore, there are positive environmental benefits. Very cool. I mean, these are all very cool things. And they say: as such, we see no immediate negative ethical or societal impacts of our work beyond what applies to the core building blocks of deep learning. Now this, honestly, I agree with. I completely agree with them that this is a good thing: you might trade off some accuracy, you might make some approximations, but you get a much faster model, and any model can be used for things. The idea that they would now have to pull out of thin air some way in which, over five steps of intermediate applications, this could be used for bad just seems ridiculous. So good on them for defying the 'please also think about negative impacts' template right here. Alright, back to past Yannic. This was the Linformer paper. I hope this somewhat made sense to you; I had to read it multiple times for it to make sense to me. But ultimately, it's all about the fact that you have these multiple heads, and therefore your information is probably lower-dimensional, and you can exploit that to just calculate in this lower dimension. Alright, I'll see you next time. Bye bye.
[ { "start": 0, "end": 4.96, "text": " Hi there! Today we're going to look at Linformer self-attention with linear" }, { "start": 4.96, "end": 11.84, "text": " complexity by Sinon Wang, Belinda Li, Madian Kabsa, Han Feng and Hao Ma of" }, { "start": 11.84, "end": 17.94, "text": " Facebook AI. So on a high level this paper observes that often the way we" }, { "start": 17.94, "end": 24, "text": " build transformers the self-attention matrix is low rank and can be" }, { "start": 24, "end": 29.080000000000002, "text": " approximated by first projecting the signal to a lower dimensional space and" }, { "start": 29.08, "end": 33.48, "text": " then performing these inner products that are responsible for attention in" }, { "start": 33.48, "end": 40.76, "text": " there. And thereby you save a lot of the complexity of multiplying full sequence" }, { "start": 40.76, "end": 46.84, "text": " length by full sequence length matrices but instead do these" }, { "start": 46.84, "end": 53.76, "text": " operations in the lower dimensional space. And they achieve a linear scaling" }, { "start": 53.76, "end": 60.64, "text": " of the transformer attention and we'll figure out how that is. As always if you" }, { "start": 60.64, "end": 65.48, "text": " like content like this consider subscribing, sharing, liking and" }, { "start": 65.48, "end": 73.4, "text": " commenting if you feel like it. Okay let's dive in. They say large transformer" }, { "start": 73.4, "end": 77.8, "text": " models have shown extraordinary success in achieving state-of-the-art results in" }, { "start": 77.8, "end": 83.44, "text": " many natural language processing applications. Okay so these, if you don't" }, { "start": 83.44, "end": 87.36, "text": " know what a transformer model is you can watch my video on the paper" }, { "start": 87.36, "end": 92.44, "text": " Attention is all you need. That was sort of the beginning of these" }, { "start": 92.44, "end": 96.75999999999999, "text": " transformers and it introduces the attention mechanism that we're going to" }, { "start": 96.75999999999999, "end": 100.72, "text": " look at today. If you don't know what an attention mechanism is you're not going" }, { "start": 100.72, "end": 106.96, "text": " to have a fun time in this paper. They say however training and deploying these" }, { "start": 106.96, "end": 111.92, "text": " models can be prohibitively costly for long sequences as the standard self" }, { "start": 111.92, "end": 117.12, "text": " attention mechanism of the transformer uses n squared time and space with" }, { "start": 117.12, "end": 122.44, "text": " respect to the sequence length. Now why is that? So really shortly to recap" }, { "start": 122.44, "end": 128.72, "text": " recap this the attention mechanism. These attention, these transformers they" }, { "start": 128.72, "end": 134.44, "text": " transform for basics let's say they transform one sequence into another. So" }, { "start": 134.44, "end": 141.28, "text": " here we have five tokens and the next layer will output five tokens. Okay so" }, { "start": 141.28, "end": 146.2, "text": " five tokens in five tokens out and the question is how do you route" }, { "start": 146.2, "end": 152.48, "text": " information between these five tokens from the first layer to produce the next" }, { "start": 152.48, "end": 156.92000000000002, "text": " layer. 
In a feed-forward network you would simply connect everything to" }, { "start": 156.92000000000002, "end": 162.56, "text": " everything and sort of learn the weights of these of these connections. That's not" }, { "start": 162.56, "end": 166.12, "text": " what we do here. In a convolutional network you would simply connect each" }, { "start": 166.12, "end": 172.36, "text": " node to its immediate neighbors like this. But this is also not what we do here." }, { "start": 172.36, "end": 176.98000000000002, "text": " What we do here is we route the information according to the information" }, { "start": 176.98000000000002, "end": 181.36, "text": " itself. So according to the incoming information right here we route the" }, { "start": 181.36, "end": 186.64000000000001, "text": " information that goes out and we do that by expressing" }, { "start": 186.64000000000001, "end": 192.32, "text": " queries and keys. So this incoming information is transformed first of all" }, { "start": 192.32, "end": 197.28, "text": " into what are called keys. Now keys are simply vectors so each node is going to" }, { "start": 197.28, "end": 203.23999999999998, "text": " expose a vector right here and each node in the higher layer. Now these are" }, { "start": 203.23999999999998, "end": 207.44, "text": " produced by the same from the same information down here but I'm going to" }, { "start": 207.44, "end": 212.24, "text": " draw it conceptually on the higher layer. So each node here is going to expose a" }, { "start": 212.24, "end": 217.64, "text": " query which is sort of like calling the query is calling for what kind of" }, { "start": 217.64, "end": 221.6, "text": " information do you want from the lower layer and the key is sort of exposing" }, { "start": 221.6, "end": 229.12, "text": " what type of information this node contains right now. Now the information" }, { "start": 229.12, "end": 234, "text": " is simply routed by looking at the inner products of the keys and" }, { "start": 234, "end": 238.72, "text": " the queries. So this information right here would probably be routed to this" }, { "start": 238.72, "end": 244.35999999999999, "text": " node right here whereas this one would probably be routed here. This one would" }, { "start": 244.35999999999999, "end": 248.28, "text": " be routed here. In fact this is a soft assignment so it's not like a hard" }, { "start": 248.28, "end": 252.56, "text": " routing it's a soft routing. Everything is routed to everything with different" }, { "start": 252.56, "end": 257.56, "text": " weights but the majority goes to the place where the inner product is high" }, { "start": 257.56, "end": 262.24, "text": " and this one is again routed here. So you can see this is the attention" }, { "start": 262.24, "end": 268.04, "text": " mechanism. In order to do this we need to compute the inner product of every" }, { "start": 268.04, "end": 273.96, "text": " single one of these queries with every single one of these keys. And this if" }, { "start": 273.96, "end": 281.28, "text": " our sequence length here is of length n is going to require n squared" }, { "start": 281.28, "end": 288.56, "text": " operations. Now here is another parameter we need to pay attention." }, { "start": 288.56, "end": 293.64, "text": " These vectors here they have a certain dimension and the certain dimension" }, { "start": 293.64, "end": 300.35999999999996, "text": " we're going to call D. The inner the embedding dimension of the vectors. 
Now" }, { "start": 300.36, "end": 308.40000000000003, "text": " in modern transformers you can think of n as something like maybe 512 tokens go" }, { "start": 308.40000000000003, "end": 313.40000000000003, "text": " into a transformer like this. And the hidden dimension here also is in the" }, { "start": 313.40000000000003, "end": 318.44, "text": " same order of magnitude. So you can also imagine this to be something like 512." }, { "start": 318.44, "end": 323.96000000000004, "text": " Now if you think of these matrices if you multiply the keys by the queries" }, { "start": 323.96, "end": 330.32, "text": " however you want to let's do it like this then you have the keys are n by D and" }, { "start": 330.32, "end": 336.56, "text": " the queries are D by n. Now since n and D in this case are the same" }, { "start": 336.56, "end": 342.79999999999995, "text": " dimension this matrix is of rank 512. It doesn't have to be but it's" }, { "start": 342.79999999999995, "end": 348.67999999999995, "text": " a pretty good bet that it's of rank 512. Maybe it's approximately lower rank but" }, { "start": 348.68, "end": 357.08, "text": " now this isn't actually the modern way of transformers as such because usually" }, { "start": 357.08, "end": 361.6, "text": " what we have is multi-head attention which means that we're going to split" }, { "start": 361.6, "end": 366.2, "text": " this inner dimension right here. We're going to split these vectors into many" }, { "start": 366.2, "end": 370.88, "text": " many lower dimensional vectors and then have attention mechanism on these lower" }, { "start": 370.88, "end": 376.36, "text": " dimensional vectors. And that's such that you don't only have one attention" }, { "start": 376.36, "end": 380.62, "text": " mechanism you have multiple attention mechanisms so you can route different" }, { "start": 380.62, "end": 386.88, "text": " kinds of information with these multiple attention heads. Now sometimes you would" }, { "start": 386.88, "end": 391.44, "text": " split this you could split this in a modern transformer up to like 16" }, { "start": 391.44, "end": 395.72, "text": " different heads but here we're going to let's say we're going to split this into" }, { "start": 395.72, "end": 403.12, "text": " four subvectors each of 128 dimensions. So we're going to split this up" }, { "start": 403.12, "end": 408.16, "text": " and now if this product here is only computed on these lower" }, { "start": 408.16, "end": 412.84000000000003, "text": " dimensional vectors so all of a sudden you no longer have n by D but you have" }, { "start": 412.84000000000003, "end": 421.7, "text": " like n by D over 4 and now this is 512 still but this now is 128 so the rank of" }, { "start": 421.7, "end": 427.88, "text": " this matrix is going to be 128. Mind it's still the thing that comes out is" }, { "start": 427.88, "end": 437.15999999999997, "text": " still a 512 by 512 matrix but it is of rank 128 and that means even though this" }, { "start": 437.15999999999997, "end": 444.56, "text": " matrix contains vectors that are of size 512 they could be they could be" }, { "start": 444.56, "end": 453.6, "text": " represented accurately by a matrix that's just 128 dimensions. Okay so these" }, { "start": 453.6, "end": 459.72, "text": " these 512 dimensions actually only contain information that is 128" }, { "start": 459.72, "end": 466.08000000000004, "text": " dimensional in nature. 
It's just distributed over 512 dimensions but most" }, { "start": 466.08000000000004, "end": 472.88, "text": " of these are redundant. So in fact in these modern transformers these thing" }, { "start": 472.88, "end": 478.84000000000003, "text": " here this matrix here is low rank and therefore that's what this paper sort" }, { "start": 478.84, "end": 488.88, "text": " of exploits we could we could approximate this by 128 dimensions. Okay" }, { "start": 488.88, "end": 494.56, "text": " this is our starting point. They go on and they say in this paper we" }, { "start": 494.56, "end": 499.79999999999995, "text": " demonstrate that the self-attention mechanism can be approximated by a low" }, { "start": 499.79999999999995, "end": 504.79999999999995, "text": " rank matrix. We further exploit this finding to propose a new self-attention" }, { "start": 504.8, "end": 510.16, "text": " mechanism which reduces the overall self-attention complexity from n squared" }, { "start": 510.16, "end": 515.08, "text": " to n in both time and space. The resulting linear transformer the Lin" }, { "start": 515.08, "end": 520.28, "text": " former performs on par with standard transformer models while being much more" }, { "start": 520.28, "end": 527.92, "text": " memory and time efficient. Alright so let's dive into their thing. This is how" }, { "start": 527.92, "end": 534.64, "text": " they formulate the attention mechanism. So right here the attention has queries" }, { "start": 534.64, "end": 540.16, "text": " and keys as you can see here. Now these W matrices you can largely ignore. The W" }, { "start": 540.16, "end": 545.84, "text": " simply maps the queries to so this is these are simply d by d matrices that" }, { "start": 545.84, "end": 550.72, "text": " are a linear transformation of the queries. You can sort of overlook them" }, { "start": 550.72, "end": 556.24, "text": " for the arguments in this paper. So these are the keys and the queries we" }, { "start": 556.24, "end": 561.12, "text": " talked about. The values here this is the actual information that's being routed." }, { "start": 561.12, "end": 564.88, "text": " So what we want to do is we want to compute this product between queries and" }, { "start": 564.88, "end": 570, "text": " keys right here and scale it appropriately. But ultimately this is" }, { "start": 570, "end": 577.44, "text": " this product. Then run this through a softmax operation. That means we" }, { "start": 577.44, "end": 583.48, "text": " normalize it such that it sums to one, the distribution sums to one. And then we" }, { "start": 583.48, "end": 591.5600000000001, "text": " want to route this information according to that distribution. So that's" }, { "start": 591.5600000000001, "end": 595.6, "text": " how they formulate an attention mechanism. Now notice something. This thing in here" }, { "start": 595.6, "end": 601.32, "text": " is what they call the matrix A and this is what I've demonstrated to be low rank." }, { "start": 601.32, "end": 608.44, "text": " Now the actual thing that you would need to be low rank for their paper to hold" }, { "start": 608.44, "end": 614.44, "text": " is the matrix P which is different because this is after the softmax right." }, { "start": 614.44, "end": 620.08, "text": " So if the matrix P is low rank then you have a legitimate claim of approximating" }, { "start": 620.08, "end": 626.6800000000001, "text": " this routing via a low rank matrix. However if P is not low rank you don't." 
}, { "start": 626.6800000000001, "end": 634.84, "text": " Okay all right now the first thing they're going to show is that this is in" }, { "start": 634.84, "end": 639.72, "text": " fact low rank. So self-attention is low rank. And for that they make an" }, { "start": 639.72, "end": 647.08, "text": " empirical investigation into Roberta. So Roberta is a model that's based on BERT" }, { "start": 647.08, "end": 653.32, "text": " and I have made videos of both BERT and Roberta I believe. If sorry if you want" }, { "start": 653.32, "end": 658.9200000000001, "text": " to go look those up. But it is one of these transformer models and they take" }, { "start": 658.92, "end": 665.4, "text": " two data sets wiki103 and IMDB and they run them through this model and they" }, { "start": 665.4, "end": 671.8, "text": " look at this P matrix. So they look at how this information routing matrix is" }, { "start": 671.8, "end": 679.4, "text": " built and then they calculate the eigenvalues of that. So you calculate" }, { "start": 679.4, "end": 684.24, "text": " the eigenvalues and by looking at the eigenvalues you can look at the rank of" }, { "start": 684.24, "end": 692.12, "text": " a matrix broadly speaking. So if you list the eigenvalues in order of their size" }, { "start": 692.12, "end": 698.76, "text": " then a matrix that is sort of high dimensional has a high rank would have" }, { "start": 698.76, "end": 707.12, "text": " sort of a slope like this and that means as you go as you go to the next and" }, { "start": 707.12, "end": 712.2, "text": " next and next eigenvalue they drop like if you order a set of uniformly" }, { "start": 712.2, "end": 717.9200000000001, "text": " distributed numbers if you order them then it would look like this right so" }, { "start": 717.9200000000001, "end": 723.12, "text": " there is no particular dimension that's that's better than any or has much more" }, { "start": 723.12, "end": 728.4000000000001, "text": " information than any other. However if the matrix is approximately low rank you" }, { "start": 728.4000000000001, "end": 732, "text": " would look something like this and that would mean that most of the information" }, { "start": 732, "end": 736.74, "text": " is concentrated in very few dimensions and those are the ones with very high" }, { "start": 736.74, "end": 742.96, "text": " eigenvalues and most of the dimensions have no information. The thing you see" }, { "start": 742.96, "end": 747.6800000000001, "text": " here is simply the cumulative sum of these things so if you calculate the" }, { "start": 747.6800000000001, "end": 754, "text": " cumulative sum of this you'll get that over here. So if this is very high rank" }, { "start": 754, "end": 760.72, "text": " you would expect a curve that goes like this sort of slanted but not very. If" }, { "start": 760.72, "end": 765.4, "text": " this is very low rank you would expect a curve that goes very much into the" }, { "start": 765.4, "end": 773.84, "text": " corner right here and they show that the general shape here is such that there is" }, { "start": 773.84, "end": 780.88, "text": " this kind of a kink to it as you can see here. Now also notice that the axis here" }, { "start": 780.88, "end": 785.1999999999999, "text": " starts at 0.4 so actually this comes from down here somewhere and goes up and" }, { "start": 785.1999999999999, "end": 790.12, "text": " then goes like this. 
So they have a I feel they have a legitimate claim here" }, { "start": 790.12, "end": 795.92, "text": " that these matrices are approximately low rank and here they look at I don't" }, { "start": 795.92, "end": 800.68, "text": " actually know at which layer this is or if this is in all of the layers overall" }, { "start": 800.68, "end": 806.24, "text": " or something like this but they look at how this develops inside the layers so" }, { "start": 806.24, "end": 813.48, "text": " they look at the always the 128th eigenvalue and they discover that as" }, { "start": 813.48, "end": 818.94, "text": " they go deeper and deeper into the network this cumulative eigenvalue is" }, { "start": 818.94, "end": 823.2800000000001, "text": " higher and higher that means that network puts more and more information" }, { "start": 823.2800000000001, "end": 829.48, "text": " into fewer and fewer dimension in this routing as you go up the layers so it" }, { "start": 829.48, "end": 834.44, "text": " gets more and more skewed as you go up the layers it gets more and more into" }, { "start": 834.44, "end": 840.48, "text": " this corner right here so their claim appears to be more and more true. Now I" }, { "start": 840.48, "end": 846.44, "text": " have sort of thought about this a little and I've tried it out a bit myself and I" }, { "start": 846.44, "end": 852.24, "text": " invite you to just follow me here shortly. So right here I have a matrix" }, { "start": 852.24, "end": 860.6400000000001, "text": " that is just a random Gaussian matrix of size 512 by 512. If we look at the" }, { "start": 860.6400000000001, "end": 864.6400000000001, "text": " eigen spectrum of that so I have this function SVD it simply gives me the" }, { "start": 864.6400000000001, "end": 871.6400000000001, "text": " eigen spectrum of that then you can see that it sort of falls off uniformly and" }, { "start": 871.64, "end": 882.8, "text": " that will result in a in this cumulative sum of pretty much flat curve or slowly" }, { "start": 882.8, "end": 890.4, "text": " ascending curve like this. 
Now if we actually have a low rank matrix this" }, { "start": 890.4, "end": 894.8199999999999, "text": " would look different this would have this sort of typical kink in it and we" }, { "start": 894.8199999999999, "end": 899.48, "text": " can demonstrate that by making a lower dimensional matrix so let's just take" }, { "start": 899.48, "end": 910.12, "text": " let's just go 512 by 128 of this lower dimensional and and let's look at the MT" }, { "start": 910.12, "end": 917.48, "text": " now this only goes to 128 because we only get back 128 singular values so" }, { "start": 917.48, "end": 923.44, "text": " let's make a lower dimensional matrix that's actually 512 by 512 so if we do" }, { "start": 923.44, "end": 931.72, "text": " this this is sort of what they're doing in this this will construct a" }, { "start": 931.72, "end": 941.44, "text": " 512 by 512 matrix but that is only of rank 128 right and you can see that at" }, { "start": 941.44, "end": 948.84, "text": " the 128 singular or eigenvalue this snaps right at the at the one so it's" }, { "start": 948.84, "end": 955.08, "text": " sort of like what they what they have okay so we've seen the difference between" }, { "start": 955.08, "end": 960.64, "text": " a let's say higher rank matrix and the low rank matrix in this cumulative sum" }, { "start": 960.64, "end": 966.64, "text": " plot now I want to go back to the original matrix right here of course" }, { "start": 966.64, "end": 971.5600000000001, "text": " there the matrices they look at these routing matrices they're not Gaussian" }, { "start": 971.5600000000001, "end": 977, "text": " they're not sort of distributed with mean zero and the nice variance they are" }, { "start": 977, "end": 981.36, "text": " the result of a softmax operation and in particular that means they're all" }, { "start": 981.36, "end": 986.96, "text": " positive and that means that their mean is not zero so if you look at a data" }, { "start": 986.96, "end": 992.24, "text": " set and it's mean it's not zero and you calculate like the the eigenvalues or in" }, { "start": 992.24, "end": 998.64, "text": " this case the principal component you will find that the first one will be" }, { "start": 998.64, "end": 1003.48, "text": " very strong because that must account for the fact that the mean is not at the" }, { "start": 1003.48, "end": 1012.24, "text": " center or the first few will be like this so it is sort of maybe we can" }, { "start": 1012.24, "end": 1018, "text": " replicate this right here so let's say we'll put M through let's first go with" }, { "start": 1018, "end": 1027.6, "text": " the absolute value of M okay not much of a change but you already see that this" }, { "start": 1027.6, "end": 1037.1999999999998, "text": " axis doesn't start at zero so let's go let's actually how do we do this xlim" }, { "start": 1037.1999999999998, "end": 1050.36, "text": " right xlim zero none so haha okay so the first one you simply have to imagine or" }, { "start": 1050.36, "end": 1055.7199999999998, "text": " I can do even something something more we can just put a zero in front here and" }, { "start": 1055.72, "end": 1067.96, "text": " that should do the trick no yes oh that's X I meant Y calm and dumb never" }, { "start": 1067.96, "end": 1072.28, "text": " mind this will work as well so you already get this sort of of kink and" }, { "start": 1072.28, "end": 1082.32, "text": " let's put it into the softmax so we'll put a softmax and that gives you also" }, { "start": 1082.32, "end": 1087, "text": " this kink now 
you might think that wait this is that this kink looks a lot" }, { "start": 1087, "end": 1093.76, "text": " smaller than the other kink so but if we simply modify let's modify the standard" }, { "start": 1093.76, "end": 1098.12, "text": " deviation of this random matrix and you can see that this spectrum immediately" }, { "start": 1098.12, "end": 1103.1599999999999, "text": " changes right because of the interaction now between the softmax and the" }, { "start": 1103.1599999999999, "end": 1108.04, "text": " standard deviation if I only were to change the standard deviation on the" }, { "start": 1108.04, "end": 1115.1599999999999, "text": " normal M matrix and we can actually try this right here that wouldn't do much" }, { "start": 1115.1599999999999, "end": 1119.3999999999999, "text": " that would still look pretty much the same it's just differently scaled but in" }, { "start": 1119.3999999999999, "end": 1124.56, "text": " the interaction with the softmax now this changes the spectrum dramatically" }, { "start": 1124.56, "end": 1129.04, "text": " and here as you know these these transformers have always sort of like" }, { "start": 1129.04, "end": 1134, "text": " layer normalization and so on so probably the standard deviation if we" }, { "start": 1134, "end": 1139.76, "text": " if if these are sort of Gaussian the standard deviation before the softmax" }, { "start": 1139.76, "end": 1147.72, "text": " would be a lot smaller so let's go something like this so smaller than one" }, { "start": 1147.72, "end": 1155.44, "text": " and can we run this please and you can see that this kink immediately appears" }, { "start": 1155.44, "end": 1162.56, "text": " now it's not it's it's it's not the same thing as this other as this here because" }, { "start": 1162.56, "end": 1168.56, "text": " this is a lot smoother as you can see right here but still I feel that this" }, { "start": 1168.56, "end": 1173.36, "text": " might not actually be a result of the you know the fact that this is an" }, { "start": 1173.36, "end": 1178.72, "text": " attention mechanism but it simply might be the result of that you apply a softmax" }, { "start": 1178.72, "end": 1185.08, "text": " now still that doesn't change the fact that it is approximately a lower rank" }, { "start": 1185.08, "end": 1192.8799999999999, "text": " matrix everything they say holds but yeah maybe maybe one should also look" }, { "start": 1192.8799999999999, "end": 1198.6799999999998, "text": " into why exactly that happens but in fact it is low rank okay it is" }, { "start": 1198.6799999999998, "end": 1202, "text": " approximately low rank they've demonstrated this and now they go to" }, { "start": 1202, "end": 1208.4399999999998, "text": " their first first theory below we provide a theoretical answer a" }, { "start": 1208.4399999999998, "end": 1214.56, "text": " theoretical analysis of the above spectrum results okay so the theoretical" }, { "start": 1214.56, "end": 1220.32, "text": " analysis theorem one is self-attention is low rank and we're going to go" }, { "start": 1220.32, "end": 1227.56, "text": " through this just glance at it for now they say for any of these query key" }, { "start": 1227.56, "end": 1233.1599999999999, "text": " values and these matrices which of course you can ignore for now for any" }, { "start": 1233.1599999999999, "end": 1239.6399999999999, "text": " column vector W of matrix V W and W here that's the information that needs to be" }, { "start": 1239.64, "end": 1247, "text": " routed there exists a low rank matrix P 
tilde so this P tilde here is going to" }, { "start": 1247, "end": 1254.8000000000002, "text": " be their low rank approximation of the P matrix you can see it's still n by n but" }, { "start": 1254.8000000000002, "end": 1259.68, "text": " it's going to be low rank in fact it's going to be of the order of the" }, { "start": 1259.68, "end": 1267.0400000000002, "text": " logarithm of the rank of the full matrix or well the full matrix of the rank that" }, { "start": 1267.04, "end": 1271.76, "text": " the full matrix could have as we have already seen the full matrix doesn't" }, { "start": 1271.76, "end": 1279.44, "text": " have full rank but yeah okay so if you use and this is the type of guarantee" }, { "start": 1279.44, "end": 1284.84, "text": " you get so what do we see here it basically means that this distance here" }, { "start": 1284.84, "end": 1291.2, "text": " is smaller than this and this here this is just the norm of one of these vectors" }, { "start": 1291.2, "end": 1297.3600000000001, "text": " projected times this error coefficient epsilon so all it says is that the" }, { "start": 1297.3600000000001, "end": 1302.32, "text": " distance on the left is smaller than something and that occurs with high" }, { "start": 1302.32, "end": 1307.64, "text": " probability okay so the entire guarantee here the entire formula just basically" }, { "start": 1307.64, "end": 1314.04, "text": " means that this thing is small this norm is small what's this norm this norm is" }, { "start": 1314.04, "end": 1319.8400000000001, "text": " the distance between these two things now what are these two things this is" }, { "start": 1319.84, "end": 1324.9199999999998, "text": " the information that we want to route and this is the routing matrix and that" }, { "start": 1324.9199999999998, "end": 1331.08, "text": " simply means that if I route my information using the P tilde this" }, { "start": 1331.08, "end": 1337.8799999999999, "text": " approximation then I won't be too far away as if I had routed my information" }, { "start": 1337.8799999999999, "end": 1344.12, "text": " using the original P matrix okay that's that's it that's what the theorem says" }, { "start": 1344.12, "end": 1348.04, "text": " the theorem says if I route my information using this approximation" }, { "start": 1348.04, "end": 1354.32, "text": " then I am not too far away as it had I routed my information using the original" }, { "start": 1354.32, "end": 1358.92, "text": " routing matrix that I don't say how they're going to construct they simply" }, { "start": 1358.92, "end": 1367.08, "text": " say there exists a low rank matrix like this and the proof of this and it's sort" }, { "start": 1367.08, "end": 1371.92, "text": " of worth looking at the proof of it it uses the Johnson Linden Strauss lemma" }, { "start": 1371.92, "end": 1381.3200000000002, "text": " this thing here or the JL for short and they're going to get this out of the JL" }, { "start": 1381.3200000000002, "end": 1385.96, "text": " now the Johnson Linden Strauss lemma in a classic sense says something like this" }, { "start": 1385.96, "end": 1390.52, "text": " if I have data in a high dimensional space here in a three dimensional space" }, { "start": 1390.52, "end": 1398, "text": " okay I have data distributed and I use a certain kind of projection matrix and" }, { "start": 1398, "end": 1403.48, "text": " there are a number so the the JL gives conditions on what these projections can" }, { "start": 1403.48, "end": 1410.12, "text": " be but for example a randomly sampled 
matrix with zero mean Gaussian entries" }, { "start": 1410.12, "end": 1418.04, "text": " and 1 over K standard deviation where K is the dimension you project into can do" }, { "start": 1418.04, "end": 1424.08, "text": " the trick so if I project my data in a certain way into a lower dimension here" }, { "start": 1424.08, "end": 1431.04, "text": " dimension 2 then the projected data is related to the original data by the fact" }, { "start": 1431.04, "end": 1438.32, "text": " that the distances between the points in the original space will not be distorted" }, { "start": 1438.32, "end": 1443.8799999999999, "text": " too much so the distances between these points are approximately preserved" }, { "start": 1443.8799999999999, "end": 1450.6799999999998, "text": " through this projection okay so that's that's the that's the Johnson Linden" }, { "start": 1450.68, "end": 1455.92, "text": " Strauss lemma now you'll notice here there is no reference to the fact that" }, { "start": 1455.92, "end": 1462.72, "text": " this data is or isn't low rank it's simply high dimensional data projected" }, { "start": 1462.72, "end": 1468.1200000000001, "text": " to lower dimension and the distances are approximately preserved and this theory" }, { "start": 1468.1200000000001, "end": 1474.2, "text": " here and I've looked at it for a while now they simply define okay they define" }, { "start": 1474.2, "end": 1478.64, "text": " this P matrix as this attention mechanism and here you can see the A" }, { "start": 1478.64, "end": 1482.44, "text": " matrix we've discussed before which is actually low rank but we don't know yet" }, { "start": 1482.44, "end": 1489.3600000000001, "text": " if the softmax is they write it as this form right here of the exponential of" }, { "start": 1489.3600000000001, "end": 1496.8000000000002, "text": " each entry of a divided by this diagonal right here so in the softmax of course" }, { "start": 1496.8000000000002, "end": 1500.68, "text": " you have the exponential of each entry divided by the sum of the entries and" }, { "start": 1500.68, "end": 1504.8400000000001, "text": " they write this simply as two matrices but ultimately this is a matrix right" }, { "start": 1504.84, "end": 1510.8799999999999, "text": " here and all they do is they take this P matrix and they apply the Johnson" }, { "start": 1510.8799999999999, "end": 1519.12, "text": " Linden Strauss lemma by having this projection matrix R and R is entries" }, { "start": 1519.12, "end": 1523.8, "text": " from this Gaussian as I said so this is the special type of projection that the" }, { "start": 1523.8, "end": 1530, "text": " JL addresses and then simply says if you pull if you this here is going to be" }, { "start": 1530, "end": 1536.96, "text": " your P tilde so if you project R in this manner and obtain P tilde and then you" }, { "start": 1536.96, "end": 1544.84, "text": " use P tilde instead of P then this this is going to be very close in fact you" }, { "start": 1544.84, "end": 1548.52, "text": " can reformulate the JL into different variants such that it gives you things" }, { "start": 1548.52, "end": 1553.8, "text": " like this things like saying that the distance between this projected version" }, { "start": 1553.8, "end": 1558.36, "text": " and this unprojected version is going to be a constant smaller than a constant" }, { "start": 1558.36, "end": 1562.9199999999998, "text": " time the norms of the unprojected version that is equivalent to saying that" }, { "start": 1562.9199999999998, "end": 1568, "text": " the 
distances are preserved now you can see right here nowhere in this theorem" }, { "start": 1568, "end": 1575.6399999999999, "text": " is the fact that this is self-attention and nowhere in the theorem appears the" }, { "start": 1575.6399999999999, "end": 1581.12, "text": " fact that this inner matrix A is low rank or even that this matrix A exists" }, { "start": 1581.12, "end": 1586.36, "text": " it's you can do this with any matrix P right the JL doesn't concern itself with" }, { "start": 1586.36, "end": 1591.7199999999998, "text": " the nature of this matrix P it says any matrix any sort of high dimensional data" }, { "start": 1591.7199999999998, "end": 1596, "text": " you can project to low dimensional data and this holds if you choose the" }, { "start": 1596, "end": 1601.7199999999998, "text": " projection correctly which they do right here so to claim that this theorem" }, { "start": 1601.7199999999998, "end": 1611.52, "text": " proves that self-attention is low rank to me is a bit it's a bit of a statement" }, { "start": 1611.52, "end": 1618.4, "text": " that is not warranted like this here should read something like the Johnson" }, { "start": 1618.4, "end": 1626.52, "text": " Lindenstrout's lemma exists or something like this it I'm not I'm not sure like" }, { "start": 1626.52, "end": 1634.84, "text": " convince me otherwise but yeah so they go with this so they say given the low" }, { "start": 1634.84, "end": 1640.68, "text": " rank property of the context mapping matrix P now again I disagree that this" }, { "start": 1640.68, "end": 1646.44, "text": " has been shown except empirically one straightforward idea is to use" }, { "start": 1646.44, "end": 1650.4, "text": " singularality composition to approximate P with a low rank matrix P low as" }, { "start": 1650.4, "end": 1654.44, "text": " follows so what you could do is you could simply learn these low rank" }, { "start": 1654.44, "end": 1660.2, "text": " matrices and approximate P through it or you can decompose P as such and then" }, { "start": 1660.2, "end": 1670, "text": " have these easier inner products in dimension K but they say however this" }, { "start": 1670, "end": 1674.48, "text": " approach requires performing an SVD decomposition in each self-attention" }, { "start": 1674.48, "end": 1679.16, "text": " matrix which adds additional complexity therefore we propose another approach" }, { "start": 1679.16, "end": 1687.32, "text": " for a low rank approximation that avoids this added complexity okay so they now" }, { "start": 1687.32, "end": 1691.72, "text": " come up with their model and their model goes as follows so here on the left you" }, { "start": 1691.72, "end": 1696.28, "text": " see a classic attention mechanism with their projections built in what they're" }, { "start": 1696.28, "end": 1704.32, "text": " proposing is they say let's project the matrix K using one of these random" }, { "start": 1704.32, "end": 1710.6, "text": " projections and then this attention routing if you route if you now" }, { "start": 1710.6, "end": 1717.32, "text": " multiply so you multiply K and Q right here K times Q and then you put it into" }, { "start": 1717.32, "end": 1722.48, "text": " the softmax and then you use it to route this W so they say if we build in this" }, { "start": 1722.48, "end": 1728.3600000000001, "text": " projection matrix that will project K to a lower dimension and then we won't have" }, { "start": 1728.3600000000001, "end": 1733.68, "text": " as expensive of inner products now the important part to see here 
is that if" }, { "start": 1733.68, "end": 1737.32, "text": " you think of this lower projection the first thing you think is that you" }, { "start": 1737.32, "end": 1743.6, "text": " project this inner this hidden dimension D right to a lower dimension and that's" }, { "start": 1743.6, "end": 1751.08, "text": " not the case here you actually project the N so in in a conceptual framework so" }, { "start": 1751.08, "end": 1755.4399999999998, "text": " you can see right here forget about this this is this W matrix in a conceptual" }, { "start": 1755.4399999999998, "end": 1760.1999999999998, "text": " framework you see here is this N by D matrix which are the keys so N is the" }, { "start": 1760.1999999999998, "end": 1766.1599999999999, "text": " sequence length and D is the dimensions and what you want to do is you want to" }, { "start": 1766.1599999999999, "end": 1771.6399999999999, "text": " project that by this matrix which is K by N so you want to reduce the sequence" }, { "start": 1771.6399999999999, "end": 1776.6, "text": " length you can see in this matrix right here why that might work because N is" }, { "start": 1776.6, "end": 1784.1599999999999, "text": " much larger than D and that means this matrix can be at most rank D right so" }, { "start": 1784.1599999999999, "end": 1788.84, "text": " you should not lose too much you should sort of be able to preserve the" }, { "start": 1788.84, "end": 1795.56, "text": " information if you project this N to a K where the K if the K is still larger" }, { "start": 1795.56, "end": 1799.9599999999998, "text": " than the D or approximately in the same order of magnitude you should be able to" }, { "start": 1799.9599999999998, "end": 1803.8999999999999, "text": " preserve that information if you do it in a smart way so conceptually if we" }, { "start": 1803.9, "end": 1810.26, "text": " have our five token sequence like here and the next layer produces five tokens" }, { "start": 1810.26, "end": 1817.16, "text": " again what we first do is we say we know we know that the information we want is" }, { "start": 1817.16, "end": 1825.4, "text": " not five dimensional it's actually two dimensional because okay let's say these" }, { "start": 1825.4, "end": 1832.52, "text": " this inner dimension D is is two as well so we have two dimensional vectors each" }, { "start": 1832.52, "end": 1838, "text": " thing exposes two dimensional vectors so we first project the sequence of length" }, { "start": 1838, "end": 1843.8799999999999, "text": " five to a sequence of length two and we simply do that in a random manner so we" }, { "start": 1843.8799999999999, "end": 1850.1, "text": " have a random Gaussian matrix that assigns weights to mix these five into" }, { "start": 1850.1, "end": 1856.48, "text": " these two and again because the JL works for any sort of data but in my" }, { "start": 1856.48, "end": 1862.72, "text": " argumentation if you you know think that this here is low rank it's of rank two" }, { "start": 1862.72, "end": 1867.04, "text": " then you shouldn't lose too much information by projecting it to a" }, { "start": 1867.04, "end": 1872.84, "text": " sequence length two and now we do this attention mechanism so now we expose the" }, { "start": 1872.84, "end": 1879.88, "text": " keys and now we expose the queries up here and now you can see instead of" }, { "start": 1879.88, "end": 1885.08, "text": " routing five things with five things you only have to route five things with two" }, { "start": 1885.08, "end": 1894.3999999999999, "text": " things and so 
instead of having O and squared you now have O N K if K K is the" }, { "start": 1894.3999999999999, "end": 1901.8, "text": " number right here okay so this is the idea you project the sequence length and" }, { "start": 1901.8, "end": 1907.62, "text": " it comes from the fact that the sequence length is much larger than the" }, { "start": 1907.62, "end": 1913.02, "text": " dimensionality and therefore you can sort of preserve the information if you" }, { "start": 1913.02, "end": 1920.52, "text": " project in a smart way they build this in this fashion right here so the" }, { "start": 1920.52, "end": 1926.32, "text": " attention mechanism now before we saw it was between the queries and the keys" }, { "start": 1926.32, "end": 1933.16, "text": " right here they built now this projection matrix here that projects the" }, { "start": 1933.16, "end": 1940.76, "text": " keys into a lower dimensional sequence and the now such that this will result" }, { "start": 1940.76, "end": 1946.64, "text": " in an N by K attention matrix we saw over here you don't need to route N by" }, { "start": 1946.64, "end": 1953.08, "text": " N things you need to route N by K so this this routing table in here is now N" }, { "start": 1953.08, "end": 1959.24, "text": " by K now the next layer as you can see here it actually needs to produce a" }, { "start": 1959.24, "end": 1963.8799999999999, "text": " sequence of length five again right so we always transform sequence of length" }, { "start": 1963.88, "end": 1972.5200000000002, "text": " five into sequence of length five but now we have we have this N corresponds" }, { "start": 1972.5200000000002, "end": 1976.7600000000002, "text": " to the sorry corresponds to the next layer and this K corresponds to the" }, { "start": 1976.7600000000002, "end": 1983.3200000000002, "text": " down projected sequence of the last layer and in order for that to fit we of" }, { "start": 1983.3200000000002, "end": 1987.68, "text": " course also need to down project the information that we're routing so if we" }, { "start": 1987.68, "end": 1991.5600000000002, "text": " don't project the routing table we also need to down project the information" }, { "start": 1991.56, "end": 1998, "text": " that we're routing that's we do this by a similar matrix F that is also sampled" }, { "start": 1998, "end": 2006.96, "text": " in this way in this special way and that gives us a K by D so we have projected" }, { "start": 2006.96, "end": 2011.72, "text": " the sequence to size K and if we multiply these two things again of" }, { "start": 2011.72, "end": 2018.6, "text": " course we'll get out an N by D matrix which is the signal for the next layer" }, { "start": 2018.6, "end": 2027.32, "text": " okay so an N by D signal comes in down here it's projected down to K sequence" }, { "start": 2027.32, "end": 2032.84, "text": " length it's and it's routed up again to N sequence length and you have again an" }, { "start": 2032.84, "end": 2041.36, "text": " N by D matrix here cool so that's how they do it and they build this into the" }, { "start": 2041.36, "end": 2046.6, "text": " transformer now as I understand it these projection matrices again they're not" }, { "start": 2046.6, "end": 2054.56, "text": " learned they are built up in this JL conscribed way they are not" }, { "start": 2054.56, "end": 2061.56, "text": " learned they are fixed once and then that's that's that at least that's how I" }, { "start": 2061.56, "end": 2070.2799999999997, "text": " understand it so there are no more learnable parameters okay 
so here they" }, { "start": 2070.2799999999997, "end": 2075.44, "text": " have a demonstration where they up the sequence length and you can see the" }, { "start": 2075.44, "end": 2080.2400000000002, "text": " batch size decreases but that's just to sort of keep the total amount of flops" }, { "start": 2080.2400000000002, "end": 2085, "text": " to be done the same you up the sequence length and down the batch size as the" }, { "start": 2085, "end": 2090.68, "text": " sequence length increases the standard transformers requirement in inference" }, { "start": 2090.68, "end": 2095.32, "text": " time goes up and this here as you can see this is not a linear scale it's a" }, { "start": 2095.32, "end": 2103.04, "text": " log scale log 2 so this goes up with the sequence length and it should go up" }, { "start": 2103.04, "end": 2108.64, "text": " quadratically right and you can also see that the Lin former keeps fairly" }, { "start": 2108.64, "end": 2114.72, "text": " constant for the same K now of course as you increase the K of the Lin former" }, { "start": 2114.72, "end": 2121.84, "text": " the inference time will go up because now it's dependent on N times K and not" }, { "start": 2121.84, "end": 2129.84, "text": " on N times N okay so let's look a bit further of how you have to choose that" }, { "start": 2129.84, "end": 2136.6800000000003, "text": " K up here in the first theorem we there was a already a hint to it in the first" }, { "start": 2136.6800000000003, "end": 2145.2000000000003, "text": " theorem you had to choose K by 5 log N and this is a problem so here you have" }, { "start": 2145.2000000000003, "end": 2155.1600000000003, "text": " log N that means it's not so O of N K is equal to O of N log N now that's not" }, { "start": 2155.1600000000003, "end": 2159.82, "text": " linear that's actually that's the same as the reformer but they want to get" }, { "start": 2159.82, "end": 2169.6800000000003, "text": " to a linear place and theorem 2 explains goes now to a linear shows how you can" }, { "start": 2169.6800000000003, "end": 2178.92, "text": " make self-attention linear okay they show again blah blah blah blah now you" }, { "start": 2178.92, "end": 2184.44, "text": " have to choose K at the minimum of these two things and you can see right here" }, { "start": 2184.44, "end": 2191.04, "text": " that one of them is independent of N so that means as N grows of course the" }, { "start": 2191.04, "end": 2194.52, "text": " minimum is no longer going to be this here the minimum is actually going to" }, { "start": 2194.52, "end": 2201.16, "text": " be the thing on the left and that is dependent on just D okay so you have D" }, { "start": 2201.16, "end": 2207.6, "text": " log D in here and that makes sense because in the very beginning we said" }, { "start": 2207.6, "end": 2216.08, "text": " hey D is actually much smaller than N and that means the information that is" }, { "start": 2216.08, "end": 2222.72, "text": " contained in these matrices is at most rank D so if we down project to K we" }, { "start": 2222.72, "end": 2228.7599999999998, "text": " should adjust K to what D is right if we adjust K to about the same thing as D" }, { "start": 2228.7599999999998, "end": 2236.7599999999998, "text": " we're guaranteed to not lose too much information so now we choose K" }, { "start": 2236.76, "end": 2242.36, "text": " according to D instead of according to N and therefore the computation is linear" }, { "start": 2242.36, "end": 2251, "text": " in N and N times K is like N times D to log D so 
it's linear in K and linear in" }, { "start": 2251, "end": 2259, "text": " D how do we get there so the first thing they do is they make these sort of" }, { "start": 2259, "end": 2266.1600000000003, "text": " Johnson-Lindenstrout statements again but now instead of the general statement" }, { "start": 2266.16, "end": 2270.92, "text": " they plug in their actual modified attention mechanism so here they have a" }, { "start": 2270.92, "end": 2277.24, "text": " bound on the distance between if I route my this is the information to be routed" }, { "start": 2277.24, "end": 2283.52, "text": " right if I route my information using the original softmax and this in here" }, { "start": 2283.52, "end": 2291.12, "text": " is the matrix A if the original tension mechanism I won't be too far away as if" }, { "start": 2291.12, "end": 2298.68, "text": " I were to route my information using this modified attention mechanism now" }, { "start": 2298.68, "end": 2308.2, "text": " the tricky part here mathematically I believe is that is is exactly the softmax" }, { "start": 2308.2, "end": 2314.2, "text": " what what I alluded to right so this softmax is the tricky part because if" }, { "start": 2314.2, "end": 2318.56, "text": " this weren't a softmax so if the softmax weren't here this would simply be a" }, { "start": 2318.56, "end": 2324.12, "text": " projection down and a projection up and the dilemma would almost apply as it is" }, { "start": 2324.12, "end": 2328.7599999999998, "text": " written right there you wouldn't have to actually do anything but the question is" }, { "start": 2328.7599999999998, "end": 2336.16, "text": " if this inside the softmax is is low rank can you make a claim that the" }, { "start": 2336.16, "end": 2344.7599999999998, "text": " entire softmax then is also low rank and it's not entirely clear because because" }, { "start": 2344.76, "end": 2351.6000000000004, "text": " oh yes we've done this so you can see right here that the softmax we have" }, { "start": 2351.6000000000004, "end": 2355.32, "text": " actually done the softmax of a low rank matrix so we have already seen the low" }, { "start": 2355.32, "end": 2360.76, "text": " rank matrix itself and how it immediately snaps to the to the upper" }, { "start": 2360.76, "end": 2371.2400000000002, "text": " axis after 128 now if we do the same thing for the softmax of that and we" }, { "start": 2371.24, "end": 2379.8799999999997, "text": " probably have to take away some of these dimensions the first few let's go with" }, { "start": 2379.8799999999997, "end": 2388.72, "text": " let's go to dimension 100 and look from there okay same thing okay that's pretty" }, { "start": 2388.72, "end": 2399.68, "text": " good I did not expect that hi there so this is Yannick from the future I've" }, { "start": 2399.68, "end": 2403.8799999999997, "text": " realized I've been an idiot in how I constructed these low rank matrices" }, { "start": 2403.8799999999997, "end": 2410, "text": " right here by multiplying MT by itself of course what's a better way to do it" }, { "start": 2410, "end": 2416.3999999999996, "text": " is to construct two independent 128 dimensional matrices like these two" }, { "start": 2416.3999999999996, "end": 2421.2799999999997, "text": " sub slices of M right here and then multiplying those together and looking" }, { "start": 2421.2799999999997, "end": 2429.3599999999997, "text": " at the SVD and you as you can see right here so the softmax of this is now not" }, { "start": 2429.36, "end": 2436.56, "text": " of this super low 
rank anymore it's still low rank but it's not not very it's" }, { "start": 2436.56, "end": 2442.92, "text": " not like hard low rank so if I just look at the matrix without the softmax then" }, { "start": 2442.92, "end": 2448.6400000000003, "text": " you can see it has a very peak that by at the 128 which gives us the indication" }, { "start": 2448.6400000000003, "end": 2455.92, "text": " it's actually 128 rank which we already knew but if we now introduce the softmax" }, { "start": 2455.92, "end": 2463.28, "text": " then you can see that this vanishes and it's no longer 128 dimensional and it's" }, { "start": 2463.28, "end": 2469.64, "text": " only approximately low rank as you can see all right back to Yannick in the" }, { "start": 2469.64, "end": 2475.84, "text": " past who is wholly surprised that the two that if you multiply MT by itself" }, { "start": 2475.84, "end": 2483.32, "text": " that that will give you back the the exact same thing all right so did we try" }, { "start": 2483.32, "end": 2489.76, "text": " this before maybe we did okay but the mathematical difficulty still remains" }, { "start": 2489.76, "end": 2495.04, "text": " and their main thing here is so they have a first first version where they" }, { "start": 2495.04, "end": 2502.1600000000003, "text": " pretty much plug it into the JL again and they they get out this K is the K" }, { "start": 2502.1600000000003, "end": 2506.92, "text": " needs to be by log n but they say this result does not utilize the low rank" }, { "start": 2506.92, "end": 2511.88, "text": " property of matrix A and the resultant K has a dependency on sequence length n" }, { "start": 2511.88, "end": 2522.6400000000003, "text": " and then in the appendix they finally go through the math to show that now if" }, { "start": 2522.6400000000003, "end": 2532.2400000000002, "text": " they choose E and F like this they can actually pull out this and show that" }, { "start": 2532.24, "end": 2542.9199999999996, "text": " the K is where you have it the decay is independent of n like this and I think" }, { "start": 2542.9199999999996, "end": 2551.8399999999997, "text": " the main the main step in this proof is the step B here where they say uses the" }, { "start": 2551.8399999999997, "end": 2556.12, "text": " fact that the exponential function is Lipschitz continuous in a compact" }, { "start": 2556.12, "end": 2562.6, "text": " region then we can choose a small enough Delta such that the as you can see here" }, { "start": 2562.6, "end": 2568.3199999999997, "text": " this now directly relates to this projection matrix within the exponential" }, { "start": 2568.3199999999997, "end": 2572.92, "text": " function to the projection matrix out of the exponential function so you can" }, { "start": 2572.92, "end": 2577.8399999999997, "text": " basically say that if I project first and then use the exponential function" }, { "start": 2577.8399999999997, "end": 2582.52, "text": " that's not too different than if I first use the exponential function and then" }, { "start": 2582.52, "end": 2591.52, "text": " project okay so that's the that's the sort of of of catch here now they only" }, { "start": 2591.52, "end": 2596.36, "text": " do this for the exponential function not the actual softmax as you can see here" }, { "start": 2596.36, "end": 2600.96, "text": " throughout they do it to the exponential function and also here in their" }, { "start": 2600.96, "end": 2607.04, "text": " statements the softmax isn't the exponential function the softmax is the" }, { "start": 
2607.04, "end": 2611.92, "text": " exponential function divided by the sum of the exponential functions but I" }, { "start": 2611.92, "end": 2615.88, "text": " believe that this generalizes straightforwardly" }, { "start": 2615.88, "end": 2623.8, "text": " alright so for given choices of Delta and K they have shown that the Lin" }, { "start": 2623.8, "end": 2629.52, "text": " former in fact can do in a linear fashion what a transformer can do in a" }, { "start": 2629.52, "end": 2634, "text": " quadratic fashion and they are not too far off" }, { "start": 2634, "end": 2640.7200000000003, "text": " ok that's that's their point right here the results on these benchmarks" }, { "start": 2640.72, "end": 2645.52, "text": " sorry let's first go to the perplexities in language modeling so they show right" }, { "start": 2645.52, "end": 2650.08, "text": " here that they pretty much can keep up with the standard transformer as you can" }, { "start": 2650.08, "end": 2655.3599999999997, "text": " see here so with the standard transformer they can keep up here now" }, { "start": 2655.3599999999997, "end": 2663.24, "text": " think that this the the computation is n times K ok so something like this Lin" }, { "start": 2663.24, "end": 2670.56, "text": " former with K cost 256 will only so instead of n by n it's n times K it" }, { "start": 2670.56, "end": 2678.64, "text": " won't save you too much in that case but it's it's not too surprising that in" }, { "start": 2678.64, "end": 2683.4799999999996, "text": " fact you have the same performance because probably the standard transformer" }, { "start": 2683.4799999999996, "end": 2688.3599999999997, "text": " is distributed over more heads than two so the information necessarily has a" }, { "start": 2688.36, "end": 2694.2000000000003, "text": " lower dimensionality 10 to 56 one thing I want to draw attention to though here" }, { "start": 2694.2000000000003, "end": 2700.96, "text": " is that you can see that here it's not really done learning yet and as you can" }, { "start": 2700.96, "end": 2707.1600000000003, "text": " see the standard transformer sort of surpasses all of these models towards" }, { "start": 2707.1600000000003, "end": 2713.6400000000003, "text": " the end I wonder I wonder what happens I wouldn't be surprised if they end up" }, { "start": 2713.64, "end": 2719.04, "text": " sort of at the same place but I wonder if these diverge even more right here" }, { "start": 2719.04, "end": 2727.52, "text": " after that they also compare with a higher sequence length and the standard" }, { "start": 2727.52, "end": 2731.2799999999997, "text": " transformer outperforms the Lin former but of course the point here is that the" }, { "start": 2731.2799999999997, "end": 2739.72, "text": " Lin former is much much much faster and can keep up now also the scale here of" }, { "start": 2739.72, "end": 2745.7599999999998, "text": " the perplexity you see these are percentage points in perplexity but I" }, { "start": 2745.7599999999998, "end": 2751.52, "text": " can't actually tell if that matters or not I think I think in the original" }, { "start": 2751.52, "end": 2755.7999999999997, "text": " transformer paper the perplexities hovered between like three point" }, { "start": 2755.7999999999997, "end": 2762.12, "text": " something and five point something so this might actually be sort of" }, { "start": 2762.12, "end": 2768.04, "text": " significant differences and I'm not sure they investigate different methods of" }, { "start": 2768.04, "end": 2773.24, "text": " 
sharing these weights of these of these projections and they seems like they" }, { "start": 2773.24, "end": 2776.36, "text": " don't find real differences but I don't want to go into that because this video" }, { "start": 2776.36, "end": 2781.92, "text": " is already really long and then they look at what happens if they up the" }, { "start": 2781.92, "end": 2786.94, "text": " sequence length that they put into the Lin former and you can see that the Lin" }, { "start": 2786.94, "end": 2792.88, "text": " former can deal with higher sequence lengths and arrive at the same" }, { "start": 2792.88, "end": 2798.6400000000003, "text": " perplexities though again I don't know how much different that is and" }, { "start": 2798.6400000000003, "end": 2806.08, "text": " the scale here is larger than before but yeah so how does this fare on these" }, { "start": 2806.08, "end": 2811.32, "text": " benchmarks where you first train a transformer with pre training with" }, { "start": 2811.32, "end": 2817.2000000000003, "text": " language modeling and then you use it to do certain NLP tasks and here you can" }, { "start": 2817.2000000000003, "end": 2822.46, "text": " see that the Lin former is on par in some of these tasks with the original" }, { "start": 2822.46, "end": 2829.32, "text": " transformer but also you can see like a pattern where you can see pretty wild" }, { "start": 2829.32, "end": 2836.4, "text": " results in that you know sometimes the the Lin former here will be better than" }, { "start": 2836.4, "end": 2842.44, "text": " this but then also variants of the Lin former will be worse and they'll even be" }, { "start": 2842.44, "end": 2846.64, "text": " worse than this and sometimes they'll be better sometimes this Lin former is good" }, { "start": 2846.64, "end": 2854.2799999999997, "text": " and sometimes the original model is the best so this sort of points to you can" }, { "start": 2854.2799999999997, "end": 2860.7999999999997, "text": " make the general claim that the Lin former doesn't destroy your your gains" }, { "start": 2860.7999999999997, "end": 2867.2, "text": " but also it's not like a a better model it's simply a faster model that in some" }, { "start": 2867.2, "end": 2873.04, "text": " tasks can keep up with the original model and they show that of course this" }, { "start": 2873.04, "end": 2879.7599999999998, "text": " is the real deal here that as you go up in length the performance gains and also" }, { "start": 2879.7599999999998, "end": 2886.7599999999998, "text": " sorry this this way the performance gains and the memory gains that you get" }, { "start": 2886.7599999999998, "end": 2892.08, "text": " by the Lin former are dramatic of course the longer and you go and to the lower" }, { "start": 2892.08, "end": 2896.2799999999997, "text": " dimension you project the more these gains are but of course the more" }, { "start": 2896.2799999999997, "end": 2900.92, "text": " performance you're going to lose potentially hello again Yannick from the" }, { "start": 2900.92, "end": 2904.56, "text": " future just wanted to draw your attention on this beautiful broader" }, { "start": 2904.56, "end": 2911.28, "text": " impact statement in this paper saying our work focuses on making transformers" }, { "start": 2911.28, "end": 2915.7200000000003, "text": " more efficient everything cool potential positive in spec impacts of efficient" }, { "start": 2915.7200000000003, "end": 2919.48, "text": " transformers that's pretty cool it also has potential impact on training" }, { "start": 2919.48, 
"end": 2924.12, "text": " transformers on images since we can support very long sequences very cool" }, { "start": 2924.12, "end": 2929.2400000000002, "text": " furthermore there are positive environmental benefits very cool I mean" }, { "start": 2929.24, "end": 2934.9199999999996, "text": " these are all very cool things they say as such we see no immediate negative" }, { "start": 2934.9199999999996, "end": 2940.04, "text": " ethical or societal impacts of our work beyond what applies to the core building" }, { "start": 2940.04, "end": 2947.7599999999998, "text": " blocks of deep learning do better now this this honestly I agree with them" }, { "start": 2947.7599999999998, "end": 2953.4799999999996, "text": " right I completely agree with them that this is sort of a good thing you might" }, { "start": 2953.4799999999996, "end": 2957.52, "text": " trade off you know some accuracy might some make some approximations but you" }, { "start": 2957.52, "end": 2963.36, "text": " will get a much faster model and this model has any model can be used you" }, { "start": 2963.36, "end": 2969.2, "text": " know for things and and that they now have to pull out of there out of their" }, { "start": 2969.2, "end": 2980.16, "text": " but some way in in in over five steps of intermediate layers this could be used" }, { "start": 2980.16, "end": 2987.96, "text": " for bad it just seems ridiculous so good on them for defying the please also" }, { "start": 2987.96, "end": 2994, "text": " think about negative impacts right here all right back to back back to past" }, { "start": 2994, "end": 3000.24, "text": " Yannick all right this was the Lin former paper I hope this somewhat makes" }, { "start": 3000.24, "end": 3007.16, "text": " sense made sense to you I had to read it multiple times for it to make sense to" }, { "start": 3007.16, "end": 3011.3999999999996, "text": " me but ultimately it's all about the fact that you have these multiple heads" }, { "start": 3011.3999999999996, "end": 3016, "text": " and therefore your information is probably lower dimensional and you can" }, { "start": 3016, "end": 3022, "text": " abuse that and to just calculate in this lower dimension all right I'll see you" }, { "start": 3022, "end": 3037.84, "text": " next time bye bye" } ]
G3pOvrKkFuk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Code] PyTorch sentiment classifier from scratch with Huggingface NLP Library (Full Tutorial)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "code", "pytorch", "bert", "pretrained", "lightning", "live", "tutorial", "pip", "nlp", "transformers", "tokenizers", "sequence", "sentiment", "imdb", "dataset", "full", "github" ]
Huggingface released its newest library called NLP, which gives you easy access to almost any NLP dataset and metric in one convenient interface. We will combine this with a BERT model from Huggingface's Transformers library to build a sentiment classifier for IMDB. OUTLINE: 0:00 - Intro 1:30 - Boilerplate 3:20 - PyTorch Lightning Module 9:50 - Load Dataset 12:15 - Tokenization 20:50 - Torch Tensors 25:50 - Data Loader 28:00 - Create BERT Model 32:00 - Implement Validation and Train Step 47:00 - Run & Recap 50:20 - Epilogue My Code: https://github.com/yk/huggingface-nlp-demo NLP Library: https://github.com/huggingface/nlp Tutorial Colab: https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb Transformers Library: https://github.com/huggingface/transformers Pytorch Lightning: https://github.com/PyTorchLightning/pytorch-lightning Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
So Hugging Face just released this NLP library, and it's pretty cool because it gives you access to about a hundred NLP datasets and ten evaluation metrics, pre-packaged. Knowing Hugging Face, this is going to be a breeze to work with. So what I thought we'd do is try it out. I have not used it yet, and it's been a while since I've used any Hugging Face stuff. The plan is to load up the IMDB dataset and then use a BERT model to build a sentiment classifier on top of it, using PyTorch, specifically PyTorch Lightning. All of that combined, from scratch, and basically if I can do it, then so can you. We're going to make some mistakes and have to look at the documentation a bit, but that's the process. First of all, if you like content like this, let me know if you're not subscribed, and let me know in the comments if you have any criticism or tips; I'm always happy for Vim tips, honestly. So I have a pretty empty git repo here; it has a .gitignore, but that's about it. We'll just dive right in, start up Vim, and make a file. First, some boilerplate code. I'm terrible at talking and coding at the same time, but here goes. I like to use the absl library, and as you can see I'm using the TabNine completion engine with CoC in Neovim, which is absolutely great. So we'll import absl's app, flags and logging, we'll need torch, we'll need pytorch_lightning as pl, we'll need the nlp library of course, since that's what we're testing, and we'll need the transformers library. Now I know Hugging Face has a separate tokenizers library too, but there are tokenizers in the transformers library already, and we'll keep it light like this. Maybe numpy, maybe not; let's see. We'll have the FLAGS object here (maybe we'll define some flags later) and a main function that for now just logs "hello", and then we run main. That's our boilerplate, so let's quickly try it out just to see whether it works. There we go: hello. Fine. So where do we go from here? In PyTorch Lightning, you build a model class; we'll build an IMDBSentimentClassifier that extends the LightningModule from PyTorch Lightning. You need a few things in a LightningModule. First you need the __init__, and we'll just do a very basic one that calls super. You also need a forward method, since this is a module: in forward you get a batch and have to do something with it. We also need a training_step method, which gets a batch and a batch index and has to output some kind of loss, some kind of training signal. Then we'll need a train_dataloader. All of this you can look up in the PyTorch Lightning documentation: you implement these methods and it does the rest for you. It runs the whole training loop, handles GPUs and whatnot, loops over epochs; all of that is basically taken care of for you when you use PyTorch Lightning. The last thing we need is a prepare_data method; let's put that further up. That method is optional, but it gets called once at the beginning, and that's going to be pretty useful for us.
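To make this concrete, here is roughly what the skeleton looks like at this point. This is a minimal sketch in the spirit of what's described above, not the exact file: the class name and method stubs follow the description, and API details may differ slightly between library versions.

```python
# Boilerplate sketch: absl entry point plus an empty LightningModule.
from absl import app, flags, logging

import torch
import pytorch_lightning as pl
import nlp
import transformers

FLAGS = flags.FLAGS


class IMDBSentimentClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()

    def prepare_data(self):
        # Called once before training starts; the dataset will be built here.
        pass

    def forward(self, batch):
        pass

    def training_step(self, batch, batch_idx):
        # Must return the training loss (or a dict containing it).
        pass

    def train_dataloader(self):
        pass


def main(_):
    logging.info('hello')


if __name__ == '__main__':
    app.run(main)
```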
I have already downloaded the weights of a BERT model and the dataset, so we don't need to sit through that. That's about it; maybe I've forgotten something, so let's look at the PyTorch Lightning examples. There's a domain example for ImageNet, and it has way more methods than we need, but down at the bottom you can see the pattern: you instantiate your model, you instantiate this Trainer, and then you call fit on it with the model. Let's copy that down here for our IMDBSentimentClassifier. We won't have these hyperparameters; those will be our flags. For the trainer, the root directory, let's call that 'logs', and we'll give it a GPU if CUDA is available, else zero. Then we'll make a flag for the epochs. We don't need the rest of this, and at the end we call fit on the model. If we already had a classifier, this would run. Now, what I like to do is use this module called sh, which gives you easy shell commands from Python: at the beginning of each run, whenever the file loads, I remove the logs folder and make it again, so I always start with a clean logs folder. If we run this right now, it gives us an error: we don't have an epochs flag yet. So we define an integer flag; we'll go with 10 epochs for now. We also haven't configured our optimizers: in PyTorch Lightning you need a configure_optimizers method, and I'll just copy that from an example. I'm going full Siraj here, people. I like SGD for this; it tends to work well in neural networks. We don't need the scheduler or any of that; we just return the SGD optimizer over the parameters, with a flag for the learning rate and a flag for the momentum, and no weight decay. We'll make those floats and start off with something reasonable. I never put help strings if the description is clear: only losers need help. Don't kid yourself; if you put the help string, you need help. That's how it works. I just don't like that this library forces you to put a help string, because it somehow makes me feel bad. It's very opinionated; it basically insists you should put something there. So we have this, but when we run it, we have nothing to optimize yet. First of all, we need the model. Do we need to prepare the data first? Let's check. I have a short snippet that embeds an IPython shell; I plug it in anywhere to see whether I reach that point, and I do reach prepare_data. So let's take care of the dataset first. This nlp library, as you can see right here in the usage, lets you load a dataset with the appropriate split, and it just gives it back; if you don't have it locally, it downloads it, which is pretty cool. I've already checked out what they have, and they have the IMDB dataset. In the split argument we can say: give me the train split, and as a string you can even say, give me the first 5% of the train split. This is just my laptop here, so we won't be able to train a super high-grade model; we'll go with 5% of the train split as our train dataset. Now if we run until here (if you had not downloaded it, it would download it now) and look at the train dataset, I hope you can see this: it says it's a Dataset and it has 1250 rows.
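Sketched out, and continuing the skeleton above, the training scaffold might look like the following. The flag names and default values are my own placeholder choices, and Trainer argument names have drifted a bit across PyTorch Lightning versions, so treat the exact keywords as an approximation.

```python
import sh

# Wipe the logs folder on every run, as described above.
sh.rm('-rf', 'logs')
sh.mkdir('logs')

flags.DEFINE_integer('epochs', 10, '')   # the library insists on a help string
flags.DEFINE_float('lr', 1e-2, '')       # placeholder default, my own choice
flags.DEFINE_float('momentum', 0.9, '')  # placeholder default, my own choice


class IMDBSentimentClassifier(pl.LightningModule):
    def configure_optimizers(self):
        # Plain SGD over all parameters: no scheduler, no weight decay.
        return torch.optim.SGD(
            self.parameters(), lr=FLAGS.lr, momentum=FLAGS.momentum)


def main(_):
    model = IMDBSentimentClassifier()
    trainer = pl.Trainer(
        default_root_dir='logs',
        gpus=(1 if torch.cuda.is_available() else 0),
        max_epochs=FLAGS.epochs,
    )
    trainer.fit(model)
```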
Each entry has a text and a label, and you can just index this like a dataset: that's the first sample, and the label is 1 here, which means we should predict that this is a good sentiment. It's either 1 or 0, I think; either good sentiment or bad sentiment. So our first task is to get this into a form that BERT can consume. How do we do this with this nlp library? That's the pretty cool part. Right now this is text, and for NLP models we need to map text to token IDs: we need to tokenize, and we need to map the tokens to IDs. Hugging Face of course has very nice libraries for that; they're called tokenizers, and we'll take one from the transformers library. I think it's called BertTokenizer, the one the BERT models can use. Let's check the documentation: there's a BertTokenizer, and a BertTokenizerFast. We'll take the fast one; come on, be risky. I think we can construct it with from_pretrained; they have these from_pretrained methods, and we put the model name there. I want to make that a flag so I'm not bound to a particular model; we'll call the flag 'model' and set it to bert-base-uncased. So we have a tokenizer now, and we can tokenize these things, every entry in the dataset. In a classic setting we'd have to write a loop for that, but with this nlp library it's pretty cool that we can just map a tokenizer function across the training dataset. How do we do that? The tokenizer, I'm pretty sure, has an encode method or something. Let's look: there's forward, but that's the BERT model; where's the tokenizer? Right here, it has this encode method. Where's the definition of that? Can we click on it? This encode takes the text plus a bunch of other arguments, I hope you can see this: whether or not to add the special tokens, the max length (this is going to be pretty important), and pad_to_max_length, since we want everything to be of the same length. If you apply this encode function to the text of a sample (let's just take the first sample's text entry), what you get back is a list of these IDs, which is exactly what we want. The 101 here is the CLS token that BERT expects at the start, and then it's just the word pieces. You could instead call tokenize, I think, which just gives you the word pieces, not the IDs yet; so those are the word pieces, the tokenized text, and the encode function then maps them to IDs such that BERT can consume them. Now this nlp library has this convenient function called map on its datasets, so what we'll do first is define a tokenize function that takes in a single sample and runs the tokenizer's encode function on the text entry. We've already seen what we need: add_special_tokens=True (this is cool), max_length (we'll make a flag, sequence length or something), and pad_to_max_length=True, so every single sample will be of the same size. There are a number of things you can return from this function; one way is to return the original sample and just set a new attribute on it, I think. So we set a new attribute on the original sample right here, and let's format this a bit nicer.
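Put together, the exploration so far might look like the snippet below. Note that the nlp library has since been renamed to datasets, and pad_to_max_length was later superseded in transformers, so take these calls as a sketch of the old API described here.

```python
# Sketch of the dataset and tokenizer exploration described above.
dataset = nlp.load_dataset('imdb', split='train[:5%]')  # first 5% of train
print(dataset)     # Dataset(features: {text, label}, num_rows: 1250)
print(dataset[0])  # {'text': '...', 'label': 1}

# 'bert-base-uncased' is the value we give the --model flag.
tokenizer = transformers.BertTokenizerFast.from_pretrained('bert-base-uncased')

print(tokenizer.tokenize(dataset[0]['text'])[:5])  # raw word pieces
print(tokenizer.encode(                            # word pieces -> IDs
    dataset[0]['text'],
    add_special_tokens=True,  # prepends [CLS] (id 101), appends [SEP]
    max_length=32,
    pad_to_max_length=True,
))
```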
You see, we have this tokenize function: it takes a sample, takes the text, tokenizes and encodes it, puts the result on the sample as the new attribute input_ids, and returns the sample again. Now we can map this function across the training dataset: it goes over the training dataset and does this for each entry, so after this operation we should have a dataset where each sample not only has a text and a label, but also an input_ids attribute. We don't have this sequence length flag yet, so let's add it; we'll go with 32, since this is just my laptop and 32 tokens per sample should be fine. But now it says: can't pickle tokenizer objects. What it tries to do here is parallelize this thing. If we look at the nlp documentation (naming, splits, builder, arrow dataset, and map right here), this map function, I think, will try to multiprocess, and therefore it needs to pickle everything that goes into the function, which means this tokenizer needs to be pickled. Maybe there's a way around it; one thing we can try is another tokenizer, maybe that one can be pickled. This dill library they use for pickling is pretty good, but it can't pickle everything, and indeed the other tokenizer can actually be pickled. I'm not entirely sure what you'd have to do otherwise, honestly, because I don't know the library, but what you could do is make a thread- or process-local variable for the tokenizer, basically a singleton per process, and then call a function in here that returns the already instantiated object, if you really wanted to multiprocess all of this. Anyway, we have this train dataset now, and you can see the schema has been extended: there is now text, there is label, and there is input_ids, which is a list of int64. That's pretty cool. Now, this is still a Python list; I know the tokenizers can already output PyTorch tensors, but that's kind of cheating, since we want to use this library. The dataset has a method called set_format, where you say type='torch', and I think you need to say which columns you want. Maybe we should get all columns; can we output the text too? You can select which of the columns you want in each sample, so let's check it out again. For now, as long as we're debugging here, I like to have a debug flag; that's usually one of the first flags I define, a boolean 'debug'. What this does: whenever it's active, I try to be as fast as possible. The PyTorch Lightning trainer actually has this fast_dev_run argument, which does something similar, but I can push it a bit harder with my own debug flag: we'll just load batch_size many samples if we're in debug mode, and 5% of the train split otherwise. We don't actually have a batch size flag yet, and we're surely going to need one at some point, so let's go with a batch size of 8, just because we can. Ah, and this split specification needs to be a string, of course. Shag-a-boom! Cool, so it says it's a fast dev run, and when we run in debug mode it just loads very few data points, so this map function doesn't take this whole while.
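As a sketch, the data-preparation path at this point could look like this. The flag names (seq_length, batch_size, debug) are my own choices based on the description above, and I'm using the non-fast BertTokenizer here, since the fast one could not be pickled by map.

```python
flags.DEFINE_boolean('debug', False, '')
flags.DEFINE_integer('batch_size', 8, '')
flags.DEFINE_integer('seq_length', 32, '')

# Non-fast tokenizer: this one can be pickled by the map machinery.
tokenizer = transformers.BertTokenizer.from_pretrained(FLAGS.model)

def tokenize(x):
    # Set a new attribute on the sample and return it, as described above.
    x['input_ids'] = tokenizer.encode(
        x['text'],
        add_special_tokens=True,
        max_length=FLAGS.seq_length,
        pad_to_max_length=True,
    )
    return x

# Load few samples in debug mode, 5% of the split otherwise.
split = f'train[:{FLAGS.batch_size}]' if FLAGS.debug else 'train[:5%]'
dataset = nlp.load_dataset('imdb', split=split)
dataset = dataset.map(tokenize)
dataset.set_format(type='torch', columns=['input_ids', 'label'])
```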
Maybe there's a way you can stream that, I don't know; for now this is pretty good. If we look at the train data set again, it has the same entries — input_ids is still a list of int64s — but if you index the zeroth data point now, it crashes, because it tries to convert everything to PyTorch tensors and it can't convert the string. So we'll say we just want the columns input_ids and label. Label — can't spell. Okay, let's try again. Right here you see that what we get out now are actual PyTorch tensors, not Python lists anymore. So this is now one-to-one — with duck typing, maybe it's even subclassed — a PyTorch data set, which we can load into a data loader. It's a perfectly fine data set, so we can say self.train_dataset is this train data set. Now we want to do the same for the test split, but to do that we'd have to write all of this code again, which I'm not really in the mood for, so we'll just wrap it: we'll create a function prepare_dataset that takes in the split name, uses the split name in the load call, and returns the data set. Now we can say self.train_dataset and self.test_dataset are prepare_dataset of "train" and "test". Excellent, so now we have a training data set and a testing data set. Here in train_dataloader we need to take the training data set and construct a data loader from it. This is super easy, so we'll do it in one line. What does the data loader need? It needs a data set — prepare_data is called at the beginning, so we have this data set right here — and I think we can pass a batch size, for which we already have a flag, and there's a drop_last, yes, which we set to true because we only want full batches during training, and we'll also shuffle. The same goes for the validation data loader for our validation set. In PyTorch Lightning you have train, validation, and test, and test is really only for the final, final test; the test split we have here would be called the validation set in PyTorch Lightning terms. Here shuffle is false; we don't particularly want to shuffle. Okay, so we have a training data loader and a validation data loader. Now what do we need? We have the optimizer, very good. All we need to do now is actually pass our data through the BERT model. This forward method we're just going to leave not-implemented for a second — maybe we can implement it.
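So inside the Lightning module, the data side comes out roughly like this — again a sketch; the 5% slice and batch size 8 stand in for the flags:

import nlp
import pytorch_lightning as pl
from torch.utils.data import DataLoader

class IMDBSentimentClassifier(pl.LightningModule):
    def prepare_data(self):
        # Called once before training; builds both splits the same way.
        self.train_dataset = self.prepare_dataset("train")
        self.test_dataset = self.prepare_dataset("test")

    def prepare_dataset(self, split_name):
        ds = nlp.load_dataset("imdb", split=f"{split_name}[:5%]")
        ds = ds.map(tokenize)  # the tokenize function sketched above
        # Emit torch tensors, restricted to the columns the model needs.
        ds.set_format(type="torch", columns=["input_ids", "label"])
        return ds

    def train_dataloader(self):
        return DataLoader(self.train_dataset, batch_size=8, drop_last=True, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.test_dataset, batch_size=8, drop_last=False, shuffle=False)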
Okay, so we do need a model, as you can see right here with this batch. If you sometimes don't know what to do, you just go to where the error points you: the optimizer complains about an empty parameter list — we don't have parameters yet. All right, so we go up here and make a model. We need to actually build the BERT model right here; from transformers we can use the BERT classes. Now, they have a lot of BERT variants, and as you know BERT is just an encoder, so we need a classifier on top of BERT — but they've already done this. They have a bunch of different BERT configurations, and the one we're looking for here is BertForSequenceClassification: "Bert Model transformer with a sequence classification or regression head on top." This is exactly what we need, a classifier on top of BERT, and I think we can also load it with from_pretrained and just put in the same model name. So this is our model, easy as that. What do we do with this BERT when we put in data? What happens? For that, let's quickly look at the forward method. We can input the input_ids, a batch-size-by-sequence-length tensor; we can input the attention_mask, which basically tells it where there's padding and where there isn't — "mask to avoid performing attention on padding tokens; mask values selected in {0, 1}: 1 for tokens that are not masked, 0 for tokens that are masked." Then we can input the token_type_ids, which we don't have here; we just have one sentence, but usually BERT can take two different segments, like a question and a paragraph, or a first and second sentence. Position IDs are optional, blah blah blah, none of that. We could also input the labels — those are optional and it would already compute a loss for us, but that's almost cheating, so let's just focus on putting in the input_ids. Since we truncate our long texts to 32 tokens, I think we don't need to worry about masking right here — actually, we can do it: you'd input a mask that is 1 wherever your tokens are not pad tokens, and the pad token ID in BERT is zero, so the mask should basically be whatever is non-zero. Maybe your model also learns to ignore the pad tokens by itself, I might be wrong here. So, in your forward pass — actually, let's go to the training step and put something there. If you didn't have BERT downloaded, it would download BERT right here, but since I have it, you can see this is the smaller BERT model. PyTorch Lightning — I don't have enough space in my console right here — would give you a nice overview of your model: how many parameters it has, what kind of layers, and so on. We also need a validation_step, since we have a validation data loader, and we need the validation_epoch_end function. Usually in training you don't really care about epochs too much, because you just go mini-batch after mini-batch, but in validation what you want is one single metric across your entire test — or rather validation — data set.
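Continuing the class above, the model setup and forward pass look something like this — a sketch assuming the tuple-returning call convention of the transformers version used here:

from transformers import BertForSequenceClassification

# In __init__ (sketch): pre-trained encoder plus a fresh two-way classification head.
self.model = BertForSequenceClassification.from_pretrained("bert-base-cased")

def forward(self, input_ids):
    # Attend only to real tokens; BERT's pad token ID is 0.
    mask = (input_ids != 0).float()
    logits, = self.model(input_ids, attention_mask=mask)  # one-element tuple
    return logits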
Therefore, in the validation step you just output local, per-batch things, and then in the epoch-end function you aggregate them into one big number. I'm pretty sure we're going to end up in the validation step first, because especially with this debug run it basically tries to run a validation at the very start of training. So we can look at a batch right here. What's a batch? The batch seems to be a dictionary, and if you look at its keys, we have label and input_ids. Okay, that's pretty cool. input_ids gives us a tensor of shape 8 — our batch size — by 32 — our sequence length — and we should be able to pretty much feed that straight into the BERT model we created. Boom. And what do we get out? A tuple, and the first entry looks like logits. Let's check the shape: 8, our batch size, by 2, the logits — one for the negative class and one for the positive class. This we can basically feed into a cross-entropy loss, given our labels. We also have our label here, and the labels are all ones. Hmm — is this maybe sorted? Is the data set sorted into good and bad examples? Because that would be bad. In any case, what do we have to do? In the forward method we get the input_ids and run them through our model, and we can actually construct a mask here: the mask is 1 wherever the input_ids are not zero. This attention mask needs to be a float tensor, so we cast it as a float tensor, cool, right like this. Our logits are going to be that — a tuple with one entry, so the comma here is important — and we return the logits. That's our forward function. So in the validation and training steps, the first thing we do is call this forward function with the input_ids, which of course come from our batch; those are our logits. Then in validation, what we want first of all is to compute our loss, so we have to construct it up here in the init (let me fold this prepare_data region away in the editor). The loss is going to be a cross-entropy loss — yes, that exists — with reduction: I like to put reduction='none'. There's a deprecated reduce argument, I think, and a reduction argument where you can put 'mean' or something; I like not to reduce the loss at first, because then I can use the same thing for validation and training. So in the validation step I just compute my loss right here with self.loss — and we'll have to cheat a bit and look up the cross-entropy loss docs. Come on, where is it... CrossEntropyLoss, it takes — yes, reduction, ta-da. The inputs to the function we construct are first the input of shape (N, C) and then the targets — so first the logits, then the labels. "This criterion combines log-softmax and NLL loss in one single class." Nice, nice, okay, cool: first logits, then labels. That's our loss. If we check what our loss comes out to, it's probably going to be a vector of size eight, because we have reduction none.
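In code, that step is roughly this — the accuracy line anticipates what comes next in a second:

import torch

# In __init__ (sketch): per-sample losses, so the same criterion serves
# both training and validation.
self.loss = torch.nn.CrossEntropyLoss(reduction="none")

def validation_step(self, batch, batch_idx):
    logits = self.forward(batch["input_ids"])
    loss = self.loss(logits, batch["label"])             # shape (batch_size,)
    acc = (logits.argmax(-1) == batch["label"]).float()  # 1.0 or 0.0 per sample
    return {"loss": loss, "acc": acc}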
Loss — yes, a vector of size eight, very nice. So we can basically return this loss just as-is, and then in validation_epoch_end the outputs argument is a list where every entry is the result of one of these validation steps for one batch, so we can aggregate those. losses will concatenate them — since they come in chunks of eight — along dimension zero, and then we can calculate the mean. Oh no, we need to do more: we also want to know the accuracy. The accuracy for each sample is whether or not logits.argmax is equal to the label, so it's either one or zero, and we want that as a float. So from the validation step, let's output a dictionary with loss and accuracy. All right, excellent. Then we aggregate: I like to have a construction here like [o['loss'] for o in outputs] — each entry is now a dictionary — so our losses go through concatenation to the mean, and the accuracy is the same thing. Our output here is again a dictionary, and I think in PyTorch Lightning, if you output a validation accuracy as val_acc, it selects the model checkpoint according to it — but I'm not sure. Also in PyTorch Lightning, I can output this here, but if you include a 'log' entry, it forwards it to the logger — which we can actually use, and make a TensorBoard logger out of this. So what have we done? We have set up the validation step: PyTorch Lightning runs through the data loader, and for each batch we forward it through the BERT model to get our logits, compute our loss by the cross entropy of the logits and the labels, and also compute our accuracy by seeing how much the argmax of the logits agrees with the labels. Then we aggregate all of this over the entire epoch and output it. Now let's set up a logger. I think this goes in the trainer: pl.loggers — and I think there is a TensorBoardLogger, pretty sure that exists. Hmm, that's not the newest version of the docs — I hate these old docs, "latest", come on — loggers, TensorBoardLogger, right here, nice. Our save_dir is going to be called logs, the name we want is imdb, and there's also this version argument: if you don't pin version=0, it makes a new subfolder each run. I guess we delete the logs folder at the beginning anyway, so we don't have this problem, but I generally like to overwrite my logs and not make new runs — if you like something different, that's, you know, fine. All right, let's run this again. Cool — this is the BERT configuration we loaded — and then "no attribute logger": it's pl.loggers, plural. Okay, again loading the weights, very cool, blah blah blah, and we're in the IPython shell — do we have an IPython shell remaining? Only in the training step. So we're at the training step right here, and we can actually check: ah, now we have lightning_logs and logs folders — these appear to be our TensorBoard logs.
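The aggregation then looks roughly like this, assuming the dict-returning Lightning API of that era:

def validation_epoch_end(self, outputs):
    # outputs is the list of per-batch dicts returned by validation_step.
    loss = torch.cat([o["loss"] for o in outputs], 0).mean()
    acc = torch.cat([o["acc"] for o in outputs], 0).mean()
    out = {"loss": loss, "acc": acc}
    # Everything under the "log" key is forwarded to the logger.
    return {**out, "log": out}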
So we should be able to run TensorBoard on them later. Let's run it: logs... we don't have TensorBoard. Oh yeah, I uninstalled it because I was angry at it. Oh come on, what's going on... I should have TensorBoard somewhere, like in local/bin or something... no, it's not in local bin. Oh well, we'll figure out how to get a TensorBoard; maybe we need to install TensorFlow — well, that's going to take a while. Okay, back to the training step. In the training step we basically need to do the same as in the validation step: we forward our batch through the model. But here we don't need to compute an accuracy; we do need to compute a batch loss that we can backpropagate on. In the training step you can either specify how you backpropagate yourself, per se, or you can just output this 'loss' attribute and PyTorch Lightning will basically do the backpropagation for you. — We have the TensorBoard now, please... all right, there we go, and we can put this into a different window. — So this is running, and if everything goes correctly — port 6006 — boom, we have a TensorBoard. Okay, so in the training step we forward and calculate a loss for the batch: the same loss as before, but here we call mean on it, so this is the mean loss of this batch, and we can return that right here. In the training step you can also output a 'log' dictionary, and we'll output the loss again there — let's call it train_loss — so this also goes into TensorBoard. If we run this right now — we don't have an IPython shell anymore — simply by outputting this loss attribute we already instruct PyTorch Lightning to run backprop on this loss using the optimizer we defined, and by outputting the log we instruct it to put this into TensorBoard. So now we have a scalar entry, and you can see it contains... not just the validation — it contains everything, very cool, very very cool. So let's remove the debug flag and just see what happens. To recap — oh, there you go, epoch one, epoch two, go go go, very cool — what we've done is set up this PyTorch Lightning module. It needs a bunch of functions, but in the init we've basically just set up our BERT model from the Hugging Face transformers library; we've loaded in a pre-trained BERT model that we're going to fine-tune. The main things the PyTorch Lightning module needs are a training_step function, where you define what it should do with the data, and this data loader function. In the data loader function we've loaded up a data set and basically specified the batch size — this is very easy. Where does the data set come from? We do that in prepare_data, a special function in PyTorch Lightning that's basically called after the init but before anything else runs. Here we load the data set from the nlp library, and this is kind of the magic part: we specify the split and the size we want inside the string, and you can do this in percent or in a number of samples; I'm fairly sure you can do more things, but I haven't explored that. Then we run map on the data set in order to tokenize it, right here, using a tokenizer from the transformers library.
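Sketched out, the training step and the trainer wiring come to roughly this — hedged as before; names like "logs" and "imdb" just match the choices made above:

def training_step(self, batch, batch_idx):
    logits = self.forward(batch["input_ids"])
    loss = self.loss(logits, batch["label"]).mean()  # one scalar to backprop on
    # Returning "loss" makes Lightning run backprop; "log" goes to TensorBoard.
    return {"loss": loss, "log": {"train_loss": loss}}

# In the main function (sketch):
model = IMDBSentimentClassifier()
logger = pl.loggers.TensorBoardLogger(save_dir="logs", name="imdb", version=0)
trainer = pl.Trainer(logger=logger, max_epochs=10)
trainer.fit(model)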
We just run this encode function — this is very simple; how complicated was something like this just a year ago? Crazy. Then we call set_format, which tells the data set how it should output its samples: we tell it to please output torch tensors, and only these columns, and we make a train and a test data set from the train and test splits accordingly. That goes into a data loader, and PyTorch Lightning takes the data loader and runs training on it using this training_step function. In the training_step we get a batch, and in the batch there are the two columns we specified previously: input_ids and label. The input_ids we put through the forward function of the model itself — the forward constructs a mask and runs it through the model; we wouldn't actually need to construct the mask here, but okay — and we get back the logits of the classification. Then we run those through a cross-entropy loss, take the mean over the batch, and there we go. In the validation_step we do the same thing but also calculate the accuracy, and we don't take the mean — we keep it per sample, and only at the end do we concatenate everything and calculate the mean. If we've done everything correctly, you see right here our train loss goes down, down, down until it's almost zero, and the validation accuracy is super high — is this because all the labels are equal? Okay... yes, all the labels are equal, like, for real. So we'll do something else: we'll make an integer flag percent — this was 5, so far we loaded 5 percent of the data set — and load some more. This might take longer, but let's load 50 percent of the data set and just see what happens. (No, "present" — I called the flag present; very good, fixed.) So we'll load up 50 percent, do the same thing, and we can track in real time what happens in TensorBoard. "Unrecognized instruction format" — okay, can we put a format string in a format string? This is nasty... does it work? Please work... we can make a format string in a format string, absolutely bonkers. So it takes a little bit longer. You could actually speed up this mapping of the data set — maybe you can stream it, and I'm pretty sure you can do batched processing — but for our case right here this is enough. It was, what, about 1,200 samples at 5 percent, so now it should be something like 12,000. Let's continue with the recap of what we did here: we have the train data set and the validation data set, and in configure_optimizers you put an optimizer and, if you want to, a learning-rate scheduler. Then in the main function we instantiate this PyTorch Lightning module, specify a trainer — telling it the max epochs and so on — set up the logger, and just call fit on the model. This runs epochs of the model, and after each epoch it does a validation epoch and minimizes our loss. Very cool, very effective. So now, if you please... all right, here we go, this is my laptop training BERT. Oh, okay, we don't seem to make too much progress; let's check the TensorBoard. Training loss goes down... training loss goes to zero. Training loss goes down, training loss goes to zero. I have the sneaking suspicion that this is not entirely shuffled — is there a shuffle option somewhere? Because this seems a bit fishy.
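The nested-format-string bit, for the record — a sketch with flags.percent and flags.debug as the stand-in flag names:

# The split size now comes from a flag instead of being hard-coded:
# a sample count in debug mode, a percentage otherwise.
size = str(flags.batch_size) if flags.debug else f"{flags.percent}%"
ds = nlp.load_dataset("imdb", split=f"{split_name}[:{size}]")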
This IMDB data set just seems like it could use a bit of shuffling, because all the labels — yeah, the training loss instantly goes to zero. So maybe we can shuffle here? Let's look at the load_dataset function: batched... keep_in_memory... no, none of that. Okay, this doesn't seem to lead anywhere; let's dig into the nlp sources: data sets, builder, features, load_dataset, split... can we not shuffle anywhere? We'll search for "shuffle" in the builder. Okay, generate_examples — this function says the key will be hashed — so no, we are not able to shuffle this just like that. But I hope this at least gives you an impression; I guess if you were to take the full data set and map it, then it would actually work out. We'll just try again with 10 percent of the data, just to see the loss go down. TensorBoard — see, this is good now, because we always delete the logs folder, so we don't have any remnant old logs. All right, come on... so 10 percent should be, yeah, about this much. Train loss looking good, looking good so far... and look at this model — how large is bert-base-cased? Hugging Face pre-trained models... bert-base-cased, that's the one we have: 12 layers, 110 million parameters. Easy, easy... oh no, it's too large — training loss goes to zero again. So we've determined that this data set very probably isn't entirely shuffled; it might just have all the good labels first and all the bad labels last. Just to confirm, let's go with 100 percent, but let's put an IPython shell just before we map the data set, so we don't have to go through the whole mapping procedure — that would be here, right, yes. And can we not map this in batches? I might be doing something really wrong with this library, but I think that's how it should go: map has a batched argument, we could do batched=True, and then I think Hugging Face has a function to encode in batches. Batch encode... let's go through the tokenizer: build_inputs, create_token_type_ids, get_special_tokens_mask... where is encode... right here. Is there a batch_encode where you pass batches of these things? And what if we index the last sample, negative one — see, here the label is zero, which fits the sorted-data theory. I'm pretty sure — batched=True, let's do that, and in our function we'll call the batch encode. Let's see how fast this is with 100 percent. "Tokenizer has no attribute batch_encode" — but we just had it... oh, this might be batch_encode_plus. batch_encode_plus of texts or text pairs — okay, we need batch_encode_plus, but that gives us a dictionary, right, a dictionary with the field input_ids, so like this. How about that. We still might want to limit the actual data set once we have mapped it, because we need to train on it as well, but I just want to see how fast this batch encoding is. Yes, okay — reasonably fast, but it takes like three minutes, so we won't go on here. I will put this as-is on GitHub, and I hope you can profit from it in any way you want. By the way, the Hugging Face site has a tutorial on SQuAD where they also use the pre-defined metrics.
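The batched variant would look roughly like this — hedged as before, assuming batch_encode_plus returns a dict keyed by input_ids:

def tokenize_batched(samples):
    # With batched=True, samples["text"] is a list of strings; the encoder
    # returns parallel lists, of which we keep only input_ids.
    encoded = tokenizer.batch_encode_plus(
        samples["text"],
        add_special_tokens=True,
        max_length=32,
        pad_to_max_length=True,
    )
    return {"input_ids": encoded["input_ids"]}

train_ds = train_ds.map(tokenize_batched, batched=True)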
These are metrics like BLEU or ROUGE, I think, and you can just use them the same way you use these data sets, so it's very, very convenient to work with these things. NLP has come a long way. I absolutely invite you to check out the transformers, tokenizers and nlp repos, and with that, that's it for me, I think. I hope you enjoyed this; again, leave a comment if you see improvements, or if maybe I should edit this a bit more — I thought the entire process of just going through it and making mistakes would be entertaining to some. All right, bye bye.
[ { "start": 0, "end": 6.96, "text": " How did it do? So Hugging Face just released this NLP library right here and" }, { "start": 6.96, "end": 13.48, "text": " this is pretty cool because it allows you access to about a hundred NLP data" }, { "start": 13.48, "end": 19.3, "text": " sets and ten evaluation metrics pre-packaged. So knowing Hugging Face" }, { "start": 19.3, "end": 24.18, "text": " this is going to be a breeze to work with. So what I thought we would do is we" }, { "start": 24.18, "end": 28.52, "text": " would try to use this. I have not used this yet and it's been a while since" }, { "start": 28.52, "end": 34.44, "text": " I've used any Hugging Face stuff. So what we're trying to do is use this to load" }, { "start": 34.44, "end": 41.32, "text": " up the IMDB data set and then use a BERT model maybe to build a sentiment" }, { "start": 41.32, "end": 48.16, "text": " classifier on top of that using PyTorch, specifically PyTorch Lightning. So all" }, { "start": 48.16, "end": 54.239999999999995, "text": " of that combined from scratch and basically if I can do it then so can you" }, { "start": 54.24, "end": 58.440000000000005, "text": " and we're going to make some mistakes and have to look at the" }, { "start": 58.440000000000005, "end": 64.92, "text": " documentation a bit and so on but that's the process. So first of all if" }, { "start": 64.92, "end": 70.48, "text": " you like content like this let me know if you're not subscribed. Let" }, { "start": 70.48, "end": 74.52000000000001, "text": " me know in the comments if you have any sort of criticism or tips. I'm always" }, { "start": 74.52000000000001, "end": 81.38, "text": " happy for Vim tips honestly. So I have a pretty empty repo, git repo here. I have" }, { "start": 81.38, "end": 86.96, "text": " a git ignore but that's about it. So we'll just dive right in, start up Vim" }, { "start": 86.96, "end": 101.03999999999999, "text": " and let's make a file. So first some boilerplate code. I'm terrible at" }, { "start": 101.03999999999999, "end": 106.19999999999999, "text": " talking and coding at the same time but you know. So I like to use this APSAL" }, { "start": 106.19999999999999, "end": 111.19999999999999, "text": " library and I'm using as you can see I'm using the tab 9 completion engine" }, { "start": 111.2, "end": 119.84, "text": " with CoC with Neo Vim. This is absolutely great. We maybe need apps, app flags" }, { "start": 119.84, "end": 128.64000000000001, "text": " logging. That sounds good. So we'll need Torch probably right and we'll" }, { "start": 128.64000000000001, "end": 137.84, "text": " need PyTorch Lightning as PL. We'll need the NLP library of course since" }, { "start": 137.84, "end": 142.6, "text": " we're gonna use that and we'll need the Transformers library. Now I know" }, { "start": 142.6, "end": 146.8, "text": " Hugging Face has this tokenizers library too but there are some tokenizers in the" }, { "start": 146.8, "end": 154.16, "text": " transformer library already and we'll just keep it light like this. So maybe" }, { "start": 154.16, "end": 161.32, "text": " NumPy, maybe not. Let's see. So we'll export, we'll have these flags object" }, { "start": 161.32, "end": 172.68, "text": " here. Maybe we'll do some flags later and the main function. Let's just call hello." }, { "start": 172.68, "end": 186.72, "text": " Actually let's log that info and alright. Run main. 
So this is our boilerplate" }, { "start": 186.72, "end": 195.6, "text": " and let's just quickly try it out just to see whether it works. So here we are." }, { "start": 195.6, "end": 202.64, "text": " Hello. That's fine. Alright so where do we go from here? So in PyTorch Lightning" }, { "start": 202.64, "end": 207.92, "text": " what you'll have to do is you have to build this kind of model class." }, { "start": 207.92, "end": 216.95999999999998, "text": " We'll build an IMDB sentiment classifier and that's going to extend this Lightning" }, { "start": 216.95999999999998, "end": 221.51999999999998, "text": " module of PyTorch Lightning. So you need different things in the PyTorch Lightning" }, { "start": 221.51999999999998, "end": 227.56, "text": " module. First of all you need the init and we'll just do like a very basic" }, { "start": 227.56, "end": 234.32, "text": " init. We'll call super on it and that's about it. And you need a forward method" }, { "start": 234.32, "end": 239.84, "text": " since this is a module. So in the forward method you're going to get a batch and" }, { "start": 239.84, "end": 247.04, "text": " you have to do something with it. What we also need is a training step method." }, { "start": 247.04, "end": 255.16, "text": " Training step which gets a batch and a batch index and we'll have to output" }, { "start": 255.16, "end": 261.76, "text": " some kind of loss or some kind of training procedure. Then we'll need a" }, { "start": 261.76, "end": 268.71999999999997, "text": " train data loader. So all of this you can look up in the" }, { "start": 268.71999999999997, "end": 272.71999999999997, "text": " documentation of PyTorch Lightning. Basically you implement these methods" }, { "start": 272.71999999999997, "end": 276.56, "text": " and it will do the rest for you. So it will do all the training loop and it" }, { "start": 276.56, "end": 285.58, "text": " will do the handling of GPUs and whatnot. The whole looping over epochs. All of" }, { "start": 285.58, "end": 289.96, "text": " that is basically taken care of for you when you use PyTorch Lightning. So last" }, { "start": 289.96, "end": 297.59999999999997, "text": " thing we need is maybe a prepare data. Let's put that more up here. Prepare data." }, { "start": 297.59999999999997, "end": 302.32, "text": " That method is optional but it gets called at the beginning and that's going" }, { "start": 302.32, "end": 306.79999999999995, "text": " to be pretty good for us. I have downloaded the weights of a BERT model" }, { "start": 306.79999999999995, "end": 310.91999999999996, "text": " and the data set so we don't need to do that anymore." }, { "start": 310.91999999999996, "end": 318.96, "text": " That's about it. Maybe I've forgotten something." }, { "start": 318.96, "end": 325.23999999999995, "text": " Lightning examples. There's what we're going to do. We're going to look at it" }, { "start": 325.23999999999995, "end": 330.96, "text": " like an example of PyTorch Lightning and just to see whether we'll have it." }, { "start": 330.96, "end": 337.76, "text": " Maybe here domain examples, ImageNet sounds good. We'll have these methods." }, { "start": 337.76, "end": 342, "text": " This is way more than we need but down here. Basically what you do is you" }, { "start": 342, "end": 347.15999999999997, "text": " instantiate your model and we won't have these hyper parameters here." 
}, { "start": 347.16, "end": 351.48, "text": " These will be our flags but then you'll implement this trainer and then you call" }, { "start": 351.48, "end": 363.16, "text": " fit on the model. Let's maybe copy this down here." }, { "start": 363.16, "end": 371.84000000000003, "text": " This is our IMDB sentiment classifier and the trainer." }, { "start": 371.84, "end": 383.32, "text": " The root here, let's call that logs. GPUs. We'll give it a GPU if CUDA is available." }, { "start": 384.44, "end": 392.23999999999995, "text": " Else zero. Then we'll make a flag for the epochs. We don't need the rest of this." }, { "start": 392.23999999999995, "end": 399.84, "text": " Then at the end we'll call fit model. If we had a classifier this" }, { "start": 399.84, "end": 411.59999999999997, "text": " would already run. Now what I like to do is to have this module" }, { "start": 411.59999999999997, "end": 418.67999999999995, "text": " called SH which gives you some sort of easy shell commands. At the beginning" }, { "start": 418.67999999999995, "end": 427.2, "text": " of each run whenever the file loads I remove the logs folder." }, { "start": 427.2, "end": 435.12, "text": " I have a clean logs folder and then I make it again like this." }, { "start": 435.12, "end": 440.28, "text": " It just deletes the logs and then runs them again. If we run this right now" }, { "start": 440.28, "end": 447.24, "text": " this is going to give us an error. We don't have an epochs flag." }, { "start": 447.24, "end": 459.36, "text": " We need to define a flag. Let's call define integer. We'll go for 10 epochs right now." }, { "start": 459.36, "end": 470.92, "text": " We haven't configured our optimizers." }, { "start": 470.92, "end": 476.12, "text": " In PyTorch Lightning you need some sort of optimizer configuration." }, { "start": 476.12, "end": 485, "text": " I'll just copy that from an example. I'm going full Siraj here people." }, { "start": 485, "end": 491.96, "text": " We need to configure optimizers. I like the SGD for this. It tends to work well in neural networks." }, { "start": 491.96, "end": 496.6, "text": " We don't need the scheduler. We don't need any of that." }, { "start": 496.6, "end": 505.72, "text": " Let's just return the SGD optimizer with the parameters and we'll make a flag for" }, { "start": 505.72, "end": 513.1600000000001, "text": " the learning rate and we'll make a flag for the momentum. We don't need any weight decay." }, { "start": 513.1600000000001, "end": 523.96, "text": " Let's put these. We'll make floats for the learning rate." }, { "start": 523.96, "end": 533, "text": " Maybe start off with something like this. I never put help strings if the description is rather clear." }, { "start": 533, "end": 542.36, "text": " Only losers need help. Don't be kidding yourself." }, { "start": 542.36, "end": 551.4, "text": " If you put the help string you need help. That's how it works." }, { "start": 551.4, "end": 557.64, "text": " I just don't like that this library forces you to put the help string because it somehow makes me feel bad." }, { "start": 557.64, "end": 565, "text": " It's very opinionated. It says basically you should put something there." }, { "start": 565, "end": 572.36, "text": " We have this and now when we run this we don't have anything to optimize yet." }, { "start": 572.36, "end": 583.3199999999999, "text": " First of all we need the model. Do we need to prepare data first?" }, { "start": 583.32, "end": 589.6400000000001, "text": " Let's check. 
I have this short snippet here that embeds an IPython shell." }, { "start": 589.6400000000001, "end": 593.48, "text": " I just plug this into anywhere so I can see if I reach it." }, { "start": 593.48, "end": 598.6800000000001, "text": " I reach the prepare data. Let's care about the data set first." }, { "start": 598.6800000000001, "end": 606.6, "text": " This NLP library as you can see right here. There's the usage right here." }, { "start": 606.6, "end": 614.44, "text": " You can load a data set here with the appropriate split." }, { "start": 614.44, "end": 619.08, "text": " It will basically just give it back. If you don't have it, it will download it." }, { "start": 619.08, "end": 624.6800000000001, "text": " It's pretty cool. We'll just load the data set." }, { "start": 624.6800000000001, "end": 634.0400000000001, "text": " I've already checked out what they have and they have the IMDB data set." }, { "start": 634.04, "end": 642.28, "text": " In this split argument we can say give me the train split." }, { "start": 642.28, "end": 648.92, "text": " As a string you can say give me whatever the first 5% of the train split." }, { "start": 648.92, "end": 652.52, "text": " This is just my laptop here." }, { "start": 652.52, "end": 656.68, "text": " We won't be able to train a super high grade model." }, { "start": 656.68, "end": 663.24, "text": " We'll go for 5% of the train split. This is to train data set." }, { "start": 663.24, "end": 668.36, "text": " Now if we run until here." }, { "start": 668.36, "end": 674.92, "text": " If you had not downloaded this, it would download this." }, { "start": 674.92, "end": 677.5600000000001, "text": " Given the train data set, I hope you can see this." }, { "start": 677.5600000000001, "end": 683.88, "text": " It says it's a data set. It has 1250 rows." }, { "start": 683.88, "end": 687.88, "text": " Each entry has a text and a label." }, { "start": 687.88, "end": 692.36, "text": " You can just index this like a data set." }, { "start": 692.36, "end": 696.36, "text": " That's the first sample. The label is 1 here." }, { "start": 696.36, "end": 704.12, "text": " It means that we should predict the label that this is a good sentiment." }, { "start": 704.12, "end": 707.32, "text": " It's either 1 or 0." }, { "start": 707.32, "end": 712.6800000000001, "text": " I think so. Either good sentiment or bad sentiment." }, { "start": 712.6800000000001, "end": 719.88, "text": " Our first task is going to be to get this into a form where BERT can consume it." }, { "start": 719.88, "end": 723.96, "text": " How do we do this with this NLP library? That's the pretty cool part." }, { "start": 723.96, "end": 726.36, "text": " Right now you see this is text." }, { "start": 726.36, "end": 730.92, "text": " In NLP we need to map this text into token IDs." }, { "start": 730.92, "end": 735.08, "text": " We need to tokenize and we need to map this to IDs." }, { "start": 735.08, "end": 738.6, "text": " Huggingface of course has very nice libraries for that." }, { "start": 738.6, "end": 742.36, "text": " They're called tokenizers." }, { "start": 742.36, "end": 746.52, "text": " We'll have one of these tokenizers." }, { "start": 746.52, "end": 751.3199999999999, "text": " We'll use this from the transformers library." }, { "start": 751.3199999999999, "end": 757, "text": " I think this is called BERT tokenizer." }, { "start": 757, "end": 761.48, "text": " That the BERT models can use. Let's check it out." }, { "start": 761.48, "end": 765.16, "text": " We're at the documentation." 
}, { "start": 765.16, "end": 769.3199999999999, "text": " BERT tokenizer. There we go. There's a BERT tokenizer." }, { "start": 769.3199999999999, "end": 772.36, "text": " Fast." }, { "start": 772.36, "end": 778.12, "text": " Yes, okay. We'll take the fast one." }, { "start": 778.12, "end": 780.76, "text": " Maybe not." }, { "start": 781.88, "end": 787.32, "text": " Yeah, we'll take the fast one. Come on. Be risky." }, { "start": 787.32, "end": 790.76, "text": " BERT tokenizer fast." }, { "start": 790.76, "end": 793.88, "text": " I think we can do this from pre-trained." }, { "start": 793.88, "end": 796.84, "text": " They have these methods from pre-trained." }, { "start": 796.84, "end": 802.6800000000001, "text": " We'll take this from pre-trained." }, { "start": 804.36, "end": 807.32, "text": " We'll put the model name here." }, { "start": 807.32, "end": 810.44, "text": " I want to make this a flag." }, { "start": 810.44, "end": 814.9200000000001, "text": " Such that I'm not bound to a particular model." }, { "start": 817.4, "end": 819.88, "text": " Oops." }, { "start": 820.84, "end": 823.64, "text": " Cool." }, { "start": 823.64, "end": 826.84, "text": " This is called model." }, { "start": 828.12, "end": 832.76, "text": " This is our model. BERT based on case." }, { "start": 832.76, "end": 835.72, "text": " We have a tokenizer right now." }, { "start": 835.72, "end": 838.36, "text": " We can now tokenize these things." }, { "start": 838.36, "end": 841.24, "text": " Every entry in the data set." }, { "start": 841.24, "end": 845.88, "text": " In a classic setting we'd have to write a loop for that." }, { "start": 845.88, "end": 848.92, "text": " With this data set library, with this nlp library," }, { "start": 848.92, "end": 852.52, "text": " it's pretty cool that we can tokenize" }, { "start": 852.52, "end": 855.4, "text": " each of the samples. We can map this" }, { "start": 855.4, "end": 861.72, "text": " tokenizer function across the training data set." }, { "start": 861.72, "end": 864.52, "text": " How do we do that?" }, { "start": 865, "end": 868.6, "text": " We have this tokenizer." }, { "start": 868.6, "end": 872.84, "text": " I'm pretty sure it has a tokenizer, an encode or something method." }, { "start": 872.84, "end": 876.76, "text": " There's forward. This is the BERT model." }, { "start": 876.76, "end": 879.8, "text": " Where's the BERT tokenizer?" }, { "start": 879.8, "end": 884.4399999999999, "text": " Right here." }, { "start": 884.4399999999999, "end": 892.3599999999999, "text": " It has this encode or something." }, { "start": 892.5999999999999, "end": 896.3599999999999, "text": " Here. Oh yeah. Encode." }, { "start": 896.52, "end": 899.56, "text": " Where is the definition of that?" }, { "start": 899.56, "end": 903.3199999999999, "text": " Can we click on this?" }, { "start": 903.3199999999999, "end": 907.56, "text": " This encode takes text and it takes a bunch of other arguments." }, { "start": 907.56, "end": 910.8399999999999, "text": " I hope you can see this." }, { "start": 910.8399999999999, "end": 913.4, "text": " There we go." }, { "start": 913.7199999999999, "end": 918.4399999999999, "text": " Whether or not you should add the special tokens" }, { "start": 918.4399999999999, "end": 922.3599999999999, "text": " or the max length. This is going to be pretty important." }, { "start": 922.3599999999999, "end": 926.4399999999999, "text": " And pad to max length. We want everything to be of the same length." 
}, { "start": 926.4399999999999, "end": 934.1199999999999, "text": " If you apply this encode function to a text of these samples." }, { "start": 934.12, "end": 938.2, "text": " Let's just take the first sample here and let's take the text entry." }, { "start": 938.2, "end": 942.6, "text": " Then what you're going to get is a list of these IDs." }, { "start": 942.6, "end": 944.84, "text": " This is exactly what we want." }, { "start": 944.84, "end": 949.16, "text": " The 101 here is this CLS token that BERT takes in." }, { "start": 949.16, "end": 952.04, "text": " Then it's just the word pieces." }, { "start": 952.04, "end": 957.08, "text": " You could also say instead of this say tokenize." }, { "start": 957.08, "end": 961.32, "text": " I think. That will just give you the word pieces." }, { "start": 961.32, "end": 964.84, "text": " Not the encodes yet." }, { "start": 964.84, "end": 967.72, "text": " These are the word pieces right here." }, { "start": 967.72, "end": 972.5200000000001, "text": " This is the tokenized text and with the encode function it does this." }, { "start": 972.5200000000001, "end": 978.0400000000001, "text": " Then it maps these two IDs such that BERT can consume that." }, { "start": 978.0400000000001, "end": 984.5200000000001, "text": " For this NLP, this library, has this convenient function called map in their data set." }, { "start": 984.5200000000001, "end": 991.24, "text": " What we'll have to do first is define a tokenized function that takes in a single sample." }, { "start": 991.24, "end": 1000.92, "text": " Then it will run the tokenizer encode function across the text entry." }, { "start": 1000.92, "end": 1003.88, "text": " We have already seen we need like..." }, { "start": 1003.88, "end": 1010.28, "text": " Add special tokens is true. This is cool. Max length, yes." }, { "start": 1010.28, "end": 1016.12, "text": " We'll make a flag sequence length or something." }, { "start": 1016.12, "end": 1023.24, "text": " We are going to pad to max length is true." }, { "start": 1023.24, "end": 1027.48, "text": " Every single sample will be of the same size." }, { "start": 1027.48, "end": 1030.84, "text": " In this function there's a number of ways what you can return here." }, { "start": 1030.84, "end": 1035.48, "text": " One way is to return the original sample and actually just set a new attribute." }, { "start": 1035.48, "end": 1038.68, "text": " I think." }, { "start": 1038.68, "end": 1042.68, "text": " Set a new attribute on this original sample right here." }, { "start": 1042.68, "end": 1047.48, "text": " Let's format this a bit nicer." }, { "start": 1047.48, "end": 1051, "text": " You see we have this tokenized function. It takes a sample, it takes the text," }, { "start": 1051, "end": 1057.3200000000002, "text": " it tokenizes it, encodes it and puts this as the new attribute input IDs and returns it again." }, { "start": 1057.3200000000002, "end": 1066.52, "text": " Now what we can do is we can map this function across the training data set." }, { "start": 1066.52, "end": 1072.6000000000001, "text": " This will go over the training data set and basically for each entry do this thing." }, { "start": 1072.6, "end": 1077.32, "text": " Hopefully after this operation we'll have a data set" }, { "start": 1077.32, "end": 1088.9199999999998, "text": " where each sample not only has a text and a label but also an input IDs attribute." }, { "start": 1088.9199999999998, "end": 1094.6, "text": " We don't have this sequence length thing right here." 
}, { "start": 1094.6, "end": 1098.1999999999998, "text": " Let's put that here." }, { "start": 1098.2, "end": 1104.44, "text": " Let's just go with 32 since this is just my laptop." }, { "start": 1104.44, "end": 1108.04, "text": " 32 samples should be fine." }, { "start": 1108.04, "end": 1113.4, "text": " Here it says can't pickle tokenizer objects." }, { "start": 1113.4, "end": 1120.2, "text": " What it tries to do right here is it tries to" }, { "start": 1120.76, "end": 1124.76, "text": " it tries to" }, { "start": 1124.76, "end": 1128.76, "text": " parallelize basically this thing right here." }, { "start": 1128.76, "end": 1134.92, "text": " If we look at this NLP thing, is there documentation to this?" }, { "start": 1136.12, "end": 1140.28, "text": " We can just look at the data sets maybe." }, { "start": 1140.28, "end": 1145.96, "text": " Naming, splits, builder, arrow, data set." }, { "start": 1146.92, "end": 1150.04, "text": " Map right here." }, { "start": 1150.04, "end": 1156.44, "text": " This function I think it will try to multiprocess and therefore it needs to basically" }, { "start": 1157.08, "end": 1167.6399999999999, "text": " pickle all of the things that go into the function." }, { "start": 1167.6399999999999, "end": 1172.2, "text": " It pickles all of the things that go into the function which means this tokenizer right here" }, { "start": 1172.2, "end": 1179.8, "text": " it needs to be pickled." }, { "start": 1179.8, "end": 1184.76, "text": " Maybe there's a way to get around this." }, { "start": 1187, "end": 1190.28, "text": " One thing we can try is we can try another tokenizer." }, { "start": 1192.28, "end": 1194.92, "text": " Maybe this one can be pickled." }, { "start": 1194.92, "end": 1198.76, "text": " This did library is pretty good but it can't pickle everything." }, { "start": 1198.76, "end": 1203.32, "text": " This tokenizer can actually be pickled." }, { "start": 1203.32, "end": 1210.6, "text": " I'm not entirely sure what you'd have to do honestly" }, { "start": 1210.6, "end": 1215.32, "text": " because I don't know the library but what you could do is make a thread or" }, { "start": 1215.32, "end": 1219.56, "text": " process local variable of this and basically make it a singleton in each" }, { "start": 1219.56, "end": 1223.08, "text": " process and then basically in here you call the function to get it" }, { "start": 1223.08, "end": 1227.96, "text": " and it returns the already instantiated object and so on." }, { "start": 1227.96, "end": 1230.52, "text": " If you really want to multiprocess all of this." }, { "start": 1230.52, "end": 1234.44, "text": " Anyway we have this train data set right now and you see the schema." }, { "start": 1235.24, "end": 1239.16, "text": " If you can see this the schema has been extended so there is now text, there is" }, { "start": 1239.16, "end": 1244.52, "text": " label and there is input IDs which is a list of int64 things." }, { "start": 1245.16, "end": 1246.2, "text": " That's pretty cool." }, { "start": 1247.16, "end": 1253.08, "text": " So now what we can do since this is still a python list right this is still a python list." }, { "start": 1253.08, "end": 1258.36, "text": " Now I know the tokenizers can already output PyTorch tensors but that's kind of cheating." }, { "start": 1258.36, "end": 1262.1999999999998, "text": " So we want to use this library right here." }, { "start": 1262.1999999999998, "end": 1264.36, "text": " We want the train data set." 
}, { "start": 1264.36, "end": 1270.6799999999998, "text": " There is a method called set format right here and you say type equals torch." }, { "start": 1271.48, "end": 1277.96, "text": " What that does and I think you need to say which columns you want." }, { "start": 1277.96, "end": 1282.52, "text": " So we want columns." }, { "start": 1282.52, "end": 1284.52, "text": " Maybe we should get all columns." }, { "start": 1284.52, "end": 1286.52, "text": " Can we output the text?" }, { "start": 1287.48, "end": 1292.68, "text": " So you can select from the sample which of the columns you want and let's check it out again." }, { "start": 1292.68, "end": 1298.92, "text": " For now as long as we're just debugging here I like to do a debug flag." }, { "start": 1300.04, "end": 1302.68, "text": " So this is usually one of the first flags I do." }, { "start": 1302.68, "end": 1308.3600000000001, "text": " It's define boolean debug." }, { "start": 1312.6000000000001, "end": 1316.92, "text": " What this does is whenever this is active I try to be as fast as possible." }, { "start": 1316.92, "end": 1322.68, "text": " So there in this PyTorch lightning trainer there's actually this fast def run argument." }, { "start": 1325.0800000000002, "end": 1330.8400000000001, "text": " Which does the same thing but I can push it a bit harder with this debug here." }, { "start": 1330.84, "end": 1337.24, "text": " So let me say this is like one." }, { "start": 1337.24, "end": 1345.3999999999999, "text": " We'll just load batch size samples if we are in debug mode." }, { "start": 1348.9199999999998, "end": 1352.9199999999998, "text": " We don't actually have a batch size argument yet do we?" }, { "start": 1352.92, "end": 1361.16, "text": " If flags.debug else 5 percent." }, { "start": 1361.16, "end": 1363.16, "text": " So we don't have batch size yet." }, { "start": 1363.16, "end": 1365.16, "text": " We're surely gonna need that at some point." }, { "start": 1365.16, "end": 1373.16, "text": " So let's go with a batch size of 8 just because we can." }, { "start": 1373.16, "end": 1384.44, "text": " Now if we run this in debug we should..." }, { "start": 1385.72, "end": 1388.28, "text": " Ah okay yes this needs to be a string." }, { "start": 1390.6000000000001, "end": 1391.24, "text": " Shag-a-boom!" }, { "start": 1392.0400000000002, "end": 1398.2, "text": " Cool so it says it's the fast def run and if we run it in debug it just loads very few data points." }, { "start": 1398.2, "end": 1401.8000000000002, "text": " So this map function here doesn't take this whole while." }, { "start": 1401.8, "end": 1404.68, "text": " Maybe there's a way you can stream that I don't know." }, { "start": 1404.68, "end": 1406.04, "text": " For now this is pretty good." }, { "start": 1406.04, "end": 1414.04, "text": " So if we look at the train data set again you can see that it has the same entry." }, { "start": 1414.04, "end": 1420.68, "text": " So this is still a list of 64 but if you index it right now if you go to the zero data point." }, { "start": 1421.56, "end": 1428.84, "text": " Okay then it crashes because it tries to convert these two PyTorch tensors" }, { "start": 1428.84, "end": 1436.12, "text": " and it can't convert the string so we'll have to say we just want the columns input IDs" }, { "start": 1436.12, "end": 1437.6399999999999, "text": " and we want the label." }, { "start": 1438.9199999999998, "end": 1440.9199999999998, "text": " Label can't spell." 
}, { "start": 1441.9599999999998, "end": 1444.36, "text": " Okay let's try it again." }, { "start": 1448.4399999999998, "end": 1456.1999999999998, "text": " So right here you see that what we get out is actually a PyTorch tensors for this" }, { "start": 1456.1999999999998, "end": 1458.6799999999998, "text": " and not kind of Python lists anymore." }, { "start": 1458.68, "end": 1464.76, "text": " So this is now pretty this is one-to-one so with duck typing maybe it's even subclassed." }, { "start": 1464.76, "end": 1470.2, "text": " This is a PyTorch data set right which we can load into a data loader." }, { "start": 1470.2, "end": 1480.1200000000001, "text": " So this is a perfectly fine data set so we can now say self train data set is this train data set." }, { "start": 1480.12, "end": 1488.36, "text": " Now we want to do this for the test as well but in order to do that we would have to write" }, { "start": 1488.36, "end": 1495.08, "text": " all of this code again which I'm not really in the mood so we'll just loop it." }, { "start": 1496.12, "end": 1504.28, "text": " We'll create a function prepare data set and we'll take in the split name." }, { "start": 1504.28, "end": 1512.36, "text": " The split name right like this and we'll just go with the split name here." }, { "start": 1517.48, "end": 1520.36, "text": " That should do it and we just call it data set." }, { "start": 1523, "end": 1527.96, "text": " Data set and return that." }, { "start": 1527.96, "end": 1538.68, "text": " So now we can say train data set self dot test data set is prepare data set." }, { "start": 1543.16, "end": 1544.8400000000001, "text": " For train and test." }, { "start": 1548.92, "end": 1552.8400000000001, "text": " Excellent so now we have a training data set and a testing data set." }, { "start": 1552.84, "end": 1559.08, "text": " So here in the train data loader we need to take the training data set" }, { "start": 1559.08, "end": 1568.52, "text": " and construct a data loader from it. This is super easy so what we'll do is we'll do it in one line." }, { "start": 1570.6799999999998, "end": 1574.84, "text": " Data loader so what does the data loader need? The data loader needs a data set." }, { "start": 1576.1999999999998, "end": 1581, "text": " So the prepare data is called at the beginning so we have this data set right here and I think we" }, { "start": 1581, "end": 1589.72, "text": " can go with a batch size right here and we already have a flag for that and I think there is like a" }, { "start": 1589.72, "end": 1597.16, "text": " drop last yes so the drop last will go for true we only want full batches during training and we'll" }, { "start": 1597.16, "end": 1608.52, "text": " also shuffle. Okay and the same goes for we need a validation data loader for our validation set." }, { "start": 1608.52, "end": 1613.8, "text": " So in PyTorch Lightning you have train validation and test and test is really only for like the" }, { "start": 1613.8, "end": 1621.32, "text": " final final test. If the test data set we have here is the would be called the validation data" }, { "start": 1621.32, "end": 1629.48, "text": " set in PyTorch Lightning so we false here false we don't want to shuffle particularly. Okay" }, { "start": 1630.44, "end": 1635.16, "text": " so we have a training data loader and a validation data loader." }, { "start": 1635.16, "end": 1639.0800000000002, "text": " Now what do we need? We have optimizer very good." }, { "start": 1641.48, "end": 1647.3200000000002, "text": " Now what do we need? 
All we need to do is to actually pass our data through the BERT model." }, { "start": 1647.3200000000002, "end": 1651.16, "text": " So this forward thing here we're just going to leave not implemented." }, { "start": 1652.76, "end": 1662.44, "text": " Maybe we can implement it. Okay so we do need a model as you can see right here this batch" }, { "start": 1662.44, "end": 1668.6000000000001, "text": " let's say this batch is going to let's go right here right so if you know if you don't sometimes" }, { "start": 1668.6000000000001, "end": 1676.2, "text": " don't know what to do you just go to where you should be okay at ultimate empty parameter we" }, { "start": 1676.2, "end": 1685.0800000000002, "text": " don't have parameters yet all right so what do we do we go up here and we make a model we need to" }, { "start": 1685.08, "end": 1694.4399999999998, "text": " actually make the BERT model right here so from transformers we can use the BERT model now they" }, { "start": 1694.4399999999998, "end": 1704.04, "text": " have a lot of BERT models and we'll go back right here to the BERT models because they as you know" }, { "start": 1704.04, "end": 1710.6799999999998, "text": " BERT is just an encoder so we need to build a classifier on top of BERT but they already have" }, { "start": 1710.68, "end": 1716.6000000000001, "text": " done this so they have a bunch of BERT different configurations and the one we're looking for here" }, { "start": 1716.6000000000001, "end": 1722.44, "text": " would be this this BERT for sequence classification right this is BERT BERT model transformer with a" }, { "start": 1722.44, "end": 1729.16, "text": " sequence classification or regression head on top right so this is exactly what we need a classifier" }, { "start": 1729.16, "end": 1738.3600000000001, "text": " on top of BERT and we can um i think we can also load this with this from pre-trained and just put" }, { "start": 1738.36, "end": 1748.9199999999998, "text": " in the same name so we can this BERT for sequence classification and we'll load up the same model" }, { "start": 1748.9199999999998, "end": 1760.9199999999998, "text": " that we had okay so um this is our model easy as that so what do we what do we do with this BERT if" }, { "start": 1760.92, "end": 1768.52, "text": " we put in data what what happens for that we quickly go back again so in the forward method we can" }, { "start": 1768.52, "end": 1776.92, "text": " in we can input the input ids right which is batch size sequence length tensor we can input the" }, { "start": 1776.92, "end": 1782.92, "text": " attention mask that basically tells you where there's padding and where there isn't" }, { "start": 1782.92, "end": 1790.3600000000001, "text": " masks to avoid performing attention on padding token mask value selected in zero one one for tokens" }, { "start": 1790.3600000000001, "end": 1796.76, "text": " that are not masks zero for tokens that are masks then we can input the token type ids which we don't" }, { "start": 1796.76, "end": 1801.64, "text": " have here we just have one sentence but usually in BERT you have the capability of inputting two" }, { "start": 1801.64, "end": 1806.6000000000001, "text": " different types like a question and a paragraph or a first sentence and the second sentence" }, { "start": 1806.6, "end": 1816.9199999999998, "text": " um position ids are optional um blah blah blah blah blah blah blah none of that okay we could also" }, { "start": 1816.9199999999998, "end": 1825.8799999999999, "text": " input the labels these 
are optional and it would already compute a loss for us uh which we we don't" }, { "start": 1825.8799999999999, "end": 1831.8799999999999, "text": " this that's almost cheating so let's just focus on putting in the input ids and i think that's" }, { "start": 1831.88, "end": 1837.4, "text": " gonna be enough since we basically truncate our long text to 32 tokens we don't need to worry about" }, { "start": 1837.4, "end": 1845.4, "text": " masking right here otherwise you would input a mask for um actually we we can do it we can do it" }, { "start": 1846.1200000000001, "end": 1855.64, "text": " okay so what you could input a mask for basically where um your tokens are not pad tokens and the" }, { "start": 1855.64, "end": 1862.6000000000001, "text": " pad tokens in BERT are zero so basically your mask should just be whatever's non-zero uh but" }, { "start": 1863.3200000000002, "end": 1869.5600000000002, "text": " maybe also your model learns um to ignore the pad tokens i might be wrong here and it does it" }, { "start": 1869.5600000000002, "end": 1877.0800000000002, "text": " automatically right so in your forward pass what do you do actually let's go to the training step" }, { "start": 1877.08, "end": 1884.6799999999998, "text": " we'll put something here you can see it so if you if you didn't have BERT um it would actually" }, { "start": 1885.6399999999999, "end": 1890.6799999999998, "text": " uh BERT you it BERT you up it would download BERT right here but since i have it you can see here" }, { "start": 1890.6799999999998, "end": 1897.8799999999999, "text": " this is the smaller BERT model um pytorch lightning i don't have enough space in my console" }, { "start": 1897.8799999999999, "end": 1903.48, "text": " right here but it would give you a nice overview over your model how many parameters it has how" }, { "start": 1903.48, "end": 1910.28, "text": " what kind of layers it has and so on so uh we also need a validation step if we have a validation" }, { "start": 1910.28, "end": 1920.44, "text": " data loader validation step and we need the um validation epoch end function so" }, { "start": 1922.3600000000001, "end": 1927.88, "text": " usually in training you don't really care about epochs too much because you just have many batch" }, { "start": 1927.88, "end": 1934.0400000000002, "text": " after mini batch but in validation uh what you want is kind of one single metric across your entire" }, { "start": 1934.0400000000002, "end": 1940.6000000000001, "text": " test data set or validation data set and therefore you sort of in the validation step you'll just" }, { "start": 1940.6000000000001, "end": 1946.44, "text": " kind of output things you output local things per batch and then in the epoch end function you" }, { "start": 1946.44, "end": 1954.2800000000002, "text": " aggregate them into one big number so um we'll we'll we'll just put" }, { "start": 1954.28, "end": 1960.84, "text": " we'll put things into each thing thing thing so i'm pretty sure we're going to end up in the" }, { "start": 1960.84, "end": 1966.44, "text": " validation step first because if especially if we do this debug run it basically it tries to" }, { "start": 1966.44, "end": 1973.6399999999999, "text": " run a validation first uh at the very start of training so we can look at a batch right here" }, { "start": 1974.44, "end": 1980.36, "text": " so what's a batch um the batch seems to be a dictionary if you look at its keys we can see" }, { "start": 1980.36, "end": 1987.1599999999999, "text": " um the batch seems to be a 
dictionary if you look at its keys we have label and input ids okay so" }, { "start": 1987.1599999999999, "end": 1996.04, "text": " that's pretty cool so if we go for the input ids that gives us a tensor and the tensors of shape" }, { "start": 1996.04, "end": 2002.76, "text": " eight which is our batch size and 32 which is our sequence length and we should be able to pretty" }, { "start": 2002.76, "end": 2011.8799999999999, "text": " much input that into the BERT model that we created boom okay and what do we get out we get out a tuple" }, { "start": 2011.8799999999999, "end": 2018.2, "text": " and the first entry is going to be this looks like logits all right okay let's check the shape" }, { "start": 2019.08, "end": 2023.96, "text": " and this is eight so this is our batch size and two is the logit so one for the negative class" }, { "start": 2023.96, "end": 2030.6, "text": " and one for the positive class and this is this we can basically input into a cross entropy loss" }, { "start": 2030.6, "end": 2042.1999999999998, "text": " given our labels so we also have our label here and their label is all ones nice um is this maybe" }, { "start": 2042.1999999999998, "end": 2049.24, "text": " sorted is the data set sorted into good and bad things because that would be that would be bad" }, { "start": 2050.12, "end": 2059.7999999999997, "text": " in any case um so what do we have to do so in the forward method we get the input ids let's let's" }, { "start": 2059.8, "end": 2068.04, "text": " say we get the input ids and we run this through our model and we can actually construct a mask" }, { "start": 2068.04, "end": 2080.28, "text": " here and the mask is going to be wherever the input ids are not zero and um that as a what does" }, { "start": 2080.28, "end": 2089.1600000000003, "text": " it need to be so these mask this attention mask is going to be a float tensor okay so we'll put it" }, { "start": 2089.16, "end": 2103.64, "text": " as a float tensor cool um right like this so our logits are going to be that and yeah tuple with" }, { "start": 2103.64, "end": 2109.48, "text": " one entry so the comma here is important we're going to return the logits so this is our forward" }, { "start": 2109.48, "end": 2115.3999999999996, "text": " function so in the validation and the training step the first thing we got to do is we got to" }, { "start": 2115.4, "end": 2122.12, "text": " uh call this forward function with the input ids and these of course are in our batch" }, { "start": 2124.44, "end": 2132.92, "text": " like this so these are going to be our logits and then in the validation what we want to do is we" }, { "start": 2132.92, "end": 2138.44, "text": " first of all want to compute our loss right so we have to construct this up here in the init" }, { "start": 2138.44, "end": 2150.28, "text": " we can actually fold this prepare data um loss is going to be a cross entropy loss yes that exists" }, { "start": 2150.28, "end": 2160.36, "text": " with read reduction i like to put reduction none i don't think there's like an a deprecated reduce" }, { "start": 2160.36, "end": 2165.64, "text": " and there is like a reduction where you can put mean or something i like to not reduce the loss" }, { "start": 2165.64, "end": 2170.12, "text": " at first because then i can agro i can use the same thing for validation and training" }, { "start": 2171.64, "end": 2182.44, "text": " so in the validation step i just want to compute my loss right here with self so loss loss um and" }, { "start": 2183.3199999999997, 
"end": 2184.92, "text": " we'll have to cheat a bit" }, { "start": 2184.92, "end": 2192.92, "text": " so look up the cross entropy loss and" }, { "start": 2196.92, "end": 2202.92, "text": " come on okay where is the cross entropy loss" }, { "start": 2202.92, "end": 2215.48, "text": " loss cross entropy loss it takes yes it's reduction ha tada and" }, { "start": 2219.32, "end": 2224.28, "text": " so the input to the function that we construct is going to be first um" }, { "start": 2224.28, "end": 2231.48, "text": " n by c first the input and then the targets so first the logits and then the targets right" }, { "start": 2233, "end": 2243.4, "text": " criterion that combines logs of max and nl loss over a single class nice nice nice okay okay cool" }, { "start": 2243.4, "end": 2254.6800000000003, "text": " so first logits and then labels label okay that's our loss so if we check out what our loss is going" }, { "start": 2254.6800000000003, "end": 2264.28, "text": " to be it's probably going to be an vector of size eight because we have reduction none" }, { "start": 2264.28, "end": 2275.6400000000003, "text": " none loss yes c vector of size eight very nice so we can just um basically return" }, { "start": 2277.1600000000003, "end": 2285.0800000000004, "text": " i'll say we can return this loss just as is and then in the validation epoch end the outputs here" }, { "start": 2285.0800000000004, "end": 2291.7200000000003, "text": " is going to be a list of and every entry in the list is going to be one of these validation steps" }, { "start": 2291.72, "end": 2301.72, "text": " for for one batch so we can aggregate those so losses is will concatenate them since they're" }, { "start": 2301.72, "end": 2313.64, "text": " going to be chunks of eight outputs at the dimension zero and then we can calculate the mean right so" }, { "start": 2313.64, "end": 2318.92, "text": " um we can do that and then" }, { "start": 2321.96, "end": 2332.92, "text": " we can oh no we need to do more we also want to know the accuracy right so um the accuracy is" }, { "start": 2332.92, "end": 2346.76, "text": " going to be whether or not the logits dot arg max is go is equal to the label label" }, { "start": 2347.7200000000003, "end": 2352.44, "text": " so the accuracy for each sample is going to be that it's either going to be one or zero and" }, { "start": 2352.44, "end": 2364.76, "text": " we want that as a float so here let's output a dictionary with loss um and accuracy all right" }, { "start": 2366.04, "end": 2375, "text": " excellent so here then we can aggregate so the loss is going to be and i like to have um like" }, { "start": 2375, "end": 2386.2, "text": " a construction here that aggregates this still so we go out loss for o in outputs so these are now" }, { "start": 2386.2, "end": 2394.44, "text": " going to be uh entries each one is going to be a dictionary right so our loss losses we have" }, { "start": 2394.44, "end": 2403.88, "text": " concatenation to the mean okay our accuracy is going to be the same thing for the accuracy" }, { "start": 2403.88, "end": 2408.04, "text": " nice so our output here is going to be a dictionary" }, { "start": 2412.2000000000003, "end": 2418.12, "text": " and i think in pytorch lightning there there if you put validation accuracy select valac" }, { "start": 2418.76, "end": 2425.88, "text": " it selects the model according to this but i'm not sure so also in pytorch lightning i can now" }, { "start": 2425.88, "end": 2433.48, "text": " output this here but also if you have a log 
entry it will forward this to the logger which is" }, { "start": 2433.48, "end": 2439.48, "text": " the logger which we can uh actually do and make a tensor board logger out of this so what have we" }, { "start": 2439.48, "end": 2446.2, "text": " done we have first of all set up the validation step so the the pytorch lightning is going to" }, { "start": 2446.2, "end": 2452.12, "text": " run through the data loader for each batch do this so we forward it through the bert model to" }, { "start": 2452.12, "end": 2457.8, "text": " get our log it's and then we compute our loss by the cross entropy loss of the log it's and the" }, { "start": 2457.8, "end": 2463.4, "text": " labels and we also compute our accuracy by seeing how much the log it's agree with the labels or the" }, { "start": 2463.4, "end": 2469.8, "text": " maximum log it and then we aggregate all of this over the entire epoch and output that now let's" }, { "start": 2469.8, "end": 2478.92, "text": " set up a logger so for the logger we can put this i think in the trainer here pytorch lightning" }, { "start": 2478.92, "end": 2489.48, "text": " logger dot and i think there is a tensor board logger uh pretty sure pytorch lightning is there" }, { "start": 2489.48, "end": 2494.04, "text": " tensor board no pytorch" }, { "start": 2495.4, "end": 2503.2400000000002, "text": " nying logger i'm pretty sure that exists that's not the newest version i hate these" }, { "start": 2503.96, "end": 2511.72, "text": " these old docs so latest come on oh this was called logging logger" }, { "start": 2511.72, "end": 2513.72, "text": " log" }, { "start": 2519.8799999999997, "end": 2520.3599999999997, "text": " loggers" }, { "start": 2523.48, "end": 2530.2799999999997, "text": " tensor board logger right here nice so our save dear is going to be called logs and then" }, { "start": 2530.28, "end": 2540.2000000000003, "text": " what we what do we want we want the name imdb and there's also this version thing" }, { "start": 2542.6000000000004, "end": 2548.76, "text": " where if if if you don't put version zero it will just make a new kind of folder each time but i" }, { "start": 2548.76, "end": 2552.76, "text": " guess we delete the logs anyway we delete the logs folder at the beginning so we don't have" }, { "start": 2552.76, "end": 2558.2000000000003, "text": " this problem but i generally like to overwrite my logs and not make new runs but if you like" }, { "start": 2558.2, "end": 2565.56, "text": " something different that's you know fine all right so let's run this again and we're cool" }, { "start": 2565.56, "end": 2573.08, "text": " though this is the bird configuration that we loaded and then we have no attribute logger" }, { "start": 2573.7999999999997, "end": 2577.3199999999997, "text": " pytorch lightning loggers loggers" }, { "start": 2577.32, "end": 2582.76, "text": " loggers" }, { "start": 2584.6800000000003, "end": 2592.6000000000004, "text": " okay again loading the weights very cool blah blah blah blah blah blah blah blah and we're in" }, { "start": 2592.6000000000004, "end": 2598.44, "text": " the night python shell and do we have a night python shell remaining only in the training step" }, { "start": 2598.44, "end": 2604.1200000000003, "text": " okay so we're at the training step right here and we can actually can we can check whether or not" }, { "start": 2604.12, "end": 2607.96, "text": " um ah now we have lightning logs and logs my" }, { "start": 2612.44, "end": 2618.2799999999997, "text": " okay so these appear to be our tensor board 
logs so we are maybe able to run the tensor board here" }, { "start": 2618.2799999999997, "end": 2626.8399999999997, "text": " later um let's run it logs we don't have tensor board okay" }, { "start": 2626.84, "end": 2636.6800000000003, "text": " oh yeah i've uninstalled it because i was angry at it oh come on what's going on um" }, { "start": 2639, "end": 2647.8, "text": " tensor board i should have tensor board somewhere uh it's it's like in um in local bin or something" }, { "start": 2647.8, "end": 2655.88, "text": " um in local bin or something local bin no it's not in local bin" }, { "start": 2657.8, "end": 2662.6000000000004, "text": " oh oh we'll find it we'll figure it out" }, { "start": 2664.92, "end": 2670.2000000000003, "text": " uh how to get a tensor board maybe we need to install tensor flow" }, { "start": 2672.92, "end": 2677.0800000000004, "text": " well that's gonna take a while okay so back to the training step in the training step we" }, { "start": 2677.08, "end": 2682.6, "text": " basically need to do the same as in the validation step so we'll need to forward our batch through" }, { "start": 2682.6, "end": 2687.56, "text": " the model but here we don't need to compute an accuracy but we do need to compute a actually a" }, { "start": 2687.56, "end": 2693.64, "text": " batch loss that we can back propagate on now in the training step you can either specify how you" }, { "start": 2693.64, "end": 2702.2, "text": " back propagate um per se or what you can do is you can just output this log loss attribute and then" }, { "start": 2702.2, "end": 2707.96, "text": " pytorch lightning will basically do the back propagation for you we have the tensor board" }, { "start": 2709.96, "end": 2720.12, "text": " now please all right there we go and we can we can put this into a different uh thing right here" }, { "start": 2720.12, "end": 2731.96, "text": " um git uh lp demo yes um" }, { "start": 2735.16, "end": 2740.6, "text": " okay so this is running and if everything goes correctly" }, { "start": 2740.6, "end": 2752.12, "text": " 06 shaboom we have a tensor board okay so we need to forward our training step and we need to" }, { "start": 2752.12, "end": 2758.36, "text": " calculate a loss for the batch so these loss here we do the same thing but here we call mean on it" }, { "start": 2758.36, "end": 2765.96, "text": " so this is the mean loss from this batch and we can now return um the loss right here and we can" }, { "start": 2765.96, "end": 2773.16, "text": " also in the training step you can also output a log dictionary and we'll output the loss again" }, { "start": 2773.8, "end": 2780.44, "text": " here in order so this is our going to be our training loss that we output right here um let's" }, { "start": 2780.44, "end": 2787.16, "text": " call it train loss and this also will go into the tensor board so if we run this right now we don't" }, { "start": 2787.16, "end": 2794.84, "text": " have an ipython shell simply by outputting this loss attribute we already instruct pytorch lightning" }, { "start": 2794.84, "end": 2800.84, "text": " to now run backprop on this loss uh using the optimizer that we have defined okay" }, { "start": 2802.84, "end": 2808.36, "text": " and by outputting the log we instructed to put this into the tensor board so now we have a scalar" }, { "start": 2808.36, "end": 2816.36, "text": " entry and um you can see this it only contains the valid no it contains everything very cool very very" }, { "start": 2816.36, "end": 2821.32, "text": " cool so let's remove 
the debug flag and we'll just see what happens" }, { "start": 2821.32, "end": 2831.48, "text": " so to recap right to recap we have um oh now you go see epoch one epoch two go go go go go" }, { "start": 2833.48, "end": 2841.1600000000003, "text": " ah very cool um what we've done is we've set up this pytorch lightning module it needs a bunch of" }, { "start": 2841.1600000000003, "end": 2846.52, "text": " functions but in the init we've basically just set up our bird model from the hogging face" }, { "start": 2846.52, "end": 2852.52, "text": " transformers library we've loaded in a pre-trained bird model that we're going to fine tune the main" }, { "start": 2852.52, "end": 2859.56, "text": " thing that the pytorch lightning module needs is a training step function where you define what it" }, { "start": 2859.56, "end": 2866.52, "text": " should do with the data and this data loader function so in the data loader function um we've" }, { "start": 2866.52, "end": 2874.12, "text": " loaded up a data set um and we basically specified the batch size this is very easy" }, { "start": 2874.12, "end": 2879.08, "text": " where does the data set come from we do it here in prepare data this is a special function in" }, { "start": 2879.08, "end": 2885.64, "text": " pytorch lightning that's basically called after the init but before anything else runs and here" }, { "start": 2886.3599999999997, "end": 2892.3599999999997, "text": " we are loading this data set from the nlp library and this is kind of the magic part" }, { "start": 2893.16, "end": 2899.16, "text": " we specify the split and the size that we want inside of the string and you can do this in percent" }, { "start": 2899.16, "end": 2904.68, "text": " or in a number of samples that you want i'm sort of sure you can do more things but i haven't" }, { "start": 2904.68, "end": 2910.68, "text": " explored that then we run map on the data set in order to tokenize it and that's right here" }, { "start": 2910.68, "end": 2919.24, "text": " and we use a tokenizer again from the pytorch lightning and um just run this encode function" }, { "start": 2919.24, "end": 2930.04, "text": " this is very simple like if how complicated was this just like a year ago crazy then we need to" }, { "start": 2930.04, "end": 2937, "text": " to put set format and set format tells the data set how it needs to output its samples and we tell" }, { "start": 2937, "end": 2943.56, "text": " it please output torch tensors and um we want these columns right here and we make a train and" }, { "start": 2943.56, "end": 2951.88, "text": " test data set with from the train and test split accordingly so we have this this goes into a data" }, { "start": 2951.88, "end": 2957.56, "text": " loader pytorch lightning will take the data loader and run training on it using this train step" }, { "start": 2957.56, "end": 2963.08, "text": " function in this train step function we get a batch um in the batch there are these two columns" }, { "start": 2963.08, "end": 2968.52, "text": " that we specified previously input ids and label the input ids will put through the forward function" }, { "start": 2968.52, "end": 2974.7599999999998, "text": " of the model itself this is the forward function we'll construct a mask and run it through the model" }, { "start": 2976.44, "end": 2981.56, "text": " um we wouldn't actually need to construct a mask but okay and we get back the logits of the" }, { "start": 2981.56, "end": 2988.36, "text": " classification and then we run this through a cross entropy loss uh get 
the mean of the batch" }, { "start": 2988.36, "end": 2994.92, "text": " and there we go in the validation step we do the same thing but also calculate the accuracy but" }, { "start": 2994.92, "end": 2999.88, "text": " don't calculate the mean we want to keep it per sample and only at the end we want to concatenate" }, { "start": 2999.88, "end": 3008.52, "text": " everything and calculate the mean if we've done everything correctly you see right here our train" }, { "start": 3008.52, "end": 3016.28, "text": " loss goes down down down until it is almost zero because we've just and the validation accuracy is" }, { "start": 3016.28, "end": 3022.12, "text": " super high is this is this because all the labels are equal okay so we have a" }, { "start": 3022.12, "end": 3031.7999999999997, "text": " all the labels are equal like for real um okay so we'll do something else we'll make an integer" }, { "start": 3031.7999999999997, "end": 3040.8399999999997, "text": " um with percent and this was five right so that we loaded five percent of the data set" }, { "start": 3040.84, "end": 3050.44, "text": " um but let's load some more and this might take longer but let's load" }, { "start": 3051.1600000000003, "end": 3054.6000000000004, "text": " 50 percent of the data set and just see what happens" }, { "start": 3057.1600000000003, "end": 3061.48, "text": " no present i called it present" }, { "start": 3066.28, "end": 3066.84, "text": " very good" }, { "start": 3066.84, "end": 3072.44, "text": " so we'll load up 50 percent of the data set and um we'll do the same thing and we can track in" }, { "start": 3072.44, "end": 3079.2400000000002, "text": " real time what happens in tensorboard and unrecognized instruction format um" }, { "start": 3083.8, "end": 3084.36, "text": " okay" }, { "start": 3085.96, "end": 3093.2400000000002, "text": " who can we make a format string in a format string this is nasty does it work" }, { "start": 3093.24, "end": 3101.24, "text": " please work we can make a format string in a format string absolutely bonkers okay so it takes a" }, { "start": 3101.24, "end": 3106.52, "text": " little bit longer and you could actually i think you can speed this up this mapping of the data set" }, { "start": 3107.08, "end": 3115.3999999999996, "text": " maybe you can stream it um i'm pretty sure you can batch it you can do a batch um processing of this" }, { "start": 3115.4, "end": 3124.92, "text": " but for our case right here uh we think it's enough so it was like what 1200 if we had five percent so" }, { "start": 3124.92, "end": 3134.12, "text": " now it should be something like 12 000 um so let's continue with the recap of what we did here" }, { "start": 3135.4, "end": 3140.12, "text": " we have the train data set the validation data set and on yes so we have everything like" }, { "start": 3140.12, "end": 3145.24, "text": " this the configure optimizers you can put an optimizer you can also put a learning rate" }, { "start": 3145.24, "end": 3152.8399999999997, "text": " scheduler if you want to and then in the main function we load this pytorch lightning module" }, { "start": 3153.24, "end": 3159.16, "text": " and we specify a trainer and the trainer we tell it you know the max epochs and so on" }, { "start": 3159.96, "end": 3164.8399999999997, "text": " and we set up the logger and we just run fit on this model and this runs" }, { "start": 3164.84, "end": 3171.96, "text": " epochs of the model and after each epoch it does a validation epoch and uh minimizes our loss" }, { "start": 
3173.1600000000003, "end": 3185.96, "text": " very cool very effective so now if if please if you would all right here we go this is my laptop" }, { "start": 3185.96, "end": 3195.2400000000002, "text": " training burnt oh okay we don't seem to make too much progress" }, { "start": 3199.08, "end": 3200.36, "text": " let's check the tensor board" }, { "start": 3202.92, "end": 3206.28, "text": " training loss goes down training loss goes to zero" }, { "start": 3206.28, "end": 3214.52, "text": " training loss goes down training loss goes to zero i have the sneaking suspicion that this is not" }, { "start": 3214.92, "end": 3223.5600000000004, "text": " entirely shuffled so um is there like a shuffle like a shuffle thing" }, { "start": 3225, "end": 3230.52, "text": " because this seems a bit this seems a bit bit fishy um" }, { "start": 3230.52, "end": 3235.24, "text": " this imdb data set right here it just seems like" }, { "start": 3236.84, "end": 3244.2, "text": " you know we could use a bit of shuffling because all the labels yeah the training loss instantly" }, { "start": 3244.2, "end": 3254.6, "text": " goes to zero so maybe maybe it's not we" }, { "start": 3254.6, "end": 3262.6, "text": " can we shuffle here let's look at the load data set function" }, { "start": 3268.52, "end": 3277, "text": " load data set batched uh no keeping memory no none of that okay" }, { "start": 3277, "end": 3290.28, "text": " this does not seem to go to continue right here data sets nlp data sets" }, { "start": 3292.84, "end": 3299.64, "text": " i hope here we know we should find this load data set somewhere builder features load" }, { "start": 3299.64, "end": 3307, "text": " load data set split" }, { "start": 3309.7999999999997, "end": 3318.52, "text": " can we not shuffle anywhere we'll search shuffle builder" }, { "start": 3318.52, "end": 3327.48, "text": " okay so generate examples this function pre-processed examples key will be hashed" }, { "start": 3331.32, "end": 3338.92, "text": " okay we are not able to shuffle this um just like that and we can't do that" }, { "start": 3338.92, "end": 3351.7200000000003, "text": " okay we are not able to shuffle this um just like that but i hope this at least gives you an" }, { "start": 3351.7200000000003, "end": 3360.2000000000003, "text": " impression i guess if you were to take the full data set and map it then it would actually work" }, { "start": 3360.2, "end": 3367.56, "text": " we'll just try again with 10% of the data just uh to see it go down" }, { "start": 3368.9199999999996, "end": 3376.2, "text": " tensorboard see this is now good because we always delete the logs folder we don't have any uh remnant" }, { "start": 3377.96, "end": 3381.64, "text": " uh old tensorflow logs" }, { "start": 3381.64, "end": 3392.68, "text": " all right come on come on so 10% should be yeah about this about this okay" }, { "start": 3399.56, "end": 3405.7999999999997, "text": " train loss looking good looking good so far" }, { "start": 3405.8, "end": 3409.2400000000002, "text": " looking good looking good so far" }, { "start": 3413.96, "end": 3420.28, "text": " look at these models how large is that how large is the bert" }, { "start": 3420.28, "end": 3425.7200000000003, "text": " base case hugging face pre-trained models" }, { "start": 3425.72, "end": 3435.8799999999997, "text": " pre-trained models bert based on case that's the one we have 12 layers 110 million parameters easy" }, { "start": 3435.8799999999997, "end": 3437.8799999999997, "text": " easy easy" }, { "start": 
3442.4399999999996, "end": 3449.48, "text": " oh no it's too large training loss goes to zero again okay so we've determined that this data set" }, { "start": 3449.48, "end": 3457.88, "text": " very probably isn't entirely shuffled it might just have all the good labels first and all the" }, { "start": 3457.88, "end": 3470.44, "text": " bad labels last and um yeah just to confirm let's confirm this uh right here let's go with 100%" }, { "start": 3471.08, "end": 3478.6, "text": " but let's put an ipython shell down um just before we map the data set so we don't have to go through" }, { "start": 3478.6, "end": 3489.08, "text": " the whole mapping procedure actually that would be here right yes" }, { "start": 3492.6, "end": 3496.36, "text": " can we not map this asynchronously map" }, { "start": 3499.08, "end": 3504.6, "text": " i might be doing something really wrong with this library but i think that's that's how it should go" }, { "start": 3504.6, "end": 3511.96, "text": " so map def" }, { "start": 3515, "end": 3524.44, "text": " map right here we can do batched we could do batched and then i think hugging face has a function to" }, { "start": 3526.52, "end": 3530.92, "text": " encode batched encode batch encode" }, { "start": 3530.92, "end": 3541.32, "text": " encode batch no um let's go to the tokenizer" }, { "start": 3546.52, "end": 3553.32, "text": " build inputs create token type ids get special token mask save where is encode" }, { "start": 3553.32, "end": 3556.36, "text": " code" }, { "start": 3559.96, "end": 3570.76, "text": " right here can we have batch encode build inputs no this might be it batch encode yes there is a" }, { "start": 3570.76, "end": 3578.6000000000004, "text": " batch encode where you have batches of these things so okay what if we do the negative one" }, { "start": 3578.6, "end": 3587.16, "text": " see here's the label zero um i'm pretty sure i'm pretty sure uh" }, { "start": 3589.72, "end": 3597.3199999999997, "text": " batch true let's do that and in our function here we'll say batch encode" }, { "start": 3597.32, "end": 3607.96, "text": " so let's see how fast this is with 100%" }, { "start": 3613, "end": 3614.28, "text": " where tokenizer has no" }, { "start": 3616.76, "end": 3618.44, "text": " but we just had batch encode" }, { "start": 3620.52, "end": 3623.7200000000003, "text": " oh but this might be we have batch encode plus" }, { "start": 3623.72, "end": 3633, "text": " batch encode plus or text pairs okay we need this batch encode plus" }, { "start": 3636.6, "end": 3645.72, "text": " but then that gives us a dictionary right this gives us a dictionary with the fields input ids" }, { "start": 3646.3599999999997, "end": 3647.8799999999997, "text": " right here so" }, { "start": 3647.88, "end": 3657.08, "text": " like this how about that and then we still might want to limit the actual data set um once we have" }, { "start": 3657.08, "end": 3659.08, "text": " once we have" }, { "start": 3661, "end": 3665.1600000000003, "text": " mapped it because we need to train on it as well" }, { "start": 3668.2000000000003, "end": 3672.12, "text": " but i just want to see how fast this batch uh encoding is" }, { "start": 3672.12, "end": 3675.64, "text": " yes" }, { "start": 3676.8399999999997, "end": 3685.24, "text": " okay reasonably fast but it takes like three minutes um yeah so we won't go on here i will put" }, { "start": 3685.24, "end": 3696.2799999999997, "text": " i will put this as is on um i'll put this as is on github and i will put this as is on github" 
}, { "start": 3696.28, "end": 3706.36, "text": " and i hope you can profit from that in any way you want the hugging face site has a tutorial on" }, { "start": 3706.36, "end": 3711.88, "text": " squad where they also use the metrics so they have basically these pre-defined metrics like blur" }, { "start": 3712.6000000000004, "end": 3721.4, "text": " or rouge i think and you can just use them as you use these data sets so" }, { "start": 3721.4, "end": 3727.2400000000002, "text": " it's very very very convenient to work with these things in nlp so nlp has come a long way" }, { "start": 3727.2400000000002, "end": 3735.32, "text": " absolutely invite you to check out the um the transformers and tokenizers and nlp repos" }, { "start": 3735.7200000000003, "end": 3741.88, "text": " and with that that's it for me i think i hope you enjoyed this again leave a comment if you" }, { "start": 3741.88, "end": 3747.8, "text": " see improvements or if i maybe should edit this a bit more or if i should add a little bit more" }, { "start": 3747.8, "end": 3753.4, "text": " see improvements or if i maybe should edit this a bit more i thought the entire process" }, { "start": 3753.4, "end": 3778.52, "text": " of just going through and making mistakes um would be entertaining to some all right bye bye" } ]
CA8JPbJ75tY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
CornerNet: Detecting Objects as Paired Keypoints (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "corner", "top left", "bottom right", "corners", "cv", "computer vision", "vision", "object detection", "detr", "bounding box", "center", "anchor", "pooling", "local", "cnn", "convolutions", "convolutional neural network", "hourglass", "skip connection", "heatmap", "embedding", "push", "pull", "loss", "overlap", "filters", "channels" ]
Many object detectors focus on locating the center of the object they want to find. However, this leaves them with the secondary problem of determining the specifications of the bounding box, leading to undesirable solutions like anchor boxes. This paper directly detects the top left and the bottom right corners of objects independently, along with descriptors that allow the two to be matched later to form a complete bounding box. For this, a new pooling method, called corner pooling, is introduced. OUTLINE: 0:00 - Intro & High-Level Overview 1:40 - Object Detection 2:40 - Pipeline I - Hourglass 4:00 - Heatmap & Embedding Outputs 8:40 - Heatmap Loss 10:55 - Embedding Loss 14:35 - Corner Pooling 20:40 - Experiments Paper: https://arxiv.org/abs/1808.01244 Code: https://github.com/princeton-vl/CornerNet Abstract: We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolution neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2% AP on MS COCO, outperforming all existing one-stage detectors. Authors: Hei Law, Jia Deng Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello there! Today we're looking at CornerNet: Detecting Objects as Paired Keypoints by Hei Law and Jia Deng. On a high level, this paper detects objects in images. Let's say this is an image and here's a chair. The way you detect the chair in this paper is by detecting the bottom-right and the top-left corners of the object's bounding box. So rather than detecting the middle and then specifying height and width, like we saw in the Facebook DETR paper, you detect the two corners. This paper goes through what they have to do to get this to work, including a new pooling method called corner pooling. That's the gist of the paper. As always, if you like content like this, consider subscribing and sharing it out to other people. That would be very helpful. A commenter actually recommended this paper to me after I made a video on Facebook's DETR object detection pipeline. I said something like: since that paper always detects the middle of the object plus the height and width, couldn't you make something that instead detects the top-left corner and the bottom-right corner? That would define a bounding box just as well. In the comments, and thank you very much for that, someone pointed me to this paper. It's a bit older, as you can see, but I still think it's pretty cool. We've already seen the problem; the problem statement isn't hard: it's detecting bounding boxes in images. In these datasets, the difficult parts are that you sometimes have multiple objects, like the two humans here: they can be overlapping, they can be of different sizes, there could be a third, small human in the back, there can be other objects, you don't know how many there are, and so on. So it is a fairly complicated problem. But as I already said, the way that CornerNet does this is by predicting the locations of the top-left and bottom-right corners, thereby defining a bounding box. And it does this independently: there's one branch, basically, that does the top left and one that does the bottom right, and they are then combined and, at the end, sort of refined, I think. The architecture is pretty simple. First, you put the image through a ConvNet, which acts as a feature extractor. This is the basic part; it was even the basic part of Facebook's DETR pipeline: first, you have some sort of ConvNet. In this case they use this hourglass architecture, described down here somewhere. It compresses the image into a smaller resolution, so it takes the image and compresses it down to a very small resolution but with many, many channels, which forces it to learn a global semantic representation, and then it upsamples the image again, downsamples it again, and upsamples it again. At each of these steps there are many convolutional layers, and because that alone would lose too much local information, there are skip connections built in between pairs of layers, where information can travel without computation, basically. So this is a fairly standard architecture; a rough sketch of the idea follows below.
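To make the hourglass idea concrete, here is a toy version in PyTorch. This is my own minimal sketch, not the paper's network: the class name, the depth, and the single convolutions per level are all illustrative assumptions (the paper stacks two much deeper hourglasses built from residual blocks); the point is just the downsample-recurse-upsample shape with a skip connection at every resolution.

```python
import torch
import torch.nn as nn

class Hourglass(nn.Module):
    """Toy hourglass: downsample, recurse, upsample, with a skip connection
    at every resolution. Purely illustrative; assumes the input sides are
    divisible by 2 ** depth so shapes line up on the way back up."""
    def __init__(self, channels: int, depth: int = 3):
        super().__init__()
        self.skip = nn.Conv2d(channels, channels, 3, padding=1)          # skip branch
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)  # halve resolution
        if depth > 1:
            self.inner = Hourglass(channels, depth - 1)                  # recurse deeper
        else:
            self.inner = nn.Conv2d(channels, channels, 3, padding=1)     # bottleneck
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, x):
        skip = self.skip(x)                          # local detail travels around...
        y = self.inner(torch.relu(self.down(x)))    # ...the compressed, global path
        return skip + self.up(y)                     # merge global and local info

# usage sketch
x = torch.randn(1, 64, 64, 64)
out = Hourglass(64)(x)   # same spatial size as the input
```

The skip branch is what lets fine local detail bypass the compressed bottleneck, which is exactly the role described above.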
After this hourglass CNN, you get to the prediction modules. Ultimately, what you want as outputs of these prediction modules is two things. First, you want these heat maps, and the heat maps will simply tell you where the corners are. Their dimensions are the height of the image, H, times the width of the image, W, and then the number of classes, C. So you have one channel for each of the classes you predict, and the heat map will be very high at the location and channel where there is a corner of that class. You have one set of heat maps for the top-left corners and one set for the bottom-right corners. The second thing you want to predict are these embeddings. As I said, there can be multiple instances of the same class in the same image. So in this particular case, even if you predict absolutely correctly, you end up with two top-left corners and two bottom-right corners. Here this isn't particularly hard because there's only one configuration that can possibly be right, but there could be situations with multiple plausible pairings. That's why you need to somehow match these corners: you have to know which ones belong to the same object. They do this with a second output in the heads, called embeddings. These embeddings are simply vectors, and the only thing they're asked to do is to agree (for example, have a large inner product) whenever the two corners belong to the same object, and to disagree when they belong to two different objects. So this orange top-left embedding here would match this green bottom-right corner embedding. You train these embeddings without requiring them to mean anything; you simply train them to output the same thing for the two corners of one object and different things for different objects. After that, when you match the corners, you can simply ask which of the candidates has the most similar embedding, or you can do some Hungarian matching and maximize the total agreement, or something like this. It was quite surprising to me that this works, but it's based on a line of research on associative embeddings that has already established that it can. Ultimately, these two pipelines do not really communicate, right? So I'm going to guess that what they learn is some sort of descriptor of the actual object that's there: if both corners describe the object with their embeddings, the embeddings will agree, and if they describe different objects, they won't. So even though you train with this matching objective, I still think these embeddings pick up something about the visual characteristics of the objects. It would be very interesting to see whether someone could actually parse out what they encode, because otherwise it seems almost impossible for these things to be learnable. Alright, so that's the goal: you want to get these heat maps and these embeddings. The way you do it is fairly simple architecturally: you have two prediction modules, one for top-left and one for bottom-right, and each of them has three outputs: the heat maps, the embeddings, and the offsets. The offsets are simply a way to deal with the fact that you downsample, and by downsampling you have to round pixel locations to grid cells; the offsets compensate for this. But I don't want to focus on those right now.
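Here is a minimal sketch of what one such prediction head could look like. The module name, channel counts, and the two-convolution branches are my own illustrative assumptions, not the paper's exact layers; what does follow the description above is the set of output shapes: C heat-map channels per corner type, a one-dimensional embedding (the paper's embeddings really are one number, as discussed below), and a two-dimensional offset per location.

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """One of the two heads (top-left or bottom-right): heat maps, embeddings,
    offsets. A sketch; the real CornerNet head also applies corner pooling first."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        def branch(out_channels: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels, out_channels, 1),
            )
        self.heatmaps = branch(num_classes)   # one channel per class
        self.embeddings = branch(1)           # 1-D embedding per location
        self.offsets = branch(2)              # x/y correction for downsampling

    def forward(self, features):
        heat = torch.sigmoid(self.heatmaps(features))  # corner scores in [0, 1]
        return heat, self.embeddings(features), self.offsets(features)

# usage sketch
head = PredictionHead(in_channels=256, num_classes=80)
feats = torch.randn(2, 256, 128, 128)
heat, emb, off = head(feats)  # (2, 80, 128, 128), (2, 1, 128, 128), (2, 2, 128, 128)
```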
So, focusing on those two outputs, how do you train this? (We'll look at corner pooling in a second.) Say I have a picture like this: there are exactly two locations in the class "human" where a top-left corner is correct, right here and right here. So I could fill my target matrix with a one at each of those two spots and zeros everywhere else, and train my network to reproduce exactly that as the heat map in the "human" channel. This might work, but it is more profitable to allow for some slack. The argument is: if my prediction lands anywhere within this orange circle around the true corner, the resulting bounding box will still overlap fairly well with the ground-truth bounding box, and the accuracy measures for these benchmarks are based on how much you overlap with the ground-truth boxes. So what they do is put a one at the spot where the actual corner is, and then values like 0.9 around it, flattening out with distance; this is essentially an unnormalized 2D Gaussian bump. The closer you are, the more reward you get, so the network is trained to predict in this general location. Of course, the exact size of this Gaussian has to depend on the size of the box itself, and the paper specifies exactly how these radii are calculated. For understanding, it's just important that they give some slack in how they compute the loss with respect to the heat map; a small sketch of this follows below.
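As a sketch, here is how such a soft target and its loss could look. The Gaussian bump follows the description above; the loss function is the penalty-reduced focal loss from the CornerNet paper (with its default alpha = 2, beta = 4), which the video only summarizes as "slack". The function names and the fixed `sigma` argument are my own simplifications; the paper derives the radius from the object size.

```python
import torch

def gaussian_target(height, width, cx, cy, sigma):
    """Soft target: 1.0 at the ground-truth corner (cx, cy), decaying like a
    Gaussian around it. Sketch only; sigma would come from the box size."""
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def heatmap_loss(pred, target, alpha=2.0, beta=4.0, eps=1e-6):
    """Penalty-reduced focal loss: positions near a corner (high target value)
    are penalized less when the network predicts a corner there anyway."""
    pos = target.eq(1).float()   # exact corner locations
    neg = 1.0 - pos
    pos_term = pos * (1 - pred) ** alpha * torch.log(pred + eps)
    neg_term = neg * (1 - target) ** beta * pred ** alpha * torch.log(1 - pred + eps)
    num_pos = pos.sum().clamp(min=1.0)
    return -(pos_term.sum() + neg_term.sum()) / num_pos

# usage sketch: an 8x8 heat map with one ground-truth corner at (x=5, y=2)
t = gaussian_target(8, 8, cx=5, cy=2, sigma=1.0)
p = torch.rand(8, 8)           # stand-in for a sigmoid-activated prediction
loss = heatmap_loss(p, t)
```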
Now the loss with respect to the embeddings is pretty straightforward. Remember, you get two embeddings per object: the top-left corner embedding, e_tk, and the bottom-right corner embedding, e_bk, and you want them to be close together when they describe the same object. This is handled by the pull and push losses. In the pull loss, you minimize the distances of these two embeddings to their mean, e_k = (e_tk + e_bk) / 2. So if your top-left corner has this embedding and your bottom-right corner has that embedding, you pull both towards their average. Note that the pixel locations are not what matters here; it's only about the embedding vectors. The two vectors must be close together, and you model that not by pulling them directly towards each other but by pulling both towards their mean, which probably saves you some backpropagation trouble: with two moving parts in a loss function that are both optimized towards each other, they might tend to overshoot. In the push loss, you make the mean embedding e_k of one object far away from the mean embedding e_j of any other object in the picture. This is a margin loss, which means it is capped: if the mean embeddings of two different objects are close together, their distance is small and you incur a loss (the margin delta is one in this case), but as they get further apart the loss shrinks, until beyond the margin you give no further bonus. You simply don't want them to be closer together than one; anything farther apart is fine. In their case, the dimension of these embedding vectors is actually one, so they just output a single number per corner, which I find astonishing, but yes, they use one-dimensional embeddings and it works. So that's how you train the embeddings: close together for the two corners of the same object, far apart for different objects.
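A minimal sketch of these two losses, assuming we have already gathered the two one-dimensional corner embeddings for each of the K ground-truth objects in one image. The function name, tensor layout, and the mean reductions are my own choices; the pull-to-the-mean and margin structure follows the description above.

```python
import torch

def pull_push_loss(e_tl, e_br, delta=1.0):
    """Associative-embedding losses for K objects in one image.
    e_tl, e_br: shape (K,), the 1-D embeddings of each object's two corners.
    A sketch; edge cases (e.g. K < 2) are handled only minimally."""
    e_mean = (e_tl + e_br) / 2   # e_k: per-object mean embedding
    # pull: both corners of one object towards their mean
    pull = ((e_tl - e_mean) ** 2 + (e_br - e_mean) ** 2).mean()
    # push: the means of *different* objects should be at least delta apart
    diff = (e_mean.unsqueeze(0) - e_mean.unsqueeze(1)).abs()   # (K, K) pairwise
    K = e_mean.numel()
    offdiag = 1.0 - torch.eye(K)                               # ignore j == k
    push = (offdiag * torch.clamp(delta - diff, min=0)).sum() / max(K * (K - 1), 1)
    return pull, push

# usage sketch with two objects
e_tl = torch.tensor([0.2, 1.5])   # top-left embeddings
e_br = torch.tensor([0.3, 1.4])   # bottom-right embeddings
pull, push = pull_push_loss(e_tl, e_br)
```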
Alright, so we can now predict where the corners are, and we can match them. One central part of this is corner pooling, so why is it necessary? What's the problem with this sort of approach? They have an example right here. The problem when you want to predict a corner of an object is that a CNN is good at local neighborhood information. Let's go with the moon: if I have to predict the location of the moon itself, then with this receptive field I'm like, yes, it's in here, and with that receptive field, yes, it's in here. But if I zoom in on the corner of the bounding box rather than on the moon itself, at some point I'm like: wait, where is it? In that receptive field, at that resolution, I have no clue whether the moon is even close, because at the location of the actual bounding-box corner there is no local evidence of the object. Objects are usually not squares; they're round like the moon, or shaped like the plane here, so the corner locations carry no local information about where the object is. Corner pooling is a method to propagate that information along the axes. In corner pooling, a location that has to predict a top-left corner is allowed to look not only at itself but to extend its field of view all the way to the right and all the way down: you max-pool everything from there towards the corner detector. The corner detector can then fire whenever, somewhere in this horizontal band, there is the top of an object, like the top of the moon ("that's probably the right height for a corner"), and it combines this with the information from the vertical band, where it sees the side of the moon ("that's probably the right horizontal position"), so there is probably a corner right here. A location further up, in contrast, would get almost the same signal from its row to the right, so it would also detect the top of the moon, but it would not get the matching signal from below. It therefore says: even though to my right I see the top of an object, I don't see the side of an object below me, so I'm not going to predict a corner here. This version of corner pooling is for the top-left head; equivalently, the bottom-right head always max-pools upwards and to the left of itself. And that's exactly what you see here: you propagate the signal leftwards and upwards, then add the two maps, and that gives you the output feature. You can compute this quite efficiently by taking a cumulative maximum along each axis and then simply adding the two arrays; a sketch of that follows below.
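Here is a minimal sketch of the top-left variant, assuming PyTorch's `torch.cummax` for the scan. Flipping before and after the cumulative max gives each position the maximum over everything to its right (respectively, below it); the bottom-right variant would scan in the opposite directions. The function name is mine.

```python
import torch

def top_left_corner_pool(x):
    """Corner pooling for the top-left head: at each location, take the max
    over everything to the right in the same row plus the max over everything
    below in the same column, then sum the two maps. x: (B, C, H, W)."""
    # max over everything to the right of (and including) each position
    right_max = torch.flip(torch.cummax(torch.flip(x, dims=[3]), dim=3).values, dims=[3])
    # max over everything below (and including) each position
    down_max = torch.flip(torch.cummax(torch.flip(x, dims=[2]), dim=2).values, dims=[2])
    return right_max + down_max

# usage sketch
x = torch.randn(1, 256, 64, 64)
pooled = top_left_corner_pool(x)   # same shape; feed this to the top-left head
```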
You put this corner pooling right before you predict the different outputs, the heat maps and the embeddings, which means the hourglass network itself is not affected; only the predictors of heat maps and embeddings receive the information aggregated along these directions. I think that's a pretty neat way of solving this. In the prediction module they also add a skip connection around the corner pooling, because if you only aggregate along entire rows and columns, you can sometimes get confused. Trouble comes, for example, when multiple different objects share the same top edge, and there is also a person right here: the detector gets one signal that there is the top of an object and another signal that there is the left side of a person, and it might predict a corner where there is none. So sometimes it is still important to have local information, and that's exactly what this skip connection is supposed to provide; I guess the ambiguity up here would also be resolved by the different embeddings, but still. You add the two, put another bunch of convolutional layers on top, and you get your predictions. And that's it. You mix all the losses: the detection loss from the heat maps, the pull and push losses for the embeddings, and the offset loss that is trained to compensate for the downsampling errors. In the experiments they ablate the various components and show that they're better than other one-stage detectors. Apparently there are one-stage detectors, where you have a single pass through a neural network, and two-stage detectors, where you have two passes through different networks, and CornerNet competes in the one-stage category, so to say. They show significant improvements due to this corner pooling, which is pretty cool to see, because it makes sense, it matches how you would like to think about the problem, and seeing that it actually helps is neat. They also investigate how large to make these Gaussians and so on. And here are some qualitative examples. You can see that without corner pooling, the top, left and right here are detected correctly, but the network apparently thinks the object extends further down, so it doesn't do a good job on the bottom edge. That's because that position has no access to detailed features far away: it would have to use long-range information, and it can't really look closely at the features where the bottom edge of the object actually is. When it scans up and down the side, it can only look at very coarse features, because the information has to be transmitted through higher layers of the CNN, and a higher layer has a larger receptive field, which means a lower resolution, so it can't inspect that border in fine detail and misses it. Same right here. So there are a number of failure cases that they can now solve with this method, compared to not using corner pooling. They also show some cases where their method fails: for example, here it matches the top-left and bottom-right corners of two different objects, because their embeddings were close enough. And that's what I mean: I'm wondering what these embeddings actually learn, because they are generated independently, so I'm not entirely sure. It's also not exactly what I had in mind when I formulated this idea in the last video, though to be honest I'm not sure what I had in mind myself. In my mind, it seemed you should be able to train a network that, for any given location belonging to a particular object, predicts something like how many pixels lie between it and the object's bottom-right, maybe normalized by the object's area, and then you could use the differences between pairs of points as scores for bounding boxes. You would have one network predict everything towards the bottom right and then use the differences, and transformers would be very good at that, because they can attend between each pair of points. I'm not entirely sure; this might just be nonsense. There are some more examples here, and it appears to work really nicely, though of course the qualitative examples always look nice; they also demonstrate it quantitatively. Alright, I found this paper all in all pretty cool and pretty neat. It's a simple idea, it's executed well, and I don't have the feeling that there are too many tricks in here; they show convincingly that the improvement is due to their corner pooling method. So if you like this paper, make sure to check it out, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 6.08, "text": " Hello there! Today we're looking at corner net detecting objects as paired key points by" }, { "start": 6.08, "end": 15.280000000000001, "text": " Hylah and Jia Ding. So on a high level this paper detects objects in images. Let's say this is an" }, { "start": 15.280000000000001, "end": 22.8, "text": " image and here's a chair. You know you have your chair. And the way you detect the chair for this" }, { "start": 22.8, "end": 29.64, "text": " paper is going to be you detect the bottom right and the top left corners of the bounding box of" }, { "start": 29.64, "end": 35.72, "text": " the image. So rather than detecting the middle and then specifying height and width, like we saw in" }, { "start": 35.72, "end": 41.4, "text": " the Facebook DETR paper, you detect the two corners. And this paper goes through what they" }, { "start": 41.4, "end": 48.519999999999996, "text": " have to do to get this to work, including a new pooling method called corner pooling. So that's" }, { "start": 48.519999999999996, "end": 55.120000000000005, "text": " the gist of the paper. As always, if you like content like this, consider subscribing and" }, { "start": 55.12, "end": 63.36, "text": " sharing it out to other people. That would be very helpful. So a commenter actually recommended" }, { "start": 63.36, "end": 70.67999999999999, "text": " this paper to me after I made a video on Facebook's DETR object detection pipeline. I said something" }, { "start": 70.67999999999999, "end": 78, "text": " like, okay, since that paper always would detect the middle of the object and the height and width," }, { "start": 78, "end": 84.03999999999999, "text": " couldn't you make something that also that detects the corners here and the corner here and then that" }, { "start": 84.04, "end": 89.92, "text": " would define a bounding box just as well. And in the comments, and thank you very much for that," }, { "start": 89.92, "end": 98.64, "text": " I, someone made that pointed me to this paper, it's a bit older, as you can see, but it's, I still" }, { "start": 98.64, "end": 105.96000000000001, "text": " think it's, it's pretty cool. So we've already seen the the problem, like the problem isn't hard. And" }, { "start": 105.96000000000001, "end": 113.88000000000001, "text": " it's detecting bounding boxes in images. And in these data set, the problems, the difficult parts" }, { "start": 113.88, "end": 121.36, "text": " are that you sometimes have multiple objects, like here, if two humans, they can be overlapping," }, { "start": 121.36, "end": 127, "text": " they can be of different sizes, that could be like a third human, like small back here, there can be" }, { "start": 127, "end": 132.51999999999998, "text": " other objects, you don't know how many there are, and so on. So it is it is a fairly complicated" }, { "start": 132.51999999999998, "end": 140.16, "text": " problem. But as I already said, the way that corner net here does this is by predicting the" }, { "start": 140.16, "end": 146.32, "text": " locations of the top left and bottom right corner, thereby defining a bounding box. And it does this" }, { "start": 146.32, "end": 154.51999999999998, "text": " independently. So there's one network basically, that does the top left and one that does the bottom" }, { "start": 154.51999999999998, "end": 163.8, "text": " right, and they are then combined. And at the end, they're sort of refined, I think. So the" }, { "start": 163.8, "end": 169.64, "text": " architecture is pretty simple. 
First, you put the image through a con net, which is like a feature" }, { "start": 169.64, "end": 178.88, "text": " extractor. So this is the basic part. It was even the basic part of Facebook's DETR pipeline. First," }, { "start": 178.88, "end": 185.11999999999998, "text": " you have some sort of con net. Now they in this case use in this hourglass architecture that" }, { "start": 185.11999999999998, "end": 195.88, "text": " described down, down here somewhere. And this basically compresses the image into a smaller" }, { "start": 195.88, "end": 200.68, "text": " resolution. So I would take that image and compress it down to very small resolution," }, { "start": 200.68, "end": 206.88, "text": " but many, many channels. So it's sort of forced to learn a global semantic representation," }, { "start": 206.88, "end": 211.35999999999999, "text": " and then it up samples the image again, and down samples it again, and it up samples it again" }, { "start": 211.35999999999999, "end": 218.44, "text": " through. So at each of these steps, there are many convolutional layers right here. And because that" }, { "start": 218.44, "end": 223.44, "text": " would lose you too much space, like local information, there are skip connections built" }, { "start": 223.44, "end": 230.44, "text": " in between pairs of layers where information can travel without computation, basically. So this is" }, { "start": 230.44, "end": 237.8, "text": " a fairly standard architecture right here. But then after this hourglass CNN, you get to these" }, { "start": 237.8, "end": 245.64, "text": " prediction modules. Now let me switch back to the top drawing. Ultimately, what you want as an output" }, { "start": 245.64, "end": 252.76, "text": " of these prediction modules is two things. So first of all, you want these heat maps, sorry about" }, { "start": 252.76, "end": 259.64, "text": " that. And these heat maps will simply tell you where are the corners. Okay. Now the heat maps," }, { "start": 259.64, "end": 270.71999999999997, "text": " their dimensions are the height of the image. Sorry, the height here, h, come on, and the width" }, { "start": 270.71999999999997, "end": 279.15999999999997, "text": " of the image. And this here would be the number of classes C. Okay, so you have one channel for each" }, { "start": 279.16, "end": 287, "text": " of the classes that you predict. And the heat map will basically be very high at the location and" }, { "start": 287, "end": 293.64000000000004, "text": " channel where there is a corner of that. So you see have one heat map for the top left corners," }, { "start": 293.64000000000004, "end": 299.96000000000004, "text": " and one heat map for the bottom right corners. And then also what you want to predict are these" }, { "start": 299.96000000000004, "end": 308.08000000000004, "text": " embeddings. Now, simply because you have, you know, I said there can be multiple instances of the same" }, { "start": 308.08, "end": 315.88, "text": " class in the in the same image. So now you have in this case, particular case, you're gonna, even if" }, { "start": 315.88, "end": 323.12, "text": " you predict absolutely correctly, you predict two top left corners and two bottom right corners. Now" }, { "start": 323.12, "end": 329.08, "text": " this isn't particularly hard because there's only one configuration that can possibly be but there" }, { "start": 329.08, "end": 334.56, "text": " could be situations where there are multiple. 
And that's why you need to somehow match these" }, { "start": 334.56, "end": 340.92, "text": " corners, you have to match, you have to know which ones of those are the same objects. And they do" }, { "start": 340.92, "end": 348.56, "text": " this by a second output in their heads called this embeddings. These embeddings, they're simply" }, { "start": 348.56, "end": 356.96, "text": " vectors. And the only thing that they're asked to do is they're asked to have a large inner product," }, { "start": 356.96, "end": 364.91999999999996, "text": " whenever they belong to the same object, and they are asked to have a small inner product, sorry," }, { "start": 364.91999999999996, "end": 373.79999999999995, "text": " when they're when they belong to the different two different objects. So this orange thing here would" }, { "start": 373.79999999999995, "end": 378.76, "text": " have a large inner product with this green bottom right corner embedding. Okay, so you train these" }, { "start": 378.76, "end": 384.32, "text": " embeddings, they don't need to mean anything. You simply train them to predict the same thing for" }, { "start": 384.32, "end": 392.15999999999997, "text": " the same objects and different things for different objects. So after that, when you match the corners," }, { "start": 392.15999999999997, "end": 399.64, "text": " you can simply go over you can say, ah, this which one of these two right here has the larger inner" }, { "start": 399.64, "end": 404.84, "text": " product, or you can do like some Hungarian matching and maximize the total inner product" }, { "start": 404.84, "end": 411.92, "text": " or something like this. This was quite surprising to me that it works, but it's based on a line of" }, { "start": 411.92, "end": 419.08000000000004, "text": " research that is already has already established that this can work. Because ultimately, these" }, { "start": 419.08000000000004, "end": 425.56, "text": " things, these two pipelines do not really communicate, right. So I'm going to guess what" }, { "start": 425.56, "end": 434.16, "text": " they learn is sort of a sort of a descriptor of the actual object that's there. Because if both" }, { "start": 434.16, "end": 440.32, "text": " describe the objects that that's there, with their embeddings, their embeddings are going to have a" }, { "start": 440.32, "end": 444.96, "text": " large inner product. And if they describe different objects, then their embeddings are not going to" }, { "start": 444.96, "end": 450.92, "text": " match, right. So even though you train that this objective, I still think that these embeddings" }, { "start": 450.92, "end": 457.08, "text": " would pick up something about the object, something about the visual characteristics of the objects" }, { "start": 457.08, "end": 463.88, "text": " will be very interesting to see whether someone could actually parse out what they what they do," }, { "start": 463.88, "end": 475.04, "text": " because it's almost impossible otherwise for these things to be learnable. Alright, so that's the" }, { "start": 475.04, "end": 479.71999999999997, "text": " that's the goal right here, you want to get these heat maps in these embeddings. And the way you do" }, { "start": 479.71999999999997, "end": 485.6, "text": " it is fairly easy architecturally, you have these two prediction modules, one for top left and one" }, { "start": 485.6, "end": 491.56, "text": " for bottom right. And each of them have three outputs, the heat maps, the embeddings. 
And here" }, { "start": 491.56, "end": 498.12, "text": " the offsets are simply a way for you to deal with the fact that you downsample and by downsampling," }, { "start": 498.12, "end": 505.36, "text": " you have to round certain pixels to certain locations. And then the offsets, they they" }, { "start": 505.36, "end": 512.5, "text": " compensate for this. But I don't want to focus on these right now. So you simply have these two" }, { "start": 512.5, "end": 520.84, "text": " outputs right here. Now we'll look at corner pooling in a second. But how do you train this? So you" }, { "start": 520.84, "end": 528.44, "text": " can now say, okay, if I have a picture like this, there there is exactly two locations in the class" }, { "start": 528.44, "end": 537.0400000000001, "text": " human, where the the top left corner is correct. And that's right here. And that's right here. Okay," }, { "start": 537.0400000000001, "end": 544.52, "text": " so two locations. So I fill I make my matrix, my target matrix with a one here and the one here," }, { "start": 544.52, "end": 552.76, "text": " and zeros everywhere else. Alright 0000000000. And I train my network to give me this particular" }, { "start": 552.76, "end": 561.64, "text": " thing as an output for these heat for this heat map in the channel human. This this might work," }, { "start": 561.64, "end": 570.96, "text": " but it is more profitable, let's say, if you allow for some slack. So what they say is, you know," }, { "start": 570.96, "end": 577.72, "text": " since if I'm anywhere within this orange circle right here with my prediction, my resulting bounding" }, { "start": 577.72, "end": 583.6, "text": " box is still going to overlap fairly well with the ground truth bounding box. And the accuracy" }, { "start": 583.6, "end": 589.2800000000001, "text": " measures for these things, I think, are based on how much you overlap with the ground truth bounding" }, { "start": 589.28, "end": 600.64, "text": " boxes. So what they do basically is they, they give, they put a one in the spot where the actual" }, { "start": 600.64, "end": 609.4, "text": " corner is, and then they put like a 0.9 around it 0.9 0.9, and so on, and they kind of flatten out." }, { "start": 609.4, "end": 616.1999999999999, "text": " So this is sort of a Gaussian right here in multiple dimensions. If that drawing makes any" }, { "start": 616.2, "end": 625.08, "text": " sense. And they say, well, you the closer you are, basically, the more reward you get. So you train" }, { "start": 625.08, "end": 633, "text": " it to predict in this general location. Now, of course, the size, exact size of this Gaussian has" }, { "start": 633, "end": 641.72, "text": " to be dependent on the actual size of the box itself. And they have, they, they regard that and" }, { "start": 641.72, "end": 647.64, "text": " say exactly how they calculate these Gaussians. But for the understanding, it's just important" }, { "start": 647.64, "end": 654.0400000000001, "text": " that they do give some slack here in how they compute the loss with respect to the heat map." }, { "start": 654.0400000000001, "end": 665.12, "text": " Now the loss with respect to the to the embeddings is pretty simple, pretty straightforward. So" }, { "start": 665.12, "end": 672.48, "text": " remember these embeddings, you have two embeddings per, you have the top left embedding, that's the" }, { "start": 672.48, "end": 680.04, "text": " ETK, the top embed, the top corner embedding, and you have the bottom right embedding. 
And what you" }, { "start": 680.04, "end": 688.12, "text": " want is for them to be close together when they describe the same object, right? So this is this" }, { "start": 688.12, "end": 694.98, "text": " push and pull losses. So in the pull loss, what you want to do is you want to minimize the distances" }, { "start": 694.98, "end": 702.64, "text": " of these two things to this thing right here. And this thing is simply, so EK is simply the the mean," }, { "start": 702.64, "end": 711.5600000000001, "text": " so it's ETK plus EBK divided by two, that's simply if your top left corner is here and your bottom" }, { "start": 711.5600000000001, "end": 716.48, "text": " right corner is here, and they have embeddings, this one has this embedding, and this one has" }, { "start": 716.48, "end": 724.44, "text": " that embedding, then the mean of the two embeddings, which I guess is whatever this right here. Yeah," }, { "start": 724.44, "end": 730.12, "text": " that's about the mean. So the location is not important, actually. So it's about the embedding" }, { "start": 730.12, "end": 735.6800000000001, "text": " vectors. It's not about where the corners are. The two embedding vectors must be close together," }, { "start": 735.6800000000001, "end": 741.72, "text": " and you model that not directly by making them close to each other, but by making both close to" }, { "start": 741.72, "end": 750.08, "text": " their mean. And that probably saves you some back propagation troubles where you, if you have two" }, { "start": 750.08, "end": 755.48, "text": " moving parts in a loss function, and you optimize both, then you tend to, so you have two things," }, { "start": 755.48, "end": 762.08, "text": " you want to bring them closer together, they might tend to overshoot or something like this. Okay," }, { "start": 762.08, "end": 768.8000000000001, "text": " so this brings those two closer together. And in the push loss, what you want to do is you want to" }, { "start": 768.8000000000001, "end": 778.88, "text": " simply make the mean between the two, remember this is this is the mean, this the mean embedding" }, { "start": 778.88, "end": 787.76, "text": " of this object far away from the mean embedding of any other object in the picture. Okay, so this" }, { "start": 787.76, "end": 796.04, "text": " here is a margin loss, which means that you cap it at some point. So if they're close together," }, { "start": 796.04, "end": 804.72, "text": " if the embeddings of two different objects are close together, you can see here this" }, { "start": 804.72, "end": 812.48, "text": " quantity will be small, and therefore it will lead to this delta, you give a loss of one," }, { "start": 812.48, "end": 818.96, "text": " the delta here is one in this case. But as they get further apart, you're more and more happy," }, { "start": 818.96, "end": 827.96, "text": " and you reduce your loss until you don't give, you don't give any any bonus for them being super far" }, { "start": 827.96, "end": 835.88, "text": " apart, you don't simply don't want them to be closer together than one. All right. In their case," }, { "start": 835.88, "end": 841.6800000000001, "text": " I think they have a the dimension of these vectors is actually one, which basically means they just" }, { "start": 841.6800000000001, "end": 848.9200000000001, "text": " output the single number, which I find astonishing that that works. Yes, they use embeddings of one" }, { "start": 848.92, "end": 861.8, "text": " dimension. So they just use numbers. 
Astonishing that it works, but okay. So that's how you train" }, { "start": 861.8, "end": 868.7199999999999, "text": " the the embedding output embeddings close together of the same objects of the two corners," }, { "start": 868.7199999999999, "end": 875.8, "text": " and embeddings far apart for different objects. Alright, so we can now predict where the corners" }, { "start": 875.8, "end": 882.8399999999999, "text": " are, and we can match them. Now, one center part of this is the corner pooling. And why is the" }, { "start": 882.8399999999999, "end": 889.7199999999999, "text": " corner pooling necessary? So what's the problem with this sort of approach? The problem, and they" }, { "start": 889.7199999999999, "end": 898.88, "text": " have an example right here, the problem when you want to predict a corner of an object is that in" }, { "start": 898.88, "end": 906.24, "text": " a CNN, what CNN is good at is like local neighborhood information, right? So if you have to" }, { "start": 906.24, "end": 910.64, "text": " predict, let's go for the moon, actually, here, let's predict the location of the moon, if I have" }, { "start": 910.64, "end": 914.96, "text": " to predict the location of the moon, and I'm a CNN, and I have this receptive field, I'm like," }, { "start": 914.96, "end": 920.04, "text": " Oh, yes, it's like in here. And then I have this receptive field. And I'm like, yes, it's in here." }, { "start": 920.04, "end": 925.2, "text": " And then I zoom in on the corner, not on the moon itself, but on the corner where I need to predict," }, { "start": 925.2, "end": 933.2, "text": " right? At some point, I like I'm sort of, I'm like, wait, wait, where is it? Because in this" }, { "start": 933.2, "end": 941.6, "text": " particular receptive field of this resolution, I have I have no clue if the moon is close, right?" }, { "start": 941.6, "end": 948.72, "text": " So at the location where the actual bounding box is, I have no local information of the object," }, { "start": 948.72, "end": 957.36, "text": " because usually objects are not squares, they're sort of round like the moon, or like here, the" }, { "start": 957.36, "end": 963.4, "text": " plane, these corners, they have no local information about where the plane is. And corner pooling is a" }, { "start": 963.4, "end": 971.36, "text": " method to propagate that information along the axis. So what corner in corner pooling, what you" }, { "start": 971.36, "end": 980, "text": " would allow the location here in the CNN to do is to not only look at its itself, so its own location," }, { "start": 980, "end": 989.5600000000001, "text": " but actually to extend its field of view over to the right, and down to the bottom, it's asked to" }, { "start": 989.5600000000001, "end": 998.84, "text": " predict a top left corner. So what you do is you max pool everything from here to this corner" }, { "start": 998.84, "end": 1006.6800000000001, "text": " detector. So the corner detector will basically be able to detect whenever in either this band" }, { "start": 1006.6800000000001, "end": 1014.8000000000001, "text": " right here. So whenever in this band right here, there is the top, like the top of an object, like" }, { "start": 1014.8000000000001, "end": 1021.8000000000001, "text": " the top of the moon here, this corner detector can say, ah, that's probably the right height right" }, { "start": 1021.8, "end": 1030.52, "text": " here for a corner. 
And it combines this with the information of this side here, where it also says," }, { "start": 1030.52, "end": 1036.96, "text": " oh, there is the side of the moon, that's probably the correct, you know, up down. So there's probably" }, { "start": 1036.96, "end": 1045.48, "text": " a corner right here. Okay, whereas a location right here would get the same signal from the right," }, { "start": 1045.48, "end": 1051.88, "text": " or like almost the same signal, plus this signal right here. But in essence, it would also detect" }, { "start": 1051.88, "end": 1056.92, "text": " the top of the moon, but it would not get the same signal from down here. And therefore it says," }, { "start": 1056.92, "end": 1064.2, "text": " ah, even though to the right, I see some the top of an object, I don't see the left of an object to" }, { "start": 1064.2, "end": 1070.52, "text": " my bottom. So I'm not going to predict a corner right here. Alright, so this corner pooling goes" }, { "start": 1070.52, "end": 1076.04, "text": " for the top left, and of course, equivalently goes for the bottom right, that can always max pools to" }, { "start": 1076.04, "end": 1083.24, "text": " up and to the left of itself. And that's exactly what you see here. So in this corner pooling," }, { "start": 1083.24, "end": 1091.28, "text": " what you can do is you can propagate signal to the left and to up, and then you add the two" }, { "start": 1091.28, "end": 1096.68, "text": " informations, and that will give you your output feature. And you can calculate this actually" }, { "start": 1096.68, "end": 1103.0800000000002, "text": " fairly efficiently by doing like, like you do a cumulative sum, you do like a cumulative maximum" }, { "start": 1103.0800000000002, "end": 1110.96, "text": " across the different axes, and then you simply add two arrays. And that's it. So you simply put" }, { "start": 1110.96, "end": 1118.4, "text": " the corner pooling before you predict the these different outputs right here, the heat maps and" }, { "start": 1118.4, "end": 1125.04, "text": " the embeddings, which means that this hourglass network is not affected by this. Just the" }, { "start": 1125.04, "end": 1131.96, "text": " predictors of heat maps and embeddings, they then get the information from this hourglass network" }, { "start": 1131.96, "end": 1139.6, "text": " into these into these directions. Okay, I think that's a pretty, pretty neat method of solving" }, { "start": 1139.6, "end": 1146.24, "text": " this. And here they show how you can calculate this. And then the corner pooling is right here," }, { "start": 1146.24, "end": 1152.84, "text": " they do add a skip connection here. Because sometimes, if you just aggregate this information," }, { "start": 1152.84, "end": 1161.32, "text": " you might, you might actually get confused because so the trouble of course comes when there are" }, { "start": 1161.32, "end": 1171.24, "text": " multiple, like different objects that have, you know, the same top. And then there's also a person" }, { "start": 1171.24, "end": 1179.3999999999999, "text": " right here that so it gets like a signal, it gets another signal that there is the left side of a" }, { "start": 1179.4, "end": 1187.8400000000001, "text": " person right here. Or maybe, you know, not like this. So it will it will predict like a corner," }, { "start": 1187.8400000000001, "end": 1195.6000000000001, "text": " maybe here, where there is none. 
So it's, it's, sometimes it is important to have local information" }, { "start": 1195.6000000000001, "end": 1201.0800000000002, "text": " still. And that's exactly what this skip connection is supposed to address. I guess the situation up" }, { "start": 1201.0800000000002, "end": 1208.5600000000002, "text": " here would be resolved by the different embeddings. But still, so you have that you add and you put" }, { "start": 1208.56, "end": 1214.08, "text": " another bunch of convolutional layers on top of that, and then you'll get your predictions. And" }, { "start": 1214.08, "end": 1221.28, "text": " that's it. You mix all the losses. So there is a detection loss from the embeddings, there's the" }, { "start": 1221.28, "end": 1228.32, "text": " sorry, from the heat maps, there is the pull and push losses for the embeddings. And there's this" }, { "start": 1228.32, "end": 1237.32, "text": " offset loss that you train to compensate for the down the down sampling errors. And that's it. And" }, { "start": 1237.32, "end": 1244.52, "text": " they ablate the various things here, basically, they show that they're better than other one shot" }, { "start": 1244.52, "end": 1251.36, "text": " or one stage predictors. So apparently, there's one stage predictors where you have a single pass" }, { "start": 1251.36, "end": 1256.2, "text": " through a neural network. And there's two stage predictors where you have multiple or two passes" }, { "start": 1256.2, "end": 1262.4399999999998, "text": " through different neural networks. And they compete in the in the one, one stage neural network" }, { "start": 1262.44, "end": 1270.1200000000001, "text": " category, if so to say. And they show that they get significant improvements with and due to this" }, { "start": 1270.1200000000001, "end": 1277, "text": " corner pooling, which is pretty cool to see, because it makes sense. It sort of makes sense," }, { "start": 1277, "end": 1288.04, "text": " how you would like to think about it like this. And to see that it helps is pretty neat. Yeah," }, { "start": 1288.04, "end": 1295.3999999999999, "text": " they also investigate how large they have to make these these Gaussians and so on. And these are" }, { "start": 1295.3999999999999, "end": 1303.1599999999999, "text": " some qualitative examples, you can see that without the corner pooling, what you'll get is that so the" }, { "start": 1303.1599999999999, "end": 1309.08, "text": " top here and the left and the right are correct are detected correctly. But you can see that" }, { "start": 1309.08, "end": 1315.48, "text": " probably the network thinks that there is an extension of the object right here, and therefore," }, { "start": 1315.48, "end": 1327.4, "text": " doesn't do doesn't do a good job. Because this this position right here, it has no access to sort" }, { "start": 1327.4, "end": 1334.92, "text": " of it has to use like a long range access, it can't it can't really look in detail at the features" }, { "start": 1334.92, "end": 1340.76, "text": " here or here. So when it scans up and down the side, where the bottom corner where the bottom" }, { "start": 1340.76, "end": 1346.52, "text": " break is, it can it can only look at very coarse features, because it has to basically transmit" }, { "start": 1346.52, "end": 1351.8, "text": " information in the CNN of a higher layer and the higher layer has a higher receptive field," }, { "start": 1351.8, "end": 1358.2, "text": " which means it has a lower resolution. 
So it can't really go and look very in very detailed fashion" }, { "start": 1358.2, "end": 1367.24, "text": " at this border right here. So it misses it. Okay. Same right here, as you can see, so there are a" }, { "start": 1367.24, "end": 1374.6, "text": " number of failure cases that they can now solve using this method compared to if they didn't use" }, { "start": 1374.6, "end": 1382.76, "text": " the corner pooling. They show some also some times where their method fails, for example, here," }, { "start": 1382.76, "end": 1391.88, "text": " it matches the top, top left and bottom right corners of two different objects. Because they" }, { "start": 1391.88, "end": 1399.48, "text": " their embeddings were close enough. And yeah, that's what I'm saying. I'm I'm wondering what" }, { "start": 1399.48, "end": 1407.3200000000002, "text": " these embeddings actually learn, because they are generated independently. So not entirely sure." }, { "start": 1408.5200000000002, "end": 1415.0800000000002, "text": " It's also not exactly what I had in mind when I formulated this idea in the last video. But" }, { "start": 1415.08, "end": 1421.6399999999999, "text": " I'm actually not sure what I had in mind myself, to be honest. But in my mind, it seemed to be like" }, { "start": 1421.6399999999999, "end": 1428.12, "text": " you should be able to train a network. If there is an object right here, you could train a network" }, { "start": 1428.12, "end": 1436.36, "text": " to predict for any given location, let's say how many pixels to its bottom right, or maybe you want" }, { "start": 1436.36, "end": 1442.6, "text": " to normalize by the area that's there, are part of a particular object. And then you could use," }, { "start": 1442.6, "end": 1448.52, "text": " you could predict each pixel and use like the differences between the differences between the" }, { "start": 1448.52, "end": 1457.8799999999999, "text": " points as as scores for bounding boxes. I don't know if you see what I mean. You could basically" }, { "start": 1457.8799999999999, "end": 1465.3999999999999, "text": " tell the you have you'd have one network predict everything to the to the bottom right, and then" }, { "start": 1465.3999999999999, "end": 1471.32, "text": " you'd use the differences. And the transformers would be very good at that because they can" }, { "start": 1471.32, "end": 1477.32, "text": " sort of have this attention between each pairs of points and so on. I'm not entirely sure," }, { "start": 1477.32, "end": 1484.9199999999998, "text": " but this might just be crap. Yeah, here's some more examples. This appears to work really nicely." }, { "start": 1484.9199999999998, "end": 1491.1599999999999, "text": " But of course, in the qualitative, qualitative examples, it always works nicely, but they also" }, { "start": 1491.1599999999999, "end": 1497.24, "text": " demonstrated. All right, I found this paper all in all pretty cool, pretty neat. It's a simple idea." }, { "start": 1497.24, "end": 1502.52, "text": " It's executed well. I don't have the feeling that there are like too many tricks in here." }, { "start": 1502.52, "end": 1509.88, "text": " And they show really that the improvement seems to be due to their their corner pooling method." }, { "start": 1509.88, "end": 1517.72, "text": " And that's pretty neat. So if you like this paper, make sure to check it out. And I'll see you next" }, { "start": 1517.72, "end": 1528.04, "text": " time. Bye bye." } ]
x6T1zMSE4Ts
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
NVAE: A Deep Hierarchical Variational Autoencoder (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gan", "vae", "kl", "elbo", "autoencoder", "variational", "latent", "sampling", "hierarchical", "scales", "faces", "mnist", "cifar10", "swish", "batch norm", "generative", "nvidia", "mixed precision", "memory", "deep", "layers", "depthwise convolutions", "cnn", "convolutional", "generation", "generative model" ]
VAEs have been traditionally hard to train at high resolutions and unstable when going deep with many layers. In addition, VAE samples are often more blurry and less crisp than those from GANs. This paper details all the engineering choices necessary to successfully train a deep hierarchical VAE that exhibits global consistency and astounding sharpness at high resolutions. OUTLINE: 0:00 - Intro & Overview 1:55 - Variational Autoencoders 8:25 - Hierarchical VAE Decoder 12:45 - Output Samples 15:00 - Hierarchical VAE Encoder 17:20 - Engineering Decisions 22:10 - KL from Deltas 26:40 - Experimental Results 28:40 - Appendix 33:00 - Conclusion Paper: https://arxiv.org/abs/2007.03898 Abstract: Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ as shown in Fig. 1. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels. Authors: Arash Vahdat, Jan Kautz Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
Alright, hi there. Have a look at these faces right here. You're probably used by now to seeing computer-generated faces of really high quality, but probably you're used to seeing these faces come from a generative adversarial network. However, these faces right here are from a variational autoencoder. Now, variational autoencoders are fundamentally different from GANs, and traditionally they've been a bit harder to scale up to high-resolution images and to very detailed, sharp output. This paper right here attempts to build such a VAE for these high-resolution, large datasets, and it basically details everything you need to do to get a VAE like this. So the paper is called NVAE, or NVAE, I don't know how to pronounce that, a deep hierarchical variational autoencoder, by Arash Vahdat and Jan Kautz of NVIDIA. As I said, on a high level, this paper is about how to build a deep hierarchical variational autoencoder, which is sort of a combination of already existing techniques combined in a clever way, plus a list of all the engineering efforts you need to actually make this work. And there is not one thing where you can say, ah, this is the thing that really made it work; rather, each of these techniques is going to stack and stack and stack until they reach a model that surpasses the state of the art on these datasets. They are also able to apply this to an entirely new high-quality image dataset. So these, again, are some of the samples from that model, and as you can see, they look very crisp, very sharp, and also very, let's say, real. Yeah. So, really briefly: variational autoencoders. This paper attempts to build a variational autoencoder; what is it? For that, you need to start with what an autoencoder is. In a traditional autoencoder, let's say you have an image dataset; you take an image, and you train a model that consists of an encoder that maps your image to a lower-dimensional, compressed space, which you call the latent space Z, and then you train a decoder to, again, go from the latent space back to the image space. And then you train those two models such that the distance between the output and the input is minimized. Okay, this is called the reconstruction loss, and you train the encoder and the decoder to minimize that reconstruction loss, and thereby you hope that this latent space will learn something about the data. Now, an advanced, probabilistic version of this is the variational autoencoder, where we say: we don't want the encoder to just output the latent code directly, but we interpret this in a probabilistic fashion. So the encoder is now a probabilistic function that outputs a distribution over latent codes. We take our same image, and what we want, in a basically Bayesian way of thinking about it, is a distribution over latent codes corresponding to that image. So our encoder here is not going to output Z, but it's going to output mu and sigma. It would be ideal if it could output an entire distribution, but we're going to make the assumption that it is a normal distribution, and the encoder outputs the mean and the standard deviation of that normal distribution. And now, how are you going to feed this into the decoder? If you just feed mu, you are back to the normal autoencoder, so that doesn't work.
What you do is you actually instantiate that normal distribution with the mu and the sigma. So you plug those in, you draw one sample from that normal distribution, and then you feed that sample into your decoder. Again, your decoder outputs an image from that sample, you compare this with the reconstruction loss, and now you train the entire process: you train the encoder and the decoder to reproduce these images correctly. Now, if you only do that, the model will basically regress to a standard autoencoder. Why is that? Well, you can see that estimating the distribution is harder than estimating just the latent code, at least for the training dataset, right? So if you don't pay attention, what the encoder is going to do is say: oh well, I'll just make this here my latent code, and I'll make the standard deviation as small as possible, like 10 to the minus 10, a very small number. Then that normal distribution will basically just be a spike around my mean, and so the sample will always be pretty much the same Z. It won't be a distribution at all; it will just be a Dirac, and I'm back to the original autoencoder, which I don't want. I want my probabilistic framework so I can compute likelihoods and so on; there are various advantages to having a probabilistic view of the data rather than just a model that produces it. Okay, and that's why in a VAE there is not only the reconstruction objective, but a second objective where we impose a regularization. And the regularization is that this distribution here is as close as possible to a standard normal distribution. I guess you can choose the prior, but regularly you say: okay, encoder, I don't want you to go far away from a standard normal distribution; do what you have to do to make the loss small, but don't go away too far. Alright, so that's the kind of balance in the VAE. And as you can imagine, if you have a normal distribution and you sample these Z vectors here, the reconstruction target is always the same. So if you input the same x here a bunch of times, you'll get different Z's, right? You get Z1, Z2, Z3, because there's a sampling procedure right here. So if your decoder here is kind of smooth, it will output different images, and these images will always be compared to that same input image. So you're training this whole architecture to always reconstruct that input image from different latent samples. So there is an interaction; I think that's what's happening, though I'm not an expert on VAEs. And this loss here is usually something like the L2 loss. So in terms of how this affects the images: if I have different outputs that are sort of the same but sort of different, and I have to make the L2 loss to this one image right here small, one option I have is to make them all kind of blurry, because that gives me a lower penalty under the L2 loss. That's, I believe, one of the explanations I've heard for why VAEs usually produce blurry images, and that's been a problem for a long time, that everything's kind of blurry. So here the hierarchical VAE comes to the rescue.
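Before moving on, the plain VAE objective just described can be written out in a few lines. This is a minimal sketch assuming a diagonal-Gaussian posterior; `encoder` and `decoder` are hypothetical stand-ins for any pair of networks, not anything from this paper.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, encoder, decoder, beta=1.0):
    """One training step's loss for a plain Gaussian VAE (sketch).
    encoder(x) is assumed to return (mu, log_var); decoder(z) is assumed
    to return an image the same shape as x. Both are stand-ins."""
    mu, log_var = encoder(x)
    std = torch.exp(0.5 * log_var)
    # reparameterization trick: z = mu + std * eps with eps ~ N(0, I),
    # so gradients can flow through mu and std despite the sampling step
    eps = torch.randn_like(std)
    z = mu + std * eps
    x_hat = decoder(z)
    recon = F.mse_loss(x_hat, x, reduction="sum")  # reconstruction term (L2)
    # closed-form KL(N(mu, std) || N(0, I)): the regularization term
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl  # beta trades reconstruction vs. staying near the prior
```

The `beta` knob is exactly the balance described above: make the reconstruction good, but don't let the encoder's distribution stray too far from the standard normal.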
So how they're going to battle this blurriness problem is with a hierarchical variational autoencoder, and this is how it works. You start off, and this is your generator, by the way: once you've trained your VAE, you can simply sample from your prior, because that's close enough to the encoder's distribution, or you can, I guess, learn the prior of your data distribution, and so on. And you can just use the generative part right here to produce images; you can sample from a VAE like you could sample from a GAN. Okay, so here we'll look at a model that could combat those things. On the right side, you can see the model that you would ultimately sample from. So this is going to be your decoder, this generative model right here. And what they do is very similar to NVIDIA's paper about the progressive GAN, ProGAN I think, which operates at different scales, and which was, I believe, the first to introduce these high-quality face datasets, at least. So here we're going to do a very similar trick. The idea is that we start out with our noise; this starting point is a learned quantity, but you can also view it as just kind of the zero vector. We sample our noise, and our noise is going to be shaped like an image; we can do that, we can reshape things, right? So it's going to just be, say, a 16-entry vector, shaped like this; we sample noise like this, and then we produce an image of 16 by 16 from it. I think they start with eight by eight or something like this, but conceptually, you do that. Then you have a neural network, a residual neural network, that produces an image out of that noise; it maps the noise to the image. So this is your decoder part. But then you're not done. What you do is you upscale that image, and that can happen in the neural network, I believe, or you upsample from the beginning and enlarge these things; but what you would do is upscale within your neural network, and go higher, and so on. So you go higher and higher and higher in the hierarchy of noises. So this is a hierarchical model. Oh yeah, down here: as you can see, it consists of 36 groups of latent variables, in their case, starting from eight by eight, scaled up to 128 by 128, with two residual cells per latent variable group. Okay, so you continuously scale and scale and scale up your images, and each time you add another bunch of these noises right here. That means that in this model, the uppermost residual model can sort of get the coarse details of the image, and that's going to be blurry, because it's a VAE, but it's going to be blurry at that coarse scale. Then you upsample it and you let another model add, on top of that, the next layer of features; you can see this is kind of a residual connection right here. And you again sample, and you let another neural network add more features at a higher resolution. So even though each VAE can be blurry at its own scale, it will be upscaled, and additional details will be added. And that's why, in their samples, you will see that they're not super blurry at all.
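As a toy sketch of that top-down generator, coarse to fine, one latent group per scale: the channel counts, number of scales, and block design here are made up for illustration and are not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyHierarchicalDecoder(nn.Module):
    """Top-down generator sketch: start from a learned constant at the
    coarsest scale, sample a latent group at each scale, add details
    residually, and upsample to the next, finer scale."""

    def __init__(self, channels=32, scales=4, base=8):
        super().__init__()
        # learned starting point; initialized as (and viewable as) a zero vector
        self.start = nn.Parameter(torch.zeros(1, channels, base, base))
        self.blocks = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, 3, padding=1) for _ in range(scales)]
        )
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, batch=4):
        h = self.start.expand(batch, -1, -1, -1)
        for block in self.blocks:
            z = torch.randn_like(h)               # fresh latent group at this scale
            h = h + block(torch.cat([h, z], 1))   # residual: add this scale's details
            h = F.interpolate(h, scale_factor=2)  # move to the next, finer scale
        return torch.sigmoid(self.to_rgb(h))

imgs = ToyHierarchicalDecoder()(batch=2)
print(imgs.shape)  # torch.Size([2, 3, 128, 128]) with base=8 and 4 doublings
```

Each scale draws fresh noise, adds its details on top of the upsampled canvas from above, and hands the result down: exactly the additive, hierarchical picture described here.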
Though, I have to say something right here: if you look at these images and compare them, as they do later, they almost look like puppets, right? So here you compare to previous methods down there. Now, you can see those are clearly kind of worse, in that the symmetries of the faces aren't really given; here you can see that there are no symmetries, there are no long-range dependencies, the hair details are often missing, as compared to up here, where this is pretty crisp. But if you look at the skin of the people, and just kind of the image composition and shadows, it looks like these people are cardboard cutouts; they have these multiple layers where, I mean, am I the only one who sees this? This is like a plastic cutout, and then the face is again like a plastic cutout, and the faces are so smooth. I mean, look at this; these are almost too pretty. You can just look at this for hours. Maybe it just seems like this to me, but if you kind of look at the skin, it almost feels like the bottom ones are actual real photographs, just in terms of the faces and the color; the smoothness up top just all looks like porcelain. This might actually be an effect of the VAE, right? Because it's not blurry in the lines and so on, but the skin texture might just be one scale too much here, and that's where we now see the blurriness. Or it might just be that... I don't know, okay, I have no idea. This just somehow popped out to me as the main difference: they are much more crisp and so on, and much more beautiful, but they also look like puppets. Yeah. Alright, so let's get back to the model right here, because once we've decided that we want such a hierarchical model, what we need to do is simply build a VAE for each of these hierarchies, right? So the uppermost thing here is a regular VAE, right? We have a noise, we sample from it, and we generate this particular scale of image. Okay, so how do we get that noise? This down here is our encoder, and this is our decoder. We simply have our encoder, a series of neural networks, and we get our latent encoding; so the Z is obtained through the usual VAE encoding method. Okay, now the interesting part is how we get Z2. You might just think, well, we'll just go one layer up here; but Z2, as you can see here, depends on Z1 during sampling. So during inference, we also have to make Z2 depend on Z1, and that's why we first need to go to Z1 and actually produce a sample. So our method of inferring the latent codes already includes sampling from those latent codes, right? You sample, and you do the same thing as you would do on the right; in fact, these models are shared. And then you can see that Z2 now depends on Z1 in this procedure, because you go here, and you go here and here. So Z2 depends on Z1; Z3, in turn, would depend on Z2 and Z1; and you have a properly hierarchically factorized model right here. Okay, so this is called a hierarchical VAE. It pretty much works like a VAE, except that it is hierarchical, and you need to have this bottom-up and this top-down model in your encoder.
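In pseudocode, that bottom-up/top-down inference could look something like this. This is a hedged sketch where `bottom_up`, `posterior_heads`, and `top_down` are hypothetical stand-ins, not the paper's API; the only point is that each latent group is sampled conditioned on the actual samples of the groups above it.

```python
import torch

def infer_latents(x, bottom_up, posterior_heads, top_down):
    """Hierarchical inference sketch: the posterior over each group
    depends both on the image and on the samples drawn above, through
    the same top-down path that the generative model uses."""
    feats = bottom_up(x)          # deterministic bottom-up pass over the image
    zs, context = [], None
    for feat, head in zip(feats, posterior_heads):
        mu, std = head(feat, context)         # sees the image AND the z's above
        z = mu + std * torch.randn_like(std)  # sample this group (reparameterized)
        zs.append(z)
        context = top_down(z, context)        # shared with the generative model
    return zs
```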
And so now there are a bunch of questions with respect to the hierarchical VAE. The problem here is that you have not only one sampling procedure, but sampling procedure upon sampling procedure upon sampling procedure, and this can get pretty unstable pretty quickly, I guess. So the rest of the paper is going to be about how to get this to work. One of the main things they do to get this to train are residual connections. We know that residual connections are kind of a gradient-flow highway for training very deep networks, and we've already seen this with residual networks in CNNs: you have an input, you have some computation in the form of a neural network, or in this case a sampling procedure through a distribution, and you have an output, and the residual connection allows you to skip part of that. As you can see, they're used here in both the encoders and the decoders. So in the encoders you have residual connections, and also in the decoders right here, you can see, you have residual connections. In fact, you always take the lower scale, and you don't transform it into an upper scale; you actually sample noise, and then you add the lower scale and the upper scale together. So it's really an additive model in a hierarchical fashion. Okay, the pluses might actually not be pure additions; they can also be combinations, I guess; I might be wrong there. In any case, they use residual networks in a lot of places in their generator and in their encoder. You can see right here there is a residual cell for the generative model and a residual cell for the encoder. Now, for the exact makeup of these residual cells: you can see that they use batch norm, then one-by-one convolutions in order to go to a higher channel number before they do the depthwise separable five-by-five convolutions. Five by five because they need a larger receptive field; they make that clear. However, a large receptive field means many parameters, which means their model would be too big and use too much memory, so they do the depthwise separable convolutions, which simply means that you don't mix the channels during the convolution. So you go up in channels, you do a depthwise convolution, and you go down in channels again. All of these are kind of hacks to make it work, right? Then they also have batch norm and a Swish non-linearity, as you can see here, and here as well in the encoder. They also stress the importance of the ordering in the text: they found that first the batch norm and then the convolution is better than the other way around, and so on. So there's a lot of engineering work that went into this. So you see there's batch norm, and you also have to kind of hack the batch norm, because batch norm has these training parameters, and people have observed that in VAEs, if during inference, during sampling, you use the training mode, where you only normalize within the batch, it's better than if you use the running averages. So you kind of have to hack that: they modify the momentum parameter of batch norm such that the running statistics can catch up faster with the batch statistics. There's a lot of engineering in here; there are a lot of things you have to get right to get something like this to work, apparently.
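My reading of that generative residual cell, as a hedged PyTorch sketch: batch norm, 1x1 expansion, depthwise separable 5x5 convolution, 1x1 reduction, with Swish and squeeze-and-excitation. The expansion factor and the SE reduction ratio are guesses, not the paper's numbers.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel gating: global average pool, small MLP, sigmoid gates."""
    def __init__(self, c, r=4):  # r is an assumed reduction ratio
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(c, c // r), nn.SiLU(), nn.Linear(c // r, c), nn.Sigmoid()
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))  # (batch, channels) gates
        return x * w[:, :, None, None]

class GenResidualCell(nn.Module):
    """BN -> 1x1 expand -> BN+Swish -> depthwise 5x5 -> BN+Swish -> 1x1 reduce -> SE."""
    def __init__(self, c, expand=6):  # expansion factor is an assumption
        super().__init__()
        e = c * expand
        self.body = nn.Sequential(
            nn.BatchNorm2d(c),
            nn.Conv2d(c, e, 1),                       # go up in channels
            nn.BatchNorm2d(e), nn.SiLU(),             # SiLU is the Swish activation
            nn.Conv2d(e, e, 5, padding=2, groups=e),  # depthwise: no channel mixing
            nn.BatchNorm2d(e), nn.SiLU(),
            nn.Conv2d(e, c, 1),                       # back down in channels
            SqueezeExcite(c),
        )

    def forward(self, x):
        return x + self.body(x)  # the residual connection
```

The depthwise 5x5 gives the larger receptive field at a fraction of the parameters a dense 5x5 over the expanded channels would cost, which is exactly the memory trade-off described above.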
And yeah, the paper just goes on in this style. You can see they use the Swish activation, and they use squeeze-and-excitation blocks, which are another form of residual block that was introduced quite a long time ago but is still being used, as you can see. And yeah, so that's the architecture: residual cells there, residual cells here, plus tricks for reducing the memory requirements. They say they use two tricks. First of all, they do mixed-precision training using a cool new NVIDIA library; given that they're from NVIDIA, they get to try these things out first. And second of all, also to reduce memory, they fuse batch norm and Swish and store only one feature map for the backward pass instead of two, which they then have to recompute; this trick is known as gradient checkpointing and requires recomputing batch norm in the backward pass. I believe future deep-learning frameworks should just take care of that for you, instead of you having to do this kind of stuff, honestly. They also, as they say here, need to tame the unbounded KL term. The KL term is what makes the distribution that the encoder outputs close to the distribution that you want, that normal distribution. So this is the regularization term; you can see here it's a KL divergence between q, which is what your encoder outputs, that is, the distribution over the latent code for the image x, and your prior, which you say should be like a normal distribution; in this case, a hierarchical normal distribution. And they have a special characterization here, because it's hierarchical, right? So I'm going to have a hierarchy of normal distributions: this is my top of the hierarchy, and I'm going to sample one sample right here. Then in the next layer, I'm going to have a normal distribution around that sample right here, and I'm going to sample from that, and so on. So my hierarchical normal distribution is going to be one where the distribution in the next layer is always dependent on the sample from the layer above. And they have a special parameterization where, in order for the encoder to produce that (the encoder has to produce a z of the first layer, then a z of the second layer, and so on) and to be close, it must match this distribution and it must match this distribution. So if it doesn't match this distribution correctly, it will kind of sample somewhere else a bit, and then that base will already be shifted right here; it thinks that the distribution to match is now this shifted normal distribution. So you can see that the base is already shifted, and that's why their encoder only outputs the delta to the prior, as you can see here. They define the q of z in a given layer as a normal distribution with mean mu i plus delta mu, where mu i is the prior mean of that layer and delta mu is what the encoder outputs, and with a sigma that is the sigma from the prior times a delta sigma that the encoder outputs. So you're saying: you're not supposed to output the actual distribution; you're supposed to output the difference of your distribution to the prior. Now, in layer zero that's the same thing, because the prior is going to be zero mean and unit variance, so this mu here will be zero and this sigma here will be one. But in all the upper layers, this is going to make it easier. So that's one trick to keep the repeated sampling from hurting you as much.
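That residual parameterization is small enough to write out directly. A sketch, assuming delta_sigma is already constrained to be positive (say, via an exponential or softplus in the encoder head):

```python
import torch
from torch.distributions import Normal, kl_divergence

def residual_posterior(prior_mu, prior_sigma, delta_mu, delta_sigma):
    """Residual Normal parameterization (sketch): the encoder predicts
    only offsets relative to the prior of this layer, so
    q(z) = N(prior_mu + delta_mu, prior_sigma * delta_sigma).
    delta_sigma is assumed positive."""
    q = Normal(prior_mu + delta_mu, prior_sigma * delta_sigma)
    p = Normal(prior_mu, prior_sigma)
    # the KL term then only measures how far the encoder strays from the prior
    return q, kl_divergence(q, p)

q, kl = residual_posterior(torch.zeros(4), torch.ones(4),
                           torch.full((4,), 0.1), torch.ones(4))
print(kl)  # small: this posterior barely deviates from the prior
```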
The other trick they employ here is spectral regularization, a regularization where you penalize the top singular value of each layer; you can compute that with a power iteration, as people have done before. You can also build in some normalizing flows. So here, when we sample the different layers, we sample all of these things at once; they're dependent on the upper layer in the hierarchy, but within a group they are not sort of connected to each other. Now, if we introduce a flow, we basically make them all connected to each other and build a joint distribution over them. But I don't want to go too much into this, because it doesn't gain that much; they say you can just build that in if you want. Okay, so those are all the things they list in the method section, at least; there is a lot more they had to do. But ultimately, as you can see right here, on four of these five datasets they achieve state-of-the-art. Okay, on this dataset no one else has tried, but at least on the other datasets they are very, very competitive, as you can see right here. And they compare this, first of all, to other models, including models with and without autoregressive flows, and they come pretty close to these autoregressive models. An autoregressive model would be one that generates one pixel at a time, conditioned on the other pixels. This model doesn't do that; it generates all pixels at once, so it's not autoregressive. But as you can see, it beats all the other non-autoregressive models, and it gets pretty close to the best autoregressive models, which are down here. Those are still better, but the gap is kind of shrinking, is what they say. Cool, so that's the main result. Then they have ablations where, as I said, all of these things contribute a little bit, a little bit, a little bit to building this bigger and bigger and deeper variational autoencoder. So it's hard to say what exactly makes this work, because all of it makes it work, and I guess they just kept going until they beat state-of-the-art, or until, you know, they ran out of tricks. Again, these are the samples that we looked at, and I do want to spend some time in the appendix right here, because I think what they do is pretty interesting. First of all, they show that their model doesn't memorize the training samples; as you can see right here, these are always the nearest neighbors from the training set, so the model's samples are fairly far away from the training samples. But yeah, I mean, okay, maybe it's just me, but on the left they just look like more idealized humans, like very smooth humans, like designer babies. Here they show that if you use batch norm as you would regularly, where you keep these running stats, or you do the batch norm from training, then you get into this kind of degenerate case if you sample at lower temperatures. The temperature that you sample at describes the width of the Gaussian that you ultimately sample from. And they have this method to readjust the batch norm statistics, which I don't want to go into here, but you can read it up; it basically fixes that problem.
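My guess at the mechanics of such a readjustment, based on the momentum trick mentioned earlier: run a few generation passes in train mode so the running statistics catch up with the batch statistics at the chosen temperature, then sample in eval mode. The `sample_fn` here is a hypothetical stand-in for one generation pass, and this is a sketch, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def readjust_batchnorm(model, sample_fn, n_passes=10, momentum=0.9):
    """Before sampling at a new temperature, let the batch-norm running
    statistics adapt to the batch statistics seen during generation."""
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            m.momentum = momentum  # large momentum: running stats move quickly
    model.train()                  # use batch statistics, update running stats
    for _ in range(n_passes):
        sample_fn(model)           # one generation pass at the target temperature
    model.eval()                   # now sample with the adapted statistics
```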
This low-temperature degeneracy is a problem that apparently other people have observed as well, and their method, apparently, is one that manages to fix it. Okay, lastly, there are some more samples right here. And this right here is honestly one of the most interesting things, I think. Since they have this hierarchical model, right, here is z1, which gets you an image, and then there's z2, which gets you an image, and then there's z3, and so on, and you continuously upscale and hierarchically add the features. Here they ask: what happens if we sample z1 once and then fix it, and only resample the other ones conditioned on z1? And here, where you see "top scale fixed", you can see there is considerable variation in the image, but there is not really any large-scale variation. So the general face stays constant, but details change, as you can see: here the hair is kind of moving over the image, the color is changing, here there are a lot of changes, the mouth looks slightly different, as far as I can see, but I might be hallucinating here. And then if you continuously fix the top two scales, or the top three scales right here, or the top four scales, you can see that, more and more, it's just little details that change. So they are operating at five scales, starting from 8 by 8 up to 128 by 128. In each row, they fix the samples at a number of top scales and sample from the rest of the hierarchy. As they write, the long-range global structure is mostly recorded at the top of the hierarchy, in the 8-by-8-dimensional groups; the second scale applies some global modifications, such as changing the eyes, hair color, skin tone, and the shape of the face; the bottom groups capture mostly low-level variations. However, the lowest scale can still make some subtle long-range modifications; for example, the hair color is slightly modified when only sampling from the lowest scale, in the last row. This is potentially enabled by the larger receptive field of their depthwise separable residual cell. Yeah, I don't... the hair color changes, okay, slightly, maybe, I don't know; my eyes have seen too many faces. But what's certainly the case is that their model exhibits much better global unity compared to these other samples, where you can pretty clearly see that the different sides of the faces have little to do with each other, and so on. And this is the benefit you get from doing this hierarchically: you have part of your model that's responsible for the global shape of the image, which keeps it consistent, and then you have other parts that are responsible for the details. Okay, so I hope this was something that interested you. As I said, it's an engineering paper, so there are lots of things described; there is not like one jumping idea. I guess residual connections are pretty important, and these depthwise convolutions save memory, but also all of the other things you have to do to build something like this are pretty interesting. Yeah, I hope you gained something from it, and I'll see you next time.
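As a footnote, that appendix experiment of fixing the top scales could be sketched like this; `blocks`, `fixed_zs`, and `z_shapes` are hypothetical stand-ins for the trained top-down model, stored latent samples, and per-scale latent shapes, so treat this as an illustration rather than the authors' code.

```python
import torch

def sample_fixed_top(blocks, fixed_zs, z_shapes, temperature=0.7):
    """Reuse stored latents for the top scales, resample the rest
    (optionally at a reduced temperature, i.e. a narrower Gaussian)."""
    h = None
    for i, block in enumerate(blocks):
        if i < len(fixed_zs):
            z = fixed_zs[i]                             # top scales: frozen sample
        else:
            z = temperature * torch.randn(z_shapes[i])  # lower scales: fresh noise
        h = block(h, z)                                 # add this scale's details
    return h
```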
[ { "start": 0, "end": 5.6000000000000005, "text": " Alright, hi there. Have a look at these faces right here. So you're probably used by now to seeing" }, { "start": 5.6000000000000005, "end": 11.200000000000001, "text": " computer-generated faces of really high quality, but probably you're used to seeing these faces" }, { "start": 11.200000000000001, "end": 17.64, "text": " coming from a generative adversarial network. However, these faces right here are from a" }, { "start": 17.64, "end": 23, "text": " variational autoencoder. Now, variational autoencoders are fundamentally different than GANs," }, { "start": 23, "end": 29.64, "text": " and traditionally they've been a bit harder to scale up to high-resolution images and give sort" }, { "start": 29.64, "end": 37.6, "text": " of very detailed, sharp output. This paper right here attempts to build such a VAE for these high" }, { "start": 37.6, "end": 45.64, "text": " resolution large data set. And it basically details everything you need to do to get a VAE like this." }, { "start": 45.64, "end": 51.92, "text": " So the paper is called NVAE or NVAE, I don't know how to pronounce that, a deep hierarchical" }, { "start": 51.92, "end": 58.2, "text": " variational autoencoder by Arash Wadat and Jan Kautz of NVIDIA. As I said, on a high level," }, { "start": 58.2, "end": 65.64, "text": " this paper is about how to build a deep hierarchical variational autoencoder, which is sort of a" }, { "start": 65.64, "end": 72.12, "text": " combination of already existing techniques combined in a clever way, and then listing all" }, { "start": 72.12, "end": 78.60000000000001, "text": " the engineering efforts that you need to do to actually make this work. And there is not one" }, { "start": 78.60000000000001, "end": 83.12, "text": " thing where you can say, ah, this is the thing that really made it work. But each of these" }, { "start": 83.12, "end": 90.86, "text": " techniques is going to stack and stack and stack until they reach a model that surpasses the state" }, { "start": 90.86, "end": 96.80000000000001, "text": " of the art on these data sets. And they are also able to apply this to an entirely new high quality" }, { "start": 96.80000000000001, "end": 103.4, "text": " image data set. So these again are some of the samples from that model. And as you can see," }, { "start": 103.4, "end": 114.92, "text": " they look very, very crisp, very sharp, and also very, let's say, real. Yeah. So really briefly," }, { "start": 114.92, "end": 121.04, "text": " variational autoencoders. So this paper attempts to build a variational autoencoder. What is it?" }, { "start": 121.04, "end": 126.28, "text": " For that, you need to start with what an autoencoder is. So an autoencoder traditionally," }, { "start": 126.28, "end": 131.84, "text": " let's say you have an image data set, and you take an image and you train a model that consists of" }, { "start": 131.84, "end": 138.12, "text": " an encoder that maps your image to a lower dimensional space, a compressed space, which" }, { "start": 138.12, "end": 145.68, "text": " you call the latent space Z. And then you train a decoder to, again, go from the latent space back" }, { "start": 145.68, "end": 152.76, "text": " to the image space. And then you train those two models such that the distance between the output" }, { "start": 152.76, "end": 159.88, "text": " and the input is minimized. Okay, this is called the reconstruction loss. 
And you train the encoder" }, { "start": 159.88, "end": 166.12, "text": " and the decoder to minimize that reconstruction loss. And thereby, you hope that this latent space" }, { "start": 166.12, "end": 173.56, "text": " will learn something about the data. Now, a sort of advanced version of this and a probabilistic" }, { "start": 173.56, "end": 179.48, "text": " version of this is the variational autoencoder, where we say, what we want to do is we don't want" }, { "start": 179.48, "end": 187.51999999999998, "text": " the encoder to just output directly the latent code, but we interpret this in a probabilistic" }, { "start": 187.52, "end": 193.96, "text": " fashion. So the encoder is now a probabilistic function that outputs a distribution over latent" }, { "start": 193.96, "end": 200.8, "text": " codes. So we take our same image. And what we want to do is we want a Bayesian, basically," }, { "start": 200.8, "end": 205.76000000000002, "text": " it's a Bayesian way of thinking of it, we want a distribution over latent codes corresponding to" }, { "start": 205.76000000000002, "end": 213.64000000000001, "text": " that image. So our encoder here is not going to output Z, but it's going to output mu and sigma." }, { "start": 213.64, "end": 218.48, "text": " So it would be ideal if you could output an entire distribution, but we're going to make" }, { "start": 218.48, "end": 223.48, "text": " some assumptions here that that is a normal distribution. And it's going to output the mean" }, { "start": 223.48, "end": 231.67999999999998, "text": " and the standard deviation of that normal distribution. And then you actually, because" }, { "start": 231.67999999999998, "end": 236.64, "text": " now you how you're going to feed this into the decoder, if you just feed mu, you are back to the" }, { "start": 236.64, "end": 242.32, "text": " normal autoencoder. So that doesn't work. What you do is you actually instantiate that normal" }, { "start": 242.32, "end": 249.32, "text": " distribution with the mu and the sigma. So you plug that in here, you sample one sample from that" }, { "start": 249.32, "end": 256.76, "text": " normal distribution. And then you feed that sample into your decoder. Again, your decoder outputs" }, { "start": 256.76, "end": 262.92, "text": " an image from that sample. And you compare this with the reconstruction loss. And now you train" }, { "start": 262.92, "end": 273.72, "text": " the entire process. So you train the encoder and the decoder to reproduce these images correctly." }, { "start": 273.72, "end": 281.88, "text": " Now, if you only do that, then the model will basically regress to a standard autoencoder." }, { "start": 281.88, "end": 287.64, "text": " Why is that? Well, what's pretty easy for the... You can see that estimating the distribution is" }, { "start": 287.64, "end": 294.91999999999996, "text": " harder than estimating just the latent code, at least for the training data set, right? So if you" }, { "start": 294.91999999999996, "end": 300.91999999999996, "text": " don't pay attention, what's going to what the encoder is going to do is it's going to say," }, { "start": 300.91999999999996, "end": 309.32, "text": " oh, well, if I just make this here, my latent code, and if I just make this as small as possible," }, { "start": 309.32, "end": 318.92, "text": " like zero, or like one to the minus 10, 10, that's still one, 10 to the minus one. That's not that" }, { "start": 318.92, "end": 327.24, "text": " small. 
10 to the minus 10, 11, 12, okay, a very small number, then that normal distribution will" }, { "start": 327.24, "end": 337.8, "text": " basically be just spiky around the thing around my mean. And so this here will always be kind of the" }, { "start": 337.8, "end": 344.6, "text": " same Z. So it won't be a distribution at all. It will just be this dirac. And I'm back to the" }, { "start": 344.6, "end": 349.8, "text": " original autoencoder, which I don't want. I want my probabilistic framework so I can compute" }, { "start": 349.8, "end": 354.28000000000003, "text": " likelihoods and so on. There are various advantages to having a probabilistic view" }, { "start": 354.28000000000003, "end": 362.2, "text": " of the data rather than just a model that produces it. Okay, and that's why in a VAE," }, { "start": 362.2, "end": 367.96, "text": " there is not only the objective, not only this objective, the reconstruction objective, but there" }, { "start": 367.96, "end": 376.76, "text": " is a second objective where we say that we impose a regularization. And the regularization is that" }, { "start": 376.76, "end": 386.52, "text": " this here is as close as possible to a standard normal distribution. And I guess you can choose" }, { "start": 386.52, "end": 393.4, "text": " that the prior but in regularly you say, okay, this here, I don't want you encoder, I don't want" }, { "start": 393.4, "end": 398.91999999999996, "text": " you to go far away from a standard normal distribution, like do what you have to do to" }, { "start": 398.91999999999996, "end": 406.76, "text": " make the loss small, but don't go away too far. Alright, so that's the kind of balance in the VAE." }, { "start": 406.76, "end": 412.91999999999996, "text": " And as you can imagine, if you have a normal distribution, and you sample these Z vectors here," }, { "start": 412.92, "end": 418.52000000000004, "text": " and the reconstruction loss is always the same. So if you input the same x here a bunch of times," }, { "start": 418.52000000000004, "end": 425.16, "text": " you'll get different Z's, right? You get Z1, Z2, Z3, because it's sampled from this distribution," }, { "start": 425.16, "end": 432.44, "text": " there's a sampling procedure right here. So if your discriminator here is kind of smooth," }, { "start": 432.44, "end": 439.16, "text": " then it will output different images. Now these images will always be compared to that same input" }, { "start": 439.16, "end": 446.28000000000003, "text": " image, right? So you're training this whole architecture to always reconstruct that input" }, { "start": 446.28000000000003, "end": 453.16, "text": " image from different images. So there is an interaction, I think, I think that's what's" }, { "start": 453.16, "end": 460.76000000000005, "text": " happening. I guess I'm not an expert on VAEs. But this here usually is something like the L2 loss." }, { "start": 460.76000000000005, "end": 467.72, "text": " So in terms of how this affects the images, if I have different images that are sort of the same," }, { "start": 467.72, "end": 475.72, "text": " but sort of different, and I have to make it L2 loss close to this image right here, then one" }, { "start": 475.72, "end": 483.40000000000003, "text": " option I have is to make them kind of blurry. So if I make all of them kind of blurry in the L2 loss," }, { "start": 483.40000000000003, "end": 490.20000000000005, "text": " that will give me a lower penalty. 
So that's, I believe that's one of the explanations I heard" }, { "start": 490.20000000000005, "end": 496.6, "text": " at some point why VAEs produce usually blurry images. And that's been a problem for a long time" }, { "start": 496.6, "end": 506.36, "text": " that everything's kind of kind of blurry. So here, the VA, the hierarchical VAE comes to the rescue." }, { "start": 507.32000000000005, "end": 513.88, "text": " So how they are going to battle this problem is by doing a hierarchical variational auto encoder." }, { "start": 513.88, "end": 519.64, "text": " And this is how it works. So you start off, this is your generator, by the way, once you've trained" }, { "start": 519.64, "end": 526.2, "text": " your VAE, right, once you've trained it, you can simply sample from your prior from this here," }, { "start": 526.2, "end": 531.4000000000001, "text": " because that's, you know, close enough to this, or you can, I guess, learn the prior of your data" }, { "start": 531.4000000000001, "end": 536.5200000000001, "text": " distribution, and so on. And you can just use the generator right here, the generative part," }, { "start": 536.5200000000001, "end": 543.88, "text": " this part right here is your generator in order to produce images. So you can sample from a VAE," }, { "start": 544.5200000000001, "end": 552.84, "text": " like you could sample from a GAN. Okay, so here, we'll look at a model that could combat those" }, { "start": 552.84, "end": 559.1600000000001, "text": " things. On the right side, you can see the model that you would ultimately sample from. So this is" }, { "start": 559.1600000000001, "end": 566.12, "text": " going to be your decoder, okay, this generative model right here. And what they do is it's very" }, { "start": 566.12, "end": 573.48, "text": " similar to if you, Nvidia also had this paper about this GAN, where they are on different scales," }, { "start": 573.48, "end": 579.24, "text": " like progressive, I think, prog GAN, which was the first that introduced actually this high quality" }, { "start": 579.24, "end": 585.96, "text": " face data sets, I believe, at least. So here, we're going to do a very, very similar trick." }, { "start": 585.96, "end": 593.32, "text": " So the idea is that we start out, this is a learned quantity, but you can also view it as" }, { "start": 593.32, "end": 598.84, "text": " just kind of the zero vector, we start out with our noise, we sample our noise, but our noise is" }, { "start": 598.84, "end": 604.6800000000001, "text": " going to be, it's going to be, let's say it's in the shape of an image, we can do that, we can" }, { "start": 604.68, "end": 613.0799999999999, "text": " reshape images, right. So it's going to just be a 16 entry vector. And it's going to be shaped like" }, { "start": 613.0799999999999, "end": 621, "text": " this, okay, we sample noise like this. And then we produce an image of 16 by 16 from it, I think" }, { "start": 621, "end": 627.4, "text": " they start with eight by eight or something like this. But in conceptually, you do that, then you" }, { "start": 627.4, "end": 633.24, "text": " have a neural network, this is a residual neural network produce an image out of that noise, right," }, { "start": 633.24, "end": 640.04, "text": " it maps the noise to the image. So this is your, this is your D, your discriminator part. But then" }, { "start": 640.04, "end": 646.6800000000001, "text": " you're not done. 
What you do is you would actually upscale that image, or that can happen in the" }, { "start": 646.6800000000001, "end": 653.08, "text": " neural network, I believe, or you are up sampled from the beginning, and you enlarge these things." }, { "start": 654.12, "end": 658.36, "text": " But what you would do is you would upscale your neural network." }, { "start": 658.36, "end": 669.5600000000001, "text": " And you go higher. And so on. So you go higher and higher and higher in the hierarchy of noises. So" }, { "start": 669.5600000000001, "end": 676.84, "text": " this is a hierarchical model. Oh, yeah, down here. So they start, they start from, as you can see," }, { "start": 677.4, "end": 684.9200000000001, "text": " it consists of 36 groups, in their case, of latent variables, starting from eight by eight," }, { "start": 684.92, "end": 695.4, "text": " scaled up to 128 to 128, with two residual cells per latent variable groups. Okay, so you continuously" }, { "start": 695.4, "end": 703, "text": " scale and scale and scale up your your your images, and each time you add another bunch of these" }, { "start": 703, "end": 711.56, "text": " noises right here. So that means that in this model, you can, the uppermost residual model can" }, { "start": 711.56, "end": 717.2399999999999, "text": " sort of get the coarse details of the image. And that's going to be blurry, because it's a VAE," }, { "start": 717.2399999999999, "end": 723, "text": " but it's going to be blurry in that coarse scale. And then you up sample it and you let another" }, { "start": 723, "end": 730.04, "text": " model add on top of that, the next layer of the next layer of features, you can see this is kind" }, { "start": 730.04, "end": 737.0799999999999, "text": " of a residual connection right here. And you, again, sample, and you let another neural network" }, { "start": 737.08, "end": 744.44, "text": " up sample, sorry, you know, let another neural network add more features in a higher resolution." }, { "start": 744.44, "end": 751.48, "text": " So even though each VAE can be blurry in its own scale, it will be upscaled and there," }, { "start": 752.84, "end": 758.6800000000001, "text": " there will be additional details added. And that's why in their samples, you will see that" }, { "start": 758.6800000000001, "end": 765, "text": " they're not super blurry at all. Though, I have to say something right here, if you look at" }, { "start": 765, "end": 772.92, "text": " these images, and you compare them, so later they compare them, they, like, they're almost look like" }, { "start": 772.92, "end": 780.44, "text": " puppets, right. So here you compare it to these are these are previous methods down there. Now," }, { "start": 780.44, "end": 787.56, "text": " you know, to say that they're pretty, they're, you can see they're clearly kind of worse in that you" }, { "start": 787.56, "end": 794.28, "text": " can hear the symmetry of the faces aren't really given also the symmetries. Here, you can see that" }, { "start": 794.28, "end": 800.6, "text": " there are no symmetries. Here, there are no long range dependencies. The hair details are often" }, { "start": 800.6, "end": 807.72, "text": " missing as compared to like here, this is pretty crisp. 
But if you look at like the skin of people," }, { "start": 807.72, "end": 812.76, "text": " and can just kind of the image composition in shadows, it looks like these, these people are" }, { "start": 812.76, "end": 819.9599999999999, "text": " like cardboard cutouts here, they have like these multiple layers where I mean, I'm I the only one" }, { "start": 819.96, "end": 825.8000000000001, "text": " that just sees this, this is like a plastic cutout. And then the face is again, like a plastic cutout" }, { "start": 825.8000000000001, "end": 833.8000000000001, "text": " and the faces are so smooth. I mean, look at this. These are like, too pretty. Like you can just look" }, { "start": 833.8000000000001, "end": 840.76, "text": " at this for hours. This is so like the diff. It maybe it just seems like this to me, but the" }, { "start": 840.76, "end": 848.6800000000001, "text": " difference if you kind of look at the skin and it almost feels like the bottom ones are actual real" }, { "start": 848.68, "end": 858.28, "text": " photographs in just in terms of the faces and the the kind of the color, just the smoothness is just" }, { "start": 858.28, "end": 865.8, "text": " all look like porcelain. This might actually be an effect of the VAE, right? Because it's not blurry," }, { "start": 865.8, "end": 873.56, "text": " right? As a you know, the the lines and so on. But the the skin texture might just be one one scale" }, { "start": 873.56, "end": 879.16, "text": " too much here. And that's where we now see the blurriness or it might just be that I don't I" }, { "start": 879.16, "end": 888.76, "text": " don't know. Okay, I have no idea. This just this just somehow whereas was popping out to me as the" }, { "start": 888.76, "end": 894.76, "text": " main difference, like they are much more crisp and so on and much more beautiful, but also they look" }, { "start": 894.76, "end": 905.16, "text": " like puppets. Yeah. Alright, so let's get back to the model right here because so once we decided" }, { "start": 905.16, "end": 910.2, "text": " that we want such a hierarchical model, what we need to do is we need to simply build a VAE for" }, { "start": 910.2, "end": 917.72, "text": " each of these hierarchies, right? So the the uppermost thing here is a regular VAE noise," }, { "start": 917.72, "end": 924.52, "text": " right? We have a noise, we sample from it, and we generate this particular scale of image." }, { "start": 924.52, "end": 930.1999999999999, "text": " Okay, so how do we get that noise? We simply this is this down here is our this is our encoder and" }, { "start": 930.1999999999999, "end": 936.68, "text": " this is our decoder. We simply have our encoder, this is a series of neural networks, and we get" }, { "start": 936.68, "end": 946.12, "text": " our latent encoding, right? So the Z is obtained through the the kind of VAE encoding method. Okay," }, { "start": 946.12, "end": 951, "text": " now the interesting part is how do we get Z two, and you might just think, well, we'll just go" }, { "start": 951, "end": 957.56, "text": " like one layer up here, but Z two, as you can see here, it depends on Z one during sampling." }, { "start": 957.56, "end": 963.16, "text": " So during inference, we have also have to have that Z two depends on Z one. And that's why we" }, { "start": 963.16, "end": 969.96, "text": " first need to go to Z one and actually produce a sample. 
So our method of inferring the latent" }, { "start": 969.96, "end": 979.08, "text": " codes includes already sampling from those latent codes, right? So you sample, and you do the same" }, { "start": 979.08, "end": 985.24, "text": " thing as you would do in the right. In fact, these models are shared. And then you can see that Z two" }, { "start": 985.24, "end": 993.1600000000001, "text": " now depends on Z one in this procedure, because you go here, and you go here and here. So Z two" }, { "start": 993.1600000000001, "end": 999.8000000000001, "text": " depends on Z one, Z three, in turn would depend on Z two and Z one. And you have a properly" }, { "start": 999.8000000000001, "end": 1008.2, "text": " hierarchically factorized model right here. Okay, so this, this is called a hierarchical VAE, it" }, { "start": 1008.2, "end": 1013.96, "text": " pretty much works like a VAE, except that it is hierarchical. And you need to do here need to have" }, { "start": 1013.96, "end": 1021.6400000000001, "text": " this bottom up and this top down model in order in your encoder. And so now there are a bunch of" }, { "start": 1021.6400000000001, "end": 1027.32, "text": " questions with respect to the hierarchical VAE. The problem here is that you have not only one" }, { "start": 1027.32, "end": 1032.3600000000001, "text": " sampling procedure, but you have sampling procedure upon sampling procedure upon sampling" }, { "start": 1032.36, "end": 1038.1999999999998, "text": " procedure. And this can get pretty unstable, I guess pretty quickly. So the rest of the paper is" }, { "start": 1038.1999999999998, "end": 1045.7199999999998, "text": " going to be how to get this to work. So the main, I think one of the main parts they do in order to" }, { "start": 1045.7199999999998, "end": 1052.36, "text": " get this to work in order to get this to train our residual connections. So we know that residual" }, { "start": 1052.36, "end": 1061.32, "text": " connections are kind of a sort of a gradient flow highway in order in order to to train very deep" }, { "start": 1061.32, "end": 1068.28, "text": " networks. And we've already seen this with residual networks in CNNs, where you have an input," }, { "start": 1068.28, "end": 1073.8, "text": " and you have some computation in form of a neural network, or in this case, a sampling procedure" }, { "start": 1073.8, "end": 1079.96, "text": " through a distribution, and you have an output and the residual connection would allow you to skip" }, { "start": 1080.52, "end": 1088.2, "text": " part of that, as you can see, used here in both the encoders and the decoders. So in the encoders," }, { "start": 1088.2, "end": 1092.6000000000001, "text": " you have residual connections and also in the decoders right here, you can see you have residual" }, { "start": 1092.6000000000001, "end": 1099.48, "text": " connections. In fact, you always take that lower scale, and you don't transform it into an upper" }, { "start": 1099.48, "end": 1107.56, "text": " scale, you actually sample noise, and then you add the lower scale and the upper scale together." }, { "start": 1109.64, "end": 1116.8400000000001, "text": " So it's really an additive model in a hierarchical fashion, even okay, the the pluses might actually" }, { "start": 1116.84, "end": 1124.76, "text": " not be okay, the pluses can also be combination, I guess. I guess that that I might be wrong," }, { "start": 1124.76, "end": 1132.36, "text": " and they can actually be combinations. 
In any case, they use residual networks in in a in a lot of" }, { "start": 1132.36, "end": 1139.8, "text": " cases in their generative and in their generator and in their encoder. You can see right here," }, { "start": 1139.8, "end": 1145.9599999999998, "text": " there is a residual cell for the generative model and a residual cell for the encoder. Now," }, { "start": 1145.96, "end": 1151.88, "text": " the exact method of these residual cell, you can see that they use batch norm, then they use one" }, { "start": 1151.88, "end": 1159.88, "text": " by one convolutions in order to go to a higher to a higher channel number before they do the depth" }, { "start": 1159.88, "end": 1168.1200000000001, "text": " separated five by five convolutions. So five by five, because you need a larger receptive field," }, { "start": 1168.1200000000001, "end": 1173.48, "text": " they make that clear, they need a large receptive field. However, the large receptive field means" }, { "start": 1173.48, "end": 1180.6, "text": " many parameters means their model would be too big and too much memory. So they do the depth" }, { "start": 1180.6, "end": 1185.72, "text": " separated convolutions, which simply means that you don't mix the channels during the convolutions." }, { "start": 1185.72, "end": 1191.56, "text": " So you go up the channels, you do a depth separated convolution and go down the channels again." }, { "start": 1191.56, "end": 1197.48, "text": " All of these are kind of hacks to make it work, right? Then also they have batch norm and a swish" }, { "start": 1197.48, "end": 1204.44, "text": " non-linearity as you can see here. And then here as well in the encoder, they also say in the text," }, { "start": 1204.44, "end": 1209.8, "text": " like they stress the importance, we found that first the batch norm and then the convolution" }, { "start": 1209.8, "end": 1215, "text": " is better than the other way around and so on. So this there's a lot of engineering work that" }, { "start": 1215, "end": 1222.92, "text": " went into this right here. So you see there's batch norm. And also you have to kind of hack" }, { "start": 1222.92, "end": 1227.88, "text": " the batch norm, because in batch norm you have these training parameters and people have observed" }, { "start": 1227.88, "end": 1235.88, "text": " that in VAEs. If you during inference, during sampling, if you use the training way where you" }, { "start": 1235.88, "end": 1241.4, "text": " only regularize within the batch, it's better than if you use the running averages. So you kind of" }, { "start": 1241.4, "end": 1247, "text": " have to hack that. We modify the momentum parameter of batch norm such that running statistic can" }, { "start": 1247, "end": 1252.44, "text": " catch up faster with the batch statistics. There's a lot of engineering in here. Like there's a lot" }, { "start": 1252.44, "end": 1258.1200000000001, "text": " of things that you have to get right to get something like this to work apparently. And" }, { "start": 1259.56, "end": 1264.52, "text": " yeah, this the paper, the paper just goes on in this style. So you can see they use the swish" }, { "start": 1264.52, "end": 1270.92, "text": " activation. They use squeeze and excitation blocks, which are another form of residual" }, { "start": 1270.92, "end": 1277.88, "text": " blocks that were introduced quite a long time ago, but still being used as you can see. And" }, { "start": 1277.88, "end": 1284.44, "text": " yeah, so that's the architecture. 
So you can see they have residual cells there, residual cells" }, { "start": 1284.44, "end": 1291.5600000000002, "text": " here, reducing the memory requirements. They say they use two tricks. First of all, we they do" }, { "start": 1291.5600000000002, "end": 1296.92, "text": " mixed precision using a cool new Nvidia library. Given that they're from Nvidia, they get to try" }, { "start": 1296.92, "end": 1304.2, "text": " these things out first. And second of all, they also to reduce the memory, we fuse batch norm and" }, { "start": 1304.2, "end": 1309.72, "text": " swish and we store only one feature map for the backward pass instead of two. And they have to" }, { "start": 1309.72, "end": 1315.4, "text": " then recompute this trick is known as gradient checkpointing and cries, recomputing batch norm" }, { "start": 1315.4, "end": 1322.28, "text": " in the backward pass. I believe like future deep learning frameworks should just take care of that" }, { "start": 1322.28, "end": 1330.8400000000001, "text": " for you, instead of you having to do this kind of stuff. Honestly, so they also need to, they hear" }, { "start": 1330.84, "end": 1340.36, "text": " they say taming the unbounded KL term. So the KL term is what makes the distribution that the encoder" }, { "start": 1340.36, "end": 1346.36, "text": " outputs close to that distribution that you want like that normal distribution. So this is the" }, { "start": 1346.36, "end": 1352.6, "text": " regularization term, you can see here, it's a KL divergence between q, which is what your encoder" }, { "start": 1352.6, "end": 1360.28, "text": " outputs, you can see that's the, the latent code for the image x, between the two, the two, the" }, { "start": 1360.28, "end": 1368.76, "text": " two, the two, the two. And between that and between your prior which you say it should be, it should" }, { "start": 1368.76, "end": 1375.72, "text": " be a like a normal distribution. In this case, it should be a hierarchical normal distribution. And" }, { "start": 1377.6399999999999, "end": 1384.52, "text": " they have a special characterization here where they say, because it's hierarchical, right. So" }, { "start": 1384.52, "end": 1389.48, "text": " So I'm going to have a hierarchy of normal distributions. This is my top" }, { "start": 1389.48, "end": 1395.04, "text": " hierarchy and then I'm going to sample one sample right here. And then in" }, { "start": 1395.04, "end": 1400.52, "text": " the next layer I'm going to have a normal distribution around that sample" }, { "start": 1400.52, "end": 1406.72, "text": " right here and I'm going to sample from that and so on. So my hierarchical" }, { "start": 1406.72, "end": 1411.04, "text": " normal distribution is going to be always where the next distribution in" }, { "start": 1411.04, "end": 1416.44, "text": " the next layer is dependent on the distribution in the hierarchy." }, { "start": 1416.44, "end": 1424.3999999999999, "text": " And they have a special parameterization where in order for the encoder to produce" }, { "start": 1424.3999999999999, "end": 1429.84, "text": " that, so the encoder has to produce a z of the first layer and then a z of the" }, { "start": 1429.84, "end": 1434.8799999999999, "text": " second layer and so on, in order for the encoder to reproduce that and to be close" }, { "start": 1434.8799999999999, "end": 1440.58, "text": " it must match this distribution and it must match this distribution. 
So if it" }, { "start": 1440.58, "end": 1444.78, "text": " doesn't match this distribution correctly it will kind of sample" }, { "start": 1444.78, "end": 1450.1999999999998, "text": " somewhere else a bit. And then that distribution, that base, will already be" }, { "start": 1450.1999999999998, "end": 1454.8799999999999, "text": " shifted right here. So it thinks that the distribution to match is now this" }, { "start": 1454.8799999999999, "end": 1460, "text": " normal distribution. So you can see that the base is already shifted and that's" }, { "start": 1460, "end": 1467.48, "text": " why their encoder only outputs the delta to the, as you can see here, it" }, { "start": 1467.48, "end": 1476.24, "text": " only outputs the delta to the prior. We define here, we define the Q of the z in" }, { "start": 1476.24, "end": 1482.72, "text": " a given layer as the normal distribution of the mu i, with mu i, that's your" }, { "start": 1482.72, "end": 1488.76, "text": " prior, see that's your prior of that layer, plus a delta mu. And also the" }, { "start": 1488.76, "end": 1496.52, "text": " sigma is the sigma from the prior times a delta sigma that you output. So you're" }, { "start": 1496.52, "end": 1501.48, "text": " kind of saying you're not supposed to output the actual distribution, you're" }, { "start": 1501.48, "end": 1506.6399999999999, "text": " supposed to output the difference of distribution to the prior. Now in layer" }, { "start": 1506.6399999999999, "end": 1511.96, "text": " 0 that's the same thing, right, because the prior is going to be zero mean and" }, { "start": 1511.96, "end": 1519.6, "text": " unit variance. So that's this here, this here will be zero and this here will be" }, { "start": 1519.6, "end": 1525.44, "text": " one. But in all the upper layers this is going to make it easier. So that's one" }, { "start": 1525.44, "end": 1531.16, "text": " trick you have to make this repeated sampling not hurt you as much. The other" }, { "start": 1531.16, "end": 1537.04, "text": " trick they employ here is special regular, sorry, spectral regularization," }, { "start": 1537.04, "end": 1542.4, "text": " which is a regularization where you regularize the top singular value per" }, { "start": 1542.4, "end": 1547.3600000000001, "text": " layer. You can use that, you can compute that with a power iteration, people have" }, { "start": 1547.3600000000001, "end": 1553.0800000000002, "text": " been done doing this before, and also you can build in some normalizing flows." }, { "start": 1553.08, "end": 1560.1999999999998, "text": " So here if we sample the different layers, what we're" }, { "start": 1560.1999999999998, "end": 1564.08, "text": " going to do is we're going to sample all of these things at once, right, they're" }, { "start": 1564.08, "end": 1569.46, "text": " dependent on the upper layer in the hierarchy, but we'll sample them all at" }, { "start": 1569.46, "end": 1575.56, "text": " once. And that means they are not sort of connected to each other. Now if we" }, { "start": 1575.56, "end": 1580.1999999999998, "text": " introduce a flow we'll basically make them all connected to each other and" }, { "start": 1580.2, "end": 1586.04, "text": " build like a singular distribution of them. But I don't want to go too" }, { "start": 1586.04, "end": 1590.24, "text": " much into this because it doesn't gain that much, they say you can just build" }, { "start": 1590.24, "end": 1596.48, "text": " that in if you want. 
Okay so these are all the things that at least they list" }, { "start": 1596.48, "end": 1603.16, "text": " in the method section. Now there are like a lot more that they have" }, { "start": 1603.16, "end": 1610.0800000000002, "text": " to do, but ultimately as you can see right here on these on four of these" }, { "start": 1610.08, "end": 1614.52, "text": " five datasets they achieve state-of-the-art. In fact okay on this" }, { "start": 1614.52, "end": 1620, "text": " dataset no one else has tried, but at least on the other datasets they are" }, { "start": 1620, "end": 1628.24, "text": " very very competitive as you can see right here. And they compare this to, first" }, { "start": 1628.24, "end": 1635.96, "text": " of all, to other models and even other models with and without auto" }, { "start": 1635.96, "end": 1641.96, "text": " regressive flows. And they come pretty close to these auto regressive models. So" }, { "start": 1641.96, "end": 1647.2, "text": " an auto regressive model would be one that generates like one pixel at a time" }, { "start": 1647.2, "end": 1651.92, "text": " conditioned on the other pixels. This model doesn't do that, this model" }, { "start": 1651.92, "end": 1658.24, "text": " generates all pixels at once, so it's not auto regressive. But as you can see it" }, { "start": 1658.24, "end": 1667.8, "text": " beats all the other non or auto regressive models and it gets" }, { "start": 1667.8, "end": 1673.8, "text": " pretty close to the best auto regressive models which are down here. They are" }, { "start": 1673.8, "end": 1681.84, "text": " still better, but the gap is kind of shrinking is what they say. Cool so" }, { "start": 1681.84, "end": 1686.92, "text": " that's the main result. Then they have ablations where they basically, as I said," }, { "start": 1686.92, "end": 1691.8400000000001, "text": " all of these things kind of contribute a little bit, a little bit, a little bit, a" }, { "start": 1691.8400000000001, "end": 1696.4, "text": " little bit to building this bigger and bigger and deeper variational auto" }, { "start": 1696.4, "end": 1702.24, "text": " encoder. So it's hard to say what exactly makes this work because all of it makes" }, { "start": 1702.24, "end": 1708.8400000000001, "text": " it work. And I guess they just kept going until they beat state-of-the-art or until" }, { "start": 1708.8400000000001, "end": 1713.4, "text": " you know they ran out of tricks. Again these are the samples that we looked at" }, { "start": 1713.4, "end": 1718.44, "text": " and I do want to spend some time in the appendix right here because I think it's" }, { "start": 1718.44, "end": 1725.88, "text": " pretty interesting what they do. So first of all they show that their" }, { "start": 1725.88, "end": 1731.0800000000002, "text": " model doesn't remember the training samples. As you can see right here these" }, { "start": 1731.0800000000002, "end": 1735.92, "text": " are always the nearest neighbor from the training sample so the model is fairly" }, { "start": 1735.92, "end": 1745.44, "text": " you know fairly far away from the training samples. But yeah I mean okay" }, { "start": 1745.44, "end": 1752.6000000000001, "text": " maybe it's just me but the left they just look like more kind of more ideal" }, { "start": 1752.6000000000001, "end": 1763.96, "text": " idealized humans like very smooth humans like designer babies. 
Here they show that" }, { "start": 1763.96, "end": 1771.28, "text": " if you use batch norm as you would use it I think regularly where you keep these" }, { "start": 1771.28, "end": 1777.1200000000001, "text": " running stats or you do the batch norm from training then you get into" }, { "start": 1777.1200000000001, "end": 1782, "text": " this kind of degenerate case if you sample at lower temperatures. So the" }, { "start": 1782, "end": 1785.4, "text": " temperature that you sample from describes the width of the Gaussian" }, { "start": 1785.4, "end": 1790.56, "text": " that you ultimately want to sample from. And if you do they have this method to" }, { "start": 1790.56, "end": 1796.1599999999999, "text": " readjust the batch norm statistics which I don't want to go into here but you can" }, { "start": 1796.1599999999999, "end": 1801.12, "text": " you can read it up to basically fix that problem. It is a problem that" }, { "start": 1801.12, "end": 1807.32, "text": " apparently other people have observed as well and their method apparently is you" }, { "start": 1807.32, "end": 1816.32, "text": " know is a is one that manages to do that. Okay lastly there are some more samples" }, { "start": 1816.32, "end": 1823.8, "text": " right here. And yeah this right here this is honestly this is one of the I think" }, { "start": 1823.8, "end": 1828.6799999999998, "text": " one of the most interesting things where they go and since they have this" }, { "start": 1828.6799999999998, "end": 1835.24, "text": " hierarchical model right so here is like z1 it gives right and so that give gets" }, { "start": 1835.24, "end": 1839.36, "text": " you like an image and then there's z2 and that gets you an image and then" }, { "start": 1839.36, "end": 1844.08, "text": " there's z3 and so on and you continuously upscale and hierarchically" }, { "start": 1844.08, "end": 1851.4399999999998, "text": " add the features. Here they say what if what happens if we if we sample z1 once" }, { "start": 1851.4399999999998, "end": 1857.12, "text": " and then we fix it and then we only sample the other ones conditioned on z1" }, { "start": 1857.12, "end": 1863.28, "text": " and here see where you see top scale fixed and you can see there is" }, { "start": 1863.28, "end": 1869.9199999999998, "text": " considerable variation in the image but there is there is not really a large" }, { "start": 1869.92, "end": 1877.04, "text": " scale variation. 
Okay so the general face keeps constant but there are details" }, { "start": 1877.04, "end": 1882.04, "text": " changing as you can see so here the hair is kind of going over the image the" }, { "start": 1882.04, "end": 1888.52, "text": " color is changing here there are a lot of changes the mouth looks slightly" }, { "start": 1888.52, "end": 1894.52, "text": " different as far as I can see but I might be hallucinating here and then if" }, { "start": 1894.52, "end": 1900.12, "text": " you fix continuously the top two scales or the top three scales right here top" }, { "start": 1900.12, "end": 1905.24, "text": " four scales you can see that there are more and more just little details that" }, { "start": 1905.24, "end": 1914.08, "text": " change more and more so yeah so this is we they are operating at five scales" }, { "start": 1914.08, "end": 1920.48, "text": " starting from 8 by 8 up to 128 to 128 in each row we fix the samples at a number" }, { "start": 1920.48, "end": 1925.3600000000001, "text": " of top scales and we sample from the rest of the hierarchy as we can see the" }, { "start": 1925.3600000000001, "end": 1930.16, "text": " long-range global structure is mostly recorded at the top of the hierarchy in" }, { "start": 1930.16, "end": 1936.16, "text": " the 8 by 8 dimensional groups the second scale does apply at some global motive" }, { "start": 1936.16, "end": 1939.96, "text": " does apply some global modifications such as changing eyes hair color skin" }, { "start": 1939.96, "end": 1944.2, "text": " tone the shape of the face the bottom groups capture mostly low-level" }, { "start": 1944.2, "end": 1948.96, "text": " variations however the lowest scale can still still make some subtle long-range" }, { "start": 1948.96, "end": 1953.52, "text": " modifications for example the hair color is slightly modified when we are only" }, { "start": 1953.52, "end": 1957.96, "text": " sampling from the lowest scale in the last row this is potentially enabled" }, { "start": 1957.96, "end": 1962.96, "text": " because of the larger receptive field in our depth wise separate separable" }, { "start": 1962.96, "end": 1975, "text": " residual cell yeah I don't the hair color changes okay slightly maybe I don't" }, { "start": 1975, "end": 1984.28, "text": " know my my eyes are too many faces okay but you know what's certainly the case" }, { "start": 1984.28, "end": 1990.36, "text": " is that their models exhibit much better kind of global unity compared to these" }, { "start": 1990.36, "end": 1994.72, "text": " other samples where you can pretty clearly see like the different sides of" }, { "start": 1994.72, "end": 1999.56, "text": " the faces have little to do with each other and so on and this is the benefit" }, { "start": 1999.56, "end": 2002.64, "text": " that you get from doing this hierarchically so you have part of your" }, { "start": 2002.64, "end": 2007.92, "text": " model that's responsible for kind of the global shape of the image and then that" }, { "start": 2007.92, "end": 2012.48, "text": " keeps it consistent and then you have other parts that are responsible for the" }, { "start": 2012.48, "end": 2021.2, "text": " details okay so I hope this was something to you know that interested you I" }, { "start": 2021.2, "end": 2026.4, "text": " myself it's as I said it's it's an engineering paper so there is lots of" }, { "start": 2026.4, "end": 2030.64, "text": " things described there is not like one jumping idea I guess residual" }, { "start": 2030.64, "end": 2034.66, "text": " connections 
are pretty important and these depth wise convolutions save" }, { "start": 2034.66, "end": 2040.2800000000002, "text": " memory and but also all of the all of the other things that you have to do to" }, { "start": 2040.2800000000002, "end": 2047.1000000000001, "text": " build something like this are pretty pretty interesting yeah I I hope you" }, { "start": 2047.1, "end": 2062.6, "text": " gained something from it and I'll see you next time" } ]
F5aaXrIMWyU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "reinforcement learning", "society", "gini index", "welfare", "taxes", "brackets", "progressive", "regressive", "us", "poor", "rich", "equality", "redistribution", "outer loop", "world", "resources", "labor", "trade", "neural networks", "ppo" ]
Hail the AI Tax Collector! This very visual framework has RL Agents maximize their coins in a tiny world through collecting, building and trading. But at the same time, the government is also an AI trying to maximize social welfare via taxes. What emerges is very interesting. Paper: https://arxiv.org/abs/2004.13332 Blog: https://blog.einstein.ai/the-ai-economist/ Abstract: Tackling real-world socio-economic challenges requires designing and testing economic policies. However, this is hard in practice, due to a lack of appropriate (micro-level) economic data and limited opportunity to experiment. In this work, we train social planners that discover tax policies in dynamic economies that can effectively trade-off economic equality and productivity. We propose a two-level deep reinforcement learning approach to learn dynamic tax policies, based on economic simulations in which both agents and a government learn and adapt. Our data-driven approach does not make use of economic modeling assumptions, and learns from observational data alone. We make four main contributions. First, we present an economic simulation environment that features competitive pressures and market dynamics. We validate the simulation by showing that baseline tax systems perform in a way that is consistent with economic theory, including in regard to learned agent behaviors and specializations. Second, we show that AI-driven tax policies improve the trade-off between equality and productivity by 16% over baseline policies, including the prominent Saez tax framework. Third, we showcase several emergent features: AI-driven tax policies are qualitatively different from baselines, setting a higher top tax rate and higher net subsidies for low incomes. Moreover, AI-driven tax policies perform strongly in the face of emergent tax-gaming strategies learned by AI agents. Lastly, AI-driven tax policies are also effective when used in experiments with human participants. In experiments conducted on MTurk, an AI tax policy provides an equality-productivity trade-off that is similar to that provided by the Saez framework along with higher inverse-income weighted social welfare. Authors: Stephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C. Parkes, Richard Socher Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, today we're going to find out why AI is much better at governing people, why poor people really should pay more taxes, and how Donald Trump is just a normal human. Alright, we'll dive into it.

We're looking at the AI Economist by Salesforce Research. Now, Salesforce Research has created a kind of simulated world environment where they can place agents, and the agents can move around, collect resources, trade those resources, and use those resources to build houses, which earns them coins. Each agent wants to maximize its own coins, but there's also the government, and the government can set taxes. So it collects money from everyone and redistributes it. And the goal is going to be that the AI handles both the agents and the taxes, and we want to maximize the social welfare of the entire population. Alright, that's the goal.

So the paper here is called "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies" by Stephan Zheng, Alexander Trott, and other people from Salesforce Research and Harvard University.

As I said, this is a simulated environment, and it works like this. There is a 2D plane, kind of like a game playing field, and in this game there are agents. Here you can see the agents; there are always four agents. Where? Oh, down here. What are you doing in the corner? Come on, be productive.

The agents are in this world and they can do certain things; they have certain actions at their disposal. First of all, they can move around: down, left, right, and so on. Whenever they walk past a resource tile, they collect the resource. This is stone, and this is wood; so there are two kinds of resources. The next action the agents have is building a house: one wood and one stone will create one house, and the house gives you coins. So this is a house, and that will give you coins. But how many coins you get differs from agent to agent, and this represents the agents' different skill levels. This is an abstraction, and the economic theory behind it is that one of the main drivers of income inequality is that people are skilled differently, and some are therefore able to convert one unit of labor into more money than a lower-skilled worker. This is represented here by the fact that maybe if this agent builds the house, they'll get 50 coins, but if this agent builds the same house, they'll only get 10 coins. So we'll call this one a high-skilled worker and this one a low-skilled worker.

Now the last thing (sorry, I said "the last thing" before, but the very last thing the agents can do) is trade. So if one agent has too many resources and another doesn't have enough, they can trade those resources among each other for coins. Once you build a house, you collect some coins; you can then either go and collect more resources, or you can use those coins to buy resources off of other people. This one is unlucky: no coins, no houses, and no resources. Look at them.

Oh yeah, you also can't move across the water here; you can only move on the grass. You also can't move through a house, which gives you some interesting abilities, because you can just build a house right here. And you can't move over other players. But the rules are pretty simple. And the goal here is for the agents to maximize the number of coins they get in 1000 steps.
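To make the skill mechanic concrete, here is a tiny toy sketch of just the gather-and-build part of those rules. It is purely illustrative: the class and the skill numbers are mine, not the actual Salesforce Foundation environment.

```python
class ToyEconomyAgent:
    def __init__(self, build_skill):
        self.skill = build_skill   # coins earned per house built
        self.coins = 0
        self.wood = 0
        self.stone = 0

    def build_house(self):
        # One wood + one stone -> one house -> `skill` coins.
        if self.wood >= 1 and self.stone >= 1:
            self.wood -= 1
            self.stone -= 1
            self.coins += self.skill

# A high-skill and a low-skill worker turn identical labor
# into very different incomes:
rich = ToyEconomyAgent(build_skill=50)
poor = ToyEconomyAgent(build_skill=10)
for agent in (rich, poor):
    agent.wood, agent.stone = 1, 1
    agent.build_house()
print(rich.coins, poor.coins)   # 50 vs. 10 for the exact same house
```

That per-house payout difference is the whole source of income inequality in the simulation; everything else (trading, taxes) acts on top of it.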
So the number H here is 1000, which is the number of steps that the agents can take before the game is over and restarts. Each agent is using reinforcement learning in order to learn how to achieve the maximum number of coins. Now, the policy is of course going to be different depending on whether it's a high-skilled or a low-skilled worker.

The catch here is that outside of this there is the government. The government here (let's draw this big house with the flag of our fictitious nation, which is like this; that's the flag) will observe what's happening here, and it will issue taxes. So it will issue a tax schedule. Now, how do you imagine that? Imagine the government says something like this: for the first 10 coins you earn, you owe us 5% of that. For the next 10 coins, so from 10 to 20, you owe us 10%, and so on. So if you earn even more, you owe us a higher and higher percentage of those extra coins. This is what you might know as a progressive tax schedule: the more you earn, the more you pay, percentage-wise, on that extra earned money. This is what you might be used to, but there are other tax schedules, and the exact schedule you see here, meaning how many percent for which bracket of coins, is the action of the government.

So the government decides on the taxes, and the taxes are just collected from income. If an agent earns these coins, it has to pay taxes to the government, and the government will redistribute all the taxes it has collected equally among the population. So if you pay a lot, you might lose through this process, and if you only pay a little in taxes, you might gain through it. So that's it; that is the basic premise of the game.

The agents are using reinforcement learning, and I believe the novelty of this paper is that the government is now also using reinforcement learning in order to determine the optimal tax policy. There is this inner loop here, and there is this outer game where the government also tries to maximize its own objective via RL. And what does the government try to maximize? Good question. It is a measure called social welfare.

Social welfare in this paper consists of two things, and they have this way down in the paper. First of all, economic productivity, which basically just means how many coins have been produced. It doesn't matter by whom; it's just the total amount of coins produced. The second one is income equality, and this is related to the Gini index. If you plot the cumulative distribution of wealth, a fully equal society would be a straight line, because 50% of the people would have 50% of the money, and so on. But almost all real societies have something like this, where 50% of the people might have 10% of the money and the other 50% of the people have the other 90%. And the measure of inequality is this area here: this is called the Gini index, and one minus this is what this paper uses as an equality measure. So the higher this number, the more equal the society is in terms of its income distribution.

Now, what is actually optimized for here is this thing: equality times productivity. You want both to be high, your income equality and your productivity. There's a trade-off here, of course, but there are multiple ways to trade that off, and each will give you a different outcome. They call this the social welfare function. And that's the thing that the government RL agent optimizes for.
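To make that objective concrete, here is a hedged sketch of the government's accounting: apply a bracketed marginal tax schedule, redistribute the revenue equally, and score the outcome with equality times productivity. The bracket numbers are the toy ones from above, the function names are mine, and the n/(n-1) rescaling of the Gini-based equality term is how I recall the paper defining it.

```python
import numpy as np

def marginal_tax(income, brackets):
    # brackets: list of (upper_bound, rate); income above the last
    # bound is taxed at the last rate.
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def gini(incomes):
    # Mean-difference form; equals twice the area between the Lorenz
    # curve and the diagonal of perfect equality.
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

def social_welfare(incomes):
    n = len(incomes)
    equality = 1 - gini(incomes) * n / (n - 1)  # fully unequal -> 0
    productivity = float(np.sum(incomes))
    return equality * productivity

# Progressive toy schedule: 5% on the first 10 coins, 10% on the
# next 10, 20% on everything above that.
schedule = [(10, 0.05), (20, 0.10), (np.inf, 0.20)]

incomes = np.array([100.0, 30.0, 20.0, 10.0])            # pre-tax coins
taxes = np.array([marginal_tax(c, schedule) for c in incomes])
post_tax = incomes - taxes + taxes.sum() / len(incomes)  # equal redistribution

print(social_welfare(incomes), social_welfare(post_tax))
```

Note that a static calculation like this always makes redistribution look good, because it holds productivity fixed; the whole point of the simulation is that taxes also change how much the agents bother to produce.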
So you can see here already that the free market is the most productive, meaning it produces the most coins, because a free market means no taxes. If you have no taxes, then people are basically encouraged to earn more money, because they don't have to pay taxes on it; as soon as you tax them, they're less encouraged to earn more. Therefore, with no taxes, the most coins will be earned in total. But the equality suffers: the equality is the lowest among the systems considered. If you compare that to the AI Economist, the AI Economist achieves the highest social welfare. It achieves the highest equality, but it doesn't suffer as much in productivity as the other systems here.

And the baseline systems are, first of all, the US federal system. This is not particularly tied to the US; it's basically what most systems currently in the world look like, a progressive tax system. And then the Saez formula, which I believe is an economic-theory-based system with a regressive tax schedule. You can see them down here: the US federal schedule is progressive, meaning the more you earn, the more you pay percentage-wise, while the Saez formula is regressive, which generally means the more you earn, the less you pay percentage-wise. I believe it was derived under some assumptions to be the optimal tax distribution. And the AI Economist, we'll come to that in a second.

Let's actually just look at one of these games first and see how it plays out. The cool thing here is that they have pretty flashy animations, so you can watch how one of these games turns out. This is a free market game, and you can see the agents moving around, collecting things, building houses. You might notice that one of the agents, namely agent one, is just building all of the houses, and generally just kind of being a dick, being in everyone's face and building things everywhere. The others don't, or only build very few; the light blue one on the bottom left builds some houses. On the right, you can see how the distribution of wealth is structured, and you see agent one ends up with most of the wealth. The size of the circle, I think, is the total productivity, so you can see this grows over time, mainly because agent one becomes so rich.

And if you analyze what's happening here... yeah, they have a graph up here. It is very interesting what happens. This is kind of the same game: agent one here is this orange dot, and agents two, three, and four are these dots here. This graph here is coins from trading, so how much money they win or lose from trading. The green bars are trading wood, and the brown bars are trading stone. You see that agent number four, which is the lowest-skilled (the skill is just determined at the beginning of the episode), will make basically all of its coins by selling wood. Agent three will make all of its coins by selling stone. Agent two will collect both and sell both. And agent one will just spend money on trading. So you have a specialization here: agent one, the highest-skilled one right here, will buy resources in order to build more houses, because it clearly profits from building lots and lots of houses. So it will use that money to buy more resources rather than go and collect them.
Meanwhile, all the other agents basically forgo building houses; they just collect the resources and trade them away to agent one, which is more profitable for them than building houses themselves. So you see this kind of specialization emerging in these games, which I find pretty cool: a really stark division of labor emerging from this very, very small set of rules. And you can analyze the game in different ways; they have a few more plots where it becomes quite apparent that these agents specialize. Here you see resources collected for the lowest-skilled and the highest-skilled laborers: the lowest-skilled ones mainly collect resources, while the highest-skilled laborer mainly builds things. It doesn't collect resources, but its net income from building is really high, while everyone else barely builds at all. All right, so we have a division of labor emerging.

Now, that was the free market. Let's actually compare the different algorithms. If you look at social welfare, this thing here, equality times productivity, you can see that over the training progress the AI economist will outperform all of the other systems: the free market, the US federal tax system, and the Saez formula, if trained for long enough. Which is to be expected, right? If you point RL at a cost function, it will optimize that cost function. But it's pretty cool to see that there's a lot of headroom over what we currently have.

Now let's look at some of the strategies it comes up with. What do these games look like when the AI has imposed different tax strategies? This one is with the Saez strategy; you can see that here. Again, you see this inequality emerging, with the yellow player building most of the houses. With the AI economist, there is again inequality, but you can see in the distribution that agent one only ends up with about half of the wealth, whereas if you compare this to the free market (the game we saw before), agent one ends up with something like two thirds of the wealth. So there isn't qualitatively that much of a difference in play, but there is in the end result.

All right, let's look at what these policies actually come up with. What is the tax policy the AI produces, the one that outperforms on this social welfare metric? This is very interesting. First of all, you see that it zigzags: down, up, down, up, which is already weird. The first really weird thing is the spike at the very bottom. What is that thing? Those are the poorest people in your society, and you're taxing them the highest. Just imagine this: you're downtrodden by life, abandoned by society, you have no money, no house, no nothing. You're just trying to get a job, just earning a little bit of money so you can buy a cheeseburger, and then the government comes: give us that. Give us that money. Come on. So basically, this system's message to the poor is just: F you. F you, the poor. Now, the reason why this happens is pretty clear. It's not that the government makes meaningful money from the poor people, no matter how high it taxes them.
Rather, it is an incentive structure to make them move over to the somewhat more productive population, because it's kind of assumed here that even the lowest-skilled ones can move up a bit if you just tax them hard enough at the low brackets. And this is where you just have to realize how hard it is, almost impossible I believe, to encapsulate what we really want from a system into a formula, into a cost function to be optimized. It is so incredibly hard. You see it here: of course this is going to result in a better measured social outcome, but it just doesn't feel right to tax the poor at, what, 60%? Okay, so F the poor. Then you get to this next level, and interestingly, if you earn even more, you're taxed highly again. This part we're kind of used to: you earn little, you pay little; you earn more, you pay more. But then comes this entire valley here. What's up with that? Like, WTF. This is now, of course, the same reasoning as in the Saez formula: for the rich people, you want to tax them less so that they are more productive and generate more coins, and even though you tax them less percentage-wise, they end up paying more money in absolute terms, because you basically encourage them to produce more. That, I guess, is the reasoning behind it.

But you have to recognize what's happening here. What are we optimizing? We're optimizing productivity times equality. And what do we get? You get two big basins of attraction, one here and one here, and that means this algorithm favors a two-class society. I believe this is partially due to the limitations of this simulation: the fact that you only have four agents, the fact that you can only do two things, collect or build. It encourages a two-class society, the specialization we saw. These here are the moneymakers, and these here are the collectors, and it is very hard to move from one group to the other. Because if you earn a few more coins as a collector, you land right here, where you're heavily discouraged; to escape, you'd have to move all the way over here. The people who are already over here, if they earn an extra coin, it doesn't bother them too much, so they're very encouraged to earn more money. But the poorer people on this side are basically discouraged from earning more, because the system needs them to stay at that collector level. So the system encourages the two-class society because we have not built social mobility into the equation; we have not built a measure of social mobility into the cost function. And therefore the AI doesn't care that the poor people stay poor and the rich people stay rich. It just knows that this is the best outcome for society overall, given the cost function it was handed. Again, this just doesn't seem fair to us; what we want is for someone to be able to make it over here, even if they start out from the bottom. We'd have to build that in. So we have a system that says F the poor, with no social mobility.
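To see the valley numerically, here's a toy calculation with an invented zigzag schedule (the rates are made up for illustration, not read off the paper's plot): in the high-tax middle brackets an extra pre-tax coin is mostly taxed away, so it never pays to earn just a bit more. You either stay below the valley or jump all the way past it.

```python
# Toy illustration of the "valley": with an invented zigzag marginal schedule,
# the take-home value of one extra coin collapses in the middle brackets.
rates = [0.60, 0.10, 0.45, 0.55, 0.12, 0.10, 0.75]  # hypothetical, per bracket
width = 10                                           # coins per bracket

for i, rate in enumerate(rates):
    lo, hi = i * width, (i + 1) * width
    print(f"{lo:3d}-{hi:3d} coins: keep {1 - rate:.2f} of each extra coin")
```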
And then, what's happening at the very end? This is beautiful. The very rich people, the moneymakers: the monopoly guy, top hat, monocle-wearing, Scrooge McDuck bathing in coins. This is where the government makes its money. And the discrepancy is really stunning, because you could also argue: hey, why don't we apply the same reasoning here as we applied over there? Why is it not the case that if you tax the rich lower, they'll end up paying more in absolute terms, and so on? Again, I believe this might just be a result of how the simulation is set up. So let's move on quickly; we'll come back to this.

Here's what I find particularly interesting about this paper, and what just confuses the heck out of me: it is a doubly periodic game, an inner/outer-loop game. What do I mean by that? They have these episodes. Here is the start, and here is the end, and they subdivide this into, as we said, 1000 steps. An agent can do step, step, step, and perform its actions; there are 1000 steps, and the agent just tries to collect as many coins as possible. This is your classic RL problem. But they also divide this into 10 of what they call periods (I'll just draw four). This thing here is one period, and the whole thing is an episode. The purpose of the period is that at the beginning of each period, the government can impose a new tax schedule. So the government doesn't fix the taxes once; it can change the taxes over the course of the episode.

Now, this is the part where I just don't see why. You're now formulating the tax-setting objective as a sequential decision-making problem. It's like the government saying: well, today we have high taxes, but tomorrow we have low taxes, and the day after that we have high taxes again. It just doesn't make sense for any government to do this. What you should do is set taxes once at the beginning of the episode, see how that turns out, and then optimize your tax schedule across episodes. Because all we ever look at are the taxes at the end: the schedules we've examined are just the last taxes the AI issued. We don't know the dynamics of what happens in between; what the AI does in between might actually be super wild. I just don't see the justification for framing this as a sequential decision problem, and I believe it's just over-engineered. Someone wanted a reason (and here is the architecture) to put an LSTM in there. Someone was thinking: well, RL, that means sequential decisions and so on. The RL in this outer loop, the way I would propose it, would just be one decision per episode, which is a bandit problem. And as we all know, bandits are boring, so they didn't want this to be a bandit problem; they wanted it to be a sequential problem, and that's why they made this period thing, which I find dumb.

Another factor here (and I'll tell you how this relates to the weird fact that the richest bracket is taxed so high) is this: look at the architecture. It's a CNN, an MLP, an LSTM, and an MLP, for the planner and for the agents as well. And I can tell you right now, the CNN has two layers. Two. And the LSTM has something like 128 units in its hidden state. These are tiny, tiny models. And it is not model-based RL; it's model-free RL, proximal policy optimization. So the ability of these agents, or of the planner, to learn anything substantial here is, I believe, just not that great.
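For concreteness, here is a skeleton of that inner/outer-loop structure as I understand it. All objects and method names are placeholders of mine, not the paper's actual API:

```python
# Skeleton of the doubly periodic structure: H = 1000 agent steps per episode,
# split into 10 periods; the planner re-issues a tax schedule at each period
# boundary. env, agents, and planner are placeholder objects.

H, NUM_PERIODS = 1000, 10
PERIOD_LEN = H // NUM_PERIODS  # 100 steps per period

def run_episode(env, agents, planner):
    obs = env.reset()
    for step in range(H):
        if step % PERIOD_LEN == 0:
            # outer loop: the planner's "action" is an entire tax schedule,
            # chosen once per period -- this is what makes it sequential
            # rather than a one-shot bandit decision per episode
            env.set_taxes(planner.act(env.world_state()))
        # inner loop: each agent moves, gathers, builds, or trades
        actions = {name: agent.act(obs[name]) for name, agent in agents.items()}
        obs, rewards, done, info = env.step(actions)
        if done:
            break
```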
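And to give a feel for how small these models are, here is a rough PyTorch sketch of a planner network with that shape. Only the two conv layers and the 128 LSTM units are sizes mentioned above; every other dimension is my guess:

```python
# Rough sketch of the planner network as described: a 2-layer CNN over the
# spatial map, an MLP, a 128-unit LSTM, and an output MLP. All sizes other
# than the two conv layers and the 128 hidden units are guesses.
import torch
import torch.nn as nn

class PlannerNet(nn.Module):
    def __init__(self, in_channels, map_size, flat_dim, num_actions):
        super().__init__()
        self.cnn = nn.Sequential(  # two conv layers, as stated
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        cnn_out = 32 * map_size * map_size
        self.mlp = nn.Sequential(nn.Linear(cnn_out + flat_dim, 128), nn.ReLU())
        self.lstm = nn.LSTM(128, 128, batch_first=True)  # 128 hidden units, as stated
        self.head = nn.Linear(128, num_actions)          # logits over tax settings

    def forward(self, spatial, flat, hidden=None):
        # spatial: (B, C, H, W) map observation; flat: (B, flat_dim) extras
        x = torch.cat([self.cnn(spatial), flat], dim=-1)
        x, hidden = self.lstm(self.mlp(x).unsqueeze(1), hidden)
        return self.head(x.squeeze(1)), hidden
```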
So I believe these are rather dumb agents. You can see that the tax rates given by the planner are fed into the agent model, but I don't think an agent with such a small model can actually adjust to these inputs, because you'd have to do some pretty good logic to go from these tax brackets to how you should act right now. What I think is happening is that the agent is just kind of aware of its skill level and, through its rewards, tries to maximize its future rewards. When the government changes the tax rate, I'm almost positive the agent will not directly change its response to that. It will just observe that something is happening in the world and maybe adjust its overall strategy a little bit, but not in that particular instance; the response will be delayed, or an overall strategy shift. And this might be one of the reasons why the tax brackets come out screwed up. Because, who says: if I were this AI, what I could do is, in periods one through nine, make the taxes really low for the rich people, so I just encourage everyone to make more money. Come on, become more productive, and I get the benefits of that. And then in the last period, I just freaking jack up that final tax bracket. You have lots of money, give it to me. Then I redistribute what I collected to the poor people in the very last period, and thereby I achieve my goal on this social welfare function. Of course, this is not sustainable, because all the rich people would just get screwed by that and move down again, but it's the end of the episode, so what are they going to do? So the way this is framed, with just two different ways to get coins, plus this periodical nature of the outer loop, might all lead to something that slowly becomes more and more uninterpretable. Still cool, though.

All right, the final thing: they do this with humans. Yes, real humans. They let humans play; they have this interface here, and the humans behave quite differently from the AI agents in a few ways. But look at this: "AI economist" here refers to the tax strategy, not the agents. They take the developed tax strategies and let humans be the agents, so you can observe whether the tax strategies also work when real humans act in this environment rather than RL agents. Now compare how the humans act: they just build their houses in neat little packets or straight lines or things like that. I just find that very funny.

There are some things lacking in the human environment, though, that I find really important. First of all, there is no cost for moving, which I guess is minor. But second of all, there is no trade. And I think that just kills the whole experiment, because now, of course, the wealth is just going to be proportional to how many coins you get per house, which differs per agent. To me, that makes it a pointless experiment: if you can't trade, the outcome is just predictable. And I don't think human behavior really changes in response to the different tax brackets. I think they'll just make money however they can; they'll build more houses until it becomes unprofitable, and that's it.
So I don't see the value of these experiments, even though they show that, again, the AI economist outperforms the other tax strategies on this equality-times-productivity metric, and also on another metric they measure. The second problem I have with the human experiments is the distribution they use. They say, well, this is one of the distributions the AI came up with. But notice the lack of the F-you-poor-people spike, and the lack of the big spike for the rich people, which I find are the two defining features of the other distribution. So I think there's quite a bit of variance in what this AI comes up with, or maybe it's just because of the periodicity. It's really confusing, because they show and discuss that other distribution, and now all of a sudden they say, well, we use this distribution that was also created by our AI, and it seems qualitatively quite different.

In any case, let's look at how the humans behave under the different strategies. Under the Saez formula, you see the light blue person spreading out a bit, probably playing correctly, while everyone else just neatly builds their houses. Humans are so territorial. Most of them stay in their little corner, like: this is my corner, I'm going to build my houses here in a nice pattern. And under the AI economist, again, you don't really see different behavior just because the taxes are different; the qualitative behavior is much the same, just building straight lines. I think the difference is more between the individual humans (it's not always the same humans playing), and you can see that the humans clearly haven't trained on or discovered an optimal strategy. They're just doing something, and what you're seeing is just the result of the taxation, not different behavior.

And this here, this is the best. Watch the human on the bottom right: first they do something, then they just wall up the other players. This is the best: I am going to build a big, beautiful wall, and I'm going to have the orange guy pay for it. It's Donald Trump in the game. Amazing. And look, at the end they actually manage to lock in the other players so they can't move anymore. Donald Trump wins. Amazing. Though actually, the yellow player appears to win economy-wise; but what do you want with lots of money if you can't move? So again, I find these human experiments rather pointless, because you disable trade and you don't train the humans to find a good strategy.

All in all, I find the entire paper pretty cool. Code is going to be released, they promise, and they have checked that they have no ethical problems. Of course, I invite you to check out the paper. If you like content like this, please subscribe, share, and leave a comment with what you think. Thank you so much for listening, and bye bye.
[ { "start": 0, "end": 5.76, "text": " Alright, today we're going to find out why AI is much better at governing people, why" }, { "start": 5.76, "end": 12.8, "text": " poor people really should pay more taxes, and how Donald Trump is just a normal human." }, { "start": 12.8, "end": 15.16, "text": " Alright, we'll dive into it." }, { "start": 15.16, "end": 19.400000000000002, "text": " We're looking at the AI Economist by Salesforce Research." }, { "start": 19.400000000000002, "end": 26.92, "text": " Now Salesforce Research has kind of created a simulated world environment where they can" }, { "start": 26.92, "end": 32.36, "text": " place agents in it and the agents, they can move around, they can collect resources, they" }, { "start": 32.36, "end": 39.36, "text": " can trade those resources, and they can use those resources to build houses and that will" }, { "start": 39.36, "end": 41.56, "text": " earn them coins." }, { "start": 41.56, "end": 47.44, "text": " And each agent wants to maximize its own coins, but also there's the government and the government" }, { "start": 47.44, "end": 49.24, "text": " can set taxes." }, { "start": 49.24, "end": 52.980000000000004, "text": " So they collect money from everyone and they redistribute it." }, { "start": 52.98, "end": 61.31999999999999, "text": " And the goal now is going to be that the AI handles both the agent and the taxes and" }, { "start": 61.31999999999999, "end": 65.92, "text": " we want to maximize the social welfare of the entire population." }, { "start": 65.92, "end": 68.1, "text": " Alright, that's the goal." }, { "start": 68.1, "end": 73.96, "text": " So the paper here is called The AI Economist Improving Equality and Productivity with AI" }, { "start": 73.96, "end": 81.94, "text": " Driven Tax Policies by Stefan Cheng and Alexander Trott and other people from Salesforce Research" }, { "start": 81.94, "end": 85, "text": " and Harvard University." }, { "start": 85, "end": 94.16, "text": " So as I said, this is a simulated environment and the simulated environment works like this." }, { "start": 94.16, "end": 101.96, "text": " There is a 2D plane, kind of like a game playing field and in this game there are agents." }, { "start": 101.96, "end": 105.6, "text": " Here you can see the agents, there are always four agents." }, { "start": 105.6, "end": 106.6, "text": " Where?" }, { "start": 106.6, "end": 108.96, "text": " Oh, down here." }, { "start": 108.96, "end": 111.24, "text": " What are you doing in the corner?" }, { "start": 111.24, "end": 117.08, "text": " Come on, be productive." }, { "start": 117.08, "end": 121.36, "text": " The agents are in this world and they can do certain things." }, { "start": 121.36, "end": 123.11999999999999, "text": " They have certain actions at their disposal." }, { "start": 123.11999999999999, "end": 124.88, "text": " So first of all, they can move around." }, { "start": 124.88, "end": 128.35999999999999, "text": " They can move down, left, right and so on." }, { "start": 128.35999999999999, "end": 131.95999999999998, "text": " Whenever they walk past a resource tile, they collect the resource." }, { "start": 131.95999999999998, "end": 134.29999999999998, "text": " This is stone and this is wood." }, { "start": 134.29999999999998, "end": 136.56, "text": " So there are two kinds of resources." }, { "start": 136.56, "end": 140.78, "text": " And then the last actions the agents have is building a house." 
}, { "start": 140.78, "end": 147.52, "text": " One wood and one stone will create one house and the house gives you coins." }, { "start": 147.52, "end": 151.72, "text": " So this is a house and that will give you coins." }, { "start": 151.72, "end": 157.3, "text": " But how much coins you get is different from agent to agent and this represents the agents" }, { "start": 157.3, "end": 159.44, "text": " different skill levels." }, { "start": 159.44, "end": 167.28, "text": " This is an abstraction and the kind of economic theory behind it is that the income inequality" }, { "start": 167.28, "end": 174.16, "text": " in people, one of the main drivers of it is that they are skilled differently and therefore" }, { "start": 174.16, "end": 185.76, "text": " are able to convert one unit of labor into more money than another lower skilled worker." }, { "start": 185.76, "end": 190.96, "text": " So this is here represented by the fact that maybe if this agent here builds the house," }, { "start": 190.96, "end": 193.12, "text": " they'll get 50 coins." }, { "start": 193.12, "end": 197.48000000000002, "text": " But if this agent here would build the same house, they'll only get 10 coins." }, { "start": 197.48000000000002, "end": 203, "text": " So we'll call this here a high skilled worker and this here a low skilled worker." }, { "start": 203, "end": 206.88, "text": " Now the last thing, sorry, I saw the last thing before, but the very last thing the" }, { "start": 206.88, "end": 209.32, "text": " agents can do is they can trade." }, { "start": 209.32, "end": 214.4, "text": " So if one agent has too many resources and the other one has not enough, they can trade" }, { "start": 214.4, "end": 218.28, "text": " those resources among each other for those coins." }, { "start": 218.28, "end": 222.28, "text": " So once you build a house, you collect some coins, you can then either go and collect" }, { "start": 222.28, "end": 231.96, "text": " more resources or you can use those coins in order to buy resources off of other people." }, { "start": 231.96, "end": 233.32, "text": " This is unlucky." }, { "start": 233.32, "end": 237.64, "text": " No coins, no houses, and no resources." }, { "start": 237.64, "end": 238.64, "text": " Look at them." }, { "start": 238.64, "end": 243.44, "text": " Oh yeah, so you also can't move across the water here." }, { "start": 243.44, "end": 245.48, "text": " You can only move on the grass." }, { "start": 245.48, "end": 252.12, "text": " You can also not move through a house, which gives you some interesting abilities because" }, { "start": 252.12, "end": 255.4, "text": " you can just build a house right here." }, { "start": 255.4, "end": 260.28000000000003, "text": " And yeah, so and you can't move over other players." }, { "start": 260.28000000000003, "end": 263.16, "text": " But these are the rules are pretty simple." }, { "start": 263.16, "end": 268.62, "text": " And the goal here is for the agents to maximize the number of coins they get in 1000 steps." }, { "start": 268.62, "end": 275.28000000000003, "text": " So the number H here is 1000, which is the number of steps that the agents can take before" }, { "start": 275.28000000000003, "end": 278.52, "text": " the game is over and it restarts again." }, { "start": 278.52, "end": 284.76, "text": " So each agent is using reinforcement learning in order to learn how to achieve the maximum" }, { "start": 284.76, "end": 286.24, "text": " number of coins." 
}, { "start": 286.24, "end": 290.32, "text": " Now the policy is of course going to be different depending on whether that is a high or a low" }, { "start": 290.32, "end": 292.15999999999997, "text": " skilled worker." }, { "start": 292.15999999999997, "end": 297.4, "text": " The catch here is that outside of this there is the government, the government here, let's" }, { "start": 297.4, "end": 308.4, "text": " draw this big house with the flag of our fictitious nation, which is like this." }, { "start": 308.4, "end": 310, "text": " That's the flag." }, { "start": 310, "end": 319.12, "text": " And the government will observe what's happening here and they will issue a tax taxes." }, { "start": 319.12, "end": 321.71999999999997, "text": " So it will issue a tax distribution." }, { "start": 321.71999999999997, "end": 323.44, "text": " Now how do you imagine that?" }, { "start": 323.44, "end": 329.84, "text": " So if you imagine the government says something like this for the first 10 coins you own," }, { "start": 329.84, "end": 334.58, "text": " you owe us 5% of that." }, { "start": 334.58, "end": 340.84, "text": " For the next 10 coins, so from 10 to 20 you earn, you owe us 10% and so on." }, { "start": 340.84, "end": 347.5, "text": " So if you earn even more, you owe us more and more percent of those extra coins." }, { "start": 347.5, "end": 350.08, "text": " This is what you might know as a progressive tax schedule." }, { "start": 350.08, "end": 356.47999999999996, "text": " The more you earn, the more percentage wise you pay on that extra earned money." }, { "start": 356.47999999999996, "end": 362.56, "text": " This is what you might be used to, but there are other tax schedules and the exact histogram" }, { "start": 362.56, "end": 369.14, "text": " you see or the exact how many percent for which amount of coins, that is the action" }, { "start": 369.14, "end": 370.14, "text": " of the government." }, { "start": 370.14, "end": 375.68, "text": " So the government decides on the taxes and the taxes are just collected from the income." }, { "start": 375.68, "end": 383.76, "text": " So if an agent earns these coins, then it has to pay taxes to the government and the" }, { "start": 383.76, "end": 389.96, "text": " government will redistribute all the taxes it has collected equally among the population." }, { "start": 389.96, "end": 394.28, "text": " So if you pay a lot, you might lose through this process and if you just pay a little" }, { "start": 394.28, "end": 398.96, "text": " taxes you might gain through this process." }, { "start": 398.96, "end": 400.88, "text": " So that's it." }, { "start": 400.88, "end": 403.79999999999995, "text": " That is the basic premise of the game." }, { "start": 403.79999999999995, "end": 409, "text": " The agents are using reinforcement learning and I believe the newness of this paper is" }, { "start": 409, "end": 415.4, "text": " also that the government now is using reinforcement learning in order to determine the optimal" }, { "start": 415.4, "end": 417.76, "text": " tax policy." }, { "start": 417.76, "end": 422.4, "text": " There is kind of this inner loop here and there is this outer game where the government" }, { "start": 422.4, "end": 425.32, "text": " also tries to maximize the RL." }, { "start": 425.32, "end": 427.84, "text": " And what does the government try to maximize?" }, { "start": 427.84, "end": 428.96, "text": " Good question." }, { "start": 428.96, "end": 434.32, "text": " It is a measure that's called social welfare." 
}, { "start": 434.32, "end": 439.02, "text": " Now social welfare consists of two things and they have this here way down in the paper." }, { "start": 439.02, "end": 442.15999999999997, "text": " Social welfare in this paper consists of two things." }, { "start": 442.16, "end": 449.28000000000003, "text": " First of all, economic productivity, which basically just means how many coins has anyone" }, { "start": 449.28000000000003, "end": 450.44, "text": " produced." }, { "start": 450.44, "end": 453.92, "text": " It doesn't matter who, but just the total amount of coins produced." }, { "start": 453.92, "end": 459.84000000000003, "text": " The second one is income equality and this is related to the Gini index." }, { "start": 459.84000000000003, "end": 465.3, "text": " So if you plot the cumulative distribution of wealth, a fully equal society would be" }, { "start": 465.3, "end": 473.46000000000004, "text": " a straight line because 50% of the people would have 50% of the money and so on." }, { "start": 473.46000000000004, "end": 480.56, "text": " But almost all true societies have something like this where 50% of the people might have" }, { "start": 480.56, "end": 486.92, "text": " 10% of the money and the rest 50% of the people has the other 90%." }, { "start": 486.92, "end": 491.44, "text": " And the measure of inequality is this area here." }, { "start": 491.44, "end": 498.52, "text": " This is called the Gini index and 1 minus this area is what this paper has as an equality" }, { "start": 498.52, "end": 499.52, "text": " measure." }, { "start": 499.52, "end": 506.6, "text": " So the higher this number, the more equal is the society in terms of their income distribution." }, { "start": 506.6, "end": 512.3, "text": " Now what is actually optimized for here is this thing, equality times productivity." }, { "start": 512.3, "end": 518.36, "text": " So you want both to be high, your income equality and your productivity." }, { "start": 518.36, "end": 525.04, "text": " There's a trade off here of course, but you can have multiple ways to trade that off and" }, { "start": 525.04, "end": 528.2, "text": " that will give you the different thing." }, { "start": 528.2, "end": 532.44, "text": " They call this the social welfare function." }, { "start": 532.44, "end": 537.44, "text": " And that's the thing that the government RL agent optimizes for." }, { "start": 537.44, "end": 543.88, "text": " So you can see here already the free market, even though it's the most productive, produces" }, { "start": 543.88, "end": 549, "text": " the most coins because if you have a free market means no taxes." }, { "start": 549, "end": 555.68, "text": " If you have no taxes, then people are basically encouraged to earn more money because they" }, { "start": 555.68, "end": 557.24, "text": " don't have to pay taxes on them." }, { "start": 557.24, "end": 561.64, "text": " As soon as you tax them, they're less encouraged to earn more money." }, { "start": 561.64, "end": 567.2, "text": " And therefore if you have no taxes, the most coins will be earned in total." }, { "start": 567.2, "end": 569.06, "text": " But the equality suffers." }, { "start": 569.06, "end": 573.76, "text": " So the equality is the lowest among these things considered." }, { "start": 573.76, "end": 581.4399999999999, "text": " If you compare that to the AI economist, the AI economist achieves the highest social welfare." 
}, { "start": 581.4399999999999, "end": 587.72, "text": " It achieves the highest equality, but it doesn't suffer as much in productivity as other systems" }, { "start": 587.72, "end": 588.72, "text": " here." }, { "start": 588.72, "end": 592.84, "text": " And the baseline systems are first of all, the US federal system." }, { "start": 592.84, "end": 594.96, "text": " This is not particularly tied to the US." }, { "start": 594.96, "end": 601.28, "text": " This is basically every system or most of the systems that you have currently in the" }, { "start": 601.28, "end": 607.3199999999999, "text": " world is the progressive tax system and the SAES formula, which I believe is an economically" }, { "start": 607.3199999999999, "end": 611.88, "text": " theory based system, which is a regressive tax schedule." }, { "start": 611.88, "end": 619.64, "text": " You can see them down here where the US federal will be progressive, means the more you earn," }, { "start": 619.64, "end": 622.0799999999999, "text": " the more percentage wise you pay." }, { "start": 622.0799999999999, "end": 627.4399999999999, "text": " While the SAES formula will be regressive, which generally means the more you earn, the" }, { "start": 627.4399999999999, "end": 628.4399999999999, "text": " less you pay." }, { "start": 628.44, "end": 634.6800000000001, "text": " I believe this was derived under some assumptions to be the optimal tax distribution." }, { "start": 634.6800000000001, "end": 643.08, "text": " And the AI economist will come to this in a second." }, { "start": 643.08, "end": 649.6400000000001, "text": " Let's actually just look at one of these things first, one of these games, how this plays" }, { "start": 649.6400000000001, "end": 650.6400000000001, "text": " out." }, { "start": 650.6400000000001, "end": 653.5, "text": " The cool thing here is that they have pretty flashy animations." }, { "start": 653.5, "end": 656.12, "text": " So you can look how does one of these games turn out." }, { "start": 656.12, "end": 661.48, "text": " Now this is a free market game and you can see the agents moving around collecting things," }, { "start": 661.48, "end": 662.48, "text": " building houses." }, { "start": 662.48, "end": 668.04, "text": " And you might notice that one of the agents, namely agent one, is just building all of" }, { "start": 668.04, "end": 669.52, "text": " the houses." }, { "start": 669.52, "end": 675.48, "text": " And generally just kind of being a dick, being in everyone's face and kind of building things" }, { "start": 675.48, "end": 676.64, "text": " everywhere." }, { "start": 676.64, "end": 679.96, "text": " And the other ones don't." }, { "start": 679.96, "end": 685.64, "text": " Or just very few, like the light blue on the bottom left builds some houses." }, { "start": 685.64, "end": 691.76, "text": " On the right you can see how the distribution of wealth is structured." }, { "start": 691.76, "end": 694.84, "text": " And you see agent one ends up with most of the wealth." }, { "start": 694.84, "end": 698.76, "text": " Now the size of the circle I think is the total productivity." }, { "start": 698.76, "end": 706.52, "text": " So you can see this grows over time mainly because agent one becomes so rich." }, { "start": 706.52, "end": 711.84, "text": " And if you analyze this, if you analyze what's happening here, then you'll see that agent" }, { "start": 711.84, "end": 716.4, "text": " one and I might be..." }, { "start": 716.4, "end": 721.64, "text": " Yeah, they have a graph up here." 
}, { "start": 721.64, "end": 725.5600000000001, "text": " So it is very interesting what happens." }, { "start": 725.5600000000001, "end": 727.86, "text": " This is kind of the same game." }, { "start": 727.86, "end": 737.12, "text": " So agent one here is this orange dot and agents two, three and four are these dots here." }, { "start": 737.12, "end": 740.72, "text": " And this graph here is coin from trading." }, { "start": 740.72, "end": 745.64, "text": " So how much money they win or lose from trading." }, { "start": 745.64, "end": 753, "text": " Now the green bars are trading wood and the brown bars are trading stone." }, { "start": 753, "end": 758.44, "text": " So you see agent number four, which is the lowest skilled, the skill is just determined" }, { "start": 758.44, "end": 761.24, "text": " at the beginning of the episode." }, { "start": 761.24, "end": 766.08, "text": " It will just make all of its coins basically by selling wood." }, { "start": 766.08, "end": 769.34, "text": " And agent three will make all of its coins by selling stone." }, { "start": 769.34, "end": 772.84, "text": " And agent two will collect both and sell both." }, { "start": 772.84, "end": 778.44, "text": " And agent one will just spend money in trading." }, { "start": 778.44, "end": 782.8000000000001, "text": " So you'll have a specialization here." }, { "start": 782.8000000000001, "end": 789.26, "text": " Agent one, which is the highest skill one right here, will buy resources in order to" }, { "start": 789.26, "end": 793.6, "text": " build more houses because it clearly profits from building lots and lots and lots and lots" }, { "start": 793.6, "end": 795.0600000000001, "text": " of houses." }, { "start": 795.06, "end": 799.7199999999999, "text": " So it will use that money to buy more resources rather than go and collecting them." }, { "start": 799.7199999999999, "end": 805.8399999999999, "text": " While all the other ones basically forgo building houses in favor of they just collect the resources" }, { "start": 805.8399999999999, "end": 810.7199999999999, "text": " and they just trade them way to the agent one that's more profitable for them than building" }, { "start": 810.7199999999999, "end": 812.1999999999999, "text": " houses themselves." }, { "start": 812.1999999999999, "end": 817.9599999999999, "text": " So you see this kind of specialization emerging in these games, which I find, I find this" }, { "start": 817.9599999999999, "end": 824.3599999999999, "text": " to be pretty cool that you see something like this, like a really stark division of labor" }, { "start": 824.36, "end": 831, "text": " emerging just from these very, very small set of rules." }, { "start": 831, "end": 833.88, "text": " And you can analyze this game in different ways." }, { "start": 833.88, "end": 841.6800000000001, "text": " They have a few more plots where this becomes quite apparent that sorry, that these agents" }, { "start": 841.6800000000001, "end": 843.3000000000001, "text": " specialize." }, { "start": 843.3000000000001, "end": 849.94, "text": " So you see here resources collected, sorry about that, resources collected." }, { "start": 849.94, "end": 859.6400000000001, "text": " If you have the lowest skill and the highest skill labors, the lowest skills, they mainly," }, { "start": 859.6400000000001, "end": 861.86, "text": " this should be a 10." }, { "start": 861.86, "end": 871.6, "text": " They mainly collect resources, while the highest skill labor mainly goes for building things." 
}, { "start": 871.6, "end": 876.7600000000001, "text": " It doesn't collect resources, but net income from building is really high while everyone" }, { "start": 876.76, "end": 880.08, "text": " else just doesn't build at all." }, { "start": 880.08, "end": 885.52, "text": " All right, so we have a division of labor emerging." }, { "start": 885.52, "end": 887.3199999999999, "text": " Now this was a free market." }, { "start": 887.3199999999999, "end": 890.76, "text": " Let's actually compare the different algorithms." }, { "start": 890.76, "end": 897.1, "text": " So if you look at social welfare, this is this thing here, equality times productivity." }, { "start": 897.1, "end": 902.8, "text": " You can see that the AI economist will outperform over time over the training progress, it will" }, { "start": 902.8, "end": 906.16, "text": " outperform all of the other systems." }, { "start": 906.16, "end": 912.36, "text": " So it will outperform the free market, the US federal tax system, and the SAS formula" }, { "start": 912.36, "end": 915.78, "text": " if trained for long enough, which is to be expected, right?" }, { "start": 915.78, "end": 921.24, "text": " If you put RL onto a cost function, it will then optimize that cost function." }, { "start": 921.24, "end": 927.52, "text": " But it's pretty cool to see that there's a lot of headroom here over what we currently" }, { "start": 927.52, "end": 929.16, "text": " have." }, { "start": 929.16, "end": 933.92, "text": " Now let's look at some of the strategies it comes up with." }, { "start": 933.92, "end": 941.8399999999999, "text": " So what do these games look like where the AI has imposed different tax strategies?" }, { "start": 941.8399999999999, "end": 943.76, "text": " So this is with the SAS strategy." }, { "start": 943.76, "end": 945.88, "text": " You can see that here." }, { "start": 945.88, "end": 951.56, "text": " Again, you see this inequality emerging with the yellow player here building most of the" }, { "start": 951.56, "end": 953, "text": " houses." }, { "start": 953, "end": 959.7199999999999, "text": " With the AI economist, again, there is inequality, but you can see at the distribution that agent" }, { "start": 959.72, "end": 965.52, "text": " one only ends up with about half of the wealth, where if you compare this to the free market" }, { "start": 965.52, "end": 972.38, "text": " here, then agent one ends up with like two thirds of the wealth, right?" }, { "start": 972.38, "end": 975, "text": " This is the game we saw before." }, { "start": 975, "end": 982.64, "text": " But there is not qualitatively that much of a difference, but there is in the end result." }, { "start": 982.64, "end": 987.84, "text": " All right, let's look at what these policies actually come up with." }, { "start": 987.84, "end": 991.52, "text": " So what is the tax policy that the AI comes up with?" }, { "start": 991.52, "end": 998.1, "text": " So this tax policy outperforms on this social welfare metric." }, { "start": 998.1, "end": 1001.76, "text": " And this is very interesting, right?" }, { "start": 1001.76, "end": 1005.12, "text": " So first of all, you see that it's right zigzag." }, { "start": 1005.12, "end": 1010.24, "text": " It's like down, up, down, up, which is already weird." }, { "start": 1010.24, "end": 1017.5, "text": " So the first very weird thing is the spike at the very bottom." }, { "start": 1017.5, "end": 1021.52, "text": " So that thing here, what's that thing here?" 
}, { "start": 1021.52, "end": 1026.72, "text": " Those are the poorest people in your society, and you're taxing them the highest." }, { "start": 1026.72, "end": 1027.72, "text": " Right?" }, { "start": 1027.72, "end": 1035.2, "text": " So just imagine this, you're here downtrodden by life, abandoned by society, you have no" }, { "start": 1035.2, "end": 1037.04, "text": " money, no house, no nothing." }, { "start": 1037.04, "end": 1042.8, "text": " And you're just trying to get a job, you're just getting like a little bit of money." }, { "start": 1042.8, "end": 1049.32, "text": " And you can buy a cheeseburger, and then the government comes." }, { "start": 1049.32, "end": 1050.32, "text": " Give us that." }, { "start": 1050.32, "end": 1053.32, "text": " Give us that money." }, { "start": 1053.32, "end": 1054.32, "text": " Come on." }, { "start": 1054.32, "end": 1059.6, "text": " So basically, these are the poor." }, { "start": 1059.6, "end": 1063.36, "text": " And the poor in this system is just F you." }, { "start": 1063.36, "end": 1064.36, "text": " F you the poor." }, { "start": 1064.36, "end": 1068.76, "text": " Now, the reason why this happens is pretty clear, right?" }, { "start": 1068.76, "end": 1075.64, "text": " The reason why this happens is because you want to encourage people to go here to earn" }, { "start": 1075.64, "end": 1077.6, "text": " more money." }, { "start": 1077.6, "end": 1081.8799999999999, "text": " So it's not like the government makes any money from the poor people independently of" }, { "start": 1081.8799999999999, "end": 1084.28, "text": " how high it taxes them." }, { "start": 1084.28, "end": 1090.48, "text": " But it is basically an incentive structure to make them move over to the somewhat more" }, { "start": 1090.48, "end": 1092.44, "text": " productive population." }, { "start": 1092.44, "end": 1097.44, "text": " Because here it's assumed kind of that even the lowest skilled ones can move over a bit" }, { "start": 1097.44, "end": 1101.48, "text": " if you just tax them enough at the low brackets, right?" }, { "start": 1101.48, "end": 1110.76, "text": " So this is what I find to be you just have to realize that it is so hard, I believe it" }, { "start": 1110.76, "end": 1117.92, "text": " is almost impossible to encapsulate what we really want in a system into a formula to" }, { "start": 1117.92, "end": 1120.06, "text": " be into a cost function to be optimized." }, { "start": 1120.06, "end": 1121.68, "text": " It is so incredibly hard." }, { "start": 1121.68, "end": 1125.72, "text": " And you see that here, of course, it is going to result in a better social outcome, but" }, { "start": 1125.72, "end": 1132.32, "text": " it just doesn't feel right to tax the poor at what 60%?" }, { "start": 1132.32, "end": 1136.84, "text": " Okay, so F the poor, right?" }, { "start": 1136.84, "end": 1140.04, "text": " And then you get to this level right here." }, { "start": 1140.04, "end": 1147.1200000000001, "text": " And interestingly, if you earn even more, you'll be taxed high again, right?" }, { "start": 1147.1200000000001, "end": 1152.04, "text": " So this, we're kind of used to that." }, { "start": 1152.04, "end": 1155.16, "text": " You earn little, you pay little, you earn more." }, { "start": 1155.16, "end": 1156.8000000000002, "text": " You pay more." }, { "start": 1156.8000000000002, "end": 1159.72, "text": " But then comes this entire valley here." }, { "start": 1159.72, "end": 1161.0400000000002, "text": " What's up with that?" 
}, { "start": 1161.0400000000002, "end": 1162.0400000000002, "text": " Right?" }, { "start": 1162.0400000000002, "end": 1169.92, "text": " Like WT, and this can be this is now of course, the same reasoning as you have with this size" }, { "start": 1169.92, "end": 1177.92, "text": " formula here is where the rich people, you want to tax them less so that they are more" }, { "start": 1177.92, "end": 1181.24, "text": " productive such that they generate more coins." }, { "start": 1181.24, "end": 1187.84, "text": " And even though you tax them less percentage wise, they will end up paying more money in" }, { "start": 1187.84, "end": 1189.96, "text": " absolute terms." }, { "start": 1189.96, "end": 1194.4, "text": " Because because you basically encourage them to produce more." }, { "start": 1194.4, "end": 1201.6, "text": " So that is that is kind of that is the, I guess the reasoning behind this." }, { "start": 1201.6, "end": 1205.92, "text": " But what you have to wreck, you have to recognize what's happening here, right?" }, { "start": 1205.92, "end": 1207, "text": " What are we optimizing?" }, { "start": 1207, "end": 1210.88, "text": " We're optimizing this productivity times equality." }, { "start": 1210.88, "end": 1213.1200000000001, "text": " And what do we get?" }, { "start": 1213.1200000000001, "end": 1219.68, "text": " You see, you get two big valleys of attraction, one here, and one here." }, { "start": 1219.68, "end": 1225.8400000000001, "text": " And that means that this algorithm favors a two class society." }, { "start": 1225.8400000000001, "end": 1227.0600000000002, "text": " Right?" }, { "start": 1227.0600000000002, "end": 1231.8600000000001, "text": " And I believe this is this is partially the limitations of this simulation here, the fact" }, { "start": 1231.8600000000001, "end": 1235.68, "text": " that you only have four agents, the fact that you can only do two things either collect" }, { "start": 1235.68, "end": 1237.0400000000002, "text": " or build, right?" }, { "start": 1237.04, "end": 1242.8, "text": " It encourages a two class society, this specialization that you saw, right?" }, { "start": 1242.8, "end": 1247.1599999999999, "text": " So you say these here are the moneymakers, right?" }, { "start": 1247.1599999999999, "end": 1249.3, "text": " And these here are the collectors." }, { "start": 1249.3, "end": 1252.8799999999999, "text": " And it is very hard to move from one group to the other." }, { "start": 1252.8799999999999, "end": 1259.08, "text": " Because if you you earn more coins as a collector, you're here, and you're really discouraged" }, { "start": 1259.08, "end": 1260.08, "text": " here." }, { "start": 1260.08, "end": 1263.6399999999999, "text": " If you move there, you want to move all the way over here, right?" }, { "start": 1263.64, "end": 1268.96, "text": " Now, the people that are already over here, if they earn an extra coin, that doesn't bother" }, { "start": 1268.96, "end": 1269.96, "text": " them too much." }, { "start": 1269.96, "end": 1272.0400000000002, "text": " So they're very encouraged to earn more money." }, { "start": 1272.0400000000002, "end": 1277.68, "text": " But the very, the poorer people on this side, they're basically discouraged from earning" }, { "start": 1277.68, "end": 1285.1000000000001, "text": " more money, because the system needs them to stay at that collector level, right?" 
}, { "start": 1285.1000000000001, "end": 1292.2, "text": " So the system encourages the two class society because we have not built social mobility" }, { "start": 1292.2, "end": 1301.7, "text": " into the into the into the equation, we have not built a measure for social social mobility" }, { "start": 1301.7, "end": 1303.18, "text": " into the cost function." }, { "start": 1303.18, "end": 1308.04, "text": " And therefore, the AI doesn't care that the poor people will stay poor and rich people" }, { "start": 1308.04, "end": 1310.04, "text": " will stay rich." }, { "start": 1310.04, "end": 1314.64, "text": " It just knows that this is the best outcome for society overall, given the cost function" }, { "start": 1314.64, "end": 1320, "text": " that we had, again, this just doesn't seem like fair to us, like what we want, we want" }, { "start": 1320, "end": 1326.6, "text": " someone to be able to make it over here, right, even if they start out from the bottom." }, { "start": 1326.6, "end": 1330.48, "text": " And so we'd have to we have to build that in." }, { "start": 1330.48, "end": 1335.6, "text": " So we have a system that is effing eff the poor, right?" }, { "start": 1335.6, "end": 1340.96, "text": " No social mobility, mobility." }, { "start": 1340.96, "end": 1342.8, "text": " No." }, { "start": 1342.8, "end": 1345.48, "text": " And then what's happening at the end?" }, { "start": 1345.48, "end": 1346.64, "text": " What's happening at the end?" }, { "start": 1346.64, "end": 1348.44, "text": " This is beautiful." }, { "start": 1348.44, "end": 1350.1200000000001, "text": " Very rich people." }, { "start": 1350.1200000000001, "end": 1352, "text": " These are the moneymaker, right?" }, { "start": 1352, "end": 1359.92, "text": " This is the this is the monopoly guy top hat monocle wearing Scrooge McDuck bathing in" }, { "start": 1359.92, "end": 1361.22, "text": " coins." }, { "start": 1361.22, "end": 1365.56, "text": " This is where the the government makes their money." }, { "start": 1365.56, "end": 1373.4, "text": " And the discrepancy is really stunning, because you could also argue, hey, why don't we apply" }, { "start": 1373.4, "end": 1376, "text": " the same reasoning as we applied here and here?" }, { "start": 1376, "end": 1382.08, "text": " Why is not is it not like the case that if the rich people if you tax them lower, they'll" }, { "start": 1382.08, "end": 1383.32, "text": " pay more money and so on." }, { "start": 1383.32, "end": 1389.78, "text": " I believe again, this might be just a result of this, how the simulation is set up." }, { "start": 1389.78, "end": 1393.08, "text": " So we'll move away quickly and we'll come back to this." }, { "start": 1393.08, "end": 1398.4, "text": " Here's what I find particularly interesting about this paper, which just confuses the" }, { "start": 1398.4, "end": 1400.84, "text": " heck out of me." }, { "start": 1400.84, "end": 1404.5, "text": " It is a double periodic game." }, { "start": 1404.5, "end": 1407.02, "text": " So it's an inner outer loop game." }, { "start": 1407.02, "end": 1408.3, "text": " What do I mean by that?" }, { "start": 1408.3, "end": 1409.76, "text": " They have these episodes, right?" }, { "start": 1409.76, "end": 1411.58, "text": " Here is the start." }, { "start": 1411.58, "end": 1416.04, "text": " And here is the end." }, { "start": 1416.04, "end": 1421.16, "text": " And they subdivide this into, as we said, 1000 steps." 
}, { "start": 1421.16, "end": 1425.32, "text": " So an agent is here and it can do step, step, step, step, step, and it can perform these" }, { "start": 1425.32, "end": 1426.32, "text": " actions." }, { "start": 1426.32, "end": 1427.64, "text": " This is the agent." }, { "start": 1427.64, "end": 1431.48, "text": " There are 1000 steps here and the agent just tries to collect as much coins." }, { "start": 1431.48, "end": 1434.4, "text": " So this is your classic RL problem." }, { "start": 1434.4, "end": 1438.64, "text": " But also they divide this into 10, what they call periods." }, { "start": 1438.64, "end": 1443.2800000000002, "text": " And I'm just going to draw maybe four periods, right?" }, { "start": 1443.2800000000002, "end": 1452.6200000000001, "text": " So this thing here, they call one period where the whole thing is an episode." }, { "start": 1452.6200000000001, "end": 1458.0400000000002, "text": " Now the purpose of the period is that at the beginning of each period, the government," }, { "start": 1458.0400000000002, "end": 1462.3000000000002, "text": " the government can impose a new tax schedule." }, { "start": 1462.3, "end": 1468.12, "text": " So the government doesn't only fix the taxes once, but it can change the taxes over the" }, { "start": 1468.12, "end": 1472.5, "text": " course of the episode, right?" }, { "start": 1472.5, "end": 1475.24, "text": " Now this is what I find." }, { "start": 1475.24, "end": 1477.02, "text": " I just don't see why." }, { "start": 1477.02, "end": 1483.3999999999999, "text": " So now you're formulating the tax giving objective as a sequential decision making." }, { "start": 1483.3999999999999, "end": 1488.56, "text": " It's like the government saying, well, today we have high taxes, but tomorrow we have low" }, { "start": 1488.56, "end": 1492.04, "text": " taxes and the day after that we have high taxes again." }, { "start": 1492.04, "end": 1498.44, "text": " And it just doesn't make sense for any government to do this." }, { "start": 1498.44, "end": 1503.56, "text": " What you should do is you should set taxes once at the beginning of the episode and then" }, { "start": 1503.56, "end": 1508.8, "text": " see how that turns out and then try to maximize your tax schedule." }, { "start": 1508.8, "end": 1515.44, "text": " Because all we're looking at, we're only ever looking at how the taxes are at the end, right?" }, { "start": 1515.44, "end": 1520.48, "text": " The things that we've examined are just the last taxes that the AI has issued." }, { "start": 1520.48, "end": 1523.76, "text": " We don't know the dynamic of what happens in between." }, { "start": 1523.76, "end": 1528.72, "text": " This might be super wild actually, what the AI does in between." }, { "start": 1528.72, "end": 1534.44, "text": " And I just don't see the framing as a sequential decision problem." }, { "start": 1534.44, "end": 1538.6, "text": " And I believe this is just an over engineered thing." }, { "start": 1538.6, "end": 1543.24, "text": " Because someone wanted a reason and here is the architecture, right?" }, { "start": 1543.24, "end": 1548, "text": " You see someone wanted a reason to put an LSTM in there." }, { "start": 1548, "end": 1552.88, "text": " Someone is thinking like, well, RL, that means like sequential decisions and so on." }, { "start": 1552.88, "end": 1560, "text": " And RL in this outer loop, the way I propose it would just be a one step per episode decision," }, { "start": 1560, "end": 1561.28, "text": " which is a bandit problem." 
}, { "start": 1561.28, "end": 1564, "text": " And as we all know, bandits are boring." }, { "start": 1564, "end": 1567.72, "text": " So they didn't want this to be a bandit problem." }, { "start": 1567.72, "end": 1569.44, "text": " They wanted to be a sequential problem." }, { "start": 1569.44, "end": 1574.56, "text": " And that's why they made this period thing, which I find dumb." }, { "start": 1574.56, "end": 1581.12, "text": " So another factor here, and I'm going to tell you how this relates to the to the weird rich" }, { "start": 1581.12, "end": 1582.84, "text": " people are taxed high." }, { "start": 1582.84, "end": 1585.52, "text": " Another factor here is look at this." }, { "start": 1585.52, "end": 1590.52, "text": " It's a CNN, an MLP, an LSTM and an MLP and the agent as well." }, { "start": 1590.52, "end": 1594.6799999999998, "text": " And I can tell you right now, the CNN has two layers." }, { "start": 1594.6799999999998, "end": 1596.22, "text": " Two." }, { "start": 1596.22, "end": 1601.22, "text": " And the LSTM has like 128 units in its hidden state." }, { "start": 1601.22, "end": 1605.4, "text": " So these are tiny, tiny models." }, { "start": 1605.4, "end": 1610.52, "text": " And it is not a model based RL, it's model free RLs, proximal policy optimization." }, { "start": 1610.52, "end": 1619.6000000000001, "text": " And the the the ability of these agents or planner to learn anything substantial here," }, { "start": 1619.6000000000001, "end": 1626.38, "text": " I believe is just not super duper well, right." }, { "start": 1626.38, "end": 1632.0800000000002, "text": " So the I believe that these are rather dumb agents." }, { "start": 1632.0800000000002, "end": 1638.72, "text": " And you can see the tax rates given by the planner is fed into the agent model." }, { "start": 1638.72, "end": 1645.5600000000002, "text": " But I don't think that the agent given such a small model can actually adjust to these" }, { "start": 1645.5600000000002, "end": 1651.38, "text": " inputs because you have to do some pretty good logic in order to from these tax brackets" }, { "start": 1651.38, "end": 1654.7, "text": " to determine how you should act right now." }, { "start": 1654.7, "end": 1659.0800000000002, "text": " What I think is happening is the agent just kind of is aware of its skill level and through" }, { "start": 1659.0800000000002, "end": 1664.76, "text": " its rewards, it's trying to maximize its future rewards." }, { "start": 1664.76, "end": 1671.8400000000001, "text": " And then when the government changes the tax rate, it will not, I'm almost positive it" }, { "start": 1671.8400000000001, "end": 1675.76, "text": " will not directly change its response to that." }, { "start": 1675.76, "end": 1681.4, "text": " But it will kind of observe that something's happening in the world and then adjust maybe" }, { "start": 1681.4, "end": 1687.1200000000001, "text": " a little bit its overall strategy, but not in that particular instance, and it will be" }, { "start": 1687.1200000000001, "end": 1690.74, "text": " delayed or it will be like an overall strategy." }, { "start": 1690.74, "end": 1700.72, "text": " And this might be one of the reasons why the tax brackets here might be screwed up because" }, { "start": 1700.72, "end": 1707.8000000000002, "text": " who says who says if I were this AI, what I could do is in period one through nine," }, { "start": 1707.8, "end": 1711.96, "text": " I make the taxes really low for the rich people." 
}, { "start": 1711.96, "end": 1716.1599999999999, "text": " So I just encourage everyone to make more money, right?" }, { "start": 1716.1599999999999, "end": 1719.96, "text": " Like come on, become more productive and I get the benefits of that." }, { "start": 1719.96, "end": 1726.2, "text": " And then in the last episode, last period, right, I just freaking jack up that final" }, { "start": 1726.2, "end": 1727.2, "text": " tax bracket." }, { "start": 1727.2, "end": 1731.3799999999999, "text": " It's like you, you have lots of money, give it to me." }, { "start": 1731.3799999999999, "end": 1736.12, "text": " And then you just redistribute what you got there to the poor people in the very last" }, { "start": 1736.12, "end": 1740.8799999999999, "text": " period and thereby you achieve your goal of this social welfare function." }, { "start": 1740.8799999999999, "end": 1745.3999999999999, "text": " But of course, this is not sustainable because all the rich people would just be kind of" }, { "start": 1745.3999999999999, "end": 1749, "text": " screwed through that and move down again, but it's the end of the episode." }, { "start": 1749, "end": 1751.6, "text": " So what are they going to do?" }, { "start": 1751.6, "end": 1759.3999999999999, "text": " So I think the fact how this is framed, that there are just two different ways to get coins." }, { "start": 1759.4, "end": 1766.2800000000002, "text": " But the fact that this is this periodical nature of the outer loop all might lead to" }, { "start": 1766.2800000000002, "end": 1773.96, "text": " something that becomes slowly more and more and more uninterpretable." }, { "start": 1773.96, "end": 1774.96, "text": " Still cool though." }, { "start": 1774.96, "end": 1779.5600000000002, "text": " All right, so the final thing, they do this with humans." }, { "start": 1779.5600000000002, "end": 1781.48, "text": " Yes, real humans." }, { "start": 1781.48, "end": 1790.92, "text": " So they let humans try it and they have this interface here and the humans, they behave" }, { "start": 1790.92, "end": 1793.82, "text": " quite differently from the AI." }, { "start": 1793.82, "end": 1797.64, "text": " So there are a few different things where the humans act." }, { "start": 1797.64, "end": 1802.6200000000001, "text": " But look at that here, AI economists, this is what the agents do, right?" }, { "start": 1802.6200000000001, "end": 1805.72, "text": " So this AI economist is the tax strategy." }, { "start": 1805.72, "end": 1811.96, "text": " They just take these developed tax strategies and let the humans be the agents so that the" }, { "start": 1811.96, "end": 1817.04, "text": " you just want to observe how the agents act and whether or not the tax strategies also" }, { "start": 1817.04, "end": 1823.1200000000001, "text": " work when it's real humans acting in this environment and not our agents." }, { "start": 1823.1200000000001, "end": 1826.76, "text": " So compare this to how the humans act." }, { "start": 1826.76, "end": 1831.76, "text": " The humans they just build their houses in like neat little packets or straight lines" }, { "start": 1831.76, "end": 1833.76, "text": " or stuff like this." }, { "start": 1833.76, "end": 1836.24, "text": " I just find it to be very funny." }, { "start": 1836.24, "end": 1841.54, "text": " Now there are some things lacking in the human environment which I find really important." }, { "start": 1841.54, "end": 1845.8799999999999, "text": " So first of all, they have no cost for moving, which I guess is minor." 
}, { "start": 1845.8799999999999, "end": 1850.7, "text": " But second of all, they have no trade." }, { "start": 1850.7, "end": 1853.76, "text": " And I think that just kills the whole experiment." }, { "start": 1853.76, "end": 1857.96, "text": " Because now of course what you're going to get is the wealth is just going to be proportional" }, { "start": 1857.96, "end": 1863.44, "text": " to how much you get coins per house, which is different for each agent, right?" }, { "start": 1863.44, "end": 1870.74, "text": " So to me that that is now a pointless experiment if you can't trade because the outcome is" }, { "start": 1870.74, "end": 1872.0800000000002, "text": " just predictable." }, { "start": 1872.0800000000002, "end": 1879.1200000000001, "text": " And I don't think that the human behavior changes in response to the different tax brackets." }, { "start": 1879.1200000000001, "end": 1883.88, "text": " I think they'll just do and however they can make money, they'll make money, they'll build" }, { "start": 1883.88, "end": 1886, "text": " more houses until it becomes unprofitable." }, { "start": 1886, "end": 1887, "text": " And that's it." }, { "start": 1887, "end": 1892.88, "text": " So I don't see the I don't see the value of these experiments, even though they show that" }, { "start": 1892.88, "end": 1901.3600000000001, "text": " again, the AI economist outperforms the other tax strategies in this equality times productivity" }, { "start": 1901.3600000000001, "end": 1905.64, "text": " metric and also in another metric that they measure." }, { "start": 1905.64, "end": 1911.1000000000001, "text": " The second problem I have is for the human experiments, they take this distribution here," }, { "start": 1911.1000000000001, "end": 1915.2800000000002, "text": " they say, well, the AI, this is one of the distributions that the AI came up with." }, { "start": 1915.2800000000002, "end": 1921.44, "text": " But you notice the lack of the F you poor people, and the lack of this big spike here" }, { "start": 1921.44, "end": 1928.68, "text": " for the rich people, which I find are one of the two features of the other distribution." }, { "start": 1928.68, "end": 1933.04, "text": " So I think there's quite a bit of variance in what this AI comes up with." }, { "start": 1933.04, "end": 1934.88, "text": " Or maybe it's just because this is periodical." }, { "start": 1934.88, "end": 1941.06, "text": " But this is really confusing because they show and discuss that other distribution." }, { "start": 1941.06, "end": 1945.64, "text": " And now all of a sudden, they say, well, we use this distribution that was also created" }, { "start": 1945.64, "end": 1946.68, "text": " by our AI." }, { "start": 1946.68, "end": 1950.16, "text": " And it seems to be qualitatively quite different." }, { "start": 1950.16, "end": 1958.28, "text": " In any case, let's look at how the humans behave under the different strategies." }, { "start": 1958.28, "end": 1964.2, "text": " So in the size formula, you'll see that the light blue person here is kind of spreading" }, { "start": 1964.2, "end": 1966.52, "text": " out a bit, probably playing correctly." }, { "start": 1966.52, "end": 1969.68, "text": " Everyone else is just neatly building their houses." }, { "start": 1969.68, "end": 1970.68, "text": " Humans are so territorial." }, { "start": 1970.68, "end": 1974.64, "text": " And most of them, they kind of stay in their little corner." }, { "start": 1974.64, "end": 1976.6000000000001, "text": " And they're like, this is my corner." 
}, { "start": 1976.6, "end": 1981.48, "text": " I'm going to build my houses here in a nice thing." }, { "start": 1981.48, "end": 1987.1999999999998, "text": " And under the AI economist, again, you don't really see a different thing just because" }, { "start": 1987.1999999999998, "end": 1989.36, "text": " the taxes are different." }, { "start": 1989.36, "end": 1992.1999999999998, "text": " The qualitative behavior is quite the same." }, { "start": 1992.1999999999998, "end": 1994.8, "text": " It's just building straight lines." }, { "start": 1994.8, "end": 1997.98, "text": " And I think the difference is more between the humans." }, { "start": 1997.98, "end": 2000.36, "text": " So I think it's not always the same humans." }, { "start": 2000.36, "end": 2003.36, "text": " And the difference might be more between the humans." }, { "start": 2003.36, "end": 2009.6399999999999, "text": " And you kind of see that humans clearly haven't really trained or discovered the optimal strategy." }, { "start": 2009.6399999999999, "end": 2011.3999999999999, "text": " They're just doing something." }, { "start": 2011.3999999999999, "end": 2015.08, "text": " And what you're seeing is just a result of the taxation." }, { "start": 2015.08, "end": 2016.08, "text": " It's not different behavior." }, { "start": 2016.08, "end": 2018.7199999999998, "text": " And this here, this is the best." }, { "start": 2018.7199999999998, "end": 2023.08, "text": " Okay, watch the on the bottom right, the human." }, { "start": 2023.08, "end": 2030, "text": " They're just first they do something, they're just walling up the other players." }, { "start": 2030, "end": 2034.04, "text": " And look, this is this is the best." }, { "start": 2034.04, "end": 2037.52, "text": " I am going to build a big beautiful wall." }, { "start": 2037.52, "end": 2041.72, "text": " And I'm going to have the orange guy pay for it." }, { "start": 2041.72, "end": 2043.88, "text": " It's Donald Trump in the game." }, { "start": 2043.88, "end": 2044.88, "text": " Amazing." }, { "start": 2044.88, "end": 2050.4, "text": " And look at the end, they actually managed to lock in the other players so they can't" }, { "start": 2050.4, "end": 2052.2, "text": " move anymore." }, { "start": 2052.2, "end": 2054.44, "text": " Donald Trump wins." }, { "start": 2054.44, "end": 2055.44, "text": " Amazing." }, { "start": 2055.44, "end": 2061.88, "text": " And though, actually, the yellow player appears to win economy wise." }, { "start": 2061.88, "end": 2066.6, "text": " But what do you want with lots of money if you can't move?" }, { "start": 2066.6, "end": 2072.54, "text": " So I again, I find these human experiments to be rather pointless here because you disable" }, { "start": 2072.54, "end": 2077.32, "text": " trade and you don't train the humans to find a good strategy." }, { "start": 2077.32, "end": 2083.86, "text": " Alright, but in that, I find the entire paper to be pretty cool code is going to be released," }, { "start": 2083.86, "end": 2088.8, "text": " they promise and they have checked that they have no ethical problems." }, { "start": 2088.8, "end": 2093, "text": " Of course, I invite you to check out the paper." }, { "start": 2093, "end": 2099.76, "text": " If you like content like this, please subscribe, share and leave a comment of what you think." }, { "start": 2099.76, "end": 2114.44, "text": " Thank you so much for listening and bye bye." } ]
kl3aBni87jg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
First Author Interview: AI & formal math (Formal Mathematics Statement Curriculum Learning)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "openai", "formal math", "ai math", "ai math prover", "machine learning for math", "ml math", "artificial intelligence math", "ai mathematics", "automated proof search", "mini f2f", "ai imo", "ai math olympiad", "openai mathematics", "openai formal math", "language models formal math", "lean", "lean prover", "lean proof", "lean math", "ai lean environment", "ai proves theorems", "ai theorem prover" ]
#openai #math #imo This is an interview with Stanislas Polu, research engineer at OpenAI and first author of the paper "Formal Mathematics Statement Curriculum Learning". Watch the paper review here: https://youtu.be/lvYVuOmUVs8 OUTLINE: 0:00 - Intro 2:00 - How do you explain the big public reaction? 4:00 - What's the history behind the paper? 6:15 - How does algorithmic formal math work? 13:10 - How does expert iteration replace self-play? 22:30 - How is the language model trained and used? 30:50 - Why is every model fine-tuned on the initial state? 33:05 - What if we want to prove something we don't know already? 40:35 - How can machines and humans work together? 43:40 - Aren't most produced statements useless? 46:20 - A deeper look at the experimental results 50:10 - What were the high and low points during the research? 54:25 - Where do we go from here? Paper: https://arxiv.org/abs/2202.01344 miniF2F benchmark: https://github.com/openai/miniF2F Follow Stan here: https://twitter.com/spolu Abstract: We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that at same compute budget, expert iteration, by which we mean proof search interleaved with learning, dramatically outperforms proof search only. We also observe that when applied to a collection of formal statements of sufficiently varied difficulty, expert iteration is capable of finding and solving a curriculum of increasingly difficult problems, without the need for associated ground-truth proofs. Finally, by applying this expert iteration to a manually curated set of problem statements, we achieve state-of-the-art on the miniF2F benchmark, automatically solving multiple challenging problems drawn from high school olympiads. Authors: Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, Ilya Sutskever Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there, this is an interview with the first author of the paper, Formal Mathematics Statement Curriculum Learning, in which an automated system was able to solve two problems of the International Mathematics Olympiad. Now, this is an unprecedented level of skill in formal mathematics for an AI system. The system uses language models in combination with a technique called expert iteration to build itself a harder and harder curriculum of theorems to prove. Now, if you haven't seen it, I've made a comprehensive paper review about this paper in the last video. So be sure to check that out, because Stan, the author who I'm interviewing today, has seen that video, so we all start from a common level. Stan is able to directly respond to any criticisms and questions that I had during the paper review. And we go into the details and the behind-the-scenes of the research: what didn't work out, what problems came up, how the project came to be, and what this all means beyond the domain of mathematics. It is a huge privilege to have the authors of these papers on here, and I want to get the most information that I can out of them. So please let me know how I can improve these videos. Let me know in the comments, leave a like if you liked it, and I'll see you around. Bye. All right, everyone. Hi. So we're here with Stan Polu, who is the first author of the Formal Mathematics Statement Curriculum Learning paper, the paper that uses expert iteration to end up proving two IMO problems, which I think was very well received by everyone in the community. And we're going to look at the paper, and we're going to go through some of the criticisms that I had and that I just threw out there. And yeah, we're hopefully going to inform everyone a little bit more. Stan, welcome to the channel. Thank you, Yannic. Thank you very much for having me. It's a pleasure to be here. So this obviously helps, that OpenAI is a name on the paper, right? It gives it a little bit of a boost in publicity, but still, the reception was quite widespread, I want to say, even though it appeared, I think, in the same week as some other big papers; I think AlphaCode was in the same week or so. Yet still, you made quite an impression on people. Do you have an idea of why the paper was so widely received? There have been other papers in this domain, but this one was kind of special. What's your impression? Yeah. So first, yeah, you mentioned I work at OpenAI, just to give you a little bit of context. I'm a research engineer at OpenAI. OpenAI is focused on building and deploying safe and beneficial AI systems. It's part research lab and part deployment company, and I myself focus on the research lab part. The release was actually the same day as AlphaCode. We decided to go for it right after they released that work, and I think it was just fine. We did release a first paper before, the first GPT-f paper, which is referenced from this paper, a year ago. And it didn't have much support from OpenAI, because it was kind of a shadow release: we just put the paper up there, it was a blog post. And it did bring quite a lot of interest as well. I think people are interested in the domain because math seems like a frontier that we haven't reached yet, and so any progress in that direction is probably exciting to most people in the community. That would be my main understanding of why people reacted positively and are engaging with the work.
So you were already in this domain, you said, and I think I've also commented on this a little bit. You had previous work in using language models to guide these provers. Was this sort of a natural continuation of that? Or was there some impulse behind you tackling these more challenging problems? Yes, it's really a continuation of the previous work. And actually, to give you a little bit of color on all of that, I joined OpenAI two years ago, and I actually wanted to work on formal math and AI before I joined OpenAI. And I did have quite an unusual trajectory within the field. I don't have a PhD in machine learning. I don't have a PhD at all, actually. I was a software engineer at Stripe before, and eventually wanted to work on subjects that pertain to AI, and decided that formal math was the thing I wanted to work on. And then I found that it was well aligned with OpenAI's mission and the way we were executing it. And so I joined and shortly after started working on it. So I've actually been working on this for the last two years. And this paper is really a continuation of the first paper; it's kind of one continuous body of work that we are tackling. And I think we'll definitely continue working on it, because those two problems are quite impressive, but we're still far away from being at the level of the best students. It is to some extent mind-blowing, because the system can prove statements that I'm actually myself not capable of proving. I'm not a math competitor, but I did do quite a lot of math studying for engineering school in France, and there are some things that I just can't prove and that this system can prove. But at the same time, there's so much stuff that I find easy that this system can't prove. So we are still a long way away from being at the best human level. But still, the progress has been really continuous and continuously exciting over the past two years. You've seen my explanation of the paper. And I think with this paper specifically, I'm not that much of an expert in the domain itself. So I'm not too deep into formal math and these sorts of proving algorithms, how provers even work. I've tried to explain that a little bit by building this proof tree right here. Do you maybe have any more comments, any insights that could help people understand what formal math even is? How does it look from the inside? What is the main problem? How do you do things there? Of course. To be honest, you really nailed the explanation. It was really clear, and I think it's a really good explanation of what's happening. Formal math was kind of invented when computers came out. The main problem that it tries to solve is that when you have a math paper and a very impressive proof, you generally only have a few people in the world who can review that proof, because those proofs are generally so complicated that only a few people can even understand them. And so there's actually no way to be sure that those massive proofs are indeed true. That's kind of annoying, because we're talking about mathematics, which is supposed to be rock solid, yet it's not the case, because those subjects are so advanced. And so the motivation for formal math is to say, well, let's actually encode math for computers, so that computers can check every step, and we're going to get rid of that problem and forever be confident in our math progress.
The only caveat is that, because people working in formal math need to reformat the proof in a way that computers can parse, despite a lot of automation that helps in that process, it's still a very, very, very time-consuming effort. And so the formalization of math concepts has been lagging behind the state of the art in math tremendously, but it's starting to pick up, especially in Lean, where we've seen some recent formalizations of very advanced and new work. But the main problem of formal math, I think, is that it's really hard to formalize. So what is formalization like? It's exactly as you stated. You basically state your statements. Stating statements, once you have the right definitions, is almost natural. It feels a bit complicated when you look at the statements from the paper, as you mentioned, but it's actually close to what you would write in English. But then the proof is really completely different, because you really have to contrive it in a way that the computer can understand. And the way it works is, as you mentioned, really an interaction between the human and the machine. You have that first statement, which is your goal. You apply some tactics, which are the automation I mentioned, to try to help in the formalization. You generally provide some direction to tactics. And tactics are meta-programs that take your directions and try to generate proof terms, which are much lower-level artifacts that are understood by the machine. So they bridge between the human and the machine. And you keep going like that. You generally know the informal proof, of course, but you generally have to change it in non-trivial ways to make it provable with all the theories you have available and the constraints of the formal system. And eventually you keep making progress like that, with trial and error. So you have the feedback from the formal system, which is your current goals, and you try to make progress this way until, as you mentioned, you reach something that you know is true because it's already been proven, or it's an axiom, or it's a hypothesis.
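To make that interaction concrete, here is a minimal illustrative sketch (my own toy example, not one from the paper) of what a statement and a tactic-mode proof look like in Lean 3:

```lean
-- Illustrative only: a tiny statement and its tactic-mode proof.
theorem my_add_comm (a b : ℕ) : a + b = b + a :=
begin
  -- `apply` is a tactic: a meta-program that matches the current goal
  -- against an existing theorem (nat.add_comm : ∀ n m, n + m = m + n)
  -- and produces the low-level proof term that the machine checks.
  apply nat.add_comm,
end
```

Each tactic either closes the current goal or transforms it into new goals, which is exactly the feedback loop between the formalizer and the formal system described here.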
You mentioned right now that people formalize by already sort of knowing the proof from the math domain, maybe. Are there people who seriously prove things for the first time in the formal way, or is it largely just a translation effort? Because I'm wondering: the way your system works in proof searching (and this is not necessarily this paper alone), it seems to me that what proof searching does is simply traverse the tree of all possibilities, kind of like a chess engine would do. And I'm wondering if you think that is similar to how humans try to go about proving mathematical concepts, or is there some fundamental difference between how the machine does it and how humans do it? In my opinion, there are some similarities and some massive differences. If you know what the proof is already, it looks a little bit like a translation exercise, but one that is quite challenging, because you generally have to refactor the proof in non-trivial ways. As an example, Peter Scholze, who is a very well-known mathematician, came to the formal community and said: I have this new proof that I'm super excited about, but it's kind of complicated and I want to make sure that it's true; please help me, or please formalize it, so that we can know for sure. And that effort, it's a PhD-level piece of math of maybe ten dozen pages, so it's not that big. And I think the effort took six months, or a bit more, for dozens of people. So it's not just translation, because generally you have definitions that are missing, and so you need to add them; you need to create the theories that are missing, et cetera. It's a very complicated effort. And that's one of the main differences between what we're doing and what a mathematician does, actually. Today we are really focusing on proving theorems within fixed theories, in the sense that we are tackling Olympiad problems for which we know that all the theorems and the definitions that we'll need are already present in the formal system, in a sense. But when a mathematician is doing their job, they're not spending their day proving stuff. What a mathematician does most is actually coming up with new definitions, new objects, finding correlations, finding links between those definitions and those domains. That's something that we're not tackling at all today. We're really focusing on trying to solve exercises rather than creating new theories. And so the main thing is essentially knowing which tactic I need to apply, to use the existing theorems or the existing concepts that I have, in order to prove the particular statement. You say there are two main problems right here. First, there's this infinite action space thing, and this can be solved by having the search be guided by whatever language model you use. People know this from AlphaZero-type algorithms, right, where we use some sort of neural network to guide that search, and this is already a little bit in your previous work. But then the other thing you mentioned is that you have no direct self-play setup, which obviously is very helpful in these types of automated search procedures, if you have some adversary that's playing against you and both get better at the same time. So in this question here, you make a statement that says: this paper focuses on the second problem; our basis for addressing it is the observation that the key role of self-play is to provide an unsupervised curriculum. And the statement just kind of stands here as such; you kind of claim this. Do you want to comment on it maybe a little bit? I mean, it seems intuitive, right? But how do you arrive at this conclusion? So it's indeed more of a hypothesis than a strong statement, I totally admit and agree. We have some experimental evidence that, if you think of AlphaZero, this is actually what's happening. Basically, if you take all the data that has been generated through the training loop of an AlphaGo-type algorithm, take the final data set and train on it, you'll get the same performance as if you had been training sequentially, basically. And so there is nothing really special in the self-play episodes; it's more about generating the right data at the end. And I think it's not just about the difficulty, it's about creating a lot of diverse data that explores the space quite nicely. And that kind of stems from having a player against which you're playing: by exploration, you dig a little bit and find new strategies that are interesting, and eventually, if you accumulate all that and train on it, you get a very good policy and value function. And that's why we say that the self-play we have in two-player games is really about having a data generation pipeline that generates good data, right? And that's why we call it an unsupervised curriculum.
And in formal math, if you have a bunch of statements that you cannot prove because your prover is just not good enough, you're just not going to get any data. You're just going to be stuck at that point. So that's kind of the main difference: there is no trivial, easy, or (to me at least) obvious way to reframe a problem that is just too hard into a set of easier problems. And it makes sense that you're trying to build up a curriculum, but also, I've displayed this here with this sort of arrow of complexity that just gets more and more complex, and it is not really the case. It doesn't really look like this, because complexity isn't just in one direction. It's not just that one statement is more complex than another; there's also a direction. I think if I want to work myself up to prove, let's say, the general Riemann hypothesis or something like this, I can't just prove harder and harder statements in numerics or something, because I really want to be in, I don't even know what category the Riemann hypothesis is, number theory or complex analysis. But the point is, I can't just go about proving any old theorems; I have to have some sort of direction. So, and you make a little bit of a point that manual curation might help here and so on, but what's the main force in your system driving the direction in which the system becomes an expert? Because there are so many directions in math, right? It's impossible that it just becomes better at everything. Yeah, so, I mean, we took the very obvious and easy way. Basically, with a formal system, you have a library of theorems that comes with it. That's what the formal community generally works on. This is what we call mathlib; it's called mathlib in Lean. And there are very few exercises, Olympiad-type exercises, even exercises at all, in mathlib. It's generally general-purpose theorems, right? And so if you train on that data only, you're actually not that good at solving exercises, because you haven't seen any. The very easy exercises you'll be able to solve, but the somewhat hard ones, not at all. And so we had that miniF2F benchmark, which is made of exercises, Olympiad exercises, that we cared about for many reasons that we can dive into. And we took the easy way, which is: let's just formalize a bunch of statements around that benchmark that we care about. And we did the most obvious thing: we took the textbooks that humans use to train for those competitions and formalized everything out of them. And we didn't ask ourselves many more questions than that. The reason why it works is because it's a textbook, so there is a bunch of easy examples to begin with, and the difficulty has been ramped up nicely for humans. And so as we formalize the statements, we run our expert iteration loop on them. And as you mentioned in that illustration, you get a few statements first, but you retrain on them to get a few more, et cetera, et cetera. And as you do it, the way I visualize it is that you're really shifting the distribution of the model away from mathlib and towards miniF2F, or towards the group of statements that you provided as a curriculum. And so it is that curation that gives the direction. In terms of direction, you're very right that it's a challenge. Something that you can do, as an example, in formal math is forward proving.
Instead of going backward, as you said, you take things that you know and try to compose them with theorems that unify with the things you know, and you keep going forward like that. And we've tried generating some data this way. And that data, I mean, you cannot direct it easily, so it goes a little bit all over the place, and we haven't found a way to make it beneficial for targeting a particular benchmark that we care about. Do you see maybe a future where, you mentioned the lack of self-play, but there could be some sort of an agent that comes up with these intermediate statements, these curriculum statements, that sort of tries to guess: maybe here is a statement that's kind of in between where you want to go and where you are currently? This could be some sort of, I mean, I'm never sure, because a lot of times when people propose these agents, it's like, well, if you have that agent, you've essentially solved the problem, right? But there could be some sort of thing that replaces you, the human who has to come up with this curriculum. But I guess it's a bit of a future thing. And the other avenue where I see... sorry, I'd like to jump in on this one, just for a second. It is plausible that we could build a model, I mean, it's theoretically plausible that we could build a model that creates those intermediate statements. There are two challenges here. The first one is that the number of statements that we have is actually extremely small. When you look at the proof data in formal math, and I didn't mention it before, but it's a good thing to mention: one challenge of formal math is that data is extremely scarce. The proof data is scarce, and the statement data is even scarcer. Mathlib is something like 60k statements, 60k context lengths worth of things. And the curriculum we use is a few hundred. And so to train an agent to try to generate those intermediate statements, the data that you have access to is basically non-existent by modern language modeling standards. So that's a really big challenge. One thing that I think is extremely exciting, and again, same idea, just make it simpler, is probably machine translation from informal statements to formal statements. Like the work that we've been doing: try to harvest a lot of informal statements, there are many more of those out there, and try to auto-formalize them. Formalizing a statement is actually much easier than formalizing a proof. It's still challenging, but definitely much easier. No, no, no, sorry for jumping in. So with respect to that, yeah, I was also thinking you could take all sorts of the math that's out there, but that's obviously also curated by humans a little bit. The other point of controlling things would be the language model. There's a lot of work in prompt engineering and things like this. Now, your language model, maybe we can go a little bit into how you train and query the language model, which I think might need, or might benefit from, a bit more explanation, because I was quite vague here, right? Essentially, you have two different types of inputs that you train the language model on: the one you call the proof step objective, and the other one you call the proof size objective. And both of them have a declaration and a goal. Do you want to maybe give us a little bit... because for the declaration, I was like, yeah, it's kind of like the things you have access to.
Do you want to maybe give us a bit of insight into what these things are? Yeah. So if we go back to, if we think about your schema of proving backwards: the goal is the current goal that you want to prove, and the proof step is the tactic that you want to apply. So this is really mapping exactly the process of generating a tactic to try to simplify the current goal. Sorry, the goal, so if I'm here, right, the goal would be the top thing, this one right here, and the tactic would be one node, one link to sort of the next node? Okay, to a new goal. Yeah, exactly. But then this could also be the new goal, and then these could be the proof steps, or... okay, okay. Yes, exactly. In your diagram, the lines are the tactics and the circles are the goals. And in Lean, you actually have just one goal; the tactic goes back to another goal, because sometimes some tactics can create multiple subgoals. But because you could say, hey, I want to introduce that cut (the cut is kind of a mini conjecture inside a proof), Lean kind of stacks them together. So technically speaking, there's only one node at each end of each line. Okay. Yeah, exactly. The final proof looks like a chain, and the proof search looks like a tree. And the decl, we condition on the decl name. So the decl name is the declaration name, and it's simply the theorem name or the exercise name. And the motivation here is to provide proxy information to the model about the state of the formal environment at this stage, because the actual formal environment is gigantic and there's no easy way to represent it in a compact way. You have all the imports; you have all the theorems that have been defined in the same file before the very theorem you're trying to prove right now; you have a bunch of definitions, et cetera. And if you wanted to represent all of that to the model, it would be technically challenging and, more importantly, really big. So instead we just give it the name of the theorem, and we kind of hope that it'll provide signal to the model as to which theorems it has access to for this one, because it has been trained on theorems that are close to this one, and the names of theorems are somewhat similar and related, they were in the same file, et cetera, et cetera. So it's really kind of a trick to try to infuse a little bit of information about the environment. How can we imagine such a name? Is this a human-readable name, or is this more like, you know, theorem 2845.8? No, no, it's somewhat readable, for the experts at least. It's like floor-smaller-than-floor-positive, some kind of stuff like that. It's a little bit compact, but it's still readable. And for the exercises that we use, it's actually just the name of the competition, the year, and the exercise number. And the proof step, that would be the tactic itself. How is a tactic described? Is this an index into some bucket, or is it also a piece of text, or...? Yeah, it's described in the appendix: the tactic is really a function call. You're calling the tactic, which is a meta-program. As an example, this apply tactic is very trivial: it just says, try to apply that theorem to the current goal. But you have much more advanced tactics. And that tactic takes an argument: you not only have to pick your tactic, there are only a few of those, but you actually have to provide an argument. So here it's a theorem name. There are many more of those, but still finitely many. This here is a theorem. And then you will... oh yeah, here you go. Yeah. Okay, not-prime, I see. Yeah, so that's a typical theorem. So that's the declaration name that we would condition on if we wanted to try to prove it. And you have to apply it; here it's applying the theorem by providing a first argument to the theorem and then looking at one side only.
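Schematically, both objectives serialize the declaration name and the goal, followed by the target, into plain text for the language model. The sketch below is an assumption on my part: the keyword tokens, the declaration name, and the goal string are illustrative placeholders rather than the paper's exact serialization, which is given in the paper's appendix.

```python
# Hypothetical serialization of the two training objectives; the tokens,
# declaration name and goal string are placeholders, not the exact format.

def proofstep_example(decl: str, goal: str, tactic: str) -> str:
    # Proof step objective: (declaration name, current goal) -> next tactic.
    return f"DECL {decl} GOAL {goal} PROOFSTEP {tactic}"

def proofsize_example(decl: str, goal: str, size_bucket: int) -> str:
    # Proof size objective: (declaration name, current goal) -> estimated
    # remaining proof size, which acts as a learned value function.
    return f"DECL {decl} GOAL {goal} PROOFSIZE {size_bucket}"

# Exercise declaration names: competition, year, problem number.
print(proofstep_example("imo_1964_p1_2", "⊢ ¬ 7 ∣ 2 ^ n + 1", "intro h"))
```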
And so all of that kind of explodes the action space, obviously. And the action space is actually infinite, because some tactics take mathematical terms as arguments, and those mathematical terms don't necessarily exist in the context. If you're trying to prove an existential statement, often the easiest way is to provide a witness, and the witness is generally not in the statement, so you have to generate it. And that's the reason why the action space is actually infinite. And that's the major difference between neural proving techniques and the classical automated-reasoning theorem-proving techniques: those are extremely powerful, but there's one thing they cannot do, and that's generating exogenous mathematical terms. And in this case, your language model would directly suggest such tactics to apply. So you would sample from the language model, and it would suggest a bunch of things. The language model generates the full string here, the apply call together with the theorem name and its arguments. And so we generate a number of those, and that gives us an approximation of a potentially interesting action space to explore. And on top of that, we run a proof search. How does the proof size come into this? Because, I was a little bit... you already have some sort of log-likelihood estimation, I would guess, for the things that you sample. But then you also have this value, some sort of value that you assign to how long you think a proof is going to be. Yeah. So the proof size objective takes the declaration name and the current goal and tries to estimate the size of the proof for that goal. And that's really just an instance of a value function; that's the one that we've used here, and it really helps in guiding the proof search. When you don't have the value function yet, so in your review you mentioned that we bootstrap from theta zero, which is the first model, the one that is only trained on proof steps, when we don't have a value function available, what we do is the same proof search, but we prioritize by log prob, as you said. What we use is the cumulative log prob that it took for us to apply the different tactics all the way to the current goal, which is another flavor of a value function. A bit of a beam-search type of thing? Yeah, it's a beam tree depth search.
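As a rough picture of the search just described, here is a best-first search sketch in Python. The model and lean interfaces and all hyperparameters are hypothetical scaffolding meant to show the shape of the algorithm, not the paper's implementation:

```python
import heapq
import itertools

def best_first_search(root_goal, model, lean, max_expansions=512, n_samples=16):
    """Expand the most promising open goal first, guided by the model."""
    # Priority is the model's proof-size estimate (the value function), or,
    # before a value function exists, the cumulative log prob of the tactics
    # that led to this goal. heapq pops the smallest item, hence the negation;
    # the counter breaks ties so goals never need to be compared directly.
    counter = itertools.count()
    frontier = [(-model.priority(root_goal), next(counter), root_goal)]
    for _ in range(max_expansions):
        if not frontier:
            return None                              # nothing left to expand
        _, _, goal = heapq.heappop(frontier)
        # A finite sample of tactic strings approximates the infinite action space.
        for tactic in model.sample_tactics(goal, n=n_samples):
            outcome = lean.run_tactic(goal, tactic)  # the formal system checks the step
            if outcome is None:
                continue                             # tactic failed: no data, just compute
            if outcome.proof_complete:
                return outcome.proof                 # all goals closed: proof found
            # Lean stacks any remaining subgoals into a single new goal state.
            heapq.heappush(frontier,
                           (-model.priority(outcome.goal), next(counter), outcome.goal))
    return None                                      # search budget exhausted
```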
Okay. So I think we got a good idea of how the search itself works. And you keep going until you prove statements, and then you do these expert iteration steps, right? Which essentially consist of: you try to prove new things, you add them back to the data set, and you train a new model on it. What I was kind of surprised by is that you always train from this initial model that you have right here. So you create your new data sets and you always train from that. What prevents you, or what's the reasoning behind, not just always continuing to train from the most recent model? Yeah, there are two motivations, two rationales, for that. The first one is that it makes controlling for overfit much easier, because you're really training from scratch, in a sense, and so you control overfit on your validation set much more cleanly. If you train iteratively, the behavior of your validation loss has a tendency to be quite erratic and unpredictable, which makes controlling for overfit much less obvious. So that's the one thing; it's basically for scientific convenience, in a sense. The other thing is that it gives us an opportunity to deduplicate the data aggressively. The reason why that's important is that, to be honest, to generate those proofs we sample the proof search a lot. For some easy statements, we can find thousands of different proofs. And so the goal is to take all the proofs that we've found so far and deduplicate as much of them as possible, to prevent nefarious overfitting behaviors in the training. So those are really the two main motivations for training from scratch. Again, in formal math, data is scarce, so those data sets are not that big, even when we generate a lot of data, and so training doesn't take that much time. So it's actually really fine to train from scratch in each iteration.
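Putting the pieces together, the loop as described (search, collect verified proofs, deduplicate, re-train from the same base model) might be sketched as follows, reusing the best_first_search sketch from above; the helper names are again placeholders, not the paper's code:

```python
def expert_iteration(theta_0, statements, lean, n_iterations):
    """theta_0 is the base model fine-tuned on the initial proof step data."""
    best_proofs = {}                                  # statement -> shortest proof found
    model = theta_0
    for _ in range(n_iterations):
        for stmt in statements:
            proof = best_first_search(stmt, model, lean)
            if proof is None:
                continue                              # unproven statement: no data generated
            # Aggressive deduplication: keep one (shortest) proof per statement.
            if stmt not in best_proofs or len(proof) < len(best_proofs[stmt]):
                best_proofs[stmt] = proof
        data = base_training_data() + extract_proofsteps(best_proofs)
        # Restart from theta_0 each iteration rather than continuing from the
        # last iterate: cleaner overfit control and a cleanly deduplicated set.
        model = train_from(theta_0, data)
    return model
```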
One second. So you say you have easy statements, and you're able to find a lot of proofs for them; and you have hard statements, which are difficult to reach. But you still said at the beginning that for all the statements you are attempting to prove, you essentially already know that they're provable, right? And even the ones in the curriculum, the ones you take from the textbook: I think textbooks don't try to trick you with exercises that ultimately don't work out. What would change here if you were to go about proving something you don't know is even provable? Obviously, you also don't know the statements in between that might lead up to that. How would that look, proving something that isn't proven yet? Okay, so I think there are two questions there: what would happen if you injected statements that are potentially false or even undecidable into the mix, and what would it take to try to prove something that we don't really know is provable yet. That's at least the way I understood the question. If we inject statements that are not provable, that are false or undecidable (same difference to us, at least in the context of one formal system), what happens is that nothing happens. There's no data generated, so you're just wasting compute; you're really just wasting compute on those statements. And that's going to be a challenge if we think back about automating the generation of statements, because that's going to be a noisy, imperfect process. And so whether it's going to be useful for the expert iteration process is really a function of the ratio of statements that are actually provable versus unprovable. If your automated translation system generates one provable statement out of 20, and 19 are unprovable, you're just going to waste a lot of compute trying to prove things that are not going to generate any data for you. So that's going to be a challenge if we want to apply machine translation. And then, proving something... what do you mean by proving something that's not known to be provable? Is it like trying to prove a conjecture? Yes: you want to solve a conjecture that exists, but no one knows; we think it's provable, which we do with most conjectures, but no one knows for sure. And now it's up to you, and someone comes to you and says, well, let's use your system. How would you go about that? How would you build the curriculum? What would change, maybe, in the data collection? There are some conjectures that we can hope do not require inventing new math. So there may be some conjectures that are eluding humans despite being very close to us; it's just one trick away. And for such conjectures, imagining a system that is much more powerful than what we have today, let's say one that beats humans at competitions, you could just take your best system, take the conjecture, and search for a lot of time. And you may have a hope of finding a proof that has eluded humans because it was really tricky, but where you didn't need new theorems, you didn't need new definitions. But for most of the conjectures that are out there, there is good reason to believe, at least looking at them directly, that they're going to require new mathematical concepts to be proven. And that exercise, which is the mathematician's exercise of defining new concepts, is something that we're not even considering yet as a problem. It's a whole different problem. And to be honest, I think it's a task that will more likely happen first in the informal realm rather than in the formal realm. It feels like the informal realm is a better space in which to try to come up with new concepts, and maybe then we'll have good autoformalization, and then we can use a formal prover to prove all the things that we conjectured, et cetera. But that's something that is really far away from us. You could sort of abuse the language models, maybe, to go a step further, let's say. You always have your declaration and your goal, and you generate the proof step. Could you also maybe just input a declaration, a theorem name that you think might conceivably exist, and then let the system come up with a goal by itself, even? So even the statement to be proven? We've tried that. It definitely works. You can let the model generate goals that are valid and that it can then prove. You can even orient... we were talking about how you orient your work towards stuff that interests you... you can definitely, in that case, prompt the model as to where you're interested in exploring, via the declaration name. You can make up funky names that look like analysis, or funky names that look like group theory, or even funky names that look like math Olympiad problems, and the model will definitely and gladly conjecture statements. It's actually conjecturing all the time, in a way that is unfortunately not leverageable when we do proof search. When we do proof search, the way we refer to theorems that exist is by declaration name, not by the statements themselves, in Lean at least. And all the time, in every proof search, the model will just invent a theorem by name, and the name looks really legit. It really should be in mathlib, because it's just a missing API: the name is generally very interpretable, and the model thinks it should be there. So that kind of conjecturing behavior really exists in the model today, and it is probably leverageable in interesting ways. It's a bit crazy, because that is really how I think mathematicians go about proving something. They're at some statement and they say, well, here I need some inequality that relates these two things to each other. And essentially that is exactly coming up with the name of a theorem like this; the name would be something like this-greater-than-that. It's crazy. We can actually extract from mathlib what we call type elaboration. Type elaboration means taking the name of a theorem and inferring its type; and in type theory, the type is the statement itself.
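That name-to-statement direction can be seen directly in Lean, where elaborating a declaration name recovers its type. A minimal illustration:

```lean
-- Ask Lean for the type of a declaration name.
#check nat.add_comm
-- Lean prints: nat.add_comm : ∀ (n m : ℕ), n + m = m + n
-- The name alone elaborates to the full statement.
```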
We can train models on type elaboration. We could have them conjecture names while we proof search, then take those names and try to type-elaborate them. That gives us a statement, and then we try to prove that statement. That's something we haven't explored yet. It sounds crazy. Given the direction of these automated systems that can essentially generate data for themselves, if you introduce something like this, I'm pretty convinced it could get us a whole lot further. I mean, how fast have these Go and chess algorithms become? They became human-level, and one month later they were totally superhuman. It happened in an instant, which is crazy. My question would be a little bit: this is a machine, the formal machine, and you have the humans on the other side. Is there a good way for the two to work together? It seems like they have complementary skills: one can search and try to prove things very quickly, and the other one maybe has more of that idea of introducing new math and so on. Is there a tight way in which the two can work together, or will it always be, well, we have to translate from one domain to the other? There's definitely a way. We actually released our early models, it was almost a year ago, to the Lean community, through a tactic that is called gptf: a formalizer could call gptf, and gptf would answer with suggestions of things to try. It's broken and clunky in many ways, and there's a technical challenge, which is that the math library advances every day, so the models can rot quite rapidly. For research purposes, it's very convenient for us to just say, for the next three months we're going to work on that commit and not look at what's happening out there. But if you want to provide value to the community, you have to stay fresh, which is more of an engineering challenge than anything else. But it's definitely the plan to provide our models to the community. To be honest, for anybody working on formal math and ML, thinking about that just makes sense, because formalization is, well, not that hard, but it's time-consuming. So if our models can speed up formalization by an order of magnitude, that would be just tremendous. And right there, there's already a very nice symbiosis, as you say: if we speed up formalization by 10x, or even by 2x, people will formalize much more stuff, we'll get much more data, and we'll get better. It's a loop that goes through people committing stuff to Mathlib and us eventually injecting it back, so it's a very long loop, but it's a loop that we plan to try to set up. Yeah, I mean, I think that would be sort of the best-case outcome right here: that there is this symbiosis of the machine helping the humans and so on, before it eventually outperforms them and makes mathematicians useless. Oh yeah, we're far away from that anyway. Maybe a last technical question from my side. It seems like in such an iteration process, you said, for example, for easy statements we can find thousands of proofs, and you do some deduplication to reduce the number of proofs: if two proofs are equivalent, you take the shorter one, which is very sensible. But still, how do you avoid that most of the data that you add back to the data set is kind of useless? Because given, like, three basic facts, a mathematician can probably prove 16 things, right? And only very few of them are going to be valuable for advancing towards my ultimate goals.
Like, how do you make sure that what you add back to the data set actually has some sort of value to the expert iteration? So the explosion of statements and proofs into a lot of noisy and uninteresting stuff generally comes when you do forward proving. If you do backward proving, you're really bounded by the statements you're trying to prove. You might find thousands of different proofs for something easy, and all those thousands vary just because the model decided to name a variable differently, so they're not that interesting; and there we have much more work to do on smarter deduplication. But really, and this is the main advantage of working on formal math, because that data has been verified by the formal system, we know it's legit. That's one key, massive advantage that we have for exploring interesting research ideas, compared to other domains: we can lean on the verifier to make sure that we only use legit data, even if it's the model that generated it. And I think that's key here. And generally speaking, empirically it has always felt like training, basically gradient descent, is about compression, and the training process is actually good at sifting through repetitive (not necessarily repetitive, but somewhat similar) data. And so having a lot of different proofs is actually generally beneficial. I guess the story of deep learning is that the more, the better, whatever it is. I've not gone too much into the results, other than saying that expert iteration obviously helps you to prove much harder statements compared to just the plain proof search, whether you adjust for compute or not. It's also interesting that the larger the models, whenever you scale up stuff, essentially, the better it gets. Is there anything in the experimental results that maybe I haven't touched on and that you would like to highlight specifically? Well, I think you really covered it well. One result that you almost touched on, one question that is unanswered in the paper, is: we do include the synthetic inequalities in the final experimental setup targeting miniF2F. And actually, I've run the ablation of that, and they don't help that much on miniF2F. I mean, that's not that surprising. If you remove them and plot the curves against miniF2F, you really get quite similar results. There are a few inequalities that have been solved that are challenging. And it's always tricky, because the graph tells you that it's roughly the same, but then when you look at the proofs, you feel like they've been learned through the curriculum on synthetic inequalities. So that's the reason why we kept it in. And I think it does unlock a few problems, but it's a few problems at the margin, so it's hard to be sure just by looking at averages. And one interesting thing, of course, is, as you say: as you scale your compute, whether you scale in model size, or in the number of attempts, or in the depth of search, you always get better. It really seems, and it's true of most of recent deep learning, that performance is really a function of the compute that you efficiently pour into the system. Though we've been very surprised many times that model size scaling is hard to leverage. We know those larger models are so much smarter when you interact with them directly. You ask questions of GPT-3 and it's qualitatively better than GPT-2, right? And here, we are at the GPT-1 or GPT-2 kind of size.
And so common wisdom would say, GPT-1 or 2, that's just dumb, right? So why not use a GPT-3-sized model, since we're talking about math? And really, what we've seen empirically, and that's potentially because of bottlenecks in our setup that we haven't yet correctly identified, is that you don't need that big of a model to be efficient. It's actually detrimental to scale the model size, because then your proof search becomes much more compute-intensive; in terms of FLOPs allocation, it's much more efficient to sample many more times from a smaller model. And it tells you something quite interesting: it tells you that the smaller model is not much less smart than a larger model; it's just that its distribution is not as crisp. And here, because we have the verifier and we can sample many times, we can pick the good samples out of a small model by trying many times. Maybe that changes... and it's only because we have a verifier... when you go to really hard math statements; maybe at some point you really need the large models, but who knows. I'm also a bit interested in the process of the research itself. Seeing a final paper is always really nice and cool: wow, your model does all these things. Were there particular low points during the research as well, particular moments where you thought, this isn't going to work out after all, or things like this? Maybe some you would like to share, so that it helps other people, because I think most people find themselves in spots like that. Yes, definitely. To be honest, we've been quite lucky with that project, in the sense that there have been some low points, but at any point in time, looking back three months into the past, we always felt like we had made good, motivating progress over those three months. But there have obviously been a lot of struggles at many times. I think research, at least the way I see it, is a lot about struggling for quite some time on some problems; there's a reason why you really have to care about the problem you're working on, to be able to go through that struggle. It's actually the same as a startup, in a sense: you really have to care enough to be able to go through the struggle. To give you an idea, I started working alone; there was nobody else working on the project with me. When I started, I really just took a language model and a data set of tactics that I exported from, it was Metamath at the time. Nobody had any idea whether a language model was capable of generating a tactic, because the syntax is so precise when you're talking about interacting with a formal system. There were no code generation results at the time. It really was an open question whether a language model is good enough to generate syntactically correct formal sentences, in a sense. And the first win was really that: you train your model, you start sampling, you just look at your sequence accuracy, and you see that it's not zero. Right there, that doesn't prove anything, and it's far from being able to prove anything, but it's a massive win. You're like, yes, language models can generate formal statements. That was really the start. Then, leading to the first paper, the first GPT-f paper, the two key moments were deciding, okay, let's try to scale the model size, and seeing that scaling is really beneficial.
It's, as we discussed, not as clear, but if you're just looking at performance as a function of model size, you see very nice scaling if you don't adjust for compute, basically. That's something that is quite motivating and exciting, because it's the trend of the domain in many aspects. The key finding of the first paper, the one that was really a motivation to continue working, was pre-training. You talked about that in the review and you had some questions, but pre-training really helps a lot and transfers very beneficially to formal math. That's the bulk of that first paper. Then after the first paper, you're like, oh, we have a nice result. We've shown that language models can do some formal mathematics, but we were still completely unable to prove Olympiad problems at all, even the really easy ones. That's really what we started working on. There, it was also a long struggle, I think, until we just decided to bite the bullet and formalize some statements ourselves to generate that curriculum, which really unlocked new capabilities and led to the work that we've shared. Is there anything about the paper that you want people to take away? Maybe you can look also a little bit beyond math: what does this tell us, or anything you'd like people to know? The main takeaway I want to share is why we look beyond math, but first, why formal math is awesome. I think we covered that quite nicely, but to me, the main reason is that it's reasoning-complete. If you get a really impressive result in formal math, you're really confident that you have a very impressive result in reasoning. Another interesting aspect is that it's inherently a safe setup. A lot of people are talking about safety, and this is one of the last harbors where we're not yet at all at human level, yet it's safe to push as hard as you can because it's like games, but in a formal system there is no escape hatch. And finally, the reason why I think it's so exciting is that it lets you combine a language model with a formal verifier, and so you're really getting the best of both worlds. Language models are really impressive in what they can generate, but even GPT-3, if you ask it for a few deductive steps, falls off really rapidly. They are capable of interesting one-step reasoning, but not multi-step reasoning. And it's when you tie them to a verifier that you basically get the value of multi-step reasoning, by interacting with the verifier that is there to check each prediction. That's, I think, what is really exciting here. The verifier almost gives you the internal monologue that humans have when they think. It's hard to imagine a language model thinking hard for the duration of one context size, right? Yet here we do have that kind of property, which is exciting. And finally, the reason why I'm super excited about it goes beyond math, in a sense. I think that's the reason why OpenAI is really a great place to work on this, because it's really aligned with our mission and how we want to execute it. The reason is that I think if we crack formal math, we really will be providing a blueprint on how to infuse much more reasoning into large informal language models. And so I really see it as a small experimental lab where we can study reasoning, when we know that reasoning is still lacking in those very large language models.
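To illustrate the "best of both worlds" coupling described above, here is a minimal greedy loop in which the model proposes one tactic at a time and the verifier either rejects it or returns the simplified goal; the verified trace is what supplies the multi-step reasoning a bare language model cannot sustain. This is a sketch with assumed APIs (`propose_tactic`, `apply`), deliberately simpler than the paper's best-first tree search:

```python
def verifier_guided_proof(model, formal_env, goal, max_steps=128):
    """Greedy model-verifier interaction: every step is checked by the
    formal system before it is committed, so the accumulated trace is
    guaranteed-correct multi-step reasoning. (Illustrative sketch only;
    the actual system searches a tree of such interactions, guided by
    the proof-size value function.)"""
    trace = []
    for _ in range(max_steps):
        tactic = model.propose_tactic(goal)       # hypothetical API
        outcome = formal_env.apply(goal, tactic)  # hypothetical API
        if outcome is None:     # tactic failed to apply; no backtracking here
            return None
        trace.append(tactic)
        if outcome.solved:      # no goals remain: proof complete
            return trace
        goal = outcome.goal     # continue from the simplified goal
    return None
```

The verifier call inside the loop is what plays the role of the "internal monologue" mentioned above: the model never has to hold the whole argument in one context, because each committed step has already been checked.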
And so that's really what excites me, and I think it will transfer nicely. You have formal math, you have code generation in the middle, because you have unit tests, but beyond unit tests you cannot know for sure that your program is correct, and then you have fully informal setups where you just cannot verify your predictions. I think that wraps it up pretty nicely. Stan, thank you very much for being here. This was really cool.
[ { "start": 0, "end": 9.32, "text": " Hello there, this is an interview with the first author of the paper, Formal Mathematics" }, { "start": 9.32, "end": 15.540000000000001, "text": " Statement Curriculum Learning, in which an automated system was able to solve two problems" }, { "start": 15.540000000000001, "end": 18.1, "text": " of the International Mathematics Olympiad." }, { "start": 18.1, "end": 23.98, "text": " Now, this is an unprecedented level of skill in formal mathematics for an AI system." }, { "start": 23.98, "end": 28.94, "text": " The system uses language models in combination with a technique called expert iteration to" }, { "start": 28.94, "end": 33.72, "text": " build itself a harder and harder curriculum of theorems to prove." }, { "start": 33.72, "end": 39.36, "text": " Now, if you haven't seen it, I've made a comprehensive paper review about this paper in the last" }, { "start": 39.36, "end": 40.36, "text": " video." }, { "start": 40.36, "end": 44.68, "text": " So be sure to check that out because Stan, the author who I'm interviewing today, has" }, { "start": 44.68, "end": 46, "text": " seen that video." }, { "start": 46, "end": 48.6, "text": " So we all start from a common level." }, { "start": 48.6, "end": 53.8, "text": " Stan is able to directly respond to any criticisms and questions that I had during the paper" }, { "start": 53.8, "end": 54.8, "text": " review." }, { "start": 54.8, "end": 59.239999999999995, "text": " And we go into the details into the behind the scenes of the research, what didn't work" }, { "start": 59.239999999999995, "end": 64.84, "text": " out what problems came up, how the project came to be and what this all means beyond" }, { "start": 64.84, "end": 66.44, "text": " the domain of mathematics." }, { "start": 66.44, "end": 70.06, "text": " It is a huge privilege to have the authors of these papers on here." }, { "start": 70.06, "end": 73.52, "text": " And I want to get the most information that I can out of them." }, { "start": 73.52, "end": 76.08, "text": " So please let me know how I can improve these videos." }, { "start": 76.08, "end": 80.92, "text": " Let me know in the comments, leave a like if you like and I'll see you around." }, { "start": 80.92, "end": 82.56, "text": " Bye." }, { "start": 82.56, "end": 83.56, "text": " All right, everyone." }, { "start": 83.56, "end": 84.56, "text": " Hi." }, { "start": 84.56, "end": 89.96000000000001, "text": " So we're here with Stan Polu, who is the first author of the formal mathematics statement" }, { "start": 89.96000000000001, "end": 96.24000000000001, "text": " curriculum learning of the paper that uses expert iteration to end up proving two IMO" }, { "start": 96.24000000000001, "end": 103.04, "text": " problems, which I think was was very well received by everyone in the community." }, { "start": 103.04, "end": 106.88, "text": " And we're going to look at the paper, going to go maybe through some of my criticisms" }, { "start": 106.88, "end": 110.08, "text": " that I had and that I just threw out there." }, { "start": 110.08, "end": 114.67999999999999, "text": " And yeah, we're going to have we're going to hopefully inform everyone a little bit" }, { "start": 114.67999999999999, "end": 115.67999999999999, "text": " more." }, { "start": 115.67999999999999, "end": 116.67999999999999, "text": " Stan, welcome to the channel." }, { "start": 116.67999999999999, "end": 117.67999999999999, "text": " Thank you, Yannick." 
}, { "start": 117.67999999999999, "end": 120.12, "text": " Thank you very much for having me." }, { "start": 120.12, "end": 123.12, "text": " It's a pleasure to be here." }, { "start": 123.12, "end": 130.04, "text": " So this this obviously the paper, it helps that OpenAI is as a name on the paper, right?" }, { "start": 130.04, "end": 133.92, "text": " It gives it like a little bit of a boost in publicity, but still it was the reception" }, { "start": 133.92, "end": 139.94, "text": " was quite widespread, I want to say, even though it appeared, I think in the same week" }, { "start": 139.94, "end": 145.4, "text": " as some other big papers, like I think AlphaCode was in the same week or so." }, { "start": 145.4, "end": 149.8, "text": " Yet still you made quite an impression on people." }, { "start": 149.8, "end": 156.32, "text": " And do you have an idea of why sort of the paper was widely received?" }, { "start": 156.32, "end": 160.96, "text": " There have been other papers in this domain, but this was kind of special." }, { "start": 160.96, "end": 161.96, "text": " What's your impression?" }, { "start": 161.96, "end": 162.96, "text": " Yeah." }, { "start": 162.96, "end": 168.56, "text": " So, so first, yeah, you mentioned I work at OpenAI, just to give you a little bit of context." }, { "start": 168.56, "end": 171.2, "text": " So I'm a research engineer at OpenAI." }, { "start": 171.2, "end": 176.12, "text": " OpenAI is focused on building and deploying safe and beneficial AI systems." }, { "start": 176.12, "end": 181.28, "text": " It's a bit part research lab and part deployment company and I myself focus on the research" }, { "start": 181.28, "end": 182.84, "text": " lab part." }, { "start": 182.84, "end": 187.8, "text": " The release was actually the same day as AlphaCode." }, { "start": 187.8, "end": 193.64000000000001, "text": " We actually decided to go for it right after the release that work and I think it was just" }, { "start": 193.64000000000001, "end": 196.44, "text": " fine." }, { "start": 196.44, "end": 203.32, "text": " We did release a first paper before the first GPTF paper, which is reference from that paper" }, { "start": 203.32, "end": 204.52, "text": " a year ago." }, { "start": 204.52, "end": 211.92, "text": " And it didn't have much support from OpenAI because it was kind of a shadow release." }, { "start": 211.92, "end": 215, "text": " We just put the paper up there, it was a blog post." }, { "start": 215, "end": 219.2, "text": " And it did bring quite a lot of interest as well." }, { "start": 219.2, "end": 228, "text": " I think people are interested in the domain because mass seems like a frontier that we" }, { "start": 228, "end": 229.14, "text": " haven't reached yet." }, { "start": 229.14, "end": 234.39999999999998, "text": " And so any progress in that direction seems is probably exciting to most other people" }, { "start": 234.39999999999998, "end": 235.39999999999998, "text": " in the community." }, { "start": 235.39999999999998, "end": 240.48, "text": " That would be my kind of main understanding of as to why people reacted positively and" }, { "start": 240.48, "end": 242.88, "text": " are engaging with the work." }, { "start": 242.88, "end": 247.6, "text": " So you were already in this domain, you said, and I think I've also commented on this a" }, { "start": 247.6, "end": 248.6, "text": " little bit." }, { "start": 248.6, "end": 254.72, "text": " You had previous work in using language models to guide these provers." 
}, { "start": 254.72, "end": 258.6, "text": " Was this sort of a natural continuation for that?" }, { "start": 258.6, "end": 265.84, "text": " Or was there some impulse behind you tackling sort of these more challenging problems?" }, { "start": 265.84, "end": 269.71999999999997, "text": " Yes, it's really a continuation of the previous work." }, { "start": 269.71999999999997, "end": 273.88, "text": " And actually, to give you a little bit of color on all of that, I joined OpenAI two" }, { "start": 273.88, "end": 280.4, "text": " years ago, and I actually wanted to work on formal math and AI before I joined OpenAI." }, { "start": 280.4, "end": 285.48, "text": " And I did have quite an original trajectory within the field." }, { "start": 285.48, "end": 287.84, "text": " I don't have a PhD in machine learning." }, { "start": 287.84, "end": 289.94, "text": " I don't have a PhD at all, actually." }, { "start": 289.94, "end": 294.36, "text": " And I was actually a software engineer at Stripe before and eventually wanted to work" }, { "start": 294.36, "end": 302.36, "text": " on subjects that pertain to AI and decided that formal math was the things that I wanted" }, { "start": 302.36, "end": 303.36, "text": " to work on." }, { "start": 303.36, "end": 309.36, "text": " And then I found that it was well aligned with OpenAI mission and the way we were executing" }, { "start": 309.36, "end": 310.36, "text": " it." }, { "start": 310.36, "end": 313.16, "text": " And so I joined and shortly after started working on it." }, { "start": 313.16, "end": 317.28000000000003, "text": " So I've actually been working on this for the last two years." }, { "start": 317.28000000000003, "end": 320.68, "text": " And that paper is really a continuation of the first paper." }, { "start": 320.68, "end": 324.40000000000003, "text": " It's just kind of a real continuous work that we are tackling." }, { "start": 324.40000000000003, "end": 328.68, "text": " And I think we'll definitely continue working on that because those two problems are quite" }, { "start": 328.68, "end": 335.64, "text": " impressive, but we're still far away from being at best students level." }, { "start": 335.64, "end": 344.64, "text": " It is to some extent mind blowing because that system can prove statements that I'm" }, { "start": 344.64, "end": 346.8, "text": " actually myself not capable of proving." }, { "start": 346.8, "end": 353.48, "text": " I'm not a math competitor, but I did do quite a lot of math studying for engineering school" }, { "start": 353.48, "end": 355.12, "text": " in France." }, { "start": 355.12, "end": 357.6, "text": " And there are some things that I just can't prove and that this system can prove." }, { "start": 357.6, "end": 361.68, "text": " But at the same time, there's so many stuff that I find easy and this kind of proven." }, { "start": 361.68, "end": 370.90000000000003, "text": " So we were still a long way away from being able to be at best human level." }, { "start": 370.90000000000003, "end": 375.32000000000005, "text": " But still those progress have been really continuous and continuously exciting over" }, { "start": 375.32000000000005, "end": 378.08000000000004, "text": " the past two years." }, { "start": 378.08000000000004, "end": 381.94, "text": " You've seen my explanation of the paper." }, { "start": 381.94, "end": 386.92, "text": " And I think with this paper specifically, I'm not that much of an expert in the domain" }, { "start": 386.92, "end": 387.92, "text": " itself." 
}, { "start": 387.92, "end": 395.04, "text": " So I'm not too much into formal math and these sort of proving algorithms, how provers even" }, { "start": 395.04, "end": 396.04, "text": " work." }, { "start": 396.04, "end": 400.04, "text": " I've tried to explain that a little bit by building this proof tree right here." }, { "start": 400.04, "end": 406.8, "text": " Do you maybe have any more comments, any insights that could help people understand what is" }, { "start": 406.8, "end": 408.92, "text": " formal math even?" }, { "start": 408.92, "end": 411.48, "text": " How does it look from the inside?" }, { "start": 411.48, "end": 412.92, "text": " What is the main problem?" }, { "start": 412.92, "end": 415.20000000000005, "text": " How do you do things there?" }, { "start": 415.20000000000005, "end": 416.20000000000005, "text": " Of course." }, { "start": 416.2, "end": 418.32, "text": " To be honest, you really made the explanation." }, { "start": 418.32, "end": 424.4, "text": " It was really clear and I think it's a really good explanation of what's happening." }, { "start": 424.4, "end": 429.12, "text": " Formal math was kind of invented when computers came out." }, { "start": 429.12, "end": 434.12, "text": " The main problem that it tries to solve is that when you have a math paper and a very" }, { "start": 434.12, "end": 438.84, "text": " impressive proof, you only have generally a few people in the world that can review" }, { "start": 438.84, "end": 443.02, "text": " that proof because those proof are generally so complicated that only a few people can" }, { "start": 443.02, "end": 445.88, "text": " just understand those." }, { "start": 445.88, "end": 454.12, "text": " And so there's actually no way to be sure that those massive proof are indeed true." }, { "start": 454.12, "end": 458.76, "text": " That's kind of annoying because we're talking about mathematics supposed to be rock solid," }, { "start": 458.76, "end": 462.28, "text": " yet it's not the case because those subjects are so advanced." }, { "start": 462.28, "end": 468.56, "text": " And so the motivation for formal math is to say, well, let's actually encode math for" }, { "start": 468.56, "end": 473.28, "text": " computers so that computers can check every step." }, { "start": 473.28, "end": 479.84, "text": " And we're going to get rid of that problem and forever be confident in our math progress." }, { "start": 479.84, "end": 487.08, "text": " The only caveat is that because people working in formal math needs to reformat the proof" }, { "start": 487.08, "end": 493.47999999999996, "text": " in a way that computers can pass, despite a lot of automation that helps in that process," }, { "start": 493.47999999999996, "end": 497.59999999999997, "text": " it's still a very, very, very time consuming effort." }, { "start": 497.59999999999997, "end": 502.94, "text": " And so the advance of formalization of math concepts has been lagging behind the state" }, { "start": 502.94, "end": 508.71999999999997, "text": " of the art in math tremendously, but it's still starting to pick up, especially in Lean," }, { "start": 508.71999999999997, "end": 512.52, "text": " where we've seen some recent formalization of very advanced and new work." }, { "start": 512.52, "end": 518.76, "text": " But the main problem of formal math, I think, is that it's really hard to formalize." }, { "start": 518.76, "end": 521.84, "text": " And so what is formalization like?" }, { "start": 521.84, "end": 523.44, "text": " It's exactly as you stated." 
}, { "start": 523.44, "end": 527.52, "text": " You basically state your statements." }, { "start": 527.52, "end": 531.04, "text": " Stating statements once you have the right definitions is almost natural." }, { "start": 531.04, "end": 534.8399999999999, "text": " It feels a bit complicated when you look at the statements from the paper, as you mentioned," }, { "start": 534.8399999999999, "end": 538.0799999999999, "text": " but it's actually close to what you would write in English." }, { "start": 538.0799999999999, "end": 545.52, "text": " But then the proof is really completely different because you really have to contrive it in" }, { "start": 545.52, "end": 547.92, "text": " a way that the computer can understand." }, { "start": 547.92, "end": 551.4399999999999, "text": " And the way it works is, as you mentioned, it's really an interaction between the human" }, { "start": 551.4399999999999, "end": 552.4399999999999, "text": " and the machine." }, { "start": 552.4399999999999, "end": 554.8, "text": " You have that first statement, which is your goal." }, { "start": 554.8, "end": 559.5999999999999, "text": " You apply some tactics, which are the automation I mentioned, to try to help in the formalization." }, { "start": 559.6, "end": 562.88, "text": " To generally provide some direction to tactics." }, { "start": 562.88, "end": 568.6800000000001, "text": " And tactics are meta programs that are taking your directions and trying to generate proof" }, { "start": 568.6800000000001, "end": 572.96, "text": " terms, which are much lower level artifacts that are understood by the machine." }, { "start": 572.96, "end": 575.72, "text": " So they bridge between the human and the machine." }, { "start": 575.72, "end": 577.8000000000001, "text": " And you keep going like that." }, { "start": 577.8000000000001, "end": 580.2, "text": " You generally know the informal proof, of course." }, { "start": 580.2, "end": 585.9200000000001, "text": " You generally have to change it in non-trivial ways to make it provable with all the theories" }, { "start": 585.9200000000001, "end": 589, "text": " you have available and the constraint of the formal system." }, { "start": 589, "end": 592.36, "text": " And eventually you keep making progress like that with trial and error." }, { "start": 592.36, "end": 596.48, "text": " So you have the feedback from the formal system, which are your current goals, and you try" }, { "start": 596.48, "end": 600.74, "text": " and make progress this way until you, as you mentioned, you reach something that you know" }, { "start": 600.74, "end": 607.8, "text": " is true because it's already been proven or it's an axiom or it's an hypothesis." }, { "start": 607.8, "end": 612.94, "text": " You mentioned right now that people formalize by already sort of knowing the proof from" }, { "start": 612.94, "end": 617.64, "text": " the math domain, maybe." }, { "start": 617.64, "end": 623.68, "text": " Are there people that seriously prove things for the first time in the formal way?" }, { "start": 623.68, "end": 626.52, "text": " Or is it largely just a translation effort?" }, { "start": 626.52, "end": 631.48, "text": " Because I'm wondering the way your system works in proof searching, this is not necessarily" }, { "start": 631.48, "end": 636.48, "text": " this paper alone, but it seems to me proof searching, what it does is it simply traverses" }, { "start": 636.48, "end": 643.2, "text": " the tree of all possible kind of like a chess engine or so would do something like this." 
}, { "start": 643.2, "end": 652.76, "text": " And I'm wondering if you think that is similar to how humans try to go about proving mathematical" }, { "start": 652.76, "end": 658, "text": " concepts or is there some fundamental difference on how the machine does it and how the humans" }, { "start": 658, "end": 662.6400000000001, "text": " do it?" }, { "start": 662.6400000000001, "end": 670.4000000000001, "text": " In my opinion, there are some similarities and some massive difference." }, { "start": 670.4, "end": 677.56, "text": " If you know what the proof is already, it looks a little bit like a translation exercise," }, { "start": 677.56, "end": 681.92, "text": " but one that is quite challenging because you really have to generally refactor the" }, { "start": 681.92, "end": 684.28, "text": " proof in non-trivial ways." }, { "start": 684.28, "end": 692, "text": " As an example, Peter Scholes, who is a very well-known mathematician, came to the formal" }, { "start": 692, "end": 696.76, "text": " community and said, I have that new proof that I'm super excited about, but it's kind" }, { "start": 696.76, "end": 699.72, "text": " of complicated and I want to make sure that it's true." }, { "start": 699.72, "end": 704.52, "text": " Please help me or please formalize it so that we can know for sure." }, { "start": 704.52, "end": 712.96, "text": " And that effort, it's a kind of 10 dozen of page PhD of math, so it's not that big." }, { "start": 712.96, "end": 719.84, "text": " And I think the effort took six months or a bit more to dozens of people." }, { "start": 719.84, "end": 724.6, "text": " So it's not just translation because generally you have definitions that are missing and" }, { "start": 724.6, "end": 729, "text": " so you need to add them, you need to create the theories that are missing, etc." }, { "start": 729, "end": 731.96, "text": " It's a very complicated book." }, { "start": 731.96, "end": 735.84, "text": " And so that's one of the main differences between what we're doing and what a mathematician" }, { "start": 735.84, "end": 737.6, "text": " do actually." }, { "start": 737.6, "end": 742.6, "text": " Today we are really focusing on proving theorems at fixed theories in a sense that we are" }, { "start": 742.6, "end": 748, "text": " tackling Olympiad problems for which we know that all the theorems and the definitions" }, { "start": 748, "end": 752.44, "text": " that we'll need are already proven in the formal system in a sense." }, { "start": 752.44, "end": 756.72, "text": " But when a mathematician is doing his job, he's not spending his day proving stuff." }, { "start": 756.72, "end": 763.4, "text": " What a mathematician do most is actually coming up with new definitions, new objects, finding" }, { "start": 763.4, "end": 767.6800000000001, "text": " correlations, finding a link between those definitions and those domains." }, { "start": 767.6800000000001, "end": 770.36, "text": " That's something that we're actually not tackling at all today." }, { "start": 770.36, "end": 776.12, "text": " We're really focusing on trying to solve exercise rather than creating new theories." }, { "start": 776.12, "end": 784.36, "text": " And so the main thing is essentially knowing which tactic do I need to apply to use the" }, { "start": 784.36, "end": 789.64, "text": " existing theorems that I have or the existing concepts that I have in order to prove the" }, { "start": 789.64, "end": 793.12, "text": " particular statement." 
}, { "start": 793.12, "end": 795.12, "text": " You say there are two main problems right here." }, { "start": 795.12, "end": 800.72, "text": " So there's first this infinite action space thing." }, { "start": 800.72, "end": 808.4, "text": " And this can be solved by having this search be guided by whatever language model you use." }, { "start": 808.4, "end": 815.68, "text": " People I think know this from AlphaZero type algorithms, right, where we use some sort" }, { "start": 815.68, "end": 818.1999999999999, "text": " of a neural network to guide that search." }, { "start": 818.1999999999999, "end": 820.84, "text": " And this is already a little bit in your previous work." }, { "start": 820.84, "end": 825.42, "text": " But then the other thing you mentioned is you have no direct self-play setup, which" }, { "start": 825.42, "end": 830.4399999999999, "text": " obviously is very helpful in these types of automated things in these search procedures" }, { "start": 830.4399999999999, "end": 835.6, "text": " if you have like some adversary that's playing against you and both get better at the same" }, { "start": 835.6, "end": 836.6, "text": " time." }, { "start": 836.6, "end": 842.48, "text": " So in this question here, you make a statement that says this paper focuses on the second" }, { "start": 842.48, "end": 843.48, "text": " problem." }, { "start": 843.48, "end": 848.48, "text": " Our basis for addressing it is the observation that the key role of self-play is to provide" }, { "start": 848.48, "end": 851.16, "text": " an unsupervised curriculum." }, { "start": 851.16, "end": 854.58, "text": " And the statement just kind of stands here as such." }, { "start": 854.58, "end": 856.2, "text": " You kind of claim this." }, { "start": 856.2, "end": 858.44, "text": " Do you want to comment maybe a little bit?" }, { "start": 858.44, "end": 861.1600000000001, "text": " I mean, it seems intuitive, right?" }, { "start": 861.1600000000001, "end": 866.0600000000001, "text": " But how do you arrive at this conclusion?" }, { "start": 866.06, "end": 870.76, "text": " So it's indeed more of an hypothesis than a strong statement." }, { "start": 870.76, "end": 875.1999999999999, "text": " I totally admit and agree." }, { "start": 875.1999999999999, "end": 884.1999999999999, "text": " We have some experimental evidence that if you think of AlphaZero, it's actually what's" }, { "start": 884.1999999999999, "end": 885.1999999999999, "text": " happening." }, { "start": 885.1999999999999, "end": 889.4, "text": " But basically, if you take all the data that has been generated through a training loop" }, { "start": 889.4, "end": 894.9599999999999, "text": " of an AlphaGo type algorithm, if you take the final data set and train on it, you'll" }, { "start": 894.96, "end": 901.12, "text": " get the same performance as if you've been training sequentially basically." }, { "start": 901.12, "end": 909.8000000000001, "text": " And so there is nothing kind of special in self-play episodes basically." }, { "start": 909.8000000000001, "end": 913.9200000000001, "text": " It's more about generating the right data at the end." }, { "start": 913.9200000000001, "end": 919, "text": " And I think it's not just about the difficulty, it's just about creating a lot of diverse" }, { "start": 919, "end": 922.6800000000001, "text": " data that explores the space quite nicely." 
}, { "start": 922.68, "end": 927.52, "text": " And that kind of stems from having a player against which you're playing and by exploration," }, { "start": 927.52, "end": 931.04, "text": " you dig a little bit and find new strategies that are interesting." }, { "start": 931.04, "end": 934.16, "text": " And eventually, all that, if you accumulate all that, you train on that, you get a very" }, { "start": 934.16, "end": 936.52, "text": " good policy of value function." }, { "start": 936.52, "end": 942.0799999999999, "text": " And I think that's why we say this is that the self-play that we have in two-player games" }, { "start": 942.0799999999999, "end": 950.28, "text": " is really about getting a data generation pipeline that generates good data, right?" }, { "start": 950.28, "end": 953.36, "text": " And that's why we call it an unsupervised curriculum." }, { "start": 953.36, "end": 957.9599999999999, "text": " And in formal math, if you have a bunch of statements that you cannot prove because your" }, { "start": 957.9599999999999, "end": 961.36, "text": " program is just not good enough, you're just not going to get any data." }, { "start": 961.36, "end": 964.3199999999999, "text": " You're going to just be stuck at that point." }, { "start": 964.3199999999999, "end": 966.24, "text": " And so that's kind of the main difference." }, { "start": 966.24, "end": 968.12, "text": " There is no way to reframe." }, { "start": 968.12, "end": 973.0799999999999, "text": " I mean, there's no trivial or easy or obvious to me at least ways to reframe a problem that" }, { "start": 973.0799999999999, "end": 975.76, "text": " is just too hard into a set of easier problems." }, { "start": 975.76, "end": 981.2, "text": " And it makes sense that you're trying to build up a curriculum, but also I've displayed this" }, { "start": 981.2, "end": 986.12, "text": " here with this sort of arrow of complexity that just gets more and more complex." }, { "start": 986.12, "end": 987.96, "text": " But it is not really the case." }, { "start": 987.96, "end": 992.92, "text": " It doesn't really look like this because complexity isn't just in one direction." }, { "start": 992.92, "end": 998.6, "text": " It's not just a statement is more complex than another one, but there's also a direction." }, { "start": 998.6, "end": 1005.22, "text": " I think if I want to work myself up to prove, let's say, the whatever, general Riemann hypothesis" }, { "start": 1005.22, "end": 1012.08, "text": " or something like this, I can't just prove harder and harder statements in numerics or" }, { "start": 1012.08, "end": 1015.72, "text": " something because I really want to be in, I don't even know what category the Riemann" }, { "start": 1015.72, "end": 1021, "text": " hypothesis number theory or complex analysis." }, { "start": 1021, "end": 1026.88, "text": " But the point is I can't just go about just proving any old theorems." }, { "start": 1026.88, "end": 1030, "text": " I have to have some sort of a direction." }, { "start": 1030, "end": 1037.24, "text": " So how does your... and you make a little bit of a point in manual curation might help" }, { "start": 1037.24, "end": 1039.1, "text": " here and so on." }, { "start": 1039.1, "end": 1047.14, "text": " But what's the main force in your system driving sort of the direction that the system becomes" }, { "start": 1047.14, "end": 1048.38, "text": " an expert at?" }, { "start": 1048.38, "end": 1051.02, "text": " Because there's so many directions in math, right?" 
}, { "start": 1051.02, "end": 1054.88, "text": " It's impossible that it just becomes better, right?" }, { "start": 1054.88, "end": 1062.88, "text": " Yeah, so I mean, we took the very obvious and easy way." }, { "start": 1062.88, "end": 1066.8000000000002, "text": " Basically you have in a with a formal system, you have a library of theorems that is actually" }, { "start": 1066.8000000000002, "end": 1067.8000000000002, "text": " with it." }, { "start": 1067.8000000000002, "end": 1070.72, "text": " That's what the formal community generally working on." }, { "start": 1070.72, "end": 1072.0800000000002, "text": " This is what we call mathlib." }, { "start": 1072.0800000000002, "end": 1073.92, "text": " It's called mathlib in lean." }, { "start": 1073.92, "end": 1078.64, "text": " And there is very few exercise or Olympiad type exercise, even exercise in mathlib." }, { "start": 1078.64, "end": 1081.5600000000002, "text": " It's generally general purpose theorems, right?" }, { "start": 1081.56, "end": 1087.84, "text": " And so if you train on that data only, you're actually not that good at solving exercise" }, { "start": 1087.84, "end": 1090.36, "text": " because you haven't seen any." }, { "start": 1090.36, "end": 1095, "text": " The very easy exercise you'll be able to solve, but the somewhat hard ones not at all." }, { "start": 1095, "end": 1099.24, "text": " And so we had that mini F2F benchmark, which is made of exercise, Olympiad exercise that" }, { "start": 1099.24, "end": 1103.02, "text": " we cared about for many reasons that we can dive into." }, { "start": 1103.02, "end": 1111.24, "text": " And so we took the easy way, which is let's just formalize a bunch of statements around" }, { "start": 1111.24, "end": 1114.1200000000001, "text": " that benchmark that we care about." }, { "start": 1114.1200000000001, "end": 1119.1200000000001, "text": " And we did the most obvious thing is that we took the textbook that humans use to train" }, { "start": 1119.1200000000001, "end": 1125.84, "text": " for those competitions and formalize everything out of it." }, { "start": 1125.84, "end": 1129.76, "text": " And we didn't ask ourselves much more questions than that." }, { "start": 1129.76, "end": 1133.02, "text": " And the reason why it works is because it's a textbook." }, { "start": 1133.02, "end": 1138.04, "text": " So there is a bunch of easy examples to begin with and the difficulty can have been proved" }, { "start": 1138.04, "end": 1140.24, "text": " nicely for humans." }, { "start": 1140.24, "end": 1145.64, "text": " And so as we formalize the statements, we run our expectation loop on it." }, { "start": 1145.64, "end": 1150.64, "text": " And as you mentioned in that illustration, you get a few statements first, but you retrain" }, { "start": 1150.64, "end": 1153.8, "text": " on them to get a few more, et cetera, et cetera." }, { "start": 1153.8, "end": 1158.32, "text": " And as you do it, the way I visualize it is that you're really shifting the distribution" }, { "start": 1158.32, "end": 1163.8, "text": " of the model away from mathlib and towards mini F2F or towards the group of statements" }, { "start": 1163.8, "end": 1166.72, "text": " that you provided as a curriculum." }, { "start": 1166.72, "end": 1172.64, "text": " And so that is that creation that gives the direction." }, { "start": 1172.64, "end": 1177.44, "text": " In terms of direction, you're very right that it's a challenge." 
}, { "start": 1177.44, "end": 1182.66, "text": " Something that you can do as an example with formalize is you can do forward proving." }, { "start": 1182.66, "end": 1187.8, "text": " Instead of going backward, as you said, you take things that you know and try to compose" }, { "start": 1187.8, "end": 1192.1200000000001, "text": " them with theorems that unify to the things you know." }, { "start": 1192.1200000000001, "end": 1194.46, "text": " And you keep going forward like that." }, { "start": 1194.46, "end": 1197.96, "text": " And we've tried generating some data this way." }, { "start": 1197.96, "end": 1205.04, "text": " And that data is actually, I mean, you cannot direct it easily." }, { "start": 1205.04, "end": 1208.32, "text": " And so it goes a little bit all over the place." }, { "start": 1208.32, "end": 1216.72, "text": " And we haven't found a way to make it beneficial for targeting a benchmark in particular that" }, { "start": 1216.72, "end": 1217.72, "text": " we care about." }, { "start": 1217.72, "end": 1223.64, "text": " Do you see maybe a future where you mentioned the lack of self play, but there could be" }, { "start": 1223.64, "end": 1229.42, "text": " some sort of an agent that comes up with these intermediate statements, these these curriculum" }, { "start": 1229.42, "end": 1233.5200000000002, "text": " statements that sort of tries to guess, you know, maybe here is a statement that's kind" }, { "start": 1233.5200000000002, "end": 1238.6000000000001, "text": " of in between where you want to go and where you are currently." }, { "start": 1238.6000000000001, "end": 1245.42, "text": " This could be some sort of, I mean, I'm never sure because a lot of times when people propose" }, { "start": 1245.42, "end": 1249.2, "text": " these agents, it's like, well, you if you have that agent, you've essentially solved" }, { "start": 1249.2, "end": 1251, "text": " the problem, right?" }, { "start": 1251, "end": 1257.72, "text": " But there could be some sort of thing that replaces you the human as who has to come" }, { "start": 1257.72, "end": 1258.72, "text": " up with this curriculum." }, { "start": 1258.72, "end": 1261.6, "text": " But I guess it's a bit of a future thing." }, { "start": 1261.6, "end": 1269.68, "text": " And the other avenue where I see sorry, so I'd like to jump on this one." }, { "start": 1269.68, "end": 1273.24, "text": " Just for a second." }, { "start": 1273.24, "end": 1275.16, "text": " It is plausible that we could build a model." }, { "start": 1275.16, "end": 1278.3, "text": " I mean, it's theoretically plausible that we could build a model that creates those" }, { "start": 1278.3, "end": 1280.2, "text": " intermediate statements." }, { "start": 1280.2, "end": 1283.76, "text": " There's two challenges here is the first one is that the number of statements that we have" }, { "start": 1283.76, "end": 1285.68, "text": " is actually extremely small." }, { "start": 1285.68, "end": 1289.2, "text": " When you look at the proof data in formal math, and I didn't mention it before, right?" }, { "start": 1289.2, "end": 1291.32, "text": " It's also a good thing to mention it." }, { "start": 1291.32, "end": 1294.96, "text": " One challenge of formal math is that data is extremely scarce." }, { "start": 1294.96, "end": 1300.16, "text": " The proof data is scarce and the statement data is even scarcer." }, { "start": 1300.16, "end": 1307.76, "text": " MassLib is something like 60k, 60k statements, 60k contexts, length things." 
}, { "start": 1307.76, "end": 1310.24, "text": " And the curriculum we use is a few hundred." }, { "start": 1310.24, "end": 1315.12, "text": " And so to train the agents to try to simplify statements, the data that you have access" }, { "start": 1315.12, "end": 1322.7, "text": " to is like in existence by standards, modern language modeling standards." }, { "start": 1322.7, "end": 1325.4, "text": " So that's a really big challenge." }, { "start": 1325.4, "end": 1331.16, "text": " One thing that I think is extremely exciting, that is, again, same idea, just make it simpler," }, { "start": 1331.16, "end": 1337.16, "text": " is probably actually machine translation from informal statements to formal statements." }, { "start": 1337.16, "end": 1340.4, "text": " Try the work that we've been doing, try to harvest a lot of informal statements that" }, { "start": 1340.4, "end": 1345.6000000000001, "text": " there are many more out there and try to auto formalize them." }, { "start": 1345.6000000000001, "end": 1348.3600000000001, "text": " Formalizing a statement is actually much easier than formalizing a proof." }, { "start": 1348.3600000000001, "end": 1350.76, "text": " It's still challenging, but definitely much easier." }, { "start": 1350.76, "end": 1351.76, "text": " And no, no, no." }, { "start": 1351.76, "end": 1352.88, "text": " Sorry for jumping in." }, { "start": 1352.88, "end": 1359.6000000000001, "text": " So with respect to, yeah, I was also thinking, yeah, you could take all sort of the math" }, { "start": 1359.6000000000001, "end": 1365.88, "text": " that's out there, but yeah, that's obviously also curated by humans a little bit." }, { "start": 1365.88, "end": 1371.0800000000002, "text": " The other point of controlling things would be the language model." }, { "start": 1371.0800000000002, "end": 1375.68, "text": " There's a lot of work in prompt engineering and things like this." }, { "start": 1375.68, "end": 1380.72, "text": " Now, your language model, maybe we can go a little bit into how you train and query" }, { "start": 1380.72, "end": 1386.7, "text": " the language model, which I think might, you know, might need or might benefit from a bit" }, { "start": 1386.7, "end": 1391.4, "text": " more explanation because I was quite vague here, right?" }, { "start": 1391.4, "end": 1396.1200000000001, "text": " But essentially you have two different types of inputs that you train the language model" }, { "start": 1396.1200000000001, "end": 1397.1200000000001, "text": " on." }, { "start": 1397.1200000000001, "end": 1401.52, "text": " The one you call this proof step objective and the other one you call this proof size" }, { "start": 1401.52, "end": 1403.0400000000002, "text": " objective." }, { "start": 1403.0400000000002, "end": 1408.3200000000002, "text": " And both of them, they have a declaration and the goal." }, { "start": 1408.3200000000002, "end": 1412.74, "text": " Do you want to maybe give us a little bit, because for the declaration I was like, yeah," }, { "start": 1412.74, "end": 1415, "text": " it's kind of like the things you have access to." }, { "start": 1415, "end": 1419.2800000000002, "text": " Do you want to maybe give us a bit of insight into what these things are?" }, { "start": 1419.28, "end": 1428.3999999999999, "text": " Yeah, so if we go back to, if we think about your schema about proving backwards, so the" }, { "start": 1428.3999999999999, "end": 1430.44, "text": " goal is the current goal that you want to prove." 
}, { "start": 1430.44, "end": 1433.16, "text": " The proof step is the tactic that you want to apply." }, { "start": 1433.16, "end": 1438.2, "text": " So this is really mapping exactly the process of generating a tactic to try to simplify" }, { "start": 1438.2, "end": 1439.2, "text": " the current goal." }, { "start": 1439.2, "end": 1445.6, "text": " Sorry, the goal, so if I'm here, right, the goal would be the top thing, this one right" }, { "start": 1445.6, "end": 1452.08, "text": " here and the tactic would be one node, one link to a sort of the next node." }, { "start": 1452.08, "end": 1453.08, "text": " Okay." }, { "start": 1453.08, "end": 1454.08, "text": " To a new goal." }, { "start": 1454.08, "end": 1455.08, "text": " Yeah, exactly." }, { "start": 1455.08, "end": 1460.76, "text": " But then this could also be the new goal and then these could be the proof steps or, okay," }, { "start": 1460.76, "end": 1461.76, "text": " okay." }, { "start": 1461.76, "end": 1462.76, "text": " Yes, exactly." }, { "start": 1462.76, "end": 1468.52, "text": " In your, here the lines are the tactics and the circles are the goals." }, { "start": 1468.52, "end": 1475.56, "text": " And in Lean you actually have just one goal, the tactic goes back to another goal because" }, { "start": 1475.56, "end": 1478.8, "text": " sometimes some tactic can create multiple sub goals, but because you could say, hey," }, { "start": 1478.8, "end": 1484.48, "text": " I want to introduce that cut, the cut is kind of a mini conjecture inside a proof and, but" }, { "start": 1484.48, "end": 1486.24, "text": " Lean kind of stacks them together." }, { "start": 1486.24, "end": 1491.6, "text": " So technically speaking, there's only one node at each end of each line." }, { "start": 1491.6, "end": 1492.6, "text": " Okay." }, { "start": 1492.6, "end": 1493.6, "text": " Yeah, exactly." }, { "start": 1493.6, "end": 1497.72, "text": " The proof looks like a chain, the proof, the final proof looks like a chain." }, { "start": 1497.72, "end": 1499.24, "text": " Okay." }, { "start": 1499.24, "end": 1500.6799999999998, "text": " And the proof search looks like a tree." }, { "start": 1500.68, "end": 1506.68, "text": " And so the, the, the decal, we condition on the decal name, so the decal name is the declaration" }, { "start": 1506.68, "end": 1512.28, "text": " name and it's simply the CRM name or the exercise name." }, { "start": 1512.28, "end": 1519.96, "text": " And the, the motivation here is to provide a proxy information for the model as to what" }, { "start": 1519.96, "end": 1526.72, "text": " is the state of the formal environment at this stage, because the actual formal environment" }, { "start": 1526.72, "end": 1529.1200000000001, "text": " is gigantic." }, { "start": 1529.12, "end": 1532.32, "text": " There's no easy way to represent it in a compact way." }, { "start": 1532.32, "end": 1538.12, "text": " You have all the inputs, you have all the CRMs that have been defined in the same file" }, { "start": 1538.12, "end": 1542.4799999999998, "text": " before that very CRM, that the CRM you're trying to prove right now, you have a bunch" }, { "start": 1542.4799999999998, "end": 1543.9199999999998, "text": " of definitions, et cetera." }, { "start": 1543.9199999999998, "end": 1547.6, "text": " And so the, if you wanted to represent that to the model, it's technically challenging" }, { "start": 1547.6, "end": 1550.8, "text": " and more importantly, it's really big." 
}, { "start": 1550.8, "end": 1556.6399999999999, "text": " So instead we just give it the name of the CRM and we kind of hope that it'll provide" }, { "start": 1556.64, "end": 1563.5200000000002, "text": " signal as to, to the model as to what are the CRMs that it has access to for this one," }, { "start": 1563.5200000000002, "end": 1566.96, "text": " because it's trained, it's trained on, on, on CRMs that are close to this one and the" }, { "start": 1566.96, "end": 1569.48, "text": " names of CRMs are somewhat similar and related." }, { "start": 1569.48, "end": 1571.76, "text": " It was in the same file, et cetera, et cetera." }, { "start": 1571.76, "end": 1575.4, "text": " So it's really kind of a trick to, to try to infuse a little bit of information about" }, { "start": 1575.4, "end": 1576.4, "text": " the environment." }, { "start": 1576.4, "end": 1577.48, "text": " How can we imagine such a name?" }, { "start": 1577.48, "end": 1582.7, "text": " Is this like a human readable name or is this more like, you know, theorem two eight four" }, { "start": 1582.7, "end": 1584.72, "text": " five point eight?" }, { "start": 1584.72, "end": 1597.76, "text": " No, no, it's somewhat readable for the, for the experts at least it's in the floor smaller" }, { "start": 1597.76, "end": 1600.76, "text": " than floor positive." }, { "start": 1600.76, "end": 1602.44, "text": " Some kind of stuff like that." }, { "start": 1602.44, "end": 1605.76, "text": " It's, it's, it's a little bit compact, but it's still readable." }, { "start": 1605.76, "end": 1609.88, "text": " And for the exercise that we use, it's actually just the name of the competition, the gear" }, { "start": 1609.88, "end": 1611.8, "text": " and the exercise number." }, { "start": 1611.8, "end": 1615.6399999999999, "text": " And the proof step that would be the tactic itself." }, { "start": 1615.6399999999999, "end": 1618.28, "text": " How is a tactic kind of described?" }, { "start": 1618.28, "end": 1624.12, "text": " Is this an index into some bucket or is it also a piece of text or?" }, { "start": 1624.12, "end": 1625.12, "text": " Yeah." }, { "start": 1625.12, "end": 1630.12, "text": " So if you're scrolling the appendix, well, I describe it." }, { "start": 1630.12, "end": 1633.6, "text": " The tactic is really a function call." }, { "start": 1633.6, "end": 1635.96, "text": " You're calling the tactic, which is a meta program." }, { "start": 1635.96, "end": 1640.84, "text": " So if you, yeah, as an example, this one apply tactic is very trivial." }, { "start": 1640.84, "end": 1646.12, "text": " It just says, try to apply that serum to the current goal, but you have much more advanced" }, { "start": 1646.12, "end": 1647.32, "text": " tactics." }, { "start": 1647.32, "end": 1649, "text": " And so that tactic takes an argument." }, { "start": 1649, "end": 1654, "text": " So you not only have to pick your tactic, there's only a few of those, but you actually" }, { "start": 1654, "end": 1655.36, "text": " have to provide an argument." }, { "start": 1655.36, "end": 1657.6399999999999, "text": " So here it's a serum name." }, { "start": 1657.6399999999999, "end": 1659.48, "text": " There's many more, but still finite." }, { "start": 1659.48, "end": 1660.48, "text": " This here is a theorem." }, { "start": 1660.48, "end": 1664.52, "text": " And then you will, oh yeah, here you go." }, { "start": 1664.52, "end": 1665.52, "text": " Yeah." }, { "start": 1665.52, "end": 1666.52, "text": " Okay." 
}, { "start": 1666.52, "end": 1667.52, "text": " Not prime." }, { "start": 1667.52, "end": 1668.52, "text": " I see." }, { "start": 1668.52, "end": 1669.52, "text": " Yeah." }, { "start": 1669.52, "end": 1671.08, "text": " So that's a typical theorem." }, { "start": 1671.08, "end": 1675.72, "text": " So that's the decoration name that we condition on if we wanted to try to prove it." }, { "start": 1675.72, "end": 1679.24, "text": " And you have to apply it with here." }, { "start": 1679.24, "end": 1683.68, "text": " It's applying the serum by providing a first argument to the serum and then looking at" }, { "start": 1683.68, "end": 1685.32, "text": " the one side only." }, { "start": 1685.32, "end": 1691.16, "text": " And so all of that kind of explodes the action space, obviously." }, { "start": 1691.16, "end": 1694.8, "text": " And the action space is actually infinite because some tactic has arguments, mathematical" }, { "start": 1694.8, "end": 1696.08, "text": " terms." }, { "start": 1696.08, "end": 1701.32, "text": " And those mathematical terms, they don't necessarily exist in the context." }, { "start": 1701.32, "end": 1708.84, "text": " If you're trying to prove an existential statement, often the easiest way is to provide a witness." }, { "start": 1708.84, "end": 1711.6, "text": " The witness is not generally in the statements." }, { "start": 1711.6, "end": 1713.72, "text": " And so you have to generate it." }, { "start": 1713.72, "end": 1716.8, "text": " And so that's the reason why the action space is actually infinite." }, { "start": 1716.8, "end": 1725.28, "text": " And that's the major difference between neural proving techniques and the kind of classical" }, { "start": 1725.28, "end": 1728.3999999999999, "text": " theorem proving automated reasoning techniques." }, { "start": 1728.3999999999999, "end": 1732.6, "text": " They are extremely powerful, but there's one thing they cannot do." }, { "start": 1732.6, "end": 1735.76, "text": " It's generating exogenous mathematical terms." }, { "start": 1735.76, "end": 1742.28, "text": " And you would, in this case, your language model would directly suggest you such tactics" }, { "start": 1742.28, "end": 1743.28, "text": " to apply." }, { "start": 1743.28, "end": 1749.92, "text": " So you would sample from the language model and then suggest a bunch of things." }, { "start": 1749.92, "end": 1758.04, "text": " The language model generates the full string here, apply, netprime, hpmp." }, { "start": 1758.04, "end": 1764.68, "text": " And so we generate a number of those that gives us an approximation of a potential interesting" }, { "start": 1764.68, "end": 1766.68, "text": " action space to explore." }, { "start": 1766.68, "end": 1768.52, "text": " And on top of that, we run a proof search." }, { "start": 1768.52, "end": 1771.0800000000002, "text": " How does the proof step come into this?" }, { "start": 1771.0800000000002, "end": 1772.48, "text": " Because I was a little bit..." }, { "start": 1772.48, "end": 1777.98, "text": " You already have some sort of a log likelihood estimation, I would guess, for the things" }, { "start": 1777.98, "end": 1779.0600000000002, "text": " that you sample." }, { "start": 1779.06, "end": 1785.48, "text": " But then you also have this value, some sort of a value that you assign to how long you" }, { "start": 1785.48, "end": 1787.8799999999999, "text": " think a proof is going to be." }, { "start": 1787.8799999999999, "end": 1788.8799999999999, "text": " Yeah." 
}, { "start": 1788.8799999999999, "end": 1795.6, "text": " So the proof size objective takes the declaration name and the current goal and try to estimate" }, { "start": 1795.6, "end": 1799.36, "text": " the size of the proof for that goal." }, { "start": 1799.36, "end": 1803.36, "text": " And that's really just an instance of a value function." }, { "start": 1803.36, "end": 1805.76, "text": " That's the one that we've used here." }, { "start": 1805.76, "end": 1809.66, "text": " And it really helps guiding the proof search." }, { "start": 1809.66, "end": 1814.12, "text": " When you don't have the value function yet, so in your review, you mentioned that we bootstrap" }, { "start": 1814.12, "end": 1818.96, "text": " from theta zero, which is the first model that is only trained on proof steps." }, { "start": 1818.96, "end": 1825.36, "text": " When we don't have a value function to available, what we do is that we do the same proof search," }, { "start": 1825.36, "end": 1828.4, "text": " but we prioritize by log prob, as you said." }, { "start": 1828.4, "end": 1835.2, "text": " But what we use is the cumulative log prob that took for us to apply the different tactics" }, { "start": 1835.2, "end": 1838.1200000000001, "text": " all the way to the current goal, which is another flavor of a value function." }, { "start": 1838.1200000000001, "end": 1839.88, "text": " A bit of a beam search type." }, { "start": 1839.88, "end": 1840.88, "text": " That is a..." }, { "start": 1840.88, "end": 1841.88, "text": " Yeah." }, { "start": 1841.88, "end": 1846.1200000000001, "text": " Yeah, it's a beam tree depth search." }, { "start": 1846.1200000000001, "end": 1847.1200000000001, "text": " Okay." }, { "start": 1847.1200000000001, "end": 1853.0800000000002, "text": " And, okay, so I think we got a good idea of how the search itself works." }, { "start": 1853.0800000000002, "end": 1856.96, "text": " And you keep going until you prove statements." }, { "start": 1856.96, "end": 1860.68, "text": " And then you do this expert iteration steps, right?" }, { "start": 1860.68, "end": 1865.6000000000001, "text": " Which essentially consists of you try to prove new things, you add them back to the data" }, { "start": 1865.6000000000001, "end": 1868.04, "text": " set, and you train a new model on it." }, { "start": 1868.04, "end": 1873.48, "text": " What I was kind of surprised by is that you always train from this sort of this initial" }, { "start": 1873.48, "end": 1875.64, "text": " model that you have right here." }, { "start": 1875.64, "end": 1879.48, "text": " So you create your new data sets and you always train from that." }, { "start": 1879.48, "end": 1886.24, "text": " What prevents you or what's the reasoning behind not always just continuing to train" }, { "start": 1886.24, "end": 1888.8, "text": " from the most recent model?" }, { "start": 1888.8, "end": 1893.72, "text": " Yeah, there's two motivations, two rational for that." }, { "start": 1893.72, "end": 1899.2, "text": " The first one is that it makes controlling for overfit much easier because you're really" }, { "start": 1899.2, "end": 1902.84, "text": " training from scratch in a sense." }, { "start": 1902.84, "end": 1906.56, "text": " And so you control overfit on your validation set much more cleanly." 
}, { "start": 1906.56, "end": 1912.52, "text": " If you iteratively train the behavior of your validation loss, it has a tendency to be quite" }, { "start": 1912.52, "end": 1917.44, "text": " erratic and unpredictable, which makes controlling for overfit much less obvious." }, { "start": 1917.44, "end": 1922.76, "text": " So that's the one thing, it's for basically scientific convenience in a sense." }, { "start": 1922.76, "end": 1927.56, "text": " The other thing is that it gives us an opportunity to duplicate aggressively the data." }, { "start": 1927.56, "end": 1931.72, "text": " The reason why it's important is because, to be honest, to generate those proofs, we" }, { "start": 1931.72, "end": 1936.24, "text": " sample proof search a lot." }, { "start": 1936.24, "end": 1942.44, "text": " There are some easy statements, we can find thousands of different proofs for it." }, { "start": 1942.44, "end": 1949.1200000000001, "text": " And so the goal is to retake all those proofs that we found so far and duplicate as much" }, { "start": 1949.1200000000001, "end": 1955.72, "text": " out of it to prevent nefarious overfitting behaviors in the training." }, { "start": 1955.72, "end": 1959.4, "text": " So that's really the two main motivations for training from scratch." }, { "start": 1959.4, "end": 1963.3, "text": " Again, formal math, data is scarce." }, { "start": 1963.3, "end": 1968.24, "text": " So those data sets are not that big, even when we generate a lot of data." }, { "start": 1968.24, "end": 1970.6000000000001, "text": " And so training is not taking that much time." }, { "start": 1970.6, "end": 1976.6399999999999, "text": " So it's actually really fine to train from scratch in each iteration." }, { "start": 1976.6399999999999, "end": 1981.28, "text": " One second." }, { "start": 1981.28, "end": 1988.8, "text": " So you say you have easy statements, you're able to find a lot of proofs for them, you" }, { "start": 1988.8, "end": 1992.24, "text": " have hard statements, and that's difficult to reach." }, { "start": 1992.24, "end": 1996.76, "text": " But you still said at the beginning, all the statements you are attempting to prove, you" }, { "start": 1996.76, "end": 1999.74, "text": " essentially already know that they're provable, right?" }, { "start": 1999.74, "end": 2006.36, "text": " And even the ones in the curriculum, the ones you take from the textbook, I think textbooks," }, { "start": 2006.36, "end": 2013.44, "text": " they don't try to trick you with like exercises that ultimately don't really work out." }, { "start": 2013.44, "end": 2020.88, "text": " What would change here if you were to go about proving something you don't know if it's even" }, { "start": 2020.88, "end": 2021.88, "text": " provable, right?" }, { "start": 2021.88, "end": 2025.52, "text": " Obviously, you also don't know the statements in between that might lead up to that." }, { "start": 2025.52, "end": 2032.96, "text": " Like how would that look like to prove something that isn't proven yet?" }, { "start": 2032.96, "end": 2038.2, "text": " Okay, so I think there's two questions there." }, { "start": 2038.2, "end": 2044.2, "text": " What would happen if you inject statements that are potentially false or even undecidable" }, { "start": 2044.2, "end": 2047.12, "text": " in the mix?" }, { "start": 2047.12, "end": 2052.88, "text": " And what would it take to try to prove something that we don't really know is provable yet?" 
}, { "start": 2052.88, "end": 2056.36, "text": " I think that's at least the way I understood the question." }, { "start": 2056.36, "end": 2063.44, "text": " If we inject statements that are not provable, that are false or undecidable, same difference" }, { "start": 2063.44, "end": 2070.08, "text": " to us, at least in the context of one formal system, what happens is that nothing happens." }, { "start": 2070.08, "end": 2071.2400000000002, "text": " There's no data generated." }, { "start": 2071.2400000000002, "end": 2072.7200000000003, "text": " So you're just wasting compute." }, { "start": 2072.7200000000003, "end": 2075.6400000000003, "text": " You're really just wasting compute on the statements." }, { "start": 2075.6400000000003, "end": 2081.4, "text": " And that's going to be a challenge if we think back about automatizing the generation of" }, { "start": 2081.4, "end": 2085.2400000000002, "text": " statements, that's going to be a noisy imperfect process." }, { "start": 2085.2400000000002, "end": 2092.7200000000003, "text": " And so whether it's going to be useful for that expectation process is really a function" }, { "start": 2092.7200000000003, "end": 2097.52, "text": " of the number of statements that are actually provable versus unprovable." }, { "start": 2097.52, "end": 2102.96, "text": " If your automated translation system generates one out of 20 statements that is provable" }, { "start": 2102.96, "end": 2109.92, "text": " and 19 are unprovable, you're just going to be wasting a lot of computes trying to prove" }, { "start": 2109.92, "end": 2112.16, "text": " something that's not going to generate any data for you." }, { "start": 2112.16, "end": 2117.88, "text": " So that's going to be a challenge there if we want to apply machine translation." }, { "start": 2117.88, "end": 2121.76, "text": " And then proving something." }, { "start": 2121.76, "end": 2124.16, "text": " What do you mean by proving something that's not always provable?" }, { "start": 2124.16, "end": 2126.2000000000003, "text": " Is it like trying to prove a conjecture?" }, { "start": 2126.2000000000003, "end": 2132.12, "text": " You want to train or you want to solve a conjecture that exists, but no one knows." }, { "start": 2132.12, "end": 2136.54, "text": " We think it's provable, which we do with most conjectures, but no one knows." }, { "start": 2136.54, "end": 2142.34, "text": " And now it's up to you and someone comes to you and says, well, let's use your system." }, { "start": 2142.34, "end": 2143.5, "text": " How would you go about that?" }, { "start": 2143.5, "end": 2145.08, "text": " How would you build the curriculum?" }, { "start": 2145.08, "end": 2157.4, "text": " What would change maybe in the data collection?" }, { "start": 2157.4, "end": 2162.96, "text": " There are some conjectures that we can hope do not require inventing new math." }, { "start": 2162.96, "end": 2171.32, "text": " So there may be some conjecture that are eluding humans despite being very close to us." }, { "start": 2171.32, "end": 2174.08, "text": " It's just one trick away." }, { "start": 2174.08, "end": 2181.68, "text": " And so for such conjecture and imagining a system that is much more powerful than what" }, { "start": 2181.68, "end": 2187.36, "text": " we have today, let's say it beats human at competitions, then you could just take your" }, { "start": 2187.36, "end": 2191.92, "text": " best system, take the conjecture and search for a lot of time." 
}, { "start": 2191.92, "end": 2198.2400000000002, "text": " And you maybe have a hope of finding a proof that has eluded humans because it was really" }, { "start": 2198.2400000000002, "end": 2200.36, "text": " tricky but you didn't need new theorems." }, { "start": 2200.36, "end": 2203.64, "text": " You didn't need new definitions." }, { "start": 2203.64, "end": 2208.42, "text": " And for most of conjectures that are out there, there is good reason to believe, at least" }, { "start": 2208.42, "end": 2212.84, "text": " if we look at this directly, that they're going to require new mathematical concepts" }, { "start": 2212.84, "end": 2215.7200000000003, "text": " to be proved." }, { "start": 2215.7200000000003, "end": 2220.82, "text": " And so that exercise, which is the mathematician's exercise of defining new concepts, is something" }, { "start": 2220.82, "end": 2226.6800000000003, "text": " that we're not even considering yet as a problem." }, { "start": 2226.6800000000003, "end": 2228.52, "text": " It's a whole different problem." }, { "start": 2228.52, "end": 2237.2000000000003, "text": " And to be honest, I think that it's a task that will probably more likely happen in the" }, { "start": 2237.2000000000003, "end": 2242.1400000000003, "text": " future in the informal realm more than in the formal realm." }, { "start": 2242.1400000000003, "end": 2247.48, "text": " It feels like the informal realm seems to be a better space to try to come up with new" }, { "start": 2247.48, "end": 2252.16, "text": " concepts and maybe then we have good data formalization and then we can use a formal" }, { "start": 2252.16, "end": 2254.68, "text": " prover to prove all the things that we conjectured, etc." }, { "start": 2254.68, "end": 2258, "text": " But that's something that is really far away from us." }, { "start": 2258, "end": 2264.28, "text": " You could sort of abuse the language models maybe to go a step, let's say, further." }, { "start": 2264.28, "end": 2268.4, "text": " You always have your declaration and your goal and you generate the proof step." }, { "start": 2268.4, "end": 2276.04, "text": " Could you also maybe just input a declaration of a theorem name that you think might conceivably" }, { "start": 2276.04, "end": 2280.64, "text": " exist and then let the system come up with a goal by itself even?" }, { "start": 2280.64, "end": 2287.8, "text": " So like even the statement to be proven." }, { "start": 2287.8, "end": 2288.8, "text": " We've tried that." }, { "start": 2288.8, "end": 2289.8, "text": " It definitely works." }, { "start": 2289.8, "end": 2297.88, "text": " You can let the model generate goals that are valid and that can then prove." }, { "start": 2297.88, "end": 2305.6, "text": " You can even orient, we were talking about how do you orient your work towards stuff" }, { "start": 2305.6, "end": 2306.6, "text": " that interests you." }, { "start": 2306.6, "end": 2312.16, "text": " You can definitely, in that case, you can definitely prompt the model where you're interested" }, { "start": 2312.16, "end": 2313.88, "text": " to explore by the declaration name." }, { "start": 2313.88, "end": 2318.68, "text": " You can make up kind of funky names that look like analysis or funky names that look like" }, { "start": 2318.68, "end": 2323.08, "text": " group theory or even funky names that look like math Olympiads." }, { "start": 2323.08, "end": 2329.68, "text": " The model will definitely and gladly conjecture statements." 
}, { "start": 2329.68, "end": 2335.64, "text": " It's actually conjecturing all the time in a way that is not leverageable, unfortunately," }, { "start": 2335.64, "end": 2337.3599999999997, "text": " when we do proof search." }, { "start": 2337.3599999999997, "end": 2343.04, "text": " When we do proof search, the way we refer to theorems that exist is by declaration name," }, { "start": 2343.04, "end": 2346.7999999999997, "text": " not by the statement themselves in Lean at least." }, { "start": 2346.7999999999997, "end": 2353.08, "text": " All the time, every proof search, the model will just invent a theorem by name and the" }, { "start": 2353.08, "end": 2354.96, "text": " name look really legit." }, { "start": 2354.96, "end": 2361.44, "text": " There should be math limb actually because it's just a missing API because the name," }, { "start": 2361.44, "end": 2366.56, "text": " it's generally very interpretable, but the model sync should be there." }, { "start": 2366.56, "end": 2372.2400000000002, "text": " That kind of conjecturing behavior really exists in the model today and is probably" }, { "start": 2372.2400000000002, "end": 2373.84, "text": " leverageable in interesting ways." }, { "start": 2373.84, "end": 2380.32, "text": " It's a bit crazy because that is really how I think mathematicians go about proving something." }, { "start": 2380.32, "end": 2385.56, "text": " They say they're at some statement and they say, well, here I need some inequality that" }, { "start": 2385.56, "end": 2389.8, "text": " relates these two things to each other." }, { "start": 2389.8, "end": 2394.0800000000004, "text": " Essentially that is exactly coming up with a name of a theorem like this." }, { "start": 2394.0800000000004, "end": 2404.4, "text": " The name would be something like, this greater than this or it's crazy." }, { "start": 2404.4, "end": 2411.32, "text": " We actually can extract from math limb what we call the type elaboration." }, { "start": 2411.32, "end": 2416.2400000000002, "text": " Type elaboration is to take a name of the theorem and you infer the type." }, { "start": 2416.2400000000002, "end": 2421.88, "text": " The type is in type theory, the type is the statement itself." }, { "start": 2421.88, "end": 2423.84, "text": " We can train models and type elaboration." }, { "start": 2423.84, "end": 2427.7200000000003, "text": " We could have them conjecture names while we proof search and then take the name and" }, { "start": 2427.7200000000003, "end": 2429.2000000000003, "text": " try to type elaborate them." }, { "start": 2429.2000000000003, "end": 2431.92, "text": " That gives us a statement and then try to prove that statement." }, { "start": 2431.92, "end": 2432.92, "text": " That's something we haven't explored." }, { "start": 2432.92, "end": 2436.28, "text": " It sounds crazy." }, { "start": 2436.28, "end": 2443.16, "text": " Given the directions of these automated systems that can essentially generate data for themselves," }, { "start": 2443.16, "end": 2448.12, "text": " if you introduce something like this, I'm pretty convinced this can get us a whole lot" }, { "start": 2448.12, "end": 2449.12, "text": " further." }, { "start": 2449.12, "end": 2453.8, "text": " How fast have these Go and Chess algorithms become?" }, { "start": 2453.8, "end": 2459.2400000000002, "text": " They've become human and one month later they were totally superhuman." }, { "start": 2459.24, "end": 2464.4799999999996, "text": " It happened in an instant, which is crazy." 
}, { "start": 2464.4799999999996, "end": 2469.56, "text": " My question would be a little bit, this is a machine, the formal machine, you have the" }, { "start": 2469.56, "end": 2470.8399999999997, "text": " humans on the other side." }, { "start": 2470.8399999999997, "end": 2476.6, "text": " Is there a good way of the two working together?" }, { "start": 2476.6, "end": 2478.8799999999997, "text": " It seems like they have complementary skills." }, { "start": 2478.8799999999997, "end": 2483.4799999999996, "text": " One can search and try to prove things very quickly." }, { "start": 2483.48, "end": 2489.72, "text": " The other one maybe has more of that idea, like introducing new math and so on." }, { "start": 2489.72, "end": 2495.8, "text": " Is there a tight way where the two can work together or will it always be in the, well," }, { "start": 2495.8, "end": 2500.08, "text": " we have to translate from one domain to the other?" }, { "start": 2500.08, "end": 2505.4, "text": " Definitely a way." }, { "start": 2505.4, "end": 2510.8, "text": " We actually released our early models, it was almost a year ago, to the Lean community" }, { "start": 2510.8, "end": 2516.2400000000002, "text": " through a tactic that is called GPTF and so Formalizer could say GPTF and GPTF would answer" }, { "start": 2516.2400000000002, "end": 2522.28, "text": " with suggestions of things to try." }, { "start": 2522.28, "end": 2528.04, "text": " It's broken and clunky in many ways and there's a technical challenge, which is that the mass" }, { "start": 2528.04, "end": 2530.36, "text": " library advances every day." }, { "start": 2530.36, "end": 2536.7000000000003, "text": " It's the models are easy to, they can rot quite rapidly." }, { "start": 2536.7, "end": 2540.8799999999997, "text": " For research purposes, it's very convenient for us to just say for the next three months," }, { "start": 2540.8799999999997, "end": 2545, "text": " we're going to work on that commit and just not look at what's happening out there." }, { "start": 2545, "end": 2549.72, "text": " But yet if you want to provide value to the community, you have to stay fresh, which is" }, { "start": 2549.72, "end": 2553.08, "text": " more of an engineering challenge than anything else." }, { "start": 2553.08, "end": 2558.56, "text": " But it's definitely a plan to provide our models to the community." }, { "start": 2558.56, "end": 2563.72, "text": " To be honest, anybody working on formal math and ML, think about that, that just makes" }, { "start": 2563.72, "end": 2565.52, "text": " sense." }, { "start": 2565.52, "end": 2569.24, "text": " Because formalization is so, it's not that hard, but it's time consuming." }, { "start": 2569.24, "end": 2576.36, "text": " So if our models can speed up formalization by another magnitude, that would be just tremendous." }, { "start": 2576.36, "end": 2582.6, "text": " Right there, there's already a very nice symbiosis, as you say, because if we speed up formalization" }, { "start": 2582.6, "end": 2590.44, "text": " by 10x or by 2x, even by 2x, people will formalize much more stuff and we'll get much more data" }, { "start": 2590.44, "end": 2592, "text": " and we'll get better." }, { "start": 2592, "end": 2597.2, "text": " It's a loop that goes through actually people committing stuff to Mathlib and us injecting" }, { "start": 2597.2, "end": 2598.2, "text": " it back eventually." }, { "start": 2598.2, "end": 2602.24, "text": " So it's kind of a long, very long loop." 
}, { "start": 2602.24, "end": 2605.4, "text": " It's a loop that we plan to try to set up." }, { "start": 2605.4, "end": 2612.96, "text": " Yeah, I mean, I think that would be sort of the best case outcome right here, that there" }, { "start": 2612.96, "end": 2619.04, "text": " is like the symbiosis of just the machine helping the humans and so on, before it eventually" }, { "start": 2619.04, "end": 2622.36, "text": " will outperform them and make mathematicians useless." }, { "start": 2622.36, "end": 2628.68, "text": " Oh yeah, we're far away from that anyway." }, { "start": 2628.68, "end": 2631.4, "text": " Maybe last technical question from my side." }, { "start": 2631.4, "end": 2634.8, "text": " It seems like in such an iteration process, you said, for example, you know, we can be" }, { "start": 2634.8, "end": 2638.92, "text": " easy statements, we can find thousands of proofs for them and you do some deduplication," }, { "start": 2638.92, "end": 2641.88, "text": " right, to sort of reduce the number of proofs." }, { "start": 2641.88, "end": 2646.64, "text": " If two proofs are equivalent, you take the shorter one, which is very sensible." }, { "start": 2646.64, "end": 2653.6, "text": " But still, how do you avoid that most data that you add back to the data set is kind" }, { "start": 2653.6, "end": 2654.92, "text": " of useless?" }, { "start": 2654.92, "end": 2662.6, "text": " Because given like three basic facts, a mathematician can probably prove 16 things, right?" }, { "start": 2662.6, "end": 2668.2799999999997, "text": " And only very few of them are going to be valuable to advance towards my ultimate goals." }, { "start": 2668.2799999999997, "end": 2674.46, "text": " Like how do you make sure that what you add back to the data set actually has some sort" }, { "start": 2674.46, "end": 2682.7200000000003, "text": " of value to the expert iteration?" }, { "start": 2682.7200000000003, "end": 2690.48, "text": " So the explosion of statements and proof that goes into a lot of noisy and uninteresting" }, { "start": 2690.48, "end": 2693.06, "text": " stuff generally comes when you do forward proving." }, { "start": 2693.06, "end": 2695.92, "text": " If you do backward proving, you're really bounded by the statements you're trying to" }, { "start": 2695.92, "end": 2696.92, "text": " prove." }, { "start": 2696.92, "end": 2703.2400000000002, "text": " So you might find thousands different proofs for something easy and all the thousands vary" }, { "start": 2703.24, "end": 2708.68, "text": " just because the model decided to name a variable differently and so they're not that interesting." }, { "start": 2708.68, "end": 2714.6, "text": " And there we have much more work to do into having smarter deduplication." }, { "start": 2714.6, "end": 2722.66, "text": " But really, in a sense, because that's the main advantage of working on formal math," }, { "start": 2722.66, "end": 2728.62, "text": " because that data has been verified by the formal system, we know it's legit." }, { "start": 2728.62, "end": 2735.88, "text": " It's one key massive advantage that we have to explore interesting research ideas compared" }, { "start": 2735.88, "end": 2741.8399999999997, "text": " to other domains is that we can lean on that verifier to really make sure that we only" }, { "start": 2741.8399999999997, "end": 2748.24, "text": " use legit data, even if it's the model that generated it." }, { "start": 2748.24, "end": 2751.4, "text": " And I think that's key here." 
}, { "start": 2751.4, "end": 2759.92, "text": " And generally speaking, empirically, it's always felt like the training, basically gradient" }, { "start": 2759.92, "end": 2766, "text": " descent is about compression and the training process is actually good at sifting through" }, { "start": 2766, "end": 2771.2400000000002, "text": " repetitive, not necessarily repetitive, but somewhat similar data." }, { "start": 2771.2400000000002, "end": 2775.32, "text": " And so having a lot of different proofs is actually generally beneficial." }, { "start": 2775.32, "end": 2783.48, "text": " I guess the story of deep learning is that the more the better, whatever it is." }, { "start": 2783.48, "end": 2790.56, "text": " I've not gone too much into the results other than saying the expert iteration obviously" }, { "start": 2790.56, "end": 2796.2000000000003, "text": " helps you to prove much harder statements compared to just the solver, whether you adjust" }, { "start": 2796.2000000000003, "end": 2797.5800000000004, "text": " for a computer or not." }, { "start": 2797.5800000000004, "end": 2805.28, "text": " It's also interesting that the larger models, whenever you scale up stuff, essentially," }, { "start": 2805.28, "end": 2807.42, "text": " you get better." }, { "start": 2807.42, "end": 2812.1600000000003, "text": " Is there anything in the experimental results that maybe I haven't touched on that you would" }, { "start": 2812.1600000000003, "end": 2815.88, "text": " like to highlight specifically?" }, { "start": 2815.88, "end": 2824.36, "text": " Well, I think you really covered it well." }, { "start": 2824.36, "end": 2828.5600000000004, "text": " One result that I think you almost touched on, one question, and that is unanswered in" }, { "start": 2828.5600000000004, "end": 2834.48, "text": " the paper, is we do include the synthetic inequalities in the final experimental setup" }, { "start": 2834.48, "end": 2836.88, "text": " to target Mini F2F." }, { "start": 2836.88, "end": 2843.2400000000002, "text": " And actually, I've run the ablation of that and they don't help that much on Mini F2F." }, { "start": 2843.2400000000002, "end": 2847, "text": " I mean, it's not that much that surprising." }, { "start": 2847, "end": 2852.16, "text": " So it's really, if you remove them and plot the curves against Mini F2F, you really get" }, { "start": 2852.16, "end": 2857.64, "text": " somewhat sensibly similar stuff." }, { "start": 2857.64, "end": 2862.16, "text": " There is a few inequalities that have been solved that are challenging." }, { "start": 2862.16, "end": 2867.12, "text": " And it's always a challenge because the graph tells you that it's roughly the same." }, { "start": 2867.12, "end": 2871.64, "text": " But then when you look at the proof, you feel like it's been learned through the curriculum" }, { "start": 2871.64, "end": 2873.3999999999996, "text": " on synthetic inequalities." }, { "start": 2873.3999999999996, "end": 2876.7599999999998, "text": " So that's the reason why we kind of kept it here." }, { "start": 2876.7599999999998, "end": 2881.92, "text": " And I think it does unlock a few problems, but it's kind of a few problems at the margin." }, { "start": 2881.92, "end": 2886.64, "text": " So it's hard to make sure by just looking at averages." 
}, { "start": 2886.64, "end": 2893.72, "text": " And one interesting thing, of course, is as you say, you scale your compute, whether you" }, { "start": 2893.72, "end": 2898.44, "text": " scale in model size or you scale in number of atoms and you scale in depth of search," }, { "start": 2898.44, "end": 2899.44, "text": " you always get better." }, { "start": 2899.44, "end": 2905.8799999999997, "text": " It really seems to be, and I mean, it's true of most of recent deep learning, there really" }, { "start": 2905.8799999999997, "end": 2914.3599999999997, "text": " seems to be performance being really a function of computes that you efficiently pour into" }, { "start": 2914.36, "end": 2917.6800000000003, "text": " the system." }, { "start": 2917.6800000000003, "end": 2924.2000000000003, "text": " Though we've been very surprised many times that model size scaling is hard to leverage." }, { "start": 2924.2000000000003, "end": 2928.7200000000003, "text": " We know those larger models are so much smarter when you interact with them directly." }, { "start": 2928.7200000000003, "end": 2934.76, "text": " You ask questions with GPT-3, it's qualitatively better than GPT-2, right?" }, { "start": 2934.76, "end": 2939.32, "text": " And here we are at the GPT-1 or 2 kind of size." }, { "start": 2939.32, "end": 2944.84, "text": " And so common wisdom would say GPT-1 or 2, just dumb, right?" }, { "start": 2944.84, "end": 2949.36, "text": " So why not use GPT-3 size because we're talking about math." }, { "start": 2949.36, "end": 2956.6400000000003, "text": " And really what we've seen empirically and that's probably and potentially because of" }, { "start": 2956.6400000000003, "end": 2961.44, "text": " bottlenecks in our setup that we haven't yet correctly identified, is that you don't need" }, { "start": 2961.44, "end": 2965.1200000000003, "text": " to have that big of a model to be efficient." }, { "start": 2965.12, "end": 2971.2, "text": " It's actually detrimental to scale the model size because then your proof search becomes" }, { "start": 2971.2, "end": 2974.24, "text": " much more compute intensive." }, { "start": 2974.24, "end": 2979, "text": " And in terms of Flop's allocation, it's much more efficient to sample many more times from" }, { "start": 2979, "end": 2981, "text": " a smaller models." }, { "start": 2981, "end": 2982.3199999999997, "text": " It tells something quite interesting." }, { "start": 2982.3199999999997, "end": 2991, "text": " It tells that the smaller model is basically is not completely, it's not much less smart" }, { "start": 2991, "end": 2992, "text": " than a larger model." }, { "start": 2992, "end": 2995.92, "text": " It's just that the distribution is not as crisp." }, { "start": 2995.92, "end": 3000.44, "text": " And here because we have the verifier and we can sample many times, we can choose the" }, { "start": 3000.44, "end": 3004.48, "text": " good samples out of a small model by trying many times." }, { "start": 3004.48, "end": 3005.48, "text": " Maybe that becomes..." }, { "start": 3005.48, "end": 3006.48, "text": " It's only because we have a verifier." }, { "start": 3006.48, "end": 3010, "text": "... go to like more like really hard math statements." }, { "start": 3010, "end": 3016, "text": " Maybe at some point you really need sort of the large models, but who knows?" }, { "start": 3016, "end": 3023.76, "text": " Was there... I'm a bit interested also in the process of the research itself." 
}, { "start": 3023.76, "end": 3029.16, "text": " Seeing a final paper is always really nice and cool and wow, you get to... your model" }, { "start": 3029.16, "end": 3030.88, "text": " does all this thing." }, { "start": 3030.88, "end": 3036.28, "text": " Was there particular low points during the research as well, like particular moments" }, { "start": 3036.28, "end": 3041.8, "text": " where you think, this isn't going to work out after all or things like this?" }, { "start": 3041.8, "end": 3047.0800000000004, "text": " Maybe any you would like to share, maybe so that other people..." }, { "start": 3047.0800000000004, "end": 3056.36, "text": " It helps to identify because I think most people find themselves in spots like that." }, { "start": 3056.36, "end": 3061.96, "text": " Yes, definitely." }, { "start": 3061.96, "end": 3063.92, "text": " To be honest, I've been quite..." }, { "start": 3063.92, "end": 3067.96, "text": " We've been quite lucky with that project in the sense that there's been some low points," }, { "start": 3067.96, "end": 3075.96, "text": " but at any point of time, looking back three months in the past, we always felt like we" }, { "start": 3075.96, "end": 3082.96, "text": " had made good motivating progress over those three months." }, { "start": 3082.96, "end": 3086.7200000000003, "text": " But it's obviously been a lot of struggles at many times." }, { "start": 3086.7200000000003, "end": 3093.88, "text": " I think research, at least the way I see it, is a lot about struggling for quite some time" }, { "start": 3093.88, "end": 3094.88, "text": " on some problems." }, { "start": 3094.88, "end": 3099.44, "text": " There's a reason why you really want to care about the problem you're working on to be" }, { "start": 3099.44, "end": 3100.84, "text": " able to go through that struggle." }, { "start": 3100.84, "end": 3103.32, "text": " It's actually the same as a startup in a sense." }, { "start": 3103.32, "end": 3108.1600000000003, "text": " You really have to care enough to be able to go through the struggle." }, { "start": 3108.1600000000003, "end": 3113.6800000000003, "text": " To give you an idea, I started working alone." }, { "start": 3113.6800000000003, "end": 3118.48, "text": " There's no multiple people working on the project with me, but when I started, I really" }, { "start": 3118.48, "end": 3124.92, "text": " took a language model and I took a data set of tactics that I exported from..." }, { "start": 3124.92, "end": 3127.42, "text": " It was Metamask at the time." }, { "start": 3127.42, "end": 3132, "text": " Nobody had any idea whether a language model was capable of generating a tactic because" }, { "start": 3132, "end": 3136.28, "text": " the syntax was so precise when you're talking about interacting with the formal system." }, { "start": 3136.28, "end": 3143.08, "text": " There were no code generation results at the time." }, { "start": 3143.08, "end": 3149.6, "text": " It really was an open question whether a language model is good enough to generate synthetically" }, { "start": 3149.6, "end": 3152.52, "text": " formal sentences in a sense." }, { "start": 3152.52, "end": 3155.7599999999998, "text": " The first win was really that." }, { "start": 3155.7599999999998, "end": 3160.42, "text": " Not only you train your model and start sampling and you just look at your sequence accuracy" }, { "start": 3160.42, "end": 3163, "text": " and you see that it's not zero." 
}, { "start": 3163, "end": 3167.2799999999997, "text": " Right there, it doesn't prove anything and it's far from being able to prove anything," }, { "start": 3167.2799999999997, "end": 3168.2799999999997, "text": " but it's a massive win." }, { "start": 3168.28, "end": 3174.52, "text": " You're like, yes, language models can generate formal statements." }, { "start": 3174.52, "end": 3178.2000000000003, "text": " That was really the start." }, { "start": 3178.2000000000003, "end": 3185.7200000000003, "text": " I think leading to the first paper, the first GPTF paper, the two key moments where, okay," }, { "start": 3185.7200000000003, "end": 3192.8, "text": " let's try to scale the model size and seeing that scaling is really beneficial." }, { "start": 3192.8, "end": 3198.1200000000003, "text": " It's not, as we discussed, not as clear, but if you're just looking at performance in terms" }, { "start": 3198.12, "end": 3204.52, "text": " of model size, you see that very nice scaling if you don't adjust the compute basically." }, { "start": 3204.52, "end": 3208.92, "text": " That's something that is quite motivating and exciting because it's the trend of the" }, { "start": 3208.92, "end": 3214.64, "text": " domain in many aspects." }, { "start": 3214.64, "end": 3219.3199999999997, "text": " The key finding of the first paper that was really a motivation to continue working was" }, { "start": 3219.3199999999997, "end": 3220.3199999999997, "text": " that pre-training." }, { "start": 3220.3199999999997, "end": 3226.48, "text": " You talked about that in the review and you had some questions, but that pre-training" }, { "start": 3226.48, "end": 3232, "text": " really helps a lot and transfers very beneficially to formal math." }, { "start": 3232, "end": 3234.6, "text": " That's the bulk of that first paper." }, { "start": 3234.6, "end": 3237.96, "text": " Then after the first paper, you're like, oh, we have a nice result." }, { "start": 3237.96, "end": 3243.52, "text": " We've shown that language models can do some formal mathematics, but we were still completely" }, { "start": 3243.52, "end": 3248.32, "text": " unable to prove Olympiad's problems at all, even the really easy ones." }, { "start": 3248.32, "end": 3250.6, "text": " That's really what we started working on." }, { "start": 3250.6, "end": 3257.08, "text": " There, it's been also a long struggle, I think, until we just decided to bite the bullet" }, { "start": 3257.08, "end": 3263.96, "text": " and formalize some statements ourselves to generate that curriculum that really unlocks" }, { "start": 3263.96, "end": 3267.72, "text": " new capabilities and led to the work that we've shared." }, { "start": 3267.72, "end": 3276.64, "text": " Is there anything about the paper that you want people to get away or to take away with?" }, { "start": 3276.64, "end": 3282.3599999999997, "text": " Maybe you can look also a little bit beyond math, like what does this tell us or anything" }, { "start": 3282.3599999999997, "end": 3290.96, "text": " you'd like people to know?" }, { "start": 3290.96, "end": 3297.48, "text": " The main takeaway I think I want to share is why we look at beyond math, but first it's" }, { "start": 3297.48, "end": 3301.3199999999997, "text": " why formal math is awesome." }, { "start": 3301.3199999999997, "end": 3305.72, "text": " I think we covered that quite nicely, but to me, the main reason is that it's reasoning" }, { "start": 3305.72, "end": 3306.72, "text": " incomplete." 
}, { "start": 3306.72, "end": 3311.64, "text": " If you get a really impressive result in formal math, you're really confident that you have" }, { "start": 3311.64, "end": 3315.08, "text": " a very impressive result in reasoning." }, { "start": 3315.08, "end": 3319.56, "text": " Other interesting aspects of it is that it's inherently a safe setup." }, { "start": 3319.56, "end": 3327.12, "text": " A lot of people are talking about safety, and that's a last harbor where we're not yet" }, { "start": 3327.12, "end": 3333.64, "text": " at all at human level, yet it's safe to try to push as hard as you can because it's like" }, { "start": 3333.64, "end": 3334.64, "text": " games." }, { "start": 3334.64, "end": 3339.04, "text": " But in a formal system, there is no escape hatch." }, { "start": 3339.04, "end": 3343.8799999999997, "text": " And finally, the reason why I think it's so exciting is because it lets you combine a" }, { "start": 3343.8799999999997, "end": 3346.8399999999997, "text": " language model with a formal verifier." }, { "start": 3346.8399999999997, "end": 3349.64, "text": " And so you're really getting the best of both worlds." }, { "start": 3349.64, "end": 3355.92, "text": " You have language models that are really impressive into what they can generate, but even GPT-3," }, { "start": 3355.92, "end": 3361.64, "text": " if you give it a few deductive steps, it falls off really rapidly." }, { "start": 3361.64, "end": 3367.12, "text": " And so they are capable of one-step reasoning that are interesting, but not multi-step reasonings." }, { "start": 3367.12, "end": 3373.64, "text": " And so that's when you tie it with a verifier that you can basically get the value of multi-step" }, { "start": 3373.64, "end": 3377.52, "text": " reasoning by interacting with the verifier that is here to verify the prediction." }, { "start": 3377.52, "end": 3380.44, "text": " And that's, I think, what is really exciting here." }, { "start": 3380.44, "end": 3385.8799999999997, "text": " The verifier kind of almost gives you the internal monologue that humans have when they" }, { "start": 3385.8799999999997, "end": 3386.8799999999997, "text": " think." }, { "start": 3386.88, "end": 3393.1600000000003, "text": " It's hard to imagine a language model thinking hard during the duration of one context size," }, { "start": 3393.1600000000003, "end": 3394.1600000000003, "text": " right?" }, { "start": 3394.1600000000003, "end": 3399.6800000000003, "text": " Yet here, we do have that kind of property, which is exciting." }, { "start": 3399.6800000000003, "end": 3406.32, "text": " And finally, the reason why I'm super excited about it goes beyond mass, in a sense." }, { "start": 3406.32, "end": 3411.08, "text": " I think that's the reason why it's really..." }, { "start": 3411.08, "end": 3415.28, "text": " OpenAI is really a great place to work on that because it's really aligned with our mission" }, { "start": 3415.28, "end": 3417.96, "text": " and how we want to execute it." }, { "start": 3417.96, "end": 3425.2000000000003, "text": " The reason why is that I think if we crack formal mass, we really will be providing a" }, { "start": 3425.2000000000003, "end": 3431.8, "text": " blueprint on how to infuse much more reasoning in large informal language models." 
}, { "start": 3431.8, "end": 3438.44, "text": " And so I really see it as kind of a small experimental lab where we can study reasoning" }, { "start": 3438.44, "end": 3444.2400000000002, "text": " when we know that reasoning is kind of still lacking in those very large language models." }, { "start": 3444.24, "end": 3448.2799999999997, "text": " And so that's really that that excites me and I think it will transfer nicely." }, { "start": 3448.2799999999997, "end": 3452.8799999999997, "text": " You have formal mass, you have code generation in the middle because you have unit tests," }, { "start": 3452.8799999999997, "end": 3459.04, "text": " but beyond unit tests, you cannot know for sure that your program is correct." }, { "start": 3459.04, "end": 3463.64, "text": " And then you have fully informal setups where you just cannot verify your predictions." }, { "start": 3463.64, "end": 3465.9599999999996, "text": " I think that wraps it up pretty nicely." }, { "start": 3465.9599999999996, "end": 3468.3799999999997, "text": " Stan, thank you very much for being here." }, { "start": 3468.38, "end": 3486.36, "text": " This was really cool." } ]
2h4tRsQzipQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Autoregressive Diffusion Models (Machine Learning Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "diffusion models", "autoregressive models", "generative models", "nlp", "natural language processing", "gpt", "image-gpt", "gpt-3", "gpt-2", "order agnostic", "order agnostic diffusion", "generative diffusion models", "bert", "autoregressive bert", "bert text generation", "character level language model", "upscaling", "dynamic programming", "pixelwise sampling" ]
#machinelearning #ardm #generativemodels Diffusion models have made large advances in recent months as a new type of generative models. This paper introduces Autoregressive Diffusion Models (ARDMs), which are a mix between autoregressive generative models and diffusion models. ARDMs are trained to be agnostic to the order of autoregressive decoding and give the user a dynamic tradeoff between speed and performance at decoding time. This paper applies ARDMs to both text and image data, and as an extension, the models can also be used to perform lossless compression. OUTLINE: 0:00 - Intro & Overview 3:15 - Decoding Order in Autoregressive Models 6:15 - Autoregressive Diffusion Models 8:35 - Dependent and Independent Sampling 14:25 - Application to Character-Level Language Models 18:15 - How Sampling & Training Works 26:05 - Extension 1: Parallel Sampling 29:20 - Extension 2: Depth Upscaling 33:10 - Conclusion & Comments Paper: https://arxiv.org/abs/2110.02037 Abstract: We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models (Uria et al., 2014) and absorbing discrete diffusion (Austin et al., 2021), which we show are special cases of ARDMs under mild assumptions. ARDMs are simple to implement and easy to train. Unlike standard ARMs, they do not require causal masking of model representations, and can be trained using an efficient objective similar to modern probabilistic diffusion models that scales favourably to highly-dimensional data. At test time, ARDMs support parallel generation which can be adapted to fit any given generation budget. We find that ARDMs require significantly fewer steps than discrete diffusion models to attain the same performance. Finally, we apply ARDMs to lossless compression, and show that they are uniquely suited to this task. Contrary to existing approaches based on bits-back coding, ARDMs obtain compelling results not only on complete datasets, but also on compressing single data points. Moreover, this can be done using a modest number of network calls for (de)compression due to the model's adaptable parallel generation. Authors: Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we'll look at Autoregressive Diffusion Models by Emiel Hoogeboom and others of Google Research. On a high level, this paper proposes a new type of autoregressive model, specifically one where variables can be decoded in arbitrary orders. This is akin to the new types of diffusion models that have been used as generative models, and it essentially amounts to something like BERT applied in sequence. The training objective is set up such that we can decode variables in whatever order we like, and I can show you the results. The results are going to be that we can, for example, sample pictures pixel by pixel in order to make a generative model. So rather than GANs, which produce pictures all at once, or the autoregressive models we've had so far, which decode in a fixed order, for example from left to right, now we can do it in any order. In addition to this, they introduce techniques where you don't have to go pixel by pixel, but can decode multiple pixels at the same time and speed things up by a lot. This is also a community-informed paper review, which means that on our Discord server we have regular paper discussions, and this was one of them. I tried to pay attention; I can't say yet whether that has worked, but I'm trying to recount a little bit of it here, so my opinions are influenced a lot by what was said at the paper discussion. If you want to influence my opinion, feel free to join our paper discussions. Okay, so there we go. They say they introduce autoregressive diffusion models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models and absorbing discrete diffusion models, which they show are special cases, yada yada yada. They say ARDMs are simple to implement and easy to train, unlike standard autoregressive models, which you might know as LSTMs or GPT-type transformers (these are all autoregressive models). ARDMs do not require causal masking of model representations, and they can be trained using an effective objective, similar to modern probabilistic diffusion models, that scales favorably to high-dimensional data. At test time, ARDMs support parallel generation, which can be adapted to fit any given generation budget. So you can trade off how long it takes to produce a given sample against its quality: you can say "I want it faster," and you'll still get a sample, just a lower-quality one. They find that ARDMs require significantly fewer steps than discrete diffusion models to attain the same performance, yada yada yada. They also do lossless compression with it. Okay, so what's the deal with autoregressive models? If I have a bunch of variables, let's say a piece of text, what you'd usually do in GPT is give a prefix and then decode token by token from left to right: "a cat", and then the model has to predict "sat", "on", "the", and so on.
So you predict from left to right, one by one, and that's also how you train: you train from left to right and you predict from left to right. With text that makes some sense, because we also read from left to right. However, it would also make sense to decode in a different order. If you have "a cat" and you first decode, let's say, "mat" at the end, then it becomes pretty clear what goes in between. So in order to give the model the biggest freedom, you could let it decode other positions first: it could decode "mat" first, which would largely determine the rest of the sentence, whereas decoding strictly left to right, the model already has to have in mind what it wants to say later (the fact that there's a mat) in order to produce all the words before it. Done the other way, the model can predict that anchor first, and the rest is then mostly determined, so it can impute it. All of this is just to show that left to right is not the only way to decode. Even more so in something like Image GPT: you have an image, and in Image GPT you simply start at the top left and produce the pixels left to right, top to bottom, and that's it. There is not really a reason why this is the best order in which to produce things; it's simply that we train in this way, and that means we have to predict in this way. What the autoregressive diffusion models do is say: we're going to train a model that can produce a sample in any order, it doesn't matter which one. We could start with this pixel, then go to that one, then ask for another. We can even ask the model something like, "Which position do you feel most sure about?", the model can tell us, and that's the one we decode next. We can also tell the model to decode, say, three pixels at a time, then another three, and so on; that's the trade-off I mentioned. So here is how it looks in practice. The vector here is your sample, and usually you would decode top to bottom, the analogue of left to right. In this model, though, you start with everything empty: nothing is decoded yet. You have your neural network, your predictor, which predicts a distribution for every single position in the sample. These are categorical variables, so if the positions are pixels, each prediction is a distribution over colors. A prediction is made for the whole image, not just for the position you want to decode. After that, you decide on one position that you actually want to decode: you sample from its distribution, or take the maximum class, or whatever, and then you continue. In the next step, you feed in the same sample, except one value is now already decoded and the others are still empty. Again the network predicts a distribution for the entire image (for technical reasons, even the already-decoded position gets a prediction, though it doesn't need to), and again you decode one position of your choosing. Each step, you predict everything, commit one more variable, and repeat.
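To make that decode loop concrete, here is a minimal sketch of the any-order sampling procedure in Python. The `model` callable is a stand-in of my own (an assumption, not the paper's architecture) for whatever network maps a partially observed sample to per-position categorical distributions; everything else follows the loop just described.

```python
import numpy as np

def sample_any_order(model, num_vars, num_classes, rng, order=None):
    """Order-agnostic ancestral sampling: decode one variable per network call.

    `model(x, mask)` is assumed to return an array of shape
    (num_vars, num_classes) of categorical probabilities: one distribution
    for every position, given the values committed so far.
    """
    x = np.zeros(num_vars, dtype=np.int64)   # placeholder values everywhere
    mask = np.zeros(num_vars, dtype=bool)    # nothing is decoded yet
    if order is None:
        order = rng.permutation(num_vars)    # sigma: a uniformly random order
    for pos in order:
        probs = model(x, mask)               # predict ALL positions at once...
        x[pos] = rng.choice(num_classes, p=probs[pos])  # ...but commit only one
        mask[pos] = True                     # the next call sees this choice
    return x
```

The re-prediction inside the loop is the expensive part: one network call per variable. And instead of following a fixed permutation, you could pick `pos` on the fly, for example as the not-yet-decoded position with the lowest predictive entropy; that would be the "ask the model which one it's most sure about" variant.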
Specifically, which positions you decode is given by this sigma: a variable that stands for a given permutation. Before you sample, you can select a permutation, saying "here is the order in which I want to decode," and then you decode according to it. But to my mind it doesn't even matter if you decide on the fly; you can choose your desired order as you go. Now, if this seems familiar, if you've seen a model like this before, then you'd be correct if you're thinking of BERT. Even the paper says this is kind of like taking the BERT model and stacking it, or repeating it. Notice that these are always the same neural network: the same network makes the prediction at every single step. That's why it's an autoregressive model: you feed the output back into the same network. So what do you do in BERT? You have a sentence, say "a cat sat on", and for masked language modeling you put it through the neural network, and out comes one output per token. When you train BERT, you mask some of the tokens, for example this one and that one, and BERT predicts the masked tokens all at once. Each prediction is a categorical distribution, a classification into the vocabulary: which word was masked here? So BERT needs to infer, from the words that are present, what the missing words could be. Notice one interesting property and question: if the network already predicts a categorical distribution for every position that isn't decoded yet, why can't we just sample all of them at once? The answer is that these things are not independent. Say I have two positions that are not filled in yet, and for each the network gives me a distribution: for the first, class one is really popular, class two not so much, class three a little bit; for the second, class one is also popular, class two a little bit, class three not much. If those two variables were independent, we could totally fill both in at the same time. But they might not be: pixels in the same image typically aren't independent. If this pixel is blue, that's not independent of whether the pixel right next to it is blue, and that doesn't only hold for adjacent pixels; it holds for pixels farther away too, just more weakly the farther apart they are. So I can't just sample both independently. In order to sample one, I need to know what the other is: I need to sample one first, not just have its distribution, but actually commit to one of its outcomes, before I even try to sample the other, because committing to one actually changes the distribution of the other.
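A tiny numerical illustration of that dependence point (my own toy example, not from the paper): take two binary pixels that are perfectly correlated, both dark or both light with probability 1/2 each. The correct marginal for each pixel alone is 50/50, so sampling both positions independently from their (correct!) marginals produces a mismatched, invalid pair half the time, while committing one value and then re-conditioning never does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent sampling from the marginals p(x1) = p(x2) = [0.5, 0.5].
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
print("independent: mismatch rate =", np.mean(x1 != x2))   # ~0.5, invalid samples

# Ancestral sampling: commit x1 first, then sample x2 from p(x2 | x1),
# which for this toy joint is a point mass on x1's value.
x1 = rng.integers(0, 2, n)
x2 = x1.copy()                                             # p(x2 | x1) is deterministic
print("sequential:  mismatch rate =", np.mean(x1 != x2))   # 0.0, all valid
```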
Why does committing change things? The predicted distribution for the second position assumed the first was still uncertain, distributed according to its own prediction. However, once the first is sampled, it's no longer a distribution; it's one concrete value for sure, and that in turn changes the distribution of the second. So what I want to do is put the whole thing through the neural network again, in order to get the true conditional distribution of the remaining node. Maybe class one was really likely for it before, but now that it sees the other node has chosen class one, it concludes: I'm probably not class one, I'm class two. I hope this makes it a bit clear: even though we can train in BERT style, predicting all the missing things at once, we cannot decode all of them at once, because the elements depend on one another, and being dependent means each element needs to know what the others are before it commits to one of the classes of its distribution. That's the whole point: these models train like BERT, but they decode like autoregressive models, except that the order isn't fixed; the order can be any order you want. And they do actually apply this to text, so you can see how it looks. This is a character-level language model. It starts off with a mostly empty sentence: the underscores are variables that are not chosen yet. Then it fills in a bunch at the beginning, then some more, then some more. You'll notice that all the characters that already existed are kept (here the "x" still exists, the "i" still exists); there are just more and more imputed characters around them, until you finally arrive at the fully imputed sentence. These are actual samples from their model, and on character-level text it's not yet super good. The sentences don't really make sense, and I don't think those are all actual English words: "a potentially unsucked proof or inject operational weapons in the game car us individual model". It sounds English, but it may not exactly be English. So, since these are the beginnings of these types of models, it's unclear whether that's just early days or whether it is fundamentally a much better objective to train autoregressively from left to right. Because there are trade-offs: if you predict every missing position at once, your loss function has to split itself between all the things there are to predict, whereas if you train strictly left to right, the loss function can focus fully on what the next token is in the given order. So you gain the ability to decode in any order you want, but that has a performance cost, because a model that specializes in one particular order will always beat you on that order.
So let's go back, because I think that's the entire point. You can simplify this quite a bit by saying: this is BERT training, but you decode one variable after another. I'm pretty sure you could even take pre-trained BERT checkpoints and decode like this. The problem, however, is that those BERT checkpoints were trained with a fixed percentage of tokens masked out, usually something like 10 to 20 percent. To really get a model to produce samples from scratch, it must also have seen the case where 100 percent of the tokens are masked. The way you train these models is to mask tokens like BERT and then predict all of them at once, so the model has to have seen every proportion of masked tokens. That's not exactly what BERT is trained for, but in essence you could do it. So what's the background? The background is that the whole sample has a given probability, and by the multiplicative rule you can decompose that probability into a product, or in log space a sum, of conditional probabilities. This is what autoregressive models take: the probability of a given variable is conditioned on everything before it, so the joint factorizes into terms where each probability is conditioned on the ones that came before. These models now say: there is no particular reason why you have to factorize in that order; you can in fact factorize in any order you want. And once you recognize that, you can also see that you don't have to train only in the order you'll decode in; you can train for all orders at once. If my chosen order goes from here, to here, to here, then once I'm at the purple node, in this particular order I would decode this variable next. But in many other orders that arrive at the same set of decoded variables, I would decode that one next, and in yet another order this other one. Since these orders are sampled uniformly, I can reasonably assume that the next time I see this partial sample I'll be in one of those other orderings, so the expectation of my loss is just the average over predicting this one, or that one, or the other one at this point. So why wait for the next samples? I can simply predict all of them at the same time right now, and take the mean classification error as my loss function, rather than the loss on just the single next variable in the order I happen to be in. Left-to-right models don't need to do this, because they are always left to right: the next time they see the sample, they will have to decode exactly the same next variable. These models, however, are trained to work with arbitrary orders, so we might as well predict all of them at once and take the mean as the loss. And there again you see the trade-off: this allows us to decode in any order we want, but only a fraction, one over the number of remaining nodes, of the loss mass is trained on the variable that our eventual order will actually pick next. The others might help generalization a bit, but you significantly reduce the loss mass on the order you actually end up caring about when you sample.
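To pin that argument down with a bit of notation (this is my reconstruction from the reasoning above, so treat the exact constants as a sketch rather than the paper's verbatim objective): with a fixed order, the chain rule gives the usual autoregressive decomposition, and averaging over a uniformly random permutation sigma of the D variables turns the per-step term into exactly the "predict all remaining positions and average" loss:

```latex
% Fixed left-to-right factorization (standard autoregressive models):
\log p(x) = \sum_{t=1}^{D} \log p\left(x_t \mid x_{<t}\right)

% Order-agnostic version, averaged over uniformly sampled permutations \sigma
% (for one shared model this acts as a lower bound over the mixture of orders):
\log p(x) \;\gtrsim\; \mathbb{E}_{\sigma}\!\left[\sum_{t=1}^{D} \log p\left(x_{\sigma(t)} \mid x_{\sigma(<t)}\right)\right]
= D \; \mathbb{E}_{t \sim \mathcal{U}\{1,\dots,D\}} \; \mathbb{E}_{\sigma(<t)}\!\left[\frac{1}{D-t+1}\sum_{k \notin \sigma(<t)} \log p\left(x_k \mid x_{\sigma(<t)}\right)\right]
```

The 1/(D-t+1) factor is the "loss mass" point from above: at step t there are D-t+1 undecoded positions, and the one your eventual order actually picks next gets only an equal share of the gradient.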
Here is how you sample; it's pretty simple and it's what I said. You initialize x empty and you sample one order. As I said, you don't have to commit to one at the beginning, but that's how it's specified: you sample an order uniformly. Then you go through the ordering; sigma here is the permutation of positions. The decoding step is written in a very complicated way: they build these masks, where m is whatever has been decoded so far and n is the one node to be predicted right now. You feed the masked input, m times x, into your neural network (that's essentially the learned part), and the network outputs a categorical distribution for every single remaining node at once, given what's been predicted so far. Then you take the entry n, the one in the ordering that you chose to decode, and you simply amend the sample you have by the newly decoded value. That's all that's happening, just written in a very complicated way. Optimizing, i.e. training, these models isn't too hard either. You take a data point, which I guess you sample from the data set, and you sample one particular time step. Notice that at sampling time we go over all the time steps, because we actually want a full sample; that's much like transformer autoregressive models, except that there we can actually train all time steps at once, whereas here an individual training sample is one particular time step in one particular ordering. So we select an ordering, and within that ordering a time step, and practically what this amounts to is: you have a picture, you have pixels, and we just mask out a bunch of those pixels, black them out. That masking corresponds to some time step in some ordering. We assume we've already predicted all the pixels that we haven't masked, and now we try to predict all the ones we did mask, all at once. You'll notice there is no n here: the n specified the one pixel to decode next, but during training we simply mask out a bunch of pixels and predict them all together. So again we have m, what's been predicted so far; we input m times x into the network; the network predicts a distribution for every position not predicted so far; and rather than selecting n from it, we now select 1 minus m, everything that hasn't been predicted, and average the classification error over those positions. That average becomes our loss function, and since we know which pixels we masked during training, we can actually compute it. And that's it, that's how you train. Pretty simple, and as I said, this should remind you of BERT.
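As a rough sketch of that training step (PyTorch-flavored; the `model` signature and the convention of zeroing out masked values are my assumptions, not the paper's exact interface): sample a time step, build a random mask with that many positions visible, predict everything, and average the cross-entropy only over the masked positions.

```python
import torch
import torch.nn.functional as F

def training_step(model, x):
    """x: (batch, D) integer-valued variables; returns a scalar loss."""
    batch, D = x.shape
    t = torch.randint(1, D + 1, (batch,))             # one time step per sample
    # Random mask with exactly t-1 positions "already decoded" per sample.
    scores = torch.rand(batch, D)
    keep = scores.argsort(dim=1) < (t - 1).unsqueeze(1)
    m = keep.float()                                  # 1 = visible, 0 = masked
    logits = model(x * keep, m)                       # (batch, D, num_classes)
    ce = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")  # (batch, D)
    masked = 1.0 - m
    # Mean cross-entropy over the masked positions only (the "1 - m" selection).
    loss = (ce * masked).sum(dim=1) / masked.sum(dim=1).clamp(min=1)
    return loss.mean()
```

Whether you additionally reweight across time steps (the D-type factor from the bound above) is a detail this sketch glosses over; the per-sample mean over masked positions already matches the 1/(D-t+1) inner term.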
So they have several extensions to this, which I just briefly want to touch on. First they say: what if we allow a certain number of these independence mistakes? Given that we have, I don't know, a million pixels in an image, can't we just assume that the pixel up here and some pixel over there are kind of independent from each other? Then we could sample them at once; we can sample multiple pixels per step if they're far enough away from each other that we're fine with that. By predicting multiple pixels at a time we trade off speed against accuracy, because the pixels that we predict in the same time step have no knowledge of the other pixels in that step; that's the problem we've talked about before. Then they go a step further and say: rather than deciding up front that we want to decode, say, five pixels at a time instead of just one, we're going to give the algorithm a budget. Look, you have an entire image and 20 steps, so you need to decide how to spend them; this is the visualization right here. Maybe one pixel first, then two pixels, then three, then five, then the rest of the pixels: those are five time steps, that's your budget, you decide. They use a dynamic programming algorithm for this. As far as I understand it, they go through their training data set and compute what they call loss components: on one axis you have your budget and on the other the number of nodes in your data points, and an entry tells you, for example, how much it would cost to decode five variables at step number three. Then you can find, in classic dynamic programming fashion, a path through this matrix, and at the end this path tells you how many pixels you should decode at each step. For example, here in step one we decode two, then we decode one (the visualization also shows a zero, which makes no sense to me), and then we decode the rest. You know how dynamic programming works, and this figure is actually from a different paper, but they just say: given that we train for any order at all and predict all positions at the same time, this is an option, so you can technically trade this off.
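As a generic reconstruction of that budget-allocation step, here is what the dynamic program could look like. It assumes that decoding k variables in parallel when t variables are already decoded costs roughly k times a per-variable loss component L[t]; the paper's exact cost model may differ.

```python
def plan_budget(L, D, B):
    """Split D variables into B parallel decoding steps of minimal total cost.

    L[t]: assumed per-variable loss component with t variables already decoded.
    Assumes B <= D; returns B group sizes summing to D.
    """
    INF = float("inf")
    # best[b][t]: minimal cost to decode the first t variables in b steps
    best = [[INF] * (D + 1) for _ in range(B + 1)]
    choice = [[0] * (D + 1) for _ in range(B + 1)]
    best[0][0] = 0.0
    for b in range(1, B + 1):
        for t in range(1, D + 1):
            for k in range(1, t + 1):          # decode k variables in step b
                c = best[b - 1][t - k] + k * L[t - k]
                if c < best[b][t]:
                    best[b][t], choice[b][t] = c, k
    sizes, t = [], D                           # backtrack the optimal plan
    for b in range(B, 0, -1):
        sizes.append(choice[b][t])
        t -= choice[b][t]
    return sizes[::-1]
```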
What they also do is depth upscaling. The idea is this: if we're trying to predict a pixel value, that value is one of 256 classes, which is a lot, so the model has to commit to one of them immediately, as in "that's my pixel value". What if instead we could have the model first predict only which half of the pixel values it's in: are you bright in the blue channel, or are you dark? And we do this for all the pixels, so in the first iteration all the pixels in the image simply decide "am I light or am I dark". Then, once everyone has decided on that (imagine all of them filled in with that crude value), we go over the image again and say: okay, you pixel who previously decided you were light, now that you see all the other pixels and their crude decisions, which sub-part of "light" do you fall in, are you very light or just a bit light? And so we go through the image multiple times, possibly even in different orders. The advantage is that you first let the other parts make crude decisions, so you don't have to decide out of the blue: you know approximately what all the others are before you refine, and then you refine, refine, refine until you get to the final choice. I think this is a neat idea, and they specify exactly how to do it. However, I can't help noticing that the ordering by which you decode (first the crude part, then the not-so-crude part, then the even-less-crude part, and finally the full choice) is again a fixed-order autoregressive model. This is exactly what they're trying to get away from, and they just reintroduce it in a sub-part of their model, which I find funny. And, on the other hand, this is my other problem with it: this only really works if the variable isn't truly categorical. A pixel value is a continuous variable that we merely discretize, and that's why "decide coarsely, then go more and more detailed" works. If you have true classification, say into tokens of a vocabulary like a, b, c, d, e, it makes no sense to ask "which half of the alphabet are you in": the model can't make a crude decision there, it would already need the full answer to respond. So unless you have a way to split the vocabulary in a meaningful fashion, this doesn't apply; it's really a workaround for the artifact that they need categorical variables for their model, and that's why they discretize the brightness of the pixels in the first place. In any case, I don't want to dive too much into the results; you've already seen them. They don't go large scale as far as I can tell: they do CIFAR-10 generation, and they also do lossless compression. What they can do with their model is get a pretty good handle on the trade-off, so the user of the model has a good way of trading off performance for speed, and you can do this on the fly: you can say "I want less performance" or "I want more performance, I have a smaller or larger budget to infer the sample", and you can change that from time to time. And yeah, these models, as I said, are young, so they have a way to go. We've put so much work into GANs and other autoregressive text models that the fact that these are not yet state of the art might just be an artifact of that, or they might just suck, who knows. All right, thank you so much for listening. As I said, join our Discord to get in on the paper discussions; they're usually very entertaining. And I'll see you next time. Bye bye.
[ { "start": 0, "end": 6.24, "text": " Hi there! Today we'll look at autoregressive diffusion models by Emil Hageboom and others" }, { "start": 6.24, "end": 12.92, "text": " of Google research. This paper on a high level proposes a new type of autoregressive model," }, { "start": 12.92, "end": 22.16, "text": " specifically one where variables can be decoded in arbitrary orders. This is akin to the new" }, { "start": 22.16, "end": 27.68, "text": " types of diffusion models that have been used for generative models and it essentially amounts" }, { "start": 27.68, "end": 35.76, "text": " to something like BERT in sequence. The training objective is made such that we can decode variables" }, { "start": 35.76, "end": 42, "text": " as we like and I can show you the results. The results are going to be that we can for example" }, { "start": 42, "end": 51.66, "text": " sample pictures pixel by pixel in order to make a generative model. So rather than GANs which produce" }, { "start": 51.66, "end": 58.12, "text": " pictures all at once or what we had so far autoregressive models but with a fixed order" }, { "start": 58.12, "end": 64.84, "text": " from for example from left to right, now we can do it in any order. In addition to this they" }, { "start": 64.84, "end": 70.32, "text": " introduce techniques where you don't have to go pixel by pixel but you can do multiple pixels at" }, { "start": 70.32, "end": 80.69999999999999, "text": " the same time and speed up by a lot. So this is a paper which is also community informed. So this" }, { "start": 80.7, "end": 87.48, "text": " is a community informed paper review which means that on our discord server we have regular paper" }, { "start": 87.48, "end": 94.2, "text": " discussions. This was one of them. I tried to pay attention. I can't say yet whether that has worked" }, { "start": 94.2, "end": 103.56, "text": " but I'm trying to try to recount here a little bit also. So my opinions are influenced a lot by what" }, { "start": 103.56, "end": 109.92, "text": " was said at the paper discussion. If you want to influence my opinion feel free to join our paper" }, { "start": 109.92, "end": 119.32000000000001, "text": " discussions. Okay so there we go. They say they introduce these autoregressive diffusion models" }, { "start": 119.32000000000001, "end": 127.64, "text": " which is a model class encompassing and generalizing order-agnostic autoregressive models and absorbing" }, { "start": 127.64, "end": 134.12, "text": " discrete diffusion models which they show are special cases yada yada yada. They say they're" }, { "start": 134.12, "end": 139.52, "text": " simple to implement and easy to train unlike standard autoregressive models which you might" }, { "start": 139.52, "end": 148.72, "text": " know as LSTM or standard autoregressive models or GPT type transformers. These are all autoregressive" }, { "start": 148.72, "end": 155.20000000000002, "text": " models. They do not require causal masking of model representations and can be trained using" }, { "start": 155.20000000000002, "end": 161.96, "text": " an effective objective similar to modern probabilistic diffusion models that scales favorably to high" }, { "start": 161.96, "end": 169.64000000000001, "text": " dimensional data. At test time the ARDM support parallel generation which can be adapted to fit" }, { "start": 169.64000000000001, "end": 178.4, "text": " any given generation budget. 
So you can trade off how long you need to produce a given sample with" }, { "start": 178.4, "end": 185.06, "text": " how with the quality. So you can say I want it faster and you'll still get a sample you'll just" }, { "start": 185.06, "end": 191.94, "text": " get a like a lower quality sample. We find that they require significantly fewer steps than the" }, { "start": 191.94, "end": 196.4, "text": " discrete diffusion models to attain the same performance yada yada yada. They also do lossless" }, { "start": 196.4, "end": 202.84, "text": " compression with it. Okay so what's the deal with autoregressive models? If I have a bunch" }, { "start": 202.84, "end": 209.36, "text": " of variables let's say I have a piece of text or something like this what I'd have to do is" }, { "start": 209.36, "end": 217.72, "text": " what you'd usually do in GPT you give a prefix and then you decode a token by token from left to" }, { "start": 217.72, "end": 227.72, "text": " right right a cat and then the model has to predict sat on the and so on. So you predict from left to" }, { "start": 227.72, "end": 233.48, "text": " right one by one that's also how you train right you train from left to right you predict from" }, { "start": 233.48, "end": 240.4, "text": " left to right and with text that makes kind of sense because we also read from left to right" }, { "start": 240.4, "end": 249.24, "text": " right however it would also make sense to do this in a different order so if you have a cat and you" }, { "start": 249.24, "end": 258.72, "text": " first decode let's say mat right here then if you first do that then it becomes pretty clear what's" }, { "start": 258.72, "end": 267.32, "text": " in here so in order to give the model sort of the the biggest freedom you could let it decode in" }, { "start": 267.32, "end": 273.52, "text": " other places first and then it could decode the mat here first which would sort of determine the" }, { "start": 273.52, "end": 279.6, "text": " rest of the sentence whereas on the top the model already sort of has to have in mind what it wants" }, { "start": 279.6, "end": 286.36, "text": " to say later like the fact that that there's math here in order to produce all of these things here" }, { "start": 286.36, "end": 293.64, "text": " but in this way the model could predict that first and then the rest is sort of determined so it" }, { "start": 293.64, "end": 301.91999999999996, "text": " could impute that a little bit and this all of this is just to show you that it's not the only way" }, { "start": 301.91999999999996, "end": 307.86, "text": " to decode left to right and even more so in something like image GPT so you have an image" }, { "start": 307.86, "end": 315.52, "text": " and in again I produce the whole picture as one at once but in something like image GDP what I do" }, { "start": 315.52, "end": 322.24, "text": " is I start at the top left and I simply start producing the pixels left to right top to bottom" }, { "start": 322.24, "end": 330.16, "text": " right that's it and there is not really a reason why this is the best order to produce things out" }, { "start": 330.16, "end": 336.84000000000003, "text": " it's simply that we train in this way and that means we have to predict in this way what the" }, { "start": 336.84000000000003, "end": 344.40000000000003, "text": " autoregressive diffusion models do is they say we're gonna train a model that can produce a" }, { "start": 344.40000000000003, "end": 352.04, "text": " sample in any order it doesn't matter which one so we 
could start off with like this pixel then" }, { "start": 352.04, "end": 357.72, "text": " go to this and ask for this then ask for this we can even ask the model something like which one" }, { "start": 357.72, "end": 363.16, "text": " do you feel best about like which one are you most sure about and the model can tell us and then" }, { "start": 363.16, "end": 368.72, "text": " that's the one that we could we could decode further we can also tell the model to decode" }, { "start": 368.72, "end": 374.68, "text": " like three pixels at a time and then these three pixels and so on so that's the trade-off I" }, { "start": 374.68, "end": 380.64000000000004, "text": " mentioned so this is how it looks in practice what you're going to have is you're going to have a" }, { "start": 380.64, "end": 390.24, "text": " neural so here the vector is your sample right and usually you would decode top to bottom that's" }, { "start": 390.24, "end": 396.96, "text": " sort of the analogous to left to right that's what you usually would do however in this model you can" }, { "start": 396.96, "end": 403.52, "text": " see first it's empty so nothing is decoded yet you have your neural network you have your predictor" }, { "start": 403.52, "end": 412.68, "text": " let's say that predicts a distribution so for every single item in the sample it predicts a" }, { "start": 412.68, "end": 419.44, "text": " distribution so these here are categorical variables so it's going to be predicting a" }, { "start": 419.44, "end": 428.15999999999997, "text": " distribution and so all of these for example if there are pixels all of them predict color so" }, { "start": 428.16, "end": 434.68, "text": " prediction is made for the whole image and not just for the thing you want to decode and after" }, { "start": 434.68, "end": 441.68, "text": " that you decide on one of them that you actually want to decode you sample that or you take the" }, { "start": 441.68, "end": 448.64000000000004, "text": " maximum class or whatever and then you continue right then the next step so in the next step you" }, { "start": 448.64000000000004, "end": 455.56, "text": " have the same sample except that one of the values is now already decoded the other ones are still" }, { "start": 455.56, "end": 462.2, "text": " empty again you use a neural network to predict a distribution for the entire image you'll see" }, { "start": 462.2, "end": 469.8, "text": " that you know for technical reasons even this here is actually predicted it doesn't need to be but the" }, { "start": 469.8, "end": 478.76, "text": " important part is that you're going to predict the entire image at once and then you decide to again" }, { "start": 478.76, "end": 486.03999999999996, "text": " decode one of them that's your choosing so this one and you can see that you know this how this" }, { "start": 486.03999999999996, "end": 493.8, "text": " goes on specifically which ones you decode is given by a by this thing right here this sigma is" }, { "start": 493.8, "end": 501.71999999999997, "text": " a variable that stands for a given permutation so what you would do is if before before you sample" }, { "start": 501.71999999999997, "end": 507.92, "text": " you can select a permutation you can say here is the the order in which I want to decode and then" }, { "start": 507.92, "end": 513.36, "text": " you decode according to that but in my mind it doesn't matter even if you decide on the fly so" }, { "start": 513.36, "end": 519.8000000000001, "text": " you can decide on the fly you know here is 
here's my desired order I want to decode in that way now" }, { "start": 519.8000000000001, "end": 528.32, "text": " if this is seems familiar to you if you have seen a model something like this already before then" }, { "start": 528.32, "end": 535.32, "text": " if you're thinking of BERT you would be sort of correct so even the paper says that this is kind" }, { "start": 535.32, "end": 543.88, "text": " of like you take the BERT model and you just kind of stack it or you just repeat it notice the this" }, { "start": 543.88, "end": 549.24, "text": " here these are always the same neural network so the same neural network will predict every single" }, { "start": 549.24, "end": 558.12, "text": " step right here that's why it's an autoregressive model right because you input the output into the" }, { "start": 558.12, "end": 563.5200000000001, "text": " same neural network again so what do you do in BERT you have a bunch you have a sentence right" }, { "start": 563.52, "end": 571.04, "text": " a cat sat on if you do masked language modeling you put that through the neural network right" }, { "start": 571.04, "end": 582.4, "text": " that's BERT and out comes one sort of output per token now what you do when you train BERT you" }, { "start": 582.4, "end": 590.1999999999999, "text": " mask some of the tokens right for example this one and this one and then BERT predicts these BERT" }, { "start": 590.2, "end": 597.88, "text": " predicts these at once this one and this one and what you want to do sorry BERT predicts these" }, { "start": 597.88, "end": 603.0400000000001, "text": " tokens at once and that's a categorical distribution that's a classification into your vocabulary" }, { "start": 603.0400000000001, "end": 608.9200000000001, "text": " right which word was masked right here so what BERT needs to do is BERT needs to infer from the" }, { "start": 608.9200000000001, "end": 616.58, "text": " words that exist what other words could be here notice one interesting property about BERT the" }, { "start": 616.58, "end": 622.0400000000001, "text": " question is of course you know why do we even have to do this in a particular order can't we" }, { "start": 622.0400000000001, "end": 628.6800000000001, "text": " just if we are already predicting all pixels at once right the network already for each pixel" }, { "start": 628.6800000000001, "end": 635.2, "text": " that's not yet there predicts a categorical distribution why can't we just sample that right" }, { "start": 635.2, "end": 646.88, "text": " and the answer is because these things are not independent so if I if I simply if I have a bunch" }, { "start": 646.88, "end": 655.44, "text": " of variables right here let me use this one if every single one of these nodes gives me a" }, { "start": 655.44, "end": 661.6, "text": " distribution or let's say just the ones that are not just the ones that are not filled out yet" }, { "start": 661.6, "end": 669.76, "text": " right here I have two pixels or two elements that are not filled yet now I'm going to take my input" }, { "start": 669.76, "end": 676.32, "text": " vector and I want to use that to predict for every of one of these two pixels what's the" }, { "start": 676.32, "end": 682, "text": " distribution of values that could be there right so the distribution of values could be well the" }, { "start": 682, "end": 688.5600000000001, "text": " first number one is really popular to not so much number three a little bit and here it could be" }, { "start": 688.56, "end": 696.8, "text": " let's say number one 
also popular number two a little bit number three not that much right now" }, { "start": 696.8, "end": 704.1999999999999, "text": " if if those two are independent then we could totally fill these in at the same time but they" }, { "start": 704.1999999999999, "end": 709.76, "text": " might not be right pixels typically aren't independent if they're in the same image for" }, { "start": 709.76, "end": 719.6, "text": " example right if the entire if the pixel here is blue that makes it makes it's not independent" }, { "start": 719.6, "end": 724.88, "text": " of the fact of whether the pixel you know right next to it is blue and that doesn't only count" }, { "start": 724.88, "end": 730.48, "text": " for pixels next to one another that counts for pixels farther away of course the further they" }, { "start": 730.48, "end": 738.48, "text": " are the less dependent they probably are but still I can't just sample both independently I need to" }, { "start": 738.48, "end": 746.32, "text": " in order to sample one I need to know what the other is so I need to sample this one first and" }, { "start": 746.32, "end": 755.36, "text": " not just have the distribution I need to commit to one of the outcomes before I even try to sample" }, { "start": 755.36, "end": 760.64, "text": " the other one and by committing to one that will actually change the distribution of the other one" }, { "start": 760.64, "end": 768.08, "text": " because this here assumes that the other pixel will be according to this distribution however" }, { "start": 768.08, "end": 773.6, "text": " once it's sampled it's no longer this distribution it's actually one of these things for sure like" }, { "start": 773.6, "end": 779.5200000000001, "text": " it's maybe this one for sure if that has been sampled and that will change in turn the" }, { "start": 779.5200000000001, "end": 785.36, "text": " distribution so what I want to do is I want to put the whole thing through the neural network again" }, { "start": 785.36, "end": 793.6, "text": " in order to really get the true distribution of this node right here so maybe it's maybe it was" }, { "start": 793.6, "end": 799.84, "text": " really likely that number class number one was hit but now that it sees well this other node" }, { "start": 799.84, "end": 808.32, "text": " really has chosen number one so I'm probably not number one so I am class number two maybe" }, { "start": 809.28, "end": 816.88, "text": " I hope this is this is a bit clear that even though we can train in BERT style so we can predict all" }, { "start": 816.88, "end": 824.64, "text": " the things that are missing at once what we cannot do is we cannot decode all the things at once" }, { "start": 824.64, "end": 834, "text": " because what some of the elements or all of the elements are dependent on all of the other elements" }, { "start": 834, "end": 841.76, "text": " and being dependent means that we they need to know what the other elements are before they" }, { "start": 841.76, "end": 850.64, "text": " themselves commit to one of the classes of their distribution and that's the whole the whole point" }, { "start": 850.64, "end": 859.28, "text": " of it the point is these models they train like BERT but they decode like like autoregressive" }, { "start": 859.28, "end": 868.56, "text": " models except that the order isn't fixed the order can be any order you want and they do actually" }, { "start": 868.56, "end": 878.64, "text": " apply this to text so just so you can see that this how this looks so here's how it looks this" 
}, { "start": 878.64, "end": 888.3199999999999, "text": " is a character level language model right so the it starts off with a relatively empty empty" }, { "start": 889.3599999999999, "end": 895.1999999999999, "text": " sentence let's say so the underscores are just empty these are variables that are not chosen yet" }, { "start": 895.2, "end": 901.12, "text": " and then it's going to fill in a bunch at the beginning you can see that right here and it's" }, { "start": 901.12, "end": 906.32, "text": " going to fill in some more right so here it's going to fill in some more you'll notice that" }, { "start": 906.32, "end": 915.6, "text": " all of the ones that existed they should still exist do they do they i'm not even sure like" }, { "start": 916.24, "end": 924.5600000000001, "text": " here the x still exists the i still exists this i still exists yeah okay so all of the ones that" }, { "start": 924.56, "end": 932.3199999999999, "text": " were there they are still there but they're just more now and then more are imputed more are imputed" }, { "start": 933.28, "end": 941.92, "text": " until you finally come to the fully imputed sentence and you can see that these are actual" }, { "start": 941.92, "end": 949.92, "text": " samples from their model so on text on character level text it's not yet like super good the" }, { "start": 949.92, "end": 954.88, "text": " the sentence doesn't really make sense i don't think that's actually an english word it sounds" }, { "start": 954.88, "end": 963.04, "text": " english but it may not exactly be an english word a potentially unsucked proof or inject" }, { "start": 963.04, "end": 972.4799999999999, "text": " operational weapons in the game car us individual model so yeah this is it's unclear because these" }, { "start": 972.4799999999999, "end": 977.8399999999999, "text": " are the sort of the beginnings of these types of models of whether that's the case or whether" }, { "start": 977.84, "end": 985.52, "text": " it's just much much much more um a much better objective to just train order aggressive from" }, { "start": 985.52, "end": 992.08, "text": " left to right because there is also trade-offs right if you predict every single thing at once" }, { "start": 992.5600000000001, "end": 997.9200000000001, "text": " in your loss function has to split between all the things that there are to predict however" }, { "start": 997.9200000000001, "end": 1004.96, "text": " if you just train left to right then your loss function can focus fully on what the next token" }, { "start": 1004.96, "end": 1012, "text": " is right in the given order so you gain the ability to decode in any order you want but" }, { "start": 1012, "end": 1018.5600000000001, "text": " that has a trade-off namely a performance trade-off because the model that specializes in one particular" }, { "start": 1019.6800000000001, "end": 1027.3600000000001, "text": " in one particular order will always beat you so let's go back and i think that's you know that's" }, { "start": 1027.3600000000001, "end": 1034.32, "text": " the the entire point i've sort of found you can simplify this relatively much by essentially" }, { "start": 1034.32, "end": 1042.1599999999999, "text": " saying you know this is BERT training but you decode one after another and you can i'm pretty" }, { "start": 1042.1599999999999, "end": 1050.1599999999999, "text": " sure the way this this is you can you could take you could take the pre-trained BERT checkpoints" }, { "start": 1050.1599999999999, "end": 1056.8799999999999, "text": " 
and sort of decode like this however the problem is of course these BERT checkpoints they have been" }, { "start": 1056.88, "end": 1064.4, "text": " trained with like a fixed percentage of tokens masked out so they usually say it's like 10 to 20" }, { "start": 1064.4, "end": 1069.7600000000002, "text": " of tokens masked out however in order to really get these models to produce samples they also" }, { "start": 1069.7600000000002, "end": 1076.96, "text": " had had to have seen cases where like this case where zero percent sorry not zero 100 percent of" }, { "start": 1076.96, "end": 1083.3600000000001, "text": " the tokens are masked right so the way you train this is you mask tokens like BERT and then you" }, { "start": 1083.36, "end": 1089.84, "text": " predict all of them at once so the model would have to have seen every single proportion of" }, { "start": 1089.84, "end": 1098.24, "text": " masked tokens so that's not what exactly what what BERT is trained for but in essence you could do it" }, { "start": 1098.9599999999998, "end": 1104.8799999999999, "text": " so what's the background the background is essentially that these models what they usually" }, { "start": 1104.8799999999999, "end": 1113.1999999999998, "text": " do is they say look the whole sample has a given probability i can decompose that probability due" }, { "start": 1113.2, "end": 1120.56, "text": " to the multiplicative rule into products or in the log space sums of probabilities and this here" }, { "start": 1120.56, "end": 1128.0800000000002, "text": " this part here is what the order aggressive models take they say look if i have a bunch of nodes then" }, { "start": 1128.0800000000002, "end": 1136.0800000000002, "text": " the probability of for example this node is conditioned on everything that's before so i" }, { "start": 1136.08, "end": 1143.4399999999998, "text": " can factorize this into products where every probability is conditioned on the ones before" }, { "start": 1146.1599999999999, "end": 1151.9199999999998, "text": " and these models they essentially go and they say well there is no reason no particular reason why" }, { "start": 1152.48, "end": 1158.56, "text": " you have to factorize in this way you can in fact factorize in any order that you want and" }, { "start": 1159.76, "end": 1165.04, "text": " if you do that if you recognize that you can factorize in any order you want you can also" }, { "start": 1165.04, "end": 1174.8799999999999, "text": " say that you can also say that the you can essentially not only train in the order" }, { "start": 1176.48, "end": 1187.6, "text": " that you decode in you can already train for all the orders at once right so if if my chosen order" }, { "start": 1187.6, "end": 1197.76, "text": " is i go from here to here to here to here right once i'm at the purple node right in this particular" }, { "start": 1197.76, "end": 1207.04, "text": " order i would go here next but in many other orders right where i came from from here in" }, { "start": 1207.04, "end": 1212.9599999999998, "text": " a other order i would go here next and in yet another order i could choose i would go here next" }, { "start": 1212.96, "end": 1219.04, "text": " and these orders i sample uniformly okay so i can reasonably assume that the next time i see the" }, { "start": 1219.04, "end": 1226.72, "text": " sample i'm in one of those other orderings right and therefore the expectation of my loss function" }, { "start": 1226.72, "end": 1235.28, "text": " is just the average if i were to predict this one or 
this one or this one at this time and therefore" }, { "start": 1235.8400000000001, "end": 1242.8, "text": " if why do i have to wait for the next samples i can simply say right now well i'm simply going" }, { "start": 1242.8, "end": 1248.6399999999999, "text": " to predict all of them at the same time and then take the mean as my loss function so the mean" }, { "start": 1248.6399999999999, "end": 1254.32, "text": " classification error as my loss function rather than just predict the one in the order where i" }, { "start": 1254.32, "end": 1260.6399999999999, "text": " happen to be left to right models don't need to do that because they are always left to right so the" }, { "start": 1260.6399999999999, "end": 1268.08, "text": " next time they see the sample they will have to only decode the exact same next variable however" }, { "start": 1268.08, "end": 1275.28, "text": " these models we train them to work in arbitrary orders and therefore we might as well predict all" }, { "start": 1275.28, "end": 1280.8, "text": " of the orders at once and take the mean of the loss function as the loss function and there again" }, { "start": 1280.8, "end": 1289.76, "text": " you see the trade-off this allows us then to decode in any order we want however also there's a trade-off" }, { "start": 1289.76, "end": 1297.52, "text": " now only one over the number of of remaining nodes is the portion of the loss function that is really" }, { "start": 1297.52, "end": 1304.72, "text": " trained on the order that we're eventually going to have and all the others are essentially superfluous" }, { "start": 1304.72, "end": 1313.04, "text": " well they might help for generalization a bit but you know the you you significantly reduce loss mass" }, { "start": 1313.68, "end": 1319.92, "text": " on the order that you actually then care about at the end when you sample so here is how you sample" }, { "start": 1319.92, "end": 1327.12, "text": " it's pretty simple it's what i said so you initialize x empty you sample one order as i said you" }, { "start": 1327.12, "end": 1332, "text": " don't have to commit to one at the beginning but that's how you specify you sample and order" }, { "start": 1332, "end": 1339.84, "text": " uniformly then you go through the through the ordering through the permutation here sigma is" }, { "start": 1339.84, "end": 1348.9599999999998, "text": " the permutation of nodes decode this is very complicated written so they build these masks" }, { "start": 1348.9599999999998, "end": 1355.6799999999998, "text": " right here you can see they build these masks and essentially m is just whatever has been decoded so" }, { "start": 1355.68, "end": 1364.8, "text": " far n is whatever is whatever one node is to be predicted right now so what you do is you build" }, { "start": 1364.8, "end": 1373.68, "text": " a categorical distribution you put the masked x into your neural network build a categorical" }, { "start": 1373.68, "end": 1384.3200000000002, "text": " distribution so this here means you predict all of the nodes at once given what you've predicted so" }, { "start": 1384.32, "end": 1390.32, "text": " far so m times x is what you've predicted so far that goes into a neural network that's essentially" }, { "start": 1390.32, "end": 1396.56, "text": " the learned part of this and the neural network will output a distribution a categorical distribution" }, { "start": 1396.56, "end": 1405.6, "text": " for every single other node there is and what you do then is you choose the one the n you know that's" }, { 
"start": 1405.6, "end": 1413.84, "text": " the entry in the ordering that you chose you choose the one that you want to decode and you simply" }, { "start": 1413.84, "end": 1423.04, "text": " augment amend the sample that you have by the one you want to decode this is written very complicated" }, { "start": 1423.04, "end": 1430.8799999999999, "text": " in a very complicated way so optimizing training these models isn't too hard either what you're" }, { "start": 1430.8799999999999, "end": 1438.56, "text": " going to do is you have a data point that i guess you sample from the data set you're going to sample" }, { "start": 1438.56, "end": 1444.3999999999999, "text": " one particular time step so notice here we go over all the time steps because we actually want to" }, { "start": 1444.3999999999999, "end": 1451.6, "text": " get a sample when we train that's much like transformer autoregressive models actually there" }, { "start": 1451.6, "end": 1458.1599999999999, "text": " we can train all the time steps at once but the individual training sample is just we select one" }, { "start": 1458.1599999999999, "end": 1464.56, "text": " particular time step in one particular ordering right so we select an ordering and in that ordering" }, { "start": 1464.56, "end": 1473.6799999999998, "text": " we select the time step and typically what you do is so you have a picture you have pixels what" }, { "start": 1473.6799999999998, "end": 1480.48, "text": " this amounts to is we say okay we're just going to mask a bunch of these pixels right here we're" }, { "start": 1480.48, "end": 1486.24, "text": " just going to black them out right that will correspond to some time step in some ordering" }, { "start": 1486.8, "end": 1491.44, "text": " so we're just going to assume we've predicted all of the ones that we haven't masked and now" }, { "start": 1491.44, "end": 1497.04, "text": " we're trying to predict all of the ones that we did mask right all of these ones we're going to" }, { "start": 1497.04, "end": 1508.72, "text": " predict at once and um yeah that will so you notice that there is no n right here the n" }, { "start": 1508.72, "end": 1516.3200000000002, "text": " specifies the one pixel you want to predict next but during training we simply mask out a bunch of" }, { "start": 1516.32, "end": 1522.8, "text": " pixels and then we predict all at once so again we have the m which is what we've predicted so far" }, { "start": 1522.8, "end": 1528.96, "text": " we input m times x into the neural network so the neural network will predict the distribution of" }, { "start": 1529.52, "end": 1535.84, "text": " every single thing that we haven't predicted so far and rather than selecting n from it" }, { "start": 1536.8, "end": 1544.8799999999999, "text": " we now select one minus m so everything that hasn't been predicted so far and then we average that" }, { "start": 1544.88, "end": 1552.3200000000002, "text": " and that will become our loss function okay now given that we know what the pixels are that we've" }, { "start": 1552.3200000000002, "end": 1558.8000000000002, "text": " masked during training we can actually compute this loss function and you know that's that's it" }, { "start": 1558.8000000000002, "end": 1565.7600000000002, "text": " that's how you train uh pretty simple as i said this should remind you of BERT and yeah so they" }, { "start": 1565.7600000000002, "end": 1572.8000000000002, "text": " have several extensions to this which i just briefly want to touch so they now they say well" }, { "start": 
1572.8, "end": 1580.8, "text": " if we if we sort of allow a bunch of times these dependence independency mistakes so you know given" }, { "start": 1580.8, "end": 1587.6, "text": " that we have like i don't know a million pixels in an image right can't we just sort of assume" }, { "start": 1587.6, "end": 1592.1599999999999, "text": " that you know the pixel up here and maybe the pixel here they're kind of independent from each" }, { "start": 1592.1599999999999, "end": 1600.8, "text": " other so couldn't we just sort of sample um sample them at once so we can sample multiple pixels at" }, { "start": 1600.8, "end": 1608.08, "text": " once if they're kind of far away from each other we we're just kind of fine with that um and uh" }, { "start": 1608.6399999999999, "end": 1618.1599999999999, "text": " yeah so we trade off speed predicting multiple pixels at a time by we trade off speed and" }, { "start": 1618.72, "end": 1624.8799999999999, "text": " accuracy essentially because now the pixels that we predict at the same time they have no knowledge" }, { "start": 1624.8799999999999, "end": 1629.9199999999998, "text": " of the other pixels in the same time step that's the problem we've talked about before" }, { "start": 1629.92, "end": 1634.24, "text": " and then they go a step further and they say well rather than deciding you know we want to decode" }, { "start": 1634.24, "end": 1639.28, "text": " five pixels at a time instead of just one what we're going to do is we're going to give the" }, { "start": 1639.28, "end": 1648.16, "text": " algorithm a budget and they say look you have an entire image we have 20 steps so you need to decide" }, { "start": 1648.72, "end": 1654.0800000000002, "text": " this is the visualization right here you have 20 steps you need to decide do i want to go like" }, { "start": 1654.08, "end": 1663.12, "text": " do i want to go like um do i want to go so here is like one pixel then two pixels then three pixels" }, { "start": 1663.12, "end": 1669.52, "text": " then five pixels then the rest of the pixels right these are five time steps that's your budget you" }, { "start": 1669.52, "end": 1677.52, "text": " decide so they use a dynamic programming algorithm essentially they build up they go through their as" }, { "start": 1677.52, "end": 1686.16, "text": " far as i understand it they go through their training data set and um they compute what they" }, { "start": 1686.16, "end": 1696.24, "text": " call loss components so here is your your budget and here is the number of nodes in the uh in the" }, { "start": 1697.12, "end": 1706.48, "text": " here is the number of nodes in your data points and so you can say okay for step number three" }, { "start": 1706.48, "end": 1714.88, "text": " if i were to decode five uh steps in step number three right how much would that cost and then you" }, { "start": 1714.88, "end": 1723.04, "text": " can try to find in classic dynamic programming fashion a path through this matrix and you know" }, { "start": 1723.04, "end": 1728.96, "text": " at the end this path is going to give you what how many pixels you should decode at what step" }, { "start": 1728.96, "end": 1736, "text": " so for example here in step one we decode two then we decode one i don't know what this is" }, { "start": 1736, "end": 1745.28, "text": " actually means one no zero that makes no sense and then we decode the rest but you know how dynamic" }, { "start": 1745.28, "end": 1751.2, "text": " programming works and this isn't this is from a different paper actually but 
they just say you" }, { "start": 1751.2, "end": 1758, "text": " know we can use this given that we train for any order at all and predict all at the same time this" }, { "start": 1758, "end": 1766, "text": " is an option so you can technically trade this off what they also do is this depth upscaling" }, { "start": 1767.12, "end": 1772.16, "text": " and what they do in the depth upscaling is they say well you know if we're trying to predict a" }, { "start": 1772.16, "end": 1779.92, "text": " pixel value for a pixel right the pixel value is like 256 classes yeah it's it's a big thing right" }, { "start": 1780.8, "end": 1786, "text": " let's not have the model so the model needs to sort of commit to one of them" }, { "start": 1786, "end": 1792.24, "text": " you know in immediately like that's my pixel value what if what if we could do the following" }, { "start": 1793.04, "end": 1800.24, "text": " what if we could have the model just predict which half of the pixel values it's in right are you" }, { "start": 1800.24, "end": 1808.4, "text": " bright in the blue channel or are you not bright are you dark okay and then we do this for all the" }, { "start": 1808.4, "end": 1814.4, "text": " pixels so all the pixels in the image they simply first in the first iteration decide" }, { "start": 1814.4, "end": 1822.3200000000002, "text": " am i light or am i dark right am i light am i dark am i light am i dark and so on and then once" }, { "start": 1822.3200000000002, "end": 1829.3600000000001, "text": " everyone has decided on that we go over the image again and we say well okay now okay i should have" }, { "start": 1829.3600000000001, "end": 1836.4, "text": " filled all of them just imagine all of them filled in now they say okay now you pixel who previously" }, { "start": 1836.4, "end": 1842.72, "text": " decided you were light now that you see all the other pixel and their crude decision you know" }, { "start": 1842.72, "end": 1850.64, "text": " what sub part of the light do you fall in are you very light or are just a bit light and then so we" }, { "start": 1850.64, "end": 1856.24, "text": " go through the image multiple times right it can even be in different orders and the advantage here" }, { "start": 1856.24, "end": 1862.64, "text": " is that you first let the other parts make crude decisions and then you don't have to decide out of" }, { "start": 1862.64, "end": 1868.32, "text": " the blue right so you you know sort of approximately what all the others are before you refine and then" }, { "start": 1868.32, "end": 1875.52, "text": " you refine refine refine until you get to the final choice so this is i think this is a neat idea" }, { "start": 1876.08, "end": 1883.76, "text": " they specify exactly you know how to do this however i can't help noticing that as you can see" }, { "start": 1883.76, "end": 1891.6799999999998, "text": " the ordering here by which you decode so you first predict the the crude part then the not so crude" }, { "start": 1891.6799999999998, "end": 1897.6799999999998, "text": " part then the not so not so crude part and finally you predict the the final choice" }, { "start": 1897.68, "end": 1905.28, "text": " the the full part i can't help but notice that this is again a fixed order autoregressive model" }, { "start": 1905.28, "end": 1912.64, "text": " right this is this is again like this is exactly what they're trying to run away from so they they" }, { "start": 1912.64, "end": 1921.04, "text": " just introduce it again in a sub part of their model which i find to be 
funny right and on the" }, { "start": 1921.04, "end": 1926.8, "text": " on the other hand this this only works really this is my other problem with this this only works if" }, { "start": 1926.8, "end": 1931.44, "text": " this isn't really a categorical variable right pixel value pixel value is a continuous variable" }, { "start": 1931.44, "end": 1936.8, "text": " you can be anywhere we just discretize it right and that's why this works the you know decide on" }, { "start": 1936.8, "end": 1943.28, "text": " your crude and then go go more less and less crude go more and more detailed if you have something" }, { "start": 1943.28, "end": 1952.96, "text": " like true classification right let's say into tokens of a vocabulary like a b c d e it makes" }, { "start": 1952.96, "end": 1958.4, "text": " it makes no sense to ask them well in which half of the alphabet are you the model can't do a crude" }, { "start": 1958.4, "end": 1964.24, "text": " decision it already needs to know to answer this question for you so unless you have a way to" }, { "start": 1964.24, "end": 1971.44, "text": " really split the vocabulary in meaningful fashion it this doesn't make sense this is really this is" }, { "start": 1971.44, "end": 1978.72, "text": " really a a workaround around the artifact that they need categorical variables for their model" }, { "start": 1978.72, "end": 1986.64, "text": " and therefore they discretize the the the brightness here of the pixels and you know that" }, { "start": 1986.64, "end": 1992.16, "text": " that's a result of that so in any case i don't want to dive too much into the results you've" }, { "start": 1992.16, "end": 1998.72, "text": " already seen them they do don't do large scale as far as i can tell they do c for 10 generation" }, { "start": 1998.72, "end": 2004.4, "text": " they also do lossless compression what they can do is with their model they have a pretty good" }, { "start": 2004.4, "end": 2010.88, "text": " handle at the trade-off so this gives you the applet so the the user of the model a good way" }, { "start": 2010.88, "end": 2020.72, "text": " of trading off performance for speed and you can do this on the fly right you can do you can say" }, { "start": 2020.72, "end": 2026.64, "text": " i want less performance i want more performance i have less of a budget to infer the sample or more" }, { "start": 2026.64, "end": 2033.1200000000001, "text": " and you can change from from time to time and yeah these these models as i said they're young" }, { "start": 2033.12, "end": 2039.4399999999998, "text": " therefore they have a way to go we've put so much work into GANs and whatnot and and other" }, { "start": 2039.4399999999998, "end": 2046.08, "text": " aggressive text models that the fail like the fact that these here are not state of the art yet" }, { "start": 2046.08, "end": 2051.8399999999997, "text": " they might it might just be an artifact of that or they might just suck who knows all right thank" }, { "start": 2051.8399999999997, "end": 2058.4, "text": " you so much for listening as i said join our discord to get in on the paper discussions they're" }, { "start": 2058.4, "end": 2069.28, "text": " usually very very entertaining and i'll see you next time bye bye" } ]
ZfDZRX3WiJg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
VirTex: Learning Visual Representations from Textual Annotations (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "cnn", "visual", "resnet", "caption", "nlp", "transformer", "vasvani", "attention", "text", "coco", "imagenet", "convolutional neural network", "adaptation", "transfer learning", "quality", "unsupervised", "self-supervised" ]
Pre-training a CNN backbone for visual transfer learning has recently seen a big push into the direction of incorporating more data, at the cost of less supervision. This paper investigates the opposite: Visual transfer learning by pre-training from very few, but very high-quality samples on an image captioning task. OUTLINE: 0:00 - Intro & Overview 1:00 - Pre-Training for Visual Tasks 3:40 - Quality-Quantity Tradeoff 5:50 - Image Captioning 8:35 - VirTex Method 14:30 - Linear Classification 20:30 - Ablations 22:05 - Fine-Tuning 25:45 - Attention Visualization 27:30 - Conclusion & Remarks Paper: https://arxiv.org/abs/2006.06666 Code: https://github.com/kdexd/virtex Abstract: The de-facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pretraining, and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images. Authors: Karan Desai, Justin Johnson Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at VirTex: Learning Visual Representations from Textual Annotations, by Karan Desai and Justin Johnson of the University of Michigan. This paper at its core is pretty simple. On a high level, it proposes to take the task of image captioning, where you're given an image and asked to produce a caption for it, train a model to do this, and then just take the visual part of that model as a backbone to transfer-learn on other visual tasks. And that appears to work surprisingly well if you don't have much data to pre-train on. As always, if you like content like this, consider sharing it out, subscribing to the channel, or telling me what you think in the comments. As I already said, the idea here is pretty simple. People have been looking for pre-training tasks for visual tasks. A visual task is anything where the input is an image; you usually have some sort of neural network that processes the image, and at the end you can have many things. You could have a classifier that classifies the image into one of many classes; if you know ImageNet, that's the setting, so if there's a cat here, the ImageNet classifier would say "cat". Or you could have something like an object detector that tries to predict where on the image the cat is, with a bounding box. You could have semantic segmentation that says all of these pixels here are cat and maybe all of those pixels are sky, labeling every pixel. There are many visual tasks you can formulate, and they all sort of share the same architecture. Specifically, they all share this part right here, the visual encoder, usually a convolutional neural network. What's really different between the tasks is mostly the last part that does the actual task; the shared part is often called the backbone. The idea now is: if I have a bunch of these tasks, sometimes I don't have many labels for them, not enough labeled images to train this big architecture from scratch, like in medical imaging or other domains where you don't have many images. So couldn't I somehow come up with a method to create this backbone beforehand, given another data set? The simplest variant is: you take a big image data set such as ImageNet and train a classifier, like we said, to predict the classes on it. Because ImageNet has a lot of images, this gives you a good backbone, and whenever you have a different task, you simply take the backbone, transfer it over, and continue training on the other task. That's called transfer learning. The question is, how do you get a good backbone? If you train on something like ImageNet, that is of course a supervised task with a very good learning signal, but even ImageNet has only about a million images, while the internet, for example, has many more. So what you could do is train on a much bigger data set collected from the internet; let's call it "Internet". But there you don't have labels, so instead of supervised learning you'll have to resort to self-supervised learning, where you have an image and maybe you rotate it to the right: so here is our cat, you rotate it to the right.
And then you have a classifier that predicts that this image was rotated to the right, and that network becomes your backbone. These self-supervised methods work very well; there are a number of them, for example MoCo and things like that. There are also a number of techniques that do supervised pre-training and then transfer learning; you can watch my video on Big Transfer, which is a very large attempt to pre-train a backbone for visual tasks. All right. Now, you can see that the general direction here is "the more data the better": ImageNet is a big data set, so we can train a really good backbone, but the internet is an even bigger data set on which we don't have labels. So there's a trade-off, but we can potentially train an even better visual backbone to then transfer-learn with. This paper goes in a different direction. It says: look, if you go in that direction, you get more images, but you get less information per image. With ImageNet you at least have the label per image, but if you simply take photos from the internet you don't even have that, and you have to resort to self-supervision. What if we go in the other direction and look for images that have very high quality annotations, but maybe not as many of them? Can we learn good backbones by trading off quantity for quality? And their quality-quantity trade-off is to go for descriptions: you'll have an image, and you'll have a caption for the image. They show these on an axis from semantically sparse to semantically dense, and their task is going to be caption generation: given an image, produce a caption. There are data sets you can train this from in a supervised fashion, which of course are very expensive to create. If you want to create an ImageNet data set, you have to label each image; but if you want to create a caption data set, that's even harder, because a human really needs to sit down and look at the image, and whereas in ImageNet everything is one class, here they have to come up with an adequate description. Here the adequate description is "an orange and white cat near a plate and a white cake". And of course the caption is ambiguous, so you'll have to collect multiple captions per image, and you'll have to make sure that the humans doing this do a good job, and so on. So these are very, very expensive data sets, but they are very high quality. Think of what a single label gives you: ImageNet has a single label per image, say "cat", or "cake" for that matter, which is very few bits of information. But if you consider the text "an orange and white cat near a plate and a white cake", you know that there is a cat, you know that it's one cat, you know what its color is, orange and white, and you know that there is a white cake, the other object, and you know the relation: they are near each other. Okay, same for "a brown and white puppy": that's one object and the description of the object, there are apples, there is a green lawn, and the relations between them are also clear.
The puppy is lying on the green lawn and looking at the apples. So the information in captions is much denser than in plain labels. And that's the backdrop here: can't we pre-train a backbone from a dataset that is maybe small, but carries this much information — an image-caption dataset? Their method is nothing more than that: they train image captioning, and then they use the visual backbone for transfer learning. So this is the model. There's an image; the image goes into this visual backbone, which is a ResNet-50, a very standard convolutional neural network, and that gives you features. These features are 7 by 7 by 2048 — the standard output of a ResNet-50. From there they do a linear projection so that they can feed the features into a language model. So they have visual features, and they feed those into the language model, which is just a transformer — actually two transformers, both autoregressive. One transformer tries to predict the caption in a forward direction, and the other tries to predict the caption in a backward direction; that's the one down here, where the caption has been reversed. If you don't know what a transformer is, I've made several videos on transformers — the first one is Attention Is All You Need — and that's essentially the kind of transformer they use here. As you can see, you have the multi-head attention, the layer normalization, and the attention in the decoder. Now, the difference between the original Vaswani "Attention Is All You Need" transformer and this one is the following. In the original transformer, if you had a machine translation task, you would have, say, a French sentence over here, and the beginning of the German sentence here — what you have already produced — and you're asking what the next word should be. The architecture was such that there is an encoder transformer that encodes the source sentence and a decoder transformer, and at some point there is cross-attention: the signal from the encoder goes into the decoder, the decoder incorporates it, and at the end the decoder predicts what the next word will be. The only difference here is that the encoder is no longer a transformer but this ResNet-50. You can think of it as a translation task from images to text: the input is an image, and the signal that would normally come from the source sentence now comes from the image, from these visual features. So in this drawing, this thing goes in here, and then you simply predict the next word — and you do it in both directions. The reason you can do it in both directions, which isn't possible in a standard decoding task, is that you don't need to do autoregressive inference here; you just need to train, and training can be done with teacher forcing. So you can do this in a bidirectional way.
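To make the architecture concrete, here is a minimal sketch of how such a bicaptioning pre-trainer could be wired up in PyTorch. The class and method names, the use of nn.TransformerDecoder, and all hyperparameters are illustrative assumptions based only on the description above (ResNet-50 feature map, a linear projection, one shallow forward decoder and one backward decoder trained with teacher forcing); this is not the authors' actual implementation.

import torch
import torch.nn as nn
import torchvision

class VirTexSketch(nn.Module):
    """Illustrative VirTex-style pre-trainer: ResNet-50 -> projection -> two decoders."""
    def __init__(self, vocab_size, hidden=1024):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # keep 7x7x2048 map
        self.project = nn.Linear(2048, hidden)  # linear projection into the language model
        self.embed = nn.Embedding(vocab_size, hidden)
        def make_decoder():
            layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True)
            return nn.TransformerDecoder(layer, num_layers=1)  # one shallow layer
        self.fwd = make_decoder()  # predicts the caption left-to-right
        self.bwd = make_decoder()  # predicts the reversed caption
        self.out = nn.Linear(hidden, vocab_size)

    def caption_loss(self, decoder, memory, tokens):
        # Teacher forcing: feed tokens[:-1], predict tokens[1:], under a causal mask.
        inp, target = tokens[:, :-1], tokens[:, 1:]
        mask = torch.triu(torch.full((inp.size(1), inp.size(1)), float("-inf")), diagonal=1)
        h = decoder(self.embed(inp), memory, tgt_mask=mask)
        return nn.functional.cross_entropy(self.out(h).transpose(1, 2), target)

    def forward(self, images, tokens):
        feats = self.backbone(images)                             # (B, 2048, 7, 7)
        memory = self.project(feats.flatten(2).transpose(1, 2))   # (B, 49, hidden)
        # Bicaptioning: forward loss plus backward loss on the reversed caption.
        return self.caption_loss(self.fwd, memory, tokens) + \
               self.caption_loss(self.bwd, memory, tokens.flip(1))

model = VirTexSketch(vocab_size=10000)
loss = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
loss.backward()

The paper's decoder details differ from nn.TransformerDecoder, but the shape of the computation is the point: almost all of the capacity sits in the ResNet-50, with only a shallow language head on top, which is exactly what's discussed next.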
At inference time you don't need any of this. You simply cut off the language-model part, and what remains is your visual backbone; these features are the features that you then train your downstream task on. Sometimes you fine-tune the backbone and sometimes you keep it frozen — you can choose. Alright, so: a convolutional network encodes the images and gives you visual features; those visual features go into two transformers, both of which try to predict the caption of the image, one in a forward direction and one in a backward direction, and you train them to predict the gold-standard captions in your dataset as accurately as possible. That's it. If you train this model well, the model can produce accurate captions for these images, which means it has learned something meaningful about the image — to the degree, of course, that the original caption in your dataset was a good, descriptive caption; we're just going to assume that in these datasets this is the case. Alright, that's what they do. Now, an interesting thing here is that in their standard setup they only have one of these transformer layers, with a hidden dimension of 1024. So the transformer is not very powerful, which forces most of the power to come from the visual encoder — the visual encoder basically has to do most of the work, and the transformer is simply a very shallow language model on top of it. And that, of course, makes your visual backbone even better. Alright, we can pretty much skip the rest; that's the whole idea. They train this from scratch — no pre-trained weights anywhere — and then they use the backbone. In the first experiment, they simply train a linear classifier on top of the representation: they freeze the backbone, train a linear classifier, and compare this to baselines. One of the baselines is ImageNet-supervised, where you use the same backbone but train it on ImageNet in a supervised fashion and then transfer it — kind of like what Big Transfer does, but just with the regular 1000-class ImageNet. Then you have the self-supervised pre-training baselines: MoCo, and also PIRL — I won't go into PIRL, but MoCo is momentum contrast, one of these self-supervised methods that has been shown to work really, really well. MoCo-IN is trained on ImageNet, but now without the labels, because MoCo is unsupervised; and MoCo-COCO is trained on the COCO dataset. COCO is the image-captioning dataset that this paper, the VirTex paper, uses. What's important to note is that COCO has only about 10% as many images as ImageNet, so it's considerably smaller. Now let's see how these things fare. Right here, on the x-axis, you see the number of images that each pre-training method trains on. Of course, some of these curves are going to be capped, because for some datasets there simply aren't more images available.
So the curves for the methods trained on COCO and on ImageNet get capped at those dataset sizes. And you can already see that VirTex outperforms the ImageNet-supervised baseline by quite a bit when you only give it this many images. The brown curve is when you take one caption per image; the dataset actually has more than one caption per image, and when you use more than one, you can boost performance a bit further. That works considerably better than supervised pre-training on ImageNet with about the same number of images. When you use all of ImageNet, you can get to a similar performance, but you need a ten-times-bigger dataset to get there. So this already shows you the advantage. Also consider the difference to the self-supervised baselines: at the same number of images, they are even lower, and only as you go to more images do they get closer to the ImageNet-supervised curve. In their own papers there is some evidence that if you train self-supervised for long enough, you can actually surpass ImageNet-supervised pre-training, but I'm not so sure that's really the case. In any case, you can see the trade-off here: higher-quality information from smaller datasets versus lower-quality information from more data. And I guess if you were to pre-train these self-supervised methods with lots more data, they might end up even higher than ImageNet. Now, this graph here is sort of the same thing, where they also train a linear classifier, and you can see that now the ImageNet-supervised baseline outperforms VirTex by a lot. So what's happening here? This is on ImageNet — the task you transfer-learn to is ImageNet itself. Before, it was a neutral task, Pascal VOC, which none of these methods had trained on; they had trained on their own datasets (COCO in one case, ImageNet in the other) and then transferred to Pascal. Now the transfer task is ImageNet, so naturally the model that was pre-trained in a supervised fashion on ImageNet has a huge advantage: it has basically already learned the task beforehand, whereas VirTex pre-trained on COCO, not on ImageNet. Still, if you give VirTex the same number of pre-training images, it gets fairly close to the ImageNet baseline, which is pretty respectable. Again, of course, if you use more images from the very dataset you then evaluate on, the ImageNet baseline is going to win. But it's pretty cool to see that in this small-image regime — and consider the plot down here, an order of magnitude lower — it really shines: if you have higher-quality information and you make use of it, you don't need as many images. We knew this for a long time, but this now shows the same thing for visual transfer learning. So this was with the backbone frozen and a linear classifier trained on top.
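As a concrete picture of this evaluation protocol, a frozen-backbone linear probe could look roughly like the sketch below. This is a generic illustration, not the authors' evaluation code; the pooling choice, the optimizer settings, and the assumption that the backbone returns the 7x7x2048 feature map from the earlier sketch are all mine.

import torch
import torch.nn as nn

def linear_probe(backbone, train_loader, num_classes, epochs=10):
    """Freeze the pre-trained backbone and fit only a linear classifier on top."""
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False  # the backbone stays frozen
    head = nn.Linear(2048, num_classes)  # on pooled ResNet-50 features
    opt = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = backbone(images)        # (B, 2048, 7, 7)
                feats = feats.mean(dim=(2, 3))  # global average pool -> (B, 2048)
            loss = nn.functional.cross_entropy(head(feats), labels)
            opt.zero_grad(); loss.backward(); opt.step()
    return head

Fine-tuning, which the later experiments use, would instead leave requires_grad enabled on the backbone parameters and train everything end-to-end.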
Next, they make a short excursion and show how different parts of their model affect final performance. They find that bicaptioning — forward plus backward captioning — helps significantly compared to forward captioning alone, and that captioning significantly outperforms other pre-training tasks they could have used. They also investigate how big the model should be. Their baseline model has one transformer layer of width 1024. As you make the layer wider, that generally helps, but I guess they decided against it because the gains are too small to be worth it. And if you make the transformer deeper, with more layers, performance also goes up, but again the gains are marginal, so they leave that away too. Their baseline, as you can see, stays the ResNet-50 with one transformer layer of width 1024. Now to the last task: fine-tuning. This is what most people would actually do — train a backbone and then fine-tune it on a different dataset or task where they don't have many labels. Here the situation looks a bit different. Look at the tasks on COCO — there are several tasks on COCO, one of which is image captioning, which they use for pre-training. On the other COCO tasks, compared to the supervised baseline, VirTex performs about the same or maybe a bit worse. But what you can see is that it performs significantly better than, for example, MoCo trained only on COCO. So again, this shows that on the same dataset, higher-quality information makes it worth it. And even MoCo trained on ImageNet is just not quite as good as the supervised baseline. All of them, of course, are better than a randomly initialized network trained from scratch — that's the entire point of transfer learning, that you're better than simply learning from scratch — and this holds throughout the experiments, except on the LVIS masking task, where VirTex outperforms the other methods significantly. The lower numbers on that task also mean that the task is harder than the others, so there are more gains to be made, and you could hypothesize that higher-quality pre-training information can be exploited better there. So the complexity of a task might also influence how well transfer learning works, depending on whether you come from a high-quality or a low-quality pre-training task. Lastly, they compare on Pascal VOC object detection and iNaturalist classification, which I believe are also transfer-learning tasks with fine-tuning. As you can see, VirTex can also hold up against the supervised baseline, or even outperform it at times — the green triangles mean it outperforms by a significant margin — but on this last task it again lags behind. So I think the point of the paper isn't really to show that this is the best thing ever.
Rather, the point of the paper is to show that you can go about pre-training differently. The common assumption is that you need more and more and more data for your model to learn about the world, and they conclude: no, actually, you can make do with very few data points as long as they come with high-quality annotations. So that's the point of the paper. They don't always outperform the other baselines, but they keep the performance on par, which basically means this is a viable option. Here is a pretty cool result where they visualize the attention of their image-captioning model — they train an image-captioning model, after all — and you can really see that it learns something meaningful about the image. For "a bird flying", the attention is mainly on the bird, as you can see; then, for "over the", the attention widens out over the image; for the air, the attention is up in the sky; and as the caption gets near "the ocean", the attention settles on the ocean itself. They have a bunch of these images, and they're pretty cool. Here, "a dog" — the attention is focused on the dog — "riding on" — and you can see the attention going down, because "riding on" probably means there's something below the dog — "a surfboard" — now the attention is fully on the surfboard — "in" — and as soon as you say "in", the attention widens out again. I think that's a fairly cool demonstration that the model understands, say, the "in" relation: if it's focused on something and that something is in something else, it widens the attention out to see what it is in — the ocean — and then focuses the attention on the ocean. That's a pretty cool result. I guess we already knew this, because we could train image-captioning models before; it just shows that it actually makes sense to use them as a pre-training task for backbones. Now, what's the future of this? The authors claim in their introduction that this has a good future, because they only train on this small dataset — smaller than ImageNet, as you can see — and they already get the same performance as training on the whole ImageNet dataset in a supervised fashion. Of course, theirs is also supervised, but with ten times fewer images. And they say something to the effect of: it would be pretty easy to collect more data for this task, because the internet is full of images, and most of these images come with some text — descriptions, surrounding text, people writing something about them. You could mine Twitter, and the responses when someone posts an image might tell you something about the image. But this definitely counteracts their own notion that these are very high-quality labels. Their entire point was that the annotations in image-captioning datasets like COCO are of very high quality — the text really is a descriptive account of what a human can see visually in the image. As soon as you go out to the internet and collect whatever text surrounds images, that's no longer going to be the case; that information is again going to be quite low quality.
So I doubt that the performance here would hold up, or that the claim that you can easily create more data for this task holds up. That's my one worry about the future of this, but it's definitely cool, and it demonstrates this quality-quantity trade-off very well. Alright, that was my two cents on the paper. I invite you to read it and tell me in the comments what you think about it, and I'll see you next time.
[ { "start": 0, "end": 6.08, "text": " Hi there! Today we're looking at Vertex Learning Visual Representations from Textual Annotations" }, { "start": 6.08, "end": 13, "text": " by Karen Desai and Justin Johnson of the University of Michigan. So this paper at its core is pretty" }, { "start": 13, "end": 18.8, "text": " simple. On a high level it proposes to take the task of image captioning, which is where" }, { "start": 18.8, "end": 23.8, "text": " you're given an image and you're asked to produce a caption for the image, and basically" }, { "start": 23.8, "end": 30.96, "text": " train a model to do this, and then just take the visual part of it as a baseline to transfer" }, { "start": 30.96, "end": 38.38, "text": " learn on other visual tasks. And that appears to work surprisingly well if you don't have" }, { "start": 38.38, "end": 45.6, "text": " much data. So if you don't have much data to pre-train on, this appears to work very" }, { "start": 45.6, "end": 53.66, "text": " well. Alright, as always, if you like content like this, then consider sharing it out, subscribing" }, { "start": 53.66, "end": 61.64, "text": " to the channel, or tell me what you think in the comments. So as I already said, the" }, { "start": 61.64, "end": 68.84, "text": " idea here is pretty simple. So people have been looking for pre-training tasks for visual" }, { "start": 68.84, "end": 75.72, "text": " tasks. So a visual task is anything where the input is an image, and then you usually" }, { "start": 75.72, "end": 80.44, "text": " have some sort of neural network that processes the image, and then at the end you can have" }, { "start": 80.44, "end": 85.39999999999999, "text": " many things. So you could have a classifier that classifies the image into one of many" }, { "start": 85.39999999999999, "end": 93.96, "text": " classes. If you know ImageNet, that's a thing. So if there's a cat here, then the ImageNet" }, { "start": 93.96, "end": 100.92, "text": " classifier here would say cat. Or you could have something like an object detector that" }, { "start": 100.92, "end": 108.4, "text": " tries to predict on the image where the cat is, like with a bounding box. You could have" }, { "start": 108.4, "end": 115.32000000000001, "text": " a semantic segmentation where it's like all of these pixels here are cats, and maybe all" }, { "start": 115.32000000000001, "end": 122.92, "text": " of these pixels here are sky. And so it labels every pixel. There's many visual tasks that" }, { "start": 122.92, "end": 128.36, "text": " you can formulate, and they all sort of share the same architecture. And specifically, they" }, { "start": 128.36, "end": 135.08, "text": " all share this part right here. If you will, this is the visual encoder. It's usually a" }, { "start": 135.08, "end": 140.68, "text": " convolutional neural network. And what's really different between the tasks is mostly this" }, { "start": 140.68, "end": 146.68, "text": " last part here that does the actual task. But this is often called the backbone. So" }, { "start": 146.68, "end": 154.32000000000002, "text": " this is the backbone. And the idea now is, if I have a bunch of these tasks, sometimes" }, { "start": 154.32000000000002, "end": 158.28, "text": " I don't have many labels for these tasks. I don't have many labeled images so that I" }, { "start": 158.28, "end": 165.48, "text": " could train this big architecture from scratch, like in medical images or just in domains" }, { "start": 165.48, "end": 170.68, "text": " where you don't have many images. 
So couldn't I somehow come up with a method to create" }, { "start": 170.68, "end": 178.44, "text": " this backbone beforehand? So to create backbone given another dataset. And the simplest variant" }, { "start": 178.44, "end": 185.72, "text": " here is you take a big image dataset, such as ImageNet, and then you train a classifier," }, { "start": 185.72, "end": 190.44, "text": " like we said, to predict some classes on it. And then because an ImageNet has a lot of" }, { "start": 190.44, "end": 194.6, "text": " images, then this is your backbone. And then whenever you have a different task, you simply" }, { "start": 194.6, "end": 201.92, "text": " take the backbone, transfer it over, and then train the other. Basically, you continue training" }, { "start": 201.92, "end": 207.4, "text": " on the other task. That's called transfer learning. The question is, how do you get" }, { "start": 207.4, "end": 214.64, "text": " a good backbone? So if you train on something like ImageNet, then this is of course a supervised" }, { "start": 214.64, "end": 220, "text": " task. You have a very good learning signal, but even ImageNet has like 1 million images." }, { "start": 220, "end": 224.83999999999997, "text": " But for example, the internet has many more images. So what you could do is you could" }, { "start": 224.83999999999997, "end": 229.95999999999998, "text": " train on this much bigger dataset that you collected from the internet. Let's call it" }, { "start": 229.95999999999998, "end": 235.23999999999998, "text": " internet. But there you don't have labels, right? So what you'll have to resort to is" }, { "start": 235.23999999999998, "end": 240, "text": " instead of supervised learning is self supervised learning, where you have an image and maybe" }, { "start": 240, "end": 245.64, "text": " you rotate it to the right. So here is our cat. You rotate it to the right. And then" }, { "start": 245.64, "end": 252.6, "text": " you have a classifier that predicts that this image was rotated to the right. And then that" }, { "start": 252.6, "end": 259.72, "text": " will become your backbone. These self supervised methods, they work very well. There is a different" }, { "start": 259.72, "end": 265.6, "text": " number of them. For example, MoCo, things like this. And there's also a number of techniques" }, { "start": 265.6, "end": 271.28000000000003, "text": " that do supervised pre training and then transfer learning. You can maybe watch my video on" }, { "start": 271.28000000000003, "end": 277.84000000000003, "text": " big transfer, which is a very large attempt to do to pre train a backbone for visual" }, { "start": 277.84000000000003, "end": 286, "text": " tasks. All right. Now, you can see right here that the sort of direction is that the more" }, { "start": 286, "end": 290.88, "text": " data the better. So that's sort of the idea here that ImageNet is a big data set, we can" }, { "start": 290.88, "end": 295.56, "text": " train a really good backbone. But you know, the internet is an even bigger data set, we" }, { "start": 295.56, "end": 300.15999999999997, "text": " don't have labels. So there's a trade off. But we potentially can train an even better" }, { "start": 300.15999999999997, "end": 306.12, "text": " visual backbone to then transfer learn with. This paper goes into a different direction." 
}, { "start": 306.12, "end": 311.92, "text": " They say, look, if you go in this direction right here, you get more images, but you get" }, { "start": 311.92, "end": 318.2, "text": " less information per image. So with ImageNet, at least you have the label, right per image." }, { "start": 318.2, "end": 322.8, "text": " But if you simply take a photo of the internet, you don't even have to label you have to resort" }, { "start": 322.8, "end": 329.47999999999996, "text": " to self supervised. What if we go into the other direction, and we look for images that" }, { "start": 329.47999999999996, "end": 336.36, "text": " have very high quality annotations, but maybe we don't have as many? Can we can we do the" }, { "start": 336.36, "end": 343.88, "text": " same thing? Can we learn good backbones by trading off quality for quantity in this case," }, { "start": 343.88, "end": 352.68, "text": " and their quantity and quality trade off is they go for descriptions. So they'll go for" }, { "start": 352.68, "end": 359.68, "text": " something like this, where you'll have an image, and you'll have a caption for the image." }, { "start": 359.68, "end": 366.08, "text": " And so they show these on a line here, semantically dense, semantically sparse, but their task" }, { "start": 366.08, "end": 372.6, "text": " is going to be caption generation. So their back their mod, their task is given an image," }, { "start": 372.6, "end": 378.56, "text": " I want to produce a caption. And there are data sets that you can train this from in" }, { "start": 378.56, "end": 383.76000000000005, "text": " a supervised fashion, which of course, these are very expensive to create. I mean, if you" }, { "start": 383.76000000000005, "end": 389.3, "text": " want to create an ImageNet data set, then you have to label each image. But if you want" }, { "start": 389.3, "end": 394.58000000000004, "text": " to create a caption data set, that's even harder because human really needs to sit down," }, { "start": 394.58000000000004, "end": 400.36, "text": " look at the image. And in ImageNet, everything is like one class. But here you need to look" }, { "start": 400.36, "end": 404.40000000000003, "text": " at the image. And then you'll have to come up with like an adequate description. Here" }, { "start": 404.40000000000003, "end": 411.04, "text": " the adequate description is an orange, sorry, an orange and white, an orange and white cat" }, { "start": 411.04, "end": 417.84000000000003, "text": " near a plate, and the white cake. Okay. So that's, that's the caption right here. And" }, { "start": 417.84000000000003, "end": 423.52000000000004, "text": " of course, the caption is ambiguous. So you'll have to collect multiple captions per image." }, { "start": 423.52000000000004, "end": 427.52000000000004, "text": " And you'll have to make sure that the humans that do this do a good job and so on. So this" }, { "start": 427.52, "end": 432.96, "text": " these are very, very expensive data sets, but they are very high quality. If you think" }, { "start": 432.96, "end": 437.96, "text": " of what does what does a single label, let's just take ImageNet, ImageNet has a single" }, { "start": 437.96, "end": 444.52, "text": " label per class. Let's say this is cat or cake for that matter. It just sort of gives" }, { "start": 444.52, "end": 450.88, "text": " you very few bits of information. 
But if you consider the text here, an orange cat and" }, { "start": 450.88, "end": 457.6, "text": " a white cat, an orange and white cat, you know that there is a cat, right? You know" }, { "start": 457.6, "end": 463.52, "text": " that it's one cat, you know what its color is, orange and white, then you know that there" }, { "start": 463.52, "end": 469.71999999999997, "text": " is a white cake, right? So you know the other object. And you know the relation, they are" }, { "start": 469.71999999999997, "end": 476.68, "text": " near each other. Okay, same for here, a brown and white puppy. So this is one object and" }, { "start": 476.68, "end": 483.44, "text": " the description of the object. There is a there are apples, there is a green lawn, and" }, { "start": 483.44, "end": 488.56, "text": " the relations between them are also clear. The puppy is lying on the green lawn and looking" }, { "start": 488.56, "end": 496.4, "text": " at the apples. So the information in captions is so much more dense than just labels. And" }, { "start": 496.4, "end": 504.9, "text": " that's the that's the backdrop here to say, Hey, can't we can't we do? Can't we pre train" }, { "start": 504.9, "end": 510.67999999999995, "text": " a backbone from maybe a small data set, but that has so much information, like a caption" }, { "start": 510.67999999999995, "end": 519.36, "text": " date, image caption data set. Okay, so their method is nothing more. They train image captioning," }, { "start": 519.36, "end": 523.4399999999999, "text": " and then they use the visual backbone for transfer learning. So this is the model, there's" }, { "start": 523.4399999999999, "end": 529.3, "text": " an image, the image goes into this visual backbone right here, which is a resin at 50." }, { "start": 529.3, "end": 536.3199999999999, "text": " So this is a very, very standard convolutional neural network. And that gives you these features." }, { "start": 536.3199999999999, "end": 543.3399999999999, "text": " So these features are seven by seven by 2048. This is the standard output of a resin at" }, { "start": 543.3399999999999, "end": 550, "text": " 50. And then from this part on, they do a linear projection, such that they can now" }, { "start": 550, "end": 556.3199999999999, "text": " input it into a language model. So they have visual features. And now they feed those into" }, { "start": 556.32, "end": 564.34, "text": " the language model. And the language model is just a transformer, actually two transformers." }, { "start": 564.34, "end": 571.1600000000001, "text": " So one transformer, they're both autoregressive, one transformer tries to predict the caption" }, { "start": 571.1600000000001, "end": 576.4000000000001, "text": " in a forward way. And the other transformer tries to predict the caption in a backward" }, { "start": 576.4000000000001, "end": 582.72, "text": " way. And that's down here. So in this direction is backward because the caption has been reversed." }, { "start": 582.72, "end": 586.8000000000001, "text": " If you don't know what a transformer is, I've made several videos on transformers. The first" }, { "start": 586.8000000000001, "end": 592.96, "text": " one is attention is all you need. And that's sort of the same, the same kind of transformer" }, { "start": 592.96, "end": 600.5600000000001, "text": " they use here. So as you can see right here, you have this multi-head attention, the layer" }, { "start": 600.5600000000001, "end": 607.48, "text": " normalization attention from the decoder. 
Now the difference between the original Vasvani" }, { "start": 607.48, "end": 614.24, "text": " attention is all you need transformer. And this one is that in the original transformer," }, { "start": 614.24, "end": 619.08, "text": " you had, for example, if you had a machine translation task, you would have the French," }, { "start": 619.08, "end": 626.64, "text": " maybe a French sentence over here. And then you would have the beginnings of German sentence" }, { "start": 626.64, "end": 630.36, "text": " here, right? This is what you have already produced. And now you're asking what should" }, { "start": 630.36, "end": 637.22, "text": " the next word be. And the architecture was such that there is a decoder transformer right" }, { "start": 637.22, "end": 644.5600000000001, "text": " here and that there is an encoder transformer that encodes whatever you already had. And" }, { "start": 644.5600000000001, "end": 649.96, "text": " then at some point there is this cross attention, right? There is the signal from the decoder" }, { "start": 649.96, "end": 657.08, "text": " going into the encoder and the encoder incorporating that. And then at the end right here, the encoder" }, { "start": 657.08, "end": 662.36, "text": " would predict or the entire transformer would predict what the next word will be. The only" }, { "start": 662.36, "end": 669.52, "text": " difference right here is that the decode this, sorry, I mixed this up. This is the decoder." }, { "start": 669.52, "end": 677.2, "text": " This is the encoder. The only difference right here is that this encoder is no longer a transformer," }, { "start": 677.2, "end": 685.02, "text": " but is this ResNet, this ResNet 50. Okay, because now you don't have an image as a," }, { "start": 685.02, "end": 690.72, "text": " you can think of it like a translation task. You want to translate from images to text." }, { "start": 690.72, "end": 696.6800000000001, "text": " Okay, so your input is going to be an image and the signal is going like it would go in" }, { "start": 696.6800000000001, "end": 701.88, "text": " the original transformer into the decoder. It would come from the image. So from these" }, { "start": 701.88, "end": 711.6800000000001, "text": " visual features goes here. So in this drawing, this thing is going in here. And then you" }, { "start": 711.6800000000001, "end": 716.6600000000001, "text": " simply predict the next word and you do it in both directions. And the reason you can" }, { "start": 716.66, "end": 724, "text": " do it in both directions here, this wasn't, is not the case, of course, if you have a" }, { "start": 724, "end": 728.16, "text": " decoder like a standard transformer task, because you don't need to do inference at" }, { "start": 728.16, "end": 734.0799999999999, "text": " this, you just need to do training. And training you can do using teacher forcing. And so you" }, { "start": 734.0799999999999, "end": 740.28, "text": " can do this in a bi directional way. You don't need, you don't need this at inference time." }, { "start": 740.28, "end": 748, "text": " So at inference time, you simply cut off this part right here. That's your visual backbone." }, { "start": 748, "end": 753.72, "text": " Okay. And these features here, those are going to be the features that you then train your" }, { "start": 753.72, "end": 758.88, "text": " task on. And sometimes you fine tune this or sometimes you keep it frozen, you can choose" }, { "start": 758.88, "end": 766.06, "text": " that. 
Alright, so convolutional network to encode the images that gives you features," }, { "start": 766.06, "end": 771.4799999999999, "text": " visual features. Those visual features go into two transformers, both try to predict" }, { "start": 771.4799999999999, "end": 778.3199999999999, "text": " the caption of the image, one in a forward motion, one in a backward motion. And you" }, { "start": 778.3199999999999, "end": 784.8199999999999, "text": " train it to predict as accurately as possible the gold standard captions that you have in" }, { "start": 784.8199999999999, "end": 790.1999999999999, "text": " your data set. That's it. If you train this model well, that means the model can produce" }, { "start": 790.2, "end": 796.2, "text": " accurate captions for these images, which means that it has learned something meaningful" }, { "start": 796.2, "end": 800.9200000000001, "text": " about the image to the degree of course, that the original caption that was in your data" }, { "start": 800.9200000000001, "end": 806.7800000000001, "text": " set was a good descriptive caption. But we're just we're going to assume that the in these" }, { "start": 806.7800000000001, "end": 813.6400000000001, "text": " data sets, this is the case. Alright, that's what they do. Now, interesting thing here" }, { "start": 813.64, "end": 820.4399999999999, "text": " is that in their standard in their standard in their standard setup, they only have one" }, { "start": 820.4399999999999, "end": 825.96, "text": " of these transformer layers. So of these things right here, they only have one. And that's" }, { "start": 825.96, "end": 832.92, "text": " like I think it's like 2000 units wide, but or sorry, the hidden dimension is 2000 units" }, { "start": 832.92, "end": 838.8, "text": " or 2048. But they only have one layer. So what that means is that this transformer is" }, { "start": 838.8, "end": 846.68, "text": " not very powerful. So most that you force most of the power to come from the visual" }, { "start": 846.68, "end": 852.4, "text": " encoder, the visual encoder had basically has to do most of the work. And then the transformer" }, { "start": 852.4, "end": 861.12, "text": " is going to simply be a very shallow language model on top of that. And that of course makes" }, { "start": 861.12, "end": 867.8, "text": " your visual backbone even better. Alright, we can pretty much skip the rest. That's the" }, { "start": 867.8, "end": 871.5999999999999, "text": " idea. Like that there's nothing more to it. You train this from the beginning, you don't" }, { "start": 871.5999999999999, "end": 877.64, "text": " use any pre trained, whatever you train this from scratch. And then you use this. And then" }, { "start": 877.64, "end": 882.9599999999999, "text": " the first experiment, they simply train a linear classifier on top of that representation." }, { "start": 882.9599999999999, "end": 888, "text": " So they freeze the backbone, and then they use a linear classifier. And they compare" }, { "start": 888, "end": 893.88, "text": " this to baselines. So one of the baseline is image net supervised, where you use the" }, { "start": 893.88, "end": 900.28, "text": " same backbone, but you train it on image net in a supervised fashion. Okay, and then you" }, { "start": 900.28, "end": 904.2, "text": " use that backbone to transfer out of the text, it's kind of like what big transfer does," }, { "start": 904.2, "end": 914.24, "text": " but just on the regular 1000 class image net baseline. 
Then you have the sort of the unsupervised" }, { "start": 914.24, "end": 922.6, "text": " pre training ones. So moco, so pearly pearly some, I want to go into pearl, but moco is" }, { "start": 922.6, "end": 927.32, "text": " this momentum contrast, which is one of these supervised methods that has been shown to" }, { "start": 927.32, "end": 935.36, "text": " work really, really well. And this is also moco en is trained on image net, but now without" }, { "start": 935.36, "end": 941.6800000000001, "text": " the labels, because moco is unsupervised. And moco cocoa is trained on the cocoa data" }, { "start": 941.6800000000001, "end": 949.32, "text": " set. And the cocoa data set is what this paper here, the vertex paper uses cocoa is this image" }, { "start": 949.32, "end": 957.5600000000001, "text": " captioning data set. Now what's important to note is that cocoa has about 10% only of" }, { "start": 957.5600000000001, "end": 968.12, "text": " the images of image net. So it's considerably smaller. Now let's see how these things fair." }, { "start": 968.12, "end": 974.72, "text": " Right here, you can see on the x axis, the number of images, okay, the number of images" }, { "start": 974.72, "end": 980.4, "text": " that the data set or that the pre training method trains on. Now, of course, some of" }, { "start": 980.4, "end": 985.0400000000001, "text": " these are going to be capped because for some data sets, there are just not more images" }, { "start": 985.0400000000001, "end": 990.9200000000001, "text": " available, right? So they're going to be capped here, the ones that are training on cocoa" }, { "start": 990.9200000000001, "end": 994.5600000000001, "text": " and the ones that are training on image net are going to be capped here. And you can already" }, { "start": 994.5600000000001, "end": 1004.26, "text": " see that the vertex outperforms the image net supervised baseline by pretty much when" }, { "start": 1004.26, "end": 1010, "text": " you only give it this many images. Okay, so the way you do it is, in this case, you simply" }, { "start": 1010, "end": 1017.08, "text": " train these models. Now the brown one is when you take one caption per image, but the data" }, { "start": 1017.08, "end": 1021.6, "text": " set actually has more more than one caption per image. So when you use more than one," }, { "start": 1021.6, "end": 1029.22, "text": " you can still boost your performance a bit. And that works way better than when you do" }, { "start": 1029.22, "end": 1035.6000000000001, "text": " the supervised pre training on image net, which would get you here with about the same" }, { "start": 1035.6000000000001, "end": 1040.64, "text": " amount of images. Now, when you use all of image net, you can see here you can get to" }, { "start": 1040.64, "end": 1046.56, "text": " a similar performance right here, but you have to use a 10 times bigger data set to" }, { "start": 1046.56, "end": 1054.52, "text": " get there. Right, so this already shows you sort of the advantage here. Now also consider" }, { "start": 1054.52, "end": 1060.24, "text": " the difference to the unsupervised ones. So if you look at the same amount of images," }, { "start": 1060.24, "end": 1068.68, "text": " the unsupervised self supervised baselines are even lower. But if you go to more images," }, { "start": 1068.68, "end": 1073.8, "text": " they sort of get closer to image net. 
And in their own papers, there are there are some" }, { "start": 1073.8, "end": 1080.84, "text": " evidence that if you self supervised train for long enough, you can actually surpass" }, { "start": 1080.84, "end": 1089.08, "text": " image net supervised pre training, but I'm not so sure that that's really the case. But" }, { "start": 1089.08, "end": 1098.56, "text": " you can see here the trade off between higher quality information, but smaller data sets" }, { "start": 1098.56, "end": 1108.84, "text": " versus lower quality information, but more data per data set. And yeah, if I guess if" }, { "start": 1108.84, "end": 1114.6799999999998, "text": " you were to if you were to pre train these self supervised methods with lots more data" }, { "start": 1114.6799999999998, "end": 1122.52, "text": " in a self supervised manner, they would maybe end up even higher than image net. Now this" }, { "start": 1122.52, "end": 1126.9599999999998, "text": " graph here is sort of the same thing where they also train a linear classifier. And you" }, { "start": 1126.9599999999998, "end": 1132.24, "text": " can see right here that now the image net supervised baseline is outperforming vertex" }, { "start": 1132.24, "end": 1137.28, "text": " by a lot. So what's happening here? Now this here is actually this is on image net. So" }, { "start": 1137.28, "end": 1143.6399999999999, "text": " the task here that you transfer learn is image net. Here it was like a neutral task Pascal" }, { "start": 1143.6399999999999, "end": 1149.6399999999999, "text": " VOC. None of these methods have trained on Pascal. They simply have trained on their" }, { "start": 1149.6399999999999, "end": 1153.76, "text": " own data set. These have trained on cocoa. This has trained on image net. And then they" }, { "start": 1153.76, "end": 1159.8, "text": " have transfer learned to Pascal. Now, the task is actually the transfer learning task" }, { "start": 1159.8, "end": 1167.68, "text": " is image net. So naturally, the the thing that was pre trained in a supervised fashion" }, { "start": 1167.68, "end": 1173, "text": " on image net is going to have a huge advantage in this task, because it basically has already" }, { "start": 1173, "end": 1180.32, "text": " learned the task beforehand, whereas the vertex, it has pre trained on cocoa, not on image" }, { "start": 1180.32, "end": 1186.44, "text": " net. And you can see, if you give it the same amount of images for pre training, it can" }, { "start": 1186.44, "end": 1191.96, "text": " actually it's it's fairly close to the image net baseline. So that's pretty respectable" }, { "start": 1191.96, "end": 1196.68, "text": " right there. Now, again, of course, if you use more images on the same data set that" }, { "start": 1196.68, "end": 1201.6000000000001, "text": " you then train for, then of course, the the image that baseline is going to outperform" }, { "start": 1201.6000000000001, "end": 1210.06, "text": " it. But so pretty cool to see here that in this smaller image regime, and also consider" }, { "start": 1210.06, "end": 1215.98, "text": " this down here, if you go even an order of magnitude lower, it's really shining that" }, { "start": 1215.98, "end": 1221.98, "text": " if you have higher quality information, and you make use of it, you don't need as many" }, { "start": 1221.98, "end": 1229.2, "text": " images. And now we knew this for a long time. But this now is showing the same for transfer" }, { "start": 1229.2, "end": 1238.46, "text": " learning for visual transfer learning. 
So this was when we froze the backbone. And then" }, { "start": 1238.46, "end": 1246.88, "text": " we trained a linear classifier on top, they go, they make a short excursion here and show" }, { "start": 1246.88, "end": 1253.2, "text": " how different parts of their model affect their final performance. And they find that," }, { "start": 1253.2, "end": 1261.52, "text": " for example, the by captioning, which I believe is the is forward and backward captioning" }, { "start": 1261.52, "end": 1268.7, "text": " significantly helps, for example, compared to only forward captioning. And they also" }, { "start": 1268.7, "end": 1274.6399999999999, "text": " find that it's significantly outperforms other pre training tasks that they could do. And" }, { "start": 1274.6399999999999, "end": 1280.44, "text": " they also investigate whether how big their models should be. So here, this is their baseline" }, { "start": 1280.44, "end": 1291.94, "text": " model. Oh, I was I was wrong, actually, they the it's one layer of with 1024. You can see" }, { "start": 1291.94, "end": 1297.68, "text": " as you make the layer bigger and bigger, that generally helps. But I guess they decided" }, { "start": 1297.68, "end": 1304.8200000000002, "text": " against it because the gains are too little to, to afford to make it worth. And also if" }, { "start": 1304.82, "end": 1311.08, "text": " you make the network deeper here, you make the transformer have more layers, the performance" }, { "start": 1311.08, "end": 1315.24, "text": " goes up. But again, the gains are marginal. So I guess they're going to leave it away." }, { "start": 1315.24, "end": 1325.48, "text": " So their baseline, as you can see, is these resin at 50 with the one layer of 1024 size." }, { "start": 1325.48, "end": 1333.4199999999998, "text": " So this is now the last task, it's the fine tuning task. So this is what most people would" }, { "start": 1333.42, "end": 1338.76, "text": " do is they would train a backbone, and then they would fine tune it on a different data" }, { "start": 1338.76, "end": 1344.66, "text": " set on or on a different task where they don't have much labels. And here the situation looks" }, { "start": 1344.66, "end": 1352.1000000000001, "text": " a bit different. So if you look at, for example, a task on cocoa, so there are several tasks" }, { "start": 1352.1000000000001, "end": 1358.14, "text": " on cocoa, one of them is image captioning, which they use for peritra for pre training." }, { "start": 1358.14, "end": 1365.92, "text": " If you do other tasks on cocoa, you can see right here that compared to the supervised" }, { "start": 1365.92, "end": 1374.6000000000001, "text": " baseline, this vertex, it performs about the same or maybe a bit worse. But what you can" }, { "start": 1374.6000000000001, "end": 1382.68, "text": " see is it performs significantly better than, for example, moco that was only trained on" }, { "start": 1382.68, "end": 1388.42, "text": " cocoa. So again, this shows that if you have the same data set, higher quality information" }, { "start": 1388.42, "end": 1394.6000000000001, "text": " makes it worth it. And it's even better, as you can see, on moco that was trained on image" }, { "start": 1394.6000000000001, "end": 1400.78, "text": " net is just not quite as good as the supervised baseline. But all of them, of course, are" }, { "start": 1400.78, "end": 1405.48, "text": " better than just a randomly initialized network that is trained from scratch. 
I mean, that's" }, { "start": 1405.48, "end": 1411.3400000000001, "text": " the entire point of transfer learning, that you are better than simply learning from scratch." }, { "start": 1411.34, "end": 1419.3799999999999, "text": " And this shows throughout this experiment, except in this LVS masking task, where they" }, { "start": 1419.3799999999999, "end": 1427.22, "text": " do outperform the other the other things, the other methods significantly. Now the lower" }, { "start": 1427.22, "end": 1433.6, "text": " numbers on this tasks task also means that the task is harder than these tasks right" }, { "start": 1433.6, "end": 1438.74, "text": " here. And therefore, there are more gains to be made. And therefore, you could hypothesize" }, { "start": 1438.74, "end": 1446.16, "text": " that the bigger, the more quality information that you input can be used in a better way." }, { "start": 1446.16, "end": 1451.76, "text": " So maybe more complex, also, the more complex a task is might also have an influence on" }, { "start": 1451.76, "end": 1457.3, "text": " how well the transfer learning works, if you come from a high quality transfer learning" }, { "start": 1457.3, "end": 1462.06, "text": " task versus a low quality transfer learning tasks." }, { "start": 1462.06, "end": 1473.26, "text": " Yeah, so the lastly compare here with the again with Pascal VOC object detection, and" }, { "start": 1473.26, "end": 1479.46, "text": " these iNaturalist classification, where I believe this is also a transfer learning task" }, { "start": 1479.46, "end": 1487.6399999999999, "text": " with fine tuning. And as you can see, they can also hold up against the supervised baseline," }, { "start": 1487.64, "end": 1493.0400000000002, "text": " or even outperform it at sometimes the green triangles mean that they outperform it by" }, { "start": 1493.0400000000002, "end": 1499.3000000000002, "text": " a significant margin. But then on this task right here, they again lag behind. So I think" }, { "start": 1499.3000000000002, "end": 1507.3400000000001, "text": " the point of the paper isn't really to show that that this is the best thing ever. But" }, { "start": 1507.3400000000001, "end": 1513.7, "text": " the point of the paper is to show that you can go about pre trainings, basically, the" }, { "start": 1513.7, "end": 1518.42, "text": " the common assumption is that you need more and more and more and more data for your model" }, { "start": 1518.42, "end": 1525.66, "text": " to learn about the data set. And they conclude here, no, actually, you can do with with very" }, { "start": 1525.66, "end": 1532.76, "text": " few data points as long as they have high quality annotations. Okay, so I think that's" }, { "start": 1532.76, "end": 1537.7, "text": " the point of the of the paper that and they don't always outperform the other baselines" }, { "start": 1537.7, "end": 1544.14, "text": " and whatnot. But they keep, they keep the performance the same, which basically means" }, { "start": 1544.14, "end": 1550.38, "text": " that this is an option. Here is a pretty cool result where they visualize the attention" }, { "start": 1550.38, "end": 1555.06, "text": " of their image captioning model, because they train an image captioning model. And you can" }, { "start": 1555.06, "end": 1559.98, "text": " really see that the image captioning model learns something meaningful about the image." 
}, { "start": 1559.98, "end": 1566.74, "text": " So when it's a bird flying, the attention is mainly on the bird, as you can see, then" }, { "start": 1566.74, "end": 1573.66, "text": " over the the attention widens out over the image, air. So over the air, the attention" }, { "start": 1573.66, "end": 1580.02, "text": " is here in the sky and on the on the ocean. And then it goes near the ocean. And then" }, { "start": 1580.02, "end": 1586.54, "text": " the attention is on the ocean itself. As you can see, so they have a bunch of these images" }, { "start": 1586.54, "end": 1592.6200000000001, "text": " and they're they're pretty cool here a dog, so focused on the dog riding on and then you" }, { "start": 1592.62, "end": 1600.06, "text": " can see the attention going down because on is riding on means probably there's something" }, { "start": 1600.06, "end": 1608.3, "text": " below the dog. A surfboard. Now the attention is fully on the surfboard in. So as soon as" }, { "start": 1608.3, "end": 1614.4199999999998, "text": " you say in the attention, as you can see, it widens out. So I think that's that's fairly" }, { "start": 1614.4199999999998, "end": 1620.6599999999999, "text": " cool, fairly cool demonstration that the model understands sort of the the in relation, namely," }, { "start": 1620.66, "end": 1627.0600000000002, "text": " if it is focused on something, and that something is in something else, then it widens the attention" }, { "start": 1627.0600000000002, "end": 1634.3400000000001, "text": " out to see what it is in, okay, the ocean, and then it focuses the attention on the ocean." }, { "start": 1634.3400000000001, "end": 1638.8600000000001, "text": " So that's, that's a pretty, that's a pretty cool result. I guess we already knew this" }, { "start": 1638.8600000000001, "end": 1644.4, "text": " because we could train image captioning models before. It's just to show that it actually" }, { "start": 1644.4, "end": 1651.74, "text": " makes sense to use them as a pre training task for backbones. Now, what's the future" }, { "start": 1651.74, "end": 1657.9, "text": " of this, the authors here in their introduction, they make a claim that this has a good future" }, { "start": 1657.9, "end": 1663.74, "text": " because they here they only train on this small data set, right, it's smaller than image" }, { "start": 1663.74, "end": 1669.18, "text": " net, as you can see here, and they already get the same performance as if you train on" }, { "start": 1669.18, "end": 1674.5800000000002, "text": " the whole image net data set in a supervised fashion. Of course, they're also supervised," }, { "start": 1674.5800000000002, "end": 1681.38, "text": " but they have 10 times less images. And they they say something to the effect of you do" }, { "start": 1681.38, "end": 1686.38, "text": " you know, it would be pretty easy to collect more data for this task, because the internet" }, { "start": 1686.38, "end": 1694.78, "text": " is full of images. And mostly these images have like some text with them. They, you know," }, { "start": 1694.78, "end": 1698.18, "text": " they have these descriptions or they have text around it, people write something about" }, { "start": 1698.18, "end": 1703.18, "text": " the images, you could like mine Twitter, and then the responses when someone posts an image" }, { "start": 1703.18, "end": 1709.98, "text": " might tell you something about the image. 
But this definitely counteracts their this" }, { "start": 1709.98, "end": 1715.3400000000001, "text": " definitely counteracts their notion that these are very high quality labels, right? Their" }, { "start": 1715.3400000000001, "end": 1721.8, "text": " entire point here was that these annotations, these, these data sets with these these image" }, { "start": 1721.8, "end": 1727.18, "text": " captioning data sets like Coco, they have very, very high quality annotations. So this" }, { "start": 1727.18, "end": 1733.5800000000002, "text": " this text here is very high quality is really a descriptive text of the image that tries" }, { "start": 1733.5800000000002, "end": 1741.14, "text": " to capture what a human can see visually in the image. And as soon as you go out to the" }, { "start": 1741.14, "end": 1746.54, "text": " internet and collect a text around images, that's not going to be the case that information" }, { "start": 1746.54, "end": 1752.04, "text": " is again going to be quite low quality. And so I doubt that the performance here would" }, { "start": 1752.04, "end": 1758.98, "text": " hold up or that the claim you can easily, you know, you can easily create more data" }, { "start": 1758.98, "end": 1764.52, "text": " for this task holds up. So that's a bit my worry about the future of this, but it's definitely" }, { "start": 1764.52, "end": 1772.02, "text": " cool and definitely shows these quality quantity trade off very well. Alright, that was my" }, { "start": 1772.02, "end": 1778.34, "text": " two cents to the paper. I invite you to read it and tell me in the comments what you think" }, { "start": 1778.34, "end": 1782.34, "text": " about it and I'll see you next time." } ]
_7xpGve9QEE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The Future of AI is Self-Organizing and Self-Assembling (w/ Prof. Sebastian Risi)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "sebastian risi", "copenhagen", "minecraft ai", "self-assembly", "self assembly", "nanobots", "swarm bots", "swarm ai", "evolution ai", "evolutionary methods", "genetic algorithms", "neural cellular automata", "cellular automata", "nca", "graph neural networks", "gnns", "self organization", "ant colony ai", "swarm intelligence", "interview", "emergence", "emergent properties" ]
#ai #selforganization #emergence Read Sebastian's article here: https://sebastianrisi.com/self_assembling_ai/ OUTLINE: 0:00 - Introduction 2:25 - Start of Interview 4:00 - The intelligence of swarms 9:15 - The game of life & neural cellular automata 14:10 - What's missing from neural CAs? 17:20 - How does local computation compare to centralized computation? 25:40 - Applications beyond games and graphics 33:00 - Can we do away with goals? 35:30 - Where do these methods shine? 43:30 - The paradox of scales & brains 49:45 - Connections to graphical systems & GNNs 51:30 - Could this solve ARC? 57:45 - Where can people get started? References: https://sebastianrisi.com/ https://modl.ai/ https://sebastianrisi.com/self_assembling_ai/ https://twitter.com/risi1979/status/1519053654921293827?cxt=HHwWhsC9hYfQ4ZQqAAAA https://distill.pub/2020/growing-ca/ https://arxiv.org/abs/2201.12360?source=techstories.org https://distill.pub/2020/selforg/mnist/ https://arxiv.org/pdf/2204.11674.pdf https://github.com/fchollet/ARC https://github.com/volotat/ARC-Game http://animalaiolympics.com/AAI/ https://www.deepmind.com/publications/alchemy-a-structured-task-distribution-for-meta-reinforcement-learning-f https://melaniemitchell.me/BooksContent/CAGTReviews.html Links: Homepage: https://ykilcher.com Merch: https://ykilcher.com/merch YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hey there, today I'm talking to Sebastian Risi, who is the director of the Creative AI Lab and the co-director of the Robotics, Evolution and Art Lab at the IT University of Copenhagen. He's also the co-founder of a company called modl.ai that uses AI for various aspects of game development. Specifically, today we're going to talk about a blog post that Sebastian wrote called The Future of Artificial Intelligence is Self-Organizing and Self-Assembling. We're going to talk about systems that have no supervising instance controlling everything, but contain little elements that all need to somehow communicate locally with their neighbors to come to an agreement about the whole thing. Think of something like an anthill, just organizing in tiny parts to achieve a bigger goal. Now, we've had massive success with these big supervised models, essentially a central instance controlling everything, and that works wonders for the problems that we're currently solving. However, if you think of the most complex organism that ever existed, which is probably human society, at least as far as we know, that is not supervised, that has no central instance, except the Illuminati. But, you know, essentially human society is self-organizing and self-assembling: lots of little parts making decisions on their own, communicating locally, and what emerges is this absolutely beautiful thing. Now, as you can imagine, this is not mainstream. Self-organizing and self-assembling systems, and related things like open-ended and lifelong learning, are not the current hype topics, but I believe strongly that they will be in the future. Things like this will play a big role when we push beyond the limits that we are definitely going to hit when using supervised and centrally controlled systems. Applications of this are numerous; I already mentioned things like game development. In fact, a lot of Sebastian's experiments are in things like Minecraft and other games, just for the visuals, in their research. However, the applications are possibly unbounded and could touch every area of AI and the greater field of technology. So join me, this interview was absolutely awesome. You should follow Sebastian and follow his research and the research of his collaborators; it's very, very interesting. I like it, it's out of the box, it's new, it's creative, it pushes beyond what I know. That is it for me. We'll dive into the interview. I'll see you around. Bye bye. Hello everyone. Today I have Sebastian Risi with me, who is a professor in Copenhagen working in the general field of self-organizing and self-assembling systems, which is, I think, an entirely different world than the current paradigm that we're used to. We're used to having our deep networks, training them really top-down with a supervised signal, sometimes self-supervised, but I guess that's still kind of a top-down supervision. There's gradient descent, there are all these things where essentially an outsider, a human or some constraint, is globally enforced. And there's an entirely different world that goes much more along the lines of nature and that tries to come up with structure from the bottom up. I find this really cool and really promising, and I think it can solve problems that are really hard to tackle with these classical algorithms. And I think the field is upcoming, even though it has existed for a long time, but I believe it is definitely worth looking at.
So today we'll talk about, first and foremost, this blog post, The Future of Artificial Intelligence is Self-Organizing and Self-Assembling, but also a bunch of other things in this field. So Sebastian, welcome, and thank you so much for being here. Thanks a lot for the invitation, very happy to be here. So why aren't you working on just scaling deep learning more and more to bigger and bigger models? What's the appeal of going really small, really modular? Right, yeah, I think one reason is that there are a lot of people working in that field, so I like to work on things where there are maybe not so many people working on them. And I find this field particularly exciting. We have seen that we can scale up deep learning and it can do amazing things, but we have also seen that these systems still tend to be quite brittle. So we have reinforcement learning agents that perform beyond human capabilities in some domains, but then you add a single pixel in this kind of Atari Breakout and the system completely falls down. And there are a lot of other examples, like image recognition examples where you slightly change an image, or you rotate it slightly, and instead of detecting a fire truck it's detecting something else. You have examples of a Tesla driving into an airplane because it mistakes it for something else. So these systems are amazing at a lot of things, but they're still very, very brittle at other tasks. And that's why I'm particularly interested in this idea of collective systems and self-organization, because these systems have this inherent robustness: you can take away parts, you can add parts, and the system will not completely break down, because there's no central leader. It's a self-organizing process, a collective system. And that's what fascinates me, and that's why, more recently, we're going a lot in this direction. And it seems to be a very fruitful direction where there are a lot of interesting things to discover that we haven't really looked at yet. I think as a motivating example we can show this thing right here, which is a collection of what are called swarm robots, or here it's called a robot swarm. Could you describe what is happening right here? What are we looking at? Right, this is a great work from Radhika Nagpal's group, where basically they have these kilobots, a thousand of them, and they follow a specific algorithm. And that allows these thousand kilobots to assemble into a certain shape, like those shapes we see: a star, a K, and I think a wrench. And this system shows that these kilobots only have very limited information; they can basically only see their surroundings. But just by having this kind of local communication, these kilobots are able, over time, to assemble into different shapes. And so this was one of the seminal papers that showed that you can actually run these kinds of algorithms inspired by nature on a large scale, on a large swarm of robots. And this is basically one great example of this. What limits it is that the rules those robots follow, this specific plan, needed to be designed by humans. So it's a human-made algorithm; they follow it, and you can compile it into making different shapes.
But what we are more interested in is: can we do similar things, but instead learn these rules with recent deep learning, machine learning methods, basically combining deep learning with ideas from collective intelligence to create even more complex structures, growing more complex structures. This, I think, reminds a lot of people of something like ant colonies, also maybe not necessarily evolution, but the development of cellular organisms in general, where there's not really, well, I'm going to step on some toes here, but an intelligent designer directing every step of the process up there. Is it fair to say that these things are, as you said, inspired by nature? Is it fair to say that something like an ant colony implements one of these algorithms? Yeah, exactly. So it's inspired by what you see in swarms of animals, of insects, like ants. They're amazingly robust, and they have this collective intelligence that is bigger than the parts. They are made out of simple units, but together they do these amazing things, and termites build these amazing structures. And I think for this work it was actually termites that were the main inspiration. And the same kind of collective thing happens through morphogenesis, when we are grown basically from one cell, by division and local communication, into these amazingly complex structures. Both processes show that with very simple rules you can get amazing things, and there are many other examples. One thing these systems have in common is that you can remove parts and they still kind of work, which is very different from our current neural networks, where you change something slightly and oftentimes they will just break down. I think, yeah, you demonstrate this later by training robots and then removing limbs from them, and they can still kind of adjust to it. And I think the archetypal example of these local rules, which you also have in your blog post, is the Game of Life, which obviously, as you said, has hand-designed rules, yet still gives rise to a really complex set of phenomena; I believe it is even undecidable what a given starting configuration will do. I'm not sure about the lore behind the Game of Life. Yeah, exactly. I mean, it's basically a universal computer: you can build any kind of program that you would want with the cellular automaton; of course, it would be a super massive cellular automaton. But as you said, they show that even these simple rules give rise to things that replicate, things that move across the screen. And so people have found all kinds of amazing structures by not changing the rules, but changing the starting configuration of these cellular automata. When we think about combining this with deep learning, we quickly get to what are called neural cellular automata. You have some examples right here, and I think I have the website open somewhere. This is work that appeared in Distill.pub, which is obviously this cool interactive journal; I think this was one of the first such articles to appear out of Google. And so here, I can maybe interact with it: you can destroy parts of it, and it will kind of regrow, and all of this is happening just by local interaction.
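As an aside for readers, the hand-designed local rule discussed above, Conway's Game of Life, fits in a few lines; everything complex (gliders, replicators, even universal computation) emerges from it. This is an illustrative numpy sketch, not code from the interview:

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of Conway's Game of Life on a wrap-around grid."""
    # Count live neighbors by summing the 8 shifted copies of the grid;
    # each cell only ever "sees" its immediate neighborhood.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

# A glider: a pattern that travels across the grid under this purely local rule.
grid = np.zeros((16, 16), dtype=np.uint8)
for y, x in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[y, x] = 1
for _ in range(8):
    grid = life_step(grid)
```

Note that every cell consults only its eight neighbors; there is no global controller anywhere in the update.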
So there is no kind of global organizing system that tells these things what to do, but every single pixel in here essentially has a feature vector and communicates with its neighbors. And how they communicate, am I correct to say that the way they communicate with each other is the part that is learned through deep learning? Exactly, yeah. You can imagine you have basically a copy of the same neural network running in each cell, and that network takes into account information from the neighbors, the neighbors' states, and then it decides what the next state of that pixel should be. And you have these RGB values, that's one thing it decides on, but then it also has these additional hidden channels, where it can decide what kind of information would be good to communicate to its neighbors. And so this work was not the first that used neural networks to learn rules for cellular automata, but it really revived the field. What it did is show that you can actually make the whole system differentiable. We tried similar things before, where we used evolution to optimize neural networks, which is the field of neuroevolution. But it's quite difficult for evolution if you have a specific target in mind, like you want to grow the salamander or some other structure; it's quite hard for evolution to learn these kinds of supervised tasks. And then basically this paper showed that if you have a target, you can just use recent tools, like autodiff, differentiate the whole system, and actually efficiently learn how to grow a certain structure that is only grown through this local communication of cells. And that, I think, revived the whole field, and there are a lot more papers now using neural networks for cellular automata to grow all kinds of things: game levels, robots. How do you train such a thing? You said the full thing is differentiable and there is a target in this case, right? Is it the fact that you are in some starting state, you let it evolve for a couple of steps, and then you measure the loss and do something like backpropagation through time? Yeah, exactly. So you let it grow, and then you measure, how close is it to the final output? And that gives you the error to correct it. And then they do all kinds of tricks, like, you want the system to be robust, so that if I let it grow for 50 steps instead of 20, I still want it to look like a salamander. So they do a few tricks, like doing it stochastically and letting it grow for different amounts of time, to get the system to grow and also to kind of know when to stop growing, because that's an important part. In nature too, if morphogenesis grows an organ, it should know when to stop growing that organ and not grow forever. So that's one important ability of these systems, to learn when to stop. If you were to, let's say, criticize this particular work, what would your criticism be? What's still missing from this? Or where is it weak? Yeah, if you would critique it, you could say that it does not discover the structure itself, but that was also not the goal. It has a target, some kind of human-designed target, like the salamander that is drawn by a human.
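Since the mechanics just described are quite concrete (a copy of the same small network in every cell, hidden communication channels, stochastic updates, and backpropagation through time with randomized growth lengths), here is a rough PyTorch sketch of that recipe. Channel counts, the learning rate, and step ranges are guesses for illustration, and details of the original Distill article, such as alive-masking and the sample pool, are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CH = 16  # 4 visible channels (RGBA) + 12 hidden "communication" channels

class NCA(nn.Module):
    def __init__(self, ch: int = CH):
        super().__init__()
        # Each cell perceives itself and its neighbors through fixed
        # filters (identity + Sobel gradients), applied per channel.
        ident = torch.tensor([[0., 0, 0], [0, 1, 0], [0, 0, 0]])
        sob_x = torch.tensor([[-1., 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8
        kernels = torch.stack([ident, sob_x, sob_x.T])            # (3, 3, 3)
        self.register_buffer("filters", kernels.repeat(ch, 1, 1)[:, None])
        # The same tiny network runs in every cell (1x1 convs = per-cell MLP).
        self.update = nn.Sequential(
            nn.Conv2d(3 * ch, 128, 1), nn.ReLU(), nn.Conv2d(128, ch, 1))
        nn.init.zeros_(self.update[-1].weight)  # start as a do-nothing CA
        nn.init.zeros_(self.update[-1].bias)

    def forward(self, x):
        y = F.conv2d(x, self.filters, padding=1, groups=x.shape[1])
        dx = self.update(y)
        # Stochastic per-cell updates: cells fire asynchronously.
        mask = (torch.rand_like(x[:, :1]) < 0.5).float()
        return x + dx * mask

# Training: grow from a single seed, backprop through time toward a target.
nca, target = NCA(), torch.rand(1, 4, 32, 32)   # target RGBA image
opt = torch.optim.Adam(nca.parameters(), lr=2e-3)
for step in range(100):
    x = torch.zeros(1, CH, 32, 32)
    x[:, 3:, 16, 16] = 1.0                       # single seed cell
    n_steps = int(torch.randint(32, 64, (1,)))   # random growth length,
    for _ in range(n_steps):                     # so "when to stop" is robust
        x = nca(x)
    loss = F.mse_loss(x[:, :4], target)          # compare visible channels
    opt.zero_grad(); loss.backward(); opt.step()
```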
And so in that case, that's one limitation. So actually, in one follow-up work that will be published soon, we combined evolution and this system, where we let evolution come up with these soft robots, in that case. Evolution is good at discovering a variety of different morphologies, and then we use basically this method to make the structure very robust. So we let evolution discover the structure, and then we cut off all kinds of limbs and let it regrow. So it's combining the creativity of evolution with making things robust through this gradient-descent-based training. That is, yeah, the work on soft robots. I've seen that, it just looks really cool. So this would be one thing that is discovered, this sort of hopping tripod. And obviously, I think soft robotics in general is a rather new field, and combining it with an evolving system seems quite appropriate. So here's one with a cut-off limb, and it can learn to regrow it, right? How, in general, do you teach a self-organizing system to regrow things? Do you have to explicitly train it to regrow things, or is this just a natural consequence of how the system was trained in the first place? Yeah, so often it already has some inherent robustness, but without explicit training it will probably not be able to do this perfectly; it will sometimes work and sometimes not. So in these cases we explicitly, and also in the case of the work by Google, they explicitly remove stuff during the training process, so that you confront the system with this kind of damage that it has to recover from. So it makes the system more robust if you specifically train for it. And I guess in nature, that's probably one reason the system had to work for all these different environments. There was a lot of variation; in your ant colonies, sometimes you had more, sometimes you had less, and so, because of the way these systems were evolved, they also show this similar, superior level of robustness. At this point, are we already at the point where you would say that this surpasses, or is very advantageous compared to, classical deep learning? Or are we still in the realm where, let's say, everything would be fairly possible with classic, supervised, top-down deep learning? I think it would be possible to have it grow and recover, but I think the secret here is that it only uses local communication. You could of course have a network that you query, similar to earlier work like compositional pattern producing networks, CPPNs, where you query each location in space and ask it what the voxel should be. And of course, these systems could then, if there's damage, you could ask them again and they could recover. But the trick here is that it's only based on local communication. So if we ever want these things to work in the real world, then it's really advantageous to have things that only require local communication. And so that's one goal: ultimately, we want to take those systems from the simulation later on, and, you know, we have some initial work, and we want to really create complex things also in the physical world.
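Returning briefly to the earlier point about explicitly removing stuff during training: in code, the damage step can be as simple as zeroing a random patch of the cell grid somewhere inside the growth loop. A sketch (the patch sizes and probability are made up, not the authors' exact protocol):

```python
import torch

def damage(x: torch.Tensor, max_radius: int = 8) -> torch.Tensor:
    """Zero out a random square patch of cells, a crude stand-in for
    cutting off a limb or erasing part of the grown pattern."""
    _, _, h, w = x.shape
    r = int(torch.randint(2, max_radius, (1,)))
    cy = int(torch.randint(r, h - r, (1,)))
    cx = int(torch.randint(r, w - r, (1,)))
    x = x.clone()
    x[:, :, cy - r:cy + r, cx - r:cx + r] = 0.0
    return x

# Inside the training loop sketched earlier, one might occasionally
# damage the partially grown state and keep growing, so that the loss
# explicitly rewards regeneration:
#     if torch.rand(1) < 0.3:
#         x = damage(x)
```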
If you say in the physical world, because if I think of, there was the paper on physical cellular automata, that is at least a thing that is doable in the real world. But if I think of something like, I don't know, a Tesla car or something like this, that is in the real world, yet it is still, you know, a central controller that controls the whole car, still top-down and so on, and it's also trained in that way. What are the types of physical situations where the local communication would really come in handy? Yeah, I could imagine, let's say you have a building or something that could automatically detect if it's damaged, and then, like our skin, when it's damaged, it's regrowing, it's self-healing. So ultimately, I mean, this is like science fiction, but imagine a building that gets damaged, then automatically recognizes it's damaged, and then automatically recovers from this damage. Even more sci-fi: imagine you have a swarm of nanobots. They can only communicate locally, right, but they have to figure out their shape, they have to figure out what they can do in an environment. So in those situations, this local communication would be very advantageous. I don't know if it would necessarily be useful for this kind of Tesla car example, but I could imagine a lot of other application areas, or drones that have to coordinate somehow together while only being able to sense each other locally. So more these kinds of areas. One thing I'm quite excited about is getting this from the 2D version to a 3D version, and then you can imagine building all kinds of things, and it would automatically know: you're building a table, or you're building a chair, or you're building this and this. So this is one example of that, yeah, the self-classifying MNIST digits, where basically the system cannot only be used to grow something, but can also be used to self-infer its own shape. So you build something out of small components, or you draw a digit, and then, by having the cells communicate with each other, they figure out: oh, I'm part of an eight, or I'm part of a one. And this is what we then replicated in this physical system, where you can put them together, make digits, and then each of these cells will figure out what part of what shape it is part of. So this is a physical instantiation of the demo I have here online. This is another Distill article where, as you exactly said, these things figure out themselves what they're part of. And you made this, your paper, into a physical instantiation, which I find really cool, and now you're taking it to 3D. Yeah, that's the plan. And of course, currently these systems, like this self-classifying MNIST digits system, do not work as well as a state-of-the-art deep convolutional neural network or transformer or what have you.
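For readers wondering what "each cell figures out what digit it is part of" means as a training objective, one plausible formulation is a per-cell classification loss. The readout convention below (reading the last ten channels as per-cell class logits) is an assumption invented for illustration, not the article's exact setup:

```python
import torch
import torch.nn.functional as F

def per_cell_classification_loss(state, digit_mask, label, n_classes=10):
    """Every *cell* must decide which digit the whole shape is.

    state:      (B, CH, H, W) cellular automaton state; we assume the
                last `n_classes` channels are read as per-cell logits.
    digit_mask: (B, H, W) boolean mask of cells belonging to the digit.
    label:      (B,) ground-truth digit class (long tensor).
    """
    logits = state[:, -n_classes:]                        # (B, 10, H, W)
    per_cell = F.cross_entropy(
        logits,
        label[:, None, None].expand(-1, *digit_mask.shape[1:]),
        reduction="none",
    )                                                     # (B, H, W)
    # Only cells that are part of the drawn digit vote / get trained.
    return (per_cell * digit_mask).sum() / digit_mask.sum().clamp(min=1)

# e.g. after rolling an NCA forward over an MNIST-shaped grid:
# loss = per_cell_classification_loss(x, mask, labels)
```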
But I think ultimately, with these systems, maybe we can integrate some of their ideas into things like object detection, to make those systems more robust, by having a more distributed object detection: you have components, maybe a combination of something convolutional, and then on top you have this system with local communication, and they figure out together what shape they're looking at. And maybe that could make these systems more robust in the future, and maybe less prone to the adversarial attacks that we currently see systems exhibit. Has anyone tried, maybe this would be interesting, to take something like this and make an adversarial, I don't even know how that would look, but something that a human would clearly classify as, like, a seven, but there's a slight twist? Yeah, I'm not sure people have actually studied that so much on this, trying to see what kinds of adversarial attacks could fool these systems; I'm sure there are some. But maybe the combination of this and more classic deep image recognition techniques could make them more robust. So you've taken this idea of the 2D cellular automaton and applied it in 3D, here in Minecraft, which is morphogenesis. How would you define morphogenesis, just quickly? Yeah, I would define morphogenesis as growing a complex structure based on this kind of local communication. So how our bodies are grown is morphogenesis, how our organs are grown, how our nervous systems are grown, basically from a single starting cell. And so this is what we do here. And again, the structures are not found by the system itself; we took an existing apartment building, and then we trained the system in the same supervised way to regrow it, basically. And we were surprised that it could also grow these kinds of functional machines. We actually had it grow this temple, and then we found that the trap in this temple still worked, because it had all the components, there was not one single mistake, and that allowed these functional things to still work, like this caterpillar you see there. And you also said you can destroy part of it and it will regrow, right? Have you made this playable somewhere in Minecraft itself, or is this just purely your research? Yeah, currently, I mean, you can download the code and stuff, but it's not that we have a server where you can play with those things. But it would be very interesting. We actually organized this Minecraft open-endedness competition, about a related field: can you have an algorithm that can, like natural evolution, create all kinds of novel things without limits? And that's also where we used this Minecraft framework. But it would be real fun. One thing that I want to pursue in the future: imagine you don't have it grow caterpillars, but you have it grow cities. And then, depending on the environment that you as the human decide, like mountains or desert, it would grow a different type of city. So that's one thing we're looking at now: how can you incorporate feedback back into the algorithm? Because this caterpillar will always grow the same caterpillar.
But if I put this caterpillar in a small box, it should maybe grow a small caterpillar, and if it's a large box, it should grow a large caterpillar. So how can you incorporate this environmental feedback? That's another thing that I'm very curious about. Yeah, beyond gaming maybe, which I can definitely see applications of, do you see applications that are not in the physical world, as we talked about before, but maybe still in the realm of the digital world? Are there applications, I don't know what all you're thinking of, but distributed applications, networking applications, any sort of things that you're very excited about that maybe aren't super obvious if you just see the Minecraft example? Right. I mean, I think two things. One is that just this Minecraft work, I think, could also ultimately teach us something about biology itself. Because we don't know everything yet about how this exact morphogenesis process works in nature. I mean, we know a lot of things, but we don't know, for example, how it is so accurate. And so there are certain things that we don't know yet. And by simulating this process, in a very simplified model, maybe there are things we can learn from these very simple models. So that's one area I'm also very excited about: taking these systems as very simplified models of biology to learn something. The other application area I'm excited about is using those things, but instead of growing Minecraft structures, you can actually grow artificial neural networks. So you're basically replicating how our brains are not designed and fixed; they're grown through this developmental process. What we did with this recent work, HyperNCA, is, instead of growing a caterpillar, we grow a pattern with a neural cellular automaton, and then we convert that pattern into a policy network. And that policy network we can then use for an RL task, for example. So that's one area I'm very excited about: making these systems more performant, because currently we apply them to quite simple problems. But I think ultimately this idea of growing neural networks can be very powerful, because that's how our brains are created. So we're trying to replicate that process, hoping to create more adaptive neural networks. What do I gain out of this? So in this figure, I have these developmental steps on the left: I essentially start with some configuration of weights, then I let the cellular automaton run for a number of steps, self-organizing here, then I turn it into a network, and then I execute the network. And presumably I have to learn this somehow. In this paper, what you are doing is using, if I recall correctly, a variant of evolutionary search, right? So in whatever way I learn it, I somehow have to learn how the cellular automaton here reacts. What do I gain out of this, instead of just training my policy net? Right. So far, I would say, you don't get so much directly. So far, this method, it's not that it outperforms current deep RL methods.
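Before the "why do it anyway", here is a deliberately tiny sketch of the mechanism just recapped: a grown pattern read out as the weights of a small policy network. The shapes and the slicing scheme are invented for clarity and differ from the actual HyperNCA architecture:

```python
import torch

obs_dim, hidden, act_dim = 8, 16, 2

# Pretend this pattern came out of an NCA grown for k developmental steps
# (here it is just random, to keep the sketch self-contained):
pattern = torch.randn(hidden, obs_dim + act_dim)  # (16, 10)

# Slice the grown substrate into the policy's weight matrices.
w1 = pattern[:, :obs_dim]     # (hidden, obs_dim)
w2 = pattern[:, obs_dim:].T   # (act_dim, hidden)

def policy(obs: torch.Tensor) -> torch.Tensor:
    # A small feedforward policy whose weights *are* the grown pattern.
    h = torch.tanh(obs @ w1.T)
    return torch.tanh(h @ w2.T)

action = policy(torch.randn(obs_dim))
# Only the NCA's small parameter set (the "genotype") is evolved or
# trained; the phenotype, the policy network, can be much larger, or
# even grown further to control bigger morphologies.
```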
But ultimately, there is this hypothesis, also popularized more recently by Tony Zador, this genomic bottleneck hypothesis: we only have, you know, 20,000 genes, and they guide the growth and self-organization of our brains, with trillions of connections. So it's a much smaller genotype that encodes a much larger structure, and this kind of compression is hypothesized to be part of what allows us and animals to deal with situations we haven't seen. The robustness that animals show is in part because they have to go through this bottleneck, this compression. This is the information you give to the next generation, so there's a limit on the information you can pass on, and that might bias the system towards learning rules that generalize well. And so this is the hypothesis here: that at some point we can have a very small neural cellular automaton, which is basically like the genome, that encodes a much larger network, and that hopefully would then be more robust. That's basically what we're working on, which we haven't really shown yet; but that's the hypothesis and the hope. One other thing that's kind of funny that it can do: you can basically let the growth continue, and not just have one network grown, but multiple networks. So we applied this to this quadruped domain. We had it grow for 10 steps, to grow one brain, one network, then we put it into this quadruped. Then we have a slightly larger quadruped, so we let it grow for longer and put that in the middle quadruped, and then have a larger one. So basically one NCA can grow multiple different neural networks. And that's also one thing that I'm pretty excited about, that we want to apply to more complex domains. And again, here you had an experiment where you damaged these quadrupeds, and the system is able to adjust. Can you explain how this system is able to adjust to a damaged morphology, like a cut-off limb or something? So here it was basically trained on all these different morphologies, and then, by continuing the growth, you can get a controller that was trained for one morphology, and then you continue it and you get a controller that works for M2, and you let it grow a little longer and it has a morphology for M3. So in this case those were basically seen during training. In some other experiments, we have results where it has damage that was not seen during training; here it was basically trained to be able to deal with this particular type. So if we would damage it in another way, it probably wouldn't work anymore with these metamorphosis networks. But yeah, the hope is also that if you know how to control one quadruped, then you don't have to start from scratch; there should be some information there that allows you to also grow something that is related, and not have to start all over again. This flows, I think, into a lot of ideas from, as you said, the open-ended community, and the sort of no-explicit-goals community. I think parts of your blog post and papers mention algorithms like quality diversity, MAP-Elites, and things like this, which are obviously very exciting and very different from how we do deep learning today.
So far, we've always looked at things that have either an explicit goal, like here is the salamander I want to build, or here is the Minecraft structure I want to build, or have some sort of, I want to say, goal in a more abstract sense, like the reinforcement learning goal of maximizing the height, in this case, for these robots that stand on top of one another. Yet, how do we go away from this? Is there a natural progression in these self-organizing systems toward going away from explicit goals, which would be more difficult to pursue with the classic deep learning systems? Right, I think in general there are two things. One is the representation, and I think these neural cellular automata are a great representation for a lot of growing structures, growing neural networks. And the other thing you mentioned is the search: how do we actually get to systems that show these interesting properties? And there seems to be a recent trend, not just in self-organizing systems but also in deep RL in general, to not train on one thing, basically, but to train on a variety of different things. So there was also this more recent paper by, I think it was DeepMind, this XLand, where they showed that, basically, if you train agents in a lot of different, changing environments, they develop more robust skills. So I think what also makes these self-organizing systems quite difficult to train is that these landscapes, the fitness landscapes basically, are probably not very smooth, because changing something small in a self-organizing system can have this cascading effect. So these traditional objective-based rewards work, but it's still difficult to optimize. That's why we're looking more into these kinds of open-ended, quality diversity methods you mentioned, where we're not trying to optimize for one particular outcome, but we're trying to find things that differ in some interesting ways. And I think those methods, particularly for this kind of self-organization, are very powerful. They are better at navigating these very complex landscapes with many local optima, but they're also slightly more expensive, because they're looking at a larger part of the search space. Maybe these two questions in one: given these outlooks, in which fields that deep learning is good at right now do you expect these methods to be better, if, you know, we invest the resources and figure out the tricks of the trade enough? What parts that deep learning is good at now could these methods overtake? And then, on the other hand, what's, for you, the most exciting area that we haven't even unlocked yet with deep learning, that is accessible with this? So it's two different things, but I'm wondering what you think about both of these directions. Right. So I wouldn't say overtake deep learning; I mean, we use deep learning as a tool for basically training the system. Yeah, sorry, I mean deep learning as in the thing we do right now, right? We have an objective loss, supervised training, a single neural network. So I would assume that these systems would be applicable in a lot of different domains.
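For reference, the MAP-Elites algorithm mentioned a moment ago is small enough to sketch in full. Instead of keeping the single best solution, it keeps the best solution per behavior niche. This is a generic textbook version with a toy problem attached, not code from any of the papers discussed:

```python
import random

def map_elites(evaluate, mutate, random_genome, n_iters=10_000, n_init=100):
    """Bare-bones MAP-Elites. `evaluate` must return
    (fitness, behavior_descriptor), where the descriptor is hashable,
    e.g. a coarsely binned tuple describing how the solution behaves."""
    archive = {}  # behavior niche -> (fitness, genome)

    def try_add(genome):
        fitness, niche = evaluate(genome)
        if niche not in archive or fitness > archive[niche][0]:
            archive[niche] = (fitness, genome)

    for _ in range(n_init):            # seed with random solutions
        try_add(random_genome())
    for _ in range(n_iters):           # then: pick an elite, perturb it
        _, parent = random.choice(list(archive.values()))
        try_add(mutate(parent))
    return archive                     # a whole map of diverse elites

# Toy usage: genomes are (x, y); fitness is -distance to origin; the
# behavior descriptor is just the sign pattern (four niches).
archive = map_elites(
    evaluate=lambda g: (-(g[0] ** 2 + g[1] ** 2), (g[0] > 0, g[1] > 0)),
    mutate=lambda g: (g[0] + random.gauss(0, 0.1), g[1] + random.gauss(0, 0.1)),
    random_genome=lambda: (random.uniform(-1, 1), random.uniform(-1, 1)),
    n_iters=1000,
)
```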
I think the closest thing, what we would probably see first, is that they would make our RL agents more robust, more adaptive. And that's also already in this work that we have there, where we have, in this case, completely random weights, and we only trained the local update rules, basically Hebbian rules. And then we show that, through this system, we can actually, during the lifetime, cut off a leg. Again, we are always somehow mutilating these robots; we're not very nice to them. But basically, this is an example, I think, where we already show that this is more adaptive than the current RL designs. In current deep RL, I think the one main drawback is that we train a system, and then we freeze the neural network and let it do its task. And this seems kind of very unnatural, that you have a frozen brain. Okay, maybe you have some recurrent connections that allow you to learn something, but basically we have this training period, then we freeze everything in the system, and we apply it to domains. So that's not lifetime learning, normally, in these systems. But the idea here, in general self-organization, is that we never want to stop learning, we never want to stop adapting; we want the self-organizing process to happen the whole time. So I think in any domain where there are things that you might not have anticipated at test time, these systems could be beneficial. Be it a pixel edit, losing a leg, or wanting to do something else. I think they already show that they can be superior in those domains. And that's one thing I'm pretty excited about: applying them to more complicated domains, not just these quadruped locomotion tasks. Anything where you have something unanticipated happening, I think there can be a benefit from it. And then, what was the second question, like what new area do we currently have no chance of tackling with our tools? Yeah, that's a great question. I think this new area is this kind of rapid lifetime adaptation. These systems are great if you know what to expect, but things like working in unknown environments, I think that's a really exciting area. I mean, you have animals in nature, and you can put a dog into a new environment and it will not completely break down; it will still know kind of what to do and how to interact with the environment. And we don't have that yet for our agents. We can put them in environments they're trained for; you put them too far out, and they don't know what to do. So this working in other environments, and also having this kind of, you know, common sense, I think is maybe also an area these systems could be applied to in the future, although I don't know exactly how. Having these systems show more common sense and not directly break down, kind of giving them these innate abilities that we humans are born with, that some animals are born with, that allow them to do a little bit more common-sense things than current deep learning systems, which don't have that property, basically. And this, I think, you say it even here at some point.
This, in addition to the fact that there is this genomic bottleneck, right, you already said this: the genes encode, or only have the capacity to encode, very little information. And what we're doing here is learning essentially the rules to learn the rules, which can be compressed in a much better way than the rules themselves. And there is reason to assume that this will result in that kind of common sense: if you have to essentially learn the meta-rule, then that will make you generalize better. I mean, it's an argument; I'm not super convinced yet. But if you then do some parameter sharing, as you showed in some experiments, you can compress this even further, so that might be a way to tackle that. And also, in Tony Zador's paper, he actually points out that there are some organisms in nature that have many more genes, for example. So maybe it is a feature that we have that number of genes, that it's compressed. And so that gives us some hope that having a similar feature in our artificial systems should also be beneficial. But we've only shown that for very, very simple tasks so far. And deep learning goes in the exact opposite direction, right? The more parameters, the better; we have the double descent phenomenon, and we can go essentially infinite and it always gets better, which is weird, right? It is also giving amazing results, I think, recently, with the whole language models and so on. So it would be cool if in the near future people discovered a fundamental connection between the good results we get by scaling up and the actual principle from biology, which seems to be more like compressing and scaling down. It would be nice if those were to join together somehow. And hopefully we can be part of that to some extent. But yeah, I agree. It's really interesting that you scale up networks and your local optima disappear, everything just works better. And here we basically want to go the opposite direction. But it's not that, of course, we don't still want our final models to have trillions of connections; what we basically want is for the number of trainable parameters to be low. And I think that's the fundamental difference: we have a relatively small number of trainable parameters, but they give rise to a much more complicated system, exploiting things like self-organization and growth over time. And, yeah. This is, I think, because you said before, you're not an opponent of deep learning. In fact, deep learning is used inside of the cellular automata to learn these rules. I find it interesting, if you look in nature, that there are cells and they self-organize in some way, right, by whatever means that is learned. But these cells then make up brains, right? And brains are naturally very top-down planners; they're in the moment, they look ahead. And then the brains somehow organize into societies, and the societies again are very distributed, very local, very much interaction on a person-to-person level. What do you make of this? Do you think there is an optimal switch from local to global to local to global that we could sort of stack on top of one another? Or is this just a happenstance of the universe?
Yeah, that's a great question. And even more, the humans in the societies organize themselves into hierarchies, right, top-down control, and somehow it gets even crazier. It's a good question: do we need all of this in our artificial systems? Maybe we need all of this to get to real, more general artificial intelligence. Because one thing that is really crucial is our culture, right? I was reading this great book recently: if you just put humans somewhere by themselves, they're not very good at surviving. We are good at surviving because we have all this cultural information, all this knowledge that other people made, that we can build on, and that allows us to do all these amazing things. So maybe, to get our AIs to do really amazing things, it's not enough to have single agents in complex environments; it needs to be multiple agents that are simulated, maybe over multiple generations, so there can be some cultural knowledge transferred from some agents to other agents, similarly to how it happens for us. But of course, that also makes the simulations much more complex and expensive. When you have to simulate cultures over multiple generations, then we need some better compute, especially at the university level. Yeah, that's one advantage that nature has: it has lots of distributed compute available. That said, there is an interesting part in your blog post where you describe how to train these things, or how to steer the development of these swarm systems or distributed systems. One quote you have is: guiding a swarm system can only be done as a shepherd would drive a herd, by applying force at crucial leverage points, by subverting the natural tendencies of the system. And another one is: the self-assembling brain knows no shortcuts, by which, I believe, your argument was a little bit that it is very hard to predict what a change does until you observe it, because the interactions can be nonlinear, very dynamic, very hard to predict. In essence, that was the argument that Hiesinger made in his great book, The Self-Assembling Brain: basically, the system needs this process of growth, and you have to put energy into it to observe the outcome; you cannot predict it. And that's also what Wolfram showed with simple one-dimensional cellular automata: you cannot predict the state of the system, you have to actually run the system, even if it's a simple one-dimensional cellular automaton. And the question is, do we also need to do that for growing our neural networks, instead of designing them? Maybe we need to go through this kind of process of growth, with learned rules, to really unlock what these systems can do. There is recent work in using, for example, GANs or so to predict things like fluid dynamics, and they can't do it, like, extremely accurately, but they can give a pretty good estimate of a highly dynamic nonlinear system: given a starting state, they can predict some steps into the future. I've seen the same for galaxy development and so on.
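A quick aside on Wolfram's point above: a one-dimensional elementary cellular automaton such as Rule 110 is a three-line program, yet it is Turing-complete, so in general there is no shortcut to knowing its state at step t other than actually running t steps. An illustrative sketch:

```python
import numpy as np

def eca_step(row: np.ndarray, rule: int = 110) -> np.ndarray:
    """One step of a 1D elementary cellular automaton on a wrap-around row."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = 4 * left + 2 * row + right      # encode each 3-cell neighborhood as 0..7
    table = (rule >> np.arange(8)) & 1    # the rule number *is* the lookup table
    return table[idx]

row = np.zeros(64, dtype=np.int64)
row[-2] = 1                               # a single live cell as the seed
for t in range(32):
    row = eca_step(row)                   # no way around actually iterating
```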
Is there anything happening like this, where you can say, well, I don't have enough compute to run all these swarms, but I can sort of train a surrogate model that will give me the end state in a one-step fashion, and then the forces that I poke the swarm with, I could determine those using the surrogate model? Yeah, I think that would be really interesting. I think it could work for some limited number of steps into the future, but at some point you would still need to actually run the model. Maybe in the first generations you could have a surrogate model that somehow helps you sort out the things that are really bad, like: this will not grow into anything. So I think you could use it there; later, I guess, you would probably have to run the system, when things get more complex. But I think there's also another role for surrogate models, which is something I always wanted to try: to predict the learning abilities of the system. So you have an agent in an environment; maybe you don't need to simulate the whole lifetime, right, but you can have some kind of test that would assess how capable this agent is. So having some kind of surrogate that could look at certain parts of, I don't know, the neural network, and already predict: will this be a good learner or not, basically. In one part, this also reminds me, I got into machine learning when graphical models were the hot thing; it was just before deep learning. And all these self-organizing systems with local communication remind me a lot of belief propagation, things like this. And graph neural networks, obviously, are right now up and coming, let's say. Do you see connections between all of those things? Or is that just a superficial connection? Yeah, I definitely see a big connection to these graph neural networks. I mean, they're very close to a more generalized form of cellular automata, where you have different neighborhoods, depending on the topology of the graph. And I think they're super interesting. Actually, how I got into neural networks: the first lecture I had as an undergrad was on neural networks and about these self-organizing maps, these Kohonen self-organizing maps, which can do clustering, somehow kind of like k-means, but they can do it better, and you get these nice visualizations out of them. And apparently there's also something like this in our brain; I mean, we have these topographic maps in our brains as well. I was always fascinated somehow by these self-organizing maps. And even though I did a lot of other things during my PhD, somehow now I'm coming back to this kind of self-organization, and, using these recent learning tools, I think we can really unlock the power behind them. Do you know the ARC challenge, the Abstraction and Reasoning Corpus by François Chollet? Yeah, yeah, yeah. I'm not sure if they have an example right here. So for everyone who doesn't know this: this is a task where the ones on the left are demonstration examples; there's always an input grid and an output grid.
And then you get a test example where you only get the input. So here, the rule is, I've looked at that before, so the rule is kind of: there is the gray in the middle, and you fold the right-hand side onto the left-hand side, and then the solution is kind of the sum of the two. And these are things that humans are surprisingly good at, but that are very difficult for a machine to learn. This is a data set where there are not many training examples, so there is not really a way to learn this through brute-force training. There is a little game that people can play; I think I've reported on this before, but for anyone who's interested, this is the ARC game, you can find it on the GitHub page of Alexey Borsky. And you can just choose one here; they're divided into different levels, and you can try them for yourself. So this even looks familiar, like cellular automata. Do you think that self-organizing systems, in one way or another, in the way we've looked at them today, or in the way you've seen them, could be useful in solving challenges like these? Because challenges like these are related very much to, let's say, something that we would call intelligence. Yeah, I think the hope would be that if we can get these kinds of bottleneck algorithms to work; I'm not sure we could apply self-organization directly, but what I could imagine is that we develop these kinds of genomic bottleneck algorithms that can guide the self-organizing growth of a very complex neural network, and that network could then maybe be used for these kinds of tasks. And the hope would be that, because it has this compression, it would maybe develop an algorithm that would allow it to solve these kinds of tasks that require more high-level cognitive skills. But of course, we're still a little far away from that, I think. And I don't know what the current state of the art on this task is. I think it's still largely unsolved, so this could be a great test domain, I think. But yeah, I'm not sure I have high hopes that it would already work; I think we're still probably missing some other ingredients that we don't have yet to make progress there. By the way, I think I just clicked on one randomly, but I think here the rule, as people can maybe see, is that you always select the smallest of the shapes that is there and replicate it. At least that's my hypothesis, right? Yeah, maybe. Oh, I think maybe you take the one that fits in the box. Oh yeah, right. But it's like, you need to understand what shapes are and so on. So this is very high-level, it has a bottlenecky feel to it. You're probably not going to get very far with a CNN trained on these pixels directly. So I can see something like this very much being in the domain of, first, open-endedness, but then also self-organizing things, made up of simple rules making up something very complicated.
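For the curious, the fold-and-overlay rule guessed at above can be written as a tiny program. The grid values and the separator convention here are hypothetical, and of course the hard part of ARC, which machines struggle with, is inferring such a program from two or three examples, not executing it:

```python
import numpy as np

def fold_and_overlay(grid: np.ndarray, fill: int = 1) -> np.ndarray:
    """Fold the right half onto the left half around a separator column
    and overlay the two: a cell is set if it is set on either side."""
    sep = grid.shape[1] // 2                  # assume the separator mid-column
    left, right = grid[:, :sep], grid[:, sep + 1:]
    return np.where((left != 0) | (right[:, ::-1] != 0), fill, 0)

# Toy grid: 9 marks the gray separator column, 1 marks filled cells.
example = np.array([[1, 0, 9, 0, 0],
                    [0, 0, 9, 1, 0],
                    [1, 1, 9, 0, 1]])
print(fold_and_overlay(example))
```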
There are two other domains that I think are also very exciting. One is this Animal-AI benchmark, where basically it's like an Animal-AI Olympics: you apply AIs to tasks that animals normally are good at, like, for example, trying to figure out which object is the tool, and then using that tool to get a reward. And there, current methods pretty much fail on the more complicated tasks. They also had experiments where they had children perform these tasks, and the children are still much better than any of our deep RL methods. So on the simple tasks, deep RL performs pretty well; once it gets to more complicated things, these systems basically fail. So this is one task that, in a recent grant proposal, I proposed would be a good test domain for these methods, because the whole point is to act in an environment that you haven't seen during training. Even though the environment is made out of the same building blocks, like there are rewards, there are barriers, how they are composed is all new, never seen before. And the other one is, also by, I think it was DeepMind, this Alchemy task, where you have to learn about the structure of the domain, what things you can put together, and then you have to use that knowledge, building on it. And this is also a very difficult task for all of our current methods. So I think this could also be a very good task, basically, as the North Star to drive progress in this kind of area. And the hope is that these kinds of self-organizing systems would hopefully be better at this. Where can people, if someone wants to get started in diving into the world of self-organizing systems, swarm intelligence, maybe a bit of open-endedness, is there a good place for people to get their feet wet? Yeah, I would say I was recently rereading this great book from Melanie Mitchell, Complexity. I think this is a great starting book on these ideas of complex systems and self-organization; there's something about cellular automata in there. So I think this is a good point to get a broader overview of that whole field of complex systems and self-organization. And hopefully the blog post can also be helpful to some people; I plan to write more on that as well. But I would suggest this is definitely a good place to start. And is there some, you know, in deep learning it's usually Keras, I train a CNN on MNIST or CIFAR-10; is there some standard thing that every one of your students goes through? I mean, now I send a lot of them to this great Distill article, basically, the growing NCAs, because they also have this great Colab notebook where you can play with the system. So I think this is a great starting point, where you have both cellular automata and how recent tools can be used to grow them. So I think this is a good place to play around with. Okay. Yeah, I've spent more time than I had on these things, because it's great that it's also so interactive and fun to play with. Yes, definitely.
Yeah. Is there anything else that you would like to get out there to people about this field? Yeah, I just hope that not everybody keeps running in the same direction, doing what everybody else is doing. So hopefully this will also get a few more people into this field of complex systems and self-organizing systems and combining those ideas with deep learning. Because I think there are a lot of interesting things to discover here, and a bit fewer people working on it than, say, on foundation models and language models and all those other things. Yeah, it's certainly an interesting area. And I guess, especially if you're at a university without the super-duper clusters, a PhD in this field would strategically be a more advantageous position for newcomers. Actually, Hinton had this great quote recently on another podcast: it's always a good idea to figure out what huge numbers of very smart people are working on, and to work on something else. Because you don't want to do what everybody else is doing. And I would suggest this is a great field where a lot of interesting discoveries are basically waiting to happen. I agree. All right. So Sebastian, thank you very much for being here today. This was very cool. I hope to see a sprawling future for your field. Thanks a lot for the invite. Thanks.
[ { "start": 0, "end": 3.9, "text": " Hey there, today I'm talking to Sebastian Riese, who is the director of the creative" }, { "start": 3.9, "end": 9.040000000000001, "text": " AI lab and the co director of the robotics, evolution and art lab at the IT University" }, { "start": 9.040000000000001, "end": 13.68, "text": " of Copenhagen. He's also the co founder of a company called model AI that uses AI for" }, { "start": 13.68, "end": 18.78, "text": " various aspects of game development. Specifically today, we're going to talk about a blog post" }, { "start": 18.78, "end": 23.080000000000002, "text": " that Sebastian wrote that's called the future of artificial intelligence is self organizing" }, { "start": 23.080000000000002, "end": 28.28, "text": " and self assembling, we're going to talk about systems that have no supervised instance controlling" }, { "start": 28.28, "end": 33.36, "text": " everything but contain little elements that all need to somehow communicate locally with" }, { "start": 33.36, "end": 37.400000000000006, "text": " their neighbors to come to an agreement about the whole thing. Think of something like an" }, { "start": 37.400000000000006, "end": 44, "text": " anthill just organizing in tiny parts to achieve a bigger goal. Now we've had massive success" }, { "start": 44, "end": 48.84, "text": " with these big supervised model, essentially a central instance controlling everything" }, { "start": 48.84, "end": 54.260000000000005, "text": " and that works wonders for the problems that we're currently solving. However, if you think" }, { "start": 54.26, "end": 59.559999999999995, "text": " of things like the most complex organisms that ever existed, which is probably human" }, { "start": 59.559999999999995, "end": 66.1, "text": " society, at least as far as we know, that is not supervised that has no central instance" }, { "start": 66.1, "end": 71.8, "text": " except the Illuminati. But you know, so essentially human society is self organizing and self" }, { "start": 71.8, "end": 77.36, "text": " assembling lots of little parts, making decisions on their own communicating locally. And what" }, { "start": 77.36, "end": 84.36, "text": " emerges is this absolutely beautiful thing. Now, as you can imagine, this is not mainstream" }, { "start": 84.36, "end": 89.56, "text": " self organizing and self assembling systems and related things like open ended and lifelong" }, { "start": 89.56, "end": 95.28, "text": " learning. These are not the current hype topics, but I believe strongly that they will be in" }, { "start": 95.28, "end": 100.76, "text": " the future. Things like this will play a big role when we push beyond the limits that we" }, { "start": 100.76, "end": 106.16, "text": " are definitely going to hit when using supervised and centrally controlled systems. Applications" }, { "start": 106.16, "end": 110.74, "text": " of this are numerous, I already mentioned things like game development. In fact, a lot" }, { "start": 110.74, "end": 116.32, "text": " of Sebastian's experiments are in things like Minecraft and other games just for visual," }, { "start": 116.32, "end": 122.12, "text": " you know, in their research. However, the applications are possibly unbounded and could" }, { "start": 122.12, "end": 127.84, "text": " touch every area of AI and the greater field of technology. So join me this interview was" }, { "start": 127.84, "end": 132.56, "text": " absolutely awesome. 
You should follow Sebastian and follow his research and the research of" }, { "start": 132.56, "end": 137.12, "text": " his collaborators very, very interesting. I like it. It's out of the box. It's new," }, { "start": 137.12, "end": 142.16, "text": " it's creative, it pushes beyond what I know. That is it for me. We'll dive into the interview." }, { "start": 142.16, "end": 147.92000000000002, "text": " I'll see you around. Bye bye. Hello, everyone. Today, I have Sebastian Rizzi with me, who" }, { "start": 147.92000000000002, "end": 154.6, "text": " is a professor at in Copenhagen working in the general field of self organizing and self" }, { "start": 154.6, "end": 161.04, "text": " assembling systems, which is, I think an entire different world than the current paradigm" }, { "start": 161.04, "end": 166.23999999999998, "text": " that we're used to. We're used to having our deep networks, training them really top down" }, { "start": 166.23999999999998, "end": 171.76, "text": " with supervised signal, sometimes self supervised. But I guess that's still kind of like a top" }, { "start": 171.76, "end": 177.2, "text": " down supervision. There's gradient descent, there's all these things where essentially" }, { "start": 177.2, "end": 185.7, "text": " an outsider outside us human or or some some constraint is globally enforced. And there's" }, { "start": 185.7, "end": 193.04, "text": " an entirely different world that goes much more along the lines of nature. And that tries" }, { "start": 193.04, "end": 199.2, "text": " to come up with structure from from the bottom up and that I find this really cool and is" }, { "start": 199.2, "end": 205.64, "text": " really promising. And I think it's sort of can solve problems that are really hard to" }, { "start": 205.64, "end": 211.95999999999998, "text": " tackle with these classical algorithms. And I think the field is upcoming, even though" }, { "start": 211.96, "end": 218.24, "text": " it has existed for a long time. But I believe that is definitely worth to look at. So today," }, { "start": 218.24, "end": 223.64000000000001, "text": " we'll talk about a first and foremost, this blog post, the future of artificial intelligence" }, { "start": 223.64000000000001, "end": 229, "text": " is self organizing and self assembling, but also a bunch of other things in this field." }, { "start": 229, "end": 234.48000000000002, "text": " So Sebastian, welcome. And thank you so much for being here. Thanks a lot for the invitation." }, { "start": 234.48, "end": 243.44, "text": " Very happy to be here. So why aren't you working on just scaling deep learning more and more" }, { "start": 243.44, "end": 248.44, "text": " to bigger and bigger models? What's the appeal of going like really small, really, really" }, { "start": 248.44, "end": 254.48, "text": " modular? Right? Yeah, I think there I mean, one reason is there a lot of people working" }, { "start": 254.48, "end": 258.76, "text": " on or in this field. So I like to work on things where they're, you know, there's there's" }, { "start": 258.76, "end": 263.52, "text": " maybe not so many people working on it. And I find this field particularly exciting. And" }, { "start": 263.52, "end": 269.56, "text": " we have seen that we can scale up deep learning and it can do like amazing things. But we" }, { "start": 269.56, "end": 275.08, "text": " have also seen that the systems still tend to be quite brittle. 
So we have reinforcement" }, { "start": 275.08, "end": 281.79999999999995, "text": " learning agents that that perform beyond human capabilities in some domains. But then you" }, { "start": 281.79999999999995, "end": 287.96, "text": " add a single pixel in this kind of the sock in this Atari breakout, and the system completely" }, { "start": 287.96, "end": 292.76, "text": " fell down. And there are a lot of other examples like image recognition examples where you" }, { "start": 292.76, "end": 297.12, "text": " slightly change an image or you rotate slightly and instead of detecting a fire bus, it's" }, { "start": 297.12, "end": 302.15999999999997, "text": " detecting something else. You have examples of Tesla driving into like an airplane because" }, { "start": 302.15999999999997, "end": 305.64, "text": " it mistakes it for something else. So these systems are amazing at a lot of things, but" }, { "start": 305.64, "end": 310.8, "text": " they're still very, very brittle in other tasks. And so that's why I'm particularly" }, { "start": 310.8, "end": 316.08, "text": " interested in this kind of idea of collective systems and self organization, because these" }, { "start": 316.08, "end": 322.32, "text": " systems have this inherent kind of robustness, you can take away parts, you can add parts." }, { "start": 322.32, "end": 326, "text": " And the system will not completely break down because there's no central leader. It's like" }, { "start": 326, "end": 332.68, "text": " a self organizing process, a collective system. And that's what kind of fascinates me. And" }, { "start": 332.68, "end": 337.28, "text": " that's why I'm more recently we're going a lot in this direction. And it seems to be" }, { "start": 337.28, "end": 342.12, "text": " very fruitful direction where there's a lot of interesting things to discover that we" }, { "start": 342.12, "end": 343.92, "text": " haven't really looked at it." }, { "start": 343.92, "end": 350.28, "text": " I think as a motivating example, we can show this thing right here, which is a collection" }, { "start": 350.28, "end": 356, "text": " of what are called swarm robots, or here it's called a robot swarm. Could you describe what" }, { "start": 356, "end": 358.64, "text": " is happening right here? What are we looking at?" }, { "start": 358.64, "end": 365.55999999999995, "text": " Right. This is a great work from Radhika Nagpal's group, where basically they have these kilobots," }, { "start": 365.55999999999995, "end": 372.47999999999996, "text": " a thousand of them. And they follow a specific algorithm. And that allows these thousands" }, { "start": 372.47999999999996, "end": 378.03999999999996, "text": " of kilobots to assemble into a certain shape, like those shapes we see are like a star," }, { "start": 378.04, "end": 385.64000000000004, "text": " a K, and I think this wrench. And this system shows basically they only have very limited" }, { "start": 385.64000000000004, "end": 390.72, "text": " information. These kilobots, they can only basically see their surroundings. But just" }, { "start": 390.72, "end": 396.12, "text": " by having this kind of local communication, these kilobots are able to, over time, to" }, { "start": 396.12, "end": 401.08000000000004, "text": " assemble into different shapes. 
And so this was one of the seminal papers that showed" }, { "start": 401.08000000000004, "end": 406.56, "text": " that you can run actually these kind of algorithms inspired by nature on a large scale, on a large" }, { "start": 406.56, "end": 416.36, "text": " swarm of robots. And this is basically like one great example of this. What it kind of" }, { "start": 416.36, "end": 421.52, "text": " what limited it is that those rules that those robots follow, like they have a specific plan," }, { "start": 421.52, "end": 426.56, "text": " they needed to be designed by humans. So it's a human-made algorithm. They follow it and" }, { "start": 426.56, "end": 431.88, "text": " they can, you can compile it into making different shapes. But what we are more interested in" }, { "start": 431.88, "end": 437.8, "text": " is can we do similar things, but can we instead learn these rules with recent deep learning," }, { "start": 437.8, "end": 441.92, "text": " machine learning methods, basically combining this deep learning with ideas from collective" }, { "start": 441.92, "end": 448.92, "text": " intelligence to create even more complex structures, growing more complex structures." }, { "start": 448.92, "end": 457.8, "text": " This I think reminds a lot of people probably of something like ant colonies. Also, maybe" }, { "start": 457.8, "end": 464.96000000000004, "text": " not necessarily evolution, but the development of just cellular organisms in general, where" }, { "start": 464.96000000000004, "end": 470.04, "text": " there's not really, well, I'm going to step on some toes here, but an intelligent designer," }, { "start": 470.04, "end": 475.32, "text": " you know, directing every step of the process up there. Is it fair to say that that these" }, { "start": 475.32, "end": 480.24, "text": " things you said inspired by nature? Is it fair to say that something like an ant colony" }, { "start": 480.24, "end": 482.64, "text": " implements one of these algorithms?" }, { "start": 482.64, "end": 491, "text": " Yeah, exactly. So it's inspired by what you see in swarms of animals, of insects doing" }, { "start": 491, "end": 495.88, "text": " like ants. They're like amazingly robust and they have this kind of collective intelligence" }, { "start": 495.88, "end": 501.08, "text": " that is bigger. They are made out of simple units, but together they do these amazing" }, { "start": 501.08, "end": 505.28, "text": " kind of things and termites. They build these amazing structures. And so I think for this" }, { "start": 505.28, "end": 510.12, "text": " work is actually, I think it was termites that was the main inspiration for this. And" }, { "start": 510.12, "end": 516.36, "text": " then you also have the same thing in the same kind of collective thing happens when through" }, { "start": 516.36, "end": 523.76, "text": " morphogenesis, like when we are grown basically from one cell by division and local communication," }, { "start": 523.76, "end": 529.84, "text": " it's growing these like amazingly complex structures. And both processes show that by" }, { "start": 529.84, "end": 536.92, "text": " very simple rules, you can get amazing things. And there are many other examples. 
And one" }, { "start": 536.92, "end": 540.4399999999999, "text": " thing that these systems have in common is that you can remove parts and it still kind" }, { "start": 540.4399999999999, "end": 544.88, "text": " of works, which is very different to our current like neural networks where you change something" }, { "start": 544.88, "end": 547.68, "text": " slightly and oftentimes they will just break down." }, { "start": 547.68, "end": 553.7199999999999, "text": " I think yeah, you demonstrate this later by training robots and then removing limbs of" }, { "start": 553.7199999999999, "end": 559.56, "text": " them and they can still kind of adjust to it. And I think the arch example of these" }, { "start": 559.56, "end": 564.92, "text": " local rules you have also in your blog post, which is this game of life, which is obviously," }, { "start": 564.92, "end": 570.1999999999999, "text": " as you said, these are hand designed rules still give rise to like a really complex set" }, { "start": 570.1999999999999, "end": 577.4, "text": " of phenomenon, which is, I believe even like undecidable to really decide from a starting" }, { "start": 577.4, "end": 582.0799999999999, "text": " point. I'm not sure about the lore behind game of life." }, { "start": 582.0799999999999, "end": 587.36, "text": " Yeah, exactly. I mean, they're basically you can build any I mean, with this, it's a universal" }, { "start": 587.36, "end": 593, "text": " computer, basically, you can build any kind of program if you that you would want with" }, { "start": 593, "end": 596.92, "text": " the cellular automata, of course, it would be like a super massive cellular automata." }, { "start": 596.92, "end": 600.48, "text": " But they as you said, they show that even these kind of simple rules, they give rise" }, { "start": 600.48, "end": 605.44, "text": " to things that replicate things that move across the screen. And so people have found" }, { "start": 605.44, "end": 610.64, "text": " like all kinds of amazing structures by basically not changing the rules, but changing the starting" }, { "start": 610.64, "end": 616.06, "text": " configuration of these kind of cellular automata." }, { "start": 616.06, "end": 621.44, "text": " When we think about combining this with deep learning, we quickly get to these neural what" }, { "start": 621.44, "end": 626.9200000000001, "text": " are called neural cellular automata. You have some examples right here. And I think I have" }, { "start": 626.9200000000001, "end": 634.0400000000001, "text": " the website open somewhere. This is work that appeared in distilled pub, which is obviously" }, { "start": 634.0400000000001, "end": 639.34, "text": " cool interactive journals. So this I think this was one of the first even articles to" }, { "start": 639.34, "end": 645.6800000000001, "text": " appear out of Google. And so this here, I can maybe interact with it, you can destroy" }, { "start": 645.6800000000001, "end": 651.36, "text": " parts of it, and it will kind of regrow it. And all of this is happening just by local" }, { "start": 651.36, "end": 657.34, "text": " interaction. So there is no, there's no kind of global organizing system that tells these" }, { "start": 657.34, "end": 662.0600000000001, "text": " things what to do. But every single pixel in here essentially has a feature vector and" }, { "start": 662.0600000000001, "end": 669.36, "text": " communicates with the neighbors. 
And how they communicate is am I correct to say that the" }, { "start": 669.36, "end": 675.88, "text": " way they communicate with each other, that is the part that is learned through deep learning." }, { "start": 675.88, "end": 680.62, "text": " Exactly. Yeah, you can imagine like you have basically a copy of the same neural network" }, { "start": 680.62, "end": 685.76, "text": " like running in each cell. And that and that network takes into account like information" }, { "start": 685.76, "end": 690.6, "text": " from the neighbors, the neighbor state, and then it decides what should what should the" }, { "start": 690.6, "end": 695.52, "text": " next state of that pixel basically be. And you have these like RGB values, that's one" }, { "start": 695.52, "end": 699.76, "text": " thing it decides on. But then it also has these additional channels, like hidden channels" }, { "start": 699.76, "end": 704.2, "text": " that it can basically, it can decide what kind of information would be good to communicate" }, { "start": 704.2, "end": 711.76, "text": " to my neighbors. And so this work was not like the first that used neural networks to" }, { "start": 711.76, "end": 716.72, "text": " learn rules for cell automata, but it really kind of revived the field. And what it did" }, { "start": 716.72, "end": 721.0400000000001, "text": " is that it showed that you can actually you can make the whole system differentiable." }, { "start": 721.0400000000001, "end": 727.2800000000001, "text": " So we tried similar things before where we used evolution to optimize neural networks," }, { "start": 727.2800000000001, "end": 732.5200000000001, "text": " which is this field neuroevolution. But it's quite difficult for evolution if you have" }, { "start": 732.52, "end": 736.28, "text": " a specific target in mind, like you want to grow the salamander or you want to grow a" }, { "start": 736.28, "end": 740.24, "text": " certain other structure. It's quite hard for evolution to learn these kind of supervised" }, { "start": 740.24, "end": 744.64, "text": " tasks. And then basically this paper showed then if you have a target, you can just use" }, { "start": 744.64, "end": 749.9, "text": " recent tools like do auto diff, differentiate the whole system. And you can actually efficiently" }, { "start": 749.9, "end": 755.84, "text": " learn how to grow a certain structure that is only grown through these local communication" }, { "start": 755.84, "end": 760.04, "text": " of cells. And that was one of the that I think revived like the whole field. And there's" }, { "start": 760.04, "end": 765.9599999999999, "text": " a lot more papers now using neural networks for cell automata to grow all kinds of things," }, { "start": 765.9599999999999, "end": 770.48, "text": " game levels, robots." }, { "start": 770.48, "end": 775.36, "text": " How do you train such a thing? You said the full thing is differentiable and there is" }, { "start": 775.36, "end": 783.76, "text": " a target in this case, right? Is it the fact that you are in some starting state? Do you" }, { "start": 783.76, "end": 788.3199999999999, "text": " let it evolve for a couple of steps and then kind of measure the loss and then do something" }, { "start": 788.32, "end": 790.6400000000001, "text": " like back propagation through time?" }, { "start": 790.6400000000001, "end": 796.36, "text": " Yeah, exactly. Yeah. So you let it grow and then you measure like is it how close is it" }, { "start": 796.36, "end": 800.84, "text": " to the final output? 
And then it gives you the error to correct it. And then they do" }, { "start": 800.84, "end": 806.8000000000001, "text": " all kinds of tricks like that you want the system to be of course robust that if I let" }, { "start": 806.8000000000001, "end": 814.7600000000001, "text": " it grow for 50 steps, instead of like 20, I still want it to look like a salamander." }, { "start": 814.76, "end": 820.68, "text": " So they do some kind of a few tricks that like doing it stochastically and letting grow" }, { "start": 820.68, "end": 827.2, "text": " for different amounts of time to get the system to be that it grows and it also kind of knows" }, { "start": 827.2, "end": 837.12, "text": " when to stop growing because that's an important part. Also nature like if through morphogenesis" }, { "start": 837.12, "end": 842.24, "text": " it grows an organ, it should know when to stop growing that organ and then like not" }, { "start": 842.24, "end": 846.48, "text": " grow forever. So that's one important ability of the systems is to learn kind of when to" }, { "start": 846.48, "end": 849.84, "text": " stop." }, { "start": 849.84, "end": 860.5600000000001, "text": " If you were to let's say criticize this particular work, what would your criticism be? What's" }, { "start": 860.5600000000001, "end": 863.8, "text": " still missing from this? Or where is it weak?" }, { "start": 863.8, "end": 869.04, "text": " Yeah, so this what this showed is that it's basically it doesn't if you would critique" }, { "start": 869.04, "end": 873.56, "text": " it that you would you could say that it does not but that was also not the goal. It doesn't" }, { "start": 873.56, "end": 880.0799999999999, "text": " discover the structure itself. It has a target. So it has some kind of human design target" }, { "start": 880.0799999999999, "end": 887.88, "text": " like the salamander that is drawn by a human. And so in that case, that's one limitation." }, { "start": 887.88, "end": 895.0799999999999, "text": " So actually one follow up work that we will be published soon, we actually combined evolution" }, { "start": 895.08, "end": 901.5200000000001, "text": " and this system where evolution we let evolution come up with like these soft robot in that" }, { "start": 901.5200000000001, "end": 906.44, "text": " case. And evolution is good at discovering like variety of different morphologies. And" }, { "start": 906.44, "end": 912.2, "text": " then we use basically this method to make the structure very robust. So we let evolution" }, { "start": 912.2, "end": 916.76, "text": " discover the structure and then we cut off all kinds of limbs and let it regrow. So combining" }, { "start": 916.76, "end": 922.2800000000001, "text": " kind of the creativity of evolution with this kind of making things robust through this" }, { "start": 922.2800000000001, "end": 924.2800000000001, "text": " gradient descent based training." }, { "start": 924.28, "end": 931.8, "text": " That is the yeah, the work on soft robots. I've seen that it just looks really cool." }, { "start": 931.8, "end": 938.4, "text": " So this would be one thing that is that is discovered this sort of kind of hopping tripod." }, { "start": 938.4, "end": 944.76, "text": " And obviously this, I think soft robotics in general are rather new field and combining" }, { "start": 944.76, "end": 949.5, "text": " them up with like evolving system seems quite appropriate. So here's one with a cut off" }, { "start": 949.5, "end": 961.4, "text": " limb and you can learn to regrow, regrow it. Right? 
How in general, how do you teach a" }, { "start": 961.4, "end": 968.36, "text": " self organizing system to regrow things? Do you have to explicitly program? Like you have" }, { "start": 968.36, "end": 974.06, "text": " to explicitly train it to regrow things? Or is this just a natural consequence out of" }, { "start": 974.06, "end": 976.28, "text": " how the system was trained in the first place?" }, { "start": 976.28, "end": 982.24, "text": " Yeah, so sometimes it can often it already has some inherent robustness, but it will" }, { "start": 982.24, "end": 988.48, "text": " without explicit training, it will probably not be able to do this like perfectly. And" }, { "start": 988.48, "end": 993.48, "text": " it will be that it sometimes works and sometimes doesn't. So in these cases, we explicitly" }, { "start": 993.48, "end": 998.1999999999999, "text": " and also in the case of the work by Google, like they explicitly like you explicitly remove" }, { "start": 998.1999999999999, "end": 1003.68, "text": " stuff during the training process so that you confront the system with, you know, this" }, { "start": 1003.68, "end": 1010.64, "text": " kind of this damage that it has to recover from. So it makes the system more robust if" }, { "start": 1010.64, "end": 1014.4, "text": " you specifically train for it. And I guess in nature, that's probably one reason that" }, { "start": 1014.4, "end": 1018.1999999999999, "text": " the system had to work for all these different environments. So there is a lot of like they" }, { "start": 1018.1999999999999, "end": 1023.78, "text": " weren't in your aunt colonies, sometimes you had more, sometimes you had less and so these" }, { "start": 1023.78, "end": 1028.52, "text": " systems are because of the way they were, they are evolved. They also show this kind" }, { "start": 1028.52, "end": 1033.3, "text": " of similar level of like superior level of robustness." }, { "start": 1033.3, "end": 1039.22, "text": " At this point, are we already at the point where you would say that this surpasses or" }, { "start": 1039.22, "end": 1044.6, "text": " this is very advantageous to classical deep learning? Or are we still in the realm where," }, { "start": 1044.6, "end": 1053.6399999999999, "text": " let's say, everything would be fairly possible with classic supervised top down deep learning?" }, { "start": 1053.6399999999999, "end": 1062.36, "text": " I think like this, it, it would be possible to have it grow and recover. But I think that" }, { "start": 1062.36, "end": 1066.56, "text": " the secret here is that it only uses local communication. Basically, you could of course" }, { "start": 1066.56, "end": 1071, "text": " have a network that would, I don't know, a network that you query that would could like" }, { "start": 1071, "end": 1077.28, "text": " similarly to earlier work like compositional pattern producing networks, CPPNs, where you" }, { "start": 1077.28, "end": 1082.8, "text": " query basically each location in space and you ask it what should the voxel be? And of" }, { "start": 1082.8, "end": 1086.36, "text": " course, these systems could then if there's damage, they could you could ask them again" }, { "start": 1086.36, "end": 1090.8, "text": " and they could recover. But the trick here is, is that it's only based on local communication." 
}, { "start": 1090.8, "end": 1094.84, "text": " So if we ever want these things to work in the real world, then it's really advantageous" }, { "start": 1094.84, "end": 1101.1, "text": " to have things that only require local communication, basically. And so that's one whole that's" }, { "start": 1101.1, "end": 1106.12, "text": " one goal is that ultimately, we want to take those systems from also the simulation later" }, { "start": 1106.12, "end": 1111.5, "text": " on and you know, we have some initial work and we want to really create complex things" }, { "start": 1111.5, "end": 1114.2, "text": " also in the in the physical world." }, { "start": 1114.2, "end": 1119.68, "text": " If you say in the in the physical world, because if I if I think of there was, oh, no, this" }, { "start": 1119.68, "end": 1127.88, "text": " was a this, the paper, the physical cell either automata is at least a thing that is doable" }, { "start": 1127.88, "end": 1133.3200000000002, "text": " in the in the real world. But if I think of something like, I don't know, a Tesla car" }, { "start": 1133.3200000000002, "end": 1139.44, "text": " or something like this, that is in the real world. Yet, it is still, you know, a central" }, { "start": 1139.44, "end": 1146.3200000000002, "text": " controller that controls the whole car, and there are still top down and so on. And it's" }, { "start": 1146.32, "end": 1151.6399999999999, "text": " also trained in that way. What are the types of physical situations where these would really" }, { "start": 1151.6399999999999, "end": 1154.48, "text": " the local communication would really come in handy?" }, { "start": 1154.48, "end": 1158.9399999999998, "text": " Yeah, like I could imagine like, let's say you have a building or something that could" }, { "start": 1158.9399999999998, "end": 1163.6, "text": " automatically detect if it's damaged, and then you know, it could like our you know," }, { "start": 1163.6, "end": 1170.52, "text": " our skin, it, you know, it's damaged, and it's it's regrowing, it's, it's self, self" }, { "start": 1170.52, "end": 1174.4399999999998, "text": " healing. So you could ultimately, I mean, this is like science fiction, but imagine" }, { "start": 1174.44, "end": 1179.1200000000001, "text": " a building and then you it gets damaged, and then automatically it recognizes it's damaged." }, { "start": 1179.1200000000001, "end": 1184.72, "text": " And then it, you know, automatically recovers from this damage. More other like science" }, { "start": 1184.72, "end": 1189.24, "text": " sci fi is if you have, imagine you have a swarm of nanobots, they only can communicate" }, { "start": 1189.24, "end": 1195.28, "text": " locally, right, but they have to figure out their shape, they have to figure out their" }, { "start": 1195.28, "end": 1198.8400000000001, "text": " what they can do in an environment. So in those situations, this local communication" }, { "start": 1198.84, "end": 1204.72, "text": " would be very advantageous. I don't know if it would necessarily be useful for these kind" }, { "start": 1204.72, "end": 1210.6, "text": " of, you know, Tesla, this car example. But but I could imagine a lot of other like application" }, { "start": 1210.6, "end": 1214.9199999999998, "text": " areas or drones that have to coordinate somehow together only being able to sense each other" }, { "start": 1214.9199999999998, "end": 1223.84, "text": " locally. So more these kind of in that areas. 
One thing I'm quite excited about is this" }, { "start": 1223.84, "end": 1227.72, "text": " getting this from like this 2d version to a 3d version. And then you can imagine building" }, { "start": 1227.72, "end": 1231.88, "text": " all kinds of things and it would automatically know you're building, you know, a table or" }, { "start": 1231.88, "end": 1236.68, "text": " you're building a chair or you're building this and this, which which I think it's quite" }, { "start": 1236.68, "end": 1242.96, "text": " so. So this is one example also of so yeah, the self classifying MNIST digits, where basically" }, { "start": 1242.96, "end": 1248.84, "text": " the system cannot only be used to grow something, but it can also be used to self infer its" }, { "start": 1248.84, "end": 1253.48, "text": " own shape. So you build something out of small components, or you draw like a digit. And" }, { "start": 1253.48, "end": 1257.34, "text": " then by having the cells communicate with each other, they figure out, oh, I'm part" }, { "start": 1257.34, "end": 1263.76, "text": " of an eight or I'm part of a one. And so basically, this is then what we replicated in this physical" }, { "start": 1263.76, "end": 1269.08, "text": " where you can put them together, make digits, and then each each of these cells will tell" }, { "start": 1269.08, "end": 1274.6599999999999, "text": " would figure out what part what shape am I part of." }, { "start": 1274.6599999999999, "end": 1280.6799999999998, "text": " So this, this is a physical instantiation of the demo I have here online. This is another" }, { "start": 1280.6799999999998, "end": 1285.6399999999999, "text": " distal article where as you exactly said, these things, they figure out themselves what" }, { "start": 1285.64, "end": 1291.72, "text": " they're part of. And you made you made this this is your paper into a physical instantiation," }, { "start": 1291.72, "end": 1295.0400000000002, "text": " which I find really cool. And now you're taking it to 3d." }, { "start": 1295.0400000000002, "end": 1300.64, "text": " Yeah, yeah, that's the that's the plan. Yeah. And of course, currently, these systems, like" }, { "start": 1300.64, "end": 1306.48, "text": " this kind of self classifying MNIST digits, it does not work as well as like using like," }, { "start": 1306.48, "end": 1311.76, "text": " like state of the art, deep convolutional neural network or transformer or what what" }, { "start": 1311.76, "end": 1317.68, "text": " you have. But I think ultimately, these systems, maybe we can integrate some ideas also for" }, { "start": 1317.68, "end": 1321.84, "text": " things like object detection to make these systems kind of more robust by having a more" }, { "start": 1321.84, "end": 1327.64, "text": " kind of distributed object detection where you have this system where the components," }, { "start": 1327.64, "end": 1331.6, "text": " maybe it could be a combination of something convolutional and but then you have the system" }, { "start": 1331.6, "end": 1336.18, "text": " on top, where you have this local communication and they figure out together kind of what" }, { "start": 1336.18, "end": 1340.96, "text": " shape am I looking at and maybe that could make these systems also more robust in the" }, { "start": 1340.96, "end": 1348.2, "text": " future. And maybe less prone to kind of this adversarial attacks that we currently see" }, { "start": 1348.2, "end": 1350.68, "text": " the system still exhibit." 
}, { "start": 1350.68, "end": 1354.88, "text": " Has anyone tried with like, maybe this would be interesting, like to take something like" }, { "start": 1354.88, "end": 1360.76, "text": " this, and try to like make an adversarial, I don't even know how that would look like," }, { "start": 1360.76, "end": 1365.4, "text": " but something that a human would clearly classify as like a seven, but there's like a slight" }, { "start": 1365.4, "end": 1366.4, "text": " twist." }, { "start": 1366.4, "end": 1374.3600000000001, "text": " Yeah, yeah, I'm not sure people have actually studied it so much on this, trying to see" }, { "start": 1374.3600000000001, "end": 1378.24, "text": " how what kind of adversarial attacks these systems could, I mean, fool like you could" }, { "start": 1378.24, "end": 1384.44, "text": " fool them. I'm sure there are also some. But maybe the combination of kind of both this" }, { "start": 1384.44, "end": 1390.48, "text": " and more classic deep image recognition techniques could make them more robust." }, { "start": 1390.48, "end": 1399.16, "text": " So you've taken also this idea of this 2D cellular automata, and you've applied this" }, { "start": 1399.16, "end": 1407.3600000000001, "text": " in 3D here in Minecraft, which so this is a morphogenesis. How do you how would you" }, { "start": 1407.3600000000001, "end": 1409.68, "text": " define morphogenesis just quickly?" }, { "start": 1409.68, "end": 1415.52, "text": " Yeah, I would define morphogenesis as like growing a complex structure based also on" }, { "start": 1415.52, "end": 1420.28, "text": " this kind of local communication. So how our like bodies are grown is morphogenesis, how" }, { "start": 1420.28, "end": 1425.84, "text": " our like organs are grown, how our nervous systems is grown basically, from like, you" }, { "start": 1425.84, "end": 1430.8, "text": " know, a single starting cell. And so this is what we do here. And again, the structures" }, { "start": 1430.8, "end": 1436.72, "text": " are not found by the system itself, like we took like an existing apartment building." }, { "start": 1436.72, "end": 1442.68, "text": " And then we trained the system in the same supervised way to regrow it basically. And" }, { "start": 1442.68, "end": 1446.52, "text": " we were surprised that it could also grow like these kind of functional machines, we" }, { "start": 1446.52, "end": 1451.8799999999999, "text": " actually had it growing like, like this temple. And then we found that the trap in this temple" }, { "start": 1451.8799999999999, "end": 1457.8, "text": " still worked. So because it had all the components, like there was not one single mistake. And" }, { "start": 1457.8, "end": 1462.56, "text": " that allowed these kind of functional things to still to still work like this kind of like" }, { "start": 1462.56, "end": 1464.8799999999999, "text": " caterpillar you see there." }, { "start": 1464.8799999999999, "end": 1470.52, "text": " And can you can you you also said you can destroy part of it and it will regrow, right," }, { "start": 1470.52, "end": 1477.84, "text": " which Yeah, is this have you made this playable somewhere in Minecraft itself? Or is this" }, { "start": 1477.84, "end": 1479.48, "text": " just purely your" }, { "start": 1479.48, "end": 1483.08, "text": " Yeah, it's currently it's it's not I mean, you can download the code and stuff. But it's" }, { "start": 1483.08, "end": 1487.24, "text": " not that we have a server where you can play with those things. But it would be very interesting." 
}, { "start": 1487.24, "end": 1493.6399999999999, "text": " We actually we organized this Minecraft open endedness competition where we like a related" }, { "start": 1493.6399999999999, "end": 1497.52, "text": " field like can you have an algorithm that can like natural evolution create all kinds" }, { "start": 1497.52, "end": 1504.52, "text": " of novel things without limits. And that's also where we use this Minecraft framework." }, { "start": 1504.52, "end": 1508.68, "text": " But it would be real fun. Like one thing that I that I want to try to also pursue in the" }, { "start": 1508.68, "end": 1513.6399999999999, "text": " future. Imagine you don't have it grow like caterpillars, but you have it grow like cities." }, { "start": 1513.6399999999999, "end": 1517.8799999999999, "text": " And then depending on the environment that you as the human does decide, like the mountains" }, { "start": 1517.8799999999999, "end": 1524.28, "text": " or like the desert, it would grow a different type of city. So like, that's one thing we're" }, { "start": 1524.28, "end": 1528.6, "text": " looking at now, how can you incorporate also feedback back into the album, because this" }, { "start": 1528.6, "end": 1533.16, "text": " caterpillar will always grow the same caterpillar. But if if I put this caterpillar in a in a" }, { "start": 1533.16, "end": 1537.68, "text": " small box, it should maybe grow a small caterpillar. And if it's a large box, it should grow a" }, { "start": 1537.68, "end": 1543.32, "text": " large caterpillar. So how can you kind of incorporate this environmental feedback? That's" }, { "start": 1543.32, "end": 1547.12, "text": " another thing that I'm very curious about." }, { "start": 1547.12, "end": 1553.32, "text": " Yeah, is do you see beyond beyond gaming, maybe which which I can definitely see applications" }, { "start": 1553.32, "end": 1560.1599999999999, "text": " of this? Do you see applications that are not in the physical world as we talked before," }, { "start": 1560.1599999999999, "end": 1567.36, "text": " but but maybe in the in the still in the realm of the digital world? Are there applications?" }, { "start": 1567.36, "end": 1573.6, "text": " I don't know what what what all you you're thinking of, but distributed applications," }, { "start": 1573.6, "end": 1578.6399999999999, "text": " networking applications, any sort of things that you're very excited about that maybe" }, { "start": 1578.6399999999999, "end": 1582.32, "text": " aren't super obvious if you just see the the Minecraft example." }, { "start": 1582.32, "end": 1587.2, "text": " Right. I mean, one thing that we are basically I think like two things. One is like just" }, { "start": 1587.2, "end": 1594.24, "text": " this Minecraft, I think, could also ultimately teach us something about biology itself. So" }, { "start": 1594.24, "end": 1598.2, "text": " if we could because we don't know everything yet about how this exact morphogenesis process" }, { "start": 1598.2, "end": 1601.6399999999999, "text": " works in nature. I mean, we know a lot of things, but we don't know, for example, how" }, { "start": 1601.6399999999999, "end": 1607.56, "text": " is it so accurate? Like and and and so there are certain things that we are we don't know" }, { "start": 1607.56, "end": 1611.12, "text": " yet. And so by simulating these process like a very simplified model, but maybe there's" }, { "start": 1611.12, "end": 1615.56, "text": " things we can learn from these kind of very simple models. 
So that's one one area I'm" }, { "start": 1615.56, "end": 1622.56, "text": " also very excited about. And so taking these systems to as a as a very simplified models" }, { "start": 1622.56, "end": 1629.84, "text": " biology to learn something. The other thing, the other application area is what I'm excited" }, { "start": 1629.84, "end": 1633.1999999999998, "text": " about is using those things. But instead of growing Minecraft structures, you can grow" }, { "start": 1633.1999999999998, "end": 1638.6, "text": " actually artificial neural networks. So so so you're basically kind of replicating our" }, { "start": 1638.6, "end": 1645.08, "text": " brains are not like designed and fixed, they're grown like through this developmental process." }, { "start": 1645.08, "end": 1650.12, "text": " So what what we did with this recent work is hyper NCA is taken basically, instead of" }, { "start": 1650.12, "end": 1658.6599999999999, "text": " having growing a caterpillar, we grow a pattern. And then we then we with a neural cell automata," }, { "start": 1658.6599999999999, "end": 1664.76, "text": " and then we convert that pattern into a policy network. And that policy network then is we" }, { "start": 1664.76, "end": 1669.84, "text": " can use this for our RL task, for example. So that's one one area I'm very excited about" }, { "start": 1669.84, "end": 1675.8, "text": " and making this systems more, more performant, because currently we apply to quite simple" }, { "start": 1675.8, "end": 1681.56, "text": " problems. But I think ultimately, this kind of idea of this growing neural networks is" }, { "start": 1681.56, "end": 1687.8, "text": " can be very powerful, because that's how you know, our brains are created. So so we're" }, { "start": 1687.8, "end": 1692.68, "text": " trying to replicate that process, hoping to create also more, more adaptive, basically" }, { "start": 1692.68, "end": 1700.2, "text": " neural networks. What do I gain out of so in this here, I have these developmental steps" }, { "start": 1700.2, "end": 1705.68, "text": " on the left, I do essentially start with some configuration of weights, essentially. And" }, { "start": 1705.68, "end": 1711.68, "text": " then I let the cellular automata run for a number of steps self organizing here, then" }, { "start": 1711.68, "end": 1717, "text": " I take it into a network, and then I execute the network. And presumably, I have to learn" }, { "start": 1717, "end": 1721.76, "text": " this somehow. In this paper, what you are doing is you're using, if I recall correctly," }, { "start": 1721.76, "end": 1727.52, "text": " a variant of evolutionary search, right? I could also, like, I know, in whatever way" }, { "start": 1727.52, "end": 1734, "text": " I learn it, I somehow have to learn how the cellular automata here reacts. What do I gain" }, { "start": 1734, "end": 1741.04, "text": " out of this instead of just training my policy net? Right. So so far, I would say it's the" }, { "start": 1741.04, "end": 1745.36, "text": " you don't get so much directly. So so far, this method, it's not that they outperform" }, { "start": 1745.36, "end": 1753.9199999999998, "text": " like current deep RL methods. 
But ultimately, basically, there is this this hypothesis," }, { "start": 1753.9199999999998, "end": 1759.4799999999998, "text": " also popularized more recently by Tony's adore this kind of genomic bottleneck hypothesis" }, { "start": 1759.4799999999998, "end": 1764.84, "text": " that means that we only have, you know, 20,000 genes, and they, they, they guide the growth" }, { "start": 1764.84, "end": 1770.04, "text": " and self organization of our brains with trillions of connections. And and and so it's a much" }, { "start": 1770.04, "end": 1776.12, "text": " smaller genotype that encodes a much larger structure. And so this kind of compression" }, { "start": 1776.12, "end": 1781.44, "text": " is hypothesized to also allows us to and animals to deal with situation they haven't seen," }, { "start": 1781.44, "end": 1786.84, "text": " like to basically that the robustness that animals show is part because they have to" }, { "start": 1786.84, "end": 1790.8, "text": " go through this bottleneck this compression. And this is the information you give to the" }, { "start": 1790.8, "end": 1794.72, "text": " next generation. So there's some limit on the information you can get. So that might" }, { "start": 1794.72, "end": 1799.92, "text": " bias the system towards learning rules that generalize well, like learning rules that" }, { "start": 1799.92, "end": 1804.96, "text": " generalize well. And so this is the the hypothesis here, that at some point, we can have a very" }, { "start": 1804.96, "end": 1810, "text": " small neural cell automata, which is basically like the genome and that encodes a much larger" }, { "start": 1810, "end": 1815.52, "text": " network and that hopefully would then be more robust. But that's something we have. That's" }, { "start": 1815.52, "end": 1819.24, "text": " basically what we're working on, which we which we haven't really shown yet. But that's" }, { "start": 1819.24, "end": 1825.36, "text": " the that's the hypothesis and the hope. One other thing that's kind of funny that it can" }, { "start": 1825.36, "end": 1833.44, "text": " do like it can you can basically let the growth continue and not just have one network grown," }, { "start": 1833.44, "end": 1838.04, "text": " but multiple networks. So like we applied this to this quadruped domain. So we had it" }, { "start": 1838.04, "end": 1845.14, "text": " grow for for 10 steps to grow one brain like one network, then we put it into this quadruped." }, { "start": 1845.14, "end": 1850.72, "text": " And we have a slightly larger quadruped. So we let it grow for longer, and then put it" }, { "start": 1850.72, "end": 1858.3600000000001, "text": " in the middle quadruped and then have a larger one. So and so basically one NCA can grow" }, { "start": 1858.3600000000001, "end": 1863.48, "text": " multiple different neural networks. And that's also one thing that I'm pretty excited about" }, { "start": 1863.48, "end": 1868.8000000000002, "text": " that we want to apply also for like more complex domains." }, { "start": 1868.8, "end": 1875.32, "text": " And again, here you had an experiment with with where you damaged these quarter pads," }, { "start": 1875.32, "end": 1882.32, "text": " the system is able to adjust, can you explain how this system is able to adjust to a damaged" }, { "start": 1882.32, "end": 1885.6399999999999, "text": " morphology, like a cut off a limb or something?" 
}, { "start": 1885.6399999999999, "end": 1891, "text": " So here it was basically trained to on these, like on all these different morphologies." }, { "start": 1891, "end": 1895.6399999999999, "text": " And then we had it basically, by continuing the growth, you can get a controller that" }, { "start": 1895.64, "end": 1900.3200000000002, "text": " was trained for one morphology, and then you continue it and you get a controller that" }, { "start": 1900.3200000000002, "end": 1905.44, "text": " works for M2 and you let it grow a little longer and it has a morphology for M3. So" }, { "start": 1905.44, "end": 1910.76, "text": " in this case, those were basically seen during some other experiments, we have results where" }, { "start": 1910.76, "end": 1914.88, "text": " it has damage that was not seen during training here, basically was trained to being able" }, { "start": 1914.88, "end": 1919.0400000000002, "text": " to deal with this particular type. So if we would damage it in another way, it probably" }, { "start": 1919.04, "end": 1926.32, "text": " wouldn't work anymore with these metamorphosis networks. But yeah, so the hope is also that" }, { "start": 1926.32, "end": 1931.52, "text": " if you know how to control one quadruped, then there should be that you don't have" }, { "start": 1931.52, "end": 1935.1599999999999, "text": " to start basically from scratch, there should be some information there that allows you" }, { "start": 1935.1599999999999, "end": 1942.72, "text": " to also grow something that is related, and not having to start like all over again, basically." }, { "start": 1942.72, "end": 1947.04, "text": " This flows, I think, into a lot of a lot of ideas from, as you said, the open ended community" }, { "start": 1947.04, "end": 1955.2, "text": " and the sort of don't have explicit goals community. I think parts of your blog posts" }, { "start": 1955.2, "end": 1960.24, "text": " and papers mentioned algorithms like quality, diversity, map elites, and things like this," }, { "start": 1960.24, "end": 1966.24, "text": " which are obviously very exciting and very different from how we do deep learning today." }, { "start": 1966.24, "end": 1972.6399999999999, "text": " So far, we've always looked at things that have either an explicit goal, like here is" }, { "start": 1972.64, "end": 1978, "text": " the salamander I want to build, or here is the Minecraft structure I want to build, or" }, { "start": 1978, "end": 1985.0400000000002, "text": " have some sort of, I want to say, goal in an in a more abstract sense, like the reinforcement" }, { "start": 1985.0400000000002, "end": 1989.96, "text": " learning goal of maximizing the height in this case, right for these robots that stand" }, { "start": 1989.96, "end": 1998.3400000000001, "text": " on top of one another. Yet, how do we go away from this? Is there is there a natural progression" }, { "start": 1998.34, "end": 2005.08, "text": " in these self organizing systems to go away from having explicit goals that would be more" }, { "start": 2005.08, "end": 2008.32, "text": " difficult to pursue with like the classic deep learning systems?" 
}, { "start": 2008.32, "end": 2013.6799999999998, "text": " Right, I think in general, so I think that, like two things like one is the representation," }, { "start": 2013.6799999999998, "end": 2017.6799999999998, "text": " which I think these neural cell automata are like a great representation for a lot of like" }, { "start": 2017.6799999999998, "end": 2021.48, "text": " growing structures growing neural networks. And then the other thing is you mentioned" }, { "start": 2021.48, "end": 2030.32, "text": " is like the search, how do we actually get to systems that show interesting, these interesting" }, { "start": 2030.32, "end": 2034.3600000000001, "text": " properties. And so there seems to be a recent trend, I mean, not just in the self organizing" }, { "start": 2034.3600000000001, "end": 2040.96, "text": " systems, but in also in deep RL in general, to not train on one thing basically, but train" }, { "start": 2040.96, "end": 2046.6, "text": " on a variety of different things. So there was also this more recent paper by I think" }, { "start": 2046.6, "end": 2051.7999999999997, "text": " it was DeepMind where they this XLL that they showed like basically, if you train agents" }, { "start": 2051.7999999999997, "end": 2058.58, "text": " in a lot of different changing environments, they develop more robust skills basically." }, { "start": 2058.58, "end": 2065.96, "text": " So I think basically here it's we also what I think it makes these self organizing systems" }, { "start": 2065.96, "end": 2072.2799999999997, "text": " quite difficult to train is that these landscapes, the fitness landscapes basically, they are" }, { "start": 2072.28, "end": 2078.52, "text": " probably very kind of not very smooth, because changing like something small in the self" }, { "start": 2078.52, "end": 2084.6800000000003, "text": " organizing systems can have like this cascading effect. So that's why these traditional objective" }, { "start": 2084.6800000000003, "end": 2091.96, "text": " based rewards, they work, but then they don't, it's still difficult to optimize. So that's" }, { "start": 2091.96, "end": 2096.5, "text": " why we're more looking into this kind of open ended, like what you mentioned quality diversity" }, { "start": 2096.5, "end": 2100.52, "text": " methods basically, where we're not trying to optimize for one particular outcome. But" }, { "start": 2100.52, "end": 2106, "text": " we're trying to find things that differ in some interesting ways basically. And I think" }, { "start": 2106, "end": 2111.7599999999998, "text": " those methods, particularly for this kind of self organization, they are very, very" }, { "start": 2111.7599999999998, "end": 2118.84, "text": " powerful basically. They are better at navigating like these kind of very complex landscapes" }, { "start": 2118.84, "end": 2127.08, "text": " with many local optima, but they're also slightly more expensive because they're looking at" }, { "start": 2127.08, "end": 2130.92, "text": " the larger space of this of the search space basically." }, { "start": 2130.92, "end": 2142.88, "text": " What maybe these two questions in one given given these outlooks, what field that deep" }, { "start": 2142.88, "end": 2150.64, "text": " learning is good at right now? Do you expect these methods to be better? 
If you know, let's" }, { "start": 2150.64, "end": 2157.48, "text": " say if we invest the resources and figure out, you know, the tricks of the trade enough," }, { "start": 2157.48, "end": 2163.12, "text": " what parts that deep learning is good at now? Could these methods overtake deep learning?" }, { "start": 2163.12, "end": 2168.7599999999998, "text": " And then on the other hand, what's kind of the, for you, the most exciting area that" }, { "start": 2168.7599999999998, "end": 2175.16, "text": " we haven't even unlocked yet with deep learning that are accessible with this? Right? So it's" }, { "start": 2175.16, "end": 2179.42, "text": " two different, two different things, but I'm wondering about what you think about both" }, { "start": 2179.42, "end": 2181.28, "text": " of these directions." }, { "start": 2181.28, "end": 2187.76, "text": " Right. So I think it's also, I wouldn't say like overtake deep learning. I mean, we use" }, { "start": 2187.76, "end": 2193.92, "text": " basically we use deep learning as a tool for basically like kind of train the system. So" }, { "start": 2193.92, "end": 2194.92, "text": " I think," }, { "start": 2194.92, "end": 2199.32, "text": " Yeah, sorry. I mean, deep learning and like the, just the thing we do right now, right?" }, { "start": 2199.32, "end": 2204.36, "text": " We have objective loss, supervised training, single neural network." }, { "start": 2204.36, "end": 2209.36, "text": " So I would assume that these systems would be able to have a lot of different domains." }, { "start": 2209.36, "end": 2215.2400000000002, "text": " I think the one kind of probably the closest, I think what we would see is that they would" }, { "start": 2215.2400000000002, "end": 2221.7200000000003, "text": " make our L agents more, you know, like more robust, more adaptive. And that's also already" }, { "start": 2221.7200000000003, "end": 2228.2400000000002, "text": " in this work that you that we have there is like where we have basically in this case," }, { "start": 2228.2400000000002, "end": 2233.84, "text": " we trained not only the, we have completely random weights and we only trained local update" }, { "start": 2233.84, "end": 2237.8, "text": " rules, basically the habit rules. And then we show that through this system, we can actually" }, { "start": 2237.8, "end": 2242.52, "text": " during the lifetime cut off a leg. Again, we are always somehow mutilating these robots." }, { "start": 2242.52, "end": 2248.52, "text": " We're not very nice to them. But basically, this is an example, I think, where we already" }, { "start": 2248.52, "end": 2255.96, "text": " show that is this is more adaptive than the current RL design. So in the current basically" }, { "start": 2255.96, "end": 2263.76, "text": " deep RL, I think the one main drawback is that we train a system and then we freeze" }, { "start": 2263.76, "end": 2268.6400000000003, "text": " the neural network and then let it do its tasks. And this seems like kind of very unnatural" }, { "start": 2268.6400000000003, "end": 2272.6400000000003, "text": " that like you have a frozen brain. Okay, maybe you have like some recurrent connection that" }, { "start": 2272.6400000000003, "end": 2279.5200000000004, "text": " allow you to learn something. But basically, we have this training period, then we freeze" }, { "start": 2279.5200000000004, "end": 2283.6000000000004, "text": " everything in the system and we apply it to domains. 
So there's normally no lifetime learning" }, { "start": 2283.6000000000004, "end": 2288.28, "text": " in these systems. But the idea here is, in general self-organization, that we" }, { "start": 2288.28, "end": 2292.88, "text": " never want to stop learning, we never want to stop adapting, we want the self-organizing" }, { "start": 2292.88, "end": 2297.92, "text": " process to happen the whole time. So I think in any domain where there are things" }, { "start": 2297.92, "end": 2305.84, "text": " that you might not have anticipated during test time, these systems could be beneficial." }, { "start": 2305.84, "end": 2311.36, "text": " Might it be that there's a pixel edit, you're losing a leg, or you wanted to do something" }, { "start": 2311.36, "end": 2317.4, "text": " else. I think they already show that they can be superior in those" }, { "start": 2317.4, "end": 2324.84, "text": " domains. And that's one thing that I'm pretty excited about, to apply them to more complicated" }, { "start": 2324.84, "end": 2330.6, "text": " domains, not just these quadruped locomotion tasks, basically. But anything where you have" }, { "start": 2330.6, "end": 2339.04, "text": " something unanticipated happening, I think there can be a benefit from it. And" }, { "start": 2339.04, "end": 2344.96, "text": " then what was the second question, like what other" }, { "start": 2344.96, "end": 2350.68, "text": " new area that we haven't even, that we have no chance currently of tackling with our tools?" }, { "start": 2350.68, "end": 2357.7200000000003, "text": " Yeah, that's a great question. I mean, I think this new area is this kind of rapid lifetime" }, { "start": 2357.7200000000003, "end": 2364.18, "text": " adaptation basically. I think these systems are great if you know what to expect." }, { "start": 2364.18, "end": 2369.52, "text": " But having things that work in unknown environments, I think" }, { "start": 2369.52, "end": 2376.08, "text": " that's a really exciting area. I mean, you have animals in nature, and" }, { "start": 2376.08, "end": 2379.68, "text": " you can put a dog into a new environment and it will not completely break down, it will" }, { "start": 2379.68, "end": 2383.88, "text": " still know kind of what to do and how to interact with the environment. And we don't have that" }, { "start": 2383.88, "end": 2388.48, "text": " yet for our agents. We can put them in environments they're trained for; you put" }, { "start": 2388.48, "end": 2395.88, "text": " them too far out, they don't know what to do.
So I think that, too. So this" }, { "start": 2395.88, "end": 2400.36, "text": " working in other environments and also having this kind of, you know, common sense," }, { "start": 2400.36, "end": 2403.88, "text": " I think is maybe also an area in the future that these systems could be applied" }, { "start": 2403.88, "end": 2409.76, "text": " to, although I don't know exactly how. But that these systems have more common sense" }, { "start": 2409.76, "end": 2415.04, "text": " and don't directly break down, kind of giving them these kinds of innate abilities" }, { "start": 2415.04, "end": 2422.44, "text": " that we humans are born with, that some animals are born with, that allow them to," }, { "start": 2422.44, "end": 2430.68, "text": " yeah, do a little bit more common sense things than current deep learning systems that" }, { "start": 2430.68, "end": 2433.76, "text": " don't have that property basically." }, { "start": 2433.76, "end": 2441.04, "text": " And this, I think you say it even here at some point, this, in addition to the fact" }, { "start": 2441.04, "end": 2448.34, "text": " that there is this genomic bottleneck, right, you already said this, the genes encode or" }, { "start": 2448.34, "end": 2453.2400000000002, "text": " only have the capacity to encode very little information. And what we're doing here is" }, { "start": 2453.2400000000002, "end": 2459.36, "text": " we're learning essentially the rules to learn the rules, which can be compressed in a much" }, { "start": 2459.36, "end": 2466.1200000000003, "text": " better way than the rules themselves. And there is a reason to assume that this will" }, { "start": 2466.1200000000003, "end": 2471.96, "text": " result in that kind of common sense, that if you have to essentially learn the meta rule," }, { "start": 2471.96, "end": 2476.6000000000004, "text": " then that will make you generalize better. I mean, it's an argument, I'm not" }, { "start": 2476.6, "end": 2481.44, "text": " super convinced yet. Right. But if you then do some parameter sharing, you showed in" }, { "start": 2481.44, "end": 2486.56, "text": " some experiments, you can compress this even further. So that might be a way to tackle" }, { "start": 2486.56, "end": 2487.56, "text": " that." }, { "start": 2487.56, "end": 2494.56, "text": " And also, in Tony Zador's paper, he actually points out that this bottleneck, like there are" }, { "start": 2494.56, "end": 2499.96, "text": " some organisms in nature that have many more genes, for example. So maybe it is a feature that" }, { "start": 2499.96, "end": 2507.08, "text": " we have that small number of genes, that it's compressed. And so that gives us some hope that" }, { "start": 2507.08, "end": 2512.12, "text": " also having a similar feature in our artificial systems should be beneficial. But we're" }, { "start": 2512.12, "end": 2519.56, "text": " still, we only showed that for very, very simple, you know, simple tasks so far." }, { "start": 2519.56, "end": 2523.76, "text": " And deep learning goes into the exact opposite direction, right? The more" }, { "start": 2523.76, "end": 2529.44, "text": " parameters, the better; we have the double descent phenomenon, and we can go essentially" }, { "start": 2529.44, "end": 2536.2000000000003, "text": " infinite and it always gets better, which is weird, right?
Which is also giving" }, { "start": 2536.2000000000003, "end": 2541.18, "text": " amazing results, I think, recently with, you know, the whole language models and so on." }, { "start": 2541.18, "end": 2545.9, "text": " So it's definitely, it would be cool if in the near future, people discover" }, { "start": 2545.9, "end": 2552.6, "text": " a fundamental connection between, you know, the good results we get by scaling up," }, { "start": 2552.6, "end": 2557.64, "text": " and the actual principle from biology, which seems to be more like compressing" }, { "start": 2557.64, "end": 2561.7999999999997, "text": " and scaling down. It would be nice if those were to join together somehow." }, { "start": 2561.7999999999997, "end": 2568.44, "text": " And hopefully, we can be part of that to some extent. But yeah, I agree. It's really interesting" }, { "start": 2568.44, "end": 2574.72, "text": " that, yeah, you scale up networks, and then your local optima disappear, like" }, { "start": 2574.72, "end": 2579.96, "text": " everything just works better. And here we basically want to go the opposite direction." }, { "start": 2579.96, "end": 2585.48, "text": " But it's not necessarily that we, of course, we still want our final models to have" }, { "start": 2585.48, "end": 2591.96, "text": " trillions of connections. But what we basically want is we want the trainable" }, { "start": 2591.96, "end": 2598.12, "text": " parameters to be low. And I think that's the fundamental difference, that we have a" }, { "start": 2598.12, "end": 2601.48, "text": " relatively small number of trainable parameters there, but" }, { "start": 2601.48, "end": 2607.96, "text": " they give rise to a much more complicated system, exploiting things like self organization and growth" }, { "start": 2607.96, "end": 2612.98, "text": " over time. And, yeah." }, { "start": 2612.98, "end": 2617.96, "text": " This is, I think, because you said before, you're not an opponent of deep" }, { "start": 2617.96, "end": 2623.32, "text": " learning. In fact, deep learning is used inside of the cellular automata to sort of learn" }, { "start": 2623.32, "end": 2629.08, "text": " these rules. I find it interesting, if you look in nature, that there are cells and they" }, { "start": 2629.08, "end": 2635.52, "text": " self organize in some way, right, by whatever means that is learned. But these cells then" }, { "start": 2635.52, "end": 2641.48, "text": " make up brains, right? And brains are naturally very top down planners. They're" }, { "start": 2641.48, "end": 2647.12, "text": " in the moment, they, you know, look ahead. And then the brains somehow organize into societies," }, { "start": 2647.12, "end": 2652.96, "text": " and the societies again are very distributed, very local, very much interaction on a person to" }, { "start": 2652.96, "end": 2660.2, "text": " person level. What do you make of this? Do you think there is like an optimal" }, { "start": 2660.2, "end": 2666.28, "text": " switch from local to global to local to global that we could sort of stack on top of one" }, { "start": 2666.28, "end": 2669.4, "text": " another? Or is this just a happenstance of the universe?" }, { "start": 2669.4, "end": 2674.48, "text": " Yeah, that's a great question."
}, { "start": 2674.48, "end": 2679.84, "text": " And even more like the humans in the societies, they organize themselves into hierarchies," }, { "start": 2679.84, "end": 2683.84, "text": " right? Top down control and somehow it gets even" }, { "start": 2683.84, "end": 2686.84, "text": " crazy. It's a good question. Do we need one? Yeah," }, { "start": 2686.84, "end": 2691.7200000000003, "text": " do we need all of this in our artificial systems? Maybe we need all of this to get to real like" }, { "start": 2691.7200000000003, "end": 2697.48, "text": " more general artificial intelligence. Like because also one thing that is really crucial" }, { "start": 2697.48, "end": 2704.04, "text": " is the our culture, right? Like, like, if you if you I was reading this great book recently," }, { "start": 2704.04, "end": 2712.34, "text": " like if you just put humans somewhere by themselves, they're not very like, you know, good at surviving," }, { "start": 2712.34, "end": 2716.2400000000002, "text": " but we are good at surviving because we have all this cultural information, like all this" }, { "start": 2716.2400000000002, "end": 2720.6, "text": " knowledge that other people made that that we can build on. And that allows us to do" }, { "start": 2720.6, "end": 2725.32, "text": " all these amazing things. So maybe to get our eyes to do really amazing things, it's" }, { "start": 2725.32, "end": 2731.1600000000003, "text": " not enough to having like single agents in complex environments, but it needs to be multiple" }, { "start": 2731.1600000000003, "end": 2735.48, "text": " agents that need to be simulated maybe over multiple generations. So there can be some" }, { "start": 2735.48, "end": 2740.96, "text": " cultural knowledge transferred from some agents to other agents, similarly to how how it happens" }, { "start": 2740.96, "end": 2748.2000000000003, "text": " in for us. But of course, that also makes the simulations much more complex and expensive." }, { "start": 2748.2000000000003, "end": 2753.96, "text": " When you have to simulate cultures, multiple like generations, and then we need some more" }, { "start": 2753.96, "end": 2758.52, "text": " better compute, especially at the university level." }, { "start": 2758.52, "end": 2764.2, "text": " I think yeah, that's one advantage that nature has it has lots of lots of distributed compute" }, { "start": 2764.2, "end": 2769.12, "text": " available. That said that there is there is an interesting part in your blog post where" }, { "start": 2769.12, "end": 2777.88, "text": " you describe sort of how to train these things, or how to steer the development of these swarm" }, { "start": 2777.88, "end": 2782.8, "text": " systems or distributed systems. One one quote here you have is guiding a swarm system can" }, { "start": 2782.8, "end": 2788.6800000000003, "text": " only be done as a shepherd would drive a herd by applying force at crucial leverage points" }, { "start": 2788.6800000000003, "end": 2794.52, "text": " by subverting the natural tendencies of the system. And then another one is the self assembling" }, { "start": 2794.52, "end": 2801.5600000000004, "text": " brain knows no shortcuts in which your I believe your argument was a little bit that is very" }, { "start": 2801.5600000000004, "end": 2808.4, "text": " hard to predict what a change does until you observe it because the interactions can be" }, { "start": 2808.4, "end": 2813.56, "text": " kind of nonlinear, very dynamic, very, very hard to predict." 
}, { "start": 2813.56, "end": 2816.7200000000003, "text": " In essence, that was basically the argument that that hissing are made in his this great" }, { "start": 2816.7200000000003, "end": 2822.84, "text": " book like self organizing, no self assembling brain. And basically that you need to basically" }, { "start": 2822.84, "end": 2828.4, "text": " the system needs this process of growth. And you have to put energy into it to observe" }, { "start": 2828.4, "end": 2832.6, "text": " the outcome you cannot predict. And that's also things they showed that Wolfram what" }, { "start": 2832.6, "end": 2838.12, "text": " he showed with simple one diesel automata, you cannot predict the state of the system," }, { "start": 2838.12, "end": 2843.56, "text": " you have to actually run the system even if it's a simple one diesel automata. And that" }, { "start": 2843.56, "end": 2848.12, "text": " is also apparently the question is, do we also need to do that for to growing our neural" }, { "start": 2848.12, "end": 2852.7999999999997, "text": " networks instead of like designing them? Maybe we need to go through this kind of process" }, { "start": 2852.7999999999997, "end": 2861.7599999999998, "text": " of growth with learned rules to to really unlock you know what these systems can do." }, { "start": 2861.7599999999998, "end": 2868.1, "text": " There is recent work in using for example, GANs or so to predict things like fluid dynamics" }, { "start": 2868.1, "end": 2872.52, "text": " and you know, they can't do it like super, like they're not extremely accurate, but they" }, { "start": 2872.52, "end": 2879.7599999999998, "text": " can give a pretty good estimate of given starting state and then a highly dynamic nonlinear" }, { "start": 2879.7599999999998, "end": 2885.6, "text": " system. And then they can predict some steps into the future, I've seen the same like galaxy" }, { "start": 2885.6, "end": 2892.7599999999998, "text": " development and so on. Do is there any happening like this where you can say, Well, I don't" }, { "start": 2892.76, "end": 2899.1600000000003, "text": " I can't, I don't have enough compute to run all these swarms, but I can sort of train a" }, { "start": 2899.1600000000003, "end": 2905.28, "text": " surrogate model that will give me the end in sort of a one step fashion. And then these" }, { "start": 2905.28, "end": 2911.6800000000003, "text": " the forces that I poke at the swarm at I could determine those using the surrogate model." }, { "start": 2911.6800000000003, "end": 2916.5400000000004, "text": " Yeah, I think that that would be really interesting. I wonder I think it's, it could work for some" }, { "start": 2916.54, "end": 2922.7599999999998, "text": " limited steps in the future. But but but I think you you would still need to, you know," }, { "start": 2922.7599999999998, "end": 2927.2799999999997, "text": " like, like at some point, you need to basically run this this model. I mean, maybe in the" }, { "start": 2927.2799999999997, "end": 2932.16, "text": " first like generations, you could help have so great model that somehow helps you to sort" }, { "start": 2932.16, "end": 2938.24, "text": " out like the things that are really bad, like, this will not grow into anything. So I think" }, { "start": 2938.24, "end": 2942.96, "text": " you could use it there later, I guess you would probably have to run the system like" }, { "start": 2942.96, "end": 2948, "text": " when things get more complex. 
But I think there's also another role for the surrogate" }, { "start": 2948, "end": 2953.88, "text": " models, which is something I always wanted to try: to predict basically the learning abilities" }, { "start": 2953.88, "end": 2958.12, "text": " of the system. So you have an agent in an environment. So maybe you don't need to simulate" }, { "start": 2958.12, "end": 2962.76, "text": " the whole lifetime, right? But you can have some kind of tests that" }, { "start": 2962.76, "end": 2967.6, "text": " would test how capable this agent is, so having some kind of surrogate that" }, { "start": 2967.6, "end": 2972.6, "text": " could look at certain parts of, I don't know, the neural network and already predict," }, { "start": 2972.6, "end": 2981.04, "text": " will this be a good learner or not, basically. But yeah." }, { "start": 2981.04, "end": 2991.2, "text": " In one part you also, I can remember, like I got into machine" }, { "start": 2991.2, "end": 2996.64, "text": " learning when graphical models were the hot thing at that point, it was just before deep" }, { "start": 2996.64, "end": 3003.52, "text": " learning. And this reminds me, all these self organizing systems with the local communication," }, { "start": 3003.52, "end": 3011.72, "text": " they remind me a lot of belief propagation, things like this. Graph neural networks, obviously," }, { "start": 3011.72, "end": 3017.24, "text": " are right now up and coming, let's say. Do you see connections between all of those things?" }, { "start": 3017.24, "end": 3021.4, "text": " Or is that just kind of a superficial connection? Yeah, I definitely see there's a big connection" }, { "start": 3021.4, "end": 3025.3399999999997, "text": " to these graph neural networks, basically. I mean, they're very close" }, { "start": 3025.34, "end": 3031.48, "text": " to a more generalized form, basically, of a cellular automaton, where you have different" }, { "start": 3031.48, "end": 3035.6000000000004, "text": " neighborhoods, basically, depending on the topology of the graph. And they also seem" }, { "start": 3035.6000000000004, "end": 3041.52, "text": " to be there. I think they're super interesting. Also, actually, how I got into neural networks" }, { "start": 3041.52, "end": 3047.88, "text": " is, the first lecture I had as an undergrad was actually on neural networks and about" }, { "start": 3047.88, "end": 3055.44, "text": " these self organizing maps, these Kohonen self organizing maps, that basically can" }, { "start": 3055.44, "end": 3064.12, "text": " do clustering, somehow kind of like k-means, but they can do a bit more," }, { "start": 3064.12, "end": 3068.6400000000003, "text": " they can do it better. And you get these nice visualizations out of them." }, { "start": 3068.6400000000003, "end": 3071.86, "text": " And apparently, there's also something like this in our brain. I mean, we have these topographic" }, { "start": 3071.86, "end": 3077.44, "text": " maps also in our brains. I was always fascinated somehow by these self organizing maps. And" }, { "start": 3077.44, "end": 3081.92, "text": " even though I did a lot of other things during my PhD, somehow now I'm coming" }, { "start": 3081.92, "end": 3089.1, "text": " back to this kind of self organization. And yeah, using these recent deep learning" }, { "start": 3089.1, "end": 3094.12, "text": " tools, I think we can really unlock the power behind them.
Do you" }, { "start": 3094.12, "end": 3101.6, "text": " know the ARC challenge? The Abstraction and Reasoning Corpus by François Chollet? Yeah, yeah, yeah. There" }, { "start": 3101.6, "end": 3106.12, "text": " is, I'm not sure if they have an example right here. So for everyone who doesn't know this," }, { "start": 3106.12, "end": 3111.2799999999997, "text": " this is a task where you get, so the left ones are demonstration examples, there's always" }, { "start": 3111.2799999999997, "end": 3119.04, "text": " an input grid and an output grid. And then you get a test example where you only" }, { "start": 3119.04, "end": 3124.2799999999997, "text": " get the input. So here, the rule is, I've looked at that before, so the rule is kind of, there" }, { "start": 3124.2799999999997, "end": 3129.2799999999997, "text": " is the gray in the middle, and you kind of fold the right hand side onto the left hand" }, { "start": 3129.2799999999997, "end": 3135.2799999999997, "text": " side, and then the solution here on the right hand side is kind of the sum of" }, { "start": 3135.28, "end": 3146.28, "text": " the two. And these are things that humans are surprisingly good at, but that are very" }, { "start": 3146.28, "end": 3154.5600000000004, "text": " difficult for a machine to learn. And this is a data set, and the training examples," }, { "start": 3154.5600000000004, "end": 3159.6400000000003, "text": " there are not many training examples. So there is not really a way to learn this through" }, { "start": 3159.64, "end": 3166.12, "text": " brute force training. There is a little game that people can play, I think I've reported" }, { "start": 3166.12, "end": 3171.72, "text": " on this before, but there is a game for anyone who's interested. This is the ARC game," }, { "start": 3171.72, "end": 3181.4, "text": " you can find it on the GitHub page of Alexey Borsky. And you can just choose one" }, { "start": 3181.4, "end": 3186.68, "text": " here, they're divided into different levels. And yeah, you can try them for yourself." }, { "start": 3186.68, "end": 3195.96, "text": " So this, this looks even familiar, like cellular automata. Do you think that self organizing" }, { "start": 3195.96, "end": 3200.52, "text": " systems, in one way or another, in the way we've looked at them today, or in the way you've" }, { "start": 3200.52, "end": 3207.2999999999997, "text": " seen them, could be useful in solving challenges like these? Because challenges like these" }, { "start": 3207.3, "end": 3217.2000000000003, "text": " are related very much to, let's say, something that we would call intelligence. Yeah, I think" }, { "start": 3217.2000000000003, "end": 3223.32, "text": " the hope would be that if we can get these kinds of bottleneck algorithms to" }, { "start": 3223.32, "end": 3228.52, "text": " work, so I'm not sure we could apply self organization" }, { "start": 3228.52, "end": 3233.28, "text": " directly. But what I could imagine is that we develop these kinds of genomic bottleneck" }, { "start": 3233.28, "end": 3239.0800000000004, "text": " algorithms that can guide this self-organizing growth of a very complex neural network, and" }, { "start": 3239.0800000000004, "end": 3243.88, "text": " that network then could maybe be used for these kinds of tasks.
And the hope would" }, { "start": 3243.88, "end": 3249.0800000000004, "text": " be that because it has this compression, it would maybe develop an algorithm that would" }, { "start": 3249.0800000000004, "end": 3256.48, "text": " allow it to, you know, solve these kinds of tasks that require more high level cognitive" }, { "start": 3256.48, "end": 3263.16, "text": " skills. But of course, that's still, yeah, we're still a little far away from that, I" }, { "start": 3263.16, "end": 3272.88, "text": " think. And I guess I don't know what the current state of the art in this task is." }, { "start": 3272.88, "end": 3278.48, "text": " I think it's still largely unsolved. So this could be a great test domain, I think." }, { "start": 3278.48, "end": 3284.44, "text": " But yeah, I'm not sure I have high hopes that it would already work. I think" }, { "start": 3284.44, "end": 3290.12, "text": " we're still probably missing some other ingredients that we don't have yet to kind of make progress" }, { "start": 3290.12, "end": 3291.12, "text": " there." }, { "start": 3291.12, "end": 3296.28, "text": " Yeah, but by the way, this, I think I just clicked on one randomly. But I think here," }, { "start": 3296.28, "end": 3300.68, "text": " the rule, as I think people can see if they get it, is that you always kind of select the" }, { "start": 3300.68, "end": 3307, "text": " smallest of the shapes that is there and kind of replicate it. You know, at least that's" }, { "start": 3307, "end": 3311.48, "text": " my hypothesis, right? Yeah, maybe, maybe." }, { "start": 3311.48, "end": 3315.4, "text": " Oh, I think maybe you take the one that fits in the box." }, { "start": 3315.4, "end": 3323.36, "text": " Oh, yeah, yeah, yeah. Right. But it's like this kind of, like, you need to" }, { "start": 3323.36, "end": 3328.72, "text": " understand what shapes are and so on. So this is very high level." }, { "start": 3328.72, "end": 3333.9, "text": " This is very bottlenecky. It has a bottlenecky feel to it. Like, you're probably not going" }, { "start": 3333.9, "end": 3339.76, "text": " to get very far with a CNN trained on these pixels directly. So that's, that's like," }, { "start": 3339.76, "end": 3348.5600000000004, "text": " I can see something like this very much being in the domain of, like, first open endedness," }, { "start": 3348.5600000000004, "end": 3354, "text": " but then also self organizing things, made up of simple rules making up something very" }, { "start": 3354, "end": 3355, "text": " complicated." }, { "start": 3355, "end": 3359.48, "text": " There's two other domains that I think are also very exciting. One is this Animal-AI" }, { "start": 3359.48, "end": 3364.84, "text": " benchmark, where basically it's like an Animal-AI Olympics, where you apply AIs" }, { "start": 3364.84, "end": 3371.1600000000003, "text": " to tasks that animals normally are good at, like, for example, trying to figure" }, { "start": 3371.1600000000003, "end": 3377.08, "text": " out which one is the tool, and then you use that tool to, you know, get a reward. And" }, { "start": 3377.08, "end": 3382.1600000000003, "text": " that's also where current methods basically pretty much fail on more complicated" }, { "start": 3382.1600000000003, "end": 3386.52, "text": " tasks.
And they also did experiments where they had children perform these tasks," }, { "start": 3386.52, "end": 3391.76, "text": " and the children are still much better than any of our deep RL methods. So on the" }, { "start": 3391.76, "end": 3396.96, "text": " simple tasks, deep RL performs pretty well. Once it gets to more complicated things," }, { "start": 3396.96, "end": 3404.2400000000002, "text": " these systems basically fail. So this is one task that, in the recent" }, { "start": 3404.2400000000002, "end": 3409.2400000000002, "text": " grant proposal that I wrote, I proposed would be a good test domain for these methods" }, { "start": 3409.2400000000002, "end": 3414.44, "text": " basically, because the whole point is to act in an environment that you haven't seen during" }, { "start": 3414.44, "end": 3419.0400000000004, "text": " training. Even though the environment is made out of the same building blocks, like there are" }, { "start": 3419.04, "end": 3426.96, "text": " rewards, there are barriers, how they are composed, all of this is new, basically," }, { "start": 3426.96, "end": 3433.36, "text": " and never seen before. And the other one, this was also by DeepMind I think, is the Alchemy" }, { "start": 3433.36, "end": 3439.36, "text": " task, where you have to learn, it's a task where you have to learn basically about" }, { "start": 3439.36, "end": 3443.04, "text": " the structure of the domain, what things you can put together, and then you have to use" }, { "start": 3443.04, "end": 3448.24, "text": " that knowledge, like building on that knowledge basically. And this is also a very difficult" }, { "start": 3448.24, "end": 3452.9599999999996, "text": " task for all of our current methods. So I think this could also be a very good task to use," }, { "start": 3452.9599999999996, "end": 3459.7999999999997, "text": " basically, as the North Star to drive the progress in this kind of area. And the" }, { "start": 3459.7999999999997, "end": 3466, "text": " hope is that these kinds of self organizing systems, they should be, hopefully would be," }, { "start": 3466, "end": 3467.4399999999996, "text": " better at this." }, { "start": 3467.4399999999996, "end": 3474.9599999999996, "text": " Where can people, if someone wants to get started in diving into the world of self organizing" }, { "start": 3474.96, "end": 3480.36, "text": " systems, swarm intelligence, maybe a bit of open endedness, is there a good place for" }, { "start": 3480.36, "end": 3484.16, "text": " people to get started, like get their feet wet?" }, { "start": 3484.16, "end": 3490.76, "text": " Yeah, I would say I was recently rereading this great book from Melanie Mitchell," }, { "start": 3490.76, "end": 3497.12, "text": " Complexity: A Guided Tour. I think this is a great starting book on these ideas of complex systems," }, { "start": 3497.12, "end": 3502.84, "text": " self organization. There's something about cellular automata in there. So I think this" }, { "start": 3502.84, "end": 3509.48, "text": " is a good point to get a broader overview of that kind of" }, { "start": 3509.48, "end": 3517.08, "text": " whole field of, basically, complex systems, self organization. And yeah, hopefully" }, { "start": 3517.08, "end": 3522.56, "text": " also the blog post can be helpful to some people, and I also plan to write more" }, { "start": 3522.56, "end": 3527.92, "text": " on that as well.
But this, I would suggest, is definitely a good place" }, { "start": 3527.92, "end": 3531.56, "text": " to start." }, { "start": 3531.56, "end": 3540.12, "text": " And is there some, you know, in deep learning, it's usually Keras, I train a CNN" }, { "start": 3540.12, "end": 3546.84, "text": " on MNIST or CIFAR 10. Is there some standard thing that every one of your" }, { "start": 3546.84, "end": 3547.84, "text": " students goes through?" }, { "start": 3547.84, "end": 3552.36, "text": " I mean, now I send a lot of them to this great Distill article, basically, and looking at" }, { "start": 3552.36, "end": 3558.4, "text": " this growing NCAs one, because they also have a great Colab notebook where you" }, { "start": 3558.4, "end": 3562.7200000000003, "text": " can play with the system. So I think this is a great starting point where you both" }, { "start": 3562.7200000000003, "end": 3567.7200000000003, "text": " have cellular automata and you have how recent tools can be" }, { "start": 3567.7200000000003, "end": 3574.48, "text": " used to grow them. So I think this is a good place to play around with, basically." }, { "start": 3574.48, "end": 3582.44, "text": " Okay. Yeah, I've spent more time than I've had on these things," }, { "start": 3582.44, "end": 3583.4, "text": " because they're quite..." }, { "start": 3583.4, "end": 3588.7200000000003, "text": " It's great that it's also so interactive and fun to play with." }, { "start": 3588.7200000000003, "end": 3594.6800000000003, "text": " Yes, definitely. Yeah, I think, is there anything else that you would like to get out there" }, { "start": 3594.6800000000003, "end": 3596.48, "text": " to people about this field?" }, { "start": 3596.48, "end": 3601.8, "text": " Yeah, I just hope that people would not all be running basically in" }, { "start": 3601.8, "end": 3609.36, "text": " the same direction, just doing what everybody else is doing. So hopefully this will also" }, { "start": 3609.36, "end": 3615.28, "text": " get a few more people into this field of complex systems and self organizing systems, and combining" }, { "start": 3615.28, "end": 3620.6800000000003, "text": " them with the ideas of deep learning. Because I think there are a lot of interesting things" }, { "start": 3620.6800000000003, "end": 3627, "text": " to discover basically here, and a little bit fewer people working on it than," }, { "start": 3627, "end": 3633.2000000000003, "text": " like, working on foundation models and language models and all those other things." }, { "start": 3633.2000000000003, "end": 3639.2000000000003, "text": " Yeah, I think it's certainly an interesting area. And I guess especially" }, { "start": 3639.2, "end": 3646.52, "text": " if you're at a university without the super duper clusters, probably just strategically" }, { "start": 3646.52, "end": 3655.3599999999997, "text": " a PhD in this field would maybe be more of an advantageous position for newcomers" }, { "start": 3655.3599999999997, "end": 3656.3599999999997, "text": " to the field."
}, { "start": 3656.3599999999997, "end": 3666.72, "text": " Actually, like Hinton had this great quote recently on this other podcast, like it's" }, { "start": 3666.72, "end": 3670.3999999999996, "text": " always a good idea to figure out what huge numbers of very smart people are working on" }, { "start": 3670.3999999999996, "end": 3675.04, "text": " and to work on something else. Because you don't want to do maybe what what everybody" }, { "start": 3675.04, "end": 3681.68, "text": " else is doing. And I think so I would suggest this is a great field where a lot of I think" }, { "start": 3681.68, "end": 3686, "text": " interesting discoveries basically waiting to happen." }, { "start": 3686, "end": 3692.2799999999997, "text": " I agree. All right. So Sebastian, thank you very much for being here today. This was very" }, { "start": 3692.28, "end": 3698.0800000000004, "text": " cool. I hope to see yeah I hope to see a sprawling future for your field. Thanks a lot for the" }, { "start": 3698.08, "end": 3725.72, "text": " invite. Thanks." } ]
qSArFEIoSbo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
RepNet: Counting Out Time - Class Agnostic Video Repetition Counting in the Wild (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "vision", "counting", "self-similarity", "temporal", "frames", "video", "repeating", "lines", "transformer", "attention", "cnn", "convolutional neural network", "repetitions", "periodicity", "period", "repeat", "actions", "kinetics", "countix" ]
Counting repeated actions in a video is one of the easiest tasks for humans, yet remains incredibly hard for machines. RepNet achieves state-of-the-art by creating an information bottleneck in the form of a temporal self-similarity matrix, relating video frames to each other in a way that forces the model to surface the information relevant for counting. Along with that, the authors produce a new dataset for evaluating counting models. OUTLINE: 0:00 - Intro & Overview 2:30 - Problem Statement 5:15 - Output & Loss 6:25 - Per-Frame Embeddings 11:20 - Temporal Self-Similarity Matrix 19:00 - Periodicity Predictor 25:50 - Architecture Recap 27:00 - Synthetic Dataset 30:15 - Countix Dataset 31:10 - Experiments 33:35 - Applications 35:30 - Conclusion & Comments Paper Website: https://sites.google.com/view/repnet Colab: https://colab.research.google.com/github/google-research/google-research/blob/master/repnet/repnet_colab.ipynb Abstract: We present an approach for estimating the period with which an action is repeated in a video. The crux of the approach lies in constraining the period prediction module to use temporal self-similarity as an intermediate representation bottleneck that allows generalization to unseen repetitions in videos in the wild. We train this model, called RepNet, with a synthetic dataset that is generated from a large unlabeled video collection by sampling short clips of varying lengths and repeating them with different periods and counts. This combination of synthetic data and a powerful yet constrained model, allows us to predict periods in a class-agnostic fashion. Our model substantially exceeds the state of the art performance on existing periodicity (PERTUBE) and repetition counting (QUVA) benchmarks. We also collect a new challenging dataset called Countix (~90 times larger than existing datasets) which captures the challenges of repetition counting in real-world videos. Authors: Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Check out these videos on the top. Each one kind of contains a repeating action. So on the left you see someone doing jumping jacks in a fairly regular pattern. In the middle it gets a bit more difficult, because what you see is a tennis ball bouncing, and it bounces faster and faster and faster as time goes on. On the right you see that there is a short intro sequence before the repeating action, the person shoveling the cement, is displayed. So the goal here is to build an AI that can detect that a repeating action is happening, and if so, count how often this repeating action is happening. You can already see the difficulties here: not only the recognition itself, but the fact that the repeating actions can be different, can be of different length, and don't always look the same, and so on. So this paper uses these temporal self-similarity matrices that you see at the bottom here to achieve this, and we're going to explore how that's happening. So the paper we'll look at is called Counting Out Time: Class Agnostic Video Repetition Counting in the Wild by Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet and Andrew Zisserman of Google Research and DeepMind. So as I already said, this paper detects repeating actions and is able to count them, and on a high level what they do is: they encode the video using convolutional networks, then they build these temporal self-similarity matrices between the frames in order to detect the repetitions, and they decode that into the predictions using another neural network. This is all trained end to end, and they also make a new data set for this task. So that's the high level. If you want to find out how exactly that's done, I invite you to stick around because we'll go into the paper. If you like content like this, don't forget to share it out, leave a like and tell me what you think of the paper in the comments. I do read the comments, so yeah, I'm very happy to read what you all think about it. Okay, so as we already said, they say we present an approach for estimating the period with which an action is repeated in a video, and that's actually understating what the problem is here. The problem is actually manifold. As you can see on the right here, even if you don't get what this self-similarity matrix is yet, the outputs that you want are at the bottom. So what you want is, first of all, a per frame periodicity prediction. That means that for each frame you want to know: is there even a repeating action happening or not in that particular frame? You can see here, at the beginning and at the end of the video there is no repeating action, and then in the middle there is a repeating action. So that's the first thing you want to know. The second thing is this per frame period length prediction. So for each frame that is part of a repeating action, you want to know what the period length of that repeating action is, and that can change throughout the video. So you need it per frame, and once you have it per frame, you can actually count the number of repetitions. So those are two problems already. The third problem that this paper solves is that there is no adequate data set to train a model like this, it seems. For example, this QUVA data set right here, I believe, has 100 data points, and these are meant for testing. So you would build your system somewhere and then you would test it on those 100 data points. But they claim not even those are large and diverse enough for these systems to be evaluated.
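Once you have the two per frame outputs just described, counting follows directly: every frame flagged as periodic contributes one over its period length to the total. A small sketch of that bookkeeping (array shapes and the epsilon guard are illustrative):

```python
import numpy as np

def count_repetitions(period_len, periodic):
    """period_len: (T,) per-frame period length in frames; periodic: (T,) 0/1 flags."""
    # Each periodic frame accounts for 1/period_length of one full repetition.
    return float(np.sum(periodic / np.maximum(period_len, 1e-6)))

# 64 frames, all periodic, period of 8 frames -> 8 repetitions
print(count_repetitions(np.full(64, 8.0), np.ones(64)))
```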
So they build one: we also collect a new challenging data set called Countix, which is 90 times larger than existing data sets, which captures the challenges of repetition counting in real world videos, and that actually consists of a train and test split. So you can also train a system using it. Let's dive into the architecture. I think this paper is a very cool example that, even though we're in this deep learning paradigm where you just throw neural networks at a problem, you can still achieve a lot by smart construction. We used to achieve a lot by smartly constructing features and so on; in this case the goal is achieved by smartly constructing the architecture of the neural network itself, to give you back a good performance on the particular task right here. Okay, so if that tablet lets me actually scroll around, let's go to the architecture. So figure two shows the architecture in more detail. We'll go through it from the beginning to the end. Actually, let's go to the end, so you know what is supposed to happen. So for each frame in this video, what we'll need is a period length and a periodicity. These are two outputs: the bottom is a binary variable, and the top is actually a number, but it is predicted in bins, as a classification task. It doesn't really matter; we need two outputs that we can compare with the labels. In this case the videos are of length 64, so there are 64 frames per video, and for each of those frames we want a period length, and for each of those frames we want a periodicity binary prediction. And those, as I said, we compare with our labels, and then we can calculate a loss. So this is the loss: the loss at the end compares these two predictions with the labels, and then everything else is trained using back propagation on that loss. Okay, so now with that in mind, let's go to the beginning. So the video is taken and fed through an encoder in order to produce these per frame embeddings. So we want an embedding for each frame; that means for each of the 64 frames we want to obtain one vector of length 512 that describes that particular frame in terms that the model can understand. And we do that using an encoder. Now the encoder has a bunch of parts to it; it's not just a blob as you see right here. The encoder consists of three things. First of all, there's this convolutional feature extractor, which is a ResNet 50 architecture. It's simply a convolutional neural network that you let run on each frame independently. This is simply a feature extractor from images, like you know it from any other image processing task. But of course here we have a video, so it would be nice if the frames knew something about the other frames. Especially if you think of something like a jumping jack: if you are in this position right here, that doesn't tell you everything about that video frame. If you consider the frame before it, and maybe the frame before it is this, and maybe the frame after it is that, you can clearly see that the hands or the arms are in an upward motion. So the next step of the encoder tries to integrate that temporal information into the embeddings, and that is achieved via a 3D convolution. So once we process each frame individually, we then feed it through a layer of 3D convolution to add local temporal information to the per frame features.
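A rough sketch of that encoder in PyTorch, following the description above (64 frames in, one 512-dimensional vector per frame out, via a per-frame 2D feature extractor and a single 3D convolution). The small conv stack standing in for ResNet-50 and the exact pooling are simplifications of mine:

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    def __init__(self, feat_ch=64, emb_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for the ResNet-50 feature extractor
            nn.Conv2d(3, feat_ch, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One 3D convolution mixes each frame's features with its temporal neighbors.
        self.temporal = nn.Conv3d(feat_ch, emb_dim, kernel_size=3, padding=1)

    def forward(self, video):                     # video: (B, T, 3, H, W)
        B, T, C, H, W = video.shape
        f = self.backbone(video.reshape(B * T, C, H, W))           # 2D CNN, per frame
        f = f.reshape(B, T, *f.shape[1:]).permute(0, 2, 1, 3, 4)   # -> (B, C, T, H', W')
        f = self.temporal(f)                      # add local temporal information
        return f.amax(dim=(3, 4)).permute(0, 2, 1)  # spatial max-pool -> (B, T, 512)

emb = FrameEncoder()(torch.randn(1, 64, 3, 112, 112))  # one 512-d embedding per frame
```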
So if you don't know what a 3D convolution is: in a 2D convolution, you have this filter right here, which is a 2D filter for each of the channels, and you slide it across the image like this. You can have multiple channels of the input image right here, and you can actually do this multiple times, which corresponds to multiple output channels of the filters, but the actual convolution is happening in two dimensions. So the sliding is across the width and the height of the image. In contrast to that, if you have a 3D convolution, you have the same input stack of images, and now you have to pay attention here: this stack right here is the individual channels of one image, so of one video frame, while this stack right here that I'm drawing, these are the video frames stacked. Now each of these stacked video frames can have multiple channels resulting from its 2D convolution, or even just RGB. So I can't really draw in four dimensions, but now we stack the video frames, and now our kernel, our filter, will also extend in the direction of the video frames. I don't know if this is really recognizable if I draw it like this, but as you can see, the kernel is not only 2D but 3D, and now the sliding is not only done in the directions of height and width, but also in the direction of depth, so that each of the video frames right here can incorporate information from its immediate neighbors, in case of a 3x3x3 filter, or even more neighbors with a larger filter, but here we use a 3x3x3 filter. Okay, so that's how we obtain these embeddings right here, and at the end there is a dimension reduction, like a max pooling or something, but ultimately what you'll end up with is: for each of the 64 video frames, you get one vector, and that vector mainly describes that particular video frame, like if you consider the green one here, but it also contains information from the video frame before it and from the video frame after it. So the temporal convolution is not there to detect the periodic actions, because it only looks one frame into the future and into the past; it is more, like what we said here, in order to give you extra information about what's happening in a particular frame, because especially for periodicity it's actually important whether the arms are going up or down.
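In code, the difference is just which axes the kernel slides over; a 3x3 kernel sees one frame, a 3x3x3 kernel also sees the neighboring frames (channel counts here are arbitrary):

```python
import torch.nn as nn

# 2D: the kernel slides over height and width only -- each output sees one frame.
conv2d = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1)

# 3D: the 3x3x3 kernel also slides along time -- each frame's output mixes in
# its immediate temporal neighbors, as described above.
conv3d = nn.Conv3d(in_channels=64, out_channels=64, kernel_size=3, padding=1)
```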
Cool, so then comes the heart of the architecture. The heart of the architecture is this temporal self-similarity matrix. Now what does this do? This is relating the frames to each other, and important to note here: this is just a single channel image, there are no other ones of these. For the entire video frame sequence you have one 64x64 matrix, and all the signal has to go through this matrix. Everything from here on only goes through this matrix: there are no residual connections, there are no skip connections. That's your information bottleneck, and by having a bottleneck like this, the authors here force the model to do a good job at making this temporal self-similarity happen if it wants to achieve a low loss, and that's what I mean by: you can achieve a lot by having a smart architecture. This temporal self-similarity matrix is actually not learned; it can be computed deterministically from the embeddings right here. So what you do is: each row here corresponds to one frame, so you take each frame that corresponds to a row, and you calculate simply the distance to, or the similarity with, each of the other frames. This is, as you can see, 64x64, so frame i here is simply compared to each of the other frames j, and depending on whether that embedding is very similar to the embedding of frame j, this number is going to be high or low. After that you do a softmax across here, such that this is going to be like a distribution and not just raw numbers, but that's ultimately not that important. What you can see is that the diagonal is very prominent, and that makes sense, because technically the diagonal is always one: if, for example, you use the inner product as a similarity, the diagonal should always be one, though here we have the softmax, so it's not exactly one. But ultimately we can say that any frame is going to be very similar to itself; that's why. But then you can see right here there's a pattern emerging, and that pattern is these diagonal lines in this direction, as you can see. And what does that mean? They actually have a larger version of this down here. So what does the diagonal pattern mean? It means that, so here the diagonal, that's okay: frame i is very similar to frame i, cool. But the other lines, what do they mean? So if I look at frame i right here, it is also very similar to this frame j. Now this alone wouldn't mean much, frames can be similar. But the line means that if I look 10 frames later, at i plus 10, that's very similar to j plus 10, and if I now look at frame i plus 20, that's very similar to j plus 20. So this is why the pattern of a line is emerging: because if I go 10 frames into the future, it's similar to the other one 10 frames into the future, and so on. And that means the line indicates that this entire sequence, starting from i, 20 frames into the future, is repeated starting from j; actually j is earlier here, but you get the point. And if I have a bunch of these lines, that means that this subsequence is repeated again and again and again throughout the video. So each of these lines is basically one repetition of the sequence from the middle here at some other point in the video. And that's pretty fascinating, and that's what these self-similarity matrices are showing. Now they don't use the inner product here as a self-similarity metric; they actually use, as you can see right here, the negative square distance, but the effect is the same: negative square distance followed by a row wise softmax operation.
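That computation is compact enough to write out. A sketch of the matrix construction as just described, from the (64, 512) per-frame embeddings (any temperature scaling on the softmax logits is omitted here):

```python
import torch
import torch.nn.functional as F

def self_similarity_matrix(emb):
    """emb: (T, D) per-frame embeddings -> (T, T) temporal self-similarity."""
    sq_dist = torch.cdist(emb, emb) ** 2  # pairwise squared L2 distances between frames
    return F.softmax(-sq_dist, dim=1)     # negative square distance, row-wise softmax

tsm = self_similarity_matrix(torch.randn(64, 512))  # single-channel 64x64 "image"
```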
So you could say: aha, we are basically done. Having this self-similarity matrix, we could simply take each row right here and plot the intensities across that row, and that's maybe, you know, something like this, and then the diagonal is a bit higher. Okay, we could just use a heuristic to detect these bumps here, basically count the bumps and calculate the period length. That should be pretty easy with a simple heuristic. But the authors here want more; they want to solve more problems. So what are some of the problems? We already saw some of the problems. Namely, for example, here is the hammer throw. The hammer throw starts out slow and gets faster and faster and faster, and you can see this pretty clearly in the lines right here. Namely, if you go through time, so you start off here and you go through time, you can see that the distance to the next line here is fairly large, but as you go through time further, the distance gets shorter, and you go through time further, the distance gets even shorter. So this pattern of lines here that's kind of converging indicates that this repeating action gets faster and faster and faster. This is nice to see at the bouncy ball example, where you can see it starts out pretty slow, but it gets faster and faster. And here, if you have this full block right here, that basically means all the frames are self similar to each other, which basically means if you stop the video, right, if you have 10 frames in a row of the same thing, the ball is just lying on the ground, all of these frames will be self similar, so there's probably no bouncing happening down here. You can see pretty well from the pattern what happens. And here, in this mixing concrete example that we saw at the beginning, you can see that at the beginning and at the end there's this intro and outro sequence, and only in the middle is there a repeating action, and that's indicated by this line pattern being only in the middle of the video, only between here and here. So it's going to be pretty difficult to just have a heuristic that reads out this periodicity, and in true deep learning fashion, the authors here... oh sorry, maybe you can't see that, I've shifted my recording window, so maybe sometimes something's out of frame, and you have to yell at me if I do that, please. So I hope you saw this: you have the speeding up here and here, visible in the pattern, and then here you have the beginning sequence and the end sequence that have no repeating pattern, and the repeating pattern only emerged in the middle. So the authors want to do this through, of course, a deep learning network. They want to read out the periodicities not through a heuristic, but using a deep network; you know, respectable, that's the times we live in. So what do they do? First of all, you have to see right here: everything that happens from here on, as I understand it, is per frame. They simply take a row of this matrix right here, like this red line, and that is independently pulled through to the end, so there is no interaction happening anymore between the individual frames' data. The only interaction that happens is a little bit here at the temporal convolutions, but the only real interaction between the frames is happening through the self-similarity matrix, and again, this is the information bottleneck that the authors force the information through. Everything happening from here... no, that's actually not quite right, there is this convolution right here, but still, this is the information bottleneck you have to go through. So right here we process this image using a convolution. This is an image, right, and we can process it using a convolutional neural network. So what we do is: we have a 64 by 64 image in one channel, and we, not upsample, but expand the channels to 32 channels. Now, as I said, it's pretty easy to think we could just go to the end here, use a conv net to produce our final 512-dimensional embeddings, which we have here, again 64 of them, that we then use to predict the final result. But the authors here do something different: they do transformer layers in the middle, but only per frame. So what does that mean?
So here they upsample to 32 channels, and that means that one of these blocks right here corresponds to one row in the self-similarity matrix, which corresponds to one frame. And from now on, I want to say what I said before: from now on it's all just this one block; they are independent of each other. Okay, so you take this one block and you feed it through a transformer to arrive at your final embedding of 512, and it's probably best if we read what they say about it. Okay: given this self-similarity matrix, consisting of rows where each row is the per frame self-similarity representation, the module generates two outputs: the per frame period length estimation and the per frame binary periodicity classification. Note that both l and p are vectors, and their elements are per frame predictions. The architecture of the period predictor module can be viewed in figure two. Note that the predictors share a common architecture and weights until the last classification phase. The shared processing pipeline starts with 32 2D convolutional filters of size three by three, followed by a transformer layer which uses multi-headed attention with trainable positional embeddings in the form of a 64 length variable that is learned by training. Okay, so I guess the transformer is learned by training, and the positional embeddings are also learned by training; that's fairly common. They use four heads with 512 dimensions in the transformer; by the way, if you don't know what a transformer is, watch the video on Attention Is All You Need, I made one, it's very popular. Yeah, so with each head being 128 dimensions in size. After the shared pipeline, we have two classifiers: a period length classifier and a periodicity classifier tau; sorry, this is phi, this is tau. Each of them consists of two fully connected layers of size 512. So the pipeline here is pretty simple. The question could be: why do they use a transformer and not simply another convolutional network? So here they upsample the image, as we saw, into 32 channels, and then they simply want to take one of these blocks here. What does that correspond to? For one frame, we basically have 64 by 32 things. The 64 things, that's this one frame's temporal connection to each other frame, given that, you know, it comes from this self-similarity matrix. So it kind of relates this frame that we're considering to each of the other frames, and each of these entries is a 32-sized vector. So you can consider this like a sequence of 64 things, 64 embeddings. So to use a transformer here is pretty natural if you think of this as a sequence transformation task, I would guess. If there are these peaks right here, like we saw right here, the transformer can make very good sense of that, because of course, with the attention mechanism, from one peak it can attend to all the other peaks and can sort of relate the different peaks to each other and then determine the period length. Whereas with a convolutional network, I guess that's going to be a lot harder, because of the sort of invariance built into the convolution. I'm not sure, maybe it also just worked better. But that's how I think about it: for a given frame, you basically have a sequence classification or a set classification task, and the attention mechanism allows you, in one single step, to connect each peak with each other peak, or each piece of information with each other piece of information, in this sequence.
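A sketch of that period predictor, following the quoted numbers (32 convolutional filters of 3x3, a per-frame transformer with 4 heads at 512 dimensions, learned positional embeddings of length 64, and two heads of two fully connected layers). The projection from 32 up to 512 and the mean-pooling over the sequence are my assumptions about details the description leaves open:

```python
import torch
import torch.nn as nn

class PeriodPredictor(nn.Module):
    def __init__(self, T=64, d=512, heads=4, period_bins=32):
        super().__init__()
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)   # 32 filters, 3x3
        self.proj = nn.Linear(32, d)                             # assumed projection to d=512
        self.pos = nn.Parameter(torch.zeros(T, d))               # trainable positional embeddings
        self.attn = nn.TransformerEncoderLayer(d_model=d, nhead=heads, batch_first=True)
        self.period_len = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, period_bins))
        self.periodicity = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, tsm):                      # tsm: (B, T, T) self-similarity matrix
        B, T, _ = tsm.shape
        x = self.conv(tsm.unsqueeze(1))          # (B, 32, T, T)
        # Each frame's row becomes its own length-T sequence -- frames stay independent.
        x = x.permute(0, 2, 3, 1).reshape(B * T, T, 32)
        x = self.attn(self.proj(x) + self.pos)   # per-frame transformer layer
        x = x.mean(dim=1).reshape(B, T, -1)      # pool the sequence per frame (assumed)
        return self.period_len(x), self.periodicity(x)   # (B, T, bins), (B, T, 1)

lengths, periodic = PeriodPredictor()(torch.randn(2, 64, 64))
```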
All right. So at the end you have fully connected layers again, only on a per-frame basis, and that gives you the output. Again, you compare this to the label and you backprop through everything; everything here is differentiable, so all of this is trained to achieve the minimum possible loss. And because you train everything toward the minimal possible loss, you shape this encoder, which is the crucial part, because the encoder must give you good embeddings, which must give you a sensible self-similarity matrix. You train the encoder to encode things that are relevant for the task, and that's what makes the whole thing work.

Okay, so we've gone through the architecture. They also go into how they do inference: they can actually do a bunch of things, like play the video at different speeds and then look at each of the predictions. So if at double speed it predicts half the period length, then you can be more sure, and so on. That's pretty cool.

But they go into another point right here, and that's the data set. They produce this Countix data set, but on the other hand, and this is something I also find very cool, they produce a synthetic data set. Here they say they train with synthetic repetitions. I didn't know what to think of it at first, but it's pretty cool. So if you have a video, these are the frames of the video, and the video goes in this temporal direction. What you can do is simply go through these frames, take a few of them, and just repeat them and repeat them and repeat them, and then continue with the remaining frames. Then you have a data set. And if you assume that most videos do not naturally contain repeating actions, most videos are just videos, not videos of something repeating, then you can safely assume that these parts here are non-repeating and these parts here are repeating. That is one of the labels that you need; the problem with a synthetic data set is always to have the labels. You also know how many repetitions there are, because you can simply count the number of times you loop through the snippet, and you can even make it faster or slower, and so on. So this synthetic approach is pretty cool.

Especially the variant at the bottom, because the naive version is kind of hacky: each time you jump from the end of one of those arrows back to the beginning, you get a jump cut in the video, because it's not continuous. So what you can do, and this is the bottom variant here, is this reversal technique, where you go to the end, then play the frames backwards, then forwards again, backwards again, forwards again, and then continue, and that gives you one continuous motion. If it's simply a video of someone lifting their hand, it starts out down here and goes up, and if you do this technique, it would go down again, up again, down again, up again, and so on. So I think it's a fairly smart technique, honestly.
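Here is a small NumPy sketch of that synthetic-repetition scheme as I understand it: tile a snippet of an assumed non-repeating video, optionally with the forward-backward reversal trick, and read the labels off for free. The function name, argument names, and the uniform period-length label are my illustration, not the paper's pipeline.

```python
import numpy as np

def make_synthetic_repetition(frames, start, length, count, mirror=False):
    """Build a training clip with known labels from a (presumed
    non-repeating) video. frames: (T, H, W, C) uint8 array."""
    snippet = frames[start:start + length]
    if mirror:
        # reversal trick: play forward then backward so the loop point
        # is continuous instead of jumping back to the first frame
        snippet = np.concatenate([snippet, snippet[::-1]], axis=0)
    rep = np.tile(snippet, (count, 1, 1, 1))  # repeat the snippet `count` times
    clip = np.concatenate([frames[:start], rep, frames[start + length:]], axis=0)
    # labels come for free: periodicity is 1 only inside the repeated span
    periodicity = np.zeros(len(clip), dtype=np.int64)
    periodicity[start:start + len(rep)] = 1
    period_len = np.zeros(len(clip), dtype=np.int64)
    period_len[start:start + len(rep)] = len(snippet)
    return clip, periodicity, period_len

# usage on a dummy 100-frame video
video = np.random.randint(0, 255, (100, 64, 64, 3), dtype=np.uint8)
clip, p, l = make_synthetic_repetition(video, start=20, length=8, count=5, mirror=True)
print(clip.shape, p.sum(), l.max())  # repeated span is 5 * 16 = 80 frames long
```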
Now they tried this, and on its own it doesn't work super well, so what they also have to do is manual camera motion augmentation. Camera motion augmentation basically means that if you just loop a repeating action like this, it's too monotonic, I guess; it doesn't really cover real videos with repeating actions. So what they do is simulate a moving camera, much like you would do image augmentation: you can rotate the camera over time, you can translate it, you can scale it differently, and if you do that throughout the video and vary how the camera moves, that appears to work fairly well.

So if they now evaluate this, they perform pretty well on their data set. For the data set, they take the Kinetics data set and crowdsource the labels, and the tasks in the data set are pretty diverse, as you can see: you have sports like rope training and mountain climbers, but also things like playing ukulele, exercising arms, slicing an onion, and so on. And you can see that the repetition count is fairly diverse as well, from one or two repetitions per video up to 50 or so, and the period length is between one and five seconds. Though, as I already said, you don't have to count on that, because you can always play the video slower or faster and then determine other periodicities.

In their experiments, first of all, they perform pretty well, and they show that if they train on both their data set and the synthetic data set, they perform better than if they train on just the synthetic data or just their data set. They also show pretty clearly that the addition of this temporal self-similarity matrix helps tremendously. You can see that right here; each of these boxes is a comparison, and OBO, I think, is the off-by-one error: it forgives you if you're off by one count, but otherwise you get a zero if you're wrong. And you can see that the self-similarity matrix helps tremendously. They also compare with some other architectural choices instead of the transformer; I guess they just take the transformer because it performs pretty well. And they do a lot of ablations.

But what I particularly appreciate is that they do something like this: once they've trained the architecture, they do a 1D PCA projection of the encoder features over time. Now, the encoder features were 512-dimensional; this is the thing before it goes into the self-similarity matrix. As we said, the encoder is the crucial part here, because it needs to take the video and encode things that make it accessible to calculating the self-similarity. So they do a 1D PCA, a projection of these features into one dimension, and you can already see in this one-dimensional projection that the periodicity is clearly visible: for example, every peak up here is when the legs are up, and every dip down here is when the legs are down. That is very impressive, and it really shows that the model is doing what you claim it's doing. I'm almost more interested in experiments like this than in the raw numbers, because the numbers could always just be because you've thrown more stuff at the problem.
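The mechanics of that sanity check are simple enough to sketch. Below, the per-frame embeddings are faked as a noisy rank-one periodic signal just to show the idea; in the real check they would come from the trained encoder, and the crossing-counting readout at the end is my own crude illustration, not anything from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# stand-in for the (64, 512) per-frame encoder embeddings of one video:
# a periodic direction (period 8 frames) plus small noise
t = np.arange(64)
embeddings = np.sin(2 * np.pi * t / 8)[:, None] * np.random.randn(1, 512) \
             + 0.1 * np.random.randn(64, 512)

proj = PCA(n_components=1).fit_transform(embeddings).ravel()  # (64,) 1D projection

# crude periodicity readout: count sign changes of the centered projection
crossings = np.sum(np.diff(np.sign(proj - proj.mean())) != 0)
print("estimated period ~", 2 * len(proj) / max(crossings, 1), "frames")  # ~8
```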
They then go over a bunch of possible applications of their model. First of all, you can do repetition counting from videos and periodicity detection; those are the things the model is trained to do. But there's also a bunch of things that the model can now implicitly do. One is something like change inspection, where they say: look, if someone's chopping this pineapple right here, then at the end of each repetition something has changed, namely the number of slices of pineapple, or is it bread? I think it's pineapple. Okay, so the number of slices or pieces changes, so in essence this could be the basis for another model estimating what changed, or training to recognize the number of pieces, and so on. Also, you can detect speed: whether a repeating action is performed slowly or fast is something this model can implicitly pick up. And there's what they call cross-period retrieval: if you know when the repetitions are, you know that, say, the first frame of each repetition, always on the upswing right here, should be fairly similar with respect to the repeating action. You can see that even though the frames where the kid on the swing is close to the camera look fairly different in a purely visual, pixel sense, they are at the same point in the repeating action. That's pretty cool: you can technically retrieve related frames even though visually they don't look that similar.

The applications here are probably manifold. I also think that, in this Measure of Intelligence paper by François Chollet, he basically claims that counting is one of the innate abilities of humans: you can count things, it's something you're basically born with. So maybe this thing right here will become sort of a staple component for many other things that we build AI on; I would not be surprised. But maybe it will just fade into history. I think it's a pretty cool project, especially the architectural choice to pull everything through this self-similarity matrix; just looking at this matrix already makes you kind of know that this thing works. All right, this was it from me. Let me know in the comments what you think about the paper, and check out the website; it has a lot of video demo examples of what they're doing, and I think the data set as well. And yeah, I'll see you next time. Bye bye.
[ { "start": 0, "end": 7.68, "text": " Hi there! Check out these videos on the top. Each one kind of contains a repeating action." }, { "start": 7.68, "end": 12.92, "text": " So on the left you see someone doing jumping jacks in a fairly regular pattern. In the" }, { "start": 12.92, "end": 18.64, "text": " middle it gets a bit more difficult because what you see is a tennis ball bouncing and" }, { "start": 18.64, "end": 24.560000000000002, "text": " it bounces faster and faster and faster as time goes on. On the right you see that there" }, { "start": 24.56, "end": 31.279999999999998, "text": " is a short intro sequence before the repeating action, the person shoveling the cement, is" }, { "start": 31.279999999999998, "end": 38.72, "text": " displayed. So the goal here is to build an AI that can detect that a repeating action" }, { "start": 38.72, "end": 44.8, "text": " is happening and if it detects so that it can count how often this repeating action" }, { "start": 44.8, "end": 51.36, "text": " is happening. You can already see the difficulties here is not only the recognition itself but" }, { "start": 51.36, "end": 56.2, "text": " the fact that the repeating actions can be different and can be of different length and" }, { "start": 56.2, "end": 62.3, "text": " cannot always look the same and so on. So this paper uses these temporal self-similarity" }, { "start": 62.3, "end": 68.14, "text": " matrices that you see at the bottom here to achieve this and we're going to explore how" }, { "start": 68.14, "end": 74.7, "text": " that's happening. So the paper we'll look at is called Counting Out Time Class Agnostic" }, { "start": 74.7, "end": 84.08, "text": " Video Repetition Counting in the Wild by Debidada Dwebidi, Yusuf Aitar, Jonathan Thompson, Pierre" }, { "start": 84.08, "end": 92.32000000000001, "text": " Sermonet and Andrew Zizerman of Google Research and DeepMind. So as I already said this paper" }, { "start": 92.32000000000001, "end": 98.64, "text": " detects repeating actions and is able to count them and on a high level what they do is they" }, { "start": 98.64, "end": 104.82, "text": " encode the video using convolutional networks then they build these temporal self-similarity" }, { "start": 104.82, "end": 111.52, "text": " matrices between the frames in order to detect the repetitions and they decode that into" }, { "start": 111.52, "end": 118.56, "text": " the predictions using another neural network. This is all trained end to end and they also" }, { "start": 118.56, "end": 124.2, "text": " make a new data set for this task. So that's the high level. If you want to find out how" }, { "start": 124.2, "end": 130.28, "text": " exactly that's done I invite you to stick around because we'll go into the paper. If" }, { "start": 130.28, "end": 135.3, "text": " you like content like this don't forget to share it out, leave a like and tell me what" }, { "start": 135.3, "end": 141.66, "text": " you think of the paper in the comments. I do read the comments so yeah I'm very happy" }, { "start": 141.66, "end": 149.96, "text": " to read what you all think about it. Okay so as we already said they say we present" }, { "start": 149.96, "end": 155, "text": " an approach for estimating the period with which an action is repeated in a video and" }, { "start": 155, "end": 161.32, "text": " that's actually understating what the problem is here. The problem is many manyfold. 
As" }, { "start": 161.32, "end": 166.96, "text": " you can see on the right here even if you don't get what this self-similarity matrix" }, { "start": 166.96, "end": 174.64000000000001, "text": " is yet the outputs that you want are at the bottom. So what you want is first of all a" }, { "start": 174.64, "end": 181.39999999999998, "text": " per frame periodicity prediction. So that means that for each frame you want to know" }, { "start": 181.39999999999998, "end": 186.23999999999998, "text": " is there even a repeating action happening or not in that particular frame. You can see" }, { "start": 186.23999999999998, "end": 190.72, "text": " here at the beginning at the end of the video there is no repeating action and then in the" }, { "start": 190.72, "end": 195.44, "text": " middle there is a repeating action. So that's the first thing you want to know. The second" }, { "start": 195.44, "end": 202.72, "text": " thing is this per frame period length prediction. So for each frame that is part of a repeating" }, { "start": 202.72, "end": 208.52, "text": " action you want to know what is the period length of that repeating action and that can" }, { "start": 208.52, "end": 213.2, "text": " change throughout the video. So you need it per frame and once you have it per frame you" }, { "start": 213.2, "end": 221, "text": " can actually count the number of repetitions. So those are two problems already. The third" }, { "start": 221, "end": 227.32, "text": " problem that this paper solves is that there is no adequate data set to train a model like" }, { "start": 227.32, "end": 236, "text": " this it seems. For example this model right here this QUVA data set I believe has 100" }, { "start": 236, "end": 241.28, "text": " data points and these are meant for testing. So you would build your system somewhere and" }, { "start": 241.28, "end": 247.35999999999999, "text": " then you would test it on those 100 data points. But they claim not even those are large and" }, { "start": 247.35999999999999, "end": 254.26, "text": " diverse enough for these systems to be evaluated. So they build. We also collect a new challenging" }, { "start": 254.26, "end": 260.32, "text": " data set called Countix which is 90 times larger than existing data sets which captures" }, { "start": 260.32, "end": 265.36, "text": " the challenges of repetition counting in real world videos and that actually consists of" }, { "start": 265.36, "end": 273.64, "text": " a train and test split. So you can also train a system using it. Let's dive into the architecture." }, { "start": 273.64, "end": 280.68, "text": " I think this paper is a very very very cool example of even though we're in this deep" }, { "start": 280.68, "end": 286.28000000000003, "text": " learning paradigm where you just throw neural networks at a problem it's a very cool example" }, { "start": 286.28000000000003, "end": 294.16, "text": " that you can still achieve a lot by smartly constructing this because we used to achieve" }, { "start": 294.16, "end": 300.04, "text": " a lot by smartly constructing features and so on. In this case the goal is achieved by" }, { "start": 300.04, "end": 305.92, "text": " smartly constructing the architecture of the neural network itself to give you back a good" }, { "start": 305.92, "end": 313.44, "text": " performance on the particular task right here. Okay so if that tablet lets me actually scroll" }, { "start": 313.44, "end": 319.56, "text": " around let's go to the architecture. So figure two shows the architecture in more detail." 
}, { "start": 319.56, "end": 324.12, "text": " So we'll go through it from the beginning to the end. Actually let's go let's go to" }, { "start": 324.12, "end": 330.20000000000005, "text": " the end so you know what is supposed to happen. So for each frame in this video what we'll" }, { "start": 330.2, "end": 336.15999999999997, "text": " need is a period length and a periodicity. These are two so the bottom is a binary variable" }, { "start": 336.15999999999997, "end": 344.15999999999997, "text": " and the top is a it's actually a number but it is predicted in kind of binned as a classification" }, { "start": 344.15999999999997, "end": 349.15999999999997, "text": " task. It doesn't really matter we need two outputs that we can compare with the labels" }, { "start": 349.15999999999997, "end": 355.82, "text": " right. In this case the videos are of length 64 so there's 64 frames per video and for" }, { "start": 355.82, "end": 361.92, "text": " each of those frames we want a period length and for each of those frames we want a periodicity" }, { "start": 361.92, "end": 369.2, "text": " binary prediction. And that as I said we compare it with our labels and then we can calculate" }, { "start": 369.2, "end": 374.8, "text": " a loss. So this is the loss the loss is at the end comparing these two labels and then" }, { "start": 374.8, "end": 382.48, "text": " everything else is trained using back propagation on that loss. Okay so now with that in mind" }, { "start": 382.48, "end": 388.46000000000004, "text": " let's go to the beginning. So the video is taken and fed through an encoder in order" }, { "start": 388.46000000000004, "end": 394.6, "text": " to produce these per frame embeddings. So we want an embedding for each frame that means" }, { "start": 394.6, "end": 402.88, "text": " for each of the 64 frames we want to obtain one vector of length 512 that describes that" }, { "start": 402.88, "end": 407.94, "text": " particular frame in terms that the model can understand. And we do that using an encoder." }, { "start": 407.94, "end": 414.44, "text": " Now the encoder it has a bunch of parts to it. It's not just a blob as you see right" }, { "start": 414.44, "end": 421.64, "text": " here. So the encoder consists of three things. First of all there's this convolutional feature" }, { "start": 421.64, "end": 427.62, "text": " extractor which is a ResNet 50 architecture. It's simply a convolution and you let the" }, { "start": 427.62, "end": 433.88, "text": " convolutional neural network run on each frame independently. This is simply a feature extractor" }, { "start": 433.88, "end": 441.96, "text": " from images right like you know it from any other image processing task. But of course" }, { "start": 441.96, "end": 452.6, "text": " here we have a video. So it would be nice if the frames knew something about the other" }, { "start": 452.6, "end": 460.64, "text": " frames right. Especially if you think of something like a jumping jack. If you are in this position" }, { "start": 460.64, "end": 466.91999999999996, "text": " right here. It doesn't tell you everything about that video frame. If you consider the" }, { "start": 466.91999999999996, "end": 473, "text": " frame before it and maybe the frame before it is this and maybe the frame after it is" }, { "start": 473, "end": 482.91999999999996, "text": " that, you can clearly see that the hands or the arms are in an upward motion. 
So the next" }, { "start": 482.91999999999996, "end": 489.2, "text": " step of the encoder tries to integrate that temporal information into the embeddings and" }, { "start": 489.2, "end": 498.03999999999996, "text": " that is achieved via a 3D convolution. So once we process each frame individually then" }, { "start": 498.03999999999996, "end": 505.52, "text": " we feed it into one layer through a layer of 3D convolution to add local temporal information" }, { "start": 505.52, "end": 510.96, "text": " to the per frame features. So if you don't know what a 3D convolution is, this already" }, { "start": 510.96, "end": 517.68, "text": " drew. So in a 2D convolution what you want to do is you want to have this filter right" }, { "start": 517.68, "end": 524.3599999999999, "text": " here which is a 2D filter for each of the channels and you slide it across the image" }, { "start": 524.3599999999999, "end": 530.2399999999999, "text": " like this. And you can have multiple channels of the input image right here and you can" }, { "start": 530.2399999999999, "end": 535.4399999999999, "text": " actually do this multiple times which corresponds to multiple output channels of the filters" }, { "start": 535.4399999999999, "end": 540.8399999999999, "text": " but the actual convolution is happening in two dimensions. So the sliding is across the" }, { "start": 540.8399999999999, "end": 546.7199999999999, "text": " width and the height of the image. In contrast to that if you have a 3D convolution and you" }, { "start": 546.72, "end": 553, "text": " have the same input stack of images and now this we have to you have to pay attention" }, { "start": 553, "end": 560.36, "text": " here. This right this stack right here is the individual channels of one image so of" }, { "start": 560.36, "end": 568.6800000000001, "text": " one video frame. This stack right here that I'm drawing these are the video frames stacked." }, { "start": 568.6800000000001, "end": 573.32, "text": " Now each of these stacked video frames can have multiple channels resulting from its" }, { "start": 573.32, "end": 580.6400000000001, "text": " 2D convolution or even just RGB. So I can't really draw in four dimensions but now we" }, { "start": 580.6400000000001, "end": 588.96, "text": " stack the video frames and now our kernel our filter will be also in the direction of" }, { "start": 588.96, "end": 595.6800000000001, "text": " the video frame. So I don't know if this is really recognizable if I draw it like this" }, { "start": 595.6800000000001, "end": 602.5200000000001, "text": " but as you can see the kernel is not only 2D but 3D and now the sliding so if we have" }, { "start": 602.52, "end": 607.4, "text": " actually more than that the sliding is not only done into the direction of height and" }, { "start": 607.4, "end": 613.64, "text": " width but also into the direction of depth so that each of the frames each of the video" }, { "start": 613.64, "end": 619, "text": " frames right here can incorporate information from its immediate neighbors in case of a" }, { "start": 619, "end": 628.1999999999999, "text": " 3x3x3 filter or even more neighbors but here we use a 3x3x3 filter. 
Okay so that's that's" }, { "start": 628.2, "end": 632.5600000000001, "text": " how we obtain these embeddings right here and at the end there is a dimension reduction" }, { "start": 632.5600000000001, "end": 637.6400000000001, "text": " like a max pooling or something but ultimately what you'll end up with is for each of the" }, { "start": 637.6400000000001, "end": 644.96, "text": " 64 video frames you get one vector and that vector mainly describes that particular video" }, { "start": 644.96, "end": 651.5200000000001, "text": " frame like if you consider the green one here but also contains information from the video" }, { "start": 651.52, "end": 658.52, "text": " frame before it and from the video frame after it. Okay so that's sort of and and that's" }, { "start": 658.52, "end": 664, "text": " that's the so the temporal convolution is not to detect the periodic actions because" }, { "start": 664, "end": 668.8, "text": " it's just one frame into the future and into the past it is more like what we said here" }, { "start": 668.8, "end": 674.3, "text": " in order to give you extra information of what's happening in a particular frame because" }, { "start": 674.3, "end": 681, "text": " especially for periodicity it's actually important if the arms are going up or down. Cool so" }, { "start": 681, "end": 685.68, "text": " then comes the heart of the architecture. The heart of the architecture is this temporal" }, { "start": 685.68, "end": 692.88, "text": " self-similarity matrix. Now what does this do? This is relating the frames to each other" }, { "start": 692.88, "end": 699.96, "text": " and important to note here this is just a single channel image so there is no other" }, { "start": 699.96, "end": 708.32, "text": " ones of these. For the entire video frame sequence you have one 64x64 matrix and all" }, { "start": 708.32, "end": 714.8000000000001, "text": " the signal has to go through this matrix right everything from here is only through this" }, { "start": 714.8000000000001, "end": 720.2, "text": " matrix there's no residual connections there is no skip connections that's all that's" }, { "start": 720.2, "end": 725.72, "text": " your information bottleneck and by having a bottleneck like this these the authors here" }, { "start": 725.72, "end": 732.2, "text": " force the model to basically do a good job at making this temporal self-similarity happen" }, { "start": 732.2, "end": 738.3000000000001, "text": " if it wants to achieve a low loss and that's what I mean by you can achieve a lot by having" }, { "start": 738.3, "end": 744.24, "text": " a smart architecture. This temporal self-similarity matrix is actually not learned this can be" }, { "start": 744.24, "end": 750.88, "text": " computed deterministically from the embeddings right here so what you do is you each row" }, { "start": 750.88, "end": 757, "text": " here corresponds to one frame so you take each frame that corresponds to a row and you" }, { "start": 757, "end": 766.52, "text": " calculate simply the distance to or the similarity with each of the other frames so this is as" }, { "start": 766.52, "end": 773.68, "text": " you can see it's 64x64 so frame i here is simply compared to each of the other frames" }, { "start": 773.68, "end": 780.3199999999999, "text": " j and depending on whether that embedding is very similar to the embedding of frame" }, { "start": 780.3199999999999, "end": 787.92, "text": " j this number is going to be high or low. 
Now there's also a after that you do a softmax" }, { "start": 787.92, "end": 794.3199999999999, "text": " across here such that this is going to be like a distribution and not just raw numbers" }, { "start": 794.32, "end": 799.38, "text": " but that's ultimately not that important so what you can see is that the diagonal is very" }, { "start": 799.38, "end": 807.2800000000001, "text": " prominent and that makes sense because technically the diagonal is always one right if for example" }, { "start": 807.2800000000001, "end": 811.0600000000001, "text": " if you use the inner product as a similarity the diagonal should always be one but here" }, { "start": 811.0600000000001, "end": 816.12, "text": " we have the softmax so it's not but ultimately we can say that any frame should be very similar" }, { "start": 816.12, "end": 821.6800000000001, "text": " to its is going to be very similar to itself so that's why but then you can see right here" }, { "start": 821.68, "end": 827.8, "text": " there's a pattern emerging and that pattern is these diagonal lines in this direction" }, { "start": 827.8, "end": 834.26, "text": " as you can see and what what does that mean they actually have a larger version of this" }, { "start": 834.26, "end": 843.1999999999999, "text": " down here so what does the diagonal pattern mean it means that so here the diagonal that's" }, { "start": 843.2, "end": 851.9200000000001, "text": " okay frame i is very similar to frame i cool but the other lines what do they mean so if" }, { "start": 851.9200000000001, "end": 857.5600000000001, "text": " i look at frame i right here it is also very similar to this frame j now this wouldn't" }, { "start": 857.5600000000001, "end": 865.48, "text": " be further you know frames are similar but the line means that if i have if i look 10" }, { "start": 865.48, "end": 874.76, "text": " frames later i plus 10 that's very similar to j plus 10 and if i now look at i plus 20" }, { "start": 874.76, "end": 882.96, "text": " frame that's very similar to j plus 20 so this is the this is here why the the pattern" }, { "start": 882.96, "end": 888.16, "text": " is emerging of a line because if i go 10 frames into the future it's similar to the other" }, { "start": 888.16, "end": 895.32, "text": " one 10 frames into the future and so on and that means the line indicates that this entire" }, { "start": 895.32, "end": 902.88, "text": " sequence starting from i 20 frames into the future is repeated starting from j actually" }, { "start": 902.88, "end": 908.7600000000001, "text": " j is earlier here so but you get the point and if i have a bunch of these lines that" }, { "start": 908.7600000000001, "end": 914.5600000000001, "text": " means that this subsequence is repeated again and again and again throughout the video so" }, { "start": 914.5600000000001, "end": 920.96, "text": " each of these lines is basically one repetition of the sequence from the middle here at some" }, { "start": 920.96, "end": 929.9200000000001, "text": " other point in the video and that's pretty fascinating and that's these these self-similarity" }, { "start": 929.9200000000001, "end": 936.32, "text": " matrices that's what they're sort of showing now they don't use the inner product here" }, { "start": 936.32, "end": 941.76, "text": " as a self-similarity metric they actually use as you can see right here they use the" }, { "start": 941.76, "end": 947.08, "text": " negative square distance but the effect is the same so negative square distance followed" }, { "start": 947.08, 
"end": 955.1600000000001, "text": " by a row wise softmax operation so you could say hi we are basically done having this self-similarity" }, { "start": 955.1600000000001, "end": 959.2800000000001, "text": " matrix what we could do let's let's say we could train it we don't worry about how to" }, { "start": 959.2800000000001, "end": 966.6800000000001, "text": " train it we could simply take each row right here and we plot the intensities across that" }, { "start": 966.6800000000001, "end": 970.76, "text": " row and that's maybe you know like something like this and then there's the diagonal is" }, { "start": 970.76, "end": 978.36, "text": " a bit higher okay we could just use a heuristic to detect these bumps here basically calculate" }, { "start": 978.36, "end": 983.6, "text": " the length count the bumps and calculate the period length right that that should be pretty" }, { "start": 983.6, "end": 990.72, "text": " easy with like a simple heuristic but the authors here they want more they want to solve" }, { "start": 990.72, "end": 996.08, "text": " more problems so what are some of the problems we already saw some of the problems namely" }, { "start": 996.08, "end": 1002.2, "text": " for example here is the hammer throw so the hammer throw starts out slow and gets faster" }, { "start": 1002.2, "end": 1007.64, "text": " and faster and faster and you can see this pretty clearly at the lines right here namely" }, { "start": 1007.64, "end": 1014.48, "text": " if you go through time so you start off here and you go through time you can see that the" }, { "start": 1014.48, "end": 1021.6, "text": " distance to the next line here is fairly large but you go through time further the distance" }, { "start": 1021.6, "end": 1027, "text": " gets shorter you go through time further the distance gets even shorter so these pattern" }, { "start": 1027, "end": 1033.44, "text": " of lines here that's kind of converging towards that it indicates that this repeating action" }, { "start": 1033.44, "end": 1039.68, "text": " gets faster and faster and faster this is nice to see here at the bouncy ball example" }, { "start": 1039.68, "end": 1047.8, "text": " where you can see it starts out pretty slow but it gets faster and faster and here you" }, { "start": 1047.8, "end": 1052.9199999999998, "text": " if you if you have this full thing right here that basically means all the frames are self" }, { "start": 1052.9199999999998, "end": 1059.12, "text": " similar to each other which basically means if you stop the video right that's if you" }, { "start": 1059.12, "end": 1063.08, "text": " have 10 frames in a row the same thing the ball is just lying on the ground all of these" }, { "start": 1063.08, "end": 1069.72, "text": " frames will be self similar so there's probably no bouncing happening down here you can see" }, { "start": 1069.72, "end": 1075.68, "text": " pretty well from the pattern what happens and here in this mixing concrete example that" }, { "start": 1075.68, "end": 1081.1200000000001, "text": " we saw at the beginning you can see that at the beginning at the end there's this intro" }, { "start": 1081.1200000000001, "end": 1087.16, "text": " and outro sequence and only in the middle is there a repeating action and that's indicated" }, { "start": 1087.16, "end": 1095.0800000000002, "text": " by this line pattern is only at in the middle of the videos only between here and here so" }, { "start": 1095.0800000000002, "end": 1101.42, "text": " it's it's going to be pretty difficult to just have a heuristic 
that reads out these" }, { "start": 1101.42, "end": 1107.52, "text": " periodic action periodicity and in a true deep learning fashion the the authors here" }, { "start": 1107.52, "end": 1113.96, "text": " oh sorry maybe you can't see that i've shifted my recording window so maybe sometimes something's" }, { "start": 1113.96, "end": 1119.96, "text": " out of frame and you have to yell at me if i do that please so i hope you you saw this" }, { "start": 1119.96, "end": 1126.3600000000001, "text": " that you have the ever the speeding up here and here they're visible in the pattern and" }, { "start": 1126.36, "end": 1131.4799999999998, "text": " then here you have the beginning sequence the end sequence that have no repeating pattern" }, { "start": 1131.4799999999998, "end": 1137.4399999999998, "text": " and the repeating pattern only merged in the middle so the authors want to do this through" }, { "start": 1137.4399999999998, "end": 1143.28, "text": " of course a deep learning network they want to read out the periodicities not through" }, { "start": 1143.28, "end": 1148.9599999999998, "text": " a heuristic but using a deep network you know respectable that's at the times we live in" }, { "start": 1148.96, "end": 1156.8400000000001, "text": " so what do they do first of all you have to see right here everything that happens from" }, { "start": 1156.8400000000001, "end": 1164.68, "text": " here as i understand it is per frame so they simply take a row of this matrix right here" }, { "start": 1164.68, "end": 1173.28, "text": " like this red line and that is independently pulled through to the end so there is no interaction" }, { "start": 1173.28, "end": 1179.48, "text": " happening anymore between the individual frame data the only interaction that happens is" }, { "start": 1179.48, "end": 1184.8799999999999, "text": " a little bit here at the temporal convolutions but the only real interaction between the" }, { "start": 1184.8799999999999, "end": 1190.76, "text": " frames is happening through the self-similarity matrix and again this is the information bottleneck" }, { "start": 1190.76, "end": 1198.1399999999999, "text": " that the authors force the information through everything happening from here no that's actually" }, { "start": 1198.14, "end": 1203.3200000000002, "text": " not right there is this convolution right here but still this is the information bottleneck" }, { "start": 1203.3200000000002, "end": 1210.5600000000002, "text": " you have to go through so right here we process this image using a convolution so this is" }, { "start": 1210.5600000000002, "end": 1219.64, "text": " an image right and we can process it using a convolutional neural network so what we" }, { "start": 1219.64, "end": 1226.3200000000002, "text": " do is we have a 64 by 64 image in one channel we simply up sample that not up sample but" }, { "start": 1226.32, "end": 1232.2, "text": " we expand the channels to 32 channels now as i said it's pretty easy to think we can" }, { "start": 1232.2, "end": 1240.2, "text": " just go to the end here use a conv net to produce our final 512 by so 512 embeddings" }, { "start": 1240.2, "end": 1248.6, "text": " we have here again 64 by 64 that we then use to predict the final result but the authors" }, { "start": 1248.6, "end": 1255.8799999999999, "text": " here do something different they do transformer layers in the middle but only per frame so" }, { "start": 1255.88, "end": 1267.5600000000002, "text": " what does it mean so here you up sample to 32 channels and then that 
means that one of" }, { "start": 1267.5600000000002, "end": 1274.2800000000002, "text": " these blocks right here one of these blocks corresponds to one row in the self-similarity" }, { "start": 1274.2800000000002, "end": 1280.3600000000001, "text": " matrix which corresponds to one frame and from now on so from now on i want to say what" }, { "start": 1280.36, "end": 1288.6, "text": " i said before from now on it's all just this one block they are independent of each other" }, { "start": 1288.6, "end": 1295.6399999999999, "text": " okay so you take this one block and you feed it through a transformer to achieve at your" }, { "start": 1295.6399999999999, "end": 1307.08, "text": " final embedding of 512 and it's probably best if we read what they say about it okay so" }, { "start": 1307.08, "end": 1314.8799999999999, "text": " if we're given this self-similarity matrices matrix they consist of row each row is the" }, { "start": 1314.8799999999999, "end": 1320.4399999999998, "text": " per frame self-similarity representation and generates two outputs the per frame period" }, { "start": 1320.4399999999998, "end": 1325.98, "text": " length estimation and the per frame binary periodicity classification note that both" }, { "start": 1325.98, "end": 1334.9199999999998, "text": " l and p are vectors and their elements are per frame predictions okay the architecture" }, { "start": 1334.92, "end": 1339.2, "text": " of the period predictor module can be viewed in figure two note that the predictors share" }, { "start": 1339.2, "end": 1345.76, "text": " a common architecture and waits until the last classification phase the shared processing" }, { "start": 1345.76, "end": 1354.3600000000001, "text": " pipeline starts with starts with 32 2d convolutional filters of size three by three followed by" }, { "start": 1354.3600000000001, "end": 1360, "text": " a transformer layer which uses a multi-headed attention with trainable positional embeddings" }, { "start": 1360, "end": 1367.44, "text": " in the form of a 64 length variable that is learned by training okay it's i guess the" }, { "start": 1367.44, "end": 1373.18, "text": " transformers learned by training and the positional embeddings are also learned by training that's" }, { "start": 1373.18, "end": 1379.66, "text": " fairly common we use four heads with 512 dimension in the transformer by the way if you don't" }, { "start": 1379.66, "end": 1384.66, "text": " know what a transformer is watch the video on attention is all you need i made one it's" }, { "start": 1384.66, "end": 1392.8400000000001, "text": " very popular yeah so with each head being 128 dimensions in size after the shared pipeline" }, { "start": 1392.8400000000001, "end": 1400, "text": " we have two classifiers period length classifier and periodicity classifier tau sorry this" }, { "start": 1400, "end": 1404.8400000000001, "text": " is fine this is tau each of them consists of two fully connected layers of size 512" }, { "start": 1404.8400000000001, "end": 1410.88, "text": " so i guess the the the pipeline here is pretty simple the question could be why do they use" }, { "start": 1410.88, "end": 1418.4, "text": " a transformer and not simply another convolutional network so here they up sample the image as" }, { "start": 1418.4, "end": 1426.22, "text": " we saw into 32 channels and then they simply want to take one of these one of these blocks" }, { "start": 1426.22, "end": 1433.2600000000002, "text": " here and that corresponds a little bit so we have for one frame right what does it 
mean" }, { "start": 1433.26, "end": 1445.64, "text": " we have basically we have 64 by 32 things so the 64 things it's this one frames temporal" }, { "start": 1445.64, "end": 1451.16, "text": " connection to each other frame given that you know comes from this self-similarity matrix" }, { "start": 1451.16, "end": 1457.36, "text": " so it kind of relates this frame that we're considering to each of the other frames and" }, { "start": 1457.36, "end": 1464.28, "text": " each of this each of these entries is a 32 size vector this is sort of a this is you" }, { "start": 1464.28, "end": 1472.3999999999999, "text": " can consider like a sequence of 64 things 64 embeddings so to use a transformer here" }, { "start": 1472.3999999999999, "end": 1479.6799999999998, "text": " it's pretty natural if you think of this as like a sequence transformation task i i would" }, { "start": 1479.68, "end": 1488.5600000000002, "text": " guess so the transformer can if there are these peaks right here like we saw right here" }, { "start": 1488.5600000000002, "end": 1494.48, "text": " the transformer can make very good sense of that because of course the attention mechanism" }, { "start": 1494.48, "end": 1502.76, "text": " from a one peak it can attend to all the other peaks and can sort of relate the different" }, { "start": 1502.76, "end": 1507.72, "text": " peaks to each other and then determine the periodicity length whereas with a convolutional" }, { "start": 1507.72, "end": 1514.76, "text": " network i guess that's going to be a lot harder because of the sort of invariance built into" }, { "start": 1514.76, "end": 1520.08, "text": " the convolution i'm not sure maybe they also it just worked better but that's how i think" }, { "start": 1520.08, "end": 1527.24, "text": " about it it's that for a given frame you basically have a sequence classification or a set classification" }, { "start": 1527.24, "end": 1534.88, "text": " task and the attention mechanism allows you to in one single step connect each peak with" }, { "start": 1534.88, "end": 1541.96, "text": " each other peak or each information with each other information in this sequence all right" }, { "start": 1541.96, "end": 1547.6000000000001, "text": " so at the end you have just fully connected layers again only on a per frame basis and" }, { "start": 1547.6000000000001, "end": 1553.3600000000001, "text": " that will give you the output and again you compare this to the label and you back prop" }, { "start": 1553.3600000000001, "end": 1559.72, "text": " through everything everything here is differentiable so all of this is trained to achieve minimum" }, { "start": 1559.72, "end": 1565.88, "text": " possible loss and because you train everything to achieve minimal possible loss you make" }, { "start": 1565.88, "end": 1571.08, "text": " this encoder right here which is the crucial part because the encoder is must give you" }, { "start": 1571.08, "end": 1576.76, "text": " good embeddings which must give you a sensible self-similarity matrix right you train the" }, { "start": 1576.76, "end": 1583.84, "text": " encoder to encode things that are relevant for the task and that's what makes the whole" }, { "start": 1583.84, "end": 1594.9199999999998, "text": " thing work okay so we've gone through the architecture now the problem right here is" }, { "start": 1594.9199999999998, "end": 1603.9199999999998, "text": " the the data set so they also go into how they do inference they can actually do a bunch" }, { "start": 1603.9199999999998, "end": 
1608.9399999999998, "text": " of things like play the video at different speeds and then look at what each of the predictions" }, { "start": 1608.94, "end": 1614, "text": " so if a double speed it predicts half the period length then you can be more sure and" }, { "start": 1614, "end": 1621.8400000000001, "text": " so on so that's pretty cool but they go into another point right here and that's the data" }, { "start": 1621.8400000000001, "end": 1629.04, "text": " set so they produce this countix data set but also on the other hand which is something" }, { "start": 1629.04, "end": 1636.8200000000002, "text": " I also find very cool is they produce a synthetic data set so here they say we train with synthetic" }, { "start": 1636.82, "end": 1645.3999999999999, "text": " repetitions and that can be sort of I didn't know what to think of it at first I was just" }, { "start": 1645.3999999999999, "end": 1651.8799999999999, "text": " like huh but then it's pretty cool so if you have a video with these these are the frames" }, { "start": 1651.8799999999999, "end": 1657.84, "text": " of the video right so the video goes in this temporal direction what you can do is simply" }, { "start": 1657.84, "end": 1664.8, "text": " go here go through these frames and just repeat these frames and repeat them and repeat them" }, { "start": 1664.8, "end": 1669.94, "text": " and at the end you have these frames right and then you have a data set and if you if" }, { "start": 1669.94, "end": 1676.1599999999999, "text": " you assume that most videos do not naturally contain repeating actions right most videos" }, { "start": 1676.1599999999999, "end": 1682.32, "text": " are just videos they're not videos of something repeating then you can safely assume that" }, { "start": 1682.32, "end": 1687.52, "text": " these parts here are non repeating so and these parts here are repeating this is one" }, { "start": 1687.52, "end": 1692, "text": " of the labels that you need right the problem with synthetic data set is always to have" }, { "start": 1692, "end": 1699.72, "text": " the labels and also you know how many there are because you can simply count the number" }, { "start": 1699.72, "end": 1705.8, "text": " of times that you go through it you can even make it faster slower and so on so this synthetic" }, { "start": 1705.8, "end": 1710.3, "text": " approach is pretty cool and especially the bottom right here because this might be kind" }, { "start": 1710.3, "end": 1715.7, "text": " of hacky because each time each time you jump from the end of one of those arrows to the" }, { "start": 1715.7, "end": 1722.1200000000001, "text": " beginning right you have kind of a hack in the indie video because you know it's not" }, { "start": 1722.1200000000001, "end": 1727.8600000000001, "text": " continuous so what you can do and this is the the bottom here you can do this reversal" }, { "start": 1727.8600000000001, "end": 1732.54, "text": " technique where you go to the end and then you play the frames backwards and then you" }, { "start": 1732.54, "end": 1737.88, "text": " play the frames forwards again backwards again forwards again and then you go out here and" }, { "start": 1737.88, "end": 1743.48, "text": " that gives you one continuous motion right if someone if it's simply a video of someone" }, { "start": 1743.48, "end": 1748.88, "text": " lifting their hand like it starts out down here and it goes here and it goes here and" }, { "start": 1748.88, "end": 1755.56, "text": " then if you do this technique it would go down again down 
again up again up again and" }, { "start": 1755.56, "end": 1763.96, "text": " so on so that's you know i think it's a fairly smart technique honestly now they tried this" }, { "start": 1763.96, "end": 1771.16, "text": " and it doesn't work super well so what they also have to do is they have to do manual" }, { "start": 1771.16, "end": 1777.4, "text": " camera motion augmentation so that's so camera motion augmentation it basically means that" }, { "start": 1777.4, "end": 1782.72, "text": " if you just do a repeating action like this it's sort of i guess it's too monotonic it" }, { "start": 1782.72, "end": 1791.64, "text": " doesn't really cover real videos with repeating actions so what they do is they kind of simulate" }, { "start": 1791.64, "end": 1798.0800000000002, "text": " a moving camera and you simulate that much like you would do image augmentation so you" }, { "start": 1798.08, "end": 1804.52, "text": " can rotate the camera over time you can translate it you can scale it differently and through" }, { "start": 1804.52, "end": 1809.04, "text": " if you do that throughout the video and you change it around how the camera moves then" }, { "start": 1809.04, "end": 1819.24, "text": " that appears to work fairly well so if they now compare this and their data set they perform" }, { "start": 1819.24, "end": 1824.72, "text": " pretty well so in their data set they take this kinetics data set and they crowdsource" }, { "start": 1824.72, "end": 1831.16, "text": " the label and the tasks in the data set they're pretty diverse as you can see right here so" }, { "start": 1831.16, "end": 1836.84, "text": " you have sports like rope training mountain climbers but you have also things like playing" }, { "start": 1836.84, "end": 1843.7, "text": " ukulele exercising arms slicing an onion and so on and you can see that the repetition" }, { "start": 1843.7, "end": 1849.04, "text": " count is fairly diverse as well so from one or two repetitions per video it goes to 50" }, { "start": 1849.04, "end": 1856.2, "text": " or so and the period length is also between one and five seconds though as you as i already" }, { "start": 1856.2, "end": 1862, "text": " said you don't have to you don't have to count on that because you can always play the video" }, { "start": 1862, "end": 1870.12, "text": " slower or faster and then determine other periodicities so in their experiment first" }, { "start": 1870.12, "end": 1878.24, "text": " of all they perform pretty well and they show that if they train on their data set and on" }, { "start": 1878.24, "end": 1884.84, "text": " the synthetic data set they perform better than if they just train on the synthetic or" }, { "start": 1884.84, "end": 1890.96, "text": " they just train on their data set they also show pretty clearly that the addition of this" }, { "start": 1890.96, "end": 1896.6, "text": " temporal self-similarity matrix helps tremendously you can see right here in each of these boxes" }, { "start": 1896.6, "end": 1903.6, "text": " is the comparison and this obi I think is the off by one error so it kind of forgives" }, { "start": 1903.6, "end": 1909.24, "text": " you if you're off by one count but otherwise you get a zero if you're wrong and you can" }, { "start": 1909.24, "end": 1914.84, "text": " see that the self-similarity matrix helps tremendously they also compare with some other" }, { "start": 1914.84, "end": 1920.4399999999998, "text": " architectural choices instead of the transformer I guess yeah so I guess they just take it" }, { "start": 
1920.4399999999998, "end": 1929.6399999999999, "text": " because it performs pretty well and they do a lot of lot of ablations but what I particularly" }, { "start": 1929.64, "end": 1935.76, "text": " appreciate is that they do something like this so what they do at the end went once" }, { "start": 1935.76, "end": 1941.26, "text": " they've trained the architectures they do a 1d PCA protection of the encoder features" }, { "start": 1941.26, "end": 1948.76, "text": " over time now the encoder features they were 512 dimensional right this is the thing before" }, { "start": 1948.76, "end": 1955.5200000000002, "text": " it goes into the self-similarity matrix so those we said the encoder is the crucial part" }, { "start": 1955.52, "end": 1962.36, "text": " here because it needs to take the video and encode things that make them accessible to" }, { "start": 1962.36, "end": 1969.56, "text": " calculating the self-similarity now they do a 1d PCA so a projection into one dimension" }, { "start": 1969.56, "end": 1977.4, "text": " of these features and you can already see at this one dimensional projection that the" }, { "start": 1977.4, "end": 1985.5600000000002, "text": " periodicity here is clearly clearly visible namely for example right here every time up" }, { "start": 1985.5600000000002, "end": 1991.0400000000002, "text": " here is when the legs are up and every time down here is when the legs are down right" }, { "start": 1991.0400000000002, "end": 1999.0600000000002, "text": " here so that is very very impressive and that kind of that really shows that the model is" }, { "start": 1999.0600000000002, "end": 2004.48, "text": " doing what you claim what you claim that it's doing like I'm almost more interested in experiments" }, { "start": 2004.48, "end": 2009.16, "text": " like this than in and in these numbers right here because the numbers could always be because" }, { "start": 2009.16, "end": 2018.92, "text": " you've just thrown more stuff at it right so they go over a bunch of possible applications" }, { "start": 2018.92, "end": 2026.52, "text": " of their model so first of all you can do something like as we can see repetition counting" }, { "start": 2026.52, "end": 2032.14, "text": " from videos you can do periodicity detection those were the things that the model is trained" }, { "start": 2032.14, "end": 2037.8400000000001, "text": " to do but there's also a bunch of things that the model can now implicitly do namely something" }, { "start": 2037.8400000000001, "end": 2042.8400000000001, "text": " like change inspection where they say look if someone's chopping this pineapple right" }, { "start": 2042.8400000000001, "end": 2048.7400000000002, "text": " here then at the end of each of the repetitions there is something that changed namely the" }, { "start": 2048.7400000000002, "end": 2055.48, "text": " number of slices of pineapple is it bread is it I can't I think it's pineapple okay" }, { "start": 2055.48, "end": 2063.08, "text": " so the number of slices or pieces right here changes so in essence this could be the base" }, { "start": 2063.08, "end": 2070.44, "text": " for another model estimating whatever changed or training to recognize numbers of pieces" }, { "start": 2070.44, "end": 2078.36, "text": " and so on also you can detect the speed so the speed of a repeating action if you perform" }, { "start": 2078.36, "end": 2087, "text": " something slow or fast this model can implicitly do it and this they call cross-period retrieval" }, { "start": 2087, "end": 2094.84, "text": " so 
if you know when the repetitions are you know that okay maybe the first frame so always" }, { "start": 2094.84, "end": 2100.6400000000003, "text": " on the upswing right here these should all these should all be fairly similar visually" }, { "start": 2100.64, "end": 2109.3599999999997, "text": " right as with respect to the repeating action so you can see that even though this whenever" }, { "start": 2109.3599999999997, "end": 2115.08, "text": " the kid in the swing here is close it looks fairly different in in a purely visual sense" }, { "start": 2115.08, "end": 2121.3599999999997, "text": " in a pixel sense but it is at the same point in the repeating action and that's you know" }, { "start": 2121.3599999999997, "end": 2127.04, "text": " that's that's pretty cool so you can technically retrieve related things even though they visually" }, { "start": 2127.04, "end": 2134.88, "text": " they don't look similar that much yeah that that's the the kind of applications here are" }, { "start": 2134.88, "end": 2141.32, "text": " probably many many fold and I also think that so in this measure of intelligence paper by" }, { "start": 2141.32, "end": 2147.4, "text": " françois choulet he basically claims that this is one of the innate abilities of humans" }, { "start": 2147.4, "end": 2152.92, "text": " they can count you know they can count things this is something you're basically born with" }, { "start": 2152.92, "end": 2161.08, "text": " and maybe this thing right here will become sort of a staple staple component for many" }, { "start": 2161.08, "end": 2167.44, "text": " other things that we build AI on I would not be surprised but maybe it will just fade into" }, { "start": 2167.44, "end": 2173.4, "text": " history I think it's pretty cool project especially you know the the architectural choice here" }, { "start": 2173.4, "end": 2180.04, "text": " to pull everything through this self-similarity matrix and the you know just just looking" }, { "start": 2180.04, "end": 2187.44, "text": " at this matrix already makes you kind of know that this thing works alright this was it" }, { "start": 2187.44, "end": 2192.48, "text": " from me let me know in the comments what you think about the paper check out the website" }, { "start": 2192.48, "end": 2198.08, "text": " the website has a lot of video demo examples of what they're doing I think the data set" }, { "start": 2198.08, "end": 2210.56, "text": " as well and yeah I'll see you next time bye bye" } ]
_Z9ZP1eiKsI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Curiosity-driven Exploration by Self-supervised Prediction
[ "Science & Technology" ]
[]
https://arxiv.org/abs/1705.05363 Authors: Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell Abstract: In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch.
Hi there! Today we're going to look at this paper, Curiosity-Driven Exploration by Self-Supervised Prediction. It's a relatively short idea, so it shouldn't take too long. So the fundamental idea of the paper is to tackle the reward sparseness problem in reinforcement learning. For example, if you have a Super Mario game like here, there are a number of ways you can think of the reward, but one way you could formulate it is that you simply get a plus one reward when you finish the game, or the level. Let's say you finish the level, you get plus one. If you die or don't make it in time, you get negative one. I think there's no way to not make it in... oh yeah, there's actually a time limit. So the problem here is that your algorithm needs to learn to take actions now such that it gets to the end of the level, but the reward is only at the end of the level. So basically, step by step it has no signal to go on, because the reward is always zero, and it needs to learn these long-range dependencies. And that's notoriously hard in reinforcement learning: to learn, step by step, actions that maximize some very long-term goal. You can also think of a game of chess, where your reward is going to be whether you win or lose at the end, but step by step the reward is 50-ish steps away. So you have no way of optimizing your actions step by step in a meaningful manner. There are many ways to get around this. One way that people have used is what's called reward shaping. Reward shaping means you, as the designer of the algorithm, try to introduce additional rewards that you know are good, or that help to solve the problem, or that are at least correlated with the reward you're going to get at the end. So in Mario this could be: the further right you go, the more reward you get — an additional reward for going right. Coincidentally, I think in real Mario this also gives you points, but in our situation the true reward is just going to be at the end. You could also say that stomping goombas gives you a bit of reward for each goomba you stomp. In chess you could say: you get a bit of reward for having more pieces than your opponent, that is, when your opponent loses pieces and you don't, and you also get a bit of reward for gaining more territory on the board, and so on. These are all things that we know correlate with the end reward, because in Mario, for example, the end of the level is actually on the right. But of course it's not perfect, because sometimes there are situations where you have to go back, go around something or over something, and not immediately go to the right. Likewise, in chess there are good sacrifices that you can make. So these kinds of additional rewards help, but they're not perfect. And the biggest problem with them is that they're very domain specific. As a developer of the algorithm, you basically have to know the domain, like Super Mario: you have to know the goal is on the right, and you have to construct your reward to reflect this. This is very domain specific — you basically have to do it for every domain again and again and again. In chess you have to know something about chess, and so on.
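To make the difference between the sparse reward and a shaped reward concrete, here is a tiny sketch of what the two reward functions might look like in a toy Mario-like setting; the state fields, coefficients, and function names are illustrative assumptions, not taken from any real implementation:

    def sparse_reward(state):
        # Only the terminal outcome gives any signal.
        if state.level_finished:
            return 1.0
        if state.died or state.timed_out:
            return -1.0
        return 0.0  # zero everywhere else, so nothing to learn from step by step

    def shaped_reward(state, prev_state):
        # Hand-crafted, domain-specific bonuses: moving right is usually good,
        # stomping a goomba is usually good. Imperfect, because sometimes the
        # best move is to go back or around an obstacle.
        r = sparse_reward(state)
        r += 0.01 * (state.x_position - prev_state.x_position)
        r += 0.1 * (state.goombas_stomped - prev_state.goombas_stomped)
        return r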
So one way around this — and this paper proposes one method to do it — is to introduce an additional reward that is not based on the specific domain, but based on what they call curiosity, and specifically curiosity by self-supervised prediction. So what does that mean? The idea is not new, in that people have done similar things before. If we go, for example, down here: here is this kind of Doom environment, and what you could say is: in my agent I have a little module that's going to predict the future. So if I'm here, my agent will choose an action, like move forward, like pressing the forward key, and then I will predict how the next frame is going to look. And of course we know this is a 3D environment, so this part of the screen is probably going to become the full screen, because you're now closer, and the perspective changes a little bit. But basically this should be a learned neural network that predicts the future from the current state and the current action. And you can train this in a supervised fashion, because you will perform some actions and collect some data about this, so you can learn a network that predicts one step into the future — how the environment will look. And it is by no means a new idea to introduce rewards based on this type of learning how the environment behaves. We've seen this in, for example, the A3C line of work, where an additional reward is something like pixel control: for this pixel here, how much can I control it with my action, how does my action influence it, can I predict it, and so on — learning to control the pixels on the screen with your actions and giving a reward based on that. So that idea has been around. What this paper does specifically is to say: I'm going to predict the future, and if I am wrong about the prediction, then that gives me a reward. And that's the curiosity part. Basically it means: if I have a good model of what's going to happen in the future, and I predict the future, and then I'm wrong, it means something new has happened, something special, something that I hadn't expected. The goal is to get the algorithm to explore by itself, which is what you need when you don't have a reward. When you don't have a reward, what you want your algorithm to do is simply go around and explore. And in a sense they're saying: the way to do this is to go by curiosity, which means to actively seek out states that you wouldn't expect. Whenever you don't expect something, that means it's something new, that means you haven't had this experience before, and that means it's a new state to explore. You have not seen this before, so in the absence of any reward you might as well go where you haven't been before. That's the essence. They outline a number of problems that you might run into with this approach, and they give an example — but let's first go to what the model actually looks like. So that's here. You can see this is what they call an intrinsic curiosity module. You have a state, you're in a state, you have your policy, and your policy gives you an action. The action goes to the environment, and the environment gives you the next state and also a reward. They call it E here — the extrinsic reward that you get from the environment.
But they also combine this with what's called an intrinsic reward, which comes from the curiosity module. And that's what we've discussed: it tries to assess how new the state I'm going to be in is — how surprising it is for me. I'm going to first describe how you would build the model naively, how that gets you into problems, and then how to fix it. The naive way to build this is to have what's called a forward model. The forward model takes the action and the current state, and it predicts the next state — that's in here. Don't worry about the phi hat right now. It predicts the next state, and then you compare this to the actual next state: you just look at the difference between what you predicted the next state would be and what the next state really is. And that gives you the intrinsic reward: the more different these are, the higher the reward. That's what we've discussed — how different is it from what I expected. So how does that get you into problems? The authors give a very good illustrative example. Say you are in an environment — let's actually go over here. You are in an environment and you have your screen. Here is a road that you maybe need to walk along, and here are some leaves in the wind. I'm very bad at drawing leaves, so imagine these are leaves and there's wind, like wind coming from here and shaking up these leaves. If you simply try to predict this entire screen with your forward model, what's going to happen is that you will never be able to predict how these leaves are going to move, because you basically can't influence them. You can predict a bit from the current state, but the action you take has no influence on how these leaves move, because they are influenced by the wind — and the wind is this random-ish process that you can't control. So the authors say: because of this, your algorithm is always going to find these leaves interesting, be curious about them, because it can't predict them. And we've seen that the reward they add is based on how poorly you can predict a state. So if we do it like this, these random things that we can't influence will always be surprising, and therefore we will always be curious about them, and we will always look at the leaves and be amazed and get reward after reward, because we can't predict them. That's not the goal. What they're arguing is: why are these leaves not important for curiosity? Because we can't influence them with our actions. We can influence where we go on this road, because we can move, and the road is static, not governed by these random processes. But the leaves we would like to discard — we can't influence them. Therefore, what they say is: what we need is an encoder — I'm going to try to delete this annotation — an encoder that takes a state and outputs features of the state. And then our forward model isn't fed with the state, it's fed with the features of the state, and it's not going to output the next state as such, but the features of the next state.
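As a rough sketch of this computation — assuming encoder and forward_model are learned networks whose architectures the transcript doesn't pin down, and with an invented scaling factor eta — the intrinsic reward might look like this:

    import torch

    def intrinsic_reward(encoder, forward_model, state, action_onehot,
                         next_state, eta=0.5):
        # Encode the current and the next state into the feature space phi.
        phi = encoder(state)
        phi_next = encoder(next_state)
        # Predict the next features from the current features and the action.
        phi_next_pred = forward_model(torch.cat([phi, action_onehot], dim=-1))
        # Intrinsic reward = prediction error in feature space: the more
        # surprising the transition, the larger the reward.
        return eta * (phi_next_pred - phi_next).pow(2).sum(dim=-1)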
So the forward model predicts the features of the next state, and then we compare that prediction with the features of the true next state — that's what we compare. So how do these features, this encoder, need to look? They're saying: these features should only capture the things about the state that actually depend on our actions. And they have a very interesting way of training such an encoder, such a feature-producing function: it's going to be a neural network that we train through a so-called inverse model. So we take this encoder and train this inverse model on top of it, and the inverse model takes the features of the old state and the new state and tries to predict the action — the action we took to get from the old state to the new state. So the inverse model is trained to predict what action was taken to get from the old state to the new state. And by training the encoder with this inverse model, end to end, you shape the encoder such that it only considers things that are actually relevant to predicting the action. In the leaves example, it would discard the leaves. It will discard anything that you can't influence with your actions, and therefore it will only retain features that depend on your actions. I think that's quite an interesting way to get rid of the irrelevant information that they don't want. And then they can use this encoder to train the forward model and essentially get this intrinsic reward. So I find this idea quite interesting, and as I said, the idea of intrinsic reward and curiosity-driven exploration is not new — and I'm sure this particular trick has been around in some variants — but I've just stumbled across it, and it is quite interesting. So we're going to take a look — you can go through the math yourself — but they do these kinds of experiments where they corrupt, as you can see, part of the screen with noise, and they show that, since the noise does not depend on the action, the features do in fact discard this noise and only focus on the part that the agent can actually influence with its actions. That's, I think, all in all pretty interesting. They show, of course, that their algorithm then outperforms the A3C baseline on these sparse-reward tasks — and the sparser, the more so: here you can see the left is dense reward, then sparse reward, then very sparse reward, and at some point you see that A3C simply doesn't manage anymore. What's also interesting is the ICM-in-pixels variant, which means pixel-based curiosity — where we don't have this encoder and simply try to predict the pixels of the environment. That works in the sparse-reward setting, but with the very sparse reward it also fails, and you actually need the encoder that discards what's not relevant for predicting the actions. Yeah, so you can take a look at the rest of the paper yourself. I find it quite interesting. They analyze how their agent explores these mazes, and they have more experiments on benchmark tasks. So have a look at it, and I'll see you next time.
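To complete the picture, here is a simplified sketch of the inverse model that shapes the encoder; the layer sizes are invented (the paper's networks are convolutional), and in the full objective this loss is combined with the forward-model loss via a weighting factor:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    state_dim, action_dim, feat_dim = 128, 4, 256   # illustrative sizes

    # Encoder phi: trained only through the inverse model, so it keeps what
    # helps predict the action and discards uncontrollable noise (the leaves).
    encoder = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                            nn.Linear(256, feat_dim))
    # Inverse model: predicts which action was taken from phi(s) and phi(s').
    inverse_model = nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, action_dim))

    def inverse_loss(state, next_state, action_idx):
        phi, phi_next = encoder(state), encoder(next_state)
        logits = inverse_model(torch.cat([phi, phi_next], dim=-1))
        # Cross-entropy against the action that was actually taken; training
        # this end to end is what shapes the encoder's features.
        return F.cross_entropy(logits, action_idx)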
[ { "start": 0, "end": 8, "text": " Hi there! Today we're going to look at this paper, Curiosity-Driven Exploration by Self-Supervised" }, { "start": 8, "end": 14.84, "text": " Prediction. It's a relatively short idea, so it shouldn't take too long. So the fundamental" }, { "start": 14.84, "end": 21.36, "text": " idea of the paper is to tackle the reward sparseness problem reinforcement learning." }, { "start": 21.36, "end": 27.52, "text": " For example, if you have a Super Mario game like here, and there's a number of ways you" }, { "start": 27.52, "end": 33.6, "text": " can think of the reward, but one way you could formulate it is that you simply get kind of" }, { "start": 33.6, "end": 40.56, "text": " a plus one reward when you finish the game, or the level. Let's say you finish the level," }, { "start": 40.56, "end": 49.2, "text": " you get plus one. If you die or don't make it in time, you get negative one. I think" }, { "start": 49.2, "end": 55.92, "text": " there's no way to not make it in... Oh yeah, there's actually a time limit. So the..." }, { "start": 55.92, "end": 63.92, "text": " The problem here is that your algorithm kind of needs to learn to make things now such" }, { "start": 63.92, "end": 68, "text": " that it gets to the end of the level, but the reward is only at the end of the level." }, { "start": 68, "end": 74.04, "text": " So basically step by step it has no signal to go on because the reward is always zero," }, { "start": 74.04, "end": 78.16, "text": " and it kind of needs to learn these long range dependencies. And that's notoriously hard" }, { "start": 78.16, "end": 83.2, "text": " in reinforcement learning to step by step learn actions that kind of maximize some very" }, { "start": 83.2, "end": 89.28, "text": " long term goal. So you can also think of a game of chess where your reward is going to" }, { "start": 89.28, "end": 94.16, "text": " be whether you win or lose at the end, but step by step it's kind of this... The reward" }, { "start": 94.16, "end": 103.48, "text": " is 50ish steps away. So you have no way of kind of step by step optimizing your actions" }, { "start": 103.48, "end": 112.44, "text": " in a meaningful manner. So there are many ways to get around this. One way that people" }, { "start": 112.44, "end": 118.03999999999999, "text": " have done is what's called reward shaping. And reward shaping is you're trying to introduce" }, { "start": 118.03999999999999, "end": 125.32, "text": " additional rewards kind of as a designer of the algorithm that you know are kind of good" }, { "start": 125.32, "end": 132.76, "text": " or helping to solve the problem or at least correlated with the reward you're going to" }, { "start": 132.76, "end": 138.28, "text": " get at the end. So in Mario this could be like the further right you go, the more reward" }, { "start": 138.28, "end": 143.8, "text": " you get. You get kind of an additional reward if you go right. Coincidentally I think in" }, { "start": 143.8, "end": 149.52, "text": " real Mario this also gives you points, but our situation is that the reward is just going" }, { "start": 149.52, "end": 156.2, "text": " to be at the end. You could also say like if you kill the... Or if you stomp the goombas," }, { "start": 156.2, "end": 162.8, "text": " one goomba you stomp, that actually gives you also a bit of reward. 
In chess you could" }, { "start": 162.8, "end": 167.36, "text": " say like the more pieces you have, that gives you a bit of reward if you have more pieces" }, { "start": 167.36, "end": 173.08, "text": " than your opponent, if your opponent loses pieces. You don't and you also get a bit of" }, { "start": 173.08, "end": 177.52, "text": " reward if you get more territory on the board and so on. So these are all things that we" }, { "start": 177.52, "end": 183.56, "text": " know kind of correlate with the end reward. Like because in Mario for example the end" }, { "start": 183.56, "end": 187.72000000000003, "text": " of the level is actually on the right. But of course it's not perfect because sometimes" }, { "start": 187.72000000000003, "end": 192.96, "text": " there are situations where you kind of have to go back, go around something or go over" }, { "start": 192.96, "end": 198.92000000000002, "text": " something and not immediately go to the right. As well as in chess there are good sacrifices" }, { "start": 198.92000000000002, "end": 205.92000000000002, "text": " that you can make. So these kind of additional rewards they help, but they're not perfect." }, { "start": 206.92000000000002, "end": 212.36, "text": " And the biggest problem with them is they're very domain specific. So a developer of the" }, { "start": 212.36, "end": 217.08, "text": " algorithm you basically have to know the domain like Super Mario and you have to know the" }, { "start": 217.08, "end": 224.08, "text": " goal is on the right. So you have to construct your reward in order to kind of reflect this." }, { "start": 224.60000000000002, "end": 231.48000000000002, "text": " And this is very domain specific. Basically you have to do it for every domain again and" }, { "start": 231.48000000000002, "end": 238.60000000000002, "text": " again and again. In chess you have to know something about chess to play and so on. So" }, { "start": 238.60000000000002, "end": 245.28, "text": " one way around this, and this paper proposes one method to do this, is to introduce an" }, { "start": 245.28, "end": 250.16, "text": " additional reward not based on the domain specifically, but based on what they call" }, { "start": 250.16, "end": 257.16, "text": " this curiosity. And it's specifically curiosity by self supervised prediction. So what does" }, { "start": 257.36, "end": 268.36, "text": " that mean? The idea is not new in that people have kind of done this before. If we go for" }, { "start": 268.36, "end": 278.36, "text": " example down here. So here is this kind of doom environment and what you could say is" }, { "start": 281.36, "end": 292.36, "text": " in my agent I have kind of a little module that's going to predict the future. So like" }, { "start": 292.36, "end": 299.36, "text": " if I'm here then I will basically choose an action, my agent will choose an action, like" }, { "start": 301.24, "end": 308.24, "text": " move forward, like press the forward key and then I will predict how that's going to look." }, { "start": 309.88, "end": 314.40000000000003, "text": " And of course we know this is kind of a 3D environment so this is probably going to be" }, { "start": 314.40000000000003, "end": 318.76, "text": " this part of the screen is going to be the full screen because you're now closer and" }, { "start": 318.76, "end": 324.92, "text": " so on the perspective changes a little bit. 
But basically this should be a learned neural" }, { "start": 324.92, "end": 330.48, "text": " network that predicts the future from the state now and the action now. And basically" }, { "start": 330.48, "end": 336.88, "text": " you can train this in a supervised fashion because you will perform some actions, you" }, { "start": 336.88, "end": 343.03999999999996, "text": " will collect some data about this so you can learn a network that is going to predict one" }, { "start": 343.04, "end": 349.32, "text": " step into the future basically, how the environment will look. And then, and this is by no means" }, { "start": 349.32, "end": 356.32000000000005, "text": " kind of a new idea to introduce rewards based on this type of learning how the environment" }, { "start": 357, "end": 364, "text": " acts. We've seen this in like the A3C paper, the original one where the additional reward" }, { "start": 364.24, "end": 369.12, "text": " is something like pixel control where they consider like okay this pixel here, how much" }, { "start": 369.12, "end": 374.62, "text": " can I control it by my action, like how does my action influence it, can I predict this" }, { "start": 374.62, "end": 381.62, "text": " and so on. And to learn how to control the pixels on the screen by your actions and to" }, { "start": 382.48, "end": 388.88, "text": " give a reward based on that so that's been around this idea. And what this paper here" }, { "start": 388.88, "end": 395.88, "text": " does specifically is they say well I'm going to predict the future and if I am wrong about" }, { "start": 395.88, "end": 402.88, "text": " the prediction then that gives me a reward and that's the curiosity part. Basically it" }, { "start": 403.68, "end": 410.68, "text": " means like if I have a good model of what's going to happen in the future and then I predict" }, { "start": 411.15999999999997, "end": 417.15999999999997, "text": " the future and then I'm wrong it means something new has happened, something special, something" }, { "start": 417.16, "end": 424.16, "text": " that I hadn't expected. And therefore if the goal is to get the algorithm to explore by" }, { "start": 427.32000000000005, "end": 430.76000000000005, "text": " itself which is what you need to do when you don't have a reward, right? When you don't" }, { "start": 430.76000000000005, "end": 437.76000000000005, "text": " have a reward what you want your algorithm to do is simply to go around and explore." }, { "start": 438.8, "end": 443.8, "text": " And in a sense they're saying okay the way to do this is to go by curiosity which means" }, { "start": 443.8, "end": 450.8, "text": " is to go to actively seek out environments that you wouldn't expect basically. So whenever" }, { "start": 453.56, "end": 458.16, "text": " you don't expect something that means it's something new, that means you haven't had" }, { "start": 458.16, "end": 465.16, "text": " this experience before, right? And that means that it's kind of a new state to explore." }, { "start": 465.16, "end": 472.16, "text": " That you have not seen this before so kind of in absence of any reward you might as well" }, { "start": 472.16, "end": 479.16, "text": " go where you haven't been before and that's kind of the essence. So they outline a number" }, { "start": 480.6, "end": 487.6, "text": " of problems that you might have with this approach. They give the example, let's first" }, { "start": 487.6, "end": 494.6, "text": " actually go to what the model actually looks like. So that's here. 
You can see this is" }, { "start": 495.12, "end": 502.12, "text": " kind of what they call an intrinsic curiosity module. So you have a state here, you're in" }, { "start": 502.12, "end": 509.12, "text": " a state, you have your policy and your policy gives you an action. And the action goes to" }, { "start": 509.12, "end": 516.12, "text": " the environment and the environment gives you the next state and also what's called" }, { "start": 517.68, "end": 524.68, "text": " the reward. They call here E is the extrinsic reward that you get from the environment." }, { "start": 524.68, "end": 529.68, "text": " But they also combine this with what's called an intrinsic reward that you get from here" }, { "start": 529.68, "end": 535.6800000000001, "text": " that you get from the curiosity module. And that's what we've discussed. It kind of tries" }, { "start": 535.68, "end": 542.68, "text": " to assess how new is the state that I'm going to be in. How surprising it is for me. So" }, { "start": 542.68, "end": 549.68, "text": " the thing is that I'm going to first describe the model how you would build it and how that" }, { "start": 553.1999999999999, "end": 559.1999999999999, "text": " gets you into problems and then how to fix it. So how you would build this is to have" }, { "start": 559.2, "end": 566.2, "text": " this what's called this forward model. So the forward model takes the action and the" }, { "start": 566.2, "end": 570.2, "text": " current state and it kind of predicts the next state that's in here. Don't worry about" }, { "start": 570.2, "end": 577.2, "text": " the phi hat right now. It predicts the next state and then you compare this to the actual" }, { "start": 580.2, "end": 587.2, "text": " next state. You subtract, you just subtract the next state and then you get the next state." }, { "start": 587.2, "end": 592.44, "text": " You subtract, you just look at the difference between what you predict the next state is" }, { "start": 592.44, "end": 597.24, "text": " going to be and what the next state really is. And that gives you the intrinsic reward." }, { "start": 597.24, "end": 602.72, "text": " The more different these are, the higher the reward. That's what we've discussed. How much" }, { "start": 602.72, "end": 609.72, "text": " different is it from what I've expected. So how does that get you into problems? And the" }, { "start": 609.72, "end": 616.72, "text": " authors give a very good illustrative example of say you are in an environment. Let's actually" }, { "start": 619.28, "end": 625.96, "text": " go over here. You are in an environment and you have your screen. And here is kind of" }, { "start": 625.96, "end": 631.6800000000001, "text": " a road that you need to maybe walk after. And here are some leaves in the wind. I'm" }, { "start": 631.6800000000001, "end": 638, "text": " very bad at drawing leaves so imagine these are leaves and there's wind right? Like winds" }, { "start": 638, "end": 644.2, "text": " coming from here and kind of shaking up these leaves and so on. So if you simply try to" }, { "start": 644.2, "end": 651.2, "text": " predict this entire screen as your forward model, what's going to happen is you will" }, { "start": 652.44, "end": 658.26, "text": " never be able to predict how these leaves are going to move because there basically" }, { "start": 658.26, "end": 665.26, "text": " you can't influence them. 
You can predict a bit from the current state but the action" }, { "start": 665.26, "end": 671.26, "text": " you take has no influence on how these leaves are going to move because they are influenced" }, { "start": 671.26, "end": 678.26, "text": " by the wind. And the wind is kind of this random-ish process that you can't control." }, { "start": 682.26, "end": 689.26, "text": " So the authors say because of this your algorithm is always going to find these leaves basically" }, { "start": 689.26, "end": 694.26, "text": " interesting, curious, be curious about it because it can't predict them. And we've" }, { "start": 694.26, "end": 701.26, "text": " seen that the reward that they model to give an addition is based on how well you cannot" }, { "start": 701.26, "end": 708.26, "text": " predict a certain state. And they say okay if we do like this then these random things" }, { "start": 708.74, "end": 715.74, "text": " that we can't influence will always be surprising and therefore we will always be curious about" }, { "start": 715.74, "end": 720.74, "text": " them and therefore we will always kind of look at the leaves and be amazed and get reward" }, { "start": 720.74, "end": 725.74, "text": " after reward because we can't predict them. That's not the goal. So what they're arguing" }, { "start": 725.74, "end": 732.74, "text": " is that why are these leaves not important for curiosity? Because we can't influence" }, { "start": 733.26, "end": 739.26, "text": " them with our actions. Like we can influence where we go on this road because we can kind" }, { "start": 739.26, "end": 746.26, "text": " of move and the road is kind of static, not governed by these random processes. But the" }, { "start": 746.26, "end": 753.26, "text": " leaves we would like to discard them. We can't influence them. And therefore what they say" }, { "start": 753.26, "end": 760.26, "text": " is what we need is an encoder that takes a state and I'm going to try to delete this" }, { "start": 760.26, "end": 767.26, "text": " annotation. So we need an encoder here features that takes a state and it outputs features" }, { "start": 771.26, "end": 778.26, "text": " of the state. And then our forward model isn't fed with the state, it's fed with the features" }, { "start": 778.26, "end": 785.26, "text": " of the state and is not going to output the next state. So we need an encoder that takes" }, { "start": 785.26, "end": 790.26, "text": " a state and is fed with the features of the state and is not going to output the next" }, { "start": 790.26, "end": 796.26, "text": " state as such but the features of the next state. It predicts the features and then we're" }, { "start": 796.26, "end": 801.26, "text": " going to compare that with the features of the true next state and that's what we compare." }, { "start": 801.26, "end": 808.26, "text": " So how does this encoder, these features need to look? And they're saying well these features" }, { "start": 808.76, "end": 814.26, "text": " should kind of only consider things about the state that are actually dependent on our" }, { "start": 814.26, "end": 821.26, "text": " actions. And they have a very interesting way of achieving to train such an encoder," }, { "start": 821.76, "end": 828.26, "text": " such a feature producing function in that they say it's going to be a neural network" }, { "start": 828.26, "end": 835.26, "text": " that we train by training this so called inverse model. 
So we take this encoder and we train" }, { "start": 835.26, "end": 842.26, "text": " this inverse model on top of it and the inverse model takes the features of the last state" }, { "start": 843.26, "end": 850.26, "text": " and the new state and is trying to predict this action, this action right here. So this" }, { "start": 850.26, "end": 857.26, "text": " is this action, the action we took to get from the old state to the new state. So this" }, { "start": 857.26, "end": 864.26, "text": " inverse model is trained to predict what action was taken to get from the old state to the" }, { "start": 864.26, "end": 871.26, "text": " new state. And by training the encoder with this inverse model, like training this end" }, { "start": 871.26, "end": 878.26, "text": " to end, you will make the encoder such that it only considers things that are actually" }, { "start": 878.26, "end": 883.26, "text": " relevant to predicting this action. So in the leaves example it would discard the leaves." }, { "start": 883.26, "end": 890.26, "text": " It will discard anything that you can't influence with your action and therefore it will only" }, { "start": 890.26, "end": 896.26, "text": " retain features that are dependent on your action. I think that's quite an interesting" }, { "start": 896.26, "end": 902.26, "text": " way to get rid of the irrelevant information that they don't want. And then they can use" }, { "start": 902.26, "end": 909.26, "text": " this encoder to train this forward model and to essentially get information from the old" }, { "start": 909.26, "end": 916.26, "text": " model and to essentially get this intrinsic reward. So I find this idea quite interesting" }, { "start": 918.26, "end": 924.26, "text": " and as I said the idea of intrinsic reward and curiosity to go for exploration is not" }, { "start": 924.26, "end": 930.26, "text": " new, but I think this kind of approach and I'm sure it's been around in some variants," }, { "start": 930.26, "end": 944.26, "text": " but I've just stumbled across this and this is quite interesting. So we're going to take" }, { "start": 944.26, "end": 951.26, "text": " a look, and you can go about the math yourself, but they do these kind of experiments and" }, { "start": 951.26, "end": 958.26, "text": " they corrupt, as you can see, part of the screen with noise here and they of course" }, { "start": 958.26, "end": 964.26, "text": " show like, okay, since the noise is not dependent on our action, our features do actually discard" }, { "start": 964.26, "end": 969.26, "text": " this noise, only focus on the part that we can actually influence by our actions. So" }, { "start": 969.26, "end": 976.26, "text": " that's, I think, all in all pretty interesting. They show, of course, that their algorithm" }, { "start": 976.26, "end": 984.26, "text": " then outperforms the kind of baseline of A3C on these sparse reward tasks and the sparser" }, { "start": 984.26, "end": 992.26, "text": " here you can see like the left is like dense reward and then sparse reward and then very" }, { "start": 992.26, "end": 999.26, "text": " sparse reward and at some point you see the A3C simply doesn't do it anymore. 
But what's" }, { "start": 999.26, "end": 1007.26, "text": " also interesting is here you have the ICM in pixels, which kind of means pixel-based" }, { "start": 1007.26, "end": 1013.26, "text": " curiosity, so where we don't have this encoder, where we simply try to predict the pixels" }, { "start": 1013.26, "end": 1018.26, "text": " of the environment and that works if you have like this kind of sparse reward thing, but" }, { "start": 1018.26, "end": 1023.26, "text": " if you want to, if you have the very sparse reward, that also fails and you actually need" }, { "start": 1023.26, "end": 1033.26, "text": " this encoder that discards what's not relevant for predicting the actions. Yeah, so you can" }, { "start": 1033.26, "end": 1038.26, "text": " take a look at the rest of the paper yourself. I find it quite interesting. They analyze" }, { "start": 1038.26, "end": 1048.26, "text": " how their agent explore these mazes and things and they have more experiments on like benchmark" }, { "start": 1048.26, "end": 1068.26, "text": " tasks. So have a look at it and I'll see you next time." } ]
19Q-vMd9bYg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "neurips", "nips", "nips experiment", "peer reviw", "conference review", "reviewer", "machine learning reviewer", "ml conference review", "subjectivity in peer review", "reviewer opinions", "science review", "science peer review", "peer review fail" ]
#neurips #peerreview #nips The peer-review system at Machine Learning conferences has come under much criticism over the last years. One major driver was the infamous 2014 NeurIPS experiment, where a subset of papers were given to two different sets of reviewers. This experiment showed that only about half of all accepted papers were consistently accepted by both committees and demonstrated significant influence of subjectivity. This paper revisits the data from the 2014 experiment and traces the fate of accepted and rejected papers during the 7 years since, and analyzes how well reviewers can assess future impact, among other things. OUTLINE: 0:00 - Intro & Overview 1:20 - Recap: The 2014 NeurIPS Experiment 5:40 - How much of reviewing is subjective? 11:00 - Validation via simulation 15:45 - Can reviewers predict future impact? 23:10 - Discussion & Comments Paper: https://arxiv.org/abs/2109.09774 Code: https://github.com/lawrennd/neurips2014/ Abstract: In this paper we revisit the 2014 NeurIPS experiment that examined inconsistency in conference peer review. We determine that 50% of the variation in reviewer quality scores was subjective in origin. Further, with seven years passing since the experiment we find that for accepted papers, there is no correlation between quality scores and impact of the paper as measured as a function of citation count. We trace the fate of rejected papers, recovering where these papers were eventually published. For these papers we find a correlation between quality scores and impact. We conclude that the reviewing process for the 2014 conference was good for identifying poor papers, but poor for identifying good papers. We give some suggestions for improving the reviewing process but also warn against removing the subjective element. Finally, we suggest that the real conclusion of the experiment is that the community should place less onus on the notion of top-tier conference publications when assessing the quality of individual researchers. For NeurIPS 2021, the PCs are repeating the experiment, as well as conducting new ones. Authors: Corinna Cortes, Neil D. Lawrence Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment by Corinna Cortes and Neil D. Lawrence, who were actually the chairs of the 2014 NeurIPS conference. So they have access to some data that the rest of us sadly don't have access to, which allows them to do pretty cool research on how conference reviewing works and whether it can actually determine the quality of a paper — or how much of it is just random, subjective reviewer decisions. This paper in particular takes up the papers that were subject to the 2014 NeurIPS experiment and tracks them over time. So it looks at the papers that were submitted and how they performed in the subsequent years, meaning how many citations they accumulated, both for the accepted and for the rejected papers. And they find some pretty interesting results. So we'll dive into this. The paper is not too long and the conclusions are fairly straightforward. I still think it's really cool that people actually follow up on this work. For those of you who don't know, the 2014 NeurIPS experiment — that is the wrong color — was an experiment in assessing how much of conference review is essentially random. So what they did — and I think they have a little section about this here — is they selected about 10% of the submissions. These were 170 papers, and they would undergo review by two separate committees. Whereas usually you have a paper that goes into review by one committee — a bunch of reviewers and an area chair — who make the decision of whether to accept or reject, in this experiment you would take a paper and give it to two different committees, committee one and committee two. Committee one would only be selected from one half of the reviewer pool, and committee two only from the other half. These were random assignments to the two pools, and the papers that participated were also randomly selected. Each of the committees would reach its own decision, accept or reject. And of course the interesting part is how many of those agree or disagree with each other. By the way, a paper would finally be accepted if either of the committees accepted it. And if I recall correctly, this year's NeurIPS conference actually repeats that experiment from 2014, so we're going to have another data point in hopefully assessing how conference reviewing has developed over the years — whether it's gotten better or actually worse. All right, so that was the 2014 experiment. By the way, the authors here have decided that the name change is retroactive. I never know, when talking about old NeurIPS conferences, whether I'm supposed to say it was NIPS 2014 or NeurIPS; in any case, in this paper we're doing NeurIPS. So what was the outcome of that experiment? That's pretty interesting. Here you can see — these are still 2014 numbers — committee one and committee two split up. It's not the same committee one for each paper, of course, but committee one would always be reviewers selected from the first half of the reviewer population, committee two from the second half. They did agree on most of the papers, as you can see here: for 101 papers they agreed to reject, for 22 they agreed to accept.
However, for 43 of the papers, one committee would accept and the other would reject. So for about 25% of the papers, the two committees disagreed. 25% — it sounds like a lot, and at the same time maybe not that much. But if you look at it in a different way, as they say right here: if the conference reviewing had been run with a different committee, only half of the papers presented at the conference would have been the same. This is looking at it as: if you'd, for example, always gone with committee one, you would have these papers; if you'd always gone with committee two, you would have those papers. So the mere selection of the committee determines about half the papers at the conference. If you're at the conference, walking through the big halls of posters, or looking at the proceedings, you have to keep in mind that half of the papers are there partly because of the random choice of committee: they wouldn't be there had the reviewing committee been a different one. Half the papers — that's kind of crazy. And of course this sparked a lot of discussion. So this is the outset; these were the results from that time. And now we're going into the new analysis. They do three distinct analyses. The first one is titled reviewer calibration. Here they try to figure out what portion of a reviewer's assessment of a paper is, let's say, objective, and what portion is subjective — what portion of a score is simply due to the reviewer's subjective feelings about the paper, which don't match any other reviewer's scores. So what you can do is build a model. You can say: y_ij, that's the score that the j-th reviewer gives to the i-th paper. And being the conference chairs, these authors have prime access to that data. So what you observe is y. Now you can say: we assume this is a combination of three things, y_ij = f_i + b_j + e_ij. First of all, we assume that there is some sort of objective paper quality, f_i. This is the objective quality of the paper, and it's what the reviewers are actually trying to predict: when a reviewer puts the number y into the system, they're trying their best to assess f_i. However, there is also this b_j, and this is the calibration bias of the j-th reviewer. Not everyone sees the one-through-ten (or one-through-nine) scale that we have in the same fashion, so what's a three to me might be a five to you. We have to correct for this somehow, and the inclusion of this b_j factor is how we account for that. And lastly, you have this e_ij factor, and this is the subjective portion of the score. It is independent of the objective quality of the paper; it is the subjective bonus or penalty that reviewer j gives to paper i. Our goal is going to be to figure out how these components compare to each other — how much of the score is objective versus subjective, after we have calibrated for general reviewer bias, for calibration bias, let's say. Keep in mind, this is a model; this is how we imagine the world. All we observe is this y thing right here. What we can do, of course, is put up a linear system of all the scores, because every reviewer gives more than one score at this conference, and every paper gets more than one reviewer's score.
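As a toy illustration of this setup, one could simulate scores from the model and recover the components with a ridge-regularized least squares — only a sketch with invented sizes and noise levels; the paper itself uses a more careful Bayesian treatment:

    import numpy as np

    rng = np.random.default_rng(0)
    num_papers, num_reviewers = 200, 60

    # Toy ground truth: objective quality f_i, reviewer calibration bias b_j,
    # and subjective noise e_ij with roughly the same spread as f_i.
    f = rng.normal(0.0, 1.0, num_papers)
    b = rng.normal(0.0, 0.5, num_reviewers)

    rows = []
    for i in range(num_papers):                      # 3 random reviewers each
        for j in rng.choice(num_reviewers, size=3, replace=False):
            rows.append((i, j, f[i] + b[j] + rng.normal(0.0, 1.0)))

    # Indicator design matrix (one column per paper and per reviewer) plus a
    # small ridge penalty, because the bare system is degenerate.
    X = np.zeros((len(rows), num_papers + num_reviewers))
    y = np.zeros(len(rows))
    for k, (i, j, score) in enumerate(rows):
        X[k, i] = 1.0
        X[k, num_papers + j] = 1.0
        y[k] = score
    theta = np.linalg.solve(X.T @ X + 1.0 * np.eye(X.shape[1]), X.T @ y)
    f_hat = theta[:num_papers]

    # Crude comparison: variance explained by paper quality vs. the residual
    # (subjective) variance.
    resid = y - X @ theta
    print("var(f_hat):", f_hat.var(), " var(residual):", resid.var())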
But it turns out this linear system is over-parameterized: you have about as many parameters as you have observed scores, so you don't have enough data points to estimate all of them directly. Now, as much fun as over-parameterized models are in deep learning, they're actually not that good if you want to estimate a linear system. So what people do is come up with regularizers and Bayesian approaches and so on. I'll skip all of this and just give you the numbers. The model that these authors come up with determines that the factors of the linear system are as follows: this here is the factor that goes with the f_i, this one goes with the b_j, and this one goes with the e_ij. You pull out this one, and then you simply compare the number on the left to the number on the right — and you see they're almost exactly the same. And that means, as they formulate it here: in other words, 50% of a typical reviewer's score is coming from opinion that is particular to that reviewer and not shared with the other reviewers. This figure may seem large, they say, but in retrospect it's perhaps not surprising. So this is, I guess, pretty surprising to me, but it's not that I didn't expect it — I think anyone who's participated in conference peer review would expect a number in approximately this range, because we know the review process is pretty noisy, and very often individual reviewers just give weird scores that you don't understand. And here's the reason you don't understand them: their source is subjective and largely not shared by other reviewers. So, having figured out that about 50% of the variation is due to just the subjective feeling of a reviewer about a paper, they try to validate this finding. For that they run a simulation — a simulated conference: we assume that each paper was scored according to the model we've given above, and we estimated the accept consistency through averaging across 100,000 samples. So they're simulating the conference with the committee experiment, and they ask themselves: if this is really the correct model, then we should get back roughly the 50% accept consistency found above, because the result of the experiment was about a 50% consistency in acceptance. They look at all the papers and all the scores and determine that there is about 50% subjectivity in scoring — and now they ask: do these two numbers match? They run a simulation where every reviewer has 50% subjectivity, split the reviewers into two committees, let each committee reach its own decision, and check whether they see the numbers found in the experiment. And the answer is yes, actually. So here you can see simulated conferences for a bunch of different scenarios, namely for different numbers of reviewers per committee. Random means there are no reviewers; committee decisions are just random. And you can see that as the accept rate of the conference goes up, the accept precision of the committees goes up, simply because more papers are accepted — and therefore more papers would be the same if you were to change the committee.
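A minimal Monte Carlo version of such a committee-splitting simulation might look like this; three reviewers per committee, subjective noise with the same variance as the objective quality (the 50% finding), and a ~23% accept rate are all assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    num_papers, num_trials, accept_rate = 2000, 100, 0.23

    def committee_score(quality):
        # Mean score of a 3-reviewer committee; the subjective noise has the
        # same variance as the objective quality.
        noise = rng.normal(0.0, 1.0, size=(quality.shape[0], 3))
        return (quality[:, None] + noise).mean(axis=1)

    shared, accepted = 0, 0
    for _ in range(num_trials):
        quality = rng.normal(0.0, 1.0, num_papers)   # objective quality f_i
        s1, s2 = committee_score(quality), committee_score(quality)
        k = int(accept_rate * num_papers)            # each committee accepts top-k
        acc1, acc2 = set(np.argsort(-s1)[:k]), set(np.argsort(-s2)[:k])
        shared += len(acc1 & acc2)
        accepted += k

    print("accept precision:", shared / accepted)    # fraction of accepts shared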
What we're interested in is, of course, the curve with three reviewers, which is the most common scenario at these conferences — this curve right here. The way to read this is: if the conference had an accept rate of 50%, right here, then we would expect a reviewer consistency, or accept precision, of 0.75 — 75% — which means that if we were to switch the reviewers for all the papers, 75% of the papers would still be the same. Remember that in the experiment, only 50% of the papers were the same when the committee was switched. But the conference also didn't have a 50% accept rate. For that we actually need to go to the accept rate of the conference, which was something like 23%, right here. And if we look that up, we are at about a 60% accept precision. Now, this might still seem away from the 50% found in the experiment. However, the experiment had so little data that if you calculate the bounds on what the true accept precision was, you can determine that it was between 38% and 64% — and the exact number here is 61%, still within the bounds of the experiment. So, pretty interesting: this actually means that the model they put up is a close enough approximation to reality that it predicts the experiment's outcome. And this gives us a little bit of validation that we're on a good track here. We can say with some confidence that the claim that about half of a reviewer's decision on a particular paper essentially comes down to subjectivity is consistent with what was found in the experiment. It'll be interesting to see how this develops this year when the experiment is repeated. Lastly, what they try to figure out is: are these reviews even worth it, so to say — do they actually predict how good a paper is? And how do you measure how good a paper is? Of course: by the number of citations. So here they define the citation impact as the log of the number of citations. And yes, there is a debate about whether citations really mean a paper is good or influential, but for better or worse we don't have a different measure right now. And it's been seven years, which is like three generations in machine learning, so the papers had long enough to accumulate citations. So let's just look at the accepted papers: do the scores that the reviewers give predict in any way whether the paper is going to be cited more or less? Do higher scores indicate more citations? And the answer is no, not at all. Here is a plot; the correlation is 0.05. This is ever so slightly statistically significant, but not really. So at least for this particular conference, there's no correlation between reviewer scores and the future impact of the paper.
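The computation behind such a correlation plot boils down to something like the following sketch; the score and citation numbers here are made-up placeholders (the real data is in the authors' repository):

    import numpy as np
    from scipy.stats import pearsonr

    # Placeholder numbers, only to show the computation.
    scores = np.array([5.1, 6.3, 7.0, 4.8, 6.9, 5.5, 6.0, 7.2])
    citations = np.array([12, 340, 25, 8, 1100, 60, 95, 7])

    impact = np.log(1 + citations)   # log citation impact (1+ guards against zero)
    r, p = pearsonr(scores, impact)
    print(f"r = {r:.2f}, p = {p:.3f}")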
It becomes a little bit more interesting when you ask specifically. The standard review questions are: is the paper novel, is it correct, is it well written, and so on — and these are not necessarily indicators of significance. If you accept a paper to a conference, only a small part of the assessment is: is it significant? If you actually ask reviewers "do you think this paper will have a potentially major impact or not", you get a slightly higher correlation, but also not really — which means that reviewers are kind of bad at estimating whether any given paper will have a big impact. Though, to be fair, for most papers the answer is probably no by default. The interesting part is when you ask them about their confidence in their rating: for the ratings you give at these conferences, you have to provide a confidence score — you say, okay, I think this paper is really good, but I'm not very confident. And if you simply correlate the confidence scores — as you can see here, the average confidence over all reviewers of a paper — with the impact, then you do get a slight correlation, which is interesting. The authors argue that there might be something like clarity at work: if a paper is written very clearly, you will be able to understand it better as a reviewer, which makes your confidence higher; but since the paper is clearer, the rest of the world will also have an easier time understanding it and will therefore cite it more often. That's a good hypothesis, but it's quite interesting that confidence in papers seems to predict impact better than the actual assessment of impact. That's astounding — not super astounding that confidence by itself would predict it, but that it does so better than directly asking people. I wonder what else we could ask; I wonder what weird questions would end up correlating with future impact. Do you like the colors of the paper? Do you like the pictures? So these were the accepted papers. They also, interestingly, trace the fate of the rejected papers. They say only 414 papers were presented at the final conference, so they want to trace the rejected ones, and they go through a lot of work to figure out where these papers ended up: they search for papers with the same or similar titles and authors. Of course this is not a perfect process, but it seems they've been able to trace a lot of these papers to their final destination. You can see a lot of papers were discarded, and some were simply posted on arXiv or somewhere else. For the discarded papers, you don't know whether they somehow morphed into other papers. It's still pretty interesting to see, though they say there are various error sources in these plots. Lastly, here is the fate of the rejected papers. Now, they don't say exactly what blue and green mean in this particular plot. In other plots in the same paper they differentiate, for example, between papers that were ultimately accepted somewhere else and papers that were not, or that they were unable to trace — so this might be blue and green; I'm not sure, maybe I'm just bad at reading it. But as you can see, if you look at the rejected papers — this is the calibrated quality score for the rejected papers — there is in fact a correlation, which means that for the rejected papers, the reviewers' assessment really does correlate with how the papers end up doing. I'm going to guess that, since citation counts are plotted here, the discarded papers must not be included. Yeah, sorry.
But the conclusion is that for the rejected papers, reviewers can tell whether they're better or worse; for the accepted papers, not so much. And that's what they said at the beginning: the review process is probably good at identifying bad papers, but bad at identifying good papers. This is not too surprising, because it's really easy to recognize a very poor paper, but it's harder to recognize just how good a paper is compared to other good papers. So that was the paper. They give some recommendations. For example, they say maybe we should assess papers on different criteria than we do now. But they do warn against saying we should do away with subjectivity altogether, because, as annoying as the subjectivity is, they argue it also guards against collective dominance: it guards against making consistent mistakes. If the entire conference, for example, makes consistent mistakes in some direction, then the subjectivity might counter that a little bit. I'm not sure that's a super good argument. I am generally for noisy processes over super duper rigid ones, though it seems that conference review right now is a bit too noisy. Rather than having three reviewers and this accept barrier, I would personally just do away with the accept barrier altogether. You submit to a conference, you get a bunch of scores, and then you have the scores. Why do we need to divide papers up into accepted and rejected? It seems better to just put papers out there and let future researchers assess them in retrospect, rather than having three random people with highly subjective opinions assess them. But yes, probably a bit of noise is good in a process like this, if you do a process like this. They also say maybe we should not put that much value on publishing at top-tier conferences. Now, I don't know how that's going to work. And yeah, I wish as well that we could change the collective thinking about our field; I just don't see that as a super easy task. In any case, this was the paper. Let me know your ideas. Let me know how you think this year's experiment is going to turn out. Are we going to find more subjectivity? Are we going to find less? How much disagreement do you think we're going to find? This is going to be interesting. So yeah, thanks for listening, and I'll see you next time.
[ { "start": 0, "end": 5.96, "text": " Hi there, today we'll look at inconsistency in conference peer review, revisiting the" }, { "start": 5.96, "end": 12.8, "text": " 2014 NeurIPS experiment by Corina Cortes and Neil D. Lawrence, which were actually the" }, { "start": 12.8, "end": 16.080000000000002, "text": " chairs of the 2014 NeurIPS conference." }, { "start": 16.080000000000002, "end": 23.080000000000002, "text": " So they are going to have access to some data that the rest of us sadly don't have access" }, { "start": 23.080000000000002, "end": 24.080000000000002, "text": " to." }, { "start": 24.08, "end": 30.759999999999998, "text": " So it allows them to make pretty cool research on how conference reviewing works and whether" }, { "start": 30.759999999999998, "end": 37.28, "text": " or not it actually can determine the quality of a paper or how much of it is just random" }, { "start": 37.28, "end": 40.12, "text": " subjective reviewer decisions." }, { "start": 40.12, "end": 46.92, "text": " Now this paper particularly here takes up the papers that were subject to the 2014 NeurIPS" }, { "start": 46.92, "end": 50.3, "text": " experiment and tracks them over time." }, { "start": 50.3, "end": 57.32, "text": " So it looks at the papers that were submitted, how did they perform in the subsequent years," }, { "start": 57.32, "end": 63.12, "text": " meaning how many citations that they accumulate, both for the accepted and for the rejected" }, { "start": 63.12, "end": 64.64, "text": " papers." }, { "start": 64.64, "end": 68.75999999999999, "text": " And they find some pretty interesting results right here." }, { "start": 68.75999999999999, "end": 70.12, "text": " So we'll dive into this." }, { "start": 70.12, "end": 75.3, "text": " The paper is not too long and the conclusions are fairly straightforward." }, { "start": 75.3, "end": 81.2, "text": " I still think it's really cool that people actually follow up on this work." }, { "start": 81.2, "end": 88.32, "text": " So for those of you who don't know, the 2014 NeurIPS experiment, that is the wrong color," }, { "start": 88.32, "end": 95.6, "text": " the 2014 NeurIPS experiment was an experiment in assessing how much of review of conference" }, { "start": 95.6, "end": 99.67999999999999, "text": " review is random essentially." }, { "start": 99.67999999999999, "end": 104.88, "text": " So what you did was, and I think they have a little section about this here." }, { "start": 104.88, "end": 108.24, "text": " So they selected about 10% of the submissions." }, { "start": 108.24, "end": 115.06, "text": " These were 170 papers and these would undergo review by two separate committees." }, { "start": 115.06, "end": 121.39999999999999, "text": " So whereas usually you have a paper that goes into a review, let's call that a committee," }, { "start": 121.39999999999999, "end": 126, "text": " which is a bunch of reviewers and an area chair and they make the decisions of whether" }, { "start": 126, "end": 128.44, "text": " to accept or to reject." }, { "start": 128.44, "end": 130.35999999999999, "text": " And yeah, at the end you have a decision." }, { "start": 130.35999999999999, "end": 134.35999999999999, "text": " So in this experiment, you would take a paper, you would actually give it to two different" }, { "start": 134.36, "end": 137.12, "text": " committees, committee one and committee two." 
}, { "start": 137.12, "end": 141.92000000000002, "text": " Committee one would only be selected from kind of one half of the reviewer pool and" }, { "start": 141.92000000000002, "end": 145.18, "text": " committee two would only be selected from the other half." }, { "start": 145.18, "end": 152.88000000000002, "text": " These were random assignments to the two pools and also the papers who participated were" }, { "start": 152.88000000000002, "end": 155.4, "text": " randomly selected." }, { "start": 155.4, "end": 160.08, "text": " So each of these committees would reach their own decision, accept or reject." }, { "start": 160.08, "end": 166.24, "text": " And of course, the interesting part is how many of those agree or how many of those disagree" }, { "start": 166.24, "end": 167.52, "text": " with each other." }, { "start": 167.52, "end": 174.64000000000001, "text": " And by the way, the paper would be accepted finally, if the max, so if either of the committees" }, { "start": 174.64000000000001, "end": 176.76000000000002, "text": " would accept the paper." }, { "start": 176.76000000000002, "end": 183.12, "text": " And if I recall correctly, this year's NeurIPS conference actually repeats that experiment" }, { "start": 183.12, "end": 185.12, "text": " from 2014." }, { "start": 185.12, "end": 190.4, "text": " So we're going to have another data point in hopefully assessing how conference reviewing" }, { "start": 190.4, "end": 194.72, "text": " has developed over the years, whether it's gotten better or actually worse." }, { "start": 194.72, "end": 198.44, "text": " All right, so that was the experiment 2014." }, { "start": 198.44, "end": 203.92000000000002, "text": " But by the way, the authors here have decided that the name change is retroactive." }, { "start": 203.92000000000002, "end": 204.92000000000002, "text": " I never know." }, { "start": 204.92000000000002, "end": 209.54000000000002, "text": " I never know when talking about old NeurIPS conferences, whether I'm supposed to say it" }, { "start": 209.54000000000002, "end": 213.36, "text": " was NIPS 2014 or NeurIPS." }, { "start": 213.36, "end": 218.52, "text": " In any case, in this paper, we're doing NeurIPS." }, { "start": 218.52, "end": 221.52, "text": " So what was the outcome of that experiment?" }, { "start": 221.52, "end": 222.92000000000002, "text": " And that's pretty interesting." }, { "start": 222.92000000000002, "end": 231.04000000000002, "text": " Namely, here you can see these are still 2014 numbers, committee one and committee two split" }, { "start": 231.04000000000002, "end": 232.04000000000002, "text": " up." }, { "start": 232.04000000000002, "end": 236.24, "text": " So it's not the same committee one, of course, but committee one would always be reviewers" }, { "start": 236.24, "end": 240.8, "text": " selected from kind of the first half of the population, committee two from the second" }, { "start": 240.8, "end": 241.8, "text": " half." }, { "start": 241.8, "end": 246.16000000000003, "text": " They did agree on most of the papers, as you can see here." }, { "start": 246.16000000000003, "end": 251.12, "text": " For 101 papers, they agreed to reject, for 22, they agreed to accept." }, { "start": 251.12, "end": 257.58000000000004, "text": " However, for 43 of the papers, one committee would accept and the other one would actually" }, { "start": 257.58000000000004, "end": 258.92, "text": " reject." }, { "start": 258.92, "end": 264.52, "text": " So for about 25% of the papers, the two committees would disagree." 
}, { "start": 264.52, "end": 270.8, "text": " 25%, it's, you know, it sounds it's a lot, but it doesn't sound like that much." }, { "start": 270.8, "end": 275.72, "text": " But if you look at it in a different way, where they say right here, if the conference" }, { "start": 275.72, "end": 281.96000000000004, "text": " reviewing had been run with a different committee, only half of the papers presented at the conference" }, { "start": 281.96000000000004, "end": 283.64, "text": " would have been the same." }, { "start": 283.64, "end": 288.96000000000004, "text": " So this is looking at if you'd, for example, always go with committee one, you would have" }, { "start": 288.96000000000004, "end": 290.68, "text": " these papers." }, { "start": 290.68, "end": 294.96000000000004, "text": " But if you would always go with committee two, you would have these papers." }, { "start": 294.96000000000004, "end": 299.92, "text": " Therefore, but the simple selection of the committee determines about half the papers" }, { "start": 299.92, "end": 300.92, "text": " at the conference." }, { "start": 300.92, "end": 305.72, "text": " So if you're at the conference, you walk through the big halls of posters, or you look at the" }, { "start": 305.72, "end": 314.22, "text": " proceedings, you have to keep in mind that half of the papers are there only purely because" }, { "start": 314.22, "end": 320.32, "text": " of the random choice of or not purely, but they wouldn't be there." }, { "start": 320.32, "end": 327.04, "text": " Had the reviewing committee been a different one, half the papers, that's kind of crazy." }, { "start": 327.04, "end": 331.98, "text": " And of course, this sparked a lot of discussion right here." }, { "start": 331.98, "end": 336.6, "text": " So this is the outset, this was the results from that time." }, { "start": 336.6, "end": 340.54, "text": " And now we're going into new analysis." }, { "start": 340.54, "end": 344.56, "text": " So they do three different distinct points of analysis." }, { "start": 344.56, "end": 350.74, "text": " The first one is they do the title is called reviewer calibration." }, { "start": 350.74, "end": 357.84000000000003, "text": " So they try to figure out what portion of a reviewers assessment of a paper is, let's" }, { "start": 357.84000000000003, "end": 362.02, "text": " say objective, and what portion is subjective." }, { "start": 362.02, "end": 368.12, "text": " So what portion of a score is simply due to the reviewers subjective feelings about the" }, { "start": 368.12, "end": 374.24, "text": " paper that doesn't match with any other reviewers scores." }, { "start": 374.24, "end": 381.28000000000003, "text": " So here you can see this, for example, what you can do is you can build a model, you can" }, { "start": 381.28000000000003, "end": 387.24, "text": " build a model, you can say, why ij, that's the score that the jth reviewer gives to the" }, { "start": 387.24, "end": 388.8, "text": " ith paper." }, { "start": 388.8, "end": 393.96000000000004, "text": " And you know, being the conference chairs, these authors here would have prime access" }, { "start": 393.96000000000004, "end": 395.40000000000003, "text": " to that data." }, { "start": 395.40000000000003, "end": 402.12, "text": " So what you observe is why now you can say, we assume this is a combination of three things." 
}, { "start": 402.12, "end": 407.24, "text": " First of all, we assume that there is some sort of a objective paper quality, which is" }, { "start": 407.24, "end": 408.24, "text": " fi." }, { "start": 408.24, "end": 410.42, "text": " This is the objective quality of the paper." }, { "start": 410.42, "end": 415, "text": " This is actually what the reviewers are trying to predict." }, { "start": 415, "end": 422.52, "text": " So when the reviewer posts the number y into the system, they're trying their best to actually" }, { "start": 422.52, "end": 424.68, "text": " assess fi." }, { "start": 424.68, "end": 429.32, "text": " However, there is also this bj right here." }, { "start": 429.32, "end": 434, "text": " And this is the bias that the jth reviewer has in calibration." }, { "start": 434, "end": 440.32, "text": " So not everyone, not everyone sees the one through 10 or one through nine scale that" }, { "start": 440.32, "end": 442.56, "text": " we have in the same fashion." }, { "start": 442.56, "end": 450.08, "text": " And therefore, what's like a three to me might be a five to you." }, { "start": 450.08, "end": 456.76, "text": " So we have to correct somehow for this and the inclusion of this bj factor is how we" }, { "start": 456.76, "end": 458.38, "text": " account for that." }, { "start": 458.38, "end": 463.8, "text": " And then lastly, you have this eij factor right here." }, { "start": 463.8, "end": 467.54, "text": " And this is the subjective portion of the score." }, { "start": 467.54, "end": 471.8, "text": " So this is independent of the objective quality of the paper." }, { "start": 471.8, "end": 478.84, "text": " This is sort of the subjective bonus or penalty that reviewer j gives to paper i." }, { "start": 478.84, "end": 484.8, "text": " And our goal is going to be to figure out how do these two numbers compare to each other," }, { "start": 484.8, "end": 493.56, "text": " how much of the score is objective versus subjective after we have calibrated for reviewer" }, { "start": 493.56, "end": 499.12, "text": " for general reviewer bias for calibration bias, let's say." }, { "start": 499.12, "end": 500.58000000000004, "text": " Keep in mind, this is a model." }, { "start": 500.58000000000004, "end": 502.68, "text": " This is how we imagine the world." }, { "start": 502.68, "end": 506.08000000000004, "text": " All we observe is this y thing right here." }, { "start": 506.08000000000004, "end": 511.76, "text": " What we can do is of course, we can put up a linear system of all the scores, right?" }, { "start": 511.76, "end": 517.64, "text": " And of all the scores, because every reviewer does give more than one score in this conference" }, { "start": 517.64, "end": 521.96, "text": " and every paper gets more than one reviewers scores." }, { "start": 521.96, "end": 523.74, "text": " So we can put up a linear system." }, { "start": 523.74, "end": 530.3199999999999, "text": " But it turns out this is over parameterized because you only have as many numbers as you" }, { "start": 530.3199999999999, "end": 532.7, "text": " have these parameters right here." }, { "start": 532.7, "end": 540.72, "text": " So the rest both parameters, they don't, you don't have enough data points to assess that." }, { "start": 540.72, "end": 545.2, "text": " Now as much fun as over parameterized models are in deep learning, they're actually not" }, { "start": 545.2, "end": 548.26, "text": " that good if you want to estimate a linear system." 
}, { "start": 548.26, "end": 553.5600000000001, "text": " So what people do, they come up with regularizers and Bayesian approaches and yada, yada, yada." }, { "start": 553.5600000000001, "end": 557.38, "text": " I'll skip all of this to just give you the numbers." }, { "start": 557.38, "end": 564.96, "text": " So the model that these authors come up with determines that the factors of the linear" }, { "start": 564.96, "end": 567, "text": " systems are as follows." }, { "start": 567, "end": 571.46, "text": " This here is the factor that goes with the fi." }, { "start": 571.46, "end": 576.2, "text": " This one is the one that goes with the bj and this one is the one that goes with the" }, { "start": 576.2, "end": 578.52, "text": " ej." }, { "start": 578.52, "end": 584.72, "text": " And you see you, you, you, you pull out this one and then you simply compare the number" }, { "start": 584.72, "end": 590.5, "text": " on the left to the number on the right and you'll see they're almost exactly the same." }, { "start": 590.5, "end": 597.76, "text": " And that means and they formulate this here, in other words, 50% of a typical reviewer's" }, { "start": 597.76, "end": 605.16, "text": " score is coming from opinion that is particular to that reviewer and not shared with the other" }, { "start": 605.16, "end": 607.3, "text": " reviewers." }, { "start": 607.3, "end": 608.64, "text": " This figure may seem large." }, { "start": 608.64, "end": 610.08, "text": " Sorry about that." }, { "start": 610.08, "end": 617.44, "text": " This figure may seem large, they say, but in retrospect, it's perhaps not surprising." }, { "start": 617.44, "end": 623.9200000000001, "text": " So this is pretty, I guess this is pretty surprising to me, but it is not that, it is" }, { "start": 623.9200000000001, "end": 625.48, "text": " not that I didn't expect it." }, { "start": 625.48, "end": 631.6400000000001, "text": " And I think anyone who's participated in conference peer review would expect a number that is" }, { "start": 631.6400000000001, "end": 638.4000000000001, "text": " in approximately this range because we know that the review process is pretty noisy." }, { "start": 638.4000000000001, "end": 646.6800000000001, "text": " And very, very often individual reviewers just kind of give weird scores that you don't understand." }, { "start": 646.68, "end": 654.5999999999999, "text": " And here's the reason you don't understand because it's the source of them are subjective" }, { "start": 654.5999999999999, "end": 658.8, "text": " and largely not shared by other reviewers." }, { "start": 658.8, "end": 666, "text": " So having figured that out, having figured out that about 50% of the variation is due" }, { "start": 666, "end": 670.88, "text": " to just subjective feeling of a reviewer about a paper." }, { "start": 670.88, "end": 676.4799999999999, "text": " Now they sort of try to validate their findings." }, { "start": 676.48, "end": 678.52, "text": " And for that they run a simulation." }, { "start": 678.52, "end": 685.16, "text": " So the simulation is, it's a simulated conference." }, { "start": 685.16, "end": 690.72, "text": " So we assume that each paper was scored according to the model we've given above, and we estimated" }, { "start": 690.72, "end": 696.3000000000001, "text": " the accept consistency through averaging across 100,000 samples." }, { "start": 696.3000000000001, "end": 700.9, "text": " So now they're simulating the conference with this experiment done." 
}, { "start": 700.9, "end": 707.4, "text": " And they ask themselves, if this is really the correct model, then we should get back," }, { "start": 707.4, "end": 713.1999999999999, "text": " we should get back a consistency of the 50% we found above." }, { "start": 713.1999999999999, "end": 721.16, "text": " So because above the results of the experiments were that there was about a 50% consistency" }, { "start": 721.16, "end": 724.36, "text": " in acceptance in the experiment." }, { "start": 724.36, "end": 729, "text": " And now they go and they look at all the papers and all the scores and they determine that" }, { "start": 729, "end": 733, "text": " there is about a 50% subjectivity in scoring." }, { "start": 733, "end": 737.52, "text": " And now they ask themselves, do these two numbers match?" }, { "start": 737.52, "end": 742.4, "text": " And they run a simulation where every reviewer has a 50% subjectivity." }, { "start": 742.4, "end": 750.88, "text": " And they ask themselves, if we simulate this splitting up into two committees, and then" }, { "start": 750.88, "end": 757.94, "text": " every committee agrees by themselves, do we see the numbers that we found in the experiment?" }, { "start": 757.94, "end": 760.44, "text": " And the answer is yes, actually." }, { "start": 760.44, "end": 769.0400000000001, "text": " So you can see these are conferences for a bunch of different scenarios, namely for different" }, { "start": 769.0400000000001, "end": 774.2600000000001, "text": " number of reviewers, as you can see here, these are reviewers per committee." }, { "start": 774.2600000000001, "end": 780, "text": " So random means there is no reviewer per committee, committee decisions are just random." }, { "start": 780, "end": 787.2800000000001, "text": " And you can see that as the accept rate of the conference goes up, the accept precision" }, { "start": 787.28, "end": 795.1, "text": " of the committees go up because they simply they would more papers are accepted." }, { "start": 795.1, "end": 802.1999999999999, "text": " And therefore, more papers would be the same if you were to change the committee." }, { "start": 802.1999999999999, "end": 806.68, "text": " What we're interested in is, of course, the one with three reviewers, which is the most" }, { "start": 806.68, "end": 811.02, "text": " common reviewers scenario in these conferences." }, { "start": 811.02, "end": 813.72, "text": " And that's this curve right here." }, { "start": 813.72, "end": 821.24, "text": " So the way to read this is that, for example, if the conference had an accept rate of 50%," }, { "start": 821.24, "end": 832.6600000000001, "text": " right here, then we would expect a reviewer consistency or an accept precision of 0.75" }, { "start": 832.6600000000001, "end": 841.94, "text": " of 75%, which means that if we were to switch the reviewers for a particular or for all" }, { "start": 841.94, "end": 847.1600000000001, "text": " the papers, 75% of the paper would still be the same." }, { "start": 847.1600000000001, "end": 852.48, "text": " Remember that in our experiment, only 50% of the papers were still the same if we switched" }, { "start": 852.48, "end": 853.8800000000001, "text": " committee." }, { "start": 853.8800000000001, "end": 857.5600000000001, "text": " But the conference also didn't have a 50% accept rate." 
}, { "start": 857.5600000000001, "end": 861.84, "text": " So for that, we actually need to go to the accept rate of the conference, which was something" }, { "start": 861.84, "end": 864.36, "text": " like 23% right here." }, { "start": 864.36, "end": 869.6400000000001, "text": " And then if we look that up, we are at about a 60% accept precision." }, { "start": 869.64, "end": 874.96, "text": " Now, this might still be away from the 50% we found in the experiment." }, { "start": 874.96, "end": 885.24, "text": " However, the experiment had so little data that if you calculate the bounds on what the" }, { "start": 885.24, "end": 892.1999999999999, "text": " true accept precision was from that experiment, you can determine that it was between 38 and" }, { "start": 892.1999999999999, "end": 894.48, "text": " 64%." }, { "start": 894.48, "end": 897, "text": " And the exact number we got is 61%." }, { "start": 897, "end": 900.84, "text": " So this is still within the bounds of what we found in the experiment." }, { "start": 900.84, "end": 902.48, "text": " So pretty interesting." }, { "start": 902.48, "end": 909.64, "text": " This actually means that the model they put up is a close enough approximation to reality" }, { "start": 909.64, "end": 915.08, "text": " such that it predicts the experiment's outcome." }, { "start": 915.08, "end": 919.72, "text": " And this gives us a little bit of a this gives us a little bit validation that we're on a" }, { "start": 919.72, "end": 921.38, "text": " good track right here." }, { "start": 921.38, "end": 929.88, "text": " So we can sort of confidently say that about half of a reviewers decision on a particular" }, { "start": 929.88, "end": 936.92, "text": " paper essentially comes down to subjectivity is consistent with what we found in the experiment." }, { "start": 936.92, "end": 943.7, "text": " And it'd be interesting to see how this develops this year when we repeat the experiment." }, { "start": 943.7, "end": 951.6400000000001, "text": " So lastly, what they were trying to figure out is, well, are these reviews even worth" }, { "start": 951.6400000000001, "end": 957.6, "text": " it, so to say, do they actually predict how good a paper is, and you know, how do you" }, { "start": 957.6, "end": 962.76, "text": " measure how good a paper is, of course, by the number of citations." }, { "start": 962.76, "end": 968.2, "text": " So here they define the citation impact as the log of the number of citations." }, { "start": 968.2, "end": 974.48, "text": " And yes, there is a debate about whether citations really mean a paper is good or influential" }, { "start": 974.48, "end": 976.1600000000001, "text": " or blah, blah, blah." }, { "start": 976.1600000000001, "end": 980.76, "text": " But we don't, for better or worse, we don't have a different measure right now than number" }, { "start": 980.76, "end": 982.22, "text": " of citations." }, { "start": 982.22, "end": 986.36, "text": " And it's been seven years, which is like three generations in machine learning." }, { "start": 986.36, "end": 995.1400000000001, "text": " So there is a long enough time that these papers had to accumulate citations." }, { "start": 995.14, "end": 999.42, "text": " So let's just look at the accepted papers." }, { "start": 999.42, "end": 1007, "text": " Do the scores that the reviewers give to the papers predict in any way whether or not the" }, { "start": 1007, "end": 1009.64, "text": " paper is going to be cited more or less?" 
}, { "start": 1009.64, "end": 1013.04, "text": " So do higher scores indicate more citations?" }, { "start": 1013.04, "end": 1015.4399999999999, "text": " And the answer is no, not at all." }, { "start": 1015.4399999999999, "end": 1016.64, "text": " So here is a plot." }, { "start": 1016.64, "end": 1020.72, "text": " The correlation is 0.05." }, { "start": 1020.72, "end": 1029.04, "text": " This is ever so slightly statistically significant, but not really." }, { "start": 1029.04, "end": 1036.4, "text": " So you can, like at least for this particular conference right here, there's no correlation" }, { "start": 1036.4, "end": 1044.92, "text": " between reviewer scores and between reviewer scores and impact of the paper in the future." }, { "start": 1044.92, "end": 1052.3600000000001, "text": " It becomes a little bit interesting when you ask specifically." }, { "start": 1052.3600000000001, "end": 1057.52, "text": " So because here the question is, you know, is the paper novel?" }, { "start": 1057.52, "end": 1059.0800000000002, "text": " Is it correct?" }, { "start": 1059.0800000000002, "end": 1062.42, "text": " Is it well written and so on?" }, { "start": 1062.42, "end": 1065.64, "text": " These are not necessarily indicators of significance, right?" }, { "start": 1065.64, "end": 1070.64, "text": " If you accept the paper to a conference, only a small part of it is, is it significant?" }, { "start": 1070.64, "end": 1077.24, "text": " If you actually ask reviewers, do you think this paper will have a potentially major impact" }, { "start": 1077.24, "end": 1084.5800000000002, "text": " or not, you get a slightly higher correlation, but also not really, which means that reviewers" }, { "start": 1084.5800000000002, "end": 1091.96, "text": " are kind of bad at estimating whether any given paper will have a big impact or not." }, { "start": 1091.96, "end": 1098.8000000000002, "text": " Though to be fair for most papers, the answer is probably no by default." }, { "start": 1098.8, "end": 1107, "text": " However, the interesting part is when you ask them about their confidence in their rating" }, { "start": 1107, "end": 1114.76, "text": " and it is, if I understand correctly, it doesn't even matter which rating, but for the rating" }, { "start": 1114.76, "end": 1118.6, "text": " that you give at these conferences, you have to provide a confidence score." }, { "start": 1118.6, "end": 1124.04, "text": " Like you say, okay, I think this paper is really good, but I'm not very confident." }, { "start": 1124.04, "end": 1129.54, "text": " And if you simply correlate the confidence scores, as you can see here, the average confidence" }, { "start": 1129.54, "end": 1136.92, "text": " over all your sort of confidences of the paper with the impact, then you do get a slight" }, { "start": 1136.92, "end": 1139.6399999999999, "text": " correlation, which is interesting, right?" }, { "start": 1139.6399999999999, "end": 1149.1399999999999, "text": " So the authors here argue that it might be that there might be something like clarity" }, { "start": 1149.1399999999999, "end": 1150.1399999999999, "text": " in the paper." }, { "start": 1150.14, "end": 1156.44, "text": " So if a paper is written very clearly, then you will also be able to understand it better" }, { "start": 1156.44, "end": 1160.3600000000001, "text": " as a reviewer, which makes your confidence higher." 
}, { "start": 1160.3600000000001, "end": 1166.16, "text": " But also, since the paper is more clear, it means that the rest of the world will have" }, { "start": 1166.16, "end": 1172.6000000000001, "text": " an easier time understanding the paper and therefore cite it more often." }, { "start": 1172.6, "end": 1181.48, "text": " So this is a good hypothesis, but it's quite interesting that the confidence in papers" }, { "start": 1181.48, "end": 1188, "text": " seems to predict the impact better than the actual assessment of the impact." }, { "start": 1188, "end": 1189, "text": " That's astounding." }, { "start": 1189, "end": 1195.48, "text": " It's not super astounding that confidence by itself would predict it, but that it does" }, { "start": 1195.48, "end": 1201.1999999999998, "text": " so more than if you directly ask people." }, { "start": 1201.2, "end": 1203.1200000000001, "text": " I wonder what else we can ask." }, { "start": 1203.1200000000001, "end": 1211.28, "text": " I wonder what weird questions we can ask that will then up correlating with the future impact." }, { "start": 1211.28, "end": 1214.8, "text": " Do you like the colors of the paper?" }, { "start": 1214.8, "end": 1216.74, "text": " Do you like the pictures?" }, { "start": 1216.74, "end": 1218.94, "text": " So these were for accepted papers." }, { "start": 1218.94, "end": 1224.14, "text": " They also interestingly trace the fate of the rejected papers." }, { "start": 1224.14, "end": 1230.88, "text": " So they say only 414 were presented at the final conference." }, { "start": 1230.88, "end": 1236.92, "text": " So they want to trace the rejected papers and they go through a lot of work to try to" }, { "start": 1236.92, "end": 1240.0600000000002, "text": " figure out where these papers ended up." }, { "start": 1240.0600000000002, "end": 1246.92, "text": " So they search for papers with similar titles and authors or same titles and authors." }, { "start": 1246.92, "end": 1253.5200000000002, "text": " And of course, this is not a perfect process, but it seems like they've been able to trace" }, { "start": 1253.5200000000002, "end": 1256.74, "text": " a lot of these papers to their final destination." }, { "start": 1256.74, "end": 1263.8, "text": " You can see a lot of papers are discarded or some are simply posted on an archive or" }, { "start": 1263.8, "end": 1265.2, "text": " somewhere else." }, { "start": 1265.2, "end": 1269.84, "text": " Of course, the discarded papers, you don't know if they somehow morphed into other papers" }, { "start": 1269.84, "end": 1274.04, "text": " or something like this." }, { "start": 1274.04, "end": 1281, "text": " But it's still pretty interesting to see, though they say there are various error sources" }, { "start": 1281, "end": 1282.84, "text": " in these plots." }, { "start": 1282.84, "end": 1287.72, "text": " Lastly, yeah, here is the fate of the rejected papers." }, { "start": 1287.72, "end": 1292.8, "text": " Now they don't say exactly what blue and green means in this particular thing." }, { "start": 1292.8, "end": 1299.24, "text": " In other plots in the same papers, they differentiate, for example, between papers that have been" }, { "start": 1299.24, "end": 1304.04, "text": " accepted somewhere else ultimately and papers that have not been or that they have not been" }, { "start": 1304.04, "end": 1305.26, "text": " able to trace." }, { "start": 1305.26, "end": 1308.04, "text": " So this might be blue and green." }, { "start": 1308.04, "end": 1309.04, "text": " I'm not sure." 
}, { "start": 1309.04, "end": 1311.9199999999998, "text": " I haven't been able to maybe I'm just stupid at reading." }, { "start": 1311.92, "end": 1318.28, "text": " But as you can see, if you look at the rejected papers, so this is the calibrated quality" }, { "start": 1318.28, "end": 1323.0800000000002, "text": " score for the rejected papers." }, { "start": 1323.0800000000002, "end": 1329.76, "text": " And here you can see that there is in fact a correlation, which means that for the rejected" }, { "start": 1329.76, "end": 1336.92, "text": " papers, the assessment of the reviewers really does correlate with how the papers will end" }, { "start": 1336.92, "end": 1338.88, "text": " up doing ultimately." }, { "start": 1338.88, "end": 1344.92, "text": " So I'm going to guess, well, if the citation count is in here, I'm going to guess the discarded" }, { "start": 1344.92, "end": 1347.5200000000002, "text": " paper must not be in here." }, { "start": 1347.5200000000002, "end": 1349.48, "text": " Yeah, sorry." }, { "start": 1349.48, "end": 1356.2, "text": " But the conclusion is that for the rejected papers, reviewers can tell whether they're" }, { "start": 1356.2, "end": 1357.7600000000002, "text": " better or worse." }, { "start": 1357.7600000000002, "end": 1360.22, "text": " For the accepted papers, not so much." }, { "start": 1360.22, "end": 1362.0600000000002, "text": " And that's what they said at the beginning." }, { "start": 1362.0600000000002, "end": 1368.8400000000001, "text": " The review process is probably good at identifying bad papers, but bad at identifying good papers." }, { "start": 1368.84, "end": 1377.8799999999999, "text": " And this is it's not too surprising because bad papers, you know, you can find it's really" }, { "start": 1377.8799999999999, "end": 1382.08, "text": " easy to recognize a very poor paper." }, { "start": 1382.08, "end": 1388.04, "text": " But it's it's harder to recognize really how good a paper is, you know, compared to other" }, { "start": 1388.04, "end": 1390.12, "text": " good papers." }, { "start": 1390.12, "end": 1393.1999999999998, "text": " So that was the paper they give some recommendations." }, { "start": 1393.2, "end": 1404.44, "text": " For example, they say, well, maybe we should we should assess papers on on on some on different" }, { "start": 1404.44, "end": 1407.04, "text": " on different criteria than we do now." }, { "start": 1407.04, "end": 1414.4, "text": " But they do guard they do warn against saying we should do away with with subjectivity all" }, { "start": 1414.4, "end": 1415.4, "text": " together." }, { "start": 1415.4, "end": 1422.4, "text": " Because, you know, as annoying as the subjectivity is, they argue is it also guards against sort" }, { "start": 1422.4, "end": 1425.92, "text": " of the collective dominance." }, { "start": 1425.92, "end": 1432.5600000000002, "text": " So it guards against sort of making consistent mistakes." }, { "start": 1432.5600000000002, "end": 1440.7800000000002, "text": " If all the like if the entire conference for example, if the entire conference makes consistent" }, { "start": 1440.7800000000002, "end": 1447.52, "text": " mistakes in in some direction, then the subjectivity might counter that a little bit." }, { "start": 1447.52, "end": 1449.8000000000002, "text": " I'm not sure if that's a super good argument." }, { "start": 1449.8, "end": 1456.6, "text": " I am generally for noisy processes over super duper rigid ones." 
}, { "start": 1456.6, "end": 1461.36, "text": " It seems though that the conference review right now is a bit too noisy." }, { "start": 1461.36, "end": 1469.56, "text": " I'd rather do away with just having three reviewers and not having this accept barrier." }, { "start": 1469.56, "end": 1470.98, "text": " This is my personal opinion." }, { "start": 1470.98, "end": 1474.5, "text": " I would just do away with the accept barrier altogether." }, { "start": 1474.5, "end": 1478.8799999999999, "text": " You know, you submit to a conference, you get a bunch of scores and then you have the" }, { "start": 1478.88, "end": 1479.88, "text": " scores." }, { "start": 1479.88, "end": 1487.0400000000002, "text": " Like why do we need to divide papers up into accepted and rejected or, you know, like it" }, { "start": 1487.0400000000002, "end": 1493.16, "text": " seems better to just put papers out there and let the future let the future researchers" }, { "start": 1493.16, "end": 1499.0800000000002, "text": " assess them in retrospect, rather than having three random people with highly subjective" }, { "start": 1499.0800000000002, "end": 1501.48, "text": " opinions assess them." }, { "start": 1501.48, "end": 1505.8000000000002, "text": " But yes, probably a bit of noise is good in a process like this." }, { "start": 1505.8000000000002, "end": 1508, "text": " If you do a process like this." }, { "start": 1508, "end": 1515.2, "text": " They also say, well, maybe we should not put that much value at publishing at top tier" }, { "start": 1515.2, "end": 1516.2, "text": " conferences." }, { "start": 1516.2, "end": 1521.28, "text": " Now, I don't know how that's gonna work, you know, like whenever, whenever." }, { "start": 1521.28, "end": 1528.66, "text": " And yeah, I wish I wish as well that we could like change the collective the collective thinking" }, { "start": 1528.66, "end": 1530.96, "text": " about our field." }, { "start": 1530.96, "end": 1534.68, "text": " I don't I don't see that as a super easy task, though." }, { "start": 1534.68, "end": 1536.6, "text": " In any case, this was the paper." }, { "start": 1536.6, "end": 1539.28, "text": " Let me know your ideas." }, { "start": 1539.28, "end": 1543.12, "text": " Let me know how you think this year's experiment is going to turn out." }, { "start": 1543.12, "end": 1546.32, "text": " Like are we going to find more subjectivity?" }, { "start": 1546.32, "end": 1548.6599999999999, "text": " Are we going to find less?" }, { "start": 1548.6599999999999, "end": 1552.04, "text": " How much disagreement do you think we're going to find?" }, { "start": 1552.04, "end": 1553.9199999999998, "text": " This is going to be interesting." }, { "start": 1553.92, "end": 1567.44, "text": " So yeah, thanks for listening and I'll see you next time." } ]
W-O7AZNzbzQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
DDPM - Diffusion Models Beat GANs on Image Synthesis (Machine Learning Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "diffusion models", "diffusion model", "ddpm", "ddim", "denoising autoencoders", "generative models", "generative models deep learning", "gan alternatives", "alternatives to gans", "computer vision generative", "machine learning image generation", "openai diffusion", "openai gan", "variational autoencoder", "log likelihood", "variational lower bound" ]
#ddpm #diffusionmodels #openai GANs have dominated the image generation space for the majority of the last decade. This paper shows for the first time, how a non-GAN model, a DDPM, can be improved to overtake GANs at standard evaluation metrics for image generation. The produced samples look amazing and other than GANs, the new model has a formal probabilistic foundation. Is there a future for GANs or are Diffusion Models going to overtake them for good? OUTLINE: 0:00 - Intro & Overview 4:10 - Denoising Diffusion Probabilistic Models 11:30 - Formal derivation of the training loss 23:00 - Training in practice 27:55 - Learning the covariance 31:25 - Improving the noise schedule 33:35 - Reducing the loss gradient noise 40:35 - Classifier guidance 52:50 - Experimental Results Paper (this): https://arxiv.org/abs/2105.05233 Paper (previous): https://arxiv.org/abs/2102.09672 Code: https://github.com/openai/guided-diffusion Abstract: We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for sample quality using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.85 on ImageNet 512×512. We release our code at this https URL Authors: Alex Nichol, Prafulla Dhariwal Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello! These are generated images from a new model, actually a new class of model. It's been around for a while, but for the first time this class of model has been pushed to the point where the images it produces not only look really nice, like something you've come to expect from the latest and greatest GAN models, but are also better on the standard metrics we use to evaluate GANs, specifically the FID, the Fréchet inception distance. The paper we're going to talk about today is called Diffusion Models Beat GANs on Image Synthesis. It's by Prafulla Dhariwal and Alex Nichol of OpenAI. Already in the title they're pulling no punches: this just beats GANs. In this paper they're mainly talking about improvements to this new class of models, which they call diffusion models. I would like to dive a bit more into what diffusion models are, instead of just telling you what the improvements of this paper are, because I think most people haven't come into contact with these types of models yet. They thoroughly reference another paper, called Improved Denoising Diffusion Probabilistic Models, by the same authors, and that paper develops these new models more than this one does. The paper here, as you can see, is just three months younger than the other paper; this is really close, and I think the earlier paper gives more insight into what these models are. That being said, by the name "Improved" you can also see that it is not the seminal paper for these types of models either; if you're interested in that, you have to go back even further. However, we're going to look at both that paper and the new one, and see what all the things are that lead to this new class of models being better than GANs. Specifically, we're going to talk about DDPMs, Denoising Diffusion Probabilistic Models. They're a little bit like a variational autoencoder; we'll go through that. If you feel that this video is helpful, please do share it out. It's been a pleasure bringing this to a lot of people, and if you share it, there will just be more people, and we'll have more fun. They say that "denoising diffusion probabilistic models (DDPMs) are a class of generative models which have recently been shown to produce excellent samples. We show that with a few simple modifications, DDPMs can also achieve competitive log likelihoods while maintaining high sample quality." So in that paper they take these DDPM models and say: look, we can push these models to improve their log likelihood. There are a number of metrics that generative models track; it's not as easy as validation-set accuracy for a classifier, and log likelihood is one of those metrics. Here they say: well, we can get competitive log likelihood while maintaining high sample quality, which is a nice way of saying we don't beat GANs yet. In the next paper, the one I showed you before, they actually do beat GANs on the standard metrics, and the samples also look quite impressive. DDPMs have been around before, but they give a quick overview right here, which I think is quite appropriate for us to dive into. The philosophy, or the whole purpose behind this, is the following. Let's imagine I have an image of my house right here. I have an image of a house, and I define a process, what they call a forward noising process. This forward noising process takes the image and just adds a little bit of noise to it, epsilon noise that's sampled from some standard distribution like a Gaussian.
You just sample a bit of noise and you add it to the image, so you have the same house, but with a bit of noise on it. Then you do it again: you sample another bit of noise and add it, and again, and again. You do this over many steps; here they actually note that the previous authors were using a thousand steps, and that if they just increase that to four thousand steps, the log likelihoods get better. In any case, you do this for many steps, thousands of steps in this first instance. What are you going to end up with? The argument is that if you do this so many times, over so many steps, you end up with pure random noise, approximately distributed according to a normal distribution. You can actually prove that in the limit of infinitely many steps, this converges to just noise. Once you're done, there is no more information about the original image left; it's as if you had sampled directly from that distribution. So you have successfully defined a process that takes you from the data space to a known distribution, namely the normal distribution. Now here is the logic: what if we could invert this? If we just somehow could invert this mapping, if we could have a process that, given an image with some noise, could tell us what image that came from. Is that doable? It's at least thinkable. If I give you this image with some specks of noise on it, and I tell you, as the oracle: look, I've taken some image that already had a bit of noise on it, and I've added more. I don't tell you what the noise is, only that it comes from, whatever, a normal distribution. What was the original image? Now, looking at this image, you'll see: this could be a house. You're not quite sure, but something like this might be the original image, and that thing over there you're not really sure about; it might be noise. So you're going to sort of revert that process a little bit, knowing that this is how the image came to be. You, as a human, if I told you all this, could approximately reverse the process. That of course requires you to know something about these images; it requires you to know what a house looks like, because I don't tell you which pixels are noise and which aren't. That's the trick: if I just told you all the orange stuff is noise, it would be easy, but you see everything in mono color. Still, you can tell: okay, this here looks like it's from the image itself, but this here is just a speck that might be noise, maybe not; and this here, I'm pretty sure, is just noise and not part of the original image. So you could do that, and the question is: can we learn a function that does this reverse process? Such a function is, of course, going to be some kind of neural-network-ish thing. We want to learn a function where I give you an image with noise and I also tell you which time step we're at: this is maybe time step zero, t equals zero, then t equals one, t equals two, and so on.
If I tell you: okay, here is an image, this happened at t equals 50, can you give me the t-equals-49 image that it came from? That's the whole principle. We can generate training data for this neural network very easily, because we just take data and run it through the forward noising process; then we have plenty of training data for every step of this pipeline. In fact, we don't train a different phi function for every step: as you can see, the phi function simply takes the time as an input. It would certainly be possible to do otherwise, or to not tell it the time at all, but then it would have no clue where in the process it is. So if you do this, you can generate training data, and then the idea is that you can run the process in reverse and arrive at the original sample. And even more: because the end of the chain is actually the normal distribution, you can now sample random noise from that normal distribution, feed it to this process, and this process, which has learned to map the data distribution to the normal distribution and to reverse that mapping, will give you a data-distribution sample for the noise you fed in. That is the idea, and it's quite tricky to get to work, as you can imagine; but let's not forget that GANs have also been quite tricky to get to work, there has just been a bit more work going into GANs. Formally, this goes as follows. We sample x0 from the data distribution, and we define this forward noising process q, which produces x1 through xT (capital T being the end), by adding zero-mean Gaussian noise at time t with some variance beta_t. You define this variance schedule yourself; that's your choice, you choose what kind of noise you want to add. Ultimately, the distribution of the things you produce via that noising process, given that you start at the data sample x0, is simply defined as a product of distributions: you start with x0, then you go from x0 to x1, then from x1 to x2, and so on, and each of these steps is an independent application of noise. What you're saying is that the distribution of the next sample is a normal distribution centered at the previous sample, downscaled by a factor, with the variance given by the schedule. So you can see that the assumption here is that you use noise with a diagonal covariance matrix. This is, I guess, reasonable; it certainly makes computing things easier. The other thing to notice is that the Gaussian is centered at the last sample but downscaled by this factor of sqrt(1 − beta_t). I think this is again a choice by the modelers, but it also makes the math work out: if you didn't have it, then you'd start somewhere, add noise, sample, add noise, sample, and the magnitudes might grow indefinitely. You need to rescale things so that you can make the following statement: given sufficiently large T and a well-behaved schedule of beta, the latent xT, so the very last step, is nearly an isotropic Gaussian distribution.
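To make the verbal description concrete: written out, the forward process being narrated here is, as far as I can tell, the standard DDPM one,

$$
q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \qquad q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\big).
$$

And here is a minimal sketch of that process in code, just to illustrate that iterating it drives any input toward an isotropic Gaussian. The linear schedule from 1e-4 to 0.02 is a common choice in these papers, but treat the specific numbers as illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)         # the variance schedule (our choice)

x = rng.uniform(0.0, 1.0, size=(32, 32))   # a stand-in "image" x_0
for beta in betas:
    eps = rng.normal(size=x.shape)         # fresh zero-mean Gaussian noise
    x = np.sqrt(1 - beta) * x + np.sqrt(beta) * eps   # one step of q(x_t | x_{t-1})

# After enough steps, x_T is close to a standard isotropic Gaussian.
print(f"mean ~ {x.mean():.3f}, std ~ {x.std():.3f}")   # roughly 0 and 1
```

The sqrt(1 − beta) downscaling of the mean is exactly the rescaling discussed above: it keeps the variance from growing and makes the chain converge toward N(0, I).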
That's the entire point: do it like this, which is a choice, and if you take enough steps, you end up at an isotropic Gaussian distribution. Thus, if we know the exact reverse distribution, we can sample from the Gaussian and run the process in reverse to get a sample from the data distribution. Then they say: however, since the reverse distribution depends on the entire data distribution, we approximate it using a neural network, as follows. This statement can seem a bit weird at first. Why does the reverse step depend on the entire data distribution? It's very close to the forward step, and the forward step depends on nothing: you just define it, you just say I'm going to add random noise to something, and that's my next distribution; it only depends on the input image. The way to see that the reverse step depends on the entire data distribution is exactly what I said before. If I give you a picture, I'm not going to actually tell you where the noise is. I give you this picture and I tell you: this is a drawing from a very small child, because that's my drawing level, and I've just added a bunch of noise to it. Could you tell me what the original drawing was? This is very different from me saying: here is a drawing from a small child, please add noise to it. That's easy; I just do it. But if I ask you what the original image was, you have to take into account the entire world: you know how small children draw, what kinds of motifs they usually draw, and so on, and that's how you're able to say, well, it was probably something like this. That needs your knowledge of the entire data distribution, and that's why they say it right here. So they say: well, we can't have the entire data distribution, otherwise we wouldn't even have the problem in the first place. What we can do is approximate one of these steps using a neural network. So we have a neural network that takes as input, as I said, the noised version of the image, and it gives as output, not just "the image this came from", but a distribution over images it could have come from. And again, they model this as a Gaussian, and the neural network produces the mean and the covariance matrix, given the image. So the neural network is supposed to look at the image and decide: okay, what is the Gaussian distribution of images this probably came from?
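Written out, the parameterization being described is, as far as I can tell, the standard DDPM one:

$$
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big),
$$

with a single network, theta, that receives both the noisy image x_t and the time step t, exactly as discussed earlier.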
But certainly you can also train a neural network to do the reverse in one step; in fact, that's a little bit what GANs do. If you want to do it in this manner, though, where you model all the distributions, notice that this is a very different language from GANs: here everything is phrased in distributional semantics. If you want to claim "I model the reverse as a normal distribution", that is just not true if you take large steps. But for very tiny steps you can adequately argue that a normal distribution is okay, and of course it makes life easier after that. So they need the tiny steps because with tiny steps the modeling assumptions hold; also, I guess, it works better.

Then you can define the loss function. They say the combination of q and p is a variational autoencoder, and we can write the variational lower bound as follows. I'm not sure if I have ever gone over variational autoencoders, but it's very similar here. You can define this variational lower bound, which essentially boils down to saying: I would like the distribution that I want to model and the distribution my network actually outputs to be close together. This one is the reverse process that my neural network does, and that one is the thing I actually would like to model, the one that needs the entire data distribution. There are some other terms, but you can get around them, and the very last term you just assume is roughly a Gaussian. So really it comes down to: does the distribution your neural network outputs match the true reverse distribution?

And here is the proxy for "this needs the whole data distribution". If I tell you the process by which the data was noised and ask you for the reverse distribution of one of these steps, you can't possibly compute it accurately, because you don't know the data distribution. However, for this particular sample you can compute it, if I additionally give you x0: if you already know the clean result, you can calculate the distribution exactly. That's what they show here. The actual distribution you'd like to model is then itself a normal distribution; it depends, of course, on your noise scale, which is all over the place in these formulas, but you can calculate it. It's a Gaussian, the output of the neural network is also modeled as a Gaussian, so the KL divergences between them become really easy to calculate, and then you have a loss function.

So now: how do we actually train this thing in practice? Because it turned out in the earlier papers that the actual variational lower bound isn't too effective on its own; I think that's what they're saying.
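For reference, that tractable posterior has a well-known closed form once x0 is given; this is the standard DDPM identity, shown here as a sketch that reuses the `betas`/`alphas`/`alpha_bars` arrays from the earlier snippet.

```python
import numpy as np

def true_posterior(x0, x_t, t, betas, alphas, alpha_bars):
    """q(x_{t-1} | x_t, x_0): the Gaussian the network's output is matched against.
    Returns its mean and (scalar) variance."""
    ab_t = alpha_bars[t]
    ab_prev = alpha_bars[t - 1] if t > 0 else 1.0
    # the posterior mean is a fixed, schedule-dependent mix of x_0 and x_t
    coef_x0 = np.sqrt(ab_prev) * betas[t] / (1.0 - ab_t)
    coef_xt = np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - ab_t)
    mean = coef_x0 * x0 + coef_xt * x_t
    # this variance is the "beta-tilde" lower bound discussed further below
    var = (1.0 - ab_prev) / (1.0 - ab_t) * betas[t]
    return mean, var
```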
What the authors here do is go back to the previous paper, which found that modeling the noise is the best way to do it. So the question is: what exactly should the neural network predict? It could do many things. It could predict the mean parameter we've talked about: I give you an image, and you tell me the mean of the distribution of images it probably came from, plus the covariance. But you could also just model the noise, which is a different parameterization but equivalent from an information perspective: given this noisy image, telling me where it came from and telling me which noise was added are the same thing. However, the previous authors noted that modeling the noise is better purely from a neural network training standpoint. In fact, they made a point of defining a new loss function that simply says: the noise output by the network should approximately match the actual noise that was added, which is known, because it was sampled in the forward noising process. And that works better.

However, the authors here note that this tells you nothing about the covariance, only about the mean. The old authors found that you don't actually need to learn the covariance: just fix it, and that works as well as or better than learning it. The authors here say: maybe that was a missed opportunity.

That was a little bit of a rant, so to repeat: we define a noising process and then learn a neural network that reverts it. To do so, we train the network to reverse each of the little steps; given a noised image, it outputs a distribution, modeled as a normal distribution, over where that noisy image probably came from. The previous authors said there are two things to model, the mean and the covariance. First, if we just fix the covariance matrix to the noise scale we know we applied, that's good enough; empirically, we don't need to model the true covariance matrix. Second, when we model the mean, we don't model it directly; we model the noise instead, which is equivalent, but works better from a neural network standpoint.
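A sketch of that simple objective, assuming a hypothetical noise-prediction network `eps_model(x_t, t)`: pick a random step, noise the clean sample up to it, and penalize the squared error between the true and the predicted noise.

```python
import numpy as np

def simple_loss(eps_model, x0, alpha_bars, rng):
    """L_simple: the network must recover the exact noise sample that was added."""
    t = int(rng.integers(len(alpha_bars)))
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    eps_pred = eps_model(x_t, t)              # hypothetical noise-prediction network
    return np.mean((eps - eps_pred) ** 2)     # plain mean squared error
```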
The authors now say: maybe you've missed an opportunity by not learning that covariance matrix, because it's one thing to say "this is probably a Gaussian", and another thing to say "this is probably a Gaussian with a completely isotropic covariance matrix". You would expect the second one to be easier, but it's also more wrong. So that's what they go about here: can we improve the log likelihood? The first topic they go into is learning the covariance matrix.

What they point out is that if you fix the covariance, you have to know what scale to fix it at, which depends on the noise you applied in the forward process. You applied some noise, and you can calculate what the covariance of the reverse step should be at that particular time step; in fact, you can derive an upper and a lower bound. If beta is the noise schedule, then the upper bound is the actual beta_t you used at that step, and the lower bound is an accumulated noise scale up to that step; the reverse-step variance has to lie between the two. The previous authors said you can use either one, it's actually fine, it doesn't matter, and these authors explain why: if you plot the ratio between the upper and the lower bound as a function of the diffusion step, then especially for a large number of steps it clamps to one almost immediately, so there is nearly no difference between the bounds.

These authors go further, though. Neural networks are kind of bad at regression: if you ask a network to predict an arbitrary number on the number line, but the only correct answers lie in a tiny sliver some three orders of magnitude down, it's going to have trouble hitting those values. So they reparameterize how the covariance matrix is predicted: the network simply outputs an interpolation parameter v for each dimension, and the predicted variance interpolates between the lower and the upper bound. That turns out to be quite a good decision, because v lies between 0 and 1, and neural networks are pretty good at predicting numbers between 0 and 1; the whole scale issue is taken care of by the two valid bounds. So that's one thing: they're able to learn the covariance matrix now, and that boosts them a bit.
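A sketch of that reparameterization; the detail that the interpolation happens in log space is taken from the improved-DDPM paper rather than the video, so treat it as an assumption of this snippet.

```python
import numpy as np

def interpolated_variance(v, t, betas, alpha_bars):
    """Turn a per-dimension v in [0, 1] into a variance between the lower bound
    beta-tilde_t and the upper bound beta_t (interpolated in log space)."""
    ab_prev = alpha_bars[t - 1] if t > 0 else 1.0
    beta_tilde = (1.0 - ab_prev) / (1.0 - alpha_bars[t]) * betas[t]   # lower bound
    return np.exp(v * np.log(betas[t]) + (1.0 - v) * np.log(beta_tilde))
```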
Then they also look at the noising process itself, and this is something I find a bit shady. They say: look at the top row, which is the usual linear noise schedule; it just gets too noisy a bit too quickly, like from some point on out there's only noise left. Could we not schedule this such that the drop-off is more gradual? That might help a lot. And so they come up with a new schedule that does this. Now, this seems very subjective, right? This is you as a human looking at it. But they do experiments where they measure the FID, the Fréchet inception distance, as they leave away a fraction of the reverse diffusion process. They wonder how many of these steps they can just leave out and still end up with something fine: can we skip the first step of the reverse process and start here? Can we skip five steps and start there? It turns out that with the linear schedule you can skip a lot more steps, which is an indication that those steps weren't really helpful, and it would probably be better to define a schedule where all of the steps are helpful. That's what they come up with. You can see the linear schedule dropping pretty fast, while their new cosine schedule falls much more slowly. These are practical considerations arrived at by looking at the process, evaluating a bit empirically, and saying: can't we do something better? And this something better, they admit themselves, is by no means the best you can do, it's just something better; ultimately you would want every step in the noising process to contribute equally to the quality of the entire system. But that's what they do.
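For reference, a sketch of that cosine schedule, following the formula from the improved-DDPM paper (the offset s and the clipping of beta are the paper's choices, not something spelled out in the video):

```python
import numpy as np

def cosine_schedule(T, s=0.008):
    """alpha_bar follows a squared cosine, so the signal decays gradually
    instead of collapsing into pure noise early on."""
    steps = np.arange(T + 1) / T
    f = np.cos((steps + s) / (1.0 + s) * np.pi / 2.0) ** 2
    alpha_bars = f / f[0]
    betas = 1.0 - alpha_bars[1:] / alpha_bars[:-1]
    return alpha_bars[1:], np.clip(betas, 0.0, 0.999)   # clip to avoid degenerate steps
```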
The last thing is very similar: they say we reduce the gradient noise. They now have two loss functions: the original simple objective, where you just take the L2 distance between the noise and the predicted noise (no variational lower bound, no KL divergence, who needs that crap, right?), and the variational objective. The simple objective doesn't contain the covariance, though, so to learn it they would like to go back to the variational objective. That's the blue curve here (I know you can't really read it), and it's pretty noisy. If they mix the variational objective together with the simple objective, they get a better loss curve: that's the orange hybrid loss, but it's still noisy. Their new loss, which they call the resampled loss, is again the variational lower bound, just sampled in a different way; that's the green line, which is much smoother and also lower.

And that comes from the following observation. If you look at the noising process and ask where the majority of the loss contribution actually comes from, they notice it comes from the first steps; there is a real imbalance in how much the individual steps contribute to the overall loss. Remember what you do to train these networks: you start with a clean image, then sample some step, say you're going to train the t equals 205 network, so you add noise 205 times (you can do this in one go, by the way), you add noise once more, and now you have your training sample, and you can calculate the distribution you want to match by also including the clean image, as we discussed. That's one training sample; for the next one you select a different t. Now, if the first few steps are much more important than, say, the step at t equals 5000, and you just sample t uniformly, you will end up with a correct, probably unbiased, estimate of your loss, but it will be super duper noisy. So they say: can't we focus a bit on where the loss actually occurs?

They devise an importance sampling scheme. They notice that the different terms of the variational bound have greatly different magnitudes; figure two plots the loss term magnitude against the step in the noising process, and you can see, on a log scale, that the first few steps have a much larger loss than the last ones. So they sample the steps where the loss is high more often. This is not specific to this particular technique: you can use it anywhere where different samples have very different contributions to the loss. Mind you, that will give you a biased estimate of your loss, but it may decrease your variance by quite a bit, and that's what they end up with.
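A sketch of such a loss-aware sampler. The exponential moving average and the compensation weight are illustrative choices of this snippet; the paper's exact scheme (which keeps a history of recent losses per step and samples proportionally to the square root of the mean squared loss) differs in detail.

```python
import numpy as np

class LossAwareSampler:
    """Sample training timesteps in proportion to a running estimate of their
    loss magnitude instead of uniformly."""
    def __init__(self, T):
        self.T = T
        self.loss_ema = np.ones(T)           # start out uniform

    def sample(self, rng):
        p = self.loss_ema / self.loss_ema.sum()
        t = int(rng.choice(self.T, p=p))
        return t, 1.0 / (self.T * p[t])      # weight that can compensate the skew

    def update(self, t, loss_value):
        self.loss_ema[t] = 0.9 * self.loss_ema[t] + 0.1 * loss_value
```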
In this paper, they end up with something that's competitive with, but not better than, the best GANs, though it already looks pretty good. They also investigate model size, but I don't want to go into that. I actually want to jump quickly into the next paper, where they improve their models again to make them actually better than GANs. The improvements there are, I want to say, more boring, because it's architecture improvements: we're going through the same process we went through with GANs, where it's a tweak here, a tweak there, a better architecture, a kind of better loss function or regularizer, whatnot. And it's quite conceivable that these models come up to the level of GANs. Whether they are actually better than GANs, I think, remains to be seen, because it also depends quite a bit on how much compute you put into this. You also have to see that when you want to draw a sample, you have to input noise and then do this denoising process a bunch of times, thousands of times, until you end up with the data sample. They do have a kind of trick, going into another model class, where you only need, they say, 25 of these steps, so that's pretty cool, but still, that's 25 forward passes through the neural network that predicts the denoising, whereas with a GAN you sample the latent once, ship it through the generator, and you end up with a sample. I'm actually wondering if GANs could take some sort of lesson from here; we'll look at that after we look at what I think is the really cool improvement in the new paper: classifier guidance.

They say: if you use GANs for conditional image synthesis, so if you use a GAN to create images of a particular class, conditioned on a class label, the GANs make heavy use of those class labels. So, they say, it makes sense to explore different ways to condition diffusion models on class labels. We already incorporate class information into normalization layers, so you have different normalization layers for different classes; here we explore a different approach, exploiting a classifier to improve a diffusion generator. Two previous works show one way to achieve this, wherein a pre-trained diffusion model can be conditioned using the gradients of a classifier. In particular, we can train a classifier on noisy images and then use its gradients to guide the diffusion sampling process towards an arbitrary class label. In this section we first review two ways of deriving conditional sampling processes; we then describe how we use such classifiers in practice to improve sample quality.

So the idea here is: if you have class labels together with your data set, you can train a classifier not only on the data set but also on noisy samples of that data set, and then use that classifier to guide the process. Instead of simply reverting the noise process, if I tell you what class the image is from, can you do a better job? In our original example: if I give you a noisy picture of a house and I tell you, by the way, this is a house, you're much more able to tell me what the original image was, or alternatively what noise I added to the image.

If you write this as a distribution, as we did so far, you want to predict the previous image from the next image and the class label, and you can pull this apart into two components: the old component, how likely is the previous image given the noisy version, times what they call, I think, the prior: how likely is that class label given the candidate previous image. This is just probability manipulation. You want an image that makes sense given the noisy image, but you also want an image that has a high probability of being of the class you want to produce, and of course the second factor is exactly a classifier, which you can use.

So the question is: what are these two things, and can we derive an easy form to work with them? The first thing we've already seen: we model it as a normal distribution, and if we know its mean and covariance, its log density is simply the familiar quadratic form of the normal distribution; the normalization constant, which is additive in log space, is a constant, so if you're just interested in minimizing a function you might as well leave it away. The second part is a bit more tricky, but you can do a Taylor expansion of that log probability around the predicted mean: the first-order Taylor expansion, in its vector form if you've never seen it, is f(x0) plus the derivative of f with respect to x, evaluated at x0, times (x minus x0).
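Written out, the derivation sketched here arrives at the following (this is the standard classifier-guidance calculation, with $\mu, \Sigma$ the diffusion model's predicted mean and covariance and $p_\phi$ the classifier):

$$
\log\big[\,p_\theta(x_{t-1}\mid x_t)\,p_\phi(y\mid x_{t-1})\,\big] \;\approx\; \log p(z) + C,
\qquad z \sim \mathcal{N}(\mu + \Sigma g,\ \Sigma),
\qquad g = \nabla_{x}\log p_\phi(y\mid x)\big|_{x=\mu}.
$$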
It's the same thing as in the scalar case, okay? So what you end up with, if you calculate this through, is that the product of the two things in log space is again the log density of a Gaussian, and therefore the distribution you're looking at works as follows. Somewhere there is your noisy image. You ask your first model: where does this likely come from? And that model tells you: it probably came from here, with a covariance like so. The other model then simply shifts that estimate: but if you move it a bit over here, it becomes much more likely under the classifier. So you have the predicted mean, which says where the image probably came from given that it was noised, and to it you add the covariance times g, where g is the gradient of the classifier's log probability with respect to the input. This says: if I shift the mean a little in this direction, the result becomes much more likely under the class, and given that you've already told me the class label, I'm going to choose to shift over there. That's what the classifier buys you. Without the classifier, I think the image came from here; now that I know the class, I can refine my belief of where it came from, and that's how you become more accurate, given that the assumptions of the Taylor expansion hold. A small sketch of this guided step follows below.

Now, here we're really getting close to the land of GANs. As soon as you have something like this, where you take the gradient of a classifier model with respect to its input and use that gradient to guide your search, that's very close to a GAN, and very close to models that do score matching. I'm actually very bad at explaining score matching, but it is exactly this sort of thing: you use the gradient of the log probability in order to model a distribution. And I wonder what happens if you don't have one GAN that goes from noise to data, but little GANs, discriminators at the intermediate steps, that do their discrimination there. You can generate training data pretty easily by doing this reverse noising process, and then have little discriminators that distinguish between true data that was actually noised and data that you just produced. And by "data that you just produced", well, I don't know, I'm just coming up with this right now, this is not a prepared thing: you could probably use your existing model to somehow forward propagate, then noise whatever comes out, and then you have generated data and true data in all their noisy fashions, and you can have a discriminator at each level. I'm not sure, maybe it works, maybe it won't; I'm just saying maybe there is a way to get the best out of both worlds, because if this weren't a class label but a label of true versus fake data, this would very much look like a GAN. And maybe we don't need all of this distribution-schmistribution; I guess it's a forever war between the people who do things formally correctly and the people who just throw out everything that doesn't contribute to the end quality.
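Here is a minimal sketch of that guided step, assuming a hypothetical `diffusion_model(x_t, t)` returning the reverse-step mean and variance, and a hypothetical `classifier_log_grad(x_t, t, y)` returning the gradient of the classifier's log probability of class y with respect to the input; the scale s is the gradient-scaling hyperparameter discussed just below.

```python
import numpy as np

def guided_reverse_step(x_t, t, y, diffusion_model, classifier_log_grad, s, rng):
    """One classifier-guided reverse step: shift the predicted mean by
    (variance * gradient of log p(y | x_t)), scaled by s."""
    mean, var = diffusion_model(x_t, t)        # unconditional reverse-step Gaussian
    g = classifier_log_grad(x_t, t, y)         # grad_x log p(y | x_t)
    guided_mean = mean + s * var * g           # shift towards the desired class
    if t == 0:
        return guided_mean
    return guided_mean + np.sqrt(var) * rng.standard_normal(x_t.shape)
```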
In any case, they also go into DDIM models, a different class of models, very close to this one, and they say: to this end we use a score-based conditioning trick adapted from other papers, which leverages the connection between diffusion models and score matching. There is an actual formal connection, and you can use it to, kind of like what I said right now, get rid of the noise in the system and directly predict the predecessors, and that still ends up being a formally correct thing. With this trick, I think, they don't have to sample as much: they only use 25 reverse steps instead of 4000, which is important.

The last thing they discover is something like a hyperparameter: if you scale the classifier gradients, you have to observe that the classifier gradients are in log scale, so technically, the way multiplication behaves with a log, the scale factor becomes an exponent, and that simply means the distribution is going to be more or less peaky depending on that hyperparameter. They notice that if you make it more peaky, the sample quality becomes higher. I think an issue that variational autoencoders had for a long time is that they were sort of blurry, and this is a little bit how that might be fixed here: you make the classifier gradients more peaky, which gives you a stronger signal from them, and apparently that results in better samples.

So, to the results. Whenever they say ADM, that's their model, and they have several variations: the "-G" suffix is the classifier-guided version, and whenever they say 25 steps, that's the version with the trick connecting to score matching, without the noise. You can see in the FID scores that they do beat BigGAN on these tasks. Maybe the GANs will one-up them by taking some tricks from here, or maybe it's quite possible that these models will go beyond GANs, because we've poured a lot of effort into GANs and not so much yet into these denoising models. And the samples look pretty good: the left column is the GAN, the middle (it's a bit small) is their model, and the right are images from the actual data set. I have actually gone through this entire ImageNet class, I've looked at every single image to try to find the generated ones, and I can tell you the generated images are not in the training or the validation data set. They're pretty close, but still, I always fear a little bit that at some point a model is just going to learn to copy the data.

All right, that was it. I know this video is already too long; if you're still here, thank you. I hope you've enjoyed this, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 8.040000000000001, "text": " Hello! These are generated images from a new model, actually a new class of model." }, { "start": 8.040000000000001, "end": 13.44, "text": " It's been around for a while, but for the first time this new class of model has" }, { "start": 13.44, "end": 20.52, "text": " been pushed to the point where the images they produce not only look really" }, { "start": 20.52, "end": 26.6, "text": " nice and look like something you've come to expect from the latest and" }, { "start": 26.6, "end": 33.88, "text": " greatest GAN models, but also they are better in the standard metrics we use to" }, { "start": 33.88, "end": 41.2, "text": " evaluate GANs, specifically here in the FID, the fresher inception distance." }, { "start": 41.2, "end": 45.8, "text": " The paper we're going to talk about today is called" }, { "start": 45.8, "end": 51.28, "text": " Diffusion Models Beat GANs on Image Synthesis. It's by Prof. Dariwall and" }, { "start": 51.28, "end": 56.68, "text": " Alex Nicole of OpenAI. Already in the title they're pulling no punches," }, { "start": 56.68, "end": 65.28, "text": " just be like this beats GANs. In this paper they're mainly talking about" }, { "start": 65.28, "end": 72.12, "text": " improvements to this new class of models which they call diffusion models." }, { "start": 72.12, "end": 76.9, "text": " I would like to dive a bit more into what diffusion models are instead of" }, { "start": 76.9, "end": 81.12, "text": " just telling you what the improvements of this paper are, because I think most" }, { "start": 81.12, "end": 87.36, "text": " people haven't come in contact with these types of models yet. They thoroughly" }, { "start": 87.36, "end": 92.44, "text": " reference another paper which is called Improved Denoising Diffusion" }, { "start": 92.44, "end": 99.92, "text": " Probabilistic Models by themselves. In this paper they more" }, { "start": 99.92, "end": 106.36000000000001, "text": " develop these new models than in the other paper. The paper here, as you" }, { "start": 106.36, "end": 112.03999999999999, "text": " can see, is just three months younger than the other paper. This is" }, { "start": 112.03999999999999, "end": 116.56, "text": " really close. I think this paper is more insightful into what these models are." }, { "start": 116.56, "end": 121.42, "text": " That being said, by the name Improved right here you can also see" }, { "start": 121.42, "end": 127.36, "text": " that this is not the seminal paper of these types of models. If you're" }, { "start": 127.36, "end": 133.04, "text": " interested in that you have to go back even further. However we're going to" }, { "start": 133.04, "end": 137.6, "text": " look at this and we're going to look at the new paper and see what are all the" }, { "start": 137.6, "end": 142.2, "text": " things that lead to this new class of models being better than GANs." }, { "start": 142.2, "end": 147.6, "text": " Specifically we're going to talk about DDPMs, Denoising Diffusion" }, { "start": 147.6, "end": 152.84, "text": " Probabilistic Models. They're a bit like a variational auto" }, { "start": 152.84, "end": 160.07999999999998, "text": " encoder, a little bit. We'll go through that." }, { "start": 160.08, "end": 167.04000000000002, "text": " If you feel that this was helpful please do share it out. 
It's been" }, { "start": 167.04000000000002, "end": 172.84, "text": " a pleasure bringing this to a lot of people and if you do it will just be" }, { "start": 172.84, "end": 178.36, "text": " more people. We'll have more fun. They say that Denoising Diffusion" }, { "start": 178.36, "end": 184.4, "text": " Probabilistic Models, DDPMs, are a class of generative models which have recently" }, { "start": 184.4, "end": 190.28, "text": " been shown to produce excellent samples. We show that with a few simple" }, { "start": 190.28, "end": 194.88, "text": " modifications DDPMs can also achieve competitive log likelihoods while" }, { "start": 194.88, "end": 200.04000000000002, "text": " maintaining high sample quality. In this paper they take these models, these" }, { "start": 200.04000000000002, "end": 208.48000000000002, "text": " DDPM models, and they say look we can push those models to push their" }, { "start": 208.48000000000002, "end": 212.8, "text": " log likelihood. There are a number of metrics that generative models track." }, { "start": 212.8, "end": 217.92000000000002, "text": " It's not as easy as the validation set accuracy in a classifier. Log" }, { "start": 217.92000000000002, "end": 224.56, "text": " likelihood is one of the metrics that these models track. Here they say" }, { "start": 224.56, "end": 228.96, "text": " well we can get competitive log likelihood while maintaining high sample" }, { "start": 228.96, "end": 234.36, "text": " quality, which is a nice way of saying we don't beat GANs yet. In the next" }, { "start": 234.36, "end": 238.08, "text": " paper, the one I showed you before, they actually do beat GANs on" }, { "start": 238.08, "end": 243.32000000000002, "text": " the standard metrics and also the samples look quite impressive." }, { "start": 243.32000000000002, "end": 248.8, "text": " The DDPMs have been around before but they go into a quick overview right" }, { "start": 248.8, "end": 257.40000000000003, "text": " here, which is what I think is quite appropriate for us to dive in." }, { "start": 257.40000000000003, "end": 263.96000000000004, "text": " The philosophy here or the whole purpose behind this is they" }, { "start": 263.96, "end": 272.32, "text": " say let's imagine I have an image of my house right here." }, { "start": 272.32, "end": 277.91999999999996, "text": " I have an image of a house and I define a process, what they call a forward" }, { "start": 277.91999999999996, "end": 284.2, "text": " noising process. This forward noising process takes the image and it just" }, { "start": 284.2, "end": 290.91999999999996, "text": " adds a little bit of noise to it, like epsilon noise that's sampled from some" }, { "start": 290.92, "end": 295.92, "text": " standard distribution like a Gaussian. You just sample a bit of noise and" }, { "start": 295.92, "end": 301.04, "text": " you just add it to that image. You have the same house but there'll be a" }, { "start": 301.04, "end": 308.24, "text": " bit of noise on it. Then you do it again. You sample another bit of" }, { "start": 308.24, "end": 316.28000000000003, "text": " noise and you do it again. As you do this" }, { "start": 316.28, "end": 322.84, "text": " over many steps, and here they actually notice that the previous" }, { "start": 322.84, "end": 326.91999999999996, "text": " authors were using a thousand steps and if they just increase that to four" }, { "start": 326.91999999999996, "end": 331.71999999999997, "text": " thousand steps, the log likelihoods go better. 
In any case you do" }, { "start": 331.71999999999997, "end": 339.79999999999995, "text": " this for many steps, thousands of steps in this first instance. You do this, what" }, { "start": 339.79999999999995, "end": 345.55999999999995, "text": " are you gonna end up with? The argument here is that if you do this for" }, { "start": 345.56, "end": 352.84, "text": " so many times, for so long, over so many steps, you're going to end up with random" }, { "start": 352.84, "end": 360.92, "text": " noise itself. This is ish according to some kind of normal" }, { "start": 360.92, "end": 366.44, "text": " distribution. You just assume. You can actually prove this that if you" }, { "start": 366.44, "end": 371.32, "text": " do enough steps, like if you do infinitely many steps, it goes actually" }, { "start": 371.32, "end": 378.59999999999997, "text": " towards just noise. Whenever you're done with this, there is no more" }, { "start": 378.59999999999997, "end": 383.52, "text": " information about the original image than actually sampling from this" }, { "start": 383.52, "end": 387.44, "text": " distribution right here. You have successfully defined a process that" }, { "start": 387.44, "end": 392, "text": " takes you from the image space. This here is from the data space that" }, { "start": 392, "end": 396.88, "text": " takes you from the data space to a known distribution, which is the normal" }, { "start": 396.88, "end": 407.04, "text": " distribution. Now here is the logic. If we could invert this, if we" }, { "start": 407.04, "end": 412.64, "text": " just somehow could invert this mapping, if we could have a process that knows" }, { "start": 412.64, "end": 419.28, "text": " if I give you an image with some noise, can you tell me what image that came" }, { "start": 419.28, "end": 429.03999999999996, "text": " from? Is that doable? It's not, it's not, it's thinkable. If I give you" }, { "start": 429.03999999999996, "end": 435, "text": " this image with some specks of noise on it and I ask you, could you" }, { "start": 435, "end": 441.52, "text": " please give me, I tell you, I'm the oracle, I tell you, look I've taken" }, { "start": 441.52, "end": 447.88, "text": " some image that already had a bit of noise on it, but I've added more." }, { "start": 447.88, "end": 454.76, "text": " I've taken an image, I've added some noise. What was the original image that I, I" }, { "start": 454.76, "end": 458.68, "text": " don't tell you what the noise is, right? I just tell you the noise comes from" }, { "start": 458.68, "end": 462.28, "text": " whatever, a normal distribution, I've added it. What was the original image?" }, { "start": 462.28, "end": 469.92, "text": " Now you looking at this image, you'll see, you know, this could be a house. So not" }, { "start": 469.92, "end": 474.12, "text": " quite sure, but you know, this might be something like this might be the" }, { "start": 474.12, "end": 478.8, "text": " original image and this here I'm not really sure about if this is noise. So" }, { "start": 478.8, "end": 483.96, "text": " you're gonna sort of revert that process a little bit, right? Knowing that this is" }, { "start": 483.96, "end": 490.76, "text": " how the image came to be, you as a human, if I told you, you could" }, { "start": 490.76, "end": 496.8, "text": " approximately reverse that process. That of course requires you to know something" }, { "start": 496.8, "end": 502.28000000000003, "text": " about these images, right? 
That like, it requires you to know what a house looks" }, { "start": 502.28, "end": 508.08, "text": " like and when you see something like this, that well, you know, probably because" }, { "start": 508.08, "end": 511.71999999999997, "text": " I don't tell you which ones are the noise and which ones aren't. So that's" }, { "start": 511.71999999999997, "end": 515.64, "text": " the trick, right? If I just told you, well, all the orange stuff is noise, right? But" }, { "start": 515.64, "end": 521.68, "text": " you just see, you just see this all in mono color, but you know kind of, okay, so" }, { "start": 521.68, "end": 525.76, "text": " this here looks like it's from the image itself, but then this here is just kind" }, { "start": 525.76, "end": 531.1999999999999, "text": " of a spec and that just kind of, might just be noise, maybe not, right? But then" }, { "start": 531.2, "end": 536, "text": " this here, I'm pretty sure it's just noise and not part of the original image." }, { "start": 536, "end": 542.6, "text": " So you could do that and the question is, can we learn a function that does this" }, { "start": 542.6, "end": 548.8000000000001, "text": " reverse process? If we can do so, right? If we can learn a function, function of" }, { "start": 548.8000000000001, "end": 553.24, "text": " course that's going to be some kind of neural network-ish thing. We can learn a" }, { "start": 553.24, "end": 558.24, "text": " function where I give you an image with noise and I tell you, by the way, so this" }, { "start": 558.24, "end": 567.32, "text": " is maybe time step zero, this is t equals zero, t equals one, t equals two, and so" }, { "start": 567.32, "end": 574.12, "text": " on. Well, you can't see that. If I tell you, okay, here is an image, this happened" }, { "start": 574.12, "end": 584.08, "text": " at t equals 50, can you give me the t equals 49 image that this came from?" }, { "start": 584.08, "end": 591.9200000000001, "text": " Alright, and this is the whole principle. We're going to, we can generate training" }, { "start": 591.9200000000001, "end": 597.1600000000001, "text": " data for this neural network very easily because we just take data and we run" }, { "start": 597.1600000000001, "end": 602.44, "text": " them through the noise process forward, right? Then we have plenty of training" }, { "start": 602.44, "end": 608.48, "text": " data for every step of this pipeline, right? In fact, we don't train a, we don't" }, { "start": 608.48, "end": 613.72, "text": " train a different phi function for every step. As you can see, the phi function" }, { "start": 613.72, "end": 620.2, "text": " simply takes the time or can take the time as an input. It's certainly possible" }, { "start": 620.2, "end": 627.64, "text": " otherwise, or it's possible to not tell it at all, right? Then you, it has no clue." }, { "start": 627.64, "end": 634.08, "text": " So yeah, if you do this, you can generate training data and then the" }, { "start": 634.08, "end": 639.64, "text": " idea is you can just run this process in reverse and arrive at the original" }, { "start": 639.64, "end": 646, "text": " sample. And even more, because this here is actually the normal distribution, you" }, { "start": 646, "end": 650.56, "text": " can now sample random noise from that normal distribution, right? 
You can feed" }, { "start": 650.56, "end": 656, "text": " it to this process and this process, who has learned to map the data distribution" }, { "start": 656, "end": 660.3199999999999, "text": " to the normal distribution and can reverse that process, will give you some" }, { "start": 660.3199999999999, "end": 666.64, "text": " sort of data distribution sample for your input that you sampled from the" }, { "start": 666.64, "end": 673.96, "text": " normal distribution. All right, this is the idea and it's quite tricky to get" }, { "start": 673.96, "end": 682.16, "text": " this to work, as you can imagine, but let's not forget that GANs also have been" }, { "start": 682.16, "end": 686.56, "text": " quite tricky to get to work. It's just maybe there has been a bit more work" }, { "start": 686.56, "end": 694.12, "text": " going into GANs. All right, so formally this goes as follows. We define this" }, { "start": 694.12, "end": 699.88, "text": " forward-noising process, right? We sample this from the data distribution. We" }, { "start": 699.88, "end": 705.76, "text": " sample x0 from the data distribution. We define this forward-noising process Q," }, { "start": 705.76, "end": 717.72, "text": " which produces x1 through xt, so capital T as the end here. And we, by" }, { "start": 717.72, "end": 724.52, "text": " adding Gaussian noise at time t with some variance, okay, so you can have, you" }, { "start": 724.52, "end": 733.1600000000001, "text": " can have, it's zero mean Gaussian noise, I believe, maybe. Yeah, it's, well, you" }, { "start": 733.1600000000001, "end": 739.96, "text": " scale, but you define this variance schedule right here. That's also your" }, { "start": 739.96, "end": 746.32, "text": " choice, right? You choose what kind of noise you want to add, but ultimately" }, { "start": 746.32, "end": 754.48, "text": " you take, ultimately, the distribution of the things you produce via that" }, { "start": 754.48, "end": 761.08, "text": " noising process, given that you start at the data sample x0, you simply define as" }, { "start": 761.08, "end": 767, "text": " this product of distributions. So you start with, this just means you start" }, { "start": 767, "end": 773.5200000000001, "text": " with x0 and then you go from x0 to x1 and then you go from x1 to x2 and so on," }, { "start": 773.52, "end": 781.3199999999999, "text": " okay? And each of these steps is an independent application of noise. As you" }, { "start": 781.3199999999999, "end": 785.88, "text": " can see here, this is one of those steps. So what you're saying is that the" }, { "start": 785.88, "end": 790.1999999999999, "text": " distribution of the next sample right here is going to be a normal" }, { "start": 790.1999999999999, "end": 794.76, "text": " distribution that's going to be centered at this thing right here and its" }, { "start": 794.76, "end": 800.76, "text": " variance is this thing right here. So you can see that the assumption here is you" }, { "start": 800.76, "end": 807.92, "text": " use noise that has a diagonal covariance matrix, okay? This is, I guess it's" }, { "start": 807.92, "end": 814.4, "text": " reasonable. It certainly makes computing things easier, right? 
The other thing here" }, { "start": 814.4, "end": 819.88, "text": " is that you can see this Gaussian is centered at the last sample but down" }, { "start": 819.88, "end": 826.24, "text": " scaled by this factor right here and I think, like, this is a choice again by the" }, { "start": 826.24, "end": 831.24, "text": " modelers but I think this is also due to the fact that makes computation easier" }, { "start": 831.24, "end": 838.32, "text": " because I guess if you don't have this then, you know, you start somewhere and" }, { "start": 838.32, "end": 842.5600000000001, "text": " you add noise and you sample something, you add noise, you sample something, maybe" }, { "start": 842.5600000000001, "end": 848.48, "text": " this would grow indefinitely and you sort of need to rescale things such that" }, { "start": 848.48, "end": 854.28, "text": " you can make this statement right here. Given sufficiently large T and a well" }, { "start": 854.28, "end": 861.68, "text": " behaved schedule of beta, the latent XT, so the very last step, is nearly an" }, { "start": 861.68, "end": 868.8, "text": " isotropic Gaussian distribution, okay? That's the entire point. So if you do it" }, { "start": 868.8, "end": 874.26, "text": " like this, which is a choice, but if you do it like this then at the end if you" }, { "start": 874.26, "end": 880.16, "text": " do enough steps, infinitely many steps, then you end up at an isotropic Gaussian" }, { "start": 880.16, "end": 887, "text": " distribution. Thus, if we know the exact reverse distribution, we can sample from" }, { "start": 887, "end": 891.92, "text": " the Gaussian and run the process in reverse to get a sample from the data" }, { "start": 891.92, "end": 897.68, "text": " distribution. Then they say, however, since the reverse distribution depends" }, { "start": 897.68, "end": 901.88, "text": " on the entire data distribution, we approximate it using a neural network as" }, { "start": 901.88, "end": 910.52, "text": " follows. So this statement can be a bit weird in the first instance. This" }, { "start": 910.52, "end": 917.56, "text": " depends on the entire data distribution, right? Because it's very close to" }, { "start": 917.56, "end": 922.04, "text": " this thing right here and this thing right here depends on nothing, right?" }, { "start": 922.04, "end": 926.44, "text": " This you just define, you just say I'm gonna add random noise to something and" }, { "start": 926.44, "end": 931.64, "text": " that's my next distribution. It only depends on the input image right here." }, { "start": 931.64, "end": 937.48, "text": " The way to see it, that this depends, the reverse depends on the entire data" }, { "start": 937.48, "end": 942.48, "text": " distribution, is exactly what I said before. If I give you the, like if I" }, { "start": 942.48, "end": 946.6, "text": " give you this picture, I'm not gonna actually tell you right where the noise" }, { "start": 946.6, "end": 956.76, "text": " is. So I give you this picture and I tell you this is a drawing from a" }, { "start": 956.76, "end": 962.76, "text": " very small child, because that's my drawing level, and I've just added a" }, { "start": 962.76, "end": 969.4399999999999, "text": " bunch of noise to it. Could you tell me what the original drawing was? This" }, { "start": 969.4399999999999, "end": 975.76, "text": " is very different from me saying here is a drawing from a small child, please add" }, { "start": 975.76, "end": 982.8, "text": " noise to it. That's easy, I just did this, right? 
I was just called, I just did it." }, { "start": 982.8, "end": 988.28, "text": " But if I tell you what was the original image, you have to take into account the" }, { "start": 988.28, "end": 994.92, "text": " entire world. You know about how small children draw, what kind of" }, { "start": 994.92, "end": 999.56, "text": " motives they usually draw and so on, and that's how you are able to come up by" }, { "start": 999.56, "end": 1005.8399999999999, "text": " saying well it was probably something like this." }, { "start": 1005.8399999999999, "end": 1012.3199999999999, "text": " This needs your knowledge of the entire data distribution. That's" }, { "start": 1012.32, "end": 1019.08, "text": " why they say it right here. So they say well we can't, we like, we can't just" }, { "start": 1019.08, "end": 1022.2, "text": " have the entire data distribution otherwise, you know, we wouldn't even" }, { "start": 1022.2, "end": 1027.16, "text": " have the problem in the first place. So what we can do is we can approximate one" }, { "start": 1027.16, "end": 1032.44, "text": " of these steps using a neural network. So we have a neural network that" }, { "start": 1032.44, "end": 1038.96, "text": " takes as an input, as I said, it takes as an input the noised version of the image" }, { "start": 1038.96, "end": 1047.64, "text": " and it gives you as an output, it's a bit like this is, it gives you, I told you" }, { "start": 1047.64, "end": 1052.32, "text": " give me the image that this came from, in this case what they want is give me a" }, { "start": 1052.32, "end": 1058.6200000000001, "text": " distribution over images where that could have come from, right? And again" }, { "start": 1058.6200000000001, "end": 1063.66, "text": " they say this, they model this as a Gaussian right here and the neural" }, { "start": 1063.66, "end": 1069.3200000000002, "text": " network will produce the mean and the covariance matrix given the image. So the" }, { "start": 1069.3200000000002, "end": 1073.8400000000001, "text": " neural network is supposed to look at the image and decide okay what's the" }, { "start": 1073.8400000000001, "end": 1080.4, "text": " Gaussian distribution of images where that probably came from? And this is a" }, { "start": 1080.4, "end": 1086.8000000000002, "text": " strong assumption, right? The fact for example that you know this is a Gaussian" }, { "start": 1086.8000000000002, "end": 1090.64, "text": " distribution, like this is adequately modeled as a Gaussian" }, { "start": 1090.64, "end": 1095.72, "text": " distribution, it's a strong assumption that you can only make because you make" }, { "start": 1095.72, "end": 1100.3600000000001, "text": " these very small steps. Because nothing, I mean nothing stops you from actually" }, { "start": 1100.3600000000001, "end": 1105.64, "text": " doing this in one step, right? Nothing stops you from taking, you know," }, { "start": 1105.64, "end": 1110.72, "text": " the data distribution just adding like a wild bunch of noise because then you're" }, { "start": 1110.72, "end": 1116.88, "text": " also approximately normally distributed. Maybe not, I don't know, you maybe" }, { "start": 1116.88, "end": 1124.44, "text": " end up at some other distribution. But I mean certainly if you, like you can do" }, { "start": 1124.44, "end": 1129.6000000000001, "text": " the reverse, also you can train a neural network to do it in one step. In fact" }, { "start": 1129.6000000000001, "end": 1134.3200000000002, "text": " that's a little bit what GANs do, right? 
But if you want to do this in this sort" }, { "start": 1134.3200000000002, "end": 1138.6000000000001, "text": " of manner where you model all the distributions, notice this is a very" }, { "start": 1138.6000000000001, "end": 1143.2, "text": " different language than GANs. Here it's all kind of in the" }, { "start": 1143.2, "end": 1148.24, "text": " distributional semantics. If you want to do this and you want to say well I" }, { "start": 1148.24, "end": 1153.04, "text": " modeled the reverse as a normal distribution, this is just not true if" }, { "start": 1153.04, "end": 1159.16, "text": " you took large enough steps, right? But if you take very tiny steps you can" }, { "start": 1159.16, "end": 1163.8, "text": " adequately make sort of the argument that the normal distribution is kind of" }, { "start": 1163.8, "end": 1172.88, "text": " okay for this to work. And of course it makes life easier after that. So they" }, { "start": 1172.88, "end": 1177.44, "text": " need the tiny steps because in the tiny steps they're able to sort of, the" }, { "start": 1177.44, "end": 1185.2800000000002, "text": " modeling assumptions hold, also I guess it works better. And then you can" }, { "start": 1185.2800000000002, "end": 1191.24, "text": " define the loss function right here. So they say the combination of QMP is a" }, { "start": 1191.24, "end": 1195.92, "text": " variational autoencoder and we can write the variational lower bound as follows." }, { "start": 1195.92, "end": 1202, "text": " So I'm not sure if I have ever gone over variational autoencoders, but" }, { "start": 1202, "end": 1208.84, "text": " they, it's very much, it's very similar to here. What you can do is you can" }, { "start": 1208.84, "end": 1215, "text": " define this variational lower bound which essentially boils down to saying I" }, { "start": 1215, "end": 1222.28, "text": " would like the distribution that I want a model and the thing I actually output" }, { "start": 1222.28, "end": 1228.4, "text": " to be close together, right? So this is the reverse process that my neural" }, { "start": 1228.4, "end": 1233.3600000000001, "text": " network does and this is the thing that I actually would like to model, okay? And" }, { "start": 1233.3600000000001, "end": 1238.3600000000001, "text": " we're going to, this is the thing that needs the entire data distribution. We're" }, { "start": 1238.3600000000001, "end": 1247.88, "text": " going to look at that in just a second. So yeah there are some other terms here" }, { "start": 1247.88, "end": 1253.4, "text": " but you can get around that and the last term right here, like the" }, { "start": 1253.4, "end": 1260.3600000000001, "text": " last term, you just assume that's kind of a Gaussian. So really it comes down to" }, { "start": 1260.3600000000001, "end": 1266.8000000000002, "text": " does the distribution that your neural network outputs match what you, what it" }, { "start": 1266.8000000000002, "end": 1274.6000000000001, "text": " actually is? And here you can see the sort of proxy for well this needs the" }, { "start": 1274.6000000000001, "end": 1281.52, "text": " whole data distribution is the following. If I tell you that this is" }, { "start": 1281.52, "end": 1288.08, "text": " the process by which I derive the data, right? And I ask you what is the reverse" }, { "start": 1288.08, "end": 1293.4, "text": " distribution of one of these steps? You can't possibly compute that, right?" 
}, { "start": 1293.4, "end": 1297.12, "text": " Accurately because you don't know the data distribution. However what you can" }, { "start": 1297.12, "end": 1304.48, "text": " do is for this particular sample you can compute it if I tell you that you know" }, { "start": 1304.48, "end": 1309.92, "text": " this is the process by which I derived it and also if I actually give you x0" }, { "start": 1309.92, "end": 1318.6000000000001, "text": " right here. If I give you that then you can do, you can do, you can calculate and" }, { "start": 1318.6000000000001, "end": 1323.28, "text": " that's what they show here, you can actually calculate this distribution. You" }, { "start": 1323.28, "end": 1329.3600000000001, "text": " can say what is the actual distribution I'd like to model and that's going to be" }, { "start": 1329.3600000000001, "end": 1336.5600000000002, "text": " a normal distribution but what just, it makes sense right? In this case like if" }, { "start": 1336.56, "end": 1344.96, "text": " this is, if this is the forward process and I give you x0, if you already know" }, { "start": 1344.96, "end": 1351.76, "text": " the result you can calculate the distribution. So that's what they derive" }, { "start": 1351.76, "end": 1361.08, "text": " right here and that is dependent of course on your noise scale which is like" }, { "start": 1361.08, "end": 1367.9199999999998, "text": " all over the place in this, in these formulas but you can calculate that and" }, { "start": 1367.9199999999998, "end": 1373.52, "text": " this is a Gaussian and they model the output of the neural network as a" }, { "start": 1373.52, "end": 1378.6, "text": " Gaussian so these KL divergences just they become really easy to calculate and" }, { "start": 1378.6, "end": 1385.36, "text": " then you have a loss function. So now they say how do we, how do we actually" }, { "start": 1385.36, "end": 1392.08, "text": " train this thing in practice? Because it turned out in the last papers that this" }, { "start": 1392.08, "end": 1403.36, "text": " thing right here, the actual variational lower bound isn't too effective. I think" }, { "start": 1403.36, "end": 1414.8, "text": " that's what they're saying. So yeah what the, what the authors here say is they go" }, { "start": 1414.8, "end": 1422.72, "text": " back to previous paper and say the previous paper found that modeling the" }, { "start": 1422.72, "end": 1430.8, "text": " noise here is the best way to do it. So the question is how exactly, what exactly" }, { "start": 1430.8, "end": 1435.68, "text": " does the neural network do? Like the neural network could do many things, it" }, { "start": 1435.68, "end": 1442.6399999999999, "text": " it could actually just predict this mean parameter which we've talked about right?" }, { "start": 1442.64, "end": 1447.44, "text": " The neural network could simply, I give you an image and you tell me what's the" }, { "start": 1447.44, "end": 1452.1200000000001, "text": " most probable image where it comes from or sort of the mean and also give me the" }, { "start": 1452.1200000000001, "end": 1457.24, "text": " covariance but also what you could do is you could just model the" }, { "start": 1457.24, "end": 1463.8400000000001, "text": " noise, that's a different thing. You could model the noise and that's" }, { "start": 1463.8400000000001, "end": 1469, "text": " equivalent from a computational perspective right or from a conceptual" }, { "start": 1469, "end": 1476.32, "text": " perspective. 
If I give you again this image you can either tell me where it" }, { "start": 1476.32, "end": 1481.4, "text": " came from or equivalently you can tell me what's the noise that I've added" }, { "start": 1481.4, "end": 1487.52, "text": " right and you tell me what this, you've probably added this noise. It's a, this is" }, { "start": 1487.52, "end": 1493.64, "text": " a both the same from an information perspective, however the authors" }, { "start": 1493.64, "end": 1500.24, "text": " previously noted that the modeling the noise is better just from a neural" }, { "start": 1500.24, "end": 1505.88, "text": " network training standpoint. In fact they make a point here to define a new loss" }, { "start": 1505.88, "end": 1513.72, "text": " function that simply estimates, that simply says well the noise that I output" }, { "start": 1513.72, "end": 1518.6000000000001, "text": " from the neural network should approximately match the actual noise" }, { "start": 1518.6000000000001, "end": 1523.24, "text": " that I've added right because I know what noise I sampled in my forward" }, { "start": 1523.24, "end": 1532.24, "text": " noise process and that works better. However these authors here say okay this" }, { "start": 1532.24, "end": 1537.04, "text": " does not tell you anything about the covariance because that only tells you" }, { "start": 1537.04, "end": 1541.72, "text": " something about the mean and the old authors found that we don't actually" }, { "start": 1541.72, "end": 1546.92, "text": " need the covariance we just we fix it and that works a lot better or equally" }, { "start": 1546.92, "end": 1552.96, "text": " well to actually learning it and the authors here say maybe they've you know" }, { "start": 1552.96, "end": 1557.8, "text": " missed something maybe they've missed the opportunity to learn the covariance" }, { "start": 1557.8, "end": 1565.8, "text": " so this was a little bit of a rant but to repeat we define this noising process" }, { "start": 1565.8, "end": 1569.48, "text": " and then we try to learn a neural network that reverts that noising" }, { "start": 1569.48, "end": 1576.46, "text": " process. 
In order to do so we train a neural network to reverse each of the" }, { "start": 1576.46, "end": 1582.44, "text": " little steps that we do right here and the way we do it is the neural network" }, { "start": 1582.44, "end": 1589.92, "text": " will predict the distribution of the predecessor so given a noised image the" }, { "start": 1589.92, "end": 1593.92, "text": " neural network will output the distribution modeled as a normal" }, { "start": 1593.92, "end": 1602.3200000000002, "text": " distribution over where that noisy image probably came from and it the previous" }, { "start": 1602.3200000000002, "end": 1607.1200000000001, "text": " authors have said well there are two things to model there is the mean and" }, { "start": 1607.12, "end": 1613.3999999999999, "text": " the covariance and we find first of all if we just fix the covariance that's" }, { "start": 1613.3999999999999, "end": 1619.08, "text": " enough right we fix the covariance matrix to the noise scale that we know" }, { "start": 1619.08, "end": 1625.9599999999998, "text": " we applied and good enough we don't actually need to model the the true" }, { "start": 1625.9599999999998, "end": 1631.4199999999998, "text": " covariance matrix just from an empirical standpoint and then when we model the" }, { "start": 1631.42, "end": 1637.96, "text": " mean we don't model the mean directly we actually model the noise and which is" }, { "start": 1637.96, "end": 1642.28, "text": " equivalent but it works better from a neural network standpoint. The authors" }, { "start": 1642.28, "end": 1647.24, "text": " now say maybe you've missed an opportunity learning that covariance" }, { "start": 1647.24, "end": 1652.8400000000001, "text": " matrix because it's one thing to say this is probably a Gaussian right it's" }, { "start": 1652.8400000000001, "end": 1656.76, "text": " another thing to say this is probably a Gaussian with completely isotropic" }, { "start": 1656.76, "end": 1663.08, "text": " covariance matrix and you would expect the second one is easier but also it's" }, { "start": 1663.08, "end": 1674.56, "text": " more wrong so that's what we're that's what we go about here so they say can we" }, { "start": 1674.56, "end": 1679.2, "text": " improve the log likelihood right here and the first topic they go into is" }, { "start": 1679.2, "end": 1687.72, "text": " learning this covariance matrix and what they discover I want to say is that if" }, { "start": 1687.72, "end": 1693.3600000000001, "text": " you fix the covariance matrix right here you have to know what scale to fix it at" }, { "start": 1693.3600000000001, "end": 1699.32, "text": " which is dependent on the the noise that you applied in the forward process right" }, { "start": 1699.32, "end": 1706.92, "text": " so you applied some noise and you can calculate what the average covariance of" }, { "start": 1706.92, "end": 1712.76, "text": " the reverse step should be at that particular time step and in fact you can" }, { "start": 1712.76, "end": 1718.0800000000002, "text": " derive an upper and a lower bound so if beta here is their schedule for noise" }, { "start": 1718.0800000000002, "end": 1724.44, "text": " then these are the two bounds so this this is the actual beta you used in that" }, { "start": 1724.44, "end": 1730.04, "text": " step the noise scale and this is sort of an accumulated noise scale up until that" }, { "start": 1730.04, "end": 1737.6, "text": " step these are the two bounds in which in which the noise can be right the" }, { "start": 1737.6, "end": 1743.6, 
"text": " noise level or the covariance and the previous author said well we can use" }, { "start": 1743.6, "end": 1747.28, "text": " either one of them it's actually fine it doesn't matter and these authors say" }, { "start": 1747.28, "end": 1756.04, "text": " okay look at this right here this is the ratio between the two so the ratio" }, { "start": 1756.04, "end": 1761.56, "text": " between the upper and the lower bound as a function of the diffusion step now" }, { "start": 1761.56, "end": 1765.76, "text": " especially if you go to a large amount of step size you see this immediately" }, { "start": 1765.76, "end": 1771.52, "text": " clamps at one right so there is like almost no difference between the upper" }, { "start": 1771.52, "end": 1777.36, "text": " and the lower bound which is probably why the other authors estimated it" }, { "start": 1777.36, "end": 1781.96, "text": " didn't matter now these authors go further and they say well if you just" }, { "start": 1781.96, "end": 1787.72, "text": " try to learn like a number neural networks are kind of bad at regression" }, { "start": 1787.72, "end": 1793.4, "text": " right so if you tell neural network learn me any number on the number" }, { "start": 1793.4, "end": 1798.88, "text": " string whatever you call that in English if there me any number like here's one" }, { "start": 1798.88, "end": 1807.48, "text": " here's two here's three like here's 500 any number whatsoever but however the" }, { "start": 1807.48, "end": 1818.1200000000001, "text": " only actual right answers are going to be a tiny tiny sliver between like like" }, { "start": 1818.1200000000001, "end": 1824.56, "text": " the ratio between them is going to be a tiny tiny sliver somewhere in in like" }, { "start": 1824.56, "end": 1828.6, "text": " three orders of magnitude down the neural networks going to have trouble" }, { "start": 1828.6, "end": 1836.68, "text": " hitting these correctly so the way they do it is they reparameterize the the" }, { "start": 1836.68, "end": 1841.64, "text": " how they predict the covariance matrix in fact what they come up with is they" }, { "start": 1841.64, "end": 1848.3200000000002, "text": " simply learn an interpolation parameter V right here to interpolate between the" }, { "start": 1848.3200000000002, "end": 1854.28, "text": " upper and the lower bound and that turns out to be quite a good decision" }, { "start": 1854.28, "end": 1860.24, "text": " because now the neural network can predict a number V for each dimension" }, { "start": 1860.24, "end": 1866.16, "text": " which is between 0 and 1 right and that's neural networks can predict" }, { "start": 1866.16, "end": 1870.44, "text": " stuff between 0 and 1 they're pretty good at it and the whole rest the whole" }, { "start": 1870.44, "end": 1877.0400000000002, "text": " scale issue will be taken care of by interpolating between the two valid" }, { "start": 1877.0400000000002, "end": 1883.3000000000002, "text": " bounds so this this is one thing they're able to learn the covariance matrix now" }, { "start": 1883.3000000000002, "end": 1891.66, "text": " and that boosts them a bit and then they also look at the noising process right" }, { "start": 1891.66, "end": 1895.8400000000001, "text": " here and they say well if you look at this and this is something I find a" }, { "start": 1895.84, "end": 1901.4399999999998, "text": " bit shady they say if you look at this and this top row is what is currently" }, { "start": 1901.4399999999998, "end": 1908.32, "text": " done with the noise schedule 
that is usually defined it's just kind of noisy" }, { "start": 1908.32, "end": 1915.8, "text": " a bit too much right like from here on out there's just noise right could we" }, { "start": 1915.8, "end": 1921.1999999999998, "text": " not schedule this a little bit such that the drop-off is more gradual that might" }, { "start": 1921.2, "end": 1925.8400000000001, "text": " help a lot and so they come up with a new schedule that does this now this" }, { "start": 1925.8400000000001, "end": 1930.16, "text": " seems very subjective right you know this is you as a human looking at it" }, { "start": 1930.16, "end": 1937.32, "text": " they they do some experiments here where they say we measure the inception" }, { "start": 1937.32, "end": 1942.72, "text": " distance as we just leave away a fraction of the reverse diffusion" }, { "start": 1942.72, "end": 1947.24, "text": " process so they wonder how many of these steps can we just leave away and still" }, { "start": 1947.24, "end": 1952.08, "text": " end up with something that's fine like can we can we just skip the first step" }, { "start": 1952.08, "end": 1957.08, "text": " of the reverse process and start here can we skip five steps and start here it" }, { "start": 1957.08, "end": 1962.8, "text": " turns out in the linear schedule you're just able to skip a lot more steps which" }, { "start": 1962.8, "end": 1967.8, "text": " gives you an indication that those steps weren't really helpful and it'd probably" }, { "start": 1967.8, "end": 1975.36, "text": " be better that you define a schedule where all of the steps are helpful so" }, { "start": 1975.36, "end": 1979.3999999999999, "text": " that's what they what they come up with you can see the linear schedule right" }, { "start": 1979.3999999999999, "end": 1985.28, "text": " here is dumping pretty fast like it goes down pretty fast while their new cosine" }, { "start": 1985.28, "end": 1990.7199999999998, "text": " schedule is much much slower like this these are now actual practical" }, { "start": 1990.7199999999998, "end": 1995.08, "text": " considerations that are just done by kind of looking evaluating a bit" }, { "start": 1995.08, "end": 2000.24, "text": " empirically and then going and saying well can't we do something better now" }, { "start": 2000.24, "end": 2004.04, "text": " this something better they admit that themselves isn't by no means the best" }, { "start": 2004.04, "end": 2007.76, "text": " thing you can do it's just something better like ultimately you would want" }, { "start": 2007.76, "end": 2012.76, "text": " the same step in the noising process probably to contribute equally to the" }, { "start": 2012.76, "end": 2017.6399999999999, "text": " quality of the entire system you know but that's what they do the last thing" }, { "start": 2017.6399999999999, "end": 2022.84, "text": " is very similar they say we reduce the gradient noise so they observe if they" }, { "start": 2022.84, "end": 2028.12, "text": " use they have now two loss functions right they have the loss the original" }, { "start": 2028.12, "end": 2032.12, "text": " loss function where you simply look at the L2 distance between the noise and" }, { "start": 2032.12, "end": 2036.7199999999998, "text": " the predicted noise like no variational lower bound yada KL divergence and who" }, { "start": 2036.7199999999998, "end": 2042.28, "text": " needs that crap right that's what they call the simple objective now the simple" }, { "start": 2042.28, "end": 2048.44, "text": " objective doesn't contain the covariance so what 
they would like to do is they" }, { "start": 2048.44, "end": 2051.7999999999997, "text": " would like to go back to the variational objective and that's the blue line here" }, { "start": 2051.7999999999997, "end": 2055.88, "text": " I know you can't really read it but that's the blue line here and you can see" }, { "start": 2055.88, "end": 2061.68, "text": " only is it pretty noisy it's also well okay I guess it's like it's pretty noisy" }, { "start": 2061.68, "end": 2068.6, "text": " the loss curve if they mix the variational objective together with the" }, { "start": 2068.6, "end": 2073.16, "text": " simple objective they get a better loss curve you see that right here this this" }, { "start": 2073.16, "end": 2080.52, "text": " is this hybrid loss it's the orange loss it it's still noisy their new loss which" }, { "start": 2080.52, "end": 2087.2799999999997, "text": " they call resampled loss that's again the variational lower bound loss but in" }, { "start": 2087.28, "end": 2092.7200000000003, "text": " a sampled in a different way is the green line which is much much smoother" }, { "start": 2092.7200000000003, "end": 2105.6800000000003, "text": " and also lower and that comes from this fact right here if you look at the sorry" }, { "start": 2105.6800000000003, "end": 2114.8, "text": " not from this right here is it okay so they what they say is if you look at the" }, { "start": 2114.8, "end": 2121.04, "text": " process like this noise process here and you look at where the actual loss comes" }, { "start": 2121.04, "end": 2127.76, "text": " from where does the the majority of the loss contribution come from they notice" }, { "start": 2127.76, "end": 2132.84, "text": " that the majority of the loss contribution comes from the first step so" }, { "start": 2132.84, "end": 2137.04, "text": " there is a real imbalance of how much these individual steps in the noising" }, { "start": 2137.04, "end": 2145.68, "text": " process differ from like contribute to the overall loss and say well if you know" }, { "start": 2145.68, "end": 2150.52, "text": " if we just add all of them up equally right because what do you need to do to" }, { "start": 2150.52, "end": 2156.18, "text": " train these neural networks you need to start off with a clean image then sample" }, { "start": 2156.18, "end": 2163.56, "text": " some step like some step you say okay I'm gonna now train the t equals 205" }, { "start": 2163.56, "end": 2169.24, "text": " network right so you add noise 205 times you can do this in one go by the way but" }, { "start": 2169.24, "end": 2175.56, "text": " essentially you add noise 205 times you get here right you add noise once more" }, { "start": 2175.56, "end": 2181.7999999999997, "text": " to here and now you have your if your training sample right here you can" }, { "start": 2181.7999999999997, "end": 2188.04, "text": " calculate the the distribution you want to match by also including this one as" }, { "start": 2188.04, "end": 2193.7599999999998, "text": " we discussed and you good right so this is one training sample the next training" }, { "start": 2193.7599999999998, "end": 2197.84, "text": " sample is you select a different t and you produce another training sample" }, { "start": 2197.84, "end": 2205.52, "text": " it's one now if the first few steps are much more important than you know the" }, { "start": 2205.52, "end": 2212.84, "text": " step at t equals 5000 and you're just sampling t uniform you will end up with" }, { "start": 2212.84, "end": 2219, "text": " you know a correct on 
probably unbiased estimate of your laws oh sorry of your" }, { "start": 2219, "end": 2224.08, "text": " loss however it will be super duper noisy so they're saying can't we just" }, { "start": 2224.08, "end": 2232.82, "text": " focus a bit on where a loss actually occurs so they devise a scheme to do" }, { "start": 2232.82, "end": 2240.4, "text": " important sampling notice that the different terms of of the variational" }, { "start": 2240.4, "end": 2244.44, "text": " around have greatly different magnitudes and figure two where's which" }, { "start": 2244.44, "end": 2251.56, "text": " one's figure or figure two figure two oh there we go that was the plot so here is" }, { "start": 2251.56, "end": 2257.34, "text": " the step in the noising process and here is the loss term magnitude and you can" }, { "start": 2257.34, "end": 2263.08, "text": " see that the the first few steps they have a really lot like a larger loss" }, { "start": 2263.08, "end": 2269.96, "text": " this is a log scale right on the left then the last ones so they devise an" }, { "start": 2269.96, "end": 2275.92, "text": " important sampling scheme to counter that this is not specific right to this" }, { "start": 2275.92, "end": 2280.8, "text": " particular technique you can use this anywhere where different samples have" }, { "start": 2280.8, "end": 2286.36, "text": " very different contributions to loss you can choose to focus on the ones where" }, { "start": 2286.36, "end": 2292.8, "text": " the loss is high and I will not give you that will give you a biased estimate of" }, { "start": 2292.8, "end": 2299.92, "text": " your loss however it might decrease your variance by quite a bit and that" }, { "start": 2299.92, "end": 2306.08, "text": "'s what they they end up with they in this paper they end up with something" }, { "start": 2306.08, "end": 2313.36, "text": " that's competitive but not better than the best GANs however it already it" }, { "start": 2313.36, "end": 2319.56, "text": " already looks pretty good they also investigate model size but I don't want" }, { "start": 2319.56, "end": 2327.76, "text": " to go into this I actually want to jump quickly into this next paper where they" }, { "start": 2327.76, "end": 2334.36, "text": " improve again on their models to make them actually better than GANs and the" }, { "start": 2334.36, "end": 2339.6400000000003, "text": " improvements right here are much more I don't know I want to say boring because" }, { "start": 2339.6400000000003, "end": 2344.5600000000004, "text": " it's like okay architecture improvements so we're going through the same process" }, { "start": 2344.5600000000004, "end": 2349.6400000000003, "text": " that we've gone through with GANs where it's like well here's a tweak here's a" }, { "start": 2349.6400000000003, "end": 2353.32, "text": " tweak here is an architecture a better architecture here is kind of a better" }, { "start": 2353.32, "end": 2358.8, "text": " loss function regularizer whatnot and it's quite conceivable right that this" }, { "start": 2358.8, "end": 2364.6000000000004, "text": " these models here come to the level of GANs now whether they are actually you" }, { "start": 2364.6000000000004, "end": 2370.8, "text": " know better than GANs like I think this is remains to be seen because you know" }, { "start": 2370.8, "end": 2375.36, "text": " it also depends quite a bit on how much compute you put into this and then you" }, { "start": 2375.36, "end": 2381.4, "text": " also have to see that here you have to it went when you want to sample 
a sample" }, { "start": 2381.4, "end": 2386.6, "text": " you have to input the sample and then do this denoising process a bunch of times" }, { "start": 2386.6, "end": 2391.88, "text": " like thousands of times until you end up with the data sample now they do have a" }, { "start": 2391.88, "end": 2401.7200000000003, "text": " kind of a trick going into another model class where you only have to have they" }, { "start": 2401.7200000000003, "end": 2407.64, "text": " say 25 of these steps so it's pretty cool but still like that's 25 forward" }, { "start": 2407.64, "end": 2413.96, "text": " passes through this neural network that predicts the denoising where again is" }, { "start": 2413.96, "end": 2420.56, "text": " just like you sample once the latent you you ship it through the GAN and you end" }, { "start": 2420.56, "end": 2429.44, "text": " up with a you end up with a sample and I'm actually wondering if GANs could" }, { "start": 2429.44, "end": 2434.2, "text": " take some sort of lesson from here we'll we'll look at this after we look at this" }, { "start": 2434.2, "end": 2439.3599999999997, "text": " right here which is what I think is the kind of cool improvement that they do in" }, { "start": 2439.3599999999997, "end": 2446.72, "text": " the new paper which is where they say classifier guidance so they say if you" }, { "start": 2446.72, "end": 2454.56, "text": " use GANs for conditional image synthesis so if you if you conditionally if you" }, { "start": 2454.56, "end": 2459.16, "text": " use a GAN to create images that are of a particular class condition on a class" }, { "start": 2459.16, "end": 2466.6, "text": " label they make heavy use of class label okay so they say it makes sense to" }, { "start": 2466.6, "end": 2471.2, "text": " explore different ways to condition diffusion models on class labels we" }, { "start": 2471.2, "end": 2474.72, "text": " already incorporate class information into normalization layers so you have" }, { "start": 2474.72, "end": 2479.04, "text": " different normalization layers for different classes here we explore a" }, { "start": 2479.04, "end": 2483.04, "text": " different approach exploiting a classifier to improve a diffusion" }, { "start": 2483.04, "end": 2491.32, "text": " generator as they say the kind of a previous work two previous works show" }, { "start": 2491.32, "end": 2494.48, "text": " one way to achieve this we're in a pre-trained diffusion model can be" }, { "start": 2494.48, "end": 2498.2, "text": " conditioned using the gradients of a classifier in particular we can train a" }, { "start": 2498.2, "end": 2503.72, "text": " classifier and on noisy images and then use the gradients to guide the diffusion" }, { "start": 2503.72, "end": 2509.12, "text": " sampling process towards an arbitrary class label in this section we first" }, { "start": 2509.12, "end": 2513.48, "text": " review two ways of driving conditional sampling processes we then describe how" }, { "start": 2513.48, "end": 2520.64, "text": " we use such classifiers in practice to improve sample quality so the idea here" }, { "start": 2520.64, "end": 2524.7599999999998, "text": " is that if you have class labels together with your data set you can train" }, { "start": 2524.7599999999998, "end": 2530.96, "text": " a classifier on not only the data set but also noisy samples of that data set" }, { "start": 2530.96, "end": 2537.48, "text": " right and then you can use that classifier in order to guide the process" }, { "start": 2537.48, "end": 2545.56, "text": " so this is what 
we're dealing with right here they say well instead of simply" }, { "start": 2545.56, "end": 2550.68, "text": " reverting the process which would be this part right here like instead of" }, { "start": 2550.68, "end": 2557.96, "text": " simply reverting the noise process if I tell you what label that image is from" }, { "start": 2557.96, "end": 2562.72, "text": " like what class that image is from can you do a better job right so if I in" }, { "start": 2562.72, "end": 2567.52, "text": " our original example if I tell you if I give you a noisy picture of a house and" }, { "start": 2567.52, "end": 2572.9599999999996, "text": " I tell you about by the way this is a house you're much more able to tell me" }, { "start": 2572.9599999999996, "end": 2577.2, "text": " what the original image was or alternatively what the noise is that" }, { "start": 2577.2, "end": 2586.2799999999997, "text": " I've added to the image so if you write this as a as a distribution as we did so" }, { "start": 2586.2799999999997, "end": 2591.8399999999997, "text": " far you can say if you want you want to predict the previous image from the next" }, { "start": 2591.84, "end": 2597.92, "text": " image and the class label and you can pull this apart into these two" }, { "start": 2597.92, "end": 2606.36, "text": " components which is the old component like how likely is the previous image" }, { "start": 2606.36, "end": 2611.36, "text": " given the noisy version times the what they I think what they call this this" }, { "start": 2611.36, "end": 2617.7200000000003, "text": " the prior right yeah they call this prior you can see that if you just like" }, { "start": 2617.72, "end": 2625.7599999999998, "text": " kind of ship this out it just it just swaps well I don't know how to explain" }, { "start": 2625.7599999999998, "end": 2637.52, "text": " this properly but I mean this is this is just probability manipulation so if you" }, { "start": 2637.52, "end": 2644.2, "text": " have a probability product between whatever we had before and how likely is" }, { "start": 2644.2, "end": 2651.16, "text": " that is the class label under this so this is sort of you want an image that" }, { "start": 2651.16, "end": 2656.96, "text": " makes sense given the noisy image but you also want you want an image that's" }, { "start": 2656.96, "end": 2662, "text": " that Mac that is a high probability of being of the class that you want to" }, { "start": 2662, "end": 2669.04, "text": " produce and of course this is exactly a classifier on the right which you can" }, { "start": 2669.04, "end": 2678.48, "text": " use so since we it since our model of so the question is what are these two" }, { "start": 2678.48, "end": 2685.24, "text": " things and can we sort of derive an easy form how we can work with this so the" }, { "start": 2685.24, "end": 2689.48, "text": " first thing we've already seen and we model this as a normal distribution and" }, { "start": 2689.48, "end": 2697.56, "text": " if we know the mean and covariance of that thing the the log is simply this" }, { "start": 2697.56, "end": 2701.2799999999997, "text": " form so you should recognize this as being just the form of the normal" }, { "start": 2701.2799999999997, "end": 2705.08, "text": " distribution this here is the normalization constant if you work in" }, { "start": 2705.08, "end": 2710.7999999999997, "text": " log space that is added and it is a constant so if you're just interesting" }, { "start": 2710.7999999999997, "end": 2718.08, "text": " in minimizing a function you might as well 
leave it away the second part is a" }, { "start": 2718.08, "end": 2723.24, "text": " bit more tricky but you can say well this distribution right here I can do a" }, { "start": 2723.24, "end": 2730.08, "text": " Taylor expansion around the predicted mean right then the first order Taylor" }, { "start": 2730.08, "end": 2735.8399999999997, "text": " expansion which becomes this so this is it's just kind of a vector form of the" }, { "start": 2735.8399999999997, "end": 2744.3999999999996, "text": " Taylor expansion if you've never seen it so this is this is f of x 0 right here" }, { "start": 2744.4, "end": 2754.08, "text": " and this is the this is f of x 1 this is the derivative at the point x 0 how do" }, { "start": 2754.08, "end": 2761.6, "text": " you say it is the derivative according to X at X 0 times X minus X 0 right here" }, { "start": 2761.6, "end": 2770.1600000000003, "text": " it's the same thing okay so what you end up with is this form right here and if" }, { "start": 2770.16, "end": 2776.68, "text": " you calculate this through what you end up with is the entire distributions of" }, { "start": 2776.68, "end": 2785.2799999999997, "text": " the product of the two things in log space looks like this and therefore" }, { "start": 2786, "end": 2792.48, "text": " therefore the distribution that you're looking at is a distribution you're" }, { "start": 2792.48, "end": 2799.6, "text": " saying here somewhere is the image that is the noisy version you ask your two" }, { "start": 2799.6, "end": 2805.04, "text": " models you ask your first model well what's what's an image or where does" }, { "start": 2805.04, "end": 2809.12, "text": " this likely come from and that model tells you well it's probably from here" }, { "start": 2809.12, "end": 2816.7599999999998, "text": " and the the covariance is like so like I think that's where it it came from when" }, { "start": 2816.7599999999998, "end": 2824.2, "text": " it was noised and the other model simply shifts that towards it says well but if" }, { "start": 2824.2, "end": 2830.3999999999996, "text": " you shift it a bit like this and it actually comes from here then it's much" }, { "start": 2830.3999999999996, "end": 2837.24, "text": " more likely under the classifier that's what you have you have the predicted" }, { "start": 2837.24, "end": 2842.68, "text": " mean right here that says where does it probably come from given that I've had" }, { "start": 2842.68, "end": 2850.3599999999997, "text": " a noise and this part right here says so the G is the gradient of the classifier" }, { "start": 2850.36, "end": 2854.32, "text": " with respect to the input this says well but if I shift it like this a little" }, { "start": 2854.32, "end": 2857.7200000000003, "text": " bit it becomes much more likely under the class and given that you've already" }, { "start": 2857.7200000000003, "end": 2862.92, "text": " told me what the class label is right I'm just gonna choose I'm I'm gonna" }, { "start": 2862.92, "end": 2867.56, "text": " choose to shift over here so this is what the classifier buys you the" }, { "start": 2867.56, "end": 2872.1200000000003, "text": " classifier will tell you without the classifier I think it comes from here" }, { "start": 2872.1200000000003, "end": 2877.6800000000003, "text": " but now that I know it comes from this class I can refine my belief of where it" }, { "start": 2877.68, "end": 2881.6, "text": " came from and that's how you become more accurate like if this is really the" }, { "start": 2881.6, "end": 2887.2, "text": " 
class it came from you're gonna be more accurate right given that the" }, { "start": 2887.2, "end": 2893.7599999999998, "text": " assumptions of the Taylor expansion hold now here as you can see we're really" }, { "start": 2893.7599999999998, "end": 2900.52, "text": " kind of getting close to the land of the GANs okay now if as soon as you have" }, { "start": 2900.52, "end": 2907.2799999999997, "text": " something like this where you derive the gradient of a model right of a" }, { "start": 2907.28, "end": 2912.44, "text": " classifier model with respect to its input and you use that gradient to sort" }, { "start": 2912.44, "end": 2918.1400000000003, "text": " of guide your search that is it's it's very close to a GAN it's very close to" }, { "start": 2918.1400000000003, "end": 2923.6400000000003, "text": " models that do score matching actually this very bad at explaining score" }, { "start": 2923.6400000000003, "end": 2927.96, "text": " matching but it is exactly sort of this you use the gradient of the log" }, { "start": 2927.96, "end": 2936.28, "text": " probability in order to model a distribution and I wonder if GANs can't" }, { "start": 2936.28, "end": 2942.32, "text": " sort of take a bit of a lesson from here like I wonder what happens if you don't" }, { "start": 2942.32, "end": 2948.32, "text": " have a GAN that just goes from noise to data but again like like here you have" }, { "start": 2948.32, "end": 2955.1600000000003, "text": " like little GANs or the discriminators at intermediate steps right that do" }, { "start": 2955.1600000000003, "end": 2959.92, "text": " their discrimination you can generate training data pretty easily again by" }, { "start": 2959.92, "end": 2965.8, "text": " doing this reverse noising process you can generate training data and you just" }, { "start": 2965.8, "end": 2970, "text": " have like little discriminators that discriminate between true data that was" }, { "start": 2970, "end": 2975.1200000000003, "text": " actually noised and data that you just produced and by you just produced I" }, { "start": 2975.1200000000003, "end": 2979, "text": " don't know what I'm just coming up with this right now this is not a prepared" }, { "start": 2979, "end": 2984.76, "text": " thing by the way you could probably use your existing model to somehow" }, { "start": 2984.76, "end": 2991.4, "text": " forward propagate and then you noise whatever that is right and then you have" }, { "start": 2991.4, "end": 2996.12, "text": " generated data and true data in all their noisy fashion and you can do" }, { "start": 2996.12, "end": 3005.56, "text": " discriminator at each level I'm not sure maybe it works maybe it won't I'm just" }, { "start": 3005.56, "end": 3009.36, "text": " saying maybe there is a way to get sort of the best out of both worlds because" }, { "start": 3009.36, "end": 3015.8, "text": " this this here like if this weren't a class label but kind of a label of true" }, { "start": 3015.8, "end": 3022.5600000000004, "text": " and fake data this would very much look like again and maybe we don't need all" }, { "start": 3022.5600000000004, "end": 3029.6000000000004, "text": " of this distribution distribution Schmistribution I guess it's a forever" }, { "start": 3029.6000000000004, "end": 3037.1600000000003, "text": " war between people who do formally correct their things and people who just" }, { "start": 3037.1600000000003, "end": 3042.8, "text": " throw everything out that doesn't contribute to the end quality in any" }, { "start": 3042.8, "end": 
3049.6400000000003, "text": " case they also go into this DDIM models which are different class of models very" }, { "start": 3049.6400000000003, "end": 3056, "text": " close here but they do they they say to this and we use a score based" }, { "start": 3056, "end": 3060.2400000000002, "text": " conditioning trick adapted from these other papers which can leverage is the" }, { "start": 3060.2400000000002, "end": 3063.48, "text": " connection between diffusion models and score matching so there is an actual" }, { "start": 3063.48, "end": 3068.92, "text": " formal connection and you can use that to kind of actually what I said right now" }, { "start": 3068.92, "end": 3078.88, "text": " get rid of the noise in the system and directly sort of directly predict the" }, { "start": 3078.88, "end": 3085.8, "text": " predecessors and that will still end up at a formally correct thing and that" }, { "start": 3085.8, "end": 3091.08, "text": " allows you I think with this trick they don't have to sample as much or they" }, { "start": 3091.08, "end": 3100.16, "text": " they only use 25 reverse steps instead of 4000 which is important right and the" }, { "start": 3100.16, "end": 3103.68, "text": " last thing they discover if they discover like a hyper parameter like if" }, { "start": 3103.68, "end": 3109.72, "text": " you scale classifier gradients like this you have to observe that the classifier" }, { "start": 3109.72, "end": 3115.08, "text": " gradients are in log scale so technically the way multiplication" }, { "start": 3115.08, "end": 3120.16, "text": " behaves with a log is it becomes an exponent right here and that simply" }, { "start": 3120.16, "end": 3125.3199999999997, "text": " means that this distribution also you know the normalization that distribution" }, { "start": 3125.3199999999997, "end": 3130.2799999999997, "text": " is going to be more or less peaky and define depending on that hyper parameter" }, { "start": 3130.2799999999997, "end": 3136, "text": " and they notice that you can make it sort of more peaky and then the sample" }, { "start": 3136, "end": 3142.2, "text": " quality becomes higher right I think they a issue that the variational auto" }, { "start": 3142.2, "end": 3146.24, "text": " encoders had for a long time is that they were sort of blurry and so on and" }, { "start": 3146.24, "end": 3152.3599999999997, "text": " you know this is this is a little bit I think how that might be fixed though" }, { "start": 3152.3599999999997, "end": 3155.9199999999996, "text": " this is you know the classifier gradients so you want to make the" }, { "start": 3155.9199999999996, "end": 3160.4799999999996, "text": " classifier gradients more peaky which means that you get a stronger signal for" }, { "start": 3160.4799999999996, "end": 3170, "text": " from them which apparently results in better things so here all the results" }, { "start": 3170, "end": 3175.72, "text": " you see whenever they say 80m that's their model they have several" }, { "start": 3175.72, "end": 3181.9599999999996, "text": " variations namely this dash G here is the classifier guided version and" }, { "start": 3181.9599999999996, "end": 3187.56, "text": " whenever they say 25 steps that is the version without the noise with the trick" }, { "start": 3187.56, "end": 3196.3999999999996, "text": " connection to score matching yep so you can see in sort of the FID scores they" }, { "start": 3196.4, "end": 3206.44, "text": " do beat a big GAN on these tasks yeah maybe they you know the GANs will one up" }, { "start": 
3206.44, "end": 3210.84, "text": " taking some tricks from here or maybe it's quite possible that these models" }, { "start": 3210.84, "end": 3217.96, "text": " will go beyond GANs because we've poured a lot of effort into GANs and not so" }, { "start": 3217.96, "end": 3224.84, "text": " much yet into these models into the denoising models and you know the" }, { "start": 3224.84, "end": 3231.6400000000003, "text": " samples look pretty good so the left is GAN and the middle here it's a bit small" }, { "start": 3231.6400000000003, "end": 3236.7200000000003, "text": " but the middle here is is their model and I have actually like I've gone" }, { "start": 3236.7200000000003, "end": 3241.88, "text": " through this entire image net class I've looked at every single image to try to" }, { "start": 3241.88, "end": 3246.96, "text": " find these images and I can I can tell you that the images are not in the" }, { "start": 3246.96, "end": 3252.7200000000003, "text": " training or the validation data set here are these are images from the actual" }, { "start": 3252.72, "end": 3258.04, "text": " data set they're pretty close but still I always fear a little bit that you know" }, { "start": 3258.04, "end": 3263.2, "text": " at some point a model is just gonna learn to copy the data all right so that" }, { "start": 3263.2, "end": 3267.8399999999997, "text": " was it I know this video is already too long if you're still here thank you I" }, { "start": 3267.84, "end": 3284.88, "text": " hope you've enjoyed this and I'll see you next time bye bye" } ]
VQoyypYTz2U
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
All about AI Accelerators: GPU, TPU, Dataflow, Near-Memory, Optical, Neuromorphic & more (w/ Author)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gpu", "tpu", "ipu", "wave computing", "dataflow", "near memory compute", "ai accelerators", "deep learning hardware", "sambanova", "cerebras", "graphcore", "mythic", "optical computing", "lightmatter", "groq", "why are gpus so fast", "why does deep learning need gpus", "do i need a gpu for deep learning", "transformers hardware", "hardware matrix multiplication", "fast deep learning", "machine learning hardware" ]
#ai #gpu #tpu This video is an interview with Adi Fuchs, author of a series called "AI Accelerators", and an expert in modern AI acceleration technology. Accelerators like GPUs and TPUs are an integral part of today's AI landscape. Deep Neural Network training can be sped up by orders of magnitudes by making good use of these specialized pieces of hardware. However, GPUs and TPUs are only the beginning of a vast landscape of emerging technologies and companies that build accelerators for the next generation of AI models. In this interview, we go over many aspects of building hardware for AI, including why GPUs have been so successful, what the most promising approaches look like, how they work, and what the main challenges are. OUTLINE: 0:00 - Intro 5:10 - What does it mean to make hardware for AI? 8:20 - Why were GPUs so successful? 16:25 - What is "dark silicon"? 20:00 - Beyond GPUs: How can we get even faster AI compute? 28:00 - A look at today's accelerator landscape 30:00 - Systolic Arrays and VLIW 35:30 - Reconfigurable dataflow hardware 40:50 - The failure of Wave Computing 42:30 - What is near-memory compute? 46:50 - Optical and Neuromorphic Computing 49:50 - Hardware as enabler and limiter 55:20 - Everything old is new again 1:00:00 - Where to go to dive deeper? Read the full blog series here: Part I: https://medium.com/@adi.fu7/ai-accelerators-part-i-intro-822c2cdb4ca4 Part II: https://medium.com/@adi.fu7/ai-accelerators-part-ii-transistors-and-pizza-or-why-do-we-need-accelerators-75738642fdaa Part III: https://medium.com/@adi.fu7/ai-accelerators-part-iii-architectural-foundations-3f1f73d61f1f Part IV: https://medium.com/@adi.fu7/ai-accelerators-part-iv-the-very-rich-landscape-17481be80917 Part V: https://medium.com/@adi.fu7/ai-accelerators-part-v-final-thoughts-94eae9dbfafb Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there! Today I'm talking to Adi Fuchs, who is an expert in AI acceleration technology. We talk about a whole bunch of things in this interview, but it is a little bit of a special thing, because it's not about a paper or anything; it is about a series of blog posts that Adi has authored. I am very much a noob in the AI accelerator field, so I thought it'd be really cool to talk to someone who really knows what they're talking about, who is in this industry and can explain everything from very technical to very noobish for me. So we go over a whole bunch of things, like: why do we even need accelerators? What are the reasons behind it? Why are GPUs here, and why are they good for AI? Up to very, very modern approaches to AI acceleration: TPUs and beyond that. So if you're interested in this, watch the interview. It was very cool, I learned a lot, and I hope you do too. Without further ado, have fun! Hello everyone! Today I have Adi Fuchs with me right here. He is the author of a series on Medium called AI Accelerators. I have noticed in the last few years, and certainly months, that I have no clue about hardware. My conception of hardware is something that goes vvvvv, and if I want a neural network, I need a GPU that goes vvvvvv. And then there's TPUs, and then there's IPUs, and there's lots of stuff, but I never had any clue what any of it meant. So this article series was really valuable to me, and I thought maybe it's valuable to some of you too. So, Adi, thank you very much for being here. Yeah, thanks for having me and thanks for the kind introduction. Can you tell us a little bit about what your background is in this space? Why did you decide to write a series like this? And why did you think that you had the knowledge to do so? Well, I've been back and forth between, I would say, industry and academia. I've been working for several hardware and software companies: Philips, I also worked for Mellanox, and I also worked for Apple for some short period. I did my master's back in Israel, and then I did my PhD in the US, at Princeton University. My studies have been mainly focused on computer architecture; more recently my experience has been with computer architectures, processor architectures in general. There's a lot of software going into it, but from the architectural perspective it's about how you can design systems that can execute these applications very efficiently, and there is a myriad of ways of actually doing so. So after my studies, I started working for one of the big companies in the landscape. Actually, when I graduated my PhD, I always had in the back of my mind that AI and machine learning and deep learning, all of that, was very, very exciting. I took just one or two classes, but I didn't really have any extensive experience in it. But I did feel like I was able to see that potential, and I said, okay, one of the natural things for me after I graduate would be to work for one of those companies that are developing hardware for AI. But the story goes well beyond just hardware: people right now understand that they need to develop smart systems and smart software, it needs to be a full-stack view, going beyond, just like you said, the GPU that goes vvvvv, or the TPU, or the underlying processor, whatnot. So the landscape seemed to be very exciting.
It's rapidly evolving, and there are a lot of solutions out there. What I did just started as a hobby: observing what people are doing, trying to look at the competitive landscape, and trying to see if there's anything that could be interesting for someone who wants to know more about that world, be it a research scientist who wants to know a little bit of what's going on under the hood, or hardware engineers who want to know a little bit more about the high-level motivation for why people are building AI accelerators. So I was hoping that I would be able to create something that contributes to several types of people, I would say. Very cool. So my question is a little bit: what does it even mean to build hardware for something? Obviously, you know, we have computers, and I can do pretty much anything with a computer. What does it mean to say: make hardware for AI? You have this term of user-to-hardware expressiveness. What does that mean? So I would say, as I said, it's more my term, for lack of a better term; people probably have several more accurate academic or industry ways to depict this. The idea is that the user knows on a high level what they're doing, what they want to do, what type of models they want to explore, and how to translate it to high-level code, you know, like Caffe, PyTorch, TensorFlow, and all that. So the research scientist has the big model that they want to explore. But under the hood there is what the hardware understands, what it can execute. So if you look at it, you can see that there are a lot of layers you need to go through: you need to lower from the high-level code all the way down to the bits that are basically executing, you know, to the electrons that are flowing. And it gets really, really complex, because you need to have a full-stack view and really know both whatever crazy idea the user is doing and every last low-level detail of what your hardware can execute: what degrees of parallelism it has, how it accesses the memory, be it DRAM or high-bandwidth memories, HBMs. There are a lot of things going on. What are your precisions? Are you doing FP32? Are you doing FP16, BF16? Are you doing integers? What is your bit width? There are a lot of details that someone needs to understand in order to build a full-fledged, fully capable compiler stack, so that you can basically write whatever you can think of and it will, out of the box, not only work but work well. Because, as you said, you can basically compute anything, right? The Church-Turing thesis: a computer is a computer. But there is a difference between just solving the problem mathematically, or accurately, and actually doing it in a performant fashion, because you can either solve a single problem and it will take a month to run, or you can solve the same problem more efficiently and it can take, I don't know, a few hours or even a few minutes. So that's the idea of user-to-hardware expressiveness: the user can think of whatever, the hardware can execute whatever, and you need to bridge that semantic gap between them.
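As a tiny illustration of that semantic gap, here is how a user might express the same matrix multiply at two different precisions in PyTorch; the framework and compiler stack then have to lower each variant to whatever FP32 or BF16 units, memory layout, and parallelism the target hardware actually has. This is just a hedged sketch for illustration, not code from the article, and it assumes a recent PyTorch build with BF16 matmul support.

import torch

a = torch.randn(1024, 1024)
b = torch.randn(1024, 1024)

# Full-precision FP32 matmul: what the user "means" mathematically.
c_fp32 = a @ b

# The same computation expressed in BF16: fewer mantissa bits, but on
# hardware with BF16 units it can run much faster and use less memory.
c_bf16 = (a.bfloat16() @ b.bfloat16()).float()

# The results agree only approximately; the stack trades accuracy
# for performance depending on what the hardware supports.
print((c_fp32 - c_bf16).abs().max())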
And okay, let's say we agree that we need to build hardware for AI. You go through a little bit of the history of that, I guess, starting with what everyone knows, which is kind of Moore's law, that the number of transistors in processors increased over time in an exponential fashion, but then you go into some less known laws like Dennard scaling, all of this leading up to saying, you know, we've reached the end of clock frequency; I think this is also known. What's also known is probably that we have essentially replaced speed with number of cores, and we're going to parallelism. Now you draw an excellent comparison to GPUs here, GPUs being the current super-many-core architectures, or not just current, historically they already had more cores. What makes GPUs so attractive for AI in the first place? Yes. So this, I think, goes back a little bit more to the intro. You know, you're just saying hardware and you're saying computer, but the fact that you can compute things at certain speeds has been a key enabler. In the introduction I'm talking about AlexNet, right? You see in the AlexNet paper, they say in the abstract: we were able to develop an efficient GPU implementation that allowed us to crunch a lot of data and train on a lot of data within a reasonable timeframe, and get a super fancy model that runs efficiently and within reasonable times. And that basically was a key enabler. What I didn't even mention is that, for example, for natural language processing, the same story happened. If you look at the Attention Is All You Need paper, they say in the abstract: we were able to train it on GPUs for three and a half days, which was an order of magnitude faster than the previous solutions, you know, all those LSTMs and RNNs that have this inherent sequential part. They were able to devise a new architecture that runs well on hardware, and just by being able to harness the power of GPUs they were able to run it, and it basically unlocked new capabilities. So the role of hardware has been very significant, basically being the key enabler of AI capabilities, and that's why I think this series is very important. Going back to our discussion about frequency: it's good to know the history, because when you're talking about AI accelerators, the question is essentially why we need accelerators, and why now. So, as we said at the beginning, there was frequency: we were able to get our circuitry going faster. You can say that, okay, back in the 90s you had this 486 going at 33 megahertz, all the way to like 100 megahertz. Then came the Pentiums, and people would say, yeah, I have, I don't know, 300 megahertz, and then you go to like a gigahertz, and ultimately the Pentium 4 with like three or four gigahertz, back at the time. During that time, people understood that this stops, because you're not able to do Dennard scaling anymore. Dennard scaling, which I mention there, is the actual real problem, you know, going beyond Moore's law. Dennard scaling says that it's not only that you can have smaller transistors, they can also go faster, and you can cram more transistors in: if your dimension scales by k, you can have k squared the number of transistors, and each one will be k times faster.
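Spelled out, the Dennard scaling arithmetic from that answer looks roughly like this (a sketch of the classical scaling rules, with k the scaling factor per generation; this derivation is standard and not quoted from the series):

\[
\text{feature size} \to \tfrac{1}{k}, \qquad \#\text{transistors per area} \to k^2, \qquad C \to \tfrac{C}{k}, \qquad V \to \tfrac{V}{k}, \qquad f \to k f,
\]
\[
P_{\text{per transistor}} = C V^2 f \;\to\; \tfrac{C}{k} \cdot \tfrac{V^2}{k^2} \cdot k f = \tfrac{C V^2 f}{k^2},
\qquad
\text{power density} \;\to\; \tfrac{C V^2 f}{k^2} \cdot k^2 = C V^2 f \;(\text{constant}).
\]

So as long as the voltage kept dropping with the feature size, you got more and faster transistors at constant watts per square millimeter.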
And the key enabler there was that you were able to lower the voltage by that same factor. The thing is, around the year 2000, the voltage stopped scaling at the rate at which you were able to increase the frequency. So you can get faster circuitry, but your power density essentially increases; that's where you can see the graph that increases, and people said, okay, we cannot have faster transistors. So that was the first stage in the evolution: we cannot have faster transistors. You can see the green dot basically plateauing. So the implication is that we cannot have a single task going faster, but, as Moore's law says, we can still have more transistors; they just cannot go faster. So instead of having one task going fast, we're going to have multiple tasks going at the same speed. So instead of increasing the frequency twice, we'll have twice the number of cores, and depending on how efficiently we can map the problem, we'll still be able to get 2x by essentially parallelizing. And that was phase two, which is essentially the multi-core era. You're able to cram more transistors onto the same silicon wafer, or the same silicon die, so you'll be able to get twice as many cores. And as you can see here, the green line, especially for GPUs as the main beneficiary. You're saying: instead of this design, which is the CPU, with all sorts of very sophisticated mechanisms, stuff like branch predictors, prefetchers, and all these speculative things that say we can execute an instruction but this will take too long, out-of-order execution, all sorts of tricks to make a single stream of instructions go fast, let's re-devise our software a little bit and break this stream of instructions into several independent streams of instructions, called threads. And we're going to be able to run them, hopefully in a perfectly parallel fashion, on different, what we call, cores, and each core will execute its own stream of instructions. So essentially we'll break up one task into multiple subtasks, and by that we'll still be able to get the same degree of speedup: if we get 2x the tasks, we'll be able to get a speedup of 2x. Obviously there are a lot of difficulties, but that's the main idea. So eventually, if we have enough parallelism, we'll be able to get to hundreds or even thousands of cores, and we'll be able to get hundreds or thousands of times speedup compared to our regular task. But around the beginning of the 2010s, in 2010 and 2011, there were two different works that highlighted the same phenomenon: because of the end of Dennard scaling (again, we're not able to scale the voltage), just having transistors powered on, not even doing computation, no matter at what speed, will increase our power density. Meaning Moore's law is still working: we can still shrink down the transistors, we can still cram more and more cores into the same silicon area, but the power will not remain constant. So the power also increases, and that will be unsustainable.
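The unsustainable part follows from the same bookkeeping: once the voltage stops scaling, with the notation from the sketch above,

\[
V \to V \;(\text{fixed}) \quad\Rightarrow\quad P_{\text{per transistor}} \to \tfrac{C}{k} \cdot V^2 \cdot k f = C V^2 f \;(\text{constant}),
\qquad
\text{power density} \to C V^2 f \cdot k^2.
\]

Under a fixed power and cooling budget, that means only roughly a 1/k^2 fraction of the chip can be switched on at full speed each generation, which is exactly the phenomenon named next.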
And this created the phenomenon that these works are talking about, which is called either the utilization wall or dark silicon. Yeah, what's that? It means that, you know, you can have, let's say, a million transistors; it doesn't matter how many transistors you have, not all cores can be turned on at the same time. Meaning, for the purpose of your computation, you're going to remain under a fixed budget, just due to power constraints. So basically it means that you're not going to be able to make use of more transistors. And at this point the power constraints are mainly due to us not being able to cool down a thing that consumes more power? What are the constraints there? So the constraint is that the power density, the watts per square millimeter, just starts growing exponentially as you exponentially cram in more transistors, because the power per transistor stops scaling; it remains constant. So if you have 1000x the transistors, you'll have 1000x the power, and that creates a problem that will be unsustainable and will require cooling that either does not exist or is super expensive to manufacture. So that created a problem that essentially says, okay, we're not going to be able to just get more transistors and turn them all on. And if you're not going to be able to do that, then came the notion of building accelerators. Meaning that instead of having a single piece of silicon solving a wide range of problems, you're going to focus on a narrower scope of certain applications, and those applications need to have some properties. So that's the idea: if we're not going to get more usable transistors, we're going to create smart, purpose-built circuitry, with purpose-built compute and memory and communication, that is basically targeting specific problems. You can see examples like video encoders, Bitcoin miners, and AI. Yep. So you can see there, if you look at more general-purpose processors, in terms of power efficiency or even performance, the general-purpose processor does fairly well over a wide application range. But those accelerators, for example for FFT, or graphs, or matrix multiply, are really good at a certain task, but they do really poorly on something else. For example, you cannot run your operating system, or it wouldn't be recommended for you to run your operating system, on an AI accelerator. Well, wait, just wait. The community is going to figure it out. You just need to scale enough. But I guess, I think from this point on it's sort of common knowledge again that GPUs were purpose-built for graphics, but inherently that meant kind of multiplying matrices together. And then on the other hand, deep neural networks, just by happenstance, by being ConvNets or feed-forward networks, also use a lot of matrix multiplies. And I guess that was just how the universe works: these things came together, and that was just a really neat fit. The point though is, GPUs weren't made for AI in the first place, even though it seems to be a really good application for them. GPUs are good for AI, but what can be even better? In which places are GPUs still suboptimal for the AI things that we are doing? Well, it really depends on your application's demands and the application scope. For example, you can see in the map that you're showing here that GPUs are really good at flexibility, and they're really good at matrix multiply.
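One concrete reason the matrix-multiply strength carries so far: convolutions, the workhorse of those early vision models, can be lowered to a single dense matmul via the well-known im2col trick, so a machine that is only good at big matrix multiplies still runs them well. A minimal sketch in Python (illustrative, not from the article):

import numpy as np

def conv2d_as_matmul(x, w):
    # x: (H, W) input, w: (kh, kw) filter; valid convolution, stride 1,
    # in the deep-learning sense (i.e., cross-correlation).
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    # im2col: unfold every receptive field into a row...
    cols = np.stack([
        x[i:i + kh, j:j + kw].reshape(-1)
        for i in range(oh) for j in range(ow)
    ])
    # ...so the whole convolution becomes one dense matmul, which is
    # exactly the operation GPUs (and matmul accelerators) are built for.
    return (cols @ w.reshape(-1)).reshape(oh, ow)

x = np.arange(16, dtype=np.float32).reshape(4, 4)
w = np.ones((2, 2), dtype=np.float32)
print(conv2d_as_matmul(x, w))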
You could say linear algebra is something that GPUs do pretty well. And if you can map a lot of these problems, like a lot of convnets and recommender models and all that, onto a GPU and do dense linear algebra, that will give you a fairly good boost. But if you would go all the way on efficiency and do something really, really specialized, you could say: let's develop an accelerator that just does ResNet, for example. That would be really, really tailored, collapsed to a certain type of network. Theoretically, everything would be hardwired; even the weights, everything would be a perfect fit for that. But it would not be able to execute anything else; it would be very, very bad at doing other, more general purpose AI. So that raises the question: how can you trade flexibility for efficiency? For example, one of the things that some of the companies that are not GPU-based are tackling are these large language models, those GPT-3s and all that. And for GPUs, if you look at the A100s, you can see that it was a conscious engineering decision for Nvidia to go for high bandwidth memories, which are basically fast memories, but limited in capacity. Alternatively, you can go for something else: a slower, DRAM-based memory. So HBMs are fast but limited in capacity, and DRAMs are huge, terabytes versus dozens of gigabytes. And if your model requires terabytes of data, you would need hundreds or even thousands of GPUs just to map the memory space of your model and do everything in memory. I'm not saying that GPUs can't do it, but it would require a lot of GPUs turned on, a lot of power, and a lot of communication going on between different GPU systems, to be able to train a single model with hundreds of billions of parameters.

I mean, that's exactly what we see, right?
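A quick sketch of that capacity arithmetic; the per-parameter byte count and the GPU memory size are assumptions for illustration, not claims about any particular system:

```python
# Back-of-the-envelope: GPUs needed just to *hold* a large model's state.
# Assumptions (illustrative): 175B parameters, ~16 bytes of training state
# per parameter (weights, gradients, optimizer moments), 80 GB of HBM/GPU.

params = 175e9
bytes_per_param = 16          # assumption; depends on precision and optimizer
hbm_per_gpu = 80e9            # e.g. a GPU with 80 GB of HBM

total_bytes = params * bytes_per_param
print(f"model state: {total_bytes / 1e12:.1f} TB")                  # ~2.8 TB
print(f"GPUs for capacity alone: {total_bytes / hbm_per_gpu:.0f}")  # ~35
# And that is before activations: capacity, not just bandwidth,
# is what forces these models onto many devices.
```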
Okay. So yeah, I guess we can just dive into what kind of hardware beyond GPUs exists. That is to say, in part three of your series, you go into a little bit of the architectural, sorry, foundations, and you describe what exists: what instruction sets are, what kinds of models exist, for example configurable processors. You give a very extensive background overview, which we're going to skip right now, just due to time. I just found this very, very funny; I guess that's why you posted it here. So this is a single instruction I can use on an Intel processor that computes approximations to the reciprocals of the square roots, with less than 2^-28 relative error, of the packed double-precision floating-point values from these things, and stores the result in that thing, with write mask k1. That is excellent. I need that instruction every day.

Yeah. So this is basically showing how things can be devised. When you look at a processor, the traditional model of a processor is called the von Neumann model. You have a processor, your processor accesses the memory, your processor fetches an instruction from the memory, it decodes the instruction and says, oh yeah, we should do this and that, so this instruction accesses the memory and loads, then let's fetch the next instruction, and so on. The instructions are basically built from an ISA, the instruction set architecture, which you can think of as the vocabulary that the processor supports. Some processors support x86, some processors support ARM. x86 is an example of what we call complex instruction set computing, or CISC, and ARM is RISC. So there was a trade-off between how compactly a single instruction can express things, which takes less memory, so you have a large vocabulary to express more complex computation, versus the RISC, the reduced instruction set computer, like ARM, where the same computation is basically translated into a lot of simpler instructions. So that was an ongoing discussion, but this gives a background of how a processor basically works. There are a lot of concepts that I showed in part three that were basically used as the background for part four. Historically, I wrote parts three and four as one combined piece, but a lot of people advised me that it was just going to be super long, so I needed to break it down.

So if anyone wants the background, this article is really nice on the foundations of all of this. And I think people can relate a little bit, because in NLP you have this whole tokenization problem of how big you make your vocabulary, and if you make it too small, you're going to have to break stuff down into smaller pieces and so on. I think it's approximately the same concept right here: you're essentially trading memory for speed.

Yes, and also, the thing is that you need a very smart compiler to look at your code and say, okay, if you're writing in C, for example, this sequence of instructions is going to be translated all into that single instruction. So you'll have a smart and very, very complex compiler that is able to map your sequence of operations into that. Sometimes it works, and sometimes you're just going to have these ghost instructions that no one really uses.
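As a toy illustration of that vocabulary trade-off, here is a sketch with a hypothetical mini-ISA (not real x86 or ARM encodings): one complex memory-to-memory instruction versus the equivalent sequence of simple ones:

```python
# Toy illustration (hypothetical ISA, not real x86/ARM) of the CISC vs RISC
# trade-off: one compact complex instruction versus several simple ones
# that together perform the same memory-to-memory multiply-add.

cisc_program = [
    ("FMADD_MEM", "addr_a", "addr_b", "addr_c"),  # mem[c] += mem[a] * mem[b]
]

risc_program = [
    ("LOAD",  "r1", "addr_a"),
    ("LOAD",  "r2", "addr_b"),
    ("LOAD",  "r3", "addr_c"),
    ("MUL",   "r4", "r1", "r2"),
    ("ADD",   "r3", "r3", "r4"),
    ("STORE", "addr_c", "r3"),
]

# Same work: the CISC version is denser in memory (1 instruction vs 6),
# but the decoder that recognizes FMADD_MEM is more complex hardware,
# much like a big NLP vocabulary trades shorter sequences for a bigger table.
print(len(cisc_program), "vs", len(risc_program), "instructions")
```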
So while there are a lot of solutions out there, I would say most of them stem from a handful of architectural ideas that were highlighted in part three. Originally there's the GPU, with CUDA, that does dense linear algebra and basically has this execution model, single instruction, multiple threads. It's the idea of the classical von Neumann model: you have instructions, they're translated to a processor-level ISA, the instruction set architecture that Nvidia GPUs understand, and it's being parallelized, and it has all this systolic-like execution. And a systolic array is an idea that dates back to the 1970s, where you have a single piece of hardware that is really good at doing matrix multiply, because when you're doing matrix multiply, the data from the A and the B matrices is basically flowing through it. And if you have very smart circuitry like that, which is in a sense a smart accelerator-like engine just for matrix multiply, it'll be able to carry out matrix multiply really efficiently. So the GPUs have that.

And you can say that there are some other companies that are in the camp of what we call VLIW, a very long instruction word, where you're going to have a heterogeneous array of compute machines: a memory compute machine, a vector compute machine, a matrix multiply unit, and maybe some sort of a linear compute machine for your ReLU or tanh operators and whatnot. Then you have a static compiler that basically creates this huge instruction that says: okay, this data goes to the vector unit, this data goes to the matrix multiply unit, and so on. And because you know the timing of all these units, you can have a smart compiler that statically creates this single word that is going to be fed to all of them. So at compile time, a smart compiler can efficiently schedule the different data, or operands, to these machines, and you get really efficient execution.

For, I would say, the systolic slash VLIW camp, arguably the most famous example is Google's TPU, which was presented around mid-2017 at a conference called ISCA, the International Symposium on Computer Architecture, which is the biggest computer architecture conference. They showed a design where the TPU is based on a big systolic array execution, with a linear unit and this smart memory, and everything is being fed, and they have a smart compiler that translates AI code and is able to execute DNNs, these deep neural nets. And that was arguably the most famous non-GPU AI accelerator presented at the time. So you have the Google TPU. You also have a startup called Groq; some of its founding members were part of the Google TPU team, architects at Google that took some of the ideas of Google's TPU and created a more commercialized accelerator for deep neural nets. And there is also Habana. So I would say Google, Groq, and Habana are the camp of VLIW plus systolic array accelerators.
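To make that flowing-data picture concrete, here is a minimal Python simulation of an output-stationary systolic matrix multiply; it sketches the skewed dataflow, not any vendor's actual design:

```python
import numpy as np

# Minimal sketch of an output-stationary systolic array computing C = A @ B.
# PE(i, j) sits at grid position (i, j); A streams in from the left and B
# from the top, skewed so that a[i, k] and b[k, j] meet at PE(i, j) in time.

def systolic_matmul(A, B):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for t in range(n + m + k - 2):      # cycles for the wavefront to pass
        for i in range(n):
            for j in range(m):
                step = t - i - j        # skew: data reaches far PEs later
                if 0 <= step < k:
                    C[i, j] += A[i, step] * B[step, j]  # one MAC per cycle
    return C

A, B = np.random.rand(3, 4), np.random.rand(4, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)  # matches a normal matmul
```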
So if I understand this correctly: essentially they have a chip, or a board, that has many different, let's say, subchips on it. One is really good at matrix multiplying, one is really good at doing ReLU, one is really good at whatever, softmax. So all these operations that we need in AI, they have specialized subchips for, and then they have essentially a very smart router that says: okay, you go here, you go here, you go here. So I could compute, let's say, the last layer's ReLU, or the last batch's ReLU, at the same time that I compute this layer's forward pass through a linear layer. Is that right?

Yeah, you're basically pipelining it. So if you have one thing that needs the ReLU, and one thing that needs the matrix multiply for the conv operation and then needs the ReLU, then you can feed the next sample, or whatnot, into the matrix multiply while the other one is already doing ReLU. So you can do a sort of pipelined execution, and by that you're basically filling up your compute machines, right? And by that you're getting better utilization, because you're using all of your hardware at every point in time, and everybody's happy, and your architecture is perfectly balanced, because your compiler is smart enough to understand the program.

Yeah. So essentially we're saying we want the purpose-built hardware, like the unit that just does ReLU, because that's way better than having a CPU do ReLU. But in order to have the flexibility, we have a bunch of them on a chip, and then we have a router and a compiler that knows how to use that router and the pipelines. Okay, excellent. But that still seems to me a little bit in the spirit of a GPU, of what you said: you essentially have this von Neumann model, except here there's pipelining added, there's distribution to different subunits added, right, but it's still these kinds of instructions that run in sequence, and the compiler needs to understand how to translate a program into that. And as I understand it, the other companies here are trying to go a bit more out of that paradigm, is that correct?

So I would say the other big direction that companies are taking is the dataflow direction. Some companies are combining two elements: one is called reconfigurability, and the other one is called dataflow. So for reconfigurable dataflow, I think Tenstorrent is doing it, I think SambaNova is doing it, originally there was a company called Wave Computing that did it, and there was another company called SimpleMachines doing it. The idea of reconfigurable dataflow is that, first of all, if you look at a PyTorch or TensorFlow, Keras or Caffe program, an AI, a deep learning application, you can see that there are different layers and they're communicating with each other. So you have a known, predetermined set of operators, and you know how the data is being communicated between different parts of your graph. So the underlying computation is basically structured as a computation graph. What does that mean? Like you can see over there: you have your layer, and from that you have another layer that does ReLU, and then you feed it to another conv layer, and so on. So you have something that is not instruction-level, but more about the way that your data is basically flowing between different layers.
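A tiny sketch of that graph-level view: the program is just operators and edges, and a compiler statically pins each operator to a compute element; the unit names here are made up for illustration, not any vendor's real API:

```python
# Tiny sketch of the dataflow view: the program *is* a graph of operators,
# and a compiler statically assigns each node to a compute element.
# Node and unit names are hypothetical, for illustration only.

graph = {                      # node: (operator, inputs)
    "x":     ("input",  []),
    "conv1": ("matmul", ["x"]),
    "act1":  ("relu",   ["conv1"]),
    "conv2": ("matmul", ["act1"]),
    "out":   ("relu",   ["conv2"]),
}

units = {"matmul": "MXU-0", "relu": "VPU-0", "input": "DMA"}

# Static placement: every operator is pinned to a unit ahead of time,
# so at runtime data simply flows along the edges, no instruction fetch.
placement = {node: units[op] for node, (op, _) in graph.items()}
for node, unit in placement.items():
    print(f"{node:5s} -> {unit}")
```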
So the idea is that instead of taking that dataflow communication graph, flattening it to the classic von Neumann model, and then trying to re-parallelize it, you can start off from this dataflow graph and statically map it, again, you need a smart compiler to do that as well, onto specialized hardware that is capable of executing dataflow. Meaning you can have a compute element that does multiply here, and another one that does add there, and you can basically break down your dense linear algebra into compute units and feed them into other compute units, instead of breaking down your computation into micro-operations, like saying, oh, here's an add, and oh, now you need to multiply, and all that. So it is more natural to look at the computation graph as a dataflow graph and map it to the hardware, instead of going back and forth, flattening it to the von Neumann model and then re-parallelizing it. So these companies' bet is that this model is more natural, it's more hardware friendly, and ultimately you can get a better gain, because you're able to have a more complex understanding of the graph: you can look at different elements in your graph, you can have a smart compiler that fully understands your hardware, it knows the underlying number of compute elements and what each compute element in your processor, in your accelerator, is doing, and from that it will create a mapping that is essentially very static, and your data is just going to flow, instead of you needing to manually orchestrate it and break it down into instructions.

You know, one of the main selling points of the existing landscape, like GPUs, is that GPUs have a very mature software stack and they're very flexible; you can program everything from that von Neumann model. If you can create a flexible enough architecture, you'll be able to handle new models, because the main challenge for an accelerator company is that it takes two or three years to tape out a chip. Meaning you need to think about your idea, about your architecture, about everything you can execute, and you need to be generic enough, because within two or three years it's possible that your application space has completely shifted away. And if you look at that mapping of specialized accelerators, if you're here but your application space has moved here, you're not going to be able to execute it efficiently. So you need to be very open-minded, very mindful about being flexible enough to support this. One of the main challenges in that is the ability to create a smart enough software stack that will be able to execute it. So it's not a trivial task.

You can take the Wave Computing case as an example. Wave Computing was a company that was really revolutionary. They were able to present a commercialized accelerator that does reconfigurable dataflow at the beginning of 2017. They had fancy hardware with 15,000 cores running at 6.7 gigahertz, with a lot of engineering complexity, able to have both slow memory and fast memory and all that.
But from what I understood, from what the CEO said in an interview: okay, we were not able to succeed with it, because it was so complex that, in going from the basic cases where we were able to showcase a few kernels to generalizing to more complex and real-world applications, we found that our hardware-software stack had to solve intractable problems, and that became unreasonable. So I would say their problem was that they were way, way ahead of the curve; people were just starting to explore these problems, and they were not able to estimate those difficulties. They were pioneers, but ultimately it didn't pan out so great for them, because eventually they filed for bankruptcy.

There's also this concept of in-memory compute or near-memory compute. What does that mean?

So there are several notions of how close the compute and your memory should be. One form of near-memory compute says that you have your memory module, and from it you load data into what we call a software-controlled scratchpad memory. So you have small, fast memories. You can think of them as a processor cache, but they're software-controlled. Traditionally, a processor cache, like in the von Neumann model, basically has a heuristic of keeping the most recent accesses, because that's the hot data. A software-defined scratchpad memory is more compiler-controlled: you know how you're going to access it. One of the guiding principles of devising an accelerator is that you're able to anticipate how your memory and data accesses are going to look. You're going to have a handful of very simple, basic computational structures that iterate over a lot of data in a really recurring way; that's one of the things that enables you to develop an accelerator in the first place. So a scratchpad memory is a fairly small and fast memory, it can be kilobytes, up to like a megabyte of data, that sits really close, within the same core on that piece of silicon, and you'll be able to access that data fast; it will take like one or two clock cycles.

Another approach is the processing-in-memory approach. That's when the processing element sits really close to the actual memory. If you manufacture something like a DRAM, or something called memristors, which are memory-based resistors, you're going to be able to manufacture a memory module that has logic elements inside of it. You can see examples like Mythic. One of the things those companies developing what we call processing in memory do is look at the deep learning computation, look at the dot product, and do analog computation from that; it will be fairly complex. But the idea is that you don't really need to fetch data back and forth from the memory, because it's all within this special circuitry that sits within your memory module, and you're saving a lot of the energy of going back and forth from the memory chip to a different chip, the compute processing element.

It's essentially like having a lot of cores that also have lots and lots of registers at those cores, but the registers aren't just for temporary data, they are actually the memory.
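Here is a heavily simplified numeric sketch of that analog in-memory dot product idea, with illustrative shapes and bit widths (assumptions, not any real device's specification): weights sit in the array as conductances, inputs arrive as voltages, column currents sum for free, and an ADC quantizes the readout:

```python
import numpy as np

# Heavily simplified model of an analog in-memory dot product.
# Weights live in the memory array as conductances; inputs are applied as
# voltages; column currents sum automatically (Kirchhoff's current law);
# an ADC with limited resolution reads the result back into the digital
# domain. Shapes and bit widths are illustrative assumptions.

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 8))   # conductances stored in the array
x = rng.uniform(-1, 1, size=8)        # input voltages

analog_current = W @ x                # the "free" in-memory accumulation

def adc(values, bits=8, full_scale=8.0):
    """Quantize the analog readout to a limited-resolution digital value."""
    step = 2 * full_scale / (2 ** bits)
    return np.round(values / step) * step

digital_out = adc(analog_current)
print("exact:    ", np.round(analog_current, 3))
print("after ADC:", np.round(digital_out, 3))
# The multiply-accumulate itself costs almost no data movement; the ADC is
# where much of the power and precision challenge shows up.
```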
In a sense, you can think about it that way. The difficulty is that you need to really change the memory that you're manufacturing, and that's something that not a lot of companies are doing. But it's a promising direction, because if you have something that depends less on your transistors, it's less prone to the failures of Moore's law, so the end of Moore's law might not be the bottleneck for some of these modules. But there are other things: you can see that there's an analog-to-digital converter, which can be power hungry, and that creates a slew of analog compute problems.

There are also more, let's say, esoteric things, well, all of these were already esoteric to me, but there are more esoteric things, like optical computing and neuromorphic computing and things like this. Do you have any favorites there, or anything that you think is promising and not buzzwordy?

I think that Lightmatter is a company that was founded by a few MIT graduates, and they have this idea that representing analog computation via light could be more efficient than expressing it through the digital domain. It's an interesting problem. I am not really versed in the different types of difficulties there, but it's sort of like thinking about an analog, neuromorphic model, where the brain basically acts on analog pulses. So this is a little bit more trying to mimic the way that the brain works than traditional artificial neural networks, where you have a BF16 number representing your weights. You can say that this is closer to reality, and it's also more energy efficient, but these are more advanced technologies, so I would say they probably have their own set of challenges, and they're not as mature as the other solutions. And you never know which one of these technologies will prevail and be the winner.

And what is neuromorphic computing?

I think that neuromorphic computing, the way that we know it, is a form of analog computing. You're going to have data over here, you're going to have the weights sitting within your memory, and your activations are going to come into that memory as inputs. You're going to be able to do an analog addition, and instead of doing that dot product between the weights digitally, you're going to have a single dot product doing vectorized compute in an analog fashion, using analog circuitry to compute the results. So it's more similar, in theory, to the spiking neural network model, where your brain acts on electric pulses. That's what these solutions are trying to mimic conceptually.

And you know, eventually, if you look at hardware in the grand scheme of things, you have those accelerators, and these accelerators are good at doing AI. But if you really want to get into the definitions, you can look at Goodfellow's deep learning book. It's not really all of AI: there's a Venn diagram where inside AI there is machine learning, and then there's representation learning, and then there's deep learning. And within that deep learning, you can say that these accelerators are good at a subset of deep learning, a subset of ML, that is good at doing matrix multiplication.
You know, they're really good at doing things like convs and transformers. But is that a general solution to AI? No one really knows. The interesting thing is that because hardware was a key enabler, it has also sort of become a limiter on what you can achieve. People are saying: is attention all you need, is conv all you need? Could be. But one thing is for sure: it covers most of what your hardware can do. Your hardware is really good at transformers and attention and convs. But is that how intelligence really works? Maybe there's a huge slew of applications that could mimic human intelligence better, but that cannot be run efficiently on hardware accelerators the way they're built today, and we're not going to be able to explore them, just because we don't have the hardware for them and we don't have a way to run them efficiently. So it's an interesting problem.

There is this sentiment that's echoed throughout the community that, for example, for graph neural networks we don't have good hardware, and therefore, probably, we're not going to explore them as much, which also means that hardware manufacturers, since we can't demonstrate that graph neural networks are really good, won't build graph neural network chips. Do you see this? Do you see it generally going, let's say, more and more converging on some applications? Or do you think, okay, we'll discard some of the applications, but the ones we have will morph and develop into different variants and so on? How do you see the expense of manufacturing hardware affecting the diversity of ideas in the field? Do you think there is hope to increase diversity, even with the cost of hardware?

It's an interesting question. I would say, obviously, money makes the world go round. If there's money in these applications, you're going to be able to build the hardware for them. The thing is, like we said earlier, hardware has been a key enabler for what you can achieve, and basically, if you cannot run your application on hardware, it will be hard to create the ecosystem for that application that would justify building special hardware, because it's a bit of a chicken-and-egg problem. If I were to develop an accelerator for a non-Euclidean set of problems, I would first need to look for the applications for it, I would need to look for the justification for it, simply because if I'm a startup company, I'm going to need funding for it, right? But if you don't have people that are experienced in the industry, you won't be able to find that justification. So it's a bit of a chicken-and-egg problem. So as I said, maybe attention is all you need, maybe conv is all you need; for sure, it's most of what we have right now. And it will be interesting to see. As I said in the final thoughts, I would think that in the next two or three years or so, things are going to become clearer and architectures are going to stabilize, just because we understand the problem better. It will take us four or five years to really converge to a set of common practices and to the ways we're developing software libraries and compilers. We're going to have these, I would say, three or four stable software stacks that are really good at the conv and transformer games.
Will there be other models that create other stacks? Sure. But if I were to start a startup today, it would be really hard for me to go for the convs and the transformers, just because this is a saturated field; people are doing it fairly well, and you're basically almost maximizing what you can do with your hardware.

The last thing here in your final thoughts is "everything old is new again". Do you want to explain what that's about?

Yes. It seems like, on one hand, these models that have been the most popular, those key enablers, the AlexNets and the ResNets, the attentions and BERTs and GPT-3s, all originated in academic papers, right? But in the hardware field, there's a little bit more of a disconnect. I would say that there are dozens of papers presenting new ideas every year in the top conferences, ISCA, HPCA, ASPLOS and MICRO. But eventually you can see that all these accelerators were basically using ideas that originated 30, 40 years ago: processing in memory, I would say the 1970s and 80s; VLIW, again, the 1980s; systolic arrays, the 1970s; dataflow programming, the 1970s. So there is a bit of conservatism, because a company building hardware, at least in the older days, when it was hard to get funding, would need to really justify itself and go for these well-hashed-out ideas before going for the wild-card ideas. And once you have that, you might be able to explore more revolutionary ideas. Unfortunately, I think that at this point a lot of the architectural foundations are already established, so you won't be able to explore those crazy accelerators, those things that are really out there. You'll be able to somewhat integrate parts of them into your existing architecture, but it would be very daring to go and break your entire architecture completely. And in a very competitive landscape, you might not be able to take that risk.

You would be surprised, but there are many people in the AI community that say that all the AI ideas have been had in the 80s and 90s as well, and there's essentially nothing new under the sun. But it's a debated position.

It's a debated position. Well, I would say that one thing is for sure: going back to "attention is all you need" and "conv is all you need", and essentially, it is what you've got, a lot of these basic computational structures are already there. People are building on the baseline of these architectures simply because, for me as a hardware architect, from my perspective, this is what the hardware can do. It even goes back to the academic notion of accelerators. There's a work called Stream-Dataflow Acceleration that was presented at ISCA 2017, where they say, okay, acceleratable domains need to fulfill certain properties: they need to have fairly confined control flow, they need to be fairly repetitive, you need to know the data reuse, you need to know a lot about how your computation patterns behave. So if you're not going to be able to build an accelerator that completely breaks out from this common wisdom, that breaks out of this template, you might not be able to have an AI model that behaves that way. Is that true or not? Could be, could be not. Maybe we will find out that our existing patterns are fulfilling enough.
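As a toy contrast of those "acceleratable domain" properties, here is a sketch of an accelerator-friendly kernel (fixed bounds, predictable reuse) versus a data-dependent one that resists static scheduling; purely illustrative:

```python
# Accelerator-friendly: fixed loop bounds, no data-dependent branches;
# the access pattern (and thus data reuse) is fully known at compile time.
def dense_dot(a, b):
    acc = 0.0
    for i in range(len(a)):       # trip count known up front
        acc += a[i] * b[i]        # the same MAC every iteration
    return acc

# Accelerator-hostile: which elements get touched depends on the values,
# so a static compiler cannot pre-schedule the dataflow.
def data_dependent_walk(graph, start, threshold):
    total, frontier, seen = 0.0, [start], {start}
    while frontier:                       # trip count unknown statically
        node = frontier.pop()
        total += graph[node]["weight"]
        for nxt in graph[node]["edges"]:  # fan-out varies per node
            if graph[nxt]["weight"] > threshold and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return total
```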
I would say that there are a lot of problems, even within the existing architectures, that we haven't yet been able to fully explore.

Cool. Is there anything else you want to give people on the way? I guess there's not an easy way to get into hardware yourself at home or something, but if people want to dive in, they can certainly go to your articles, which I think are great. I will obviously link them in the video description. Is there any message you want to get out there regarding this?

Beyond looking at the blog, I would say: try to look at high-level overviews of how hardware and software behave. They're really tightly coupled today. It's a really exciting time to be either in AI or in hardware, because historically it's a great opportunity, from many aspects, to explore AI hardware, either as a research scientist, as a data scientist, or as a computer scientist. It's really good to see how all these pieces pan out. Start looking at the high-level overviews and then just deep dive into any of them. Open a computer architecture book; the old ideas are already there. Try to look at the high-level white papers from the big companies, the Googles and the Nvidias and some of the accelerator companies. Try to understand how your software behaves, and you might find that it's not as good as it should be. It's really great if you can execute your models much faster than you anticipated: if it takes you three hours to train your model instead of three days, that's going to be a key enabler for a lot of your capabilities. So just try to do all those tweaks, try to understand the common practices, try to follow programming books and rules and best practices, and you might find out that you're going to be a kickass data scientist.

Excellent. Well, Adi, it was a great pleasure having you here. I learned a lot. Really, I had no clue before this. Thank you very much for these articles and thanks for being here.

Thanks a lot for having me.
[ { "start": 0, "end": 5.84, "text": " Hello there! Today I'm talking to Adi Fuchs, who is an expert in AI acceleration technology." }, { "start": 6.5600000000000005, "end": 11.52, "text": " We talk about a whole bunch of things in this interview, but it is a little bit of a special" }, { "start": 11.52, "end": 17.04, "text": " thing because it's not about a paper or anything, but it is about a series of blog posts that Adi" }, { "start": 17.04, "end": 23.36, "text": " has authored. I am very much a noob in the AI accelerator field, so I thought it'd be really" }, { "start": 23.36, "end": 28.16, "text": " cool to talk to someone who really know what they're talking about, who are in this industry" }, { "start": 28.16, "end": 35.2, "text": " and can explain everything from very technical to very noobish for me. So we go over a whole bunch" }, { "start": 35.2, "end": 42.08, "text": " of things like why do we even need accelerators? What are the reasons behind it? Why are GPUs here" }, { "start": 42.08, "end": 49.28, "text": " and why are they good for AI? Up to very, very modern approaches to AI accelerations, TPUs," }, { "start": 49.28, "end": 56.480000000000004, "text": " and beyond that. So if you're interested in this, watch the interview. It was very cool. I learned a" }, { "start": 56.48, "end": 70.8, "text": " lot and I hope you do too. Without further ado, have fun! Hello everyone! Today I have Adi Fuchs" }, { "start": 70.8, "end": 79.28, "text": " with me right here. He is the author of a series on Medium called AI Accelerators. I have noticed in" }, { "start": 79.28, "end": 87.68, "text": " the last few years and certainly months that I have no clue about hardware. My conception of hardware" }, { "start": 87.68, "end": 94, "text": " is something that goes vvvvv, and if I want a neural network, I need a GPU that goes vvvvvv." }, { "start": 94, "end": 101.52000000000001, "text": " And then there's TPUs, and then there's IPUs, and there's lots of stuff, but I never had any clue" }, { "start": 101.52000000000001, "end": 108.8, "text": " what any of it meant. So this article series was really valuable to me. I thought maybe it's" }, { "start": 108.8, "end": 113.2, "text": " valuable to some of you too. So, Adi, thank you very much for being here." }, { "start": 114.32, "end": 117.67999999999999, "text": " Yeah, thanks for having me and thanks for the kind introduction." }, { "start": 119.03999999999999, "end": 125.75999999999999, "text": " Can you tell us a little bit about what your background is in this space? Why did you decide" }, { "start": 125.75999999999999, "end": 133.44, "text": " to write a series like this? And why did you think that you had the knowledge to do so?" }, { "start": 133.44, "end": 141.28, "text": " Well, so I've been back and forth between, I would say, industry and academia. I've been working for" }, { "start": 141.28, "end": 146.32, "text": " several hardware and software companies. You know, Philips, I also worked for Mellanox. I also worked" }, { "start": 146.32, "end": 151.92, "text": " for Apple for some, you know, short period. And I've been back and forth. I did my masters back in" }, { "start": 151.92, "end": 161.04, "text": " Israel. And then I did my PhD at the US at the Princeton University. And I always, you know," }, { "start": 161.04, "end": 168.07999999999998, "text": " my studies have been mainly focused on computer architecture. 
You know, more recently, my" }, { "start": 168.07999999999998, "end": 172.95999999999998, "text": " experience has been with computer architectures, processor architectures in general. There's a lot" }, { "start": 172.95999999999998, "end": 177.84, "text": " of software going on into it. But, you know, from the architectural perspective is how you can" }, { "start": 179.12, "end": 189.35999999999999, "text": " design systems that can execute these applications very efficiently. And there's a myriad way of" }, { "start": 189.36, "end": 195.84, "text": " actually doing so. So after my studies, I started working for one of the big companies in the" }, { "start": 195.84, "end": 204.24, "text": " landscape. And I said, actually, when I graduated, I had, when I graduated my PhD, I always had in" }, { "start": 204.24, "end": 210.96, "text": " the back of my mind that AI and machine learning and deep learning, all that has been very, very" }, { "start": 210.96, "end": 216.8, "text": " exciting. You know, I took just like one or two classes, but I didn't really have any extensive" }, { "start": 216.8, "end": 223.20000000000002, "text": " experience in it. But I do feel like I do, I was able to see that potential. And I wanted to say," }, { "start": 223.20000000000002, "end": 228.48000000000002, "text": " okay, one of the natural things for me after I graduate would be to work for one of those" }, { "start": 228.48000000000002, "end": 235.36, "text": " companies that are developing hardware for AI. But, you know, the story goes well beyond just" }, { "start": 235.36, "end": 241.28, "text": " hardware, you know, people right now understand that they need to develop smart systems, smart" }, { "start": 241.28, "end": 248.16, "text": " software, it needs to be a full stack view, just going beyond just like you said, look, the GPU" }, { "start": 248.16, "end": 256.24, "text": " that goes for the TPU or the underlying processor, whatnot. So the landscape seemed to be very" }, { "start": 256.24, "end": 263.28, "text": " exciting. It's rapidly evolving, there are a lot of solutions out there. And I thought that," }, { "start": 264.16, "end": 270.4, "text": " you know, as a hobby, what I did, it's just started as a hobby, you know, just observing what people" }, { "start": 270.4, "end": 275.28, "text": " are doing, trying to look at the competitive landscape and try to see if there's anything" }, { "start": 275.28, "end": 283.12, "text": " that could be interesting for someone that wants to know more about that world, either be it a" }, { "start": 283.12, "end": 289.59999999999997, "text": " research scientist that wants to know a little bit of what's going on under the hood, or people" }, { "start": 289.59999999999997, "end": 294.4, "text": " that are hardware engineers that wants to know a little bit more about, you know, the high level" }, { "start": 294.4, "end": 300.15999999999997, "text": " motivation for why people are doing AI accelerator. So I was hoping that I will be able to create" }, { "start": 300.16, "end": 305.92, "text": " something like that, that will be able to contribute to several types of people, I would say." }, { "start": 306.88000000000005, "end": 314.96000000000004, "text": " Very cool. 
So my question is a little bit, why, what does it even mean to build hardware for" }, { "start": 314.96000000000004, "end": 319.76000000000005, "text": " something like obviously, you know, we have computers, and I can, you know, I can do pretty" }, { "start": 319.76000000000005, "end": 328, "text": " much anything with a computer. What does it mean to, to, to say, make hardware for AI, you have" }, { "start": 328, "end": 335.36, "text": " this term of user to hardware expressiveness? What does that mean? So I would say it's, it's," }, { "start": 336.16, "end": 341.12, "text": " I would, as I said, there is, it's more of my term, in lack of a better term, I would say that" }, { "start": 341.12, "end": 347.36, "text": " probably people have several either academic or industry more accurate ways to depict this is that" }, { "start": 347.36, "end": 353.2, "text": " the user knows on the high level what they're doing, what they want to do, what type of models" }, { "start": 353.2, "end": 360, "text": " they want to explore, and how they translate it to high level code, you know, like cafe," }, { "start": 360, "end": 365.03999999999996, "text": " pytorch, TensorFlow, and all that. So the research scientist has the big model that they want to" }, { "start": 365.03999999999996, "end": 372.15999999999997, "text": " explore. But under the hood, there is what the hardware understand it what it can execute." }, { "start": 372.88, "end": 380, "text": " So if you look at it, you can see that there is a lot of layers that you need to go to get you need" }, { "start": 380, "end": 384.88, "text": " to lower from the high level code all the way to the, you know, to the bits that are basically" }, { "start": 384.88, "end": 391.6, "text": " executing on on on on, you know, that the electrons that are flowing, and it gets really," }, { "start": 391.6, "end": 399.28, "text": " really complex, because you need to have a full stack view, and really know, whatever crazy idea" }, { "start": 399.28, "end": 407.6, "text": " that the user is doing, and whatever and, and the last low level detail of everything that your" }, { "start": 407.6, "end": 414.16, "text": " hardware basically can can execute, you know, 80 degrees of parallelism, how it accesses the memory," }, { "start": 414.8, "end": 422.16, "text": " be it DRAM, high bandwidth memories, HBMs, there's a there's a lot of things that are going on how" }, { "start": 422.16, "end": 430.32000000000005, "text": " you're what are your precisions? Are you doing FP 32? Are you doing FP 16, BF 16? Are you doing" }, { "start": 430.32, "end": 438.15999999999997, "text": " integers? What is your bit width? And and there are a lot of details that someone needs to understand," }, { "start": 438.15999999999997, "end": 444.96, "text": " in order to build a full flesh, fully capable compiler stack, that you can basically write" }, { "start": 444.96, "end": 450.15999999999997, "text": " whatever you can think of, and it'll out of the box be not only working, because as you said," }, { "start": 450.88, "end": 455.6, "text": " you can basically compute everything right there. 
I don't know church during thesis," }, { "start": 455.6, "end": 461.84000000000003, "text": " a computer is a computer, but there is a difference between just solving the problem mathematically," }, { "start": 461.84000000000003, "end": 468.48, "text": " or accurately, and actually doing it performant in a performant fashion, because you can either" }, { "start": 468.48, "end": 474.32000000000005, "text": " solve a single problem, and it will take a month to run, or you can solve the same problem, and it" }, { "start": 474.32000000000005, "end": 479.6, "text": " will be more efficient, it can take, I don't know, like a few hours or even a few minutes. So" }, { "start": 479.6, "end": 484.72, "text": " that's that's the idea of user to hardware expressiveness, you know, the user can think" }, { "start": 484.72, "end": 489.52000000000004, "text": " of whatever, and the hardware can execute whatever and you need to bridge that cemented gap" }, { "start": 489.52000000000004, "end": 498.48, "text": " between them. And, and, okay, let's say we agree that we need to build hardware for AI, you go" }, { "start": 498.48, "end": 503.76000000000005, "text": " through a little bit of the history of that, I guess, starting with what everyone knows," }, { "start": 503.76, "end": 512.3199999999999, "text": " which is kind of Moore's law, that processors or number of transistors increased over time in an" }, { "start": 512.3199999999999, "end": 518.8, "text": " exponential fashion, but then you go into some into some less known laws like Dennert scaling," }, { "start": 519.52, "end": 525.52, "text": " all of this leading up to saying, you know, we've reached the end of clock frequency," }, { "start": 525.52, "end": 532.96, "text": " I think this is also known. What's also known is probably that the, we have a" }, { "start": 532.96, "end": 539.12, "text": " we have replaced essentially speed with number of cores, and we're going to parallelism. Now" }, { "start": 539.12, "end": 547.6, "text": " you draw an excellent comparison to GPUs here, GPUs being the current super many core architectures," }, { "start": 547.6, "end": 558.24, "text": " or not current, but in the history, they had more cores. What makes GPUs so attractive for AI in the" }, { "start": 558.24, "end": 565.12, "text": " first place? Yes. So this, I think this goes back a little bit to more of a D intro. You know," }, { "start": 565.12, "end": 569.84, "text": " you're just saying hardware and you're saying computer, but the fact that you can compute" }, { "start": 569.84, "end": 576.8, "text": " things at certain speeds have been key enablers. I go in the introduction, I'm talking about Alex" }, { "start": 576.8, "end": 582.88, "text": " net, right? You see in the Alex net paper, they say in the abstract, we were able to develop a" }, { "start": 582.88, "end": 589.76, "text": " GPU implementation and efficient GPU implementation that allows it that allowed us to number to crunch" }, { "start": 589.76, "end": 597.84, "text": " a lot of data and train a lot of data within a reasonable timeframe and get a super fancy model" }, { "start": 597.84, "end": 603.4399999999999, "text": " that can run efficiently and within reasonable times. And that basically was a key enabler." }, { "start": 603.4399999999999, "end": 610.08, "text": " What I didn't even mention is that for example, for natural language processing, the same story" }, { "start": 610.08, "end": 616.5600000000001, "text": " happened. 
If you look at the attention is all you need paper, they were able to say in the abstract," }, { "start": 616.5600000000001, "end": 622.48, "text": " we were able to train it on GPU for three and a half days, which was order of magnitude pastored" }, { "start": 622.48, "end": 629.0400000000001, "text": " and previous solution, you know, all those LSTNs and RNNs that have this inherent sequential part" }, { "start": 629.0400000000001, "end": 636.08, "text": " that we were able to devise a new architecture that is able to run on hardware. And just by being" }, { "start": 636.08, "end": 644, "text": " able to harness the power of GPUs, we were able to run and it basically unlocked our capabilities." }, { "start": 644, "end": 650.88, "text": " So the ability of hardware has been the role of hardware has been very significant and basically" }, { "start": 650.88, "end": 657.9200000000001, "text": " being the key enabler of AI capabilities. And that's why I think this series is more is very" }, { "start": 657.9200000000001, "end": 663.0400000000001, "text": " important. Going back to our discussion, you know, trying to talk about frequency, it's good to know" }, { "start": 663.04, "end": 669.1999999999999, "text": " about the history because when you're talking about AI accelerators is essentially why do we" }, { "start": 669.1999999999999, "end": 676.9599999999999, "text": " need accelerators? Why and why now? So as you can see, as we said at the beginning, there was" }, { "start": 676.9599999999999, "end": 684.3199999999999, "text": " frequency, we were able to get our circuitry going faster. You can say that, okay, we have we" }, { "start": 684.3199999999999, "end": 690.88, "text": " back at the 90s, you can have like this 486 going at 33 megahertz all the way to like 100 megahertz." }, { "start": 690.88, "end": 695.68, "text": " Then you came the Pentiums and people will say, yeah, I have like, I don't know, like 300 megahertz" }, { "start": 695.68, "end": 702.24, "text": " and then you go to like a gigahertz. And then ultimately going to the Pentium four with like" }, { "start": 702.24, "end": 708, "text": " three or four gigahertz back at the time, you know, during that time, people understood that" }, { "start": 708.72, "end": 714.48, "text": " because you're not able to do the NART scaling, you know, that the NART scaling, what I mentioned" }, { "start": 714.48, "end": 719.36, "text": " there is the actual real problem, you know, going beyond Moore's law, the NART scaling says that" }, { "start": 719.36, "end": 725.2, "text": " it's not only that you can have smaller transistors, they can also go faster and you can cram more" }, { "start": 725.2, "end": 732, "text": " transistors and you can have like, if your dimension scales by K, you can have K to the" }, { "start": 732, "end": 739.04, "text": " squared number of transistors, each one will be K faster. And the key enabler there was that you were" }, { "start": 739.04, "end": 748.5600000000001, "text": " able to, you know, to lower the voltage by that factor. The thing is back at the 2000, the voltage" }, { "start": 748.56, "end": 756, "text": " stopped scaling at the rate that you were able to increase the frequency. So you can get faster" }, { "start": 756, "end": 761.1199999999999, "text": " circuitry, but your power density essentially increases and that's where you can see that the" }, { "start": 761.1199999999999, "end": 766.3199999999999, "text": " graph that increases and then people say, okay, we cannot have faster transistors. 
So that was" }, { "start": 766.3199999999999, "end": 770.56, "text": " the first stage in the evolution, cannot have faster transistors. You can see like the green" }, { "start": 771.1999999999999, "end": 778.3199999999999, "text": " dot, the dot is basically plateauing and say, we cannot, so the implication is that we cannot" }, { "start": 778.32, "end": 787.12, "text": " have a single task going faster, but as Moore's law saying, we can still have more transistors." }, { "start": 787.6800000000001, "end": 793.5200000000001, "text": " They just cannot go faster. So instead of having one task going fast, we're going to have multiple" }, { "start": 793.5200000000001, "end": 800.48, "text": " tasks going at the same speed. So instead of, you know, increasing the frequency twice, we'll have" }, { "start": 800.48, "end": 805.2, "text": " twice the number of cores and depending on how we can map the problem, how efficiently we can" }, { "start": 805.2, "end": 813.44, "text": " map the problem, we'll be able to still get 2X by essentially paralyzing. And that was phase two," }, { "start": 813.44, "end": 819.84, "text": " which is essentially the multi-core era. So you're able to cram more transistors. They'll be able to" }, { "start": 819.84, "end": 827.6, "text": " getting on the same silicon wafer or the same silicon die. You'll be able to get twice as many" }, { "start": 827.6, "end": 834.72, "text": " cores. And as you can see here, the green line, especially for GPUs as the main beneficent," }, { "start": 834.72, "end": 842.24, "text": " you're saying, let's develop these instead of having this design, which is the CPU, which has" }, { "start": 842.24, "end": 847.12, "text": " all sorts of very sophisticated mechanisms like stuff that there are branch predictors," }, { "start": 847.9200000000001, "end": 854.4, "text": " prefetchers, and all these speculative things that are saying we can execute an instruction," }, { "start": 854.4, "end": 859.0400000000001, "text": " but this will take too long. We can do out of order execution, but doing all sorts of tricks to make" }, { "start": 859.04, "end": 867.52, "text": " a single stream of instruction go fast. Instead of it, let's do, let's re-devise our software a" }, { "start": 867.52, "end": 872.48, "text": " little bit and break these, the stream of instruction to several independent stream of" }, { "start": 872.48, "end": 877.92, "text": " instructions that are called threads. And we're going to be able to run them hopefully in a" }, { "start": 877.92, "end": 884.24, "text": " perfectly parallel fashion on different, what we call cores and each core will execute its own" }, { "start": 884.24, "end": 890.96, "text": " stream of instructions. So essentially we'll break up one task into multiple subtasks and by that," }, { "start": 890.96, "end": 898.64, "text": " we'll be able to still get the same degree of speed up. If we'll be able to get it to be able" }, { "start": 898.64, "end": 905.12, "text": " to get like 2X tasks, we'll be able to get a speed up of 2X. Obviously there's a lot of difficulties," }, { "start": 905.12, "end": 911.84, "text": " but that's the main idea. So we'll be able to, so eventually if we have enough parallelism," }, { "start": 911.84, "end": 917.84, "text": " we'll be able to get to hundreds or even thousands of cores and we'll be able to get" }, { "start": 917.84, "end": 924.24, "text": " hundreds of thousands of speed up compared to our regular task. 
But at the mid, I would say the" }, { "start": 924.24, "end": 931.36, "text": " beginning of the 2000, around 2010 and 2011, there were two different works that highlighted" }, { "start": 931.36, "end": 938.24, "text": " the same phenomenon as meaning that because the NART scaling, again, we're not able to scale the" }, { "start": 938.24, "end": 945.2, "text": " voltage, just having transistors powered, not even doing computation, it doesn't matter even at what" }, { "start": 945.2, "end": 952.64, "text": " speed, just having them powered on will increase our power density. Meaning Moore's lie is still" }, { "start": 952.64, "end": 958.88, "text": " working, we can still shrink down the transistors, we can still cram more and more cores into the same" }, { "start": 959.6, "end": 965.52, "text": " silicon square, square millimeter, you know, in the same silicon area, we'll be able to" }, { "start": 965.52, "end": 974.4, "text": " get more transistors to get more cores, but the power at that time will not remain constant." }, { "start": 974.4, "end": 981.76, "text": " So the power also increases. So that will be unsustainable. And this created the phenomenon" }, { "start": 981.76, "end": 986.96, "text": " that these works are talking about that is called either the utilization wall or dark silicon." }, { "start": 986.96, "end": 987.76, "text": " Yeah, what's that?" }, { "start": 987.76, "end": 992.64, "text": " That it means that, you know, you can have, let's say a million, it doesn't matter that you're going" }, { "start": 992.64, "end": 999.52, "text": " to have micro transistors, it means that not all cores can be turned on at the same time." }, { "start": 999.52, "end": 1004.3199999999999, "text": " Meaning for the purpose of your computation, you're going to remain under a fixed budget," }, { "start": 1005.04, "end": 1010.48, "text": " just due to power constraints. So basically what it means that you're not going to be able to get" }, { "start": 1010.48, "end": 1016.72, "text": " more transistors. And at this point, the power constraints are mainly due to us not being able" }, { "start": 1016.72, "end": 1024.32, "text": " to cool down a thing that consumes more power. What are the constraints there?" }, { "start": 1024.88, "end": 1032.08, "text": " So the constraints is that the power density, the watt per millimeter square just starts" }, { "start": 1032.08, "end": 1037.92, "text": " growing exponentially as you start exponentially cramming more transistors, because the power per" }, { "start": 1037.92, "end": 1043.6000000000001, "text": " transistor stops scaling, it remains constant. So you'll have 1000X transistors, you'll have" }, { "start": 1043.6, "end": 1050.08, "text": " 1000X to power. And that creates a problem that will be unsus... And that will require cooling" }, { "start": 1050.9599999999998, "end": 1059.6, "text": " that either does not exist or is super expensive to manufacture. So, and that created a problem" }, { "start": 1059.6, "end": 1063.4399999999998, "text": " that essentially says that, okay, we're not going to be able to get more transistors." }, { "start": 1064.32, "end": 1069.9199999999998, "text": " So if you're not going to be able to get more transistors, then came the notion of building" }, { "start": 1069.92, "end": 1076.5600000000002, "text": " accelerators. 
Meaning that instead of having a single piece of silicon solving a wide range" }, { "start": 1076.5600000000002, "end": 1084.64, "text": " of problems, you're going to be focused on a little bit of a narrow scope of certain applications." }, { "start": 1084.64, "end": 1089.3600000000001, "text": " And those applications needs to have some properties. So, and that's the idea. If we're" }, { "start": 1089.3600000000001, "end": 1096.96, "text": " not going to get more transistors, we're going to be able to create smart, purpose-built circuitry" }, { "start": 1096.96, "end": 1102.88, "text": " with purpose-built compute and memory and communication that is basically targeting" }, { "start": 1102.88, "end": 1111.3600000000001, "text": " specific problems. You can see an example like video encoders, Bitcoin miners, and AI. Yep." }, { "start": 1113.1200000000001, "end": 1119.76, "text": " So you can see there, if you look at more general purpose processors, if you can look at power" }, { "start": 1119.76, "end": 1126.48, "text": " efficiency or even performance, you can see that the general purpose processor is fairly" }, { "start": 1126.48, "end": 1135.76, "text": " does fairly well for a wide application range. But those accelerators are, for example, for FFT" }, { "start": 1136.64, "end": 1146.8, "text": " or graphs or matrix multiply, they're really good at a certain task, but they do really poorly" }, { "start": 1146.8, "end": 1154.32, "text": " on something else. For example, you cannot run your operating system or it wouldn't be recommended" }, { "start": 1154.32, "end": 1164.1599999999999, "text": " for you to run your operating system on an AI accelerator. Well, wait, just wait. The community" }, { "start": 1164.1599999999999, "end": 1169.84, "text": " is going to figure it out. You just need to scale enough. But I guess I think from this point on," }, { "start": 1169.84, "end": 1177.4399999999998, "text": " it's sort of common, let's say common knowledge again, that GPUs were purpose-built for graphics," }, { "start": 1177.4399999999998, "end": 1183.52, "text": " but inherently that meant kind of matrix multiply things together. And then on the other hand," }, { "start": 1183.52, "end": 1194.4, "text": " deep neural networks, just by happenstance, by being ConvNet or feed forward networks, also using" }, { "start": 1194.4, "end": 1202.32, "text": " a lot of matrix multiplies. And I guess that was just how the universe works. These things came" }, { "start": 1202.32, "end": 1211.12, "text": " together. And that was just a really neat fit. And the point though is, the GPUs weren't made for AI" }, { "start": 1211.12, "end": 1222, "text": " in the first place, even though it seems to be a really good application for them. GPUs are good" }, { "start": 1222, "end": 1232.3999999999999, "text": " for AI, but what can be even better? In which places are GPUs still suboptimal for the AI things" }, { "start": 1232.3999999999999, "end": 1239.4399999999998, "text": " that we are doing? Well, it really depends on your application's demands and the application" }, { "start": 1239.44, "end": 1245.6000000000001, "text": " scopes. For example, you can see in the map that you're showing here, you can see that GPUs" }, { "start": 1246.24, "end": 1252.72, "text": " are really good at flexibility and they're really good in having matrix multiply. As you can say," }, { "start": 1252.72, "end": 1260, "text": " linear algebra is something that GPUs do pretty well. 
And if you can map a lot of these problems," }, { "start": 1260.72, "end": 1268.8, "text": " like a lot of cons and recommender models and all that, you can map them into a GPU and do dense" }, { "start": 1268.8, "end": 1278.3999999999999, "text": " and to do dense linear algebra pretty well. That will give you a fairly good boost. But if you" }, { "start": 1278.3999999999999, "end": 1284.32, "text": " would devise a certain, you know, if you would go all the way to the efficiency and doing something" }, { "start": 1284.32, "end": 1291.52, "text": " really, really specialized, you'll be able to say, let's develop an accelerator that just does" }, { "start": 1291.52, "end": 1297.6, "text": " ResNet, for example. That'll be really, really contrived to collapse to a certain type of network." }, { "start": 1297.6, "end": 1302.7199999999998, "text": " Theoretically, everything will be hardwired. Even the weights and everything will be perfectly," }, { "start": 1302.7199999999998, "end": 1309.52, "text": " perfectly fit for that. But it would not be able to execute anything else. So if you would be," }, { "start": 1309.52, "end": 1316.9599999999998, "text": " yeah, it'll be very, very bad in doing other more general purpose AI. So that comes to question," }, { "start": 1316.9599999999998, "end": 1322.32, "text": " you know, what, how can you trade flexibility for efficiency? For example, one of the things that" }, { "start": 1322.32, "end": 1331.84, "text": " some of the companies are, that are not GPU based companies are tackling are these big," }, { "start": 1331.84, "end": 1339.04, "text": " these large language models, for example, those GPT-3s and all that. And GPUs, if you look at the" }, { "start": 1339.04, "end": 1347.6, "text": " A100s, you can see that GPUs from the, I would say that it was a conscious engineering decision" }, { "start": 1347.6, "end": 1354.56, "text": " for Nvidia to go for high bandwidth models, high bandwidth memories, I'm sorry, that are basically" }, { "start": 1354.56, "end": 1360.48, "text": " fast memories, but they're limited in capacity. Alternatively, you can go for something else." }, { "start": 1360.48, "end": 1367.12, "text": " You can go for a slower DRAM based memory. So HBMs are fast, but they're limited in capacity." }, { "start": 1367.12, "end": 1374.8799999999999, "text": " And DRAMs are huge and have like terabytes versus, you know, dozens of gigabytes. And if your model" }, { "start": 1374.88, "end": 1382.4, "text": " requires terabytes of data, you would need hundreds or even thousands of GPUs just to be able to have" }, { "start": 1382.4, "end": 1389.0400000000002, "text": " the same, to do everything in memory, you know, to have the same, to map the memory space of your" }, { "start": 1389.0400000000002, "end": 1396, "text": " model. And that would be something that, you know, I'm not saying that GPUs can do, but it would" }, { "start": 1396, "end": 1404.24, "text": " require a lot of GPUs turned on and a lot of power and a lot of communication going on. And so," }, { "start": 1404.24, "end": 1411.76, "text": " you know, it would require a lot of communication going from different GPU systems to be able to" }, { "start": 1411.76, "end": 1418.64, "text": " train a single, you know, like hundreds or hundreds of billions of parameter model. So." }, { "start": 1419.1200000000001, "end": 1429.28, "text": " I mean, that's exactly what we see, right? Okay. 
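The capacity point is easy to make concrete; a rough sketch (the parameter count, precision, and per-GPU HBM size are assumptions for illustration):

```python
# Why terabyte-scale models spill across many GPUs: weights alone vs. HBM.
params = 175e9          # GPT-3-scale parameter count, illustrative
bytes_per_param = 2     # fp16/bf16
hbm_per_gpu_gb = 80     # high-end GPU HBM capacity, illustrative

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: {weights_gb:.0f} GB -> needs at least "
      f"{weights_gb / hbm_per_gpu_gb:.1f} GPUs just to hold them in HBM")
```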
So yeah, I guess we can just dive into what kind" }, { "start": 1429.28, "end": 1437.52, "text": " of hardware that goes beyond GPUs exists. That is to say, in part three, okay, in part three of your" }, { "start": 1437.52, "end": 1445.52, "text": " series, you go into a little bit of the architectural, sorry, foundations, and you describe kind of what," }, { "start": 1445.52, "end": 1452.96, "text": " what exists, you know, what instruction sets are, what kind of models exist, for example," }, { "start": 1452.96, "end": 1461.76, "text": " configurable processors. You make sort of a good, very extensive background overview, which we're" }, { "start": 1461.76, "end": 1467.3600000000001, "text": " going to skip right now, just due to time. I just found this very, very funny. I guess that's why" }, { "start": 1467.3600000000001, "end": 1474.48, "text": " you posted it here. So there is, this is a single instruction that I can use on an Intel processor" }, { "start": 1474.48, "end": 1481.04, "text": " that computes approximations to the reciprocal square root with less than two to the negative" }, { "start": 1481.04, "end": 1486.6399999999999, "text": " 28 relative error of the packed double-precision floating point values from these things" }, { "start": 1487.2, "end": 1494.24, "text": " and stores the result in that thing with write mask k1. That is excellent. Like I, I, I need," }, { "start": 1494.24, "end": 1500.48, "text": " I need that instruction every day. Yeah. So, you know, depending on the way that, that this is" }, { "start": 1500.48, "end": 1507.6, "text": " basically showing how you can devise, when you look at a processor, you know, the traditional," }, { "start": 1507.6, "end": 1512.56, "text": " the traditional model of processor is called a von Neumann model. It's, you're saying that you're," }, { "start": 1512.56, "end": 1518.8, "text": " you have a processor, your processor accesses the memory, your processor fetches an instruction from" }, { "start": 1518.8, "end": 1523.1999999999998, "text": " the memory. It decodes the instruction and says, Oh yeah, we should do this and that. So this" }, { "start": 1523.1999999999998, "end": 1529.1999999999998, "text": " instruction accesses the memory and loads, let's fetch the next instruction and all that. So the," }, { "start": 1529.1999999999998, "end": 1534.8799999999999, "text": " the instructions are basically built from an ISA, which is the instruction set architecture," }, { "start": 1534.88, "end": 1540.8000000000002, "text": " which you can think about as the vocabulary that the processor" }, { "start": 1540.8000000000002, "end": 1548.0800000000002, "text": " supports. Some processors support X86, some processors support ARM. And, which is," }, { "start": 1548.0800000000002, "end": 1554.88, "text": " I would say like, X86 is an example of what we call complex instruction set computing, or CISC," }, { "start": 1554.88, "end": 1561.7600000000002, "text": " and ARM is RISC. So there was a trade-off between, you know, how much you're going to be" }, { "start": 1561.76, "end": 1569.12, "text": " able to, to have a single instruction, you know, compact nicely, which will take less memory.
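For flavor, a rough emulation of what such an approximate reciprocal-square-root instruction buys you — a cheap estimate refined by Newton-Raphson steps (the initial-estimate error below is an assumption; real hardware produces the estimate in a single instruction):

```python
# Approximate rsqrt: start from a low-precision estimate, refine with Newton.
def rsqrt(x, steps=2):
    y = x ** -0.5 * 1.001           # stand-in for the hardware's rough estimate
    for _ in range(steps):          # each step roughly doubles the correct bits
        y = y * (1.5 - 0.5 * x * y * y)
    return y

print(rsqrt(2.0), 2.0 ** -0.5)      # converges toward ~0.7071067811865475
```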
So" }, { "start": 1569.12, "end": 1575.76, "text": " you're going to have a large vocabulary to express more complex computation versus the risk, the" }, { "start": 1575.76, "end": 1581.04, "text": " reduced instruction set computer like arm that it's going to be basically be translated to a lot of," }, { "start": 1581.04, "end": 1587.92, "text": " lot of micro instructions that are B that will be simpler. So that was an ongoing discussion, but" }, { "start": 1587.92, "end": 1594.72, "text": " you know, this, you know, this gives a background of how basically a processor works. So there are" }, { "start": 1594.72, "end": 1601.1200000000001, "text": " a lot of concepts that I showed at the, at the part three that were basically used as the background" }, { "start": 1601.1200000000001, "end": 1606.24, "text": " for part four, you know, historically I wrote part four as the combination of part three and part four," }, { "start": 1606.24, "end": 1609.76, "text": " but someone said, but you know, a lot of people just advised me that this is just going to be" }, { "start": 1610.3200000000002, "end": 1616.64, "text": " super long. So I needed to break it down. So yeah. So if, if anyone, if anyone wants," }, { "start": 1616.64, "end": 1622.5600000000002, "text": " wants the background, this article is, is really nice on sort of the foundations of all of this." }, { "start": 1622.5600000000002, "end": 1627.6000000000001, "text": " If you, if you want that, and I think people can relate a little bit because in NLP, you have this" }, { "start": 1627.6000000000001, "end": 1632.64, "text": " whole tokenization problem of, you know, how big do you make your vocabulary? And if you make it too" }, { "start": 1632.64, "end": 1638.72, "text": " small, you're going to have to break down stuff into smaller pieces and so on. Just, I think it's," }, { "start": 1639.44, "end": 1645.3600000000001, "text": " it's approximately the same concept right here. You're trading essentially memory for," }, { "start": 1645.36, "end": 1651.12, "text": " for, for, for speed. And, and also the, the thing is that you need a difficult," }, { "start": 1651.6799999999998, "end": 1657.76, "text": " you need a very smart compiler to look at your code and say, okay, these sequence of," }, { "start": 1658.32, "end": 1662.9599999999998, "text": " for example, if you're writing in C, so these sequence of instructions are going to be translated" }, { "start": 1662.9599999999998, "end": 1669.04, "text": " all to that single instruction. And that way you'll have a smart and very, very complex compiler" }, { "start": 1669.04, "end": 1674.9599999999998, "text": " that will be able to map your sequence of operation into that. Sometimes it works and sometimes you're" }, { "start": 1674.96, "end": 1679.52, "text": " just going to have like these ghost instructions that no one's really going to use. So," }, { "start": 1679.52, "end": 1686.64, "text": " So here in part four, I think that that is, it is the longest part. And you dive into the various" }, { "start": 1686.64, "end": 1696.24, "text": " companies, startups that exist today, building AI, AI accelerators or AI hardware in any form." }, { "start": 1696.24, "end": 1701.3600000000001, "text": " And it is, we have to say that you are associated with one of those companies. We're not going to" }, { "start": 1701.36, "end": 1709.4399999999998, "text": " say which one though, obviously with the best one. 
But, but I felt, I felt reading the article" }, { "start": 1709.4399999999998, "end": 1715.84, "text": " that there was no, there was no, I didn't feel any favoritism. So I was, I was pretty happy to see" }, { "start": 1715.84, "end": 1722.8, "text": " that. Now we have a lot of them even discussed in your articles. Do you maybe have some that you" }, { "start": 1722.8, "end": 1729.12, "text": " want to, you know, want to highlight in particular to just maybe show the diversity of the field and," }, { "start": 1729.12, "end": 1735.9199999999998, "text": " and where it's going? Yes. So while there are a lot of solutions out there, I would say most of them" }, { "start": 1736.4799999999998, "end": 1743.28, "text": " stem from a handful of, of, of a few architectural ideas that were highlighted in part three." }, { "start": 1743.9199999999998, "end": 1751.52, "text": " So I would say that originally there's the GPU with CUDA that has dense linear algebra," }, { "start": 1751.52, "end": 1758.2399999999998, "text": " that basically has this model, this execution model, single instruction, multiple thread." }, { "start": 1758.24, "end": 1764.56, "text": " It's the idea of the classical von Neumann model. You have instructions, they're translated to" }, { "start": 1764.56, "end": 1770.96, "text": " processor level ISA, the instruction set architecture that Nvidia GPUs understand. And" }, { "start": 1770.96, "end": 1778.08, "text": " it's being parallelized and it, and you know, it has all these, you know, systolic-like execution." }, { "start": 1778.08, "end": 1782.88, "text": " And a systolic array is, is an idea that dates back to the 1970s, where you're going to have a" }, { "start": 1782.88, "end": 1788.48, "text": " single piece of hardware that is really good in doing matrix multiply, because the data," }, { "start": 1788.48, "end": 1794, "text": " when you're doing matrix multiply, the data from the A and the B matrix is basically flowing like" }, { "start": 1794, "end": 1801.3600000000001, "text": " that. And if you have a very smart circuitry like that, which is in a sense a smart accelerator-" }, { "start": 1801.3600000000001, "end": 1807.7600000000002, "text": " like engine just for matrix multiply, it'll be able to carry out matrix multiply really efficiently." }, { "start": 1807.76, "end": 1816.4, "text": " So, yeah, so the GPUs have that. And you can say that there are some other companies that I would" }, { "start": 1816.4, "end": 1824.8799999999999, "text": " say are in the camp of a combination of what we call a VLIW, a very long instruction word," }, { "start": 1824.8799999999999, "end": 1832.32, "text": " where you're going to have a heterogeneous array of compute machines, like a memory compute machine," }, { "start": 1832.32, "end": 1839.36, "text": " a vector compute machine, a matrix multiply, and maybe, you know, some sort of a linear compute" }, { "start": 1839.36, "end": 1848.08, "text": " machine for your ReLU or tanh operators and whatnot. Then you have a static compiler that" }, { "start": 1848.08, "end": 1852.96, "text": " basically creates this huge instruction that says, okay, this data goes to the vector unit," }, { "start": 1852.96, "end": 1858.8, "text": " this data goes to the matrix multiply, and this data goes to the vector unit.
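A toy picture of such a statically scheduled very long instruction word (the slots and mnemonics are invented for illustration):

```python
# One VLIW word per cycle, with a slot per heterogeneous unit; the compiler
# fills the slots statically, at compile time.
program = [
    # (vector unit,        matrix unit,        memory unit)
    ("vadd  v0, v1, v2",  "mmul m0, m1, m2",  "load  v3, [a0]"),
    ("vrelu v4, v0",      "nop",              "store m0, [a1]"),
]
for cycle, word in enumerate(program):
    print(f"cycle {cycle}: " + " | ".join(word))
```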
And you're able to," }, { "start": 1858.8, "end": 1863.6, "text": " and you know the timing of all these units, and you'll be able to have a smart compiler that" }, { "start": 1863.6, "end": 1870.08, "text": " statically creates this single word that is going to be fed to all of them. So you can have," }, { "start": 1870.8, "end": 1876.32, "text": " at compile time, a smart compiler that will be able to efficiently schedule these" }, { "start": 1878.96, "end": 1883.6, "text": " different data or operands to these machines, and they will be able to get really efficient" }, { "start": 1883.6, "end": 1890.7199999999998, "text": " execution. So for, I would say, the systolic slash VLIW camp, I would say things that are," }, { "start": 1890.7199999999998, "end": 1898.24, "text": " I would, arguably the most famous example is Google's TPU that was presented at, I would say," }, { "start": 1899.4399999999998, "end": 1908.6399999999999, "text": " mid-2017 at a conference called ISCA, the International Symposium on Computer" }, { "start": 1908.64, "end": 1915.76, "text": " Architecture, which is the biggest computer architecture conference. So they showed a model" }, { "start": 1915.76, "end": 1921.92, "text": " that is basically, the TPU is based on a big systolic array execution with a linear unit," }, { "start": 1922.64, "end": 1928, "text": " and this smart memory, and everything is being fed, and they have a smart compiler that" }, { "start": 1928, "end": 1936.3200000000002, "text": " translates AI code, that is able to execute DNNs, these deep neural nets. And that was" }, { "start": 1936.32, "end": 1945.84, "text": " the first time, arguably the most famous non-GPU AI accelerator that was presented." }, { "start": 1946.96, "end": 1954.8799999999999, "text": " So you have the Google TPU. You also have a startup that is called Groq. Some of its" }, { "start": 1954.8799999999999, "end": 1960.8, "text": " founding members were part of the Google TPU team. There were architects at Google that" }, { "start": 1960.8, "end": 1970.56, "text": " took parts of, that took some of the ideas of Google's TPU and created a more commercialized" }, { "start": 1971.52, "end": 1980.08, "text": " accelerator for deep neural nets. And also there is Habana. So I would say Google," }, { "start": 1980.08, "end": 1992.32, "text": " Groq, and Habana are, I would say, in the camp of VLIW plus systolic array accelerators." }, { "start": 1993.9199999999998, "end": 2003.12, "text": " So, do I understand this correctly? Essentially they have a chip or a board, and that has many" }, { "start": 2003.12, "end": 2008.8, "text": " different, let's say, subchips on it. One is really good at matrix multiplying. One is really good at" }, { "start": 2008.8, "end": 2015.68, "text": " doing ReLU. One is really good at whatever, softmax. So kind of all these operations that we need" }, { "start": 2015.68, "end": 2023.68, "text": " in AI, they have like specialized subchips for, and then they have a very smart essentially router" }, { "start": 2023.68, "end": 2029.9199999999998, "text": " that says, okay, you go here, you go here, you go here. So, you know, I could compute, let's say," }, { "start": 2030.48, "end": 2036.72, "text": " I could compute the last layer's ReLU at the same time, or the last batch's ReLU at the same time" }, { "start": 2036.72, "end": 2043.84, "text": " that I compute this layer's forward through a linear layer. Is that?
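A tiny functional model of the systolic-array matrix multiply mentioned for the TPU (this only mimics the wavefront arithmetic, not the actual cell-to-cell wiring; shapes are illustrative):

```python
import numpy as np

def systolic_matmul(A, B):
    # Tick t: every cell (i, j) consumes A[i, t] (flowing in from the left)
    # and B[t, j] (flowing in from the top) and accumulates their product,
    # so data moves neighbor-to-neighbor instead of being refetched.
    n, k = A.shape
    C = np.zeros((n, B.shape[1]))
    for t in range(k):
        C += np.outer(A[:, t], B[t, :])
    return C

A, B = np.random.rand(3, 4), np.random.rand(4, 2)
print(np.allclose(systolic_matmul(A, B), A @ B))  # True
```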
Yeah, this is essentially" }, { "start": 2043.84, "end": 2050.2400000000002, "text": " like you're basically pipelining it. So if you have like one thing that needs a ReLU, and then" }, { "start": 2051.44, "end": 2056.48, "text": " one thing that needs the matrix multiply for the conv operation, then it needs a ReLU, and then" }, { "start": 2056.48, "end": 2063.44, "text": " you can feed the next sample or whatnot that uses the matrix multiply while the other one is already" }, { "start": 2063.44, "end": 2068.56, "text": " doing ReLU. So you can do like sort of a pipeline execution. And by that, you're basically filling" }, { "start": 2068.56, "end": 2076.7200000000003, "text": " up your compute machines, right? And by that, you're getting better utilization, because you're" }, { "start": 2076.7200000000003, "end": 2082, "text": " using all of your hardware at a single point and everybody's happy and your architecture is" }, { "start": 2082, "end": 2085.6, "text": " perfectly balanced because your compiler is smart enough to understand the program." }, { "start": 2085.6, "end": 2093.44, "text": " Yeah. So essentially, we're saying we want the purpose built hardware like the unit that just" }, { "start": 2093.44, "end": 2100.64, "text": " does ReLU, because that's way better than having a CPU do ReLU. But in order to have the flexibility," }, { "start": 2100.64, "end": 2105.68, "text": " we have a bunch of them on a chip and then we have a router and the compiler that knows how to use" }, { "start": 2105.68, "end": 2114.7999999999997, "text": " that router and the pipelines. Okay, excellent. So, but it seems really, it seems to me now" }, { "start": 2114.8, "end": 2120.96, "text": " a little bit still in the spirit of like a GPU, of what you said, that you" }, { "start": 2120.96, "end": 2127.04, "text": " essentially have this von Neumann model, except here, there's sort of pipelining added," }, { "start": 2127.04, "end": 2132.88, "text": " there is distribution to different subunits added, right, but it's still these kind of" }, { "start": 2132.88, "end": 2139.28, "text": " instructions that are in sequence and the compiler needs to understand how to translate" }, { "start": 2139.28, "end": 2145.2000000000003, "text": " a program into that. And as I understand the other companies here, they're trying to go sort of a" }, { "start": 2146.1600000000003, "end": 2150.1600000000003, "text": " bit more out of like out of that paradigm, is that correct?" }, { "start": 2150.1600000000003, "end": 2157.92, "text": " So I would say the, the other big direction that companies are going in is the data flow direction." }, { "start": 2157.92, "end": 2165.92, "text": " So some companies are combining two elements, one is called reconfigurability. And the other one is" }, { "start": 2165.92, "end": 2172.48, "text": " called data flow. So the reconfigurable data flow, I think that Tenstorrent are doing it," }, { "start": 2172.48, "end": 2178.56, "text": " I think that SambaNova is doing it. Originally, there was a company called Wave Computing that" }, { "start": 2178.56, "end": 2185.28, "text": " did it. And there was another company called SimpleMachines" }, { "start": 2185.28, "end": 2192.32, "text": " that was doing it.
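Before the reconfigurable-dataflow idea is unpacked, the pipelining described a moment ago in miniature (the two-stage split and tensor shapes are illustrative assumptions):

```python
import numpy as np

# Two "units" overlapped: while sample i sits in the ReLU unit, sample i+1
# already occupies the matrix-multiply unit.
W = np.random.rand(8, 8)
samples = [np.random.rand(8) for _ in range(4)]

in_flight, outputs = None, []
for x in samples + [None]:                 # one extra tick drains the pipeline
    if in_flight is not None:
        outputs.append(np.maximum(in_flight, 0))   # ReLU unit this tick
    in_flight = W @ x if x is not None else None   # matmul unit this tick
print(len(outputs))  # 4 results, with both units busy on the middle ticks
```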
So the idea of reconfigurable data flow is that, first of all, if you look at a" }, { "start": 2192.32, "end": 2199.6800000000003, "text": " PyTorch or TensorFlow, Keras or Caffe program, an AI, a deep learning application," }, { "start": 2199.6800000000003, "end": 2205.2000000000003, "text": " you can see that there are different layers, and they're communicating with each other. So you have" }, { "start": 2205.2000000000003, "end": 2214.6400000000003, "text": " a known, a predetermined set of operands, and you know how the data is basically being communicated" }, { "start": 2214.6400000000003, "end": 2221.76, "text": " between different parts of your graph. So in the underlying computation, the data flow," }, { "start": 2221.76, "end": 2229.6000000000004, "text": " the underlying computation is basically the construction of a computation graph. What does that mean? Like" }, { "start": 2229.6000000000004, "end": 2236.0800000000004, "text": " you can see over there, you have your layer. And from that you have another layer that does ReLU," }, { "start": 2236.0800000000004, "end": 2242.32, "text": " and then you feed it to another conv layer with weights and so on. So you have basically something" }, { "start": 2242.32, "end": 2250.5600000000004, "text": " that is not instruction level, but basically more of the way that your data, you know, you can see" }, { "start": 2250.56, "end": 2256.72, "text": " that your data is basically flowing between different layers. So the idea is that instead of" }, { "start": 2256.72, "end": 2264.16, "text": " having that data, that program, that data flow communication graph, go, you flatten it to the" }, { "start": 2264.16, "end": 2271.2799999999997, "text": " classic von Neumann model, then you try to re-parallelize it. You can start off from this data flow model," }, { "start": 2271.2799999999997, "end": 2277.7599999999998, "text": " from this data flow graph, and you can basically statically map it via another, again, you need a" }, { "start": 2277.76, "end": 2284.32, "text": " smart compiler to do that as well. You need to map it to your existing, to a specialized hardware that" }, { "start": 2284.32, "end": 2292.1600000000003, "text": " is capable of executing data flow. Meaning you can have a compute element that does multiply in here," }, { "start": 2292.1600000000003, "end": 2297.6000000000004, "text": " and you can have another one that does add in here, and you can have, you can basically break" }, { "start": 2297.6000000000004, "end": 2304, "text": " down your dense linear algebra to compute units, and you can feed them to other compute units instead of," }, { "start": 2304, "end": 2310, "text": " you know, breaking down your computation to micro units, like saying, oh, here's an add, then oh," }, { "start": 2310, "end": 2317.6, "text": " you need to multiply and all that. So it would be more natural to look at the compute, looking at" }, { "start": 2317.6, "end": 2323.68, "text": " the computation graph as a data flow graph and map it to the hardware, and you can start from it instead" }, { "start": 2323.68, "end": 2329.12, "text": " of, you know, going back and forth, flattening it to the von Neumann and then re-parallelizing" }, { "start": 2329.12, "end": 2336.56, "text": " it to the von Neumann.
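A minimal rendering of that computation-graph view (the three-layer graph is an invented example):

```python
# Nodes are operators, edges say where data flows; a reconfigurable-dataflow
# compiler would statically pin each node to a compute element and wire
# producers to consumers.
graph = {
    "conv1": {"op": "conv", "inputs": ["input"]},
    "relu1": {"op": "relu", "inputs": ["conv1"]},
    "conv2": {"op": "conv", "inputs": ["relu1"]},
}
for name, node in graph.items():   # already listed in dependency order
    print(f"{node['op']} element <- {', '.join(node['inputs'])} -> {name}")
```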
So they're, you know, these companies' bets are that this model is more" }, { "start": 2336.56, "end": 2345.44, "text": " natural, it's more hardware friendly, and ultimately you can have, you can get a better gain because" }, { "start": 2345.44, "end": 2350.88, "text": " you're able to have a better, more complex understanding of the graph. You can look at" }, { "start": 2350.88, "end": 2355.3599999999997, "text": " different elements in your graph, you can have a smart compiler that fully understands your hardware," }, { "start": 2355.36, "end": 2360.32, "text": " it knows the underlying number of compute elements and what each compute element in your" }, { "start": 2360.32, "end": 2366.7200000000003, "text": " processor, in your accelerator is doing, and from that it will create a mapping that will essentially" }, { "start": 2366.7200000000003, "end": 2373.44, "text": " be very static and your data is just going to flow instead of you needing to manually orchestrate" }, { "start": 2373.44, "end": 2378.8, "text": " it and breaking it down to instructions. So, you know, one of the main selling points of" }, { "start": 2378.8, "end": 2388.88, "text": " the existing landscape like GPUs is that GPUs are, they have a very mature software stack and they're" }, { "start": 2388.88, "end": 2393.84, "text": " very flexible, you can program everything from that von Neumann model. If you can create" }, { "start": 2396.88, "end": 2407.44, "text": " a flexible enough architecture, you'll be able to basically handle new models because, you know," }, { "start": 2407.44, "end": 2414.56, "text": " the main challenge for you to build an accelerator company is that it takes two or three years to" }, { "start": 2414.56, "end": 2419.36, "text": " tape out a chip, meaning you need to think about your idea, you need to think about your architecture," }, { "start": 2419.92, "end": 2426, "text": " all of what you can execute, and you need to be generic enough because within two or three years," }, { "start": 2426, "end": 2431.68, "text": " it's possible that your application has completely shifted away and if you look at those," }, { "start": 2431.68, "end": 2438.56, "text": " the mapping of specialized accelerators, if you're here but your application space has moved here," }, { "start": 2438.56, "end": 2444.7999999999997, "text": " you're not going to be able to execute it efficiently. So, you need to be very open-minded," }, { "start": 2444.7999999999997, "end": 2450.64, "text": " you need to be very mindful about being flexible enough to support this. One of the main challenges" }, { "start": 2450.64, "end": 2458.24, "text": " for that is the ability to create a smart enough software stack that will be able to execute it." }, { "start": 2458.24, "end": 2465.8399999999997, "text": " So, it's not a trivial task. So, you can take the Wave Computing case as an example." }, { "start": 2466.56, "end": 2474.8799999999997, "text": " Wave Computing was a company that was really revolutionary. They were able to present a" }, { "start": 2476.16, "end": 2482.3999999999996, "text": " commercialized accelerator that does reconfigurable data flow at the beginning of 2017." }, { "start": 2482.4, "end": 2489.52, "text": " So, they had a fancy hardware with 15,000 cores running at 6.7 gigahertz with" }, { "start": 2490.56, "end": 2496.48, "text": " a lot of engineering complexity that is able to have both slow memory and fast memory and all that."
}, { "start": 2497.28, "end": 2504.88, "text": " But from what I understood that the CEO interviewed and said, okay, we were not able to" }, { "start": 2504.88, "end": 2512.8, "text": " succeed in it because it was so complex that going from the basic cases where we were able to showcase" }, { "start": 2512.8, "end": 2519.12, "text": " a few kernels, trying to generalize that to more complex and real-world application, we found that" }, { "start": 2519.12, "end": 2525.52, "text": " our hardware software stack had to solve intractable problems and that would become" }, { "start": 2526.56, "end": 2530.88, "text": " unreasonable. So, I would say that their problem was that they were" }, { "start": 2530.88, "end": 2535.6800000000003, "text": " way, way ahead of the curve. People were just exploring these problems and they were not" }, { "start": 2536.4, "end": 2543.52, "text": " able to estimate those difficulties. They were pioneers, but ultimately, it didn't pan out" }, { "start": 2543.84, "end": 2548.1600000000003, "text": " so great for them because eventually they filed for bankruptcy." }, { "start": 2549.44, "end": 2557.44, "text": " There's also this concept of in-memory compute or near-memory compute. What does that mean?" }, { "start": 2557.44, "end": 2565.84, "text": " So, there are several notions of how close the compute and your memory should be." }, { "start": 2566.48, "end": 2573.76, "text": " One form of near-memory compute is saying that you have your memory model and from that you're" }, { "start": 2573.76, "end": 2580.2400000000002, "text": " loading it to what we call a software control scratchpad memory. So, you have small fast" }, { "start": 2580.24, "end": 2587.6, "text": " memories. You can think of it as a processor cache, but they're software control. Traditionally," }, { "start": 2587.6, "end": 2594.64, "text": " a processor cache like in the Fonoymon model is basically trying, has a heuristic of saving" }, { "start": 2594.64, "end": 2604, "text": " the most recent accesses just because this is the hot data. A software-defined scratchpad memory is" }, { "start": 2604, "end": 2609.12, "text": " something that is more compiler-controlled that you know how you're going to be able to access." }, { "start": 2609.12, "end": 2619.6, "text": " One of the guiding principles of devising an accelerator is that you're basically able to" }, { "start": 2619.6, "end": 2624.32, "text": " anticipate how your memory and data accesses are going to be like. You're going to have a" }, { "start": 2625.12, "end": 2633.6, "text": " handful of basic, very simple, very simple, very simple, very simple, very simple, very simple" }, { "start": 2633.6, "end": 2638.4, "text": " basic computational structures that you're going to iterate over a lot of data and it's going to" }, { "start": 2638.4, "end": 2643.2799999999997, "text": " be really recurring. That's one of the things that enable you to develop an accelerator in the first" }, { "start": 2643.2799999999997, "end": 2651.44, "text": " place. So, a scratchpad memory is a very small, a fairly small and fast memory. It can be kilobytes," }, { "start": 2651.44, "end": 2661.6, "text": " like a megabyte of data that is really close and it sits within the same piece of, not even the" }, { "start": 2661.6, "end": 2667.12, "text": " piece of silicon, but within the same core within that piece of silicon and you'll be able to" }, { "start": 2667.12, "end": 2674.16, "text": " communicate that data fast. It will take like one or two clock cycles. 
Another approach would be" }, { "start": 2674.96, "end": 2683.92, "text": " a processor and memory approach. That's when the processing element sits really close to the actual" }, { "start": 2683.92, "end": 2689.6, "text": " memory model. If you're going to manufacture something like a DRAM or something that is called" }, { "start": 2689.6, "end": 2695.44, "text": " memristors, which are memory-based resistors, you're going to be able to manufacture a" }, { "start": 2696.16, "end": 2706.16, "text": " memory module that is going to have logic elements inside of it. You can see of those examples like" }, { "start": 2706.16, "end": 2711.52, "text": " Mythic or one of those companies that are developing what we call the processor in memory" }, { "start": 2711.52, "end": 2721.12, "text": " is the idea that you can look at deep learning computation and you can look at the dot product" }, { "start": 2721.12, "end": 2727.7599999999998, "text": " and from that you can do analog computation and that will be fairly, fairly complex. But the idea" }, { "start": 2727.7599999999998, "end": 2734.48, "text": " is that you don't really need to fetch back and forth data from the memory because it's all within" }, { "start": 2734.48, "end": 2742.4, "text": " this special circuitry that sits within your memory module and you're saving a lot of that energy" }, { "start": 2742.4, "end": 2750.88, "text": " going back and forth from the memory chip and into a different chip, which is the compute" }, { "start": 2750.88, "end": 2758.72, "text": " memory, the compute processing element. It's essentially like having a lot of," }, { "start": 2758.72, "end": 2767.12, "text": " a lot of cores that we also have lots and lots of registers at those cores, but the registers" }, { "start": 2767.12, "end": 2775.68, "text": " aren't just for temporary data, but they are actually the memory. In a sense, you can think" }, { "start": 2775.68, "end": 2781.6, "text": " about it as the difficulty is that you needed to really change the memory that you're manufacturing." }, { "start": 2781.6, "end": 2787.4399999999996, "text": " And that's something that not a lot of companies are doing, but it's a promising direction because" }, { "start": 2787.44, "end": 2795.36, "text": " if you have something that is more, that is less depending on your transistors, so it's less prone" }, { "start": 2795.36, "end": 2803.28, "text": " to the failures of Moore's law. So the end of Moore's law is, might not be the bottleneck for" }, { "start": 2803.28, "end": 2807.76, "text": " some of these modules, but there are other things like you can see that there's like an analog to" }, { "start": 2807.76, "end": 2814.08, "text": " digital converter, which could be power hungry and that creates a slew of analog compute problems." }, { "start": 2814.08, "end": 2820, "text": " There are also a bit more, let's say call them esoteric things that you, all of these were" }, { "start": 2820, "end": 2827.2, "text": " already esoteric to me, but they are, there are more esoteric things like there's like optical" }, { "start": 2827.2, "end": 2834.56, "text": " computing and neuromorphic computing and things like this. What are, do you have any favorites" }, { "start": 2834.56, "end": 2839.36, "text": " there or anything that you think is promising and not buzzwordy?" 
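Before the answer, the processing-in-memory dot product just described, as a toy digital model (the ADC bit width and full-scale range are assumptions):

```python
import numpy as np

def analog_dot(weights, inputs, adc_bits=8, full_scale=100.0):
    # Weights sit in the array as conductances, inputs arrive as voltages,
    # and currents sum "for free"; the costly digital step is the ADC at the
    # end, modeled here as simple quantization of the result.
    current = float(np.dot(weights, inputs))     # analog summation
    step = full_scale / (2 ** adc_bits - 1)      # ADC resolution
    return round(current / step) * step          # quantized read-out

w, x = np.random.rand(256), np.random.rand(256)
print(analog_dot(w, x), float(np.dot(w, x)))
```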
}, { "start": 2839.36, "end": 2848.6400000000003, "text": " I think that these, I think that Lightmatter is a company that is, was founded by a few MIT graduates" }, { "start": 2849.2000000000003, "end": 2856.48, "text": " and they have this idea that light, that representing analog computation via light" }, { "start": 2856.48, "end": 2864, "text": " could be more efficient than using it, but then expressing it through the digital domain." }, { "start": 2864, "end": 2870.56, "text": " It's an interesting problem. I am not really versed on the different types of difficulties there," }, { "start": 2871.04, "end": 2881.36, "text": " but it's sort of like thinking about an analog neuromorphic model where the brain acts basically" }, { "start": 2881.36, "end": 2889.12, "text": " like on analog pulses. So this is a little bit more trying to mimic the way that the brain works" }, { "start": 2889.12, "end": 2895.92, "text": " than you would go traditional artificial neural networks where you're going to have a BF16" }, { "start": 2895.92, "end": 2901.7599999999998, "text": " represent your weights and you can say that this is closer to reality and it's also more energy" }, { "start": 2901.7599999999998, "end": 2908.7999999999997, "text": " efficient, but these are, you can say that these are more advanced technologies. So I would say" }, { "start": 2908.7999999999997, "end": 2916.64, "text": " that they probably have their own set of challenges and they're not as efficient as the" }, { "start": 2916.64, "end": 2925.12, "text": " other challenges. And you never know which one of these technologies will prevail and be the winner." }, { "start": 2927.12, "end": 2930.64, "text": " And what is neuromorphic computing?" }, { "start": 2932, "end": 2938.7999999999997, "text": " I think that the neuromorphic computing as the way that we know it is the form of analog computing." }, { "start": 2938.7999999999997, "end": 2944.16, "text": " You're going to have data over here. You're going to have the weights that are sitting within," }, { "start": 2944.16, "end": 2950.64, "text": " your memory and your activation is going to be coming from that memory from as inputs to that" }, { "start": 2950.64, "end": 2958.3999999999996, "text": " memory. You're going to be able to do an analog addition and instead of doing that dot product" }, { "start": 2958.3999999999996, "end": 2963.7599999999998, "text": " between the weights, you're going to have a single dot product doing vectorized compute in an analog" }, { "start": 2963.7599999999998, "end": 2969.92, "text": " fashion and you're going to be using analog circuitry to compute the results. So it's more of," }, { "start": 2969.92, "end": 2977.04, "text": " I would say it's more similar in theory to the spiking neural network model where you're going" }, { "start": 2977.04, "end": 2985.2000000000003, "text": " to have like your brain act on electric pulses. So that's what these solutions are trying to mimic" }, { "start": 2986.2400000000002, "end": 2994.96, "text": " conceptually. And you know that eventually if you look at hardware from the grand scheme of things," }, { "start": 2994.96, "end": 3000.64, "text": " you know, you have those accelerators. These accelerators are good at doing AI. But you know," }, { "start": 3001.92, "end": 3006.8, "text": " if you really want to get into the definitions, you know, you can go, you can look at the" }, { "start": 3007.76, "end": 3013.68, "text": " in Goodfellow's deep learning book. It's not really AI. 
There's a Venn diagram where" }, { "start": 3013.68, "end": 3018.7200000000003, "text": " there's AI, and inside of it there is machine learning, and then there's representation learning." }, { "start": 3018.7200000000003, "end": 3023.04, "text": " And then there's deep learning. And from within that deep learning, you can say that these" }, { "start": 3023.04, "end": 3033.2799999999997, "text": " accelerators are good at, you know, a subset of deep learning and a subset of ML that is good at" }, { "start": 3034.24, "end": 3040.56, "text": " doing matrix multiplication. You know, they're really good at doing things like conv and" }, { "start": 3040.56, "end": 3047.36, "text": " transformers. But is that a general solution to AI? No one really knows. You know, you can say that" }, { "start": 3047.36, "end": 3057.6, "text": " the interesting thing is that because the hardware was a key enabler, it's also sort of used as a" }, { "start": 3057.6, "end": 3063.84, "text": " limiter to what you can achieve. You know, people are saying, is attention all you need? Is conv all" }, { "start": 3063.84, "end": 3072.2400000000002, "text": " you need? Could be. But one thing is for sure is that it consists of most of what your hardware" }, { "start": 3072.24, "end": 3078.3999999999996, "text": " can do. You know, your hardware is really good at transformers and attention and convs. But, you" }, { "start": 3078.3999999999996, "end": 3088.16, "text": " know, is that how intelligence really works? Maybe there's a huge slew of applications that can" }, { "start": 3088.16, "end": 3097.6, "text": " mimic more human intelligence that are not, that cannot be efficiently run on hardware accelerators" }, { "start": 3097.6, "end": 3101.04, "text": " the way that they're built today. And we're not going to be able to explore it just because we" }, { "start": 3101.04, "end": 3106.96, "text": " don't have the hardware for it and we don't have a way to run it efficiently. So it's an interesting" }, { "start": 3106.96, "end": 3107.44, "text": " problem." }, { "start": 3108.3199999999997, "end": 3114, "text": " There is this concept, people say this, right, this is a sentiment that's echoed throughout the" }, { "start": 3114, "end": 3120.08, "text": " community that, for example, graph neural networks, we don't have good hardware for graph neural" }, { "start": 3120.08, "end": 3125.7599999999998, "text": " networks, and therefore, probably, we're not going to explore them as much, which also means that" }, { "start": 3125.76, "end": 3131.28, "text": " hardware manufacturers, since, you know, we can't demonstrate that graph neural networks are really" }, { "start": 3131.28, "end": 3139.5200000000004, "text": " good, won't build graph neural network chips. Do you see this? Do you see it generally going," }, { "start": 3140.0800000000004, "end": 3146.6400000000003, "text": " let's say, more and more converging on some applications? Or do you think, okay, we'll" }, { "start": 3146.6400000000003, "end": 3153.28, "text": " discard some of the applications, but also the ones we have will sort of morph and develop into" }, { "start": 3153.28, "end": 3159.1200000000003, "text": " different variants and so on? Like, how do you see the hardware, essentially the" }, { "start": 3159.1200000000003, "end": 3166.1600000000003, "text": " expensiveness of manufacturing hardware's effect on the diversity of the ideas in the field?
Do" }, { "start": 3166.1600000000003, "end": 3172, "text": " you think there is hope to increase diversity, even with the cost of hardware?" }, { "start": 3173.28, "end": 3177.76, "text": " It's an interesting question. I would say, obviously, money makes the world go round. If" }, { "start": 3177.76, "end": 3183.6000000000004, "text": " there's money within these applications, you're going to be able to build the hardware for it." }, { "start": 3184.0800000000004, "end": 3189.36, "text": " The thing is, like we said earlier, hardware has been a key enabler for what you can achieve." }, { "start": 3190.88, "end": 3198.2400000000002, "text": " And basically, if you cannot run your application on hardware, it will be hard to create that" }, { "start": 3198.2400000000002, "end": 3206.2400000000002, "text": " ecosystem for that application to be able to justify building special hardware, because" }, { "start": 3206.24, "end": 3212.4799999999996, "text": " it's a bit of a chicken and an egg problem. If I were to develop an accelerator for a" }, { "start": 3213.2799999999997, "end": 3219.12, "text": " non-Euclidean set of problems, I would first need to look for the applications for it. I will need" }, { "start": 3219.12, "end": 3225.2799999999997, "text": " to be looking for that justification for it, simply because if I'm a startup company, I'm going to" }, { "start": 3225.2799999999997, "end": 3234.3199999999997, "text": " have to need funding for it, right? But if you don't have people that are experienced in the" }, { "start": 3234.32, "end": 3239.04, "text": " industry, you won't be able to find that justification. So it's a bit of a chicken and" }, { "start": 3239.04, "end": 3245.52, "text": " an egg problem. So as I said, maybe attention is all you need, maybe it's all you need. For" }, { "start": 3245.52, "end": 3251.6000000000004, "text": " surely, it's most of what we have right now. And it would be interesting to see. I would say that," }, { "start": 3252.6400000000003, "end": 3261.76, "text": " as I said in the final thoughts, I would think that in the next two or three years or so," }, { "start": 3261.76, "end": 3267.36, "text": " the things are going to become clearer and architectures are going to be able to stabilize" }, { "start": 3267.36, "end": 3273.1200000000003, "text": " just because we understand the problem better. It will take us four or five years to really" }, { "start": 3273.6800000000003, "end": 3283.84, "text": " converge to a set of common practices and the way that we're developing software libraries and the" }, { "start": 3283.84, "end": 3287.28, "text": " way that we're developing compilers. We're going to be able to have this" }, { "start": 3287.28, "end": 3295.2000000000003, "text": " I would say three or four stable software stacks that are really good at the conv and transformer" }, { "start": 3295.2000000000003, "end": 3303.28, "text": " games. Will there be other models to create other stacks? Sure. But if I were to start a startup" }, { "start": 3303.28, "end": 3311.0400000000004, "text": " today, it will be really hard for me to go for the conv and the transformers, just because this is" }, { "start": 3311.04, "end": 3317.2799999999997, "text": " a saturated field and people are doing it fairly well and you're basically almost maximizing what" }, { "start": 3317.2799999999997, "end": 3324.96, "text": " you can do in your hardware. 
The last saying here in your final thoughts is" }, { "start": 3326.96, "end": 3331.7599999999998, "text": " everything old is new again. Do you want to explain what that's about?" }, { "start": 3331.76, "end": 3348.5600000000004, "text": " Yes. It seems like there's a bit of, you can say that on one hand, these models have been" }, { "start": 3348.5600000000004, "end": 3354.88, "text": " the most popular models, those key enablers, those AlexNets and those ResNets, those attentions and" }, { "start": 3354.88, "end": 3363.44, "text": " BERTs and the GPT-3s, they all originated in academic papers, right? But in the hardware field," }, { "start": 3364, "end": 3370.08, "text": " things are, there's a little bit more of a disconnect. I would say that there are a lot of" }, { "start": 3370.08, "end": 3377.6800000000003, "text": " papers, there are dozens of papers presenting new ideas every year in the top conferences," }, { "start": 3377.68, "end": 3387.44, "text": " there are ISCA, HPCA, ASPLOS and MICRO. But eventually you can see that all these fundamental," }, { "start": 3388.48, "end": 3396.24, "text": " all these accelerators were basically using ideas that originated like 30, 40 years ago." }, { "start": 3396.24, "end": 3402.8799999999997, "text": " Processing in memory was, I would say, in the 1980s, VLIW again, the 1980s, systolic arrays," }, { "start": 3402.88, "end": 3410.96, "text": " the 1970s, data flow programming is the 1970s, processing in memory also like the 1970s. So it's a" }, { "start": 3410.96, "end": 3421.12, "text": " bit of conservatism because as you can say that a company building hardware knows, at least in the" }, { "start": 3421.12, "end": 3428.56, "text": " older days where it was hard to get money funding for it, you would need to really, really justify" }, { "start": 3428.56, "end": 3434, "text": " and really go for these well hashed out ideas before you would go for those wild card ideas." }, { "start": 3434, "end": 3445.04, "text": " And once you have that, you might be able to explore more revolutionary ideas. Unfortunately," }, { "start": 3445.04, "end": 3450.7999999999997, "text": " I think that at this point, a lot of your architectural foundations are already established." }, { "start": 3450.7999999999997, "end": 3458.08, "text": " So you won't be able to explore these crazy accelerators or those things that are really," }, { "start": 3458.08, "end": 3463.36, "text": " really out there. You'll be able to somewhat integrate it into your existing architecture," }, { "start": 3464.08, "end": 3470.72, "text": " but it would be very daring to go and break your entire architecture completely. And especially in" }, { "start": 3470.72, "end": 3477.36, "text": " a very competitive landscape, you might not be able to go for that risk." }, { "start": 3479.12, "end": 3484.96, "text": " You would be surprised, but there are many people in the AI community that say that all the AI" }, { "start": 3484.96, "end": 3491.6, "text": " ideas have been had in the 80s and 90s as well. And there's essentially nothing new under the sun." }, { "start": 3493.04, "end": 3494.2400000000002, "text": " But it's a debated position." }, { "start": 3494.2400000000002, "end": 3501.2, "text": " It's a debated position. Well, I would say that for one thing, for sure, that going back to the" }, { "start": 3502.32, "end": 3507.04, "text": " attention is all you need and conv is all you need, and that's essentially what you've got.
A lot of these," }, { "start": 3507.04, "end": 3515.12, "text": " the basic computational structures are already there. People are building on the baseline of" }, { "start": 3515.12, "end": 3521.44, "text": " these architectures simply because for me as a hardware architect, from my perspective," }, { "start": 3521.44, "end": 3528.48, "text": " this is what the hardware can do. It even goes back to this academic notion of accelerators." }, { "start": 3528.48, "end": 3534.48, "text": " There's a work called Stream Data Flow Acceleration that was presented in ISCA of 2017," }, { "start": 3534.48, "end": 3542.4, "text": " that they're saying, okay, the acceleratable domains need to fulfill certain properties." }, { "start": 3542.4, "end": 3550.2400000000002, "text": " They need to have a fairly confined control flow. They need to be fairly repetitive. You need to" }, { "start": 3550.2400000000002, "end": 3557.44, "text": " know how the data reuse. You need to know a lot of how your computation patterns behave. So" }, { "start": 3557.44, "end": 3565.36, "text": " if you're not going to be able to build an accelerator that completely breaks out from" }, { "start": 3565.36, "end": 3570.48, "text": " this common wisdom and breaks out this template, you might not be able to have" }, { "start": 3571.36, "end": 3579.52, "text": " an AI model that behaves that way. Is it true or not? Could be or could be not. Maybe we will" }, { "start": 3579.52, "end": 3587.44, "text": " find out that our existing patterns are fulfilling enough. I would say that there are a lot of problems" }, { "start": 3587.44, "end": 3591.84, "text": " even within the existing architectures that we were able to fully explore." }, { "start": 3591.84, "end": 3597.68, "text": " Cool. Is there anything else you'd like to want to give people on the way? I guess there's not an" }, { "start": 3597.68, "end": 3606.16, "text": " easy way to necessarily get into hardware yourself at home or something, but if people want to dive," }, { "start": 3606.16, "end": 3610.96, "text": " they can certainly go to your articles, which I think are great. I will obviously link them" }, { "start": 3611.52, "end": 3616.7999999999997, "text": " in the video description. Is there any message you want to get out there regarding this?" }, { "start": 3617.68, "end": 3623.68, "text": " I would say, I cannot really say anything about looking at the blog. Try to look at high level" }, { "start": 3623.68, "end": 3630.64, "text": " overviews of how hardware and software behaves. It's really tightly coupled today. It's a really" }, { "start": 3630.64, "end": 3638, "text": " exciting time to be either in AI or in hardware because it's a really great opportunity from" }, { "start": 3638, "end": 3649.6, "text": " many aspects historically that you can explore AI hardware either as a research scientist," }, { "start": 3650.4, "end": 3657.2799999999997, "text": " as a data scientist, or even a computer scientist. It's really good to see how all these pieces" }, { "start": 3657.28, "end": 3663.6000000000004, "text": " pan out. Start looking at the high level overviews and then just deep dive into any of them. Open" }, { "start": 3663.6000000000004, "end": 3670.88, "text": " a computer architecture book. The old ideas are already there. 
Try to look at the high level" }, { "start": 3670.88, "end": 3676.8, "text": " white papers from the big companies, the Googles and the NVIDIAs and some of the accelerator" }, { "start": 3676.8, "end": 3685.2000000000003, "text": " companies. Try to understand how your software behaves and you might find that it's not as" }, { "start": 3685.2, "end": 3694.3999999999996, "text": " good as it should be. It's really great that you can execute your models much faster than you have" }, { "start": 3694.3999999999996, "end": 3702, "text": " anticipated. If it's going to take you three days to train your model versus if it's going to take" }, { "start": 3702, "end": 3708.24, "text": " you three hours to train your model, it's going to be a key enabler to a lot of your capabilities." }, { "start": 3709.7599999999998, "end": 3714.08, "text": " Just try to do all those tweaks. Try to understand the common practices. Try to follow" }, { "start": 3714.08, "end": 3719.36, "text": " programming books and rules and best practices and you might find out that" }, { "start": 3720.3199999999997, "end": 3723.2799999999997, "text": " you're going to be able to be a kickass data scientist." }, { "start": 3724.72, "end": 3732.4, "text": " Excellent. Well, Adi, it was a great pleasure having you here. I learned a lot. Really," }, { "start": 3732.4, "end": 3737.44, "text": " I had no clue before this. Thank you very much for these articles and thanks for being here." }, { "start": 3737.44, "end": 3747.28, "text": " Thanks a lot for having me." } ]
EA96xh9qog0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
I'm at ICML19 :)
[ "Science & Technology" ]
[ "machine learning", "conference", "long beach", "california", "icml19", "icml", "artificial intelligence", "ai", "deep learning" ]
Short intro to the International Conference on Machine Learning in Long Beach, CA. I'll be making some updates from the conference.
Hi there, it's day one of ICML and we'll be attending the conference here and just quickly pre-video to let everyone know I'll be trying to report from here kind of what papers are cool, what I liked, what are kind of the trends and so hopefully get this conference out to a broader community. So everyone's conglomerating here, the line's probably going to be huge, I'm already registered so that's pretty good. It's beautiful weather and looking forward to five days of conference. So today is tutorial day and I'll think I'll be attending some cool tutorials. Yeah, just look how pretty it is here, nice. All right, bye everyone, see you later.
[ { "start": 0, "end": 12.4, "text": " Hi there, it's day one of ICML and we'll be attending the conference here and just" }, { "start": 12.4, "end": 19.28, "text": " quickly pre-video to let everyone know I'll be trying to report from here kind of what" }, { "start": 19.28, "end": 27.12, "text": " papers are cool, what I liked, what are kind of the trends and so hopefully get this conference" }, { "start": 27.12, "end": 31.520000000000003, "text": " out to a broader community. So everyone's conglomerating here, the line's probably" }, { "start": 31.520000000000003, "end": 35.760000000000005, "text": " going to be huge, I'm already registered so that's pretty good. It's beautiful weather" }, { "start": 36.64, "end": 45.2, "text": " and looking forward to five days of conference. So today is tutorial day and I'll think I'll be" }, { "start": 45.92, "end": 54.480000000000004, "text": " attending some cool tutorials. Yeah, just look how pretty it is here, nice." }, { "start": 54.48, "end": 59.44, "text": " All right, bye everyone, see you later." } ]
-MCYbmU9kfg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
RoBERTa: A Robustly Optimized BERT Pretraining Approach
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "tensor2tensor", "rnn", "recurrent", "seq2seq", "bert", "unsupervised", "squad", "wordpiece", "embeddings", "language", "language modeling", "attention layers", "bidirectional", "elmo", "word vectors", "pretrained", "fine tuning" ]
This paper shows that the original BERT model, if trained correctly, can outperform all of the improvements that have been proposed lately, raising questions about the necessity and reasoning behind these. Abstract: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code. Authors: Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov https://arxiv.org/abs/1907.11692 YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Minds: https://www.minds.com/ykilcher BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/
Hello everyone, today we're looking at RoBERTa, a robustly optimized BERT pre-training approach by Yinhan Liu et al., mainly of Facebook Research. So this paper is a pretty short, pretty simple paper, and the main premise is: we've seen a number of improvements over the initial BERT paper, where different pre-training of the transformer architecture or extensions of the architecture have been shown to have better performance than the original BERT model. And this paper basically says that if you get the design choices right, then BERT is able to basically be on par with or exceed all of these other methods so far. So they're basically exploring design choices in the pre-training and training of BERT. Alright, so if you don't know what BERT is, by the way, I have made a video about BERT, and I've also made a video about transformers. In very quick terms, BERT is a neural network architecture for language that takes text as input, such as this kind of thing you see here, and it will encode it, and it can do various things, for example, classify it into certain categories, or segment it, extract answers from questions, and so on. The whole thing is pre-trained with what's called a masked language model objective, where you don't need labels to train it. So in a masked language model objective, you basically mask out certain words during training and then you ask BERT to reconstruct these words from the surrounding information. And that has given some improvements in the original BERT paper, but subsequent papers have claimed that you can improve even more by using different pre-training objectives and so on, such as XLNet. But here, these researchers basically explore different things. So they use a regular BERT architecture, that's what they describe here: they use both the BERT base, the 12-layer model, as well as the 24-layer BERT that was originally described. They use masked language modeling as a pre-training objective, and they explore the necessity of this next sentence prediction loss that has been part of BERT. So along with the masked language modeling, BERT has also had an objective where you input two pieces of text, two sentences such as this, and BERT has to decide if the second sentence follows the first sentence in the corpus, or, in 50% of the cases, the second sentence is sampled from a different document. The original paper argued this is necessary to incorporate long-distance relationships between text; the NSP objective was designed to improve performance on downstream tasks such as natural language inference. And this paper explores the necessity of that loss. In terms of optimization, there is of course a pre-training scheme and then a training scheme using Adam with certain parameters, and this paper also explores the use of these parameters. Lastly, you have data: these models are sometimes trained on different data, and that makes comparing them a bit harder, because the pre-training is done on differently sized and differently structured data. This paper also tries to investigate the influence of the training data, and especially what happens if we keep the training data constant. So, all right, they re-implement BERT and then they fix some hyperparameters while they tune others. First of all, the data set: they use different data sets.
The original BERT has been trained on the BookCorpus and English Wikipedia data set, which is 16 gigabytes large. Now this paper collects the CC-News data set, which is a subset of the Common Crawl News data set; the subset is the English portion, and that's 76 gigabytes, which is on par with, for example, what GPT-2 used, I believe. So this is a very large training set, and comparing the original data to this large corpus should make very clear what the influence of more pre-training data is. They also have a number of other corpora, OpenWebText as well as, I believe, one more, Stories. So these are also pretty sizable, but they have very specific schemas to them. Then the evaluation happens on several different kinds of downstream tasks. So the idea is: first you pre-train this BERT model with the masked language modeling and so on, and then you have this GLUE task, which is actually a collection of nine tasks, and you have some other tasks such as SQuAD, which is a question answering task, and here RACE, I don't even know what that is in particular, but suffice to say these are kind of downstream NLP tasks. The paper isn't about these downstream tasks; they're just a way to measure how well your pre-training worked: if you can then fine-tune on such a task, you get a good performance. But what the tasks are in particular isn't too important. Alright, so here we get into the meat of the paper. First they decide on what they call static versus dynamic masking. So in the original BERT paper, whenever they do masked language modeling, they take a piece of text and they basically replicate it a bunch of times, because they want to iterate through the training data a bunch of times, and then in each iteration they mask out different tokens. They compare this to what's called dynamic masking. The former is static masking; dynamic masking would be where you basically generate your mask on the fly. You don't pre-compute it and save it, you generate it on the fly (a small sketch of this follows below). This allows you to go through more or less of the data as you want, and when you encounter the same sample twice (even though you replicate it in the original BERT model, you could still encounter it twice if you train for longer than the number of replications), you basically see the exact same mask again. The dynamic masking is actually much more useful; it's much more ad hoc: each time you see a sample, you generate the mask on the fly. So they compare this, and they see that there is a marginal improvement (here, higher is better) in two tasks, and a marginal decrease in performance in one task. So they decide that this dynamic masking is of use. The second thing they investigate is the input format and this next sentence prediction. So as I already said, the original BERT training objective always gets two sentences next to each other and has to decide if the second one follows from the first one. Actually, it doesn't observe natural sentences; it observes two concatenated document segments, which are either sampled contiguously from the same document or from distinct documents, and this is half and half. So in addition to the masked language modeling, the model is trained to predict whether the observed document segments come from the same or distinct documents via an auxiliary next sentence prediction loss. They investigate different ways of including or excluding this loss.
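Before going through those input-format variants, here is the promised sketch of the dynamic masking idea. This is not code from the paper; it's a minimal Python illustration assuming BERT's standard masking recipe (select 15% of tokens; of those, replace 80% with [MASK], 10% with a random token, and leave 10% unchanged). The function name and the toy vocabulary are made up for this example.

```python
import random

MASK_TOKEN = "[MASK]"
MASK_PROB = 0.15  # fraction of tokens selected for prediction, as in BERT
VOCAB = ["the", "cat", "sat", "on", "a", "mat"]  # toy vocabulary, illustration only

def dynamic_mask(tokens):
    """Generate a fresh mask for one training example.

    Called every time a sample is drawn, so the same sentence gets a
    different mask on every epoch (dynamic masking). Static masking
    would instead precompute a few masked copies of the corpus and
    reuse them whenever the sample comes up again.
    """
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < MASK_PROB:
            labels.append(tok)  # the model must reconstruct this token
            r = random.random()
            if r < 0.8:
                inputs.append(MASK_TOKEN)            # 80%: replace with [MASK]
            elif r < 0.9:
                inputs.append(random.choice(VOCAB))  # 10%: random token
            else:
                inputs.append(tok)                   # 10%: keep the original token
        else:
            inputs.append(tok)
            labels.append(None)  # no loss on unmasked positions
    return inputs, labels

print(dynamic_mask("the cat sat on the mat".split()))
```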
So first, what they define: if a variant says plus NSP, that means it includes the next sentence (or next segment) prediction loss. So they have SEGMENT-PAIR plus NSP, which means that each input has a pair of segments, and the distinction between a segment and a sentence is important here: where a sentence is really a natural sentence, a segment can actually be multiple natural sentences, which is what the original BERT does. So as long as the combined length is less than 512 tokens, a segment can contain multiple sentences, but there are clearly two segments, and you have to decide if they follow after each other or not. The second thing they try is the same next segment prediction, but now it's just two natural sentences. So it must be one sentence, a period, and then the next sentence, a period, and you have to decide whether these two follow each other or not. Then they investigate FULL-SENTENCES, where they leave out this next segment prediction loss and simply fill up the 512 tokens with text from the corpus (sketched in code below). So each input is packed with full sentences sampled contiguously from one or more documents; the one or more documents means: as you sample text, you put all of it in, and when you are at the end of a document, you simply continue with the next one and go on until you have the 512 tokens. So you basically fill, fill, fill until you have 512 tokens, and that's this variant. And in the last variant, called DOC-SENTENCES, you do the same thing, but you stop at the end of the document. So you put all of the document's text in, and there you stop, and then you have to be content with simply padding the rest of the 512 tokens or something like this. So you don't have as much data per input, but all the text that you have in one sample is actually contiguous text from the same document. So they pit these four things against each other. This is this table here, and as you can see, the best thing is this DOC-SENTENCES setting on these tasks, followed by the FULL-SENTENCES encoding. There are some ambiguities here, but in general you can kind of rank them as best, second best, and then third best and fourth best, and they conclude that this next segment or next sentence prediction loss is more hurtful than helpful. And they say that even though DOC-SENTENCES is most effective, in their case they'd rather go with FULL-SENTENCES, because it's, well, I guess easier to implement, you get more data through the model in the same time, and the performance decrease isn't that much. But it's pretty interesting to see that this next segment / next sentence prediction isn't super helpful in actuality. Here: removing the NSP loss matches or slightly improves the downstream task performance. This is in contrast to what the original BERT authors found, but you have to keep in mind that this setup also has a bunch of other changes in it.
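To make the FULL-SENTENCES packing concrete, here is the rough Python sketch promised above. It is my own illustration, not the paper's code: the function name and the toy data are invented, and sentences are assumed to be already tokenized. It keeps appending sentences, crossing document boundaries, until the 512-token budget is reached; a comment marks where DOC-SENTENCES would differ.

```python
MAX_LEN = 512  # token budget per training example, as in BERT/RoBERTa

def pack_full_sentences(documents):
    """Yield token lists packed across document boundaries (FULL-SENTENCES).

    `documents` is a list of documents, each a list of tokenized sentences.
    No next-sentence-prediction labels are produced in this variant.
    """
    buffer = []
    for doc in documents:
        for sentence in doc:
            if len(buffer) + len(sentence) > MAX_LEN and buffer:
                yield buffer  # emit one example filled close to 512 tokens
                buffer = []
            buffer.extend(sentence)
        # DOC-SENTENCES would `yield buffer; buffer = []` right here, so
        # no example ever mixes text from two different documents.
    if buffer:
        yield buffer

docs = [[["tok"] * 300], [["tok"] * 300, ["tok"] * 100]]  # toy "documents"
print([len(example) for example in pack_full_sentences(docs)])  # -> [300, 400]
```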
Then the next thing they investigate is batch size. Batch size seems to be pretty interesting for these large models, in that they love large batch sizes, and they actually explore batch sizes from 512, the smallest one, up to 8000. They do this in a data parallel way, where they have many machines with many GPUs, they parallelize the data and then accumulate the gradients of all of these different samples, and so they can go up to a batch size of about 8k (see the short gradient accumulation sketch after this transcript). And they find generally that the 2000 batch size, as you can see, helps to improve the performance (perplexity, lower is better; the other numbers, higher is better) if you control for data set size, so the number of times you go through the data set is the same. Going with a larger batch size seems to help up to a point; the 2000 seems to be the best they found. So again, a marginal improvement you can make by training with larger batch sizes. And the last thing they've looked at is text encoding, so how do you encode text. The pit here is basically between byte pair encoding and word piece encoding, which decide how large your vocabulary is, basically. And as I understand it, they didn't find much of a difference between the different implementations of the text encoding; I think they decide to go with byte pair encoding instead of word pieces. All right, so they combine all of this into RoBERTa, which is a robustly optimized BERT approach, and they say RoBERTa is trained with dynamic masking, as they showed first, full sentences without the next segment prediction loss, large mini batches, and a larger byte-level byte pair encoding, as well as, of course, their collection of training data. And then they also investigate how long to pre-train. So if you look at the original BERT models or the XLNet models and then compare them to RoBERTa: with the original data, RoBERTa already beats BERT, yet it does not yet beat XLNet. If they add data, they get even better, actually mostly on par with XLNet. If they pre-train longer, they get even better. And if they pre-train even longer (here's the number of steps), if your number of steps matches the number of steps that XLNet does with their additional data, then you outperform XLNet as well. So this is kind of just an overview, and they evaluate on other downstream tasks, and they basically show that in most of them they can reach state-of-the-art performance or exceed it with their approach. And in conclusion, they basically say: well, this only shows that the gains that these other models make, and the reasons why they make gains, may be questionable; if you simply pre-train BERT in a better way, you can reach the same performances. So I think the end is not reached yet. Most of all, they publish their code and their data, I believe; I have not looked into this, but definitely check out their repository where this is implemented. Seems pretty easy, seems pretty straightforward, and that was it for me. Bye bye.
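Footnote to the batch size discussion in this transcript: effective batch sizes like 2k or 8k are commonly reached by combining data parallelism with gradient accumulation. Below is a minimal PyTorch-style sketch of the accumulation part only; the tiny model, the synthetic data, and the hyperparameters are placeholders of mine for illustration, not RoBERTa's actual training setup.

```python
import torch

# Placeholders for illustration; any model, optimizer and data would do.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

micro_batch = 32   # what fits on one GPU in a single forward/backward pass
accum_steps = 64   # 32 * 64 -> effective batch size of 2048

optimizer.zero_grad()
for step in range(1000):
    x = torch.randn(micro_batch, 10)           # stand-in for real inputs
    y = torch.randint(0, 2, (micro_batch,))    # stand-in for real labels
    loss = loss_fn(model(x), y) / accum_steps  # average over the accumulation window
    loss.backward()                            # gradients add up across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()        # one parameter update per 2048 samples
        optimizer.zero_grad()
```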
[ { "start": 0, "end": 6.84, "text": " Hello everyone, today we're looking at Roberta, a robustly optimized BERT pre-training approach" }, { "start": 6.84, "end": 11.96, "text": " by Yin-Han Liu at AL, mainly of Facebook research." }, { "start": 11.96, "end": 18.84, "text": " So this paper is a pretty short, pretty simple paper and the main premise is we've seen a" }, { "start": 18.84, "end": 28.44, "text": " number of improvements over the initial BERT paper where different pre-training of the" }, { "start": 28.44, "end": 35.92, "text": " transformer architecture or extensions of the architecture have been shown to have better" }, { "start": 35.92, "end": 38.8, "text": " performance than the original BERT model." }, { "start": 38.8, "end": 48.56, "text": " And this paper basically says if you get the design choices right, then BERT is able to" }, { "start": 48.56, "end": 53.28, "text": " basically be on par or exceed all of these other methods so far." }, { "start": 53.28, "end": 60.28, "text": " So they're basically exploring design choices in the pre-training and training of BERT." }, { "start": 60.28, "end": 67.84, "text": " Alright, so if you don't know what BERT is, by the way, I have made a video about BERT," }, { "start": 67.84, "end": 72.08, "text": " I've also made a video about transformers." }, { "start": 72.08, "end": 81.44, "text": " In very quick terms, BERT is a language neural network architecture that takes as input text" }, { "start": 81.44, "end": 90.4, "text": " such as this kind of thing you see here, text such as that, and it will kind of encode it" }, { "start": 90.4, "end": 99.12, "text": " out and it can do various things, for example, classify it into certain categories or kind" }, { "start": 99.12, "end": 106.03999999999999, "text": " of segment it, extract answers from questions and so on." }, { "start": 106.04, "end": 111.92, "text": " The whole thing is pre-trained with what's called a masked language model objective where" }, { "start": 111.92, "end": 113.52000000000001, "text": " you don't need labels to train it." }, { "start": 113.52000000000001, "end": 118.96000000000001, "text": " So in a masked language model objective, you basically mask out certain words during training" }, { "start": 118.96000000000001, "end": 126.32000000000001, "text": " and then you ask BERT to reconstruct these words from the surrounding information." }, { "start": 126.32000000000001, "end": 133.56, "text": " And that kind of has given some improvements in the original BERT paper, but subsequent" }, { "start": 133.56, "end": 138.88, "text": " papers have claimed that you can improve even more by using different pre-training objectives" }, { "start": 138.88, "end": 142.6, "text": " and so on such as Excel, NET." }, { "start": 142.6, "end": 150.52, "text": " But here, these researchers basically explore different things." }, { "start": 150.52, "end": 156.48000000000002, "text": " So they use a regular BERT architecture, that's what they describe here, so they use both" }, { "start": 156.48, "end": 167.07999999999998, "text": " the BERT base, the 12-layer, as well as the 24-layer BERT that has originally been described." }, { "start": 167.07999999999998, "end": 176.83999999999997, "text": " They use masked language modeling as a pre-training objective and they explore the necessity of" }, { "start": 176.83999999999997, "end": 180.79999999999998, "text": " this next sentence prediction loss that has been part of BERT." 
}, { "start": 180.8, "end": 187.36, "text": " So along with the masked sentence modeling, BERT has also had an objective where if you" }, { "start": 187.36, "end": 194.10000000000002, "text": " input a piece of, actually you input two pieces of text, two sentences such as this, these" }, { "start": 194.10000000000002, "end": 199.92000000000002, "text": " are two sentences, and BERT has to decide if the second sentence follows the first sentence" }, { "start": 199.92000000000002, "end": 205.04000000000002, "text": " in the corpus or in 50% of the cases, the second sentence is sampled from a different" }, { "start": 205.04000000000002, "end": 206.12, "text": " document." }, { "start": 206.12, "end": 212.76, "text": " This kind of is, so the original paper argued this is necessary to incorporate long-distance" }, { "start": 212.76, "end": 215.8, "text": " relationships between text." }, { "start": 215.8, "end": 222.6, "text": " Yeah, here the NSP objective was designed to improve performance on downstream tasks" }, { "start": 222.6, "end": 227.36, "text": " such as natural language inference." }, { "start": 227.36, "end": 231.24, "text": " And this paper kind of explores the necessity of that loss." }, { "start": 231.24, "end": 237.44, "text": " In terms of optimization, there is of course kind of a pre-training scheme and then a training" }, { "start": 237.44, "end": 245.32000000000002, "text": " scheme using Adam here with certain parameters and also this paper explores the use of these" }, { "start": 245.32000000000002, "end": 247.28, "text": " parameters." }, { "start": 247.28, "end": 254.56, "text": " Lastly you have data and of course these models sometimes they're trained on different data" }, { "start": 254.56, "end": 259.76, "text": " and that's why comparing them makes it a bit harder to compare them because the pre-training" }, { "start": 259.76, "end": 265.64, "text": " is done on differently sized and differently structured data." }, { "start": 265.64, "end": 271.4, "text": " This paper also tries to investigate the influence of the training data and especially what happens" }, { "start": 271.4, "end": 275.28, "text": " if we keep the training data constant." }, { "start": 275.28, "end": 287.8, "text": " So all right, so they implement BERT, they re-implement BERT and then they fix some hyperparameters" }, { "start": 287.8, "end": 291.88, "text": " while they tune others and first of all the data set." }, { "start": 291.88, "end": 295.28000000000003, "text": " So they use different data sets." }, { "start": 295.28000000000003, "end": 301.44, "text": " The original BERT has been trained on this Book Corpus and Wikipedia, English Wikipedia" }, { "start": 301.44, "end": 304.52, "text": " data set which is 16 gigabytes large." }, { "start": 304.52, "end": 311.92, "text": " Now this paper here collects a, what's this CC News data set which is the subset of the" }, { "start": 311.92, "end": 316.36, "text": " Common Crawl News data set which is all in." }, { "start": 316.36, "end": 326.2, "text": " So the subset is the English portion and that's 76 gigabytes which is on par with for example" }, { "start": 326.2, "end": 330.16, "text": " what GPT-2 used I believe." 
}, { "start": 330.16, "end": 338.8, "text": " So this is a very large training set and kind of comparing this original data to the large" }, { "start": 338.8, "end": 344.40000000000003, "text": " corpus, kind of what influence that is should make very clear what the influence of more" }, { "start": 344.4, "end": 347.64, "text": " training of more pre-training data is." }, { "start": 347.64, "end": 356.03999999999996, "text": " They also have a number of other corpora open web text as well as here I believe there's" }, { "start": 356.03999999999996, "end": 358.12, "text": " one more stories, yes." }, { "start": 358.12, "end": 366, "text": " So these are also pretty sizable but these are like, yeah these are like, have very specific" }, { "start": 366, "end": 369.79999999999995, "text": " schemas to them." }, { "start": 369.8, "end": 377.28000000000003, "text": " Then the evaluation here happens on several different kind of downstream tasks." }, { "start": 377.28000000000003, "end": 383.6, "text": " So the idea is you first you pre-train this BERT model on with the masked language modeling" }, { "start": 383.6, "end": 392.64, "text": " and so on and then you have this GLU task which is actually a collection of nine tasks" }, { "start": 392.64, "end": 402.24, "text": " and you have some other tasks such as SQUAD which is a question answering task and here" }, { "start": 402.24, "end": 408.4, "text": " RACE I don't even know what that is in particular but suffice to say these are kind of downstream" }, { "start": 408.4, "end": 410.08, "text": " NLP tasks." }, { "start": 410.08, "end": 417.47999999999996, "text": " The paper isn't about these downstream tasks but it's just a way to measure how well your" }, { "start": 417.48, "end": 425, "text": " pre-training worked if then you can fine tune on such a task and you get a good performance." }, { "start": 425, "end": 429.72, "text": " But what the tasks are in particular isn't too important." }, { "start": 429.72, "end": 433.88, "text": " Alright so here we get into the meat of the paper." }, { "start": 433.88, "end": 440.16, "text": " First they decide on what they call static versus dynamic masking." }, { "start": 440.16, "end": 446.16, "text": " So in the original BERT paper whenever they do masked language modeling they take a piece" }, { "start": 446.16, "end": 451.40000000000003, "text": " of text and they basically replicate it a bunch of times because they want to iterate" }, { "start": 451.40000000000003, "end": 457.6, "text": " through training data a bunch of times and then in each iteration they mask out different" }, { "start": 457.6, "end": 461.24, "text": " tokens." }, { "start": 461.24, "end": 468.40000000000003, "text": " They compare this to what's called dynamic masking." }, { "start": 468.40000000000003, "end": 471.28000000000003, "text": " So this is static masking." }, { "start": 471.28, "end": 480.96, "text": " Dynamic masking would be where you basically on the fly generate your mask." }, { "start": 480.96, "end": 484.41999999999996, "text": " You don't pre-compute it and save it you on the fly generate it." 
}, { "start": 484.41999999999996, "end": 490.91999999999996, "text": " This allows you to go through kind of more or less of the data as you want and when you" }, { "start": 490.91999999999996, "end": 498.67999999999995, "text": " encounter the same sample twice even though you replicate it in the original BERT model" }, { "start": 498.68, "end": 503.56, "text": " you could still encounter it twice if you train for longer than the number of replications." }, { "start": 503.56, "end": 511.08, "text": " Then you basically see the exact same mask again and the dynamic masking is actually" }, { "start": 511.08, "end": 513.2, "text": " much more useful." }, { "start": 513.2, "end": 514.32, "text": " It's much more ad hoc." }, { "start": 514.32, "end": 517.62, "text": " Each time you see a sample you generate the mask on the fly." }, { "start": 517.62, "end": 522.24, "text": " So they compare this here and they see that there is a marginal improvement so here higher" }, { "start": 522.24, "end": 533.04, "text": " is better marginal improvement in two tasks and a less marginal decrease in performance" }, { "start": 533.04, "end": 534.04, "text": " in one task." }, { "start": 534.04, "end": 542.94, "text": " So they decide that this dynamic masking is of use." }, { "start": 542.94, "end": 549.74, "text": " Second thing they investigate is the kind of input format and this next sentence prediction." }, { "start": 549.74, "end": 555.92, "text": " So as I already said the original BERT training objective always gets two sentences next to" }, { "start": 555.92, "end": 561.86, "text": " each other and has to decide if the second one follows from the first one." }, { "start": 561.86, "end": 569.16, "text": " Actually it doesn't it observes two concatenated document segments which are either sampled" }, { "start": 569.16, "end": 577.58, "text": " contiguously from the same document or from distinct documents and this is half and half." }, { "start": 577.58, "end": 581.62, "text": " So in addition to the masked language modeling the model is trained to predict whether the" }, { "start": 581.62, "end": 588.9000000000001, "text": " observed document segments come from the same or distinct document via an auxiliary next" }, { "start": 588.9000000000001, "end": 592.48, "text": " sentence prediction loss." }, { "start": 592.48, "end": 598.26, "text": " They investigate different ways of including or excluding this loss." }, { "start": 598.26, "end": 606.08, "text": " So first is what they define if here if it's plus NSP that means that this particular thing" }, { "start": 606.08, "end": 610.84, "text": " includes the next sentence or next segment prediction loss." }, { "start": 610.84, "end": 620.72, "text": " So they have segment pair plus NSP which means that each input has a pair of segments and" }, { "start": 620.72, "end": 628.5200000000001, "text": " these segments now the difference the distinction between a segment and a sentence is important" }, { "start": 628.5200000000001, "end": 635.36, "text": " where the sentence is really a natural sentence a segment can actually be multiple natural" }, { "start": 635.36, "end": 641.44, "text": " sentences which is what the original BERT does." 
}, { "start": 641.44, "end": 648.6800000000001, "text": " So as long as the combined length is less than 512 tokens there can also be multiple" }, { "start": 648.6800000000001, "end": 654.5600000000001, "text": " sentences but there's clearly two segments and you have to decide if they follow after" }, { "start": 654.5600000000001, "end": 656.6800000000001, "text": " each other or not." }, { "start": 656.6800000000001, "end": 661.96, "text": " The second thing they try is the same thing so the next segment prediction but now it's" }, { "start": 661.96, "end": 673, "text": " just two sentences it's just natural sentences so it must be one sentence a period and then" }, { "start": 673, "end": 678.72, "text": " the next sentence a period and you have to distinguish these two if they follow or not." }, { "start": 678.72, "end": 687, "text": " Then they investigate full sentences which is they leave away this next segment prediction" }, { "start": 687, "end": 695.04, "text": " loss and they simply fill up the 512 tokens with text from the corpus." }, { "start": 695.04, "end": 700.68, "text": " So each input is packed with full sentences sampled continuously from one or more documents" }, { "start": 700.68, "end": 706.48, "text": " and the one or more document means if you so if you sample text right you sample here" }, { "start": 706.48, "end": 711.82, "text": " text you put all of this in the thing and you are at the end of a document you simply" }, { "start": 711.82, "end": 717.4000000000001, "text": " continue with the next one and go on until you have the 512 tokens." }, { "start": 717.4000000000001, "end": 725.2800000000001, "text": " So you basically fill fill fill until you have 512 tokens and that's this variant here." }, { "start": 725.2800000000001, "end": 729.96, "text": " And then in the last variant you do the same thing this called dock sentences but you basically" }, { "start": 729.96, "end": 731.5200000000001, "text": " you stop at the end." }, { "start": 731.5200000000001, "end": 738.44, "text": " So even so you put all of this in your state and if you here you stop and then you have" }, { "start": 738.44, "end": 745.5200000000001, "text": " to be content by simply padding the rest of the 512 tokens or something like this so you" }, { "start": 745.5200000000001, "end": 752.6800000000001, "text": " don't have as much data but the all the text that you have in one sample is actually continuous" }, { "start": 752.6800000000001, "end": 755.1800000000001, "text": " text from the same document." }, { "start": 755.1800000000001, "end": 760.1, "text": " So they pit these four things against each other." }, { "start": 760.1, "end": 776.8000000000001, "text": " This is this table here and as you can see here the best thing is this dock sentences" }, { "start": 776.8000000000001, "end": 785.52, "text": " thing so on these things followed by the full sentences encoding." 
}, { "start": 785.52, "end": 794.68, "text": " So there's some some ambiguities here but in general you can kind of rank them as best" }, { "start": 794.68, "end": 803.92, "text": " second best and then here third best and fourth best and they conclude that this next segment" }, { "start": 803.92, "end": 812.8, "text": " or next sentence prediction loss here is more hurtful than helpful in the ways we see here" }, { "start": 812.8, "end": 819.8599999999999, "text": " and they say even though this is most most effective they in their case they'd rather" }, { "start": 819.8599999999999, "end": 824.28, "text": " go with this one because it's well I guess easier to implement you get more data through" }, { "start": 824.28, "end": 832, "text": " the model in the same time and the performance decrease isn't that much." }, { "start": 832, "end": 837.18, "text": " So but it's pretty interesting to see that this next next segment next sentence prediction" }, { "start": 837.18, "end": 847.0799999999999, "text": " isn't super super helpful in actuality." }, { "start": 847.0799999999999, "end": 855.56, "text": " Here so removing the NSP loss matches or slightly improves the downstream task performance." }, { "start": 855.56, "end": 859.68, "text": " This is yeah in contrast to what the original BERT authors found but you have to keep in" }, { "start": 859.68, "end": 868.04, "text": " mind this is also on hasn't a bunch of other changes in." }, { "start": 868.04, "end": 875.8, "text": " Then next thing they investigate batch size so batch size sorry batch size pretty seems" }, { "start": 875.8, "end": 882.4, "text": " to be pretty interesting for these large models in that they love large batch sizes and they" }, { "start": 882.4, "end": 891.68, "text": " actually explore batch sizes 512 here as a smallest one and they go up to 8000 so this" }, { "start": 891.68, "end": 895.88, "text": " they do this actually in a in a data parallel way where they have many many machines with" }, { "start": 895.88, "end": 904.3199999999999, "text": " many GPUs and they parallelize the data and then they accumulate the gradient of all of" }, { "start": 904.3199999999999, "end": 909.0799999999999, "text": " these different samples and so they can go up to a batch size of about 8k and they find" }, { "start": 909.08, "end": 916.88, "text": " generally that the 2000 batch size here as you can see helps to improve the so perplexity" }, { "start": 916.88, "end": 925.2, "text": " lower is better and the other numbers higher is better helps to to improve the performances" }, { "start": 925.2, "end": 929.5200000000001, "text": " if you control the control for data set size so the number of times you go through the" }, { "start": 929.5200000000001, "end": 936.44, "text": " data set is the same but if you go with a larger batch size that seems to help up to" }, { "start": 936.44, "end": 943.6800000000001, "text": " a point here the 2000 seems to be the best they found so again marginal improvement you" }, { "start": 943.6800000000001, "end": 951, "text": " can make by training with larger batch sizes and then this the last thing they've looked" }, { "start": 951, "end": 957.32, "text": " at is actually is text encoding so how do you encode text and the the pit here is basically" }, { "start": 957.32, "end": 968.84, "text": " between byte pair encoding or word piece encoding to that to to decide how large your vocabulary" }, { "start": 968.84, "end": 975.96, "text": " is basically and as I understand it they didn't find a much of 
a difference between the different" }, { "start": 975.96, "end": 984.6800000000001, "text": " implementations of the text encoding so they decide they go with they decide to go with" }, { "start": 984.68, "end": 991.04, "text": " one I don't even remember which one I think they go decide to go with byte pair encoding" }, { "start": 991.04, "end": 998.4, "text": " instead of word pieces all right so they combine all of this into Roberta which is a robustly" }, { "start": 998.4, "end": 1009.12, "text": " optimized Bert approach and they say Roberta is trained with dynamic masking so what they" }, { "start": 1009.12, "end": 1016.96, "text": " showed first full sentence without the next segment prediction loss large mini batches" }, { "start": 1016.96, "end": 1024.08, "text": " a larger byte level byte pair encoding as well as of course their collection of training" }, { "start": 1024.08, "end": 1038.28, "text": " data and then here they also investigate how long to pre train so if you look at the original" }, { "start": 1038.28, "end": 1045.2, "text": " Bert models or the XL net models and then compare it to Roberta so Roberta this is the" }, { "start": 1045.2, "end": 1053.3999999999999, "text": " original data and they already beat Bert yet they do not they do not yet beat Excel net" }, { "start": 1053.3999999999999, "end": 1062.78, "text": " with that so if they add data they get even better actually on par mostly with the with" }, { "start": 1062.78, "end": 1069.28, "text": " Excel net if they pre train longer they get even better and if they want to say pre train" }, { "start": 1069.28, "end": 1075.96, "text": " even longer right so that here's the the number of steps if your number of steps then match" }, { "start": 1075.96, "end": 1085.8799999999999, "text": " the number of steps that the Excel net does with the same additional data then or with" }, { "start": 1085.88, "end": 1095.64, "text": " their additional data then you outperform Excel net as well so this this kind of just" }, { "start": 1095.64, "end": 1104.7600000000002, "text": " an an overview of this and they evaluate on other downstream tasks and they basically" }, { "start": 1104.7600000000002, "end": 1115.8600000000001, "text": " show that in most of them they can reach state-of-the-art performance or exceed it with their approach" }, { "start": 1115.86, "end": 1123.6, "text": " and in conclusion they basically say well this only shows that kind of the the gains" }, { "start": 1123.6, "end": 1128.4799999999998, "text": " that these other models make and the reasons why they make gains may be questionable if" }, { "start": 1128.4799999999998, "end": 1135.1999999999998, "text": " you simply pre train Bert in a better way you can reach the same performances so I think" }, { "start": 1135.1999999999998, "end": 1142.8, "text": " the end is not reached yet most of all they publish their code their data I believe I" }, { "start": 1142.8, "end": 1148.8799999999999, "text": " have not looked into this but definitely check out their repository where this is implemented" }, { "start": 1148.88, "end": 1176.88, "text": " seems pretty easy seems pretty straightforward and that was it for me bye bye" } ]
pPBqM4CKjUU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Discriminating Systems - Gender, Race, and Power in AI
[ "Science & Technology" ]
[ "ai", "machine learning", "bias", "fairness", "ml fairness", "algorithmic bias", "algorithmic discrimination", "ai and society", "ainow", "google", "microsoft", "race", "gender", "stem", "pipeline", "gender gap", "diversity", "inclusion", "equity", "power" ]
TL;DR: - There exists both an unequal representation of people in the AI workforce as well as examples of societal bias in AI systems. - The authors claim that the former causally leads to the latter and vice versa. - To me, the report does not manage to make a strong enough argument for that claim. - I find the statements made quite dishonest at times. https://ainowinstitute.org/discriminatingsystems.pdf Authors: Sarah Myers West, Meredith Whittaker, Kate Crawford
Hi there, today we're looking at Discriminating Systems: Gender, Race and Power in AI by Sarah Myers West, Meredith Whittaker and Kate Crawford of the AI Now Institute, which is a part of New York University, or associated with it. This is not as much a paper as it is a report, kind of summarizing current literature, and also kind of an opinion piece slash recommendation-giving document. Yes, so we'll dive into it. As you can see from the index, it's quite a long report and we don't have time to go into all of it. Actually, we don't have time to go into most of it. I just hope to point out what the main arguments and themes are in the report, kind of what it's trying to say, pick out some interesting things and summarize it to the best of my ability, and also give a little critique. So let me actually go ahead and try to state the core argument that the report is trying to make, because it's not really clear from reading it; you have to kind of read the whole thing, and then it becomes clear what the argument is, I feel, though they somehow state it in the introduction numerous times in various ways. So I might just not have been as attentive a reader the first time. But all right, here's the argument, and I really hope I'm representing this correctly. We currently have a problem that sometimes AI systems can exhibit what we usually call bias. And we don't mean mathematical bias, like in the bias-variance tradeoff; we mean bias in a societal sense, let's say bias against certain types of people where it shouldn't exist. So for example, let me draw an AI system; I'll just draw a little computer screen with a little light bulb (that's because it's smart). This is an AI system, and they give numerous examples. One example they give is a face recognition algorithm that is much more accurate on faces of white males, as opposed to darker-skinned females. So let me draw two curves to represent that these distributions are unequal. So the AI system exhibits some bias with respect to some kinds of people with especially protected attributes, and in this report they focus mainly on gender and race, so that's what we're going to talk about. That was observation one. The second thing they observe is, I'm going to draw some generic people here that represent the workforce of AI. So the AI workforce is classified as all the people that work on AI, be that university researchers or people within companies building AI products or deploying them. So this is the workforce, and they observe that there is an unequal distribution among the AI workforce; I'm also going to draw this as an unequal distribution. Most notably, it's predominantly males who work on AI, and also white people are overrepresented compared to the world population at large. So those are the two observations they make. And now what they claim is that the unequal representation in the workforce is causing the bias in the AI systems. So they're basically saying these AI systems are biased because the workforce is unequally distributed. And they also claim, in a less powerful sense, I feel, that there is a loop, that this then leads back: because there is bias in the AI system, that again leads to a more unequal distribution of the workforce.
So the core argument really is, as they set out to do in the introduction, and also claim to have done in the conclusion, to demonstrate these two directions here in a causal way. So the systems are biased because there is an unequal representation in the workforce, and that feeds back. So the argument is that if you want to fix the bias here, then you will have to fix it via making the workforce more of what they call diverse, so less unilaterally distributed towards white males. That's kind of the final conclusion. If you read their report and the recommendations, that's mainly what they're going for. Yeah, so my opinion, having read the report a couple of times, is that, as I see it, they really don't demonstrate these links. They give examples of this, and they give examples of that. They show that the workforce is unequally distributed. They show that AI systems can exhibit such bias. But they never actually show these links, in my opinion. They don't show this. So if you make the claim that in order to fix the bias in AI systems, you must fix the unequal representation in the workforce, I would need an argument that says: because there is unequal representation, therefore A, therefore B, therefore C, therefore bias; an actual argument to follow, that says because of this, that, and because of that, that, and so on. It's just not there. They simply show parallels. They simply show that these two things exist, and they just list example after example of that. I don't think they make this argument. And I think they also don't really make the argument in the other direction, except in one case, if you give them the benefit of the doubt. What I also think is that the article, if you read it (and I encourage you to read it if you have some time), makes a lot of sense if you have already accepted this conclusion. Like, if you've already accepted this, then it's like, oh yeah. I feel this is just a text where the confirmation bias is so high, just in the way it's written, that it must make a lot of sense to someone who's already in on this conclusion. But to someone who isn't sold yet, like myself, I am just not finding this convincing at all. The second thing is that it very much feels like this isn't a discovery or something; rather, someone actually set out with a goal here, with the goal of: I want companies to hire more of these people, or certain kinds of people, or to become more diverse, or to promote more of a certain type of people, and now I'm going to find reasons for this. And the reason is like, oh, look at this bias here. This is caused by this other thing, and therefore we must fix this other thing. It very much feels like someone setting out with the conclusion already in mind, rather than this being an honest investigation. But yeah, I mean, read it for yourself. I can't prove the absence of an argument without reading every single line here, and I can't read every single line because it'll just get very long and boring. But read it yourself. I've read it numerous times with really an open mind, willing to be convinced that there is an argument in there, but I don't think there is, or at least I don't think there is a very strong argument for this. All right. This first part here, the research findings, is more or less a summary, and we'll get to these things as they become important.
Then they state recommendations right at the beginning. So actually, you'd have to read the article first; this is kind of more of an abstract section. But since it's right here, we'll jump right into it. So these are recommendations, and I've claimed they don't really show a connection; they actually just show examples, examples of this and examples of that, and parallel them. And this is reflected in every single section, including here in the recommendations: they have recommendations for improving workplace diversity, and they have recommendations for addressing bias and discrimination in AI systems. Right. So, in my view, if you make this argument, you should also make recommendations for breaking these links, or argue why they can't be broken. But all right, let's jump into some of them. And it is really a mixed bag here, really. Some recommendations I'm really in favor of right from the get-go; you don't even need the article for those. Here: harassment and discrimination transparency reports, including number of claims over time, the types of claims submitted, and actions taken. So it's known that, especially in these larger companies, sexual harassment claims often either go down in bureaucracy or are kind of hushed under the table or something like this. What you have to recognize is that a human resources department of a large company isn't there to serve the human resources; it's there to serve the company providing human resources. That's why a sexual harassment claim to an HR department is just a potential lawsuit, and that's why they don't want to take it seriously, except insofar as it must go away really quickly. So I think to force companies, or to ask companies, to be more transparent, to take these accusations of sexual harassment and assault and also discrimination more seriously, is a very valuable goal, and I fully, fully support this. Also here: commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated and promoted. The larger the company gets, the less transparent this process usually becomes, or the more bureaucratic, and the more people are able to game it and distort it. So I feel it's always good to be transparent around: okay, this person provides this much value to the company, therefore they should be compensated according to that, or at least be transparent about it. So these are kind of recommendations I like. Then, recommendations that really go in a different direction are something like this here: change hiring practices to maximize diversity. And this is reflected in other points: increase the number of people of color, women and other underrepresented groups at senior leadership levels of AI companies across all departments. These things are usually within company diversity goals and so on; it doesn't really say how to do it. So as such, they're not really recommendations yet; they're more like goals. But here, recommendation seven, I think, is the crucial one: ensure executive incentive structures are tied to increases in hiring and retention of underrepresented groups. This is a bit of coded language, but here they talk about executive incentive structures tied to hiring and retention of underrepresented groups.
This basically means: if you are a manager or someone in charge of hiring or promoting, and you hire or promote an underrepresented person (and since they're talking about gender and race here, that means if you hire or promote a person of color or a woman), you will be compensated more. So at the end of the year, you'll somehow have more money, like more bonuses or more base comp or more equity or something; you'll get more money. So this recommendation is a direct call to hire based on race and gender. This is a direct call to racist and sexist hiring, basically, to discriminate people according to their skin color and according to their gender. Which, I mean, how is this okay with anyone? How are people even able to state this in a high-profile report like this and get away with it, and not have people criticize them? This directly calls for people to be treated according to their gender and race, and probably as directly as you can go without getting into actual legal trouble. But yeah, I'm really, really against such practices. I just don't know how this can ever be thought of as a good thing by anyone. All right, so, well, in my mind, this recommendation and the transparency recommendation kind of run counter to each other. Because if I commit to transparency about how people are hired, okay, now I can transparently commit to being racist, I guess. But if I say, okay, I'm going to hire and promote people based on how much value they provide to the company, then yeah, I'd much rather have that than saying I'm going to hire and promote people based on their skin color. Alright, so let's actually jump into the report. I'm not going to go through the recommendations for addressing bias and discrimination in systems; these are fairly general and common. So, as I said, we'll skip over most of the things in the report. So, introduction. They start out with: there is a diversity crisis in the AI industry. Here they give some numbers, like 15% of AI research staff at Facebook and 10% at Google are women. So these are some fairly well-known statistics about how the AI field is currently kind of gender- and race-skewed. Then they say, they claim in bold: the diversity problem is not just about women. It's about gender, race, and most fundamentally, about power. It affects how companies work, what products get built, who they're designed to serve, and who benefits from their development. So I find this word power, and this notion of power, a lot in this report; it appears again and again and again, in power dynamics and power dynamics among groups. It paints a worldview where these different gender and race groups kind of struggle against each other to gain power over one another, and whoever's in power will try to remain in power, in alliance with their gender and race group, and try to keep the other groups down. I'm not sure that's the correct view of the world. In my mind, the world is comprised of individual people that want to achieve something for themselves, and they would like to prop themselves up. Whereas in this worldview, it's like: I'm going to use the power of my group to keep other groups down. I don't know which worldview you subscribe to, but I find the world is comprised of individuals. Yeah, and this is not discrediting that some people have it harder because of their gender or race.
But to see the entire world as a power struggle between these groups, to me, that's, yeah... I'm not going to point out everywhere this power wording appears, but it appears a lot, and it really shapes how the report reads. You have to kind of remember: if you're a white male (and currently the field is comprised of 90% white males), and you have like 10 hours, let's say you have 10 hours to do something, right, you can either choose to put down some other groups, like put down groups that you're not part of, or you can choose to invest these 10 hours in propping up yourself. If I'm a white male, I profit minimally from keeping the other groups down, because guess what, I still have to compete with the like 1 billion other white males there are. It's not going to help me to keep down anyone else. And especially, it's moronic: who does that? Who, except the most fringe people, has allegiance to their race or gender rather than to the people they admire and respect and like to work with? So if I have like 10 hours today, I'm going to rather spend them on propping up myself compared to everyone else, and I don't care what gender or race they are. So to me, that's a much more accurate or, I don't know, plausible worldview. But just be aware that this report really takes on the language of groups and power between groups, and groups trying to, you know, gain power and keep power and keep others from having power. All right, so they say: to date, the diversity problems of the industry and the issues of bias in the systems it builds have tended to be considered separately. We suggest that these are two versions of the same problem: issues of discrimination in the workforce and in system building are deeply intertwined. Moreover, tackling the challenges of bias within technical systems requires addressing workforce diversity, and vice versa. So I think this here actually is like how I described the argument, and they kind of restate it multiple times in slightly different ways, but I think this is the core. And I really think I'm not misrepresenting the article here, in that this is what they are setting out to do. They're setting out to say: okay, the diversity, the kind of unequal representation in the workforce, and the bias in some AI systems are causally linked to each other, and tackling one requires tackling the other. So yeah, if I'm misrepresenting them, let me know, but I really think I'm accurately representing their argument. So what they do, as I said, is give examples of one and of the other, and they're also really keen on discrediting attempts to solve the problems of bias in a different way. They point a little bit to this here in the introduction. They say: in the face of growing evidence, the AI research community and the industry producing our products have begun addressing the problem of bias by building on a body of work on fairness, accountability and transparency. So fairness, accountability and transparency research concerns these issues. On one hand, there is research showing that some products are unfair or untransparent and so on.
On the other hand, it tries to devise algorithms that are more fair according to some notion, or more accountable and transparent, meaning the algorithm can say why it made a certain decision rather than being a deep learning system you have no insight into. These are active fields of research, definitely very interesting to look into. But the report says, a bit dismissively: yes, we now have methods for adjusting AI systems to produce a result deemed fair by one of various mathematical definitions. You can already see in the language that they don't really like this research, and in this report they try to discredit it, or at least claim it doesn't solve the whole problem, because their point is that you have to address the diversity issue in the workforce in order to fix the problems. To this I just want to say: no. You can criticize the fairness, accountability and transparency research field for not having solved the problem fully yet. But in principle, if I'm being delivered an algorithm, and the fairness literature has been applied to it, and someone tells me, here is a proof, the algorithm is fair, then I really don't care who made that algorithm. As long as it's fair, the problem is fixed. If the bias is gone, the problem is fixed, and I don't care who fixed it; I don't care if the person who fixed it is black or white or purple. They really have to make the counterargument that this is not enough, in fact that it's a fundamentally flawed approach, and I don't think they succeed in doing that here. They go on to say we should expand our view to consider not only how AI tools can be biased technically, but how they're shaped by the environments in which they're built and the people who built them. Again, this focus on who builds the AI system. I don't care who builds it; I care what it does. Just as, if I hear an argument for or against something, I don't care who makes the argument; I care what the argument says. This is like an ad hominem attack on an entire community; that's how this report appears to me. They say: currently, large-scale AI systems are developed almost exclusively in a handful of technology companies and a small set of elite university laboratories, spaces that in the West tend to be extremely white, affluent, technically oriented and male. So that's their fundamental problem: these spaces are skewed in one direction. Interestingly enough, their problem is not that these people all live within 20 miles of each other around San Francisco; that seems to be no problem at all, as long as we get enough people of color and women into those 20 miles. But that's the issue they point out here. Alright, so they go on.
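As a quick aside, to make "a result deemed fair by one of various mathematical definitions" concrete, here is a minimal sketch of two common fairness criteria, demographic parity and equalized odds. All the numbers are made up purely for illustration; this is not the report's method, just what such a definition checks in practice.

```python
import numpy as np

# Made-up decisions for 1000 applicants: a model that selects group 0 at a
# higher rate than group 1, so the metrics have something to detect.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)          # protected attribute (0/1)
qualified = rng.random(1000) < 0.5             # hypothetical ground truth
decision = rng.random(1000) < np.where(group == 0, 0.6, 0.4)

# Demographic parity: P(decision=1 | group) should match across groups.
rates = [decision[group == g].mean() for g in (0, 1)]
print("selection rates:", rates, "parity gap:", abs(rates[0] - rates[1]))

# Equalized odds: true/false positive rates should match across groups.
for g in (0, 1):
    d, q = decision[group == g], qualified[group == g]
    print(f"group {g}: TPR={d[q].mean():.2f}  FPR={d[~q].mean():.2f}")
```

A "proof of fairness" in the sense I mean above would be a guarantee that gaps like these are zero, or below some threshold, for the definition you care about.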
Just to highlight it again, they say that both within the spaces where AI is being created and in the logic of how AI systems are designed, the costs of bias, harassment and discrimination are borne by the same people: gender minorities, people of color, other underrepresented groups. And similarly, the benefits of such systems, from profit to efficiency, accrue primarily to those already in positions of power, who again tend to be white, educated and male. This, they say, points to a systematic relationship between patterns of exclusion within the field of AI and the industry driving its production on the one hand, and the biases that manifest in the logics and applications of the technologies on the other. They try to make this connection by noting that the costs and the benefits of these two things fall on overlapping sets of people. But it's just a parallel, and I don't even think it's true, because they kind of argue against themselves later. They also say this takes much more than technically driven problem solving: research requires looking at the gender and racial categories within which humans think. In short, studies of discriminatory systems need to ask who is harmed, who benefits, who gets to decide. So: who bears the cost, who reaps the benefit, and who has the power. And: as we seek to understand how AI disadvantages some, we also consider how it works to the advantage of others. Keep that in mind; that's the lens through which they analyze everything, one that, as they put it, acknowledges power relationships and centers equity and justice. They want this bigger picture. So, keep that in mind. They then go into a section called "Which humans are in the loop? How workforces and AI systems interact." From the title you think: okay, here's where we get in, here's where we make the argument. And they start by listing examples of how AI systems can be discriminatory. First, the example that Amazon had developed an experimental hiring tool to help rank job candidates. By learning from its past hiring preferences, Amazon hoped that the resume-scanning tool would be able to efficiently identify qualified applicants, comparing their applications to previous hires. The system quickly began to downgrade resumes from candidates who attended all-women's colleges, along with any resumes that included the word "women's." After uncovering this bias, Amazon engineers tried to fix the problem by directing the system to treat these terms in a neutral manner. The company eventually abandoned the tool when they were unable to ensure that the algorithm would not be biased against women. Gender-based discrimination was built too deeply within the system, and into Amazon's past hiring practices, to be uprooted using a purely technical approach. The way this is written, I find quite dishonest. But let's analyze what happened here. Their final claim is that gender-based discrimination was built too deeply within the system to be uprooted using a purely technical approach. This is one of their arguments: technical approaches don't help, because the Amazon engineers tried to fix the problem.
But when they were unable to ensure the algorithm would not be biased against women, the tool was scrapped. If you read this, I really get the impression that's not what happened here. What most probably happened is: Amazon built this tool and fed it its past hires, and we know about dataset bias, bias inherent in datasets. If your dataset is skewed, the model tends to pick up on the skew and become skewed itself. I would actually argue that most or all of the examples they state here are examples of such biased datasets: the cause of the bias is the dataset the systems are trained on, not the person who wrote the training code or built the deployment. But it doesn't matter. You're Amazon, you built this tool, and you realize it discriminates against people having "women's" on their CV. That's pretty bad, PR-wise. So you tell your engineers: fix the problem. The engineers come back and say, okay, we fixed it. And then you ask: can you ensure me that the algorithm will not be biased against women? Because if even the slightest bias exists, if one journalist finds one example where a resume is down-ranked because it contains the word "women's," then we're screwed. And the engineers will say: no, we can't guarantee that; it's a deep learning system; we can't give you a proof that it's not biased. If you're a smart executive, at that point you scrap the tool, because the potential PR downsides are just huge, and probably you've also realized the tool isn't that handy compared to your recruiters doing their job, because your recruiters might actually be good and have been doing this for a while. So the fact that this tool was scrapped is probably much more the result of a PR calculation. And independently of that, to say "gender-based discrimination was built too deeply within the system to be uprooted using a purely technical approach": what is this? It's just trying to discredit technical ways of going about solving this problem. I'm pretty sure if someone comes to me and says, here is this tool, and I can mathematically prove to you that it's not biased, then the problem is solved. And I really don't see how the person training the algorithm, or the person researching such an algorithm, has any influence over how the algorithm works, because they're not the ones making the dataset; or if they are, then they can make a better dataset. And if a person comes along and makes a better dataset, that fixes the problem, no matter what skin color that person has. So this link is just not demonstrated here, or anywhere in the report. But this Amazon example is the closest the report comes to making the point. Earlier I drew this diagram, workforce and AI bias, and since here the AI system is used for hiring the workforce, one could at least claim that this direction of the link is somewhat demonstrated. It's a weak case, I would agree, but it's the closest they come.
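Before moving on, here is a toy illustration of the dataset-bias mechanism I described above: a model trained on historically biased hiring labels learns to penalize a proxy feature, here a hypothetical "attended a women's college" flag, even though the protected attribute itself is never shown to it. Everything is synthetic and invented for this sketch; it is not Amazon's system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
skill = rng.normal(size=n)                   # actual qualification
gender = rng.integers(0, 2, size=n)          # 1 = woman; hidden from model
womens_college = (gender == 1) & (rng.random(n) < 0.3)  # proxy feature

# Historical hiring labels: driven by skill, but penalizing women.
# This is the bias we pretend lives in the past data.
past_hire = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

# Train only on skill and the proxy; gender itself is never a feature.
X = np.column_stack([skill, womens_college.astype(float)])
model = LogisticRegression(max_iter=1000).fit(X, past_hire)

# The learned weight on the proxy comes out clearly negative: the model
# reproduces the historical bias without ever seeing the attribute.
print("weight on 'women's college' feature:", model.coef_[0][1])
```

The point is the same one I make in the text: the bias enters through the training data, and a cleaner dataset, or a corrected label, removes it regardless of who writes the code.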
And to go in that direction, you'd have to argue that the workforce somehow makes the AI system biased. No: the workforce influences the dataset. And how do you train a hiring AI? Ideally on performance. Each employee has a performance over time, and the AI system will look at that performance. So even if the system is initially biased because it learns from the past recruiters, it will learn that, actually, if I always forgo these women, I don't get as much performance out of the workforce, so I should correct for that. If you train the AI system on a good metric, this problem will even itself out. So yes, this could be counted as one point in their argument, but I think it's a very weak point, and only because this particular AI system is used for hiring. The point they actually want to make is a much larger one: that general bias in AI systems contributes to workforce imbalances. And for that, you'd somehow have to say that the AI system influences society at large, and society at large then leads to the workforce being skewed. That's just not strong enough, in my opinion, and the other direction isn't strong here either. And the examples only get weaker from here on. They go on to say this is just one of many examples that show how the functional logics of a given technology echo the gender and racial dynamics of the industry that produced it. That's the claim: "echo" the gender and racial dynamics, though they're actually making a stronger claim, namely a causal one. They give the other example of Amazon's Rekognition facial analysis service, which previously demonstrated gender and racial biases worse than those of comparable tools: it failed to see dark-skinned women while being most proficient at detecting light-skinned men. They go into this example again later, where they basically also state that yes, this is an issue of the dataset, the dataset being much more comprised of white men. But then they have to make the turnaround argument and say: well, the dataset is a reflection of society, and part of society is the workforce. Again, this argument only works if you already believe the conclusion; otherwise there's actually no argument there, or no solid one. But then they say: Amazon's initial response to such criticism has been to try and discredit the research behind it. Let's first discuss this. Amazon is of course the accused here, a multi-billion-dollar company, and the criticism is something that is PR-wise very bad for them. That they try to discredit the research could well be dishonest on Amazon's side; it's like the tobacco companies trying to discredit the smoking research. But still, that doesn't mean Amazon is automatically wrong. It could actually be bad research. You have to go and look at what Amazon is saying, what the research is really doing, and whether Amazon is right or wrong. I'm completely open to Amazon being wrong here, but you still have to go look. And this citation here, I've tried it, isn't to Amazon's response.
It's to a Medium article, and the Medium article doesn't even include Amazon's response. I've looked; maybe I missed it. It also doesn't link Amazon's response; maybe it links something that links something that includes it in some way. But basically the Medium article only states that Amazon has been denying or been critical of this. If you write a sentence like "Amazon's initial response to such criticism has been to try and discredit the research behind it," I at least expect the citation to lead me to Amazon's response so that I can verify what they're saying. I'm willing to chalk that up to incompetence rather than malice. But then they go on: this reaction is evidence of the wider problem. The research was conducted by two well-regarded AI researchers who are women of color. By attempting to publicly discredit their expertise and research methods, Amazon is reinforcing the same kinds of prejudice and erasure that the research critiques. Here they go straight to the identity of the researchers, playing the race card straight out. This is maximum dishonesty, unless Amazon had said something like, well, these women of color clearly have no idea what they're doing because they're women of color. This is coded language for either saying you're not allowed to criticize people of color because they're a minority, or saying that Amazon is racist and only criticizes the researchers because they're women of color and doesn't take them seriously. Both are abhorrent. Again, I'm perfectly willing to accept that Amazon's critique of this research is wrong and not well intended, since they're the ones being attacked, but you still have to examine the critique, rather than saying, well, they shot against women of color and therefore somehow their counterargument is irrelevant or even racist. I find this dishonest; I don't know about you. Moving on. They state a number of examples of bias and discrimination in the workforce, and a lot of the time they mix the gender and race imbalance in the workforce with things like sexual harassment not being taken seriously by companies, and things like gender or race pay gaps, which I'm open to accepting exist and are even intertwined. Just to tell you what's happening, since we're skipping: it's a mixture of these things. They say these issues are systemic: there's a close relationship between workplaces with discriminatory practices and discriminatory tools, a feedback loop that is shaping the industry and its tools. Again, I think I've demonstrated enough by now that I'm representing their argument as they intended it: namely, that there is this causal link and loop between the two things. And they shoot against the fairness literature by saying: from this perspective, locating individual biases within given technical systems and attempting to fix them by tweaking the system becomes an exercise in futility; only by examining discrimination through the lens of social logics, who it benefits, who it harms, and how, can we see the workings of these systems in the context of existing power relationships.
So they say these issues can't be fixed technically; fixing these systems won't help if that's the problem. And I agree: if that causal link actually exists, then technically fixing the system might not solve the problem. Though I'm not even sure about that; if you technically fix a system like this, then you technically break the causal link and thereby fix the problem. But again, all of this is based on the hypothesis that they've already demonstrated their conclusion, which they haven't done, and don't do, in the entire article. The next section is "Who makes AI?" I don't know about you, but the previous section was titled "how workforces and AI systems interact," and apart from the one case of the AI system being used for hiring the workforce, the one instance where there could actually be a causal direction from bias to misrepresentation in the workforce, there wasn't really anything in there that shows how these two interact, especially in a causal way. "Who makes AI" is broadly about the gender and race imbalances, the unequal representation, in the workforce. We're going to skip the part arguing that companies' diversity statistics aren't really accurate, or can be massaged by the companies, which is true; companies will always try to maximize their profits, so even if they put out such a report, critical thinking is definitely in order. Alright, the next section is called "The discrimination feedback loop." If in the earlier section you felt like, here we go into the meat, then with this title you must feel: okay, now we're actually going to see how this loop works and how the two things are really linked, how one causes the other and vice versa. So let's jump in. They say AI systems increasingly play a role in our social and political institutions, including education, healthcare, hiring, and criminal justice; therefore, we need to consider the relationship between the workplace diversity crisis and the problems with bias and discrimination in AI systems. I don't see the "therefore." If there is a relationship, we need to consider whether there is a relationship. Okay, granted. They say fairness, accountability and transparency research is playing an emerging role. What they mean here is the side of that research that shows there is a problem. I told you there are two sides: one side shows there is a problem in current systems, and the other tries to fix it. They're very much fans of the side that shows there is a problem, and they cite some of these problems here. We've already seen some, but they show more: Facebook's ad delivery system led users to be shown ads for housing and employment in a discriminatory manner; a 2019 study found significant racial bias in a widely used commercial algorithm used to determine whether patients will be enrolled in care management programs. So these are just more examples of AI systems being biased. Then they say taking a contextualized view may enable a more extensive account, and by "contextualized view" they mean anything more than just a technical approach to solving these problems.
A more extensive account of bias to emerge: future work could examine the politics of system design, study AI systems in situated realities, ask why a system was designed in a particular way, how it was constructed, whose interests it serves, and the metrics by which its success or failure is assessed, rather than solely focusing on improving existing datasets or individual algorithms. Yeah, I agree: we always have to pay attention to these things, especially the metrics by which success or failure is assessed. But a lot of the time this is rather straightforward: the metric, especially in commercial applications, is money. If I have a system that recommends ads to people, shows them ads and personalizes them, I simply want to maximize my revenue. I want to sell someone something, and all I want to know is how likely it is that the person will buy the thing. So sometimes it's really valuable to consider what capitalism is. The system we're working in is a form of limited capitalism, but mostly capitalism, and capitalism is very greedy: all corporations basically want to do is make money. On the other side you have discrimination, meaning actively maintaining an unequal distribution across groups. Sometimes these go hand in hand; sometimes you can make more money by discriminating against a certain type of people, and that's a really bad scenario, really something where we need to take action. But a lot of the time, these two things stand in opposition to each other; draw a little arrow here: not compatible. If I want to sell someone something, I maximize my profit by not caring, by accurately assessing how likely it is that the person buys the thing. If instead I start discriminating according to skin color, saying, no, I don't want a person with that skin color to be able to buy this product, I want to keep them down, then I forgo profit: even though that person could buy the thing, I forgo the sale. Also, if I'm in charge of hiring and I don't like people of a certain gender, but they would actually be really good employees, then I'm paying more for less qualified people, just because I'm biased and I'm unjustifiably down-ranking people of the gender I don't like. So oftentimes you have to ask yourself: are people fundamentally more greedy, or more discriminatory? If push comes to shove, would they rather have more money, or would they rather keep their own race and gender group in power? You have to ask this of corporations, you have to ask this of people, and in my experience and view, people are much, much more greedy than they are willing to give up money for discrimination. So if we look at the metrics by which success or failure of AI systems is assessed, then I would argue a lot of the time those metrics are actually profit incentives.
And especially if we look at dataset construction: if there is a skewed dataset that makes my AI system biased, that actually loses me money, and the company would profit a lot from building a better dataset. So looking at metrics actually makes a lot of sense to me, and I'm very much in favor of that. And I think that by designing accurate metrics and then getting the best possible data to maximize them, you will oftentimes actually eliminate such forms of discrimination. Again, there are situations where you don't, and we have to be very cognizant of those. They go on: also examine more thoroughly how societal discrimination surfaces in data provenance, examining the history and process of dataset construction and considering how cultural norms and stereotypes were enumerated and represented at the time of data creation. This is a big issue, yes. Dataset construction at the time of data creation is a big issue in these systems, and a lot of bias, I would argue most of the bias we've seen here, arises from corrupt datasets, from datasets that were constructed in an already biased way; the AI system trained on these datasets simply replicates the bias. So I think that part is very correct. They go into this example: the Labeled Faces in the Wild dataset contains over 15,000 images, of which only 7% are of black people, because these images were gathered from the news media of the early 2000s, which predominantly featured white men in positions of celebrity and power. Exactly: if you train a system on this dataset, the system will inherit the bias. This is a classic example of a corrupt dataset. And this isn't only about race and gender. If you take pictures from IMDb, for instance: the CelebA dataset used in a lot of current GAN research is collected from IMDb, so you get overwhelmingly pretty-faced people on there, and your generative model is mostly going to produce pretty-faced people, since movie stars tend to be a lot prettier than average humans. So the dataset construction process, I think, is currently the biggest source of bias in AI. It's interesting that they go into this here and want to make the point that this is because of society and power in society, that the dataset reflects that. But I would argue: if someone makes a dataset that doesn't have this bias, then the problem is solved, and I don't care who makes the dataset. The link between the workforce and the bias is really broken by an argument like this, because as soon as we have a correct, unbiased dataset, we can mitigate the bias. And they even go into this themselves. They say, down here, that researchers looked at these facial recognition systems and assessed what we saw earlier: higher error rates for darker-skinned women than for any other group, and the lowest error rates for light-skinned men. To measure this disparity, these researchers developed a new dataset that is more balanced, both in terms of gender and skin color. Good! Make a more balanced dataset to actually train and evaluate on, and the problem is solved; and I don't care at all what race and what gender these people are. Well done: good people made a good dataset.
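As an aside, the mechanical part of "make a more balanced dataset" can be very simple. Here is a toy sketch, with a synthetic group label and numbers mimicking the skew described above, of rebalancing by oversampling. Collecting genuinely new data, as those researchers did, is of course better than duplicating what you have; this just shows the principle.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dataset with a heavily skewed group label:
# 93% group A, 7% group B, echoing the skew discussed above.
groups = rng.choice(["A", "B"], size=10000, p=[0.93, 0.07])

# Rebalance by oversampling the minority group up to parity.
idx_a = np.flatnonzero(groups == "A")
idx_b = np.flatnonzero(groups == "B")
idx_b_up = rng.choice(idx_b, size=len(idx_a), replace=True)
balanced = np.concatenate([idx_a, idx_b_up])

print("minority share before:", (groups == "B").mean())          # ~0.07
print("minority share after: ", (groups[balanced] == "B").mean())  # 0.5
```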
And then we've solved the problem. What's the issue here? Why would you ever care what these people look like if they do good work? To me, this actually breaks their own argument. I don't know why they included it here, only to then suggest there is a link to the workforce, when it's obvious that if you fix the dataset, you can fix the recognition system. Alright, we'll jump a couple more paragraphs, except where they shoot again: to this point, a focus on fixing technical systems in isolation, without examining their broader context of use and the power dynamics that attend these issues, is not just limited in its intervention, it can actively cause harm. So if you fix the problem in a technical manner, they argue here, it can actively cause harm. The example they give is that facial and image recognition systems are often applied in service of police surveillance, which disproportionately harms poor people and communities of color. There's a quote from a person who says: is this not social progress, to make black people equally visible to software that will inevitably be further weaponized against us? We are considered criminal and more surveillable by orders of magnitude. Whatever claim to a right of privacy that we may have is diminished by a state that believes we must always be watched and seen. So this is an example where, by improving facial recognition for black people, you make the police better at surveilling them, which is true. And it is an ethical problem that the police are able to use these facial recognition systems to surveil people. That's a massive privacy problem; that's a massive problem in how much the state is allowed to overreach. So I think that's a discussion in itself. But at the very beginning, I asked you to remember this whole notion that we always have to look at who benefits from the way the AI system is constructed, who is harmed by it, who benefits from how the metrics are shaped, and so on. In this case, we actually have a perfect example where, if the face recognition system is very inaccurate for black people's faces, that actually helps them in this societal context. So by the logic of this report, that must mean the bias somehow works for them and thereby the system is good, or something like this, and by fixing it, you actually make it worse. They literally say it can actively cause harm. So I think this pretty much argues against their own earlier point that we always have to look at who benefits from the system: here, if the face recognition system can't recognize you, you actually benefit. So I don't think that argument works in any case, except if you only look at it when you want to look at it.
So that is pretty much an exact counterexample to the argument that misrepresentation in the workforce leads to the biases in the system, if we interpret it through the lens of who it costs and who it benefits. And again, the core thing of that section was the feedback loop, and the feedback loop isn't demonstrated at all: just examples of systems that are biased, because of datasets that are biased. Alright. The next section is "Corporate diversity: beyond the pipeline problem," and this seemed an odd inclusion when I first read it, but it makes sense if you know what these people set out to do. What they set out to do is argue that we must fix the workforce: we must hire more people of color, more women, and promote them more. And they very much have a problem with the pipeline argument. The pipeline argument is the following. Consider the educational and career paths of people. At the beginning, you have 100% of people represented, and most of these people go through school; the area here is the population, this axis is time, and this axis is the volume of people. Then some of them pursue higher education and some drop out, so this gets smaller. Then very few go into computer science, and even fewer go into AI. What you end up with is a tiny sliver of people who actually go into AI. This is called a pipeline, and it has various junctions, like going into higher education, choosing your major at university, choosing a subfield of computer science, where the volume of people drops significantly from one point to the next. Now, instead of considering all of society, consider on one side all men and on the other side all women: they all go through high school, then university, then very few into CS, even fewer into AI. What you'll find, and I may have drawn it wrong here, is that this sliver is smaller than that one: comparatively, fewer women than men end up in the AI field. And this is over time. At the beginning you have roughly a 50/50 men-women distribution in society (I think slightly more boys are born, but I could be wrong about that). Then, going through time: high school is still roughly equal, depending on the country; at university there are actually slightly more women; and then in computer science, and this is relative, which is why I norm it at 100%, otherwise all these lines would go down together, you have comparatively many more men than women. And for who chooses AI, I don't know of statistics specifically on choosing AI from within computer science, so I'll just assume the ratio stays the same. So in the AI field you have many more men than women, presumably because far more men than women already chose computer science, or any technical field, as their major. That's the so-called pipeline argument. So where does AI companies' hiring come in?
AI companies come in here: they hire at this point, after your university degree, presumably. There are exceptions, but let's say they hire after your university degree. Therefore, they basically have to choose from this distribution. And if they just say, okay, we'll take the top 10% of people, we'll hire the good people, and we don't care what gender they are, then the hires will end up with the same distribution as the graduates. A company hiring gender-blind from an 80/20 pool will end up with an 80/20 workforce. That's the companies' pipeline argument (I'll sanity-check the arithmetic right after this). And the authors don't like the pipeline argument, because the pipeline argument basically says the problem is somewhere earlier in the pipeline; the problem isn't that companies hire wrongly or that companies deselect people. And because they want to make the argument that companies should hire in a different way, they can't have that, so they argue against it. Now, if the pipeline argument were wrong, arguing against it would actually be very easy. You would simply say: hey companies, look, in your company you have an 80/20 men-to-women distribution; that's pretty unequal; but the pool of university graduates you choose from is actually 50/50; so there's no reason your hiring practices should cause this inequality; therefore, we can clearly show you engage in discriminatory hiring, and you should stop it and hire more women and people of color, because your hiring practices are the problem. But that's not the case. How do I know? Because if it were the case, if you could actually show with numbers that the pipeline argument is wrong, they would absolutely state it in this report. Instead, they have to ramble around it for several pages, which we'll mostly skip, mainly because it really is the case that these companies hire from a pool of unequally represented people. The only argument you can then make is: well, if you equalized things at the company end, maybe that would fix the earlier stage where the problem actually is. The argument often made is that if young girls choosing their majors have no one to look up to, no strong women in corporate CEO roles, they will think the field is not a climate for women and will elect not to go into it. That's a valid argument; I'm completely open to it. But it's the only argument you can make, and even if you determine that to be the cause, I would still not support racist and sexist hiring practices. Do something else: if it really is an anti-woman environment, change the environment; if it's just perceived as such, change the perception. But do not engage in discriminatory hiring practices, because there's always someone losing out unfairly under these practices.
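Here's the sanity check I promised: the arithmetic of the pipeline argument in a toy simulation. Give two groups identical score distributions, make the pool 80/20, hire the top 10% blindly by score, and the hires come out 80/20 as well. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical graduate pool: 80% group M, 20% group W, with identical
# score distributions, i.e. no difference in ability between groups.
is_w = rng.random(n) < 0.2
score = rng.normal(size=n)

# Group-blind hiring: take the top 10% by score.
cutoff = np.quantile(score, 0.9)
hired = score >= cutoff

print("share of W in pool: ", is_w.mean())          # ~0.20
print("share of W in hires:", is_w[hired].mean())   # also ~0.20
```

Since the score is independent of the group, the hired share necessarily mirrors the pool share; an imbalance in hires therefore reflects the pool, not the selection step.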
And that's something I'm not willing to go into, something I'm not willing to engage in, and I don't think people should be engaging in it; that's why it's illegal, actually. So let's look at a few points from how they go over these pipeline studies. They define the term: a term used in industry to reference the absence of diverse candidates in the hiring pool, and to justify the inability of large firms to achieve diversity due to scarcity. So they basically agree with the definition I stated. Companies that are challenged on their lack of diversity frequently cite pipeline studies as proof of the persistent challenge of finding enough women and people of color to hire. Yes. But, they say, the evidence suggests otherwise: for example, in 2016, Facebook's chief diversity officer wrote that "it has become clear that at the most fundamental level, appropriate representation in technology or any other industry will depend upon more people having the opportunity to gain necessary skills through the public education system." Well, yes, that's something I would agree with, and it clearly addresses the region of the pipeline where the actual problem is happening. I'd say that's a very good statement from Facebook's chief diversity officer. They counter: but as the Center for Investigative Reporting's study of tech company diversity data found, 91 large tech companies headquartered in Silicon Valley managed to hire higher percentages of black, Latino and multiracial employees than Facebook that year. Well, just because other companies employ racist and sexist hiring to improve their diversity numbers doesn't mean Facebook has to do this. Just because other companies do this doesn't mean it's a good thing or that that's how you should go about it. Facebook is simply saying: if we want to hire without being racist or sexist, if we want to just hire the best people, then more of the best people have to be in the pipeline; more people have to gain access to educational opportunities so we can then hire them. Whereas these other companies probably make a big effort to say: even if you're not as educated, even if you're not as qualified as this other person, we'll hire you because of your skin color. I don't think that's an argument in favor of what the report is claiming; I don't think that's evidence that the pipeline argument is invalid. Alright, so they go into core themes in pipeline research and give an overview of it. This research sometimes examines why, for example, women don't choose to go into computer science as much, and sometimes focuses on their perceptions of the field: their perception of its stereotypes, of its culture and whether it suits them, of how qualified they are for it, and whether those perceptions are true or false, and so on. The research examines a whole variety of things, and it's actually very interesting to read through. I want to point out this passage: other studies suggest that gender is correlated with a person's motivations for pursuing a career in the field.
Women, and particularly women from low socioeconomic status or minority backgrounds, are more likely to see computing as a versatile profession that provides an opportunity for secure employment, higher pay, and better social standing. Moreover, their interests go beyond the technical aspects of computing, focusing instead on the purpose and application of software. However, such interests are often de-emphasized in computer science curricula that prize technical skill and its applicability to industrial settings above all else. I find this really interesting, because it's basically saying that women have different interests than men on average, which is almost heresy to say in this context; people will come after you if you suggest something like this, and yet here they just state it. Remember this for later. It's really funny that they say, yes, the interests could be different for women than for men, and we might have to adjust our curriculum to be more suited to these different interests. As I said, usually this is forbidden to say. Alright. They go on to the limitations of pipeline research. These are fairly common limitations of social science studies in general, which I won't go into much. Again they state that we don't only have to examine this; they basically say the problem is actually the culture and the perpetrators. I don't remember exactly where this is stated, but they again say we have to examine who benefits from its present construction, who is underserved within the current tech ecology, and how these dynamics might be untangled, and so on. So again, these power relationships between the different groups, which I don't agree is in large part what's happening. They say it's worth considering the scope of these studies: by and large, the recommendations they issue are limited, targeted at the administrators of university computer science programs seeking to broaden the diversity of their student body. Yes, that's exactly where we saw the problem appears to be! The reason they have a problem with these studies is that the studies focus on the point where the discrepancy actually appears, whereas they want to claim that no, you should focus on a different point, namely hiring and promotion in companies. They say: though important, so at least they acknowledge it's an important problem, this is a narrow frame through which to view potential solutions to barriers to inclusion; it does not address the companies that hire computer science students, the peers responsible for promulgating stereotyped views or engaging in hostile behavior, or the broader social conditions that may influence students' success in computer science programs. Actually, the research, even some of the examples they've included, addresses all of this: the stereotypes, how peers act, how companies act and hire, and whether people have something to look forward to and how that influences their decisions.
They also say the studies are frequently cited by those within corporate environments to justify their own lack of diversity, as they situate the locus of change outside of the corporation itself; as such, pipeline studies are disproportionately emphasized as part of the broader research agenda on diversity and technology. Again, they state that companies use this to get out of responsibility, and of course companies are going to try to use this to get out of responsibility. I agree at least with that. Certainly. Alright, so the last part here is "pipeline dreams: after years of research." Basically they say the pipeline research hasn't borne fruit; it hasn't led to meaningful change in the field, even though we've researched this for years. Among the reasons they give: it tends to place the onus of solving Silicon Valley's issues of discrimination on those who are discriminated against, rather than on the perpetrators. I find this word choice really interesting. Perpetrators. Again, the group of white men trying to put down everyone else; that's the perspective the article takes. And it's not even true: a lot of this research actually says the reason why, for example, women don't choose to go into computer science is the male-dominated culture within these corporations, the perception of it not being a woman-friendly environment, the fear of sexual harassment, and so on. But moreover, I just wanted to point out the choice of the word "perpetrators." I don't know how you get to this word; it really shows the worldview of the authors, in my opinion. Alright. So, having said that pipeline studies haven't been beneficial and companies haven't done much, they turn to worker-led initiatives, which I'm going to skip; it's a report of what happened at companies where the workers themselves organized. And then the last section is the pushback against diversity. In this section, they document, and argue against, people who have stated counterarguments, mainly to their recommendations: the recommendations being, let's change hiring and promotion to be based on race and gender. The pushback is characterized in different ways, so we'll go through this. This is the last section; I know it's a long video already. If you're still here, like the one person who's still here: hi, I hope you're doing well. Keep hydrated. They say it's a critical time: we now see diversity itself being weaponized. This growing awareness, accompanied by demands for inclusion and equity, has led to some change, but there has also been resistance, especially among those implicitly privileged by the status quo. So again, jumping straight to an attack on the person. I don't care who makes an argument against me; I go after the content of the argument. But the first thing these people state is: that's just the people who are benefiting, that's just the white men, basically. Straight to the identity of the person. That's dishonesty right there.
They say: those questioning and even rejecting the idea that racism, misogyny, and harassment are problems within the AI field and the tech industry have appropriated the language of diversity to argue that efforts to improve inclusion are in fact exclusionary, and that addressing the deeper structural challenges posed by racism, sexism and inequity is misguided. And yes, efforts to improve inclusion definitely can be exclusionary. This is the thing: just because you're fixing a problem doesn't mean the method you're using to fix it is justified and itself good. Methods to improve inclusion can be exclusionary, and some that have been proposed are. It depends on the method. That doesn't mean these people are against the goal; it means that the measure, for example implementing a racist hiring policy, even though I can definitely see it leading to more equal representation within the workforce, is itself a really bad, exclusionary, discriminating tool. So yes, I'd say it's accurate that it can be exclusionary. They say: for example, some AI researchers greeted the announcement of the Black in AI workshop at NeurIPS, a leading machine learning conference, by questioning whether the event was necessary, arguing that it would be discriminatory. But can't they question whether the event was necessary? Here I would need a discussion: what is the event for, why is it happening, what is it doing, and is it discriminatory? It could be; any event can be discriminatory. Does it discriminate based on race or gender or anything, and does it do so unjustly? The questioners could still be wrong; you can question something and be wrong. But you should be taken on your argument, whereas here, merely questioning is already treated as being on the wrong side. I don't agree with that. I don't have a particular opinion on the workshop itself, but I have the opinion that you have to take arguments at their argument value and not by who makes them or whether or not they're against a particular viewpoint. Alright. They say such pushback often centers on calls for "cognitive diversity" or "viewpoint diversity": the idea that individual differences in the ways people think and understand the world are distinctions that should be counted alongside, or instead of, other identity categories such as race and gender. Well, yes. Isn't that a very reasonable thing to say? They continue: a dozen white men, so long as they were not raised in the same household and don't think identical thoughts, could be considered diverse. I don't know if this is meant sarcastically, but clearly it's the counterpoint they're trying to make, and I would totally agree with the statement: a white man growing up in San Francisco, one growing up in rural Idaho, one in Florida, one in Western Europe, one in Russia, and one growing up on the road with his circus parents in Mongolia would definitely be plenty diverse.
They criticize this here, but how can you not see that these are valid differences? People are going to think differently independent of how they look; people are going to have different thoughts. It's important to recognize that other people think differently, and therefore to include them where it's relevant. And the counterargument the authors are effectively making is that a dozen people, as long as they don't look the same, could be considered diverse even if they were all raised in the same place, all basically live in San Francisco, and think the exact same things. That sounds just as absurd to me as the caricature they attack. So here are my thoughts on this. I am not going to pretend that I know what life is like as a woman. I'm absolutely sure that for some areas of life it is definitely valuable to listen to the experience of a woman, or of multiple women, an aggregate of women, because life is just different as a woman. Life is also different as a black person. I absolutely concede that there are things I might not be able to draw from my own life experience, because I am not of that skin color, different problems that people face, and that's why it's important to have that perspective at the table. But I'm also absolutely certain that I have no relation to someone who grew up as a child pop star from the age of 12, or to someone who grew up under a communist regime, or to someone raised in a Buddhist religious tradition. I just don't. And I don't care how they look: they have different experiences, different bodies of knowledge to draw on, and I don't see why we should draw the lines of difference exactly along race and gender. But of course, that's what they argue here: those arguments work by centering identity while flattening or ignoring power relationships. They quote Facebook's VP of engineering saying that the ultimate goal is cognitive diversity, and cognitive diversity is correlated with identity diversity; that means it's not just about getting women in tech, it's about broad voices, broad representation. And this is exactly what I would say: the reason we want a woman or a black person at the table is that they have different knowledge, different thoughts, because of their different life experience, and those thoughts are what they bring in. So including these, what the authors call "bodies," is itself about cognitive diversity. But the authors really see this from a different angle: they see it in terms of power relationships between race and gender groups, and their arguments don't make sense unless you view them through that lens. That lens, to me, is such a sad look on the world, and I think a very inaccurate and, frankly, dangerous one. They go on: instead of looking at historical patterns of marginalization, calls for cognitive diversity argue that all differences are equal. No. Calls for cognitive diversity do not argue that all differences are equal.
People making those calls are well aware that some have it harder, well aware that some differences are bigger, worse, or better. All they're saying is that race and gender shouldn't be the only things to consider, and shouldn't in themselves be considered diversity. Just because someone is of a certain skin color doesn't mean anything; it doesn't actually tell you anything about that person. So why not consider people as individuals and look at what their life has been like up to this point and what they could contribute to the discussion, rather than looking at the color of their skin? If the color of their skin played a role in their life, then obviously that would manifest in my suggestion as well. But to look at people only through this group lens is foreign to me, and I feel it's quite dangerous. So again, "calls for cognitive diversity argue that all differences are equal": the point where you have to start misrepresenting what the counterargument says is really how you know you're not dealing with a well-intentioned person on the other side of the discussion. This is really politics now; it isn't well-intended argumentation. It's someone trying to achieve a goal, and to do it they have to misrepresent the other side. And this only gets worse from here. They say this was recently exemplified in the controversy over Google's appointment of Heritage Foundation president Kay Coles James to its Advanced Technology External Advisory Council; Google's reasoning for the appointment of James was ostensibly to ensure diversity of thought by including a conservative viewpoint on the council. Alright, so Google had a technology advisory council of external people, and they included a conservative. And she is, by most metrics, a standard conservative. This is not a far-right neo-Nazi type; this is someone who holds opinions similar to half the US population, and generally, in the Western world at least, roughly half of a country's population tends to be conservative, more or less. So this is an opinion that a very large portion of the population shares, and it would be suitable to include at least someone of that persuasion on an external advisory council. You don't have to listen to her; she's not been made king. She simply gets the opportunity to input her voice, representative of that very large percentage of people. They go on to say: James is also a black woman, thus adding racial and gender diversity to the panel. So even further: this is a conservative black woman. But the pushback following James's inclusion focused on her policy positions, citing specifically her vocal anti-LGBTQ and anti-immigrant views, and highlighted why cognitive diversity is a particularly limited lens. And the pushback here was very much spearheaded by one of the authors of this article. So this isn't just reporting; I'll also criticize the pushback itself, since it's argued for in this article, and the authors are the same. So here they say she has vocal anti-LGBTQ and anti-immigrant views.
Now, I haven't gone and looked specifically at what this person has said, but given that she's a standard conservative who has held public office, I believe under George W. Bush, I have trouble believing that she holds extremely hateful opinions of the sort "these people shouldn't exist." More often, conservative people take issue with being forced to adopt certain pronouns, or with which bathrooms people use, and are generally tougher on immigration, especially illegal immigration, and so on. These are views that a large part of the population holds, and they are discussions to be had. So including this person would seem a sensible move. But they write: "In a letter opposing the appointment, a group of Google workers calling themselves Googlers Against Transphobia and Hate responded to the idea that diversity of thought justified James's addition to the council: 'This is a weaponization of the language of diversity. By appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making. This is unacceptable.'" And again, one of the report's authors was among the organizers of that letter, so that's what they themselves are saying: if you don't share our views, your views are unacceptable, not "a valid perspective worthy of inclusion." What they're saying, basically, is don't even talk to this person. You could listen and still reject the opinion, but even considering the opinion is already wrong, and this about a person who is a black woman. So the authors' idea of diversity amounts to this: people who look different, people from race and gender groups that don't have much of what they call power, as long as they all think exactly as we think. As long as they share our thoughts and hold no dissenting opinions, we want the different-looking people; but don't you dare talk to anyone of a different opinion. These authors, in my opinion, really live in a bubble, a tiny Silicon Valley or Silicon Valley-influenced space, because they're basically saying that half the people in their greater community, in their country, aren't even worth listening to, aren't even worthy of consideration. Well done; might as well discredit them all at once. I'm sure that's going to fly well with these people. Might as well start calling them deplorables and see what they do. Maybe they'll return the favor and elect a moron just to stick it in your face. I mean, that's what happened. Then: "The idea of cognitive diversity is mobilized by some in support of the claim that the AI field and the tech industry are already diverse, going as far as to support claims that not including identities like 'white' and 'male' constitutes discrimination." Yes, it can. If you include every single identity except white and male, that constitutes discrimination. Even if they're in the majority, it still constitutes discrimination: no one can help being born white and male; no one chose to be born like that.
You mostly don't choose the melanin content of your skin; you can modulate it a bit by going out in the sun, which computer science people statistically don't do very often, so there's not much leeway there. So yes, to exclude identities like these while including every other one can constitute discrimination. True. "A July 2017 memo written by James Damore, a software engineer at Google, is illustrative of such pushback. Titled 'Google's Ideological Echo Chamber' and published on an internal mailing list, the memo critiqued the company's diversity policies, arguing that biological differences between men and women, rather than bias and discrimination, help explain gender disparities at the company." I feel you can leave out the "rather than" here; the memo simply stated that biological differences can help explain the gender disparities. Damore's stated objective in writing the memo was to make the case that policies designed to achieve equal representation are unfair, divisive, and bad for business. Well, some are. The recommendations given at the beginning of this report, number seven in particular, are unfair, divisive, and I would also argue bad for business. "Supporters of Damore's point of view at times even drew on the rhetoric of the pipeline to make the case that diversity initiatives are in fact discriminatory. They argue, incorrectly, that if there aren't qualified candidates in the pipeline, then hiring those who are unqualified on the basis of identity discriminates against those who are qualified." I would say hiring anyone on the basis of identity discriminates, inherently. So the larger argument these people are making is not incorrect; it's very correct. "In an update to the memo, Damore himself asserted that he values diversity and inclusion, but his primary concern was cognitive diversity." He says diversity and inclusion is not denying that sexism exists, and the memo doesn't endorse using stereotypes. I have read the memo, and it directly says that these are population-level statistics, that there is more overlap than difference, and that you absolutely can't say anything about an individual by looking at these statistics; that's almost a verbatim quote from the memo (see the short sketch after this paragraph for how little a population-level difference tells you about an individual). So he was very much concerned with considering people as individuals, and he was basically making the same argument as the study cited earlier in the report, the one I told you to remember: that women's interests might be different, and we might shape the curriculum accordingly. That's basically what Damore said: women's interests might be different, and we might have to shape the way we do work, change the way we do software engineering, to attract more of them. That was one of his points. So he said exactly the same thing, but of course he's a misogynist, because he suggested that this could be partly due to biological differences. The way he was dragged through the mud is just crazy. The authors shoot very much against this kind of what they call biological determinism; we'll see this briefly. They say "diversity becomes an empty signifier, stripped of the histories and experiences of systemic discrimination, repurposed around ideology rather than bodies." I'd say diversity has nothing inherently to do with bodies as such. I think that reading only holds if you are already convinced of it.
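To make that population-versus-individual point concrete, here is a minimal sketch of my own (not from the report or from the memo; the effect size d = 0.5 is an assumed value, chosen for illustration and on the larger end of reported group differences in interests). It computes how much two normal distributions overlap and how often a random member of the higher-mean group actually outscores a random member of the other group:

```python
# Minimal sketch: what a population-level mean difference implies (and
# doesn't) about individuals. Assumes two groups whose scores on some
# trait are normal with unit variance and a standardized mean difference
# (Cohen's d) of 0.5 -- an assumed, illustrative value.
import math

d = 0.5  # assumed standardized mean difference between the two groups

def std_normal_cdf(x: float) -> float:
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Overlap coefficient of N(0, 1) and N(d, 1): OVL = 2 * Phi(-d / 2).
overlap = 2.0 * std_normal_cdf(-d / 2.0)

# Common-language effect size: probability that a random member of the
# higher-mean group scores above a random member of the other group,
# P(X > Y) = Phi(d / sqrt(2)).
cles = std_normal_cdf(d / math.sqrt(2.0))

print(f"distribution overlap: {overlap:.1%}")                       # ~80.3%
print(f"P(higher-mean individual wins a comparison): {cles:.1%}")   # ~63.8%
```

Even with this fairly generous assumed effect size, the two distributions overlap by about 80%, and a randomly drawn member of the higher-mean group outscores a randomly drawn member of the other group only about 64% of the time, not far from a coin flip. That is the sense in which population-level statistics license almost no conclusions about any particular individual.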
"Within hours of the memo's publication, harassment targeting minority advocates who pushed back against the claims in the memo began, with a particular focus on queer and trans workers." That's bad. But the pushback against people who voiced support was also pretty bad, given that one of them was fired, as they themselves state. "Google's vice president of diversity even locked down her Twitter account shortly after Damore's firing, responding to a barrage of threats describing her as a 'police Nazi.'" Well, yes: if you fire someone like this, you invite that reaction. Undoubtedly Google fired this guy because they thought it was the smaller PR disaster; it was probably not an ideological decision so much as a PR decision. But if you fire someone after they state something like this, it very much looks like you're firing them because you don't like their ideas and don't like what they're saying, and people are generally not in favor of censoring speech. That said, harassment is bad; don't harass people. Also that said, criticism isn't always harassment; don't conflate the two. "Damore's memo also stated that the distribution of preferences and abilities of men and women differ in part due to biological causes, and that these differences may explain why we don't see equal representation of women in tech and leadership. This assertion hinges on a flawed assumption: that identities like gender and race are essential and fixed biological attributes, and that inequalities are at least in part the product of such irreducible differences." Well, even if gender and race are not fixed biological attributes, they certainly have something like a 0.99 correlation with biology. And since your biology comes first, determined when you're conceived, the causal direction is clear: even if they're not exactly fixed, they are overwhelmingly fixed. And to call it a flawed assumption that these inequalities are at least in part the product of such differences: they simply state that it's flawed. To actually show it's a flawed assumption, you would have to show that gender and race, insofar as they are biologically determined, have no influence whatsoever on these disparities. That's the counterclaim you would have to establish, because the claim is only that biology has at least in part something to do with it. That is also, I believe, what Damore stated, and what the research predominantly points to: there is, for example, a large difference in interests between the genders as far as career selection goes. We can debate why that is, but there is also a fairly broad consensus that it is at least partly, to whatever degree, determined by biology. To show the assumption is flawed, you would need to show that biology cannot have any influence; you would basically have to prove the impossibility of an influence, which no one has done so far, much to the contrary. Simply declaring it a flawed assumption shows me that they are in a bubble and expect to be speaking to people in the same bubble. They then go on to discredit this as so-called biological determinism, which I don't think is a correct use of the term, but judge for yourself.
All these people are saying, I think, is that biology might have some influence and that we could adjust for it. That's not biological determinism. Alright, conclusion. Finally; I think it's been two hours, sorry. "Throughout this report, we've outlined the scope and scale of the problem, tracing how the diversity crisis in the industry and the problems of bias in AI systems are interrelated aspects of the same issue. In the past, these topics were commonly examined in isolation, but increasing evidence shows that they are closely intertwined." No. You've shown that they're parallel. You have absolutely not shown that they're interrelated aspects of the same issue; you have not shown that either one causally influences the other, or that there is any feedback loop; you have not shown that fixing one leads to fixing the other. You could, for example, take a company that for some reason has a very different workforce and show how its products, trained on the same data sets as the other companies', don't end up being biased. Probably not easy, but again, nothing of that sort is in the report. There are many things you could actually do to show what you wanted to show; it's just not in this article. "Our analysis surfaced two prominent responses to the diversity crisis. On one hand, a worker-driven movement" — which we've skipped — "on the other hand, we observe a small but vocal countermovement that actively resists diversity in the industry." What dishonesty. "Actively resists diversity"? The thought that these people walk around going "no, I don't like the other-looking people" is just absurd. All they're saying is that either we don't understand the problem correctly or our tools aren't appropriate to solve it. I think everyone shares the goal of the workplace and of AI systems being as fair and as non-discriminatory as possible. Misrepresenting the other side is something that really bugs me, and it's something these authors do a lot; it's where I lose my polite side, maybe. "...and uses arguments from biological determinism to assert that women are inherently less suited to computer science and AI." What a load of crap. Sorry, but "to assert that women are inherently less suited to computer science"? No one asserts that. Okay, not literally no one; you can always find a sexist douchebag who makes that argument. But it is not a serious argument, and it is not the argument most people in this countermovement make. Not at all. To represent them as such is so dishonest that, conveniently placed in the conclusion, it finally and completely destroys the authors' credibility for me. The parts we skipped over I'm mostly okay with: they show that AI systems can be biased, they show that there is unequal representation, and they give examples of discrimination, harassment, and other problems in AI companies and universities. You can read the report for that; it's pretty interesting. But the points I've addressed, I'm not happy with. So that was it for now. Sorry this took so long, but I felt a thorough take was necessary. Have a nice rest of the day.
[ { "start": 0, "end": 7.5200000000000005, "text": " Hi there, today we're looking at discriminating systems, gender, race and power in AI by Sarah" }, { "start": 7.5200000000000005, "end": 14.72, "text": " Myers-West, Meredith Whitaker and Kate Crawford of the AI Now Institute, which is a part of" }, { "start": 14.72, "end": 18.8, "text": " New York University or associated with it." }, { "start": 18.8, "end": 24.8, "text": " This is not as much a paper as it is a report, kind of summarizing current literature and" }, { "start": 24.8, "end": 31.76, "text": " also kind of an opinion piece slash recommendation giving document." }, { "start": 31.76, "end": 35.86, "text": " Yes, so we'll dive into it." }, { "start": 35.86, "end": 40.68, "text": " As you can see from the index, it's quite a long report and we don't have time to go" }, { "start": 40.68, "end": 41.68, "text": " into all of it." }, { "start": 41.68, "end": 43.92, "text": " Actually, we don't have time to go into most of it." }, { "start": 43.92, "end": 50.400000000000006, "text": " I just hope to kind of point out what the main arguments and themes are in the report," }, { "start": 50.4, "end": 58.4, "text": " kind of what it's trying to say, pick out some interesting things and summarize it to" }, { "start": 58.4, "end": 60.64, "text": " the best of my ability." }, { "start": 60.64, "end": 62.72, "text": " Also give a little critique." }, { "start": 62.72, "end": 73.48, "text": " So let me actually go ahead and try to state the kind of core argument that the report" }, { "start": 73.48, "end": 78.44, "text": " is trying to make, because it's not really clear from reading it and you have to kind" }, { "start": 78.44, "end": 84.8, "text": " of read the whole thing and then kind of becomes clear what the argument is, I feel, though" }, { "start": 84.8, "end": 89.96, "text": " they somehow stated in the introduction numerous times in various ways." }, { "start": 89.96, "end": 94.24, "text": " So I might just be not as attentive reader at first time." }, { "start": 94.24, "end": 100.47999999999999, "text": " But all right, so here's the argument and I really hope I'm representing this correctly." }, { "start": 100.47999999999999, "end": 107.68, "text": " We have a problem currently that sometimes AI systems can exhibit what we usually call" }, { "start": 107.68, "end": 109.08000000000001, "text": " bias." }, { "start": 109.08000000000001, "end": 113.52000000000001, "text": " And we don't mean mathematical bias, like bias variance tradeoff." }, { "start": 113.52000000000001, "end": 120.60000000000001, "text": " We mean bias in a societal sense, let's say bias against certain types of people where" }, { "start": 120.60000000000001, "end": 122, "text": " they shouldn't exist." }, { "start": 122, "end": 129.28, "text": " So for example, let me draw an AI system and I'll just draw a little computer screen with" }, { "start": 129.28, "end": 131.60000000000002, "text": " a little light bulb." }, { "start": 131.60000000000002, "end": 132.60000000000002, "text": " All right." }, { "start": 132.6, "end": 137.92, "text": " So this is because it's smart, this is an AI system and the AI system and they give" }, { "start": 137.92, "end": 139.04, "text": " numerous examples." }, { "start": 139.04, "end": 145.2, "text": " One example they give us for is like face recognition algorithm that is much more accurate" }, { "start": 145.2, "end": 151.92, "text": " on faces of white males, as opposed to darker skinned females." 
}, { "start": 151.92, "end": 159.04, "text": " So let me draw like two curves to represent these distributions are unequal." }, { "start": 159.04, "end": 165.48, "text": " And so the AI system exhibits some bias with respect to some kinds of people with an especially" }, { "start": 165.48, "end": 167.2, "text": " protected attributes." }, { "start": 167.2, "end": 171.39999999999998, "text": " And in this report, they focus mainly on gender and race." }, { "start": 171.39999999999998, "end": 174.51999999999998, "text": " So that's what we're going to talk about." }, { "start": 174.51999999999998, "end": 179.68, "text": " The second thing they observe, so this observation one, the second thing they observe is, I'm" }, { "start": 179.68, "end": 185.32, "text": " going to draw some generic people here that represent the workforce of AI." }, { "start": 185.32, "end": 191.76, "text": " So the AI workforce is classified as all the people that work on AI, be that university" }, { "start": 191.76, "end": 197, "text": " researchers or within companies building AI products or deploying them." }, { "start": 197, "end": 202.51999999999998, "text": " So this is the workforce and they observe that there is an unequal distribution among" }, { "start": 202.51999999999998, "end": 205.64, "text": " the AI workforce." }, { "start": 205.64, "end": 211.84, "text": " So this distribution, I'm also going to do this for unequal distribution." }, { "start": 211.84, "end": 217.48, "text": " There's an unequal distribution in the AI workforce, most notably, it's predominantly" }, { "start": 217.48, "end": 221.76, "text": " males who work on AI." }, { "start": 221.76, "end": 228.08, "text": " And also white people are overrepresented compared to the world population at large." }, { "start": 228.08, "end": 231.72, "text": " So that's kind of the two observations they make." }, { "start": 231.72, "end": 240.36, "text": " And now what they claim is that the unequal representation in the workforce is causing" }, { "start": 240.36, "end": 243.14000000000001, "text": " the bias in the AI systems." }, { "start": 243.14000000000001, "end": 250.52, "text": " So they're basically saying these AI systems are biased because that the workforce is unequally" }, { "start": 250.52, "end": 251.96, "text": " distributed." }, { "start": 251.96, "end": 258.48, "text": " And also they claim in a less powerful sense, I feel, but they claim there is a loop that" }, { "start": 258.48, "end": 265.24, "text": " this then leads back that because there is bias in the AI system, that again leads to" }, { "start": 265.24, "end": 270.08000000000004, "text": " an unequal, more unequal distribution of the workforce." }, { "start": 270.08, "end": 276.56, "text": " So the core argument really is, as they set out to do, like in the introduction, and also" }, { "start": 276.56, "end": 282.28, "text": " claim that they have done in the conclusion, is to demonstrate these two directions here" }, { "start": 282.28, "end": 283.84, "text": " in a causal way." }, { "start": 283.84, "end": 289.21999999999997, "text": " So the systems are biased because there is an unequal representation in the workforce" }, { "start": 289.21999999999997, "end": 293, "text": " and that feeds back." 
}, { "start": 293, "end": 300.03999999999996, "text": " So the argument is that if you want to fix the bias here, if you want to fix that, then" }, { "start": 300.04, "end": 309.88, "text": " you will have to fix it via making the workforce more what they call diverse, so less unilaterally" }, { "start": 309.88, "end": 313.40000000000003, "text": " distributed towards white males." }, { "start": 313.40000000000003, "end": 315.48, "text": " That's kind of the final conclusion." }, { "start": 315.48, "end": 321.12, "text": " If you read their report and the recommendations, that's mainly what they're going for." }, { "start": 321.12, "end": 331.8, "text": " Yeah, so my opinion, or in my opinion, having read the report a couple of times, is that" }, { "start": 331.8, "end": 335.98, "text": " as I see it, they really don't demonstrate these links." }, { "start": 335.98, "end": 341.04, "text": " So they give examples of this and they give examples of this." }, { "start": 341.04, "end": 344.08, "text": " They show that the workforce is unequally distributed." }, { "start": 344.08, "end": 350.2, "text": " They show that AI systems can exhibit such bias, but they never actually show these links" }, { "start": 350.2, "end": 351.4, "text": " in my opinion." }, { "start": 351.4, "end": 352.8, "text": " They don't show this." }, { "start": 352.8, "end": 358.94, "text": " So if you make the claim that in order to fix the bias in AI systems, you must fix the" }, { "start": 358.94, "end": 364.42, "text": " unequal representation in the workforce, I would need an argument that says because there" }, { "start": 364.42, "end": 372.12, "text": " is unequal representation, therefore A, therefore B, therefore C, therefore bias, like an actual" }, { "start": 372.12, "end": 382.32, "text": " argument to follow that says because of this, that, because of that, that, and so on." }, { "start": 382.32, "end": 384.8, "text": " It's just not there." }, { "start": 384.8, "end": 386.56, "text": " They simply show parallels." }, { "start": 386.56, "end": 392, "text": " They simply show that these two things exist and they just list example after example of" }, { "start": 392, "end": 396.52, "text": " that." }, { "start": 396.52, "end": 398.84000000000003, "text": " I don't think they make this argument." }, { "start": 398.84, "end": 406.2, "text": " But I think, also the other direction, they don't really make this argument." }, { "start": 406.2, "end": 415.47999999999996, "text": " Except in one case, where if you give them benefit of the doubt." }, { "start": 415.47999999999996, "end": 423.91999999999996, "text": " What I also think is that it appears like the article, if you read it, and I encourage" }, { "start": 423.92, "end": 429.72, "text": " you to read it if you have some time, it makes a lot of sense if you have already accepted" }, { "start": 429.72, "end": 430.72, "text": " this conclusion." }, { "start": 430.72, "end": 437.20000000000005, "text": " Like if you've already accepted this, then it's like, oh yeah, because I feel this is" }, { "start": 437.20000000000005, "end": 443.40000000000003, "text": " just a text where the confirmation bias is so high, just the way it's written, that it" }, { "start": 443.40000000000003, "end": 448.84000000000003, "text": " must make a lot of sense to someone who's already kind of in on this conclusion." }, { "start": 448.84, "end": 456.52, "text": " But to someone who isn't sold yet, like myself, I am just not finding this convincing at all." 
}, { "start": 456.52, "end": 465.64, "text": " The second thing is that it very much feels like this isn't like a discovery or something." }, { "start": 465.64, "end": 472.96, "text": " But someone actually set out with the goal to address this here with the goal of I want" }, { "start": 472.96, "end": 479.64, "text": " companies to hire more of these people or certain kinds of people or to become more" }, { "start": 479.64, "end": 484.2, "text": " diverse or to promote more of a certain type of people." }, { "start": 484.2, "end": 487.35999999999996, "text": " And now I'm going to find reasons for this." }, { "start": 487.35999999999996, "end": 492.2, "text": " And the reason is like, oh, look at look at this bias here." }, { "start": 492.2, "end": 493.79999999999995, "text": " This is caused." }, { "start": 493.79999999999995, "end": 495.79999999999995, "text": " This is caused by this other thing." }, { "start": 495.79999999999995, "end": 498.84, "text": " And therefore we must fix this other thing." }, { "start": 498.84, "end": 505.08, "text": " It very much feels like someone setting out with already the conclusion in mind rather" }, { "start": 505.08, "end": 508.67999999999995, "text": " than this being an honest investigation." }, { "start": 508.67999999999995, "end": 510.64, "text": " But yeah, I mean, read it for yourself." }, { "start": 510.64, "end": 514.36, "text": " I can't prove the absence of an argument by not reading every single line." }, { "start": 514.36, "end": 519.12, "text": " And I can't read every single line because it'll just get very long and boring." }, { "start": 519.12, "end": 520.88, "text": " But read it yourself." }, { "start": 520.88, "end": 528.68, "text": " And I think I'm pretty I'm pretty I've read it numerous times with really an open mind" }, { "start": 528.68, "end": 531.1999999999999, "text": " to be convinced that there is an argument in there." }, { "start": 531.1999999999999, "end": 536.4399999999999, "text": " But I don't think there is or I don't think there is a very strong argument for this." }, { "start": 536.4399999999999, "end": 537.4399999999999, "text": " All right." }, { "start": 537.4399999999999, "end": 540.76, "text": " Let this first part here is more or less a summary." }, { "start": 540.76, "end": 543.3199999999999, "text": " So research findings is more or less a summary." }, { "start": 543.3199999999999, "end": 547.28, "text": " And we'll get to these things as they are important." }, { "start": 547.28, "end": 550.0999999999999, "text": " Then they state recommendations right at the beginning." }, { "start": 550.0999999999999, "end": 552.92, "text": " So actually, you'd have to read the article first." }, { "start": 552.92, "end": 554.76, "text": " This is kind of more of an abstract section." }, { "start": 554.76, "end": 558.54, "text": " But since it's right here, we'll kind of jump right into it." }, { "start": 558.54, "end": 563.68, "text": " So these are recommendations and I've claimed they don't really show a connection." }, { "start": 563.68, "end": 569.52, "text": " But they actually just show examples, examples of this and examples of this and parallel" }, { "start": 569.52, "end": 570.52, "text": " them." }, { "start": 570.52, "end": 575.38, "text": " And this is reflected in like every single section, including here in the recommendations." }, { "start": 575.38, "end": 579.12, "text": " They have recommendations for improving workplace diversity." 
}, { "start": 579.12, "end": 583.5999999999999, "text": " And they have recommendations for addressing bias and discrimination in AI systems." }, { "start": 583.5999999999999, "end": 584.5999999999999, "text": " Right." }, { "start": 584.6, "end": 591.84, "text": " So all right, in my case, if you make this argument, I would I would feel you also make" }, { "start": 591.84, "end": 594.96, "text": " recommendations for breaking these links." }, { "start": 594.96, "end": 598.9200000000001, "text": " But or argue why they can't be broken." }, { "start": 598.9200000000001, "end": 600.94, "text": " But all right, let's jump into some of them." }, { "start": 600.94, "end": 604.34, "text": " And it is really a mixed bag here, really." }, { "start": 604.34, "end": 610.48, "text": " So some recommendations I'm really in favor of just from from the go not even you don't" }, { "start": 610.48, "end": 613.9200000000001, "text": " even need the article for those here." }, { "start": 613.92, "end": 617.5999999999999, "text": " Discrimination, harassment and discrimination, transparency reports, including number of" }, { "start": 617.5999999999999, "end": 621.4399999999999, "text": " claims over time, the types of claims submitted and actions taken." }, { "start": 621.4399999999999, "end": 627.8, "text": " So it's known that especially in these larger companies, sexual harassment claims often" }, { "start": 627.8, "end": 633.8399999999999, "text": " go down in either bureaucracy or are kind of hushed under the table or something like" }, { "start": 633.8399999999999, "end": 634.8399999999999, "text": " this." }, { "start": 634.8399999999999, "end": 638.24, "text": " What you have to recognize is that a human resource department of a large company isn't" }, { "start": 638.24, "end": 640.52, "text": " there to serve the human resources." }, { "start": 640.52, "end": 645.52, "text": " It's there to serve the company providing human resources." }, { "start": 645.52, "end": 651.96, "text": " That's why a sexual harassment claim to an HR department is just a potential lawsuit." }, { "start": 651.96, "end": 657.1999999999999, "text": " And that's why they don't want to take it seriously except for it must go away really" }, { "start": 657.1999999999999, "end": 658.1999999999999, "text": " quickly." }, { "start": 658.1999999999999, "end": 664.48, "text": " So I think to kind of force companies or to ask companies to be more transparent, to take" }, { "start": 664.48, "end": 673.64, "text": " more seriously these the accusations of sexual harassment and assault and also discrimination" }, { "start": 673.64, "end": 675.88, "text": " is a very valuable goal." }, { "start": 675.88, "end": 680.9200000000001, "text": " And I fully, fully support this." }, { "start": 680.9200000000001, "end": 687.84, "text": " Also the here commit to transparency around hiring practices, especially hiring regarding" }, { "start": 687.84, "end": 691.8000000000001, "text": " how candidates are leveled, compensated and promoted." }, { "start": 691.8, "end": 698.3599999999999, "text": " But also the larger the company gets, the less transparent this process usually becomes" }, { "start": 698.3599999999999, "end": 703.8, "text": " or the more bureaucratic, the more people are able to game it and so on and distort" }, { "start": 703.8, "end": 704.8, "text": " it." 
}, { "start": 704.8, "end": 711.1999999999999, "text": " So I feel it's always good to be transparent around, okay, this person provides this much" }, { "start": 711.1999999999999, "end": 718.7199999999999, "text": " value to the company, therefore they should be compensated according to that or at least" }, { "start": 718.7199999999999, "end": 721.18, "text": " be transparent about it." }, { "start": 721.18, "end": 723.68, "text": " So these are kind of recommendations I like." }, { "start": 723.68, "end": 730.12, "text": " Then recommendations that really go into a different direction is something like this" }, { "start": 730.12, "end": 734.2399999999999, "text": " here, change hiring practices to maximize diversity." }, { "start": 734.2399999999999, "end": 739.68, "text": " And this is kind of reflect, I'm not going to go on this reflected in other points, increase" }, { "start": 739.68, "end": 744.12, "text": " the number of people of color, women and other underrepresented groups at senior leadership" }, { "start": 744.12, "end": 746.9599999999999, "text": " levels of AI companies across all departments." }, { "start": 746.96, "end": 752.6, "text": " So these things, they are usually within like company diversity goals and so on, doesn't" }, { "start": 752.6, "end": 754.12, "text": " really say how to do it." }, { "start": 754.12, "end": 759.2800000000001, "text": " But then the I mean, as such, they're not really recommendations yet." }, { "start": 759.2800000000001, "end": 760.2800000000001, "text": " They're more like goals." }, { "start": 760.2800000000001, "end": 766.4000000000001, "text": " But here recommendation seven, I think is the the crucial one, ensure executive incentive" }, { "start": 766.4000000000001, "end": 774.0400000000001, "text": " structures are tied to increases in hiring and retention of underrepresented groups." }, { "start": 774.04, "end": 777.56, "text": " So this is it's a bit of coded language." }, { "start": 777.56, "end": 783.56, "text": " But here they talk about executive incentive structure tied to hiring and retention of" }, { "start": 783.56, "end": 785.12, "text": " underrepresented groups." }, { "start": 785.12, "end": 790.12, "text": " This basically means if you are a manager or someone in charge of hiring or promoting," }, { "start": 790.12, "end": 795.52, "text": " and you hire or promote a underrepresented person, and since they're talking about gender" }, { "start": 795.52, "end": 802.68, "text": " and race here, if you that means if you hire or promote a person of color or a woman, in" }, { "start": 802.68, "end": 805.64, "text": " this case, you will be compensated more." }, { "start": 805.64, "end": 809.5999999999999, "text": " So at the end of the year, you'll somehow have more money, like more bonuses or more" }, { "start": 809.5999999999999, "end": 814.12, "text": " base comp or more equity or something like you'll get more money." }, { "start": 814.12, "end": 822.9599999999999, "text": " So this, this recommendation is a direct call to hire based on race and gender." }, { "start": 822.9599999999999, "end": 829.4399999999999, "text": " So this, this is a direct call to racist and sexist hiring basically to discriminate people" }, { "start": 829.44, "end": 838.5200000000001, "text": " according to their skin color and according to their gender, which I mean, how, how is" }, { "start": 838.5200000000001, "end": 840, "text": " this okay with anyone?" 
}, { "start": 840, "end": 846.8000000000001, "text": " Like how can anyone how are people even able to state this and in like a high profile report" }, { "start": 846.8000000000001, "end": 852.1400000000001, "text": " like this and get away with it and not have people criticize them, this directly calls" }, { "start": 852.1400000000001, "end": 856.8800000000001, "text": " for people to be treated according to their gender and race." }, { "start": 856.88, "end": 863.64, "text": " And probably as directly as you can go without getting into actual legal trouble." }, { "start": 863.64, "end": 868.28, "text": " But yeah, I'm really, really against such such practices." }, { "start": 868.28, "end": 875.12, "text": " I mean, yeah, that's I just I just don't know how this how this can ever how this can ever" }, { "start": 875.12, "end": 879.12, "text": " be thought of as a good thing by anyone." }, { "start": 879.12, "end": 887.52, "text": " All right, so, well, yeah, in my mind, this recommendation, and this recommendation kind" }, { "start": 887.52, "end": 889.52, "text": " of are counter to each other." }, { "start": 889.52, "end": 895.6, "text": " Because if if I commit to transparency, how people are okay now I can, I can transparently" }, { "start": 895.6, "end": 898.32, "text": " commit to to be racist, I guess." }, { "start": 898.32, "end": 903.5600000000001, "text": " But if I say, okay, I'm going to come and promote people based on how much value they" }, { "start": 903.56, "end": 910.04, "text": " provide to the company, then yeah, I'd much rather have that than saying I'm going to" }, { "start": 910.04, "end": 913, "text": " come and promote people based on their skin color." }, { "start": 913, "end": 916.2399999999999, "text": " Alright, so let's actually jump into the report." }, { "start": 916.2399999999999, "end": 920.9399999999999, "text": " I'm not gonna these recommendations for addressing bias and discrimination in systems this these" }, { "start": 920.9399999999999, "end": 923.3199999999999, "text": " are fairly general and common." }, { "start": 923.3199999999999, "end": 928.04, "text": " So as well, as I said, we'll jump most of the things in the report." }, { "start": 928.04, "end": 930.3199999999999, "text": " So introduction." }, { "start": 930.32, "end": 935.8000000000001, "text": " So they start out with there is a diversity crisis in the AI industry." }, { "start": 935.8000000000001, "end": 942.72, "text": " This they give like some numbers like 15% of AI research staff and 10% at Google, so" }, { "start": 942.72, "end": 946.48, "text": " 15% of Facebook are women." }, { "start": 946.48, "end": 953.96, "text": " So these are some kind of fairly known statistics about how the AI field is kind of gender and" }, { "start": 953.96, "end": 956.1600000000001, "text": " race skewed." }, { "start": 956.16, "end": 963.3199999999999, "text": " Currently, so they say they claim in bold the diversity problem is not just about women." }, { "start": 963.3199999999999, "end": 969.5799999999999, "text": " It's about gender, race, and most fundamentally about power." }, { "start": 969.5799999999999, "end": 974.18, "text": " It affects how companies work, what products get built, who they're designed to serve," }, { "start": 974.18, "end": 976.6, "text": " and who benefits from their development." 
}, { "start": 976.6, "end": 985.72, "text": " So this, I find this, this, this word power and this notion of power, a lot in this report," }, { "start": 985.72, "end": 992.52, "text": " it appears again and again and again in in like power dynamics and power dynamics among" }, { "start": 992.52, "end": 993.52, "text": " groups." }, { "start": 993.52, "end": 1001.6, "text": " It's like a worldview, it paints like a worldview, where these different gender and race groups" }, { "start": 1001.6, "end": 1007.52, "text": " kind of struggle against each other to gain power over another." }, { "start": 1007.52, "end": 1014.24, "text": " And whoever's in power will try to remain in power in alliance with their gender and" }, { "start": 1014.24, "end": 1018.5600000000001, "text": " race group and try to keep the other groups down." }, { "start": 1018.5600000000001, "end": 1021.88, "text": " I'm not sure that's the correct view of the world." }, { "start": 1021.88, "end": 1029.48, "text": " In my mind, the world is comprised of individual people that want to achieve something for" }, { "start": 1029.48, "end": 1033.6, "text": " themselves and they would like to prop themselves up." }, { "start": 1033.6, "end": 1039.24, "text": " Whereas in this worldview, it's like, I'm going to use the power of my group to keep" }, { "start": 1039.24, "end": 1041.84, "text": " other groups down." }, { "start": 1041.84, "end": 1048.8, "text": " I don't know which worldview you subscribe to, but I find the world is comprised of individuals." }, { "start": 1048.8, "end": 1054, "text": " Yeah, and this is not discrediting that some people have it harder because of their gender" }, { "start": 1054, "end": 1055.52, "text": " or race." }, { "start": 1055.52, "end": 1060.52, "text": " But to see the entire world as a power struggle between these groups, to me, it's, it's," }, { "start": 1060.52, "end": 1068.3999999999999, "text": " yeah, and I'm not going to point out everywhere it appears, this power wording, but it appears" }, { "start": 1068.4, "end": 1072.24, "text": " a lot and it's really shapes how the report reads." }, { "start": 1072.24, "end": 1079.3600000000001, "text": " You have to, you have to kind of remember, if you're a white male, and currently, the" }, { "start": 1079.3600000000001, "end": 1086.76, "text": " field is comprised of 90% white males, you, if you have like 10, like 10 hours, let's" }, { "start": 1086.76, "end": 1093.96, "text": " say you have to have 10 hours to do something, right, you can either choose to put down some" }, { "start": 1093.96, "end": 1101.92, "text": " other groups, like put down groups that you're not part of, or you can choose to invest these" }, { "start": 1101.92, "end": 1106.8, "text": " 10 hours in putting up yourself, you, right." }, { "start": 1106.8, "end": 1113.2, "text": " So if, if I, like I profit, if I'm a white male, I profit minimally from keeping the" }, { "start": 1113.2, "end": 1120.32, "text": " other groups down because guess what, I still have to compete with the like 1 billion other" }, { "start": 1120.32, "end": 1123.04, "text": " white males there are." 
}, { "start": 1123.04, "end": 1131.68, "text": " It's not going to help me to keep down anyone else, and especially, like it's, it's moronic," }, { "start": 1131.68, "end": 1138.92, "text": " like who does that, who like has alliance, except most fringe people, like to their race" }, { "start": 1138.92, "end": 1144.68, "text": " or gender, rather than to the people they admire and respect and like to work with." }, { "start": 1144.68, "end": 1149.3999999999999, "text": " So I'm going to, if I have like 10 hours today, I'm going to rather spend this in propping" }, { "start": 1149.4, "end": 1155.92, "text": " up myself compared to everyone else, and I don't care what gender or race they are." }, { "start": 1155.92, "end": 1162.1200000000001, "text": " And so that to me, that's a much more accurate or, I don't know, plausible worldview." }, { "start": 1162.1200000000001, "end": 1166.64, "text": " But just be aware that this report really takes on the language of kind of groups and" }, { "start": 1166.64, "end": 1173.2800000000002, "text": " power between groups and groups trying to, you know, kind of gain power and keep in," }, { "start": 1173.2800000000002, "end": 1176.52, "text": " keep power and keep others from having power." }, { "start": 1176.52, "end": 1183.44, "text": " All right, so say, to date, the diversity problems of the industry and the issues of" }, { "start": 1183.44, "end": 1188.44, "text": " bias in the systems it builds have tended to be considered separately." }, { "start": 1188.44, "end": 1193.02, "text": " We suggest that these are two versions of the same problem." }, { "start": 1193.02, "end": 1197.6399999999999, "text": " Issues of discrimination in the workforce and in system buildings are deeply intertwined." }, { "start": 1197.6399999999999, "end": 1203.8, "text": " Challenge, and moreover, tackling the challenges of bias within technical systems requires" }, { "start": 1203.8, "end": 1207.76, "text": " addressing workforce diversity and vice versa." }, { "start": 1207.76, "end": 1214.72, "text": " So the, I think this, this here actually is like how I described the argument and they" }, { "start": 1214.72, "end": 1218.1599999999999, "text": " kind of restated multiple times in a bit different way." }, { "start": 1218.1599999999999, "end": 1219.76, "text": " But I think this is the core." }, { "start": 1219.76, "end": 1224.28, "text": " And I really think I'm not misrepresenting the article here in that this is what they" }, { "start": 1224.28, "end": 1225.3999999999999, "text": " are setting out to do." }, { "start": 1225.3999999999999, "end": 1233, "text": " They're setting out to say, okay, the diversity, the kind of unequal representation in the" }, { "start": 1233, "end": 1240.48, "text": " workforce and the bias in some AI systems are causally linked to each other and tackling" }, { "start": 1240.48, "end": 1243.96, "text": " one requires tackling the other." }, { "start": 1243.96, "end": 1249.16, "text": " So yeah, if I'm misrepresenting them, let me know, but I really think I'm accurately" }, { "start": 1249.16, "end": 1253.98, "text": " representing their argument." }, { "start": 1253.98, "end": 1261, "text": " So what they, what they do, as I said, is they give examples of one and of the other" }, { "start": 1261, "end": 1271.24, "text": " and also they really, they're really on kind of discrediting the kind of issues to solve" }, { "start": 1271.24, "end": 1273.66, "text": " problems of bias in a different way." 
}, { "start": 1273.66, "end": 1276.56, "text": " So they point a little bit to this here in the introduction." }, { "start": 1276.56, "end": 1280.04, "text": " They say in the face of growing evidence, the AI research community and the industry" }, { "start": 1280.04, "end": 1285.26, "text": " producing our products have begun addressing the problem of bias by building on a body" }, { "start": 1285.26, "end": 1288.36, "text": " of work of fairness, accountability and transparency." }, { "start": 1288.36, "end": 1294.8, "text": " So fairness, accountability and transparency research concerns these issues." }, { "start": 1294.8, "end": 1300.4399999999998, "text": " For one is research showing that some products are unfair or untransparent and so on." }, { "start": 1300.4399999999998, "end": 1308.6399999999999, "text": " On the other hand, it's trying to devise algorithms that are more fair according to some notions" }, { "start": 1308.6399999999999, "end": 1314.36, "text": " or more accountable and transparent, which means that the algorithm can kind of say why" }, { "start": 1314.36, "end": 1320, "text": " it made a certain decision rather than it being a deep learning system that you don't" }, { "start": 1320, "end": 1321.58, "text": " really have an insight." }, { "start": 1321.58, "end": 1326.6799999999998, "text": " These fields are active fields of research, definitely very interesting to look into." }, { "start": 1326.6799999999998, "end": 1334.6, "text": " So but they, they kind of, it is not already here, but they say, yeah, we have adjusting" }, { "start": 1334.6, "end": 1342.08, "text": " AI systems that produce a result deemed fair by one of various mathematical definitions." }, { "start": 1342.08, "end": 1345.96, "text": " You can already see in the language here, they don't really like this research and they" }, { "start": 1345.96, "end": 1352.76, "text": " are trying in this report to kind of discredit it or at least claim that it doesn't solve" }, { "start": 1352.76, "end": 1357.76, "text": " the whole problem because their point is, of course, you have to address this diversity" }, { "start": 1357.76, "end": 1364.24, "text": " issue in the workforce in order to fix the problems." }, { "start": 1364.24, "end": 1372.32, "text": " So to this, I just want to say no, like if you can, I mean, you can criticize the fairness" }, { "start": 1372.32, "end": 1376.1200000000001, "text": " and accountability and transparency research field in that they haven't solved the problem" }, { "start": 1376.1200000000001, "end": 1377.32, "text": " fully yet." }, { "start": 1377.32, "end": 1384.8, "text": " But in principle, if I have an algorithm, if I'm being delivered an algorithm, right," }, { "start": 1384.8, "end": 1390.4, "text": " and the fairness literature has been applied to that algorithm and someone tells me, I" }, { "start": 1390.4, "end": 1397, "text": " guarantee you here is a proof, the algorithm is fair, right, then I really don't care who" }, { "start": 1397, "end": 1398.3200000000002, "text": " made that algorithm." }, { "start": 1398.3200000000002, "end": 1400.96, "text": " As long as it's fair, the problem is fixed." }, { "start": 1400.96, "end": 1404.16, "text": " If the bias is gone, the problem is fixed." }, { "start": 1404.16, "end": 1405.3600000000001, "text": " And I don't care who fix it." }, { "start": 1405.3600000000001, "end": 1410.64, "text": " I don't care if the person who fixed it is black or white or purple." 
}, { "start": 1410.64, "end": 1412.52, "text": " Then the problem is fixed." }, { "start": 1412.52, "end": 1418.4, "text": " And they, they really have to, they really try to just make the counter argument here" }, { "start": 1418.4, "end": 1421.2800000000002, "text": " is that no, that's it's not enough." }, { "start": 1421.2800000000002, "end": 1428.16, "text": " But I claim yes, it, if you can actually solve the fairness problem, technically, then you" }, { "start": 1428.16, "end": 1430.3600000000001, "text": " have solved the fairness problem." }, { "start": 1430.3600000000001, "end": 1436.76, "text": " Yeah, the only thing you can do is claim that it is not good enough yet, but not that it's" }, { "start": 1436.76, "end": 1441.6000000000001, "text": " fun to they kind of have to make the argument that it's fundamentally flawed approach." }, { "start": 1441.6000000000001, "end": 1445.1200000000001, "text": " And I don't think they succeed in doing that here." }, { "start": 1445.12, "end": 1452.1999999999998, "text": " Um, yeah, so they go on to say, we should expand to consider not only how I tools can" }, { "start": 1452.1999999999998, "end": 1456.04, "text": " be biased technically, but how they're shaped by the environments in which you're built" }, { "start": 1456.04, "end": 1458.28, "text": " in and the people that built them." }, { "start": 1458.28, "end": 1463.8, "text": " Again, this this focus like who builds the AI system, I don't care, I care what it does," }, { "start": 1463.8, "end": 1464.9199999999998, "text": " right?" }, { "start": 1464.9199999999998, "end": 1469.4399999999998, "text": " As much as if, if I hear an argument for or against something, I don't care who makes" }, { "start": 1469.4399999999998, "end": 1470.8, "text": " the argument, right?" }, { "start": 1470.8, "end": 1473.28, "text": " I care what the argument says." }, { "start": 1473.28, "end": 1477.8, "text": " This is, it's like an ad hominem attack for an entire community." }, { "start": 1477.8, "end": 1487.76, "text": " That's kind of how this this article, this report shows, or is appears to me." }, { "start": 1487.76, "end": 1493.44, "text": " So they say, currently, large scale AI systems are developed almost exclusively in a handful" }, { "start": 1493.44, "end": 1497.76, "text": " of technology companies and a small set of elite university laboratories spaces that" }, { "start": 1497.76, "end": 1502.74, "text": " in the West tend to be extremely white, affluent, technically oriented and male." }, { "start": 1502.74, "end": 1508.1200000000001, "text": " So yeah, their their problem, that's their fundamental problem here that these these" }, { "start": 1508.1200000000001, "end": 1511.72, "text": " spaces are skewed in one direction." }, { "start": 1511.72, "end": 1515.84, "text": " Interestingly enough, their problem is not so much that it's that they're all in the" }, { "start": 1515.84, "end": 1518.04, "text": " same place, right?" }, { "start": 1518.04, "end": 1523.68, "text": " That they all live like 20 miles from each other in around San Francisco." }, { "start": 1523.68, "end": 1528.1200000000001, "text": " That's that seems to be not a problem at all, as long as we get to like enough people of" }, { "start": 1528.1200000000001, "end": 1532.32, "text": " color and women into these 20 miles." }, { "start": 1532.32, "end": 1540.52, "text": " But yeah, so that that's pointing out the the problem here or the yeah, kind of issue" }, { "start": 1540.52, "end": 1541.52, "text": " they have." 
}, { "start": 1541.52, "end": 1546.28, "text": " All right, so they go on." }, { "start": 1546.28, "end": 1554.12, "text": " Just kind of want to highlight again, they say both within the spaces where AI is being" }, { "start": 1554.12, "end": 1557.8, "text": " created and the logic of how AI systems are being designed." }, { "start": 1557.8, "end": 1563, "text": " So paralleling the two things, the cost of bias, harassment and discrimination are born" }, { "start": 1563, "end": 1570.28, "text": " by the same people, gender minorities, people of color, other underrepresented groups." }, { "start": 1570.28, "end": 1576.56, "text": " And they also say similarly, the benefits of such systems from profit to efficiency," }, { "start": 1576.56, "end": 1583.24, "text": " accrue primarily to those are already in positions of power tend to be white, educated and male." }, { "start": 1583.24, "end": 1592.88, "text": " So they again, they say the this points to a systematic relationship between patterns" }, { "start": 1592.88, "end": 1597.6, "text": " of exclusion within the field of AI and the industry driving its production on the one" }, { "start": 1597.6, "end": 1602.04, "text": " hand and the biases that manifest in the logics and applications of the technologies on the" }, { "start": 1602.04, "end": 1603.04, "text": " other." }, { "start": 1603.04, "end": 1609.84, "text": " And they try to make this connection because they say the cost and the benefit of these" }, { "start": 1609.84, "end": 1614.6, "text": " two things are overlap in the people that where it costs and it benefits." }, { "start": 1614.6, "end": 1619.28, "text": " And I really, again, it's just a parallel, but I really even don't think that's true" }, { "start": 1619.28, "end": 1626.04, "text": " because they kind of, they kind of argue against themselves later." }, { "start": 1626.04, "end": 1632.8799999999999, "text": " So they always say, we have to look at again, they shoot against the take much more than" }, { "start": 1632.8799999999999, "end": 1638.28, "text": " the technically driven problem solving." }, { "start": 1638.28, "end": 1640.12, "text": " They point to this." }, { "start": 1640.12, "end": 1645.28, "text": " So our research requires looking at gender and racist categories within which humans" }, { "start": 1645.28, "end": 1652.24, "text": " think in short, sorry, studies of discriminatory systems, we need to ask who is harmed, who" }, { "start": 1652.24, "end": 1654.84, "text": " benefits, who gets to decide." }, { "start": 1654.84, "end": 1664.84, "text": " So it's kind of who bears the cost, who bears the benefits and who has the power." }, { "start": 1664.84, "end": 1671.52, "text": " So that's the, and again, it's we seek to understand how AI disadvantages some, we also" }, { "start": 1671.52, "end": 1676.04, "text": " consider how it works to the advantage of others." }, { "start": 1676.04, "end": 1677.3999999999999, "text": " So keep that in mind." }, { "start": 1677.3999999999999, "end": 1682.4399999999998, "text": " That's kind of the lens through how they analyze the this thing again, one that acknowledges" }, { "start": 1682.4399999999998, "end": 1685.72, "text": " power relationships and centers equity and justice." }, { "start": 1685.72, "end": 1691.6399999999999, "text": " That's the, they want to see this bigger picture." }, { "start": 1691.64, "end": 1696.5600000000002, "text": " So that's yeah, keep, again, keep that in mind." 
}, { "start": 1696.5600000000002, "end": 1703.8400000000001, "text": " So they go into a section called which humans are in the loop, how workforces and AI systems" }, { "start": 1703.8400000000001, "end": 1705.0800000000002, "text": " interact." }, { "start": 1705.0800000000002, "end": 1710.6000000000001, "text": " So this kind of from the title of this section, you think, okay, here's where we get in." }, { "start": 1710.6000000000001, "end": 1712.76, "text": " Here's where we make the argument." }, { "start": 1712.76, "end": 1720.76, "text": " And they start by listing examples of how AI systems can be discriminatory." }, { "start": 1720.76, "end": 1728.4, "text": " And first, they go into an example of Amazon had developed an experimental hiring tool" }, { "start": 1728.4, "end": 1733.16, "text": " to help rank job candidates." }, { "start": 1733.16, "end": 1738.12, "text": " By learning from its past reference preferences, Amazon hoped that the resume scanning tool" }, { "start": 1738.12, "end": 1743.3799999999999, "text": " will be able to efficiently identify qualified applicants, comparing their applications" }, { "start": 1743.3799999999999, "end": 1745, "text": " to previous hires." }, { "start": 1745, "end": 1750.64, "text": " The system quickly began to downgrade resumes from candidates who attended all women's" }, { "start": 1750.64, "end": 1757.38, "text": " colleges along with any resumes that included the word women's." }, { "start": 1757.38, "end": 1762.8400000000001, "text": " After uncovering this bias, Amazon engineers tried to fix the problem by directing the" }, { "start": 1762.8400000000001, "end": 1765.92, "text": " system to treat these terms in a neutral manner." }, { "start": 1765.92, "end": 1772.4, "text": " The company eventually abandoned the tool when they were unable to ensure that the algorithm" }, { "start": 1772.4, "end": 1776.1200000000001, "text": " would not be biased against women." }, { "start": 1776.12, "end": 1781.4399999999998, "text": " Gender based discrimination was built too deeply within the system and in Amazon's past" }, { "start": 1781.4399999999998, "end": 1785.4799999999998, "text": " hiring practices to be uprooted using a purely technical approach." }, { "start": 1785.4799999999998, "end": 1790.4799999999998, "text": " So this just the way is written, I find to be quite dishonest." }, { "start": 1790.4799999999998, "end": 1793.84, "text": " But let's analyze what happened here." }, { "start": 1793.84, "end": 1798.9199999999998, "text": " So their final claim is that gender based discrimination was built too deeply within" }, { "start": 1798.9199999999998, "end": 1804.6, "text": " the system to be uprooted using a purely technical approach." }, { "start": 1804.6, "end": 1806.1999999999998, "text": " So this is one of their arguments." }, { "start": 1806.1999999999998, "end": 1812.12, "text": " They say technical approaches, they don't help because the Amazon engineers tried to" }, { "start": 1812.12, "end": 1814.9599999999998, "text": " fix the problem." }, { "start": 1814.9599999999998, "end": 1823, "text": " But when they were unable to ensure that the algorithm would not be biased against women." }, { "start": 1823, "end": 1828.6399999999999, "text": " So if you read this, you really I mean, I really get the impression that's not what" }, { "start": 1828.6399999999999, "end": 1830.12, "text": " happened here." 
}, { "start": 1830.12, "end": 1837.1599999999999, "text": " What happened here most probably is Amazon built this tool, okay, and it fed in its past" }, { "start": 1837.1599999999999, "end": 1843.9599999999998, "text": " hires and we know of issues of like data set bias bias inherent in data set." }, { "start": 1843.9599999999998, "end": 1851.2399999999998, "text": " So if your data set is skewed, the AI tends to pick up on the skewed data set and become" }, { "start": 1851.2399999999998, "end": 1852.2399999999998, "text": " skewed itself." }, { "start": 1852.2399999999998, "end": 1860.08, "text": " Okay, so I actually would argue that most or all of the examples they stayed in here" }, { "start": 1860.08, "end": 1865.1599999999999, "text": " are examples of such biased data sets and not." }, { "start": 1865.1599999999999, "end": 1871, "text": " So the the cause of the bias is the data set that they are strained on and not the person" }, { "start": 1871, "end": 1879.24, "text": " that ran the code or built the algorithm to train it on or built the deployment." }, { "start": 1879.24, "end": 1885.56, "text": " And so but it doesn't matter you're a you're Amazon, you built this tool and you realize," }, { "start": 1885.56, "end": 1891.3999999999999, "text": " oh, it discriminates against people having women's on their CV." }, { "start": 1891.3999999999999, "end": 1895.98, "text": " So this is a pretty bad PR wise." }, { "start": 1895.98, "end": 1899.62, "text": " So you tell your engineers engineers fix the problem." }, { "start": 1899.62, "end": 1903.78, "text": " So the engineers go fix the problem, they come back and say, okay, we fixed the problem." }, { "start": 1903.78, "end": 1909.44, "text": " And then what you do is you say, okay, engineers, can you ensure me that the algorithm would" }, { "start": 1909.44, "end": 1911.12, "text": " not be biased against women?" }, { "start": 1911.12, "end": 1918, "text": " Because if only the slightest bias exists, if only it doesn't even have to be if one" }, { "start": 1918, "end": 1926.52, "text": " journalist finds one example, where there is a down rank, because I add the word women's," }, { "start": 1926.52, "end": 1928.8, "text": " then we are screwed, right?" }, { "start": 1928.8, "end": 1934.08, "text": " And the engineers will say, No, we can't guarantee that it's a deep learning system or something," }, { "start": 1934.08, "end": 1935.08, "text": " right?" }, { "start": 1935.08, "end": 1938.78, "text": " We, we can't like give you a proof that it's not biased." }, { "start": 1938.78, "end": 1943.56, "text": " If you're a smart executive, at that point, you'll scrap the tool, because the potential" }, { "start": 1943.56, "end": 1946.54, "text": " PR downside are just huge." }, { "start": 1946.54, "end": 1952, "text": " And probably they've also realized it's not that handy to have this, this tool compared" }, { "start": 1952, "end": 1956.3999999999999, "text": " to their recruiters doing their job, because their recruiters might actually be good and" }, { "start": 1956.3999999999999, "end": 1958.6399999999999, "text": " have been doing this for a while." }, { "start": 1958.6399999999999, "end": 1967.78, "text": " So to the to the fact that this tool was scrapped is probably much more a result of a PR disaster." 
}, { "start": 1967.78, "end": 1974.32, "text": " But also independent of that to say gender based discrimination, sorry, gender based" }, { "start": 1974.32, "end": 1980.6, "text": " discrimination was built too deeply within the system to be uprooted using a purely technical" }, { "start": 1980.6, "end": 1982.8799999999999, "text": " approach." }, { "start": 1982.8799999999999, "end": 1988.12, "text": " It's just I mean, what is what is this?" }, { "start": 1988.12, "end": 1993.94, "text": " This is just trying to discredit this kind of technical, technical going about solving" }, { "start": 1993.94, "end": 1994.94, "text": " this problem." }, { "start": 1994.94, "end": 1999.88, "text": " I'm pretty sure if someone comes to me and says here, I have this tool, and I can mathematically" }, { "start": 1999.88, "end": 2006.26, "text": " prove to you that it's not biased, then it's not then the problem is solved." }, { "start": 2006.26, "end": 2014.72, "text": " And also, I really don't see how the person training the algorithm, or the person researching" }, { "start": 2014.72, "end": 2019.8400000000001, "text": " such an algorithm has any influence over how the algorithm works, because they're not the" }, { "start": 2019.84, "end": 2025.6399999999999, "text": " ones making the data set, or if they are, yeah, then they can make a better data set." }, { "start": 2025.6399999999999, "end": 2031.3999999999999, "text": " Also, if a person comes and makes a better data set, that will fix the problem." }, { "start": 2031.3999999999999, "end": 2036.1999999999998, "text": " And it doesn't matter what skin color the person has that makes the better data set." }, { "start": 2036.1999999999998, "end": 2042.82, "text": " So all of this, this link is just not demonstrated here, or anywhere here at all." }, { "start": 2042.82, "end": 2048.56, "text": " But this this here is the closest Amazon that this report actually comes to making this" }, { "start": 2048.56, "end": 2049.56, "text": " point." }, { "start": 2049.56, "end": 2055.64, "text": " And I said before, I drew that drew this thing workforce AI bias, right?" }, { "start": 2055.64, "end": 2061.86, "text": " So this this link since it here the AI system is used for hiring the workforce." }, { "start": 2061.86, "end": 2069.22, "text": " So at least one could make a claim that this link is somewhat demonstrated." }, { "start": 2069.22, "end": 2075.38, "text": " But I this it's a weak case, I would agree, but this is the closest they come." }, { "start": 2075.38, "end": 2082.2000000000003, "text": " So that and but then to go this direction, you have to somehow argue, well, the workforce" }, { "start": 2082.2000000000003, "end": 2088.02, "text": " somehow makes the AI system bias, no, the workforce influences the data set." }, { "start": 2088.02, "end": 2093.9, "text": " If the AI is trained, so if a hiring AI, how do you train a hiring AI, you optimally train" }, { "start": 2093.9, "end": 2095.7200000000003, "text": " it on the performance." }, { "start": 2095.7200000000003, "end": 2101.82, "text": " So this this employee here is going to have a performance over time, right?" }, { "start": 2101.82, "end": 2104.5, "text": " And the AI system will look at that performance over time." 
}, { "start": 2104.5, "end": 2109.7, "text": " So if the AI system even if it's initially biased, because it learns from the risk recruiters," }, { "start": 2109.7, "end": 2118.56, "text": " it will learn that, okay, actually, if I always forgo these women, then I don't get as much" }, { "start": 2118.56, "end": 2121.86, "text": " performance of a workforce, so I should correct for that." }, { "start": 2121.86, "end": 2130.02, "text": " So if you train the AI system on a good metric, then then then this problem will leave even" }, { "start": 2130.02, "end": 2131.02, "text": " out itself." }, { "start": 2131.02, "end": 2138.42, "text": " But again, this Yeah, this this is this could be considered like one point in the argument," }, { "start": 2138.42, "end": 2140.58, "text": " but I think it's a very weak point." }, { "start": 2140.58, "end": 2146.04, "text": " And only because the AI system is actually used for hiring, where I think the point they're" }, { "start": 2146.04, "end": 2152.74, "text": " making is a much larger one is the general bias in the AI systems contributes to the" }, { "start": 2152.74, "end": 2153.74, "text": " workforce imbalances." }, { "start": 2153.74, "end": 2159.44, "text": " And there you somehow have to say that, okay, the AI system somehow influences society at" }, { "start": 2159.44, "end": 2165.98, "text": " large and society at large then go leads to the workforce being skewed." }, { "start": 2165.98, "end": 2171.7400000000002, "text": " I don't Yeah, that it's just not strong enough, in my opinion." }, { "start": 2171.7400000000002, "end": 2176.18, "text": " And the other direction also isn't isn't strong here." }, { "start": 2176.18, "end": 2180.54, "text": " But again, the examples only get weaker from here on." }, { "start": 2180.54, "end": 2185.66, "text": " They go on to say, this is just one of many examples that show how the functional logics" }, { "start": 2185.66, "end": 2189.8599999999997, "text": " of a given technology echo the gender and racial dynamics of the industry that produced" }, { "start": 2189.8599999999997, "end": 2190.8599999999997, "text": " it here." }, { "start": 2190.8599999999997, "end": 2194.66, "text": " Yeah, this, that's the claim they're making to echo the gender and racial dynamics." }, { "start": 2194.66, "end": 2200.18, "text": " And they're actually making a stronger claim, namely a causal claim." }, { "start": 2200.18, "end": 2205.8199999999997, "text": " They give the other example of the Amazon's recognition facial analysis service previously" }, { "start": 2205.8199999999997, "end": 2210.54, "text": " demonstrated gender and racial biases worse than those of comparable tools." }, { "start": 2210.54, "end": 2215.94, "text": " So it failed to see dark skinned women while being most proficient at detecting likes light" }, { "start": 2215.94, "end": 2218.42, "text": " skinned men." }, { "start": 2218.42, "end": 2224.5, "text": " And they later go into this example again, where they basically also state yes, this" }, { "start": 2224.5, "end": 2231.3, "text": " is an issue of the data set, the data set being much more comprised of white men." }, { "start": 2231.3, "end": 2236.02, "text": " And they say, but then they have to kind of make the turnaround argument and say, well," }, { "start": 2236.02, "end": 2242.82, "text": " the data set is a reflection of society and society, you know, part of society is the" }, { "start": 2242.82, "end": 2243.82, "text": " workforce." 
}, { "start": 2243.82, "end": 2248.78, "text": " And it's just not, I mean, it's again, this argument only works if you already believe" }, { "start": 2248.78, "end": 2249.78, "text": " the conclusion." }, { "start": 2249.78, "end": 2257.14, "text": " Otherwise, there's actually no argument there or no solid one." }, { "start": 2257.14, "end": 2262.72, "text": " But what they do here is they say Amazon's initial response to such criticism has been" }, { "start": 2262.72, "end": 2267.7, "text": " to try and discredit the research behind it." }, { "start": 2267.7, "end": 2270.8799999999997, "text": " This reaction, or let's let's first discuss this." }, { "start": 2270.8799999999997, "end": 2278.02, "text": " So the Amazon, yeah, Amazon, of course, being the accused here and a multi billion dollar" }, { "start": 2278.02, "end": 2283.8999999999996, "text": " company and the criticism is something that is PR wise very bad for them." }, { "start": 2283.8999999999996, "end": 2289.2999999999997, "text": " They discredit the research tried to discredit the research behind it." }, { "start": 2289.3, "end": 2292.7400000000002, "text": " It's understandable that this could be dishonest from Amazon side, right?" }, { "start": 2292.7400000000002, "end": 2293.7400000000002, "text": " I mean, they're getting attacked." }, { "start": 2293.7400000000002, "end": 2297.82, "text": " It's like, you know, the tobacco companies trying to discredit the smoking research," }, { "start": 2297.82, "end": 2300.5800000000004, "text": " but still, I mean, that doesn't mean it's wrong." }, { "start": 2300.5800000000004, "end": 2303.98, "text": " It could actually be bad research, right?" }, { "start": 2303.98, "end": 2308.5800000000004, "text": " You have to actually go and look at what's Amazon saying, what is the research really" }, { "start": 2308.5800000000004, "end": 2309.5800000000004, "text": " doing?" }, { "start": 2309.5800000000004, "end": 2313.54, "text": " Is Amazon right or wrong?" }, { "start": 2313.54, "end": 2317.5, "text": " Completely open that Amazon is wrong here, but you still have to go look." }, { "start": 2317.5, "end": 2321.1, "text": " And this citation here, I've tried this citation here." }, { "start": 2321.1, "end": 2324.94, "text": " This one isn't to a to Amazon's response." }, { "start": 2324.94, "end": 2330.94, "text": " It's to like a medium article and the medium article doesn't even include Amazon's response." }, { "start": 2330.94, "end": 2332.86, "text": " I've looked, maybe I haven't seen it." }, { "start": 2332.86, "end": 2335.98, "text": " It doesn't also doesn't link Amazon's response." }, { "start": 2335.98, "end": 2340.46, "text": " Maybe it links something that links something or that includes it in some way." }, { "start": 2340.46, "end": 2346.58, "text": " But basically this medium article only states, yeah, Amazon has been denying this or Amazon" }, { "start": 2346.58, "end": 2348.74, "text": " has been critical of this." }, { "start": 2348.74, "end": 2353.94, "text": " And if you state such a sentence, Amazon's initial response to such criticism has been" }, { "start": 2353.94, "end": 2355.7799999999997, "text": " to try and discredit the research behind it." }, { "start": 2355.7799999999997, "end": 2362.7799999999997, "text": " I at least expect the citation to lead me to Amazon's response so that I can verify what" }, { "start": 2362.7799999999997, "end": 2363.7799999999997, "text": " they're saying." 
}, { "start": 2363.7799999999997, "end": 2364.7799999999997, "text": " Right." }, { "start": 2364.7799999999997, "end": 2373.98, "text": " So this, I mean, I don't know, willing to chalk it up to incompetence rather than malice." }, { "start": 2373.98, "end": 2381.5, "text": " Right, but then they go on and they say this reaction is evidence of the wider problem." }, { "start": 2381.5, "end": 2387.82, "text": " The research was conducted by two well-regarded AI researchers who are women of color." }, { "start": 2387.82, "end": 2393.1, "text": " By attempting to publicly discredit their expertise and research methods, Amazon is" }, { "start": 2393.1, "end": 2398.14, "text": " reinforcing the same kinds of prejudice and derasers that the research critiques." }, { "start": 2398.14, "end": 2403.34, "text": " Yeah, here you go straight to the identity of the researchers." }, { "start": 2403.34, "end": 2405.98, "text": " Like play the race card straight out." }, { "start": 2405.98, "end": 2409.54, "text": " I mean, this is maximum dishonesty, right?" }, { "start": 2409.54, "end": 2415.1800000000003, "text": " Except if Amazon said something like, well, these women of color, clearly because they're" }, { "start": 2415.1800000000003, "end": 2419.06, "text": " women of color, they have no idea what they're doing or something like this." }, { "start": 2419.06, "end": 2425.2200000000003, "text": " This is basically it's coded language for saying either saying you're not allowed to" }, { "start": 2425.22, "end": 2433.74, "text": " criticize people of color because they're a minority or you're basically saying Amazon" }, { "start": 2433.74, "end": 2437.8999999999996, "text": " is racist and that's why they criticize them." }, { "start": 2437.8999999999996, "end": 2440.98, "text": " They just don't take them seriously because they're women of color." }, { "start": 2440.98, "end": 2443.7599999999998, "text": " I mean, both are both are abhorrent." }, { "start": 2443.7599999999998, "end": 2448.2999999999997, "text": " This is just dishonesty really stated here too." }, { "start": 2448.2999999999997, "end": 2454.22, "text": " I mean, again, I'm perfectly willing to accept that Amazon's critique of this research is" }, { "start": 2454.22, "end": 2460.2999999999997, "text": " wrong and is not well intended because they're the ones attacked, but you still have to examine" }, { "start": 2460.2999999999997, "end": 2468.4199999999996, "text": " it rather than say, well, they shoot against women of color and therefore somehow that" }, { "start": 2468.4199999999996, "end": 2474.5, "text": " makes their counter argument irrelevant or even racist or something." }, { "start": 2474.5, "end": 2476.1, "text": " That's I don't know." }, { "start": 2476.1, "end": 2477.8999999999996, "text": " I find this dishonest." }, { "start": 2477.8999999999996, "end": 2483.58, "text": " Yeah, I don't know about you." }, { "start": 2483.58, "end": 2485.5, "text": " Moving on." 
}, { "start": 2485.5, "end": 2496.42, "text": " So they go on and state a number of examples of bias and discrimination in the workforce" }, { "start": 2496.42, "end": 2504.46, "text": " and they a lot of times they make a mixture of the gender and race imbalance in workforce" }, { "start": 2504.46, "end": 2512.02, "text": " and things like sexual harassment not being taken seriously by the companies and also" }, { "start": 2512.02, "end": 2521.94, "text": " the things like gender or race pay gaps, which I'm open to accept that these things exist" }, { "start": 2521.94, "end": 2525.34, "text": " and are even intertwined." }, { "start": 2525.34, "end": 2530.34, "text": " But just to tell you what's happening because we're kind of skipping but it's kind of a" }, { "start": 2530.34, "end": 2532.62, "text": " mixture of these things." }, { "start": 2532.62, "end": 2535.46, "text": " So they say these issues are systemic." }, { "start": 2535.46, "end": 2539.94, "text": " There's a close relationship between these workplaces with discriminatory practices and" }, { "start": 2539.94, "end": 2546.7000000000003, "text": " discriminatory tools, a feedback loop that is shaping the industry and its tools." }, { "start": 2546.7000000000003, "end": 2552.06, "text": " So again here to state, I think I've stated it enough now that or demonstrated enough" }, { "start": 2552.06, "end": 2558.2200000000003, "text": " that I'm really representing their arguments as they intended it to namely that there is" }, { "start": 2558.2200000000003, "end": 2564.46, "text": " this kind of causal links and loop between these two things." }, { "start": 2564.46, "end": 2572.06, "text": " And they shoot against the fairness literature by saying from this perspective, locating" }, { "start": 2572.06, "end": 2577.94, "text": " individual biases within given technical systems and attempting to fix them by tweaking the" }, { "start": 2577.94, "end": 2582.94, "text": " system becomes an exercise in futility." }, { "start": 2582.94, "end": 2587.02, "text": " Only by examining discrimination through the lens of social logics, who it benefits, who" }, { "start": 2587.02, "end": 2592.18, "text": " it harms and how can we see the workings of these systems in the context of existing power" }, { "start": 2592.18, "end": 2593.18, "text": " relationships." }, { "start": 2593.18, "end": 2599.7, "text": " So they say these issues aren't technically fixing these systems won't help." }, { "start": 2599.7, "end": 2600.7, "text": " If that's the problem." }, { "start": 2600.7, "end": 2607.62, "text": " And I agree, if that causal link actually exists, then technically fixing the system" }, { "start": 2607.62, "end": 2608.8999999999996, "text": " might not solve the problem." }, { "start": 2608.8999999999996, "end": 2609.8999999999996, "text": " Not even sure." }, { "start": 2609.8999999999996, "end": 2615.58, "text": " I mean, if you technically fix a system like this, then you technically break the causal" }, { "start": 2615.58, "end": 2617.7, "text": " link and thereby fix the problem." }, { "start": 2617.7, "end": 2624.1, "text": " I would not sure, but again, this is based on the hypothesis that they've already reached," }, { "start": 2624.1, "end": 2630.3399999999997, "text": " like demonstrated their, their conclusion, which they haven't and which they are not" }, { "start": 2630.3399999999997, "end": 2632.8599999999997, "text": " in the entire article." 
}, { "start": 2632.8599999999997, "end": 2641.2999999999997, "text": " Yeah, so the next section goes into who makes AI so I don't know about you, but this section" }, { "start": 2641.3, "end": 2648.1000000000004, "text": " was titled how workforces and AI systems interact." }, { "start": 2648.1000000000004, "end": 2655.34, "text": " And apart from one, the AI system being used for hiring the workforce, which is said this" }, { "start": 2655.34, "end": 2662.9, "text": " one instance where actually there could be one causal direction from bias to different" }, { "start": 2662.9, "end": 2664.78, "text": " misrepresentation the workforce." }, { "start": 2664.78, "end": 2671.38, "text": " Other than that, there isn't really anything in there that really shows how these two interact," }, { "start": 2671.38, "end": 2673.46, "text": " especially in a in a causal way." }, { "start": 2673.46, "end": 2682.82, "text": " Alright, the next section is called who makes AI is broadly about the about the gender and" }, { "start": 2682.82, "end": 2688.6200000000003, "text": " race imbalances or miss not unequal representation in the workforce." }, { "start": 2688.62, "end": 2698.2599999999998, "text": " And we're going to skip this diversity statistics that kind of that discuss that diversity statistics" }, { "start": 2698.2599999999998, "end": 2706.54, "text": " of companies aren't really accurate, or can be, you know, massaged kind of by the companies," }, { "start": 2706.54, "end": 2709.9, "text": " which you know, is true." }, { "start": 2709.9, "end": 2714.46, "text": " Definitely companies will always try to maximize their profits." }, { "start": 2714.46, "end": 2722.62, "text": " And even if they give out such a report, so that definitely critical thinking is in order." }, { "start": 2722.62, "end": 2729.5, "text": " Alright, so the next section is called the discrimination feedback loop." }, { "start": 2729.5, "end": 2734.18, "text": " Right, if so if in the earlier section, you felt like here we go into the meat, then you" }, { "start": 2734.18, "end": 2740.78, "text": " must feel with this title, like, okay, we're actually going to see how this loop works" }, { "start": 2740.78, "end": 2748.7000000000003, "text": " and how the two things are really linked, like how one causes the other and vice versa." }, { "start": 2748.7000000000003, "end": 2750.02, "text": " So let's jump in." }, { "start": 2750.02, "end": 2758.38, "text": " They say AI systems increasingly play a role in our social and political institutions," }, { "start": 2758.38, "end": 2762.2200000000003, "text": " including education, healthcare, hiring, criminal justice." }, { "start": 2762.2200000000003, "end": 2769.38, "text": " Yes, therefore, we need to consider the relationship between the workplace diversity crisis and" }, { "start": 2769.38, "end": 2774.06, "text": " the problems with bias and discrimination in AI systems." }, { "start": 2774.06, "end": 2783.94, "text": " No, why I don't see how therefore, but yeah, so I don't see how therefore we need to consider" }, { "start": 2783.94, "end": 2784.94, "text": " the relationship." }, { "start": 2784.94, "end": 2789.58, "text": " Okay, if there is a relationship, we need to consider whether there's a relationship." }, { "start": 2789.58, "end": 2792.38, "text": " Okay, granted." }, { "start": 2792.38, "end": 2797.1600000000003, "text": " So they say fairness, accountability and transparency research is playing an emerging role." 
}, { "start": 2797.16, "end": 2802.62, "text": " Now what they mean here is the aspect of fairness, accountability and transparency research that" }, { "start": 2802.62, "end": 2804.3799999999997, "text": " shows that there is a problem." }, { "start": 2804.3799999999997, "end": 2809.5, "text": " So I told you there's two sides, one side is showing there is a problem in current systems" }, { "start": 2809.5, "end": 2811.42, "text": " and the other side is trying to fix them." }, { "start": 2811.42, "end": 2818.46, "text": " So they're very much fans of the side that shows that there is a problem and they use" }, { "start": 2818.46, "end": 2823.94, "text": " show some of these problems here, we've already seen some but they show some more like Facebook's" }, { "start": 2823.94, "end": 2828.98, "text": " ad delivery systems let users to be shown as for housing and employment in a discriminatory" }, { "start": 2828.98, "end": 2829.98, "text": " manner." }, { "start": 2829.98, "end": 2836.9, "text": " So giving 2019 study found significant racial bias in a widely used commercial algorithm" }, { "start": 2836.9, "end": 2843.02, "text": " used to determine whether patients will be enrolled in care management programs." }, { "start": 2843.02, "end": 2855.1, "text": " So these are these are just examples of these AI systems being biased." }, { "start": 2855.1, "end": 2861.02, "text": " So they go into this say taking a contextualized view may enable more extensive account and" }, { "start": 2861.02, "end": 2866.86, "text": " the contextualized view they when they say this they mean anything more than just a technical" }, { "start": 2866.86, "end": 2870.02, "text": " approach at solving these problems." }, { "start": 2870.02, "end": 2874.62, "text": " More extensive account of bias to emerge future work could examine the politics of system" }, { "start": 2874.62, "end": 2881.58, "text": " design study how AI systems in situated reality and study AI systems in situated realities" }, { "start": 2881.58, "end": 2888.18, "text": " ask why a system was designed in a particular way, how it was constructed, whose interest" }, { "start": 2888.18, "end": 2894.34, "text": " it shaped shaped by the metrics in which its success or failure is assessed, rather than" }, { "start": 2894.34, "end": 2898.9, "text": " solely focusing on improving existing data sets or individual algorithms." }, { "start": 2898.9, "end": 2901.02, "text": " Yeah, I agree." }, { "start": 2901.02, "end": 2906.46, "text": " I mean, we always have to we always have to pay attention to these things, especially" }, { "start": 2906.46, "end": 2913.46, "text": " like looking at the metrics by which its success or failure is assessed." }, { "start": 2913.46, "end": 2922.1, "text": " But a lot of times this is this is rather straightforward in kind of if you look at" }, { "start": 2922.1, "end": 2929.06, "text": " the metric, the metric most often, especially in commercial applications is money, right?" }, { "start": 2929.06, "end": 2936.62, "text": " So the metric of like an ad showing system, like if I have a system to recommend ads to" }, { "start": 2936.62, "end": 2943.7599999999998, "text": " people, show people ads and personalize them and so on, I simply want to maximize my revenue." }, { "start": 2943.7599999999998, "end": 2946.7, "text": " So I want to sell someone something." }, { "start": 2946.7, "end": 2952.8199999999997, "text": " And everything I want to know is how likely is it that person is going to buy that thing?" 
}, { "start": 2952.8199999999997, "end": 2953.8199999999997, "text": " Right?" }, { "start": 2953.8199999999997, "end": 2956.7799999999997, "text": " I that's basically Yeah." }, { "start": 2956.7799999999997, "end": 2965.7599999999998, "text": " So in essence, sometimes it's really valuable to consider what capitalism is." }, { "start": 2965.7599999999998, "end": 2975.2999999999997, "text": " So in capitalism in so capitalism, these kind of this system we're working on is kind of" }, { "start": 2975.3, "end": 2980.1000000000004, "text": " a form of limited capitalism, but mostly mostly capitalism." }, { "start": 2980.1000000000004, "end": 2984.3, "text": " And capitalism is very greedy." }, { "start": 2984.3, "end": 2990.42, "text": " So capitalism, all corporations want to do basically is make money." }, { "start": 2990.42, "end": 2998.02, "text": " And that is and on the other side, you have discrimination." }, { "start": 2998.02, "end": 3004.76, "text": " So discrimination meaning these unequal represent like unequal distribution actively." }, { "start": 3004.76, "end": 3009.4, "text": " So and often sometimes these go hand in hand, sometimes you can make more money by discriminating" }, { "start": 3009.4, "end": 3010.82, "text": " against a certain type of people." }, { "start": 3010.82, "end": 3013.26, "text": " And that's, that's a really bad scenario." }, { "start": 3013.26, "end": 3018.5200000000004, "text": " Like that's a very, like, this is really something where we need to take action." }, { "start": 3018.5200000000004, "end": 3025.9, "text": " But a lot of times, a lot of times, these two things stand in opposition to each other." }, { "start": 3025.9, "end": 3030.78, "text": " So little arrow here, non compatible." }, { "start": 3030.78, "end": 3041.82, "text": " That means if I want to sell someone something, then I maximize my profit by not caring by" }, { "start": 3041.82, "end": 3047.42, "text": " accurately assessing how likely is it that person buys that thing." }, { "start": 3047.42, "end": 3053.2200000000003, "text": " If I want to discriminate here, if I want to discriminate, start discriminating, according" }, { "start": 3053.2200000000003, "end": 3059.76, "text": " to skin color saying like, No, I don't like that this person with the skin color is able" }, { "start": 3059.76, "end": 3065.2200000000003, "text": " to buy this product, I want to kind of keep them down, and so on, then I forgo profit," }, { "start": 3065.2200000000003, "end": 3073.1400000000003, "text": " right, then I actually, even though this person could buy this thing, I forego that." }, { "start": 3073.1400000000003, "end": 3077.6200000000003, "text": " So often these things are in direct opposition to each other." }, { "start": 3077.6200000000003, "end": 3084.1000000000004, "text": " Also, if I am in charge of hiring, and I don't like people of a certain gender, but they" }, { "start": 3084.1000000000004, "end": 3088.94, "text": " would actually be really, really good, whatever, good employees." }, { "start": 3088.94, "end": 3097.7000000000003, "text": " So I forgo that, that means I'm getting a pay more for less qualified people just because" }, { "start": 3097.7000000000003, "end": 3107.32, "text": " I'm biased and I'm down ranking unjustifiably, these people of the gender I don't like." }, { "start": 3107.32, "end": 3115.92, "text": " So oftentimes, you have to ask yourself, are people fundamentally greedy, or discriminatory?" 
}, { "start": 3115.92, "end": 3116.92, "text": " Which are they more?" }, { "start": 3116.92, "end": 3120.2200000000003, "text": " If push comes to shove, would they rather have more money?" }, { "start": 3120.2200000000003, "end": 3127.26, "text": " Or would they rather keep their own race and gender group in power?" }, { "start": 3127.26, "end": 3133.94, "text": " And with just, yeah, so the and you have to ask this of corporations, you have to ask" }, { "start": 3133.94, "end": 3135.7400000000002, "text": " this of people." }, { "start": 3135.7400000000002, "end": 3144.58, "text": " And in my experience and view, like people are much, much more greedy than they are willing" }, { "start": 3144.58, "end": 3150.7799999999997, "text": " to discriminate and give up money for discrimination." }, { "start": 3150.7799999999997, "end": 3158.02, "text": " And so if we look at metrics by which success or failure of AI systems are designed, then" }, { "start": 3158.02, "end": 3165.66, "text": " I would argue a lot of the times metrics are actually profit incentives." }, { "start": 3165.66, "end": 3172.2599999999998, "text": " And especially if we look at data set construction, if there is a skewed data set that makes my" }, { "start": 3172.26, "end": 3178.38, "text": " AI system be biased, that actually loses me money and the company would profit a lot from" }, { "start": 3178.38, "end": 3180.0600000000004, "text": " building a better data set." }, { "start": 3180.0600000000004, "end": 3186.38, "text": " So looking at kind of metrics actually makes a lot of sense to me and very much in favor" }, { "start": 3186.38, "end": 3187.78, "text": " of that." }, { "start": 3187.78, "end": 3192.84, "text": " And I think by designing accurate metrics and then getting the best possible information," }, { "start": 3192.84, "end": 3198.5800000000004, "text": " the best possible data sets to maximize these metrics will oftentimes actually eliminate" }, { "start": 3198.5800000000004, "end": 3199.98, "text": " such forms of discrimination." }, { "start": 3199.98, "end": 3205.5, "text": " Again, there are situations where they don't, we have to be very cognizant of these." }, { "start": 3205.5, "end": 3211.7, "text": " They go into this and they say, also examine more thoroughly how societal discrimination" }, { "start": 3211.7, "end": 3217.3, "text": " surfaces in data provenance, examining the history and process of data set construction" }, { "start": 3217.3, "end": 3221.3, "text": " and considering how cultural norms and stereotypes were enumerated and represented at the time" }, { "start": 3221.3, "end": 3222.44, "text": " of data creation." }, { "start": 3222.44, "end": 3223.62, "text": " This is a big issue." }, { "start": 3223.62, "end": 3224.62, "text": " Yes." }, { "start": 3224.62, "end": 3230.3399999999997, "text": " The data set construction kind of at the time of data creation and so on, this is a big" }, { "start": 3230.3399999999997, "end": 3232.62, "text": " issue in these systems and a lot of bias." }, { "start": 3232.62, "end": 3238.02, "text": " And I would argue most of the bias we've seen here arises from corrupt data sets and from" }, { "start": 3238.02, "end": 3241.42, "text": " data sets that were constructed in an already biased way." }, { "start": 3241.42, "end": 3247.38, "text": " And the AI system trained on these data sets simply replicates this bias." }, { "start": 3247.38, "end": 3252.74, "text": " So I think that's very correct here." 
}, { "start": 3252.74, "end": 3258.74, "text": " They go into this example, they say the labeled faces in the wild data set contains over 15,000" }, { "start": 3258.74, "end": 3259.8599999999997, "text": " images." }, { "start": 3259.8599999999997, "end": 3262.8999999999996, "text": " Only 7% of images are of black people." }, { "start": 3262.8999999999996, "end": 3270.54, "text": " This is because these, the media landscape of the early 2000s, these images were gathered" }, { "start": 3270.54, "end": 3275.3799999999997, "text": " from the news media at the time, predominantly featured white men in positions of celebrity" }, { "start": 3275.3799999999997, "end": 3276.9799999999996, "text": " and power." }, { "start": 3276.9799999999996, "end": 3278.9399999999996, "text": " This exactly." }, { "start": 3278.94, "end": 3284.86, "text": " So if you train a system on this data set, the system will inherit this bias." }, { "start": 3284.86, "end": 3290.14, "text": " Yeah, so this is a classic example of a corrupt data set." }, { "start": 3290.14, "end": 3293.38, "text": " Also this isn't only with race and gender." }, { "start": 3293.38, "end": 3299.82, "text": " This is also if you like take pictures from IMDB, yes, a lot of this currently Celeb A" }, { "start": 3299.82, "end": 3304.2200000000003, "text": " data set that is used in all the GAN research is collected from IMDB." }, { "start": 3304.22, "end": 3311.4599999999996, "text": " You probably have overly beautiful, like pretty face people on there." }, { "start": 3311.4599999999996, "end": 3316.06, "text": " So that your AI system, your generative model is only going to produce mostly pretty face" }, { "start": 3316.06, "end": 3324.04, "text": " people, since movie stars tend to be a lot prettier than the average humans." }, { "start": 3324.04, "end": 3332.22, "text": " So that the kind of data set construction process, I think is currently the biggest" }, { "start": 3332.22, "end": 3335.1, "text": " source of bias in AI." }, { "start": 3335.1, "end": 3339.18, "text": " But that also, it's interesting that they go into this here and they kind of want to" }, { "start": 3339.18, "end": 3347.3399999999997, "text": " make the point that this is because society and power in society, the data set reflects" }, { "start": 3347.3399999999997, "end": 3348.3399999999997, "text": " that." }, { "start": 3348.3399999999997, "end": 3354.4599999999996, "text": " But I would argue if someone makes a data set that doesn't have this bias, then the" }, { "start": 3354.4599999999996, "end": 3355.8199999999997, "text": " problem is solved." }, { "start": 3355.8199999999997, "end": 3357.4599999999996, "text": " And I don't care who makes the data set." }, { "start": 3357.46, "end": 3363.14, "text": " So the link between the workforce and the bias is really broken by an argument like" }, { "start": 3363.14, "end": 3367.94, "text": " this, because as soon as we have a correct data set, an unbiased data set, we can mitigate" }, { "start": 3367.94, "end": 3368.94, "text": " the bias." }, { "start": 3368.94, "end": 3373.82, "text": " And they even go, they go into this here." }, { "start": 3373.82, "end": 3378.1, "text": " They say, sorry." }, { "start": 3378.1, "end": 3385.76, "text": " Yeah, they say down here." 
}, { "start": 3385.76, "end": 3393.38, "text": " They say these people, these researchers have looked at these facial recognition systems" }, { "start": 3393.38, "end": 3398.1000000000004, "text": " and they assessed this what we saw earlier, higher error rates for darker skinned women" }, { "start": 3398.1000000000004, "end": 3402.6200000000003, "text": " than for any other group, lowest error rates for light skinned men." }, { "start": 3402.6200000000003, "end": 3408.78, "text": " To measure this disparity, these researchers developed a new data set that is more balanced," }, { "start": 3408.78, "end": 3411.5800000000004, "text": " both in terms of gender and skin color." }, { "start": 3411.5800000000004, "end": 3412.5800000000004, "text": " Good." }, { "start": 3412.58, "end": 3419.22, "text": " Problem, like make a larger data set to actually train on and then problem solved." }, { "start": 3419.22, "end": 3424.94, "text": " And I don't care at all what race and what gender these people are." }, { "start": 3424.94, "end": 3427.54, "text": " Well done." }, { "start": 3427.54, "end": 3432.38, "text": " Good people make a good data set like this." }, { "start": 3432.38, "end": 3434.14, "text": " And then we've solved the problem." }, { "start": 3434.14, "end": 3436.1, "text": " What's the problem here?" }, { "start": 3436.1, "end": 3443.46, "text": " Why would you ever care what these people look like if they do good work?" }, { "start": 3443.46, "end": 3447.9, "text": " That's to me, this actually breaks their own argument." }, { "start": 3447.9, "end": 3454.5, "text": " I don't know why they included here." }, { "start": 3454.5, "end": 3462.22, "text": " To me that to then suggest that there is a link to the workforces, if here is obvious" }, { "start": 3462.22, "end": 3470.22, "text": " that if you fix the data set, you can fix the recognition system." }, { "start": 3470.22, "end": 3483.2599999999998, "text": " All right, so we'll go on here, jump a couple more paragraphs." }, { "start": 3483.2599999999998, "end": 3489.66, "text": " Except when they say they shoot again against this kind of say to this point, a focus on" }, { "start": 3489.66, "end": 3494.18, "text": " fixing technical systems in isolation without examining their broader context of use and" }, { "start": 3494.18, "end": 3499.58, "text": " power and dynamics that attends issues is not limited in its intervention, it can actively" }, { "start": 3499.58, "end": 3501.02, "text": " cause harm." }, { "start": 3501.02, "end": 3506.58, "text": " So if you fix the problem in a technical manner, they argue here it can actively cause harm." }, { "start": 3506.58, "end": 3514.46, "text": " And the example they give is that facial and image recognition systems, they are often" }, { "start": 3514.46, "end": 3519.7400000000002, "text": " applied in service of police surveillance, which disproportionately harms poor people" }, { "start": 3519.7400000000002, "end": 3523.46, "text": " and communities of color." }, { "start": 3523.46, "end": 3530.78, "text": " So there's a quote from this person that says, is this not social progress to make black" }, { "start": 3530.78, "end": 3537.38, "text": " people equally visible to software that will inevitably be further weaponized against us?" }, { "start": 3537.38, "end": 3543.82, "text": " We are considered criminal and more surveillable by orders of magnitude." 
}, { "start": 3543.82, "end": 3548.98, "text": " Whatever claim to a right of privacy that we may have is diminished by a state that" }, { "start": 3548.98, "end": 3551.7000000000003, "text": " believes we must always be watched and seen." }, { "start": 3551.7000000000003, "end": 3557.02, "text": " So this is an example where by improving the facial recognition for black people, it makes" }, { "start": 3557.02, "end": 3559.94, "text": " the police better at surveilling them, which is true." }, { "start": 3559.94, "end": 3565.1400000000003, "text": " And then it is an ethical problem that the police is able to use these facial recognition" }, { "start": 3565.1400000000003, "end": 3566.7400000000002, "text": " systems to surveil people." }, { "start": 3566.7400000000002, "end": 3568.98, "text": " That's a massive privacy problem." }, { "start": 3568.98, "end": 3574.1, "text": " That's a massive problem in how much the state is allowed to overreach and so on." }, { "start": 3574.1, "end": 3581.38, "text": " So I think it's a discussion in itself, but here they argue because at the very beginning" }, { "start": 3581.38, "end": 3588.58, "text": " I asked you to remember this whole notion of we always have to look at who benefits" }, { "start": 3588.58, "end": 3595.82, "text": " from the way the AI system is constructed, who is harmed from that, who benefits from" }, { "start": 3595.82, "end": 3599.1400000000003, "text": " how the metrics are shaped and so on." }, { "start": 3599.1400000000003, "end": 3607.54, "text": " In this case, we actually have a perfect example where if the face recognition system is very" }, { "start": 3607.54, "end": 3615.26, "text": " inaccurate for black people's faces, that actually helps them in the societal context." }, { "start": 3615.26, "end": 3626.94, "text": " So by logic of this report here, that must mean that somehow the bias works for them" }, { "start": 3626.94, "end": 3630.78, "text": " and thereby the system is good or something like this." }, { "start": 3630.78, "end": 3632.86, "text": " And by fixing it, you actually make it worse." }, { "start": 3632.86, "end": 3635.6000000000004, "text": " Yeah, they say it can actively cause harm." }, { "start": 3635.6000000000004, "end": 3641.78, "text": " So I think this is pretty much arguing against themselves earlier where they say, oh, we" }, { "start": 3641.78, "end": 3645.42, "text": " always have to look at who benefits from the system." }, { "start": 3645.42, "end": 3652.7000000000003, "text": " Yeah, here, if the face recognition system can't recognize you, you actually benefit." }, { "start": 3652.7000000000003, "end": 3659.0600000000004, "text": " So I don't think that argument works in any case except if you only look at it when you" }, { "start": 3659.0600000000004, "end": 3662.42, "text": " want to look at it." }, { "start": 3662.42, "end": 3672.1, "text": " All right, so we're going to jump a couple of sections here." }, { "start": 3672.1, "end": 3677.06, "text": " But the core thing here was the feedback loop." }, { "start": 3677.06, "end": 3680.78, "text": " And again, the feedback loop isn't demonstrated at all here." }, { "start": 3680.78, "end": 3687.06, "text": " Just examples of systems that are biased and of data sets that are biased, because of data" }, { "start": 3687.06, "end": 3689.58, "text": " sets that are biased." }, { "start": 3689.58, "end": 3697.2999999999997, "text": " But there's no demonstration of how the workforce, I mean, yeah, just take this previous argument." 
}, { "start": 3697.2999999999997, "end": 3701.74, "text": " So the workforce is supposedly supremely white." }, { "start": 3701.74, "end": 3711.4, "text": " And it makes a face recognition system that makes that is performing poorly for darker" }, { "start": 3711.4, "end": 3713.86, "text": " skinned people." }, { "start": 3713.86, "end": 3718.44, "text": " And that actually in this context of police surveillance helps the darker skinned people" }, { "start": 3718.44, "end": 3721.18, "text": " compared to the lighter skinned people." }, { "start": 3721.18, "end": 3727.44, "text": " So that kind of is an exact counterexample to the argument that this misrepresentation" }, { "start": 3727.44, "end": 3732.56, "text": " in the workforce leads to the biases in the system." }, { "start": 3732.56, "end": 3738.62, "text": " If we interpret it through the lens, who it costs and who it benefits." }, { "start": 3738.62, "end": 3740.26, "text": " All right." }, { "start": 3740.26, "end": 3745.66, "text": " So the next section is corporate diversity beyond the pipeline problem." }, { "start": 3745.66, "end": 3750.7799999999997, "text": " And this is kind of an odd inclusion when I read it first to interpret to go against" }, { "start": 3750.7799999999997, "end": 3754.14, "text": " the pipeline problem here." }, { "start": 3754.14, "end": 3758.5, "text": " But it kind of makes sense if you know what these people set out to do." }, { "start": 3758.5, "end": 3765.2599999999998, "text": " So what these people set out to do is to argue we must fix the workforce, right?" }, { "start": 3765.2599999999998, "end": 3772.1, "text": " We must fix the, we must hire more people of color, more women and so on, promote them" }, { "start": 3772.1, "end": 3773.1, "text": " more." }, { "start": 3773.1, "end": 3778.14, "text": " And they have a very much have a problem with this pipeline argument." }, { "start": 3778.14, "end": 3780.62, "text": " What the pipeline argument is, is the following." }, { "start": 3780.62, "end": 3786.02, "text": " So at the beginning, if you consider like the educational or career paths of people," }, { "start": 3786.02, "end": 3792.22, "text": " then you have like 100% of people that's represented at this at the beginning, and then most of" }, { "start": 3792.22, "end": 3794.02, "text": " these people go through school." }, { "start": 3794.02, "end": 3795.8199999999997, "text": " So most of these go on." }, { "start": 3795.8199999999997, "end": 3799.86, "text": " This is kind of the area in here is the population." }, { "start": 3799.86, "end": 3803.58, "text": " And then some of them pursue higher education like some drop out." }, { "start": 3803.58, "end": 3806.7000000000003, "text": " So this gets a smaller amount." }, { "start": 3806.7000000000003, "end": 3811.6200000000003, "text": " So this is here, this is time and this is kind of volume of people." }, { "start": 3811.6200000000003, "end": 3816.2200000000003, "text": " And then very few go into computer science, right?" }, { "start": 3816.2200000000003, "end": 3818.7400000000002, "text": " And then even fewer go into AI." }, { "start": 3818.7400000000002, "end": 3824.86, "text": " So what you end up is just a tiny sliver of people that actually go into AI." 
}, { "start": 3824.86, "end": 3831.3, "text": " So this is called a pipeline, and we have various junctions here like where you would" }, { "start": 3831.3, "end": 3835.54, "text": " go into higher education, where you would choose your major in university, where you" }, { "start": 3835.54, "end": 3844.34, "text": " would go into a subfield of computer science, where the kind of volume of people drops significantly" }, { "start": 3844.34, "end": 3846.7000000000003, "text": " from one point to the other." }, { "start": 3846.7000000000003, "end": 3853.26, "text": " And now if you compare this, if you compare this and use it say, we're not considered" }, { "start": 3853.26, "end": 3858.7000000000003, "text": " all of society, but here over here we'll call consider all just men and over here we'll" }, { "start": 3858.7000000000003, "end": 3864.26, "text": " consider all women again, they all go to high school and then university and then maybe" }, { "start": 3864.26, "end": 3869.0200000000004, "text": " very few go to CS, even fewer go to AI." }, { "start": 3869.0200000000004, "end": 3874.94, "text": " What you'll find is, and I've drawn it maybe wrong here, is that this is smaller than this." }, { "start": 3874.94, "end": 3883.86, "text": " So if you comparatively look at how many males end up in the AI field, you will find that" }, { "start": 3883.86, "end": 3889.46, "text": " fewer end up in more and will end up in our field than women." }, { "start": 3889.46, "end": 3891.62, "text": " If you comparatively look at it." }, { "start": 3891.62, "end": 3902.9, "text": " So at and this is over time, like at the beginning, you have 5050 main women distribution in society," }, { "start": 3902.9, "end": 3911.58, "text": " almost I guess, I think slightly more boys are born, but I could be wrong about this." }, { "start": 3911.58, "end": 3918.94, "text": " And then as you go through time here, excuse that I believe." }, { "start": 3918.94, "end": 3923.26, "text": " So you go through high school and let's just assume like high school is still kind of equal," }, { "start": 3923.26, "end": 3924.92, "text": " it depends on the country." }, { "start": 3924.92, "end": 3932.2400000000002, "text": " Then you go to university, where there's actually more women at university slightly." }, { "start": 3932.24, "end": 3936.5, "text": " And then you go into computer science and in computer science, and this is just relative" }, { "start": 3936.5, "end": 3939.3799999999997, "text": " here, that's why I kind of norm it at 100%." }, { "start": 3939.3799999999997, "end": 3943.02, "text": " Otherwise these things would go down all of them at the same time." }, { "start": 3943.02, "end": 3950.1, "text": " But comparatively, you have then much more men than women in computer science." }, { "start": 3950.1, "end": 3956.4599999999996, "text": " And then if you see who chooses AI, I don't know if there's any statistics of specifically" }, { "start": 3956.4599999999996, "end": 3958.3399999999997, "text": " choosing AI from computer science." }, { "start": 3958.3399999999997, "end": 3961.3399999999997, "text": " I'm just going to assume that remains the same." }, { "start": 3961.34, "end": 3967.46, "text": " So if you look into the AI field, kind of this, this will stay the same." }, { "start": 3967.46, "end": 3971.82, "text": " So in the AI field, you have much more men than women." 
}, { "start": 3971.82, "end": 3978.38, "text": " And presumably, because you already have much more men than women choosing computer science" }, { "start": 3978.38, "end": 3985.1400000000003, "text": " as their major or choosing any technical field as their major." }, { "start": 3985.1400000000003, "end": 3987.82, "text": " This is kind of the so called pipeline argument." }, { "start": 3987.82, "end": 3990.58, "text": " So where do AI companies hiring come in?" }, { "start": 3990.58, "end": 3999.66, "text": " AI companies come in here, they hire at this point, after your university degree, presumably." }, { "start": 3999.66, "end": 4003.86, "text": " There's exceptions, but just say they hire after your university degree." }, { "start": 4003.86, "end": 4010.2599999999998, "text": " And therefore, they basically have to choose from this distribution." }, { "start": 4010.2599999999998, "end": 4015.1, "text": " And if they just say, okay, we'll just take the top, I don't know, 10% people will hire" }, { "start": 4015.1, "end": 4018.22, "text": " the good people of this, we don't care what gender they are." }, { "start": 4018.22, "end": 4026.7, "text": " Right, so the top 10% here, the top 10% here, then this will end up being the same distribution" }, { "start": 4026.7, "end": 4028.74, "text": " as you have graduates." }, { "start": 4028.74, "end": 4036.3799999999997, "text": " Right, so this is kind of the company, company hiring from an let's say an 80 20 distribution" }, { "start": 4036.3799999999997, "end": 4041.2599999999998, "text": " without looking at gender will end up with an 80 20 distribution." }, { "start": 4041.2599999999998, "end": 4045.02, "text": " That's the pipeline argument of companies." }, { "start": 4045.02, "end": 4049.7, "text": " And they don't like the pipeline argument, because the pipeline argument basically says" }, { "start": 4049.7, "end": 4052.58, "text": " that the problem is somewhere here, right?" }, { "start": 4052.58, "end": 4060.58, "text": " The problem isn't the company's hiring wrongly." }, { "start": 4060.58, "end": 4067.22, "text": " The problem isn't that the company's here, deselected, the problem is somewhere here." }, { "start": 4067.22, "end": 4070.7, "text": " And because they want to make the argument that the company should hire in a different" }, { "start": 4070.7, "end": 4073.36, "text": " way, they can't have that." }, { "start": 4073.36, "end": 4076.1, "text": " So they argue against it." }, { "start": 4076.1, "end": 4079.76, "text": " Now to argue against this would actually be very easy." }, { "start": 4079.76, "end": 4085.44, "text": " If this argument were wrong, like they claim the argument is is is not good, the pipeline" }, { "start": 4085.44, "end": 4087.58, "text": " argument isn't good." }, { "start": 4087.58, "end": 4092.52, "text": " If the pipeline argument were wrong, what you'd have to do is you would have to say," }, { "start": 4092.52, "end": 4098.1, "text": " you would have to say, hey, companies, look at that." }, { "start": 4098.1, "end": 4105.22, "text": " In your company, you have an 80 20 distribution men to women, right?" }, { "start": 4105.22, "end": 4106.780000000001, "text": " That's pretty unequal." }, { "start": 4106.780000000001, "end": 4112.14, "text": " And you know, in university graduates, the pool you choose from is actually 5050." }, { "start": 4112.14, "end": 4118.740000000001, "text": " So obviously, you're engaged in discriminatory hiring, because you know, the pool is 5050." 
}, { "start": 4118.740000000001, "end": 4127.42, "text": " There's no reason why it why your hiring practices should cause this inequality." }, { "start": 4127.42, "end": 4132.12, "text": " And therefore, we can clearly show you do discriminatory hiring, you should stop it," }, { "start": 4132.12, "end": 4136.42, "text": " you should definitely hire more women and people of color, more of these more of the" }, { "start": 4136.42, "end": 4141.82, "text": " minorities, because your hiring practices are the problem." }, { "start": 4141.82, "end": 4143, "text": " But that's not the case." }, { "start": 4143, "end": 4144.06, "text": " How do I know?" }, { "start": 4144.06, "end": 4146.9400000000005, "text": " Because if it were the case, they would simply state this." }, { "start": 4146.9400000000005, "end": 4151.7, "text": " Definitely in this report, if that were the case, that you could actually show with numbers" }, { "start": 4151.7, "end": 4156.14, "text": " that the pipeline argument is wrong, then they would absolutely do this." }, { "start": 4156.14, "end": 4163.1, "text": " That they have to like, go back and they have to like, ramble around it for several pages," }, { "start": 4163.1, "end": 4170.58, "text": " which will mostly skip but mainly because this is the case, it is the case that these" }, { "start": 4170.58, "end": 4178.660000000001, "text": " companies hire from a pool of of unequally represented people." }, { "start": 4178.66, "end": 4187.0599999999995, "text": " And the only argument that you can make is that, well, if if you were to equalize this" }, { "start": 4187.0599999999995, "end": 4193.98, "text": " here, then maybe here where the problem is that would fix like, so the argument is often" }, { "start": 4193.98, "end": 4201.66, "text": " made if young girls choosing their majors have no one to look up to, like no strong" }, { "start": 4201.66, "end": 4208.94, "text": " women in in corporation CEO roles, they will think that it's not a climate for women and" }, { "start": 4208.94, "end": 4213.7, "text": " they will elect not to go into these fields, which is a valid argument, like I'm completely" }, { "start": 4213.7, "end": 4216.66, "text": " open to that to that argument." }, { "start": 4216.66, "end": 4218.58, "text": " But it's the only argument you can make." }, { "start": 4218.58, "end": 4225.58, "text": " And still then, even if you determine this as the cause, I would still not support racist" }, { "start": 4225.58, "end": 4231.58, "text": " and sexist hiring practices like do something else like make them clear that the environment" }, { "start": 4231.58, "end": 4238.1, "text": " can be changed or change the environment, like change the if if it really is the case" }, { "start": 4238.1, "end": 4245.3, "text": " that it's kind of a non anti woman environment, change that." }, { "start": 4245.3, "end": 4250.82, "text": " If it's just the case that they perceive it as such change the perception, but do not" }, { "start": 4250.82, "end": 4256.42, "text": " engage in discriminatory hiring practices, because there's always someone losing out" }, { "start": 4256.42, "end": 4258.22, "text": " unfairly on these practices." }, { "start": 4258.22, "end": 4266.58, "text": " And that's, that's something I'm not willing to, to go into, like that's something I'm" }, { "start": 4266.58, "end": 4267.66, "text": " not willing to engage in." }, { "start": 4267.66, "end": 4271.46, "text": " And I don't think people should engage be engaging in that." 
}, { "start": 4271.46, "end": 4273.9400000000005, "text": " Actually, that's why it's illegal." }, { "start": 4273.9400000000005, "end": 4278.72, "text": " So let's, let's actually look at very few points." }, { "start": 4278.72, "end": 4285.780000000001, "text": " This is just why the so they claim they go kind of go over these pipeline studies." }, { "start": 4285.78, "end": 4291.179999999999, "text": " And they yeah, they say term used in industry to reference the absence of diverse candidates" }, { "start": 4291.179999999999, "end": 4296.139999999999, "text": " in the hiring pool of to justify the inability of large firms to achieve diversity due to" }, { "start": 4296.139999999999, "end": 4297.139999999999, "text": " scarcity." }, { "start": 4297.139999999999, "end": 4298.139999999999, "text": " Right?" }, { "start": 4298.139999999999, "end": 4306.42, "text": " So that's, they basically agree the of that on the definition that I stated here." }, { "start": 4306.42, "end": 4311.259999999999, "text": " So the companies that are challenged on their lack of diversity frequently site pipeline" }, { "start": 4311.259999999999, "end": 4315.5, "text": " studies as proof of the persistent challenge of finding enough women and people of color" }, { "start": 4315.5, "end": 4316.82, "text": " to hire." }, { "start": 4316.82, "end": 4323.3, "text": " Yes, and, and the yeah, but they say but the evidence suggests otherwise." }, { "start": 4323.3, "end": 4328.5, "text": " For example, in 2016, Facebook chief diversity officer wrote that it has become clear that" }, { "start": 4328.5, "end": 4332.52, "text": " at the most fundamental level, appropriate representation, technology or any other industry" }, { "start": 4332.52, "end": 4337.1, "text": " will depend upon more people having the opportunity to gain necessary skills through the public" }, { "start": 4337.1, "end": 4338.42, "text": " education system." }, { "start": 4338.42, "end": 4341.7, "text": " Well, yes, that's something I would agree." }, { "start": 4341.7, "end": 4348.82, "text": " And that's something clearly that addresses this region here." }, { "start": 4348.82, "end": 4353.5199999999995, "text": " Then and where the actual problem is happening." }, { "start": 4353.5199999999995, "end": 4359.54, "text": " So I would say that's a very, very good statement from the Facebook's chief diversity officer." }, { "start": 4359.54, "end": 4364.82, "text": " They say but as the Center for Investigative Reporting study of tech company diversity" }, { "start": 4364.82, "end": 4371.66, "text": " data found 91 large tech companies headquartered in Silicon Valley managed to hire higher percent" }, { "start": 4371.66, "end": 4376.42, "text": " of black, Latino and multiracial employees than Facebook that year." }, { "start": 4376.42, "end": 4386.9, "text": " Well, just if other just just because other companies employ racist and sexist hiring" }, { "start": 4386.9, "end": 4392.98, "text": " to improve their diversity numbers doesn't mean that Facebook has to do this." }, { "start": 4392.98, "end": 4393.98, "text": " Right?" }, { "start": 4393.98, "end": 4401.54, "text": " It it like just because other companies do this doesn't mean that it's a it's a it's" }, { "start": 4401.54, "end": 4405.459999999999, "text": " a good thing to do or that's how you should go about it." 
}, { "start": 4405.459999999999, "end": 4413.66, "text": " Facebook simply says like, if we want to hire without being racist or sexist, if we want" }, { "start": 4413.66, "end": 4420.98, "text": " to just hire the best people, then more of the best people have to be in the pipeline," }, { "start": 4420.98, "end": 4427.7, "text": " like more people have to gain access to educational opportunities so we can then hire them." }, { "start": 4427.7, "end": 4434.86, "text": " Whereas these other companies probably make a big effort to say, well, even if you are" }, { "start": 4434.86, "end": 4439.74, "text": " not as educated, even if you're not as qualified as this other person will hire you because" }, { "start": 4439.74, "end": 4441.98, "text": " of your skin color." }, { "start": 4441.98, "end": 4450.74, "text": " I don't think that's that's an argument in that in the favor of what the report is claiming." }, { "start": 4450.74, "end": 4455.58, "text": " Like I don't think that that is evidence that the pipeline argument is invalid." }, { "start": 4455.58, "end": 4462.66, "text": " All right, so they go into core themes in pipeline research, and they do some they do" }, { "start": 4462.66, "end": 4470.58, "text": " some overview of the kind of pipeline research that often so sometimes the pipeline research" }, { "start": 4470.58, "end": 4476.36, "text": " examines why, why, for example, why women don't choose to go into computer science as" }, { "start": 4476.36, "end": 4481.82, "text": " much and sometimes they focus on what is their perception of the field, what was it, what" }, { "start": 4481.82, "end": 4487.86, "text": " is their perceptions of the stereotypes of the field, what is their perceptions of the" }, { "start": 4487.86, "end": 4494.54, "text": " kind of culture in the field, is it suited to them, what is their perception of how qualified" }, { "start": 4494.54, "end": 4498.0199999999995, "text": " they are for the field, and is that true, is that false, and so on." }, { "start": 4498.0199999999995, "end": 4500.78, "text": " So this research examines a whole variety of things." }, { "start": 4500.78, "end": 4503.7, "text": " And it's very interesting, actually, to read through this research." }, { "start": 4503.7, "end": 4507.74, "text": " I want to point out this here." }, { "start": 4507.74, "end": 4512.62, "text": " Other studies suggest that gender is correlated with a person's motivations for pursuing a" }, { "start": 4512.62, "end": 4514.34, "text": " career in the field." }, { "start": 4514.34, "end": 4520.62, "text": " Women and particularly women from low socioeconomic status or minority backgrounds are more likely" }, { "start": 4520.62, "end": 4526.5, "text": " to see computing as a versatile profession that provides an opportunity for secure employment," }, { "start": 4526.5, "end": 4529.74, "text": " higher pay, and better social standing." }, { "start": 4529.74, "end": 4535.3, "text": " Moreover, their interests go beyond technical aspects of computing, focusing instead on" }, { "start": 4535.3, "end": 4537.98, "text": " the purpose and application of software." }, { "start": 4537.98, "end": 4543.62, "text": " However, such interests are often de-emphasized in computer science curricula, a price technical" }, { "start": 4543.62, "end": 4550.98, "text": " skill and its applicability to industrial settings above all else." 
}, { "start": 4550.98, "end": 4556.76, "text": " So I find this really interesting because it's basically saying that women have different" }, { "start": 4556.76, "end": 4560.46, "text": " interests than men on average." }, { "start": 4560.46, "end": 4564.92, "text": " That's basically saying that, which is almost heresy." }, { "start": 4564.92, "end": 4570.9800000000005, "text": " To say this in this context, people will come after you if you suggest something like this," }, { "start": 4570.9800000000005, "end": 4573.3, "text": " and yet they're just stating it here." }, { "start": 4573.3, "end": 4575.2, "text": " Remember this for later." }, { "start": 4575.2, "end": 4581.02, "text": " This is really funny that they're like, yeah, the interests could be different for women" }, { "start": 4581.02, "end": 4582.02, "text": " than for men." }, { "start": 4582.02, "end": 4589.46, "text": " And we might have to adjust our curriculum to be more suited to these different interests." }, { "start": 4589.46, "end": 4591.540000000001, "text": " I mean, yeah." }, { "start": 4591.540000000001, "end": 4593.540000000001, "text": " I'm sure that's..." }, { "start": 4593.540000000001, "end": 4600.42, "text": " Yeah, as I said, you're like, usually this is forbidden to say." }, { "start": 4600.42, "end": 4602.900000000001, "text": " All right." }, { "start": 4602.900000000001, "end": 4605.620000000001, "text": " So they go on." }, { "start": 4605.62, "end": 4618.46, "text": " They say limitations of pipeline research, right?" }, { "start": 4618.46, "end": 4627.099999999999, "text": " These are fairly like common limitations, let's say, of studies in general, social science" }, { "start": 4627.099999999999, "end": 4633.0199999999995, "text": " studies, which I won't go into much." }, { "start": 4633.02, "end": 4643.26, "text": " Again, they state we have to examine..." }, { "start": 4643.26, "end": 4646.38, "text": " We don't only have to examine this, but the problem..." }, { "start": 4646.38, "end": 4653.38, "text": " They basically say the problem is actually the culture and the problem is actually the" }, { "start": 4653.38, "end": 4659.620000000001, "text": " perpetrators, where do I say?" }, { "start": 4659.62, "end": 4664.78, "text": " I don't remember where this is stated, but they again say we have to examine who benefits" }, { "start": 4664.78, "end": 4671.7, "text": " from its present construction, who is underserved within the current tech ecology, who benefits" }, { "start": 4671.7, "end": 4676.62, "text": " from its present construction, how these dynamics might be untangled, and so on." }, { "start": 4676.62, "end": 4686.22, "text": " So again, stating these kind of power relationships for the different groups, which I don't agree" }, { "start": 4686.22, "end": 4689.22, "text": " is in large part what's happening." }, { "start": 4689.22, "end": 4696.22, "text": " They say it's worth considering the scope of these studies and by and large, the recommendations" }, { "start": 4696.22, "end": 4701.900000000001, "text": " they issue are limited, targeted at the administrators of university computer science programs seeking" }, { "start": 4701.900000000001, "end": 4704.02, "text": " to broaden the diversity of their student body." }, { "start": 4704.02, "end": 4708.96, "text": " Yes, that's exactly where we saw the problem appears to be, right?" 
}, { "start": 4708.96, "end": 4713.58, "text": " So the reason they have a problem with these studies is that they actually focus on the" }, { "start": 4713.58, "end": 4721.62, "text": " point where this discrepancy appears to happen, because they want to claim that no, no, no," }, { "start": 4721.62, "end": 4732.18, "text": " you should focus on a different point, namely hiring in these companies, hiring and promotion." }, { "start": 4732.18, "end": 4737.74, "text": " They say though important, so at least they acknowledge that that's an important problem." }, { "start": 4737.74, "end": 4743.9, "text": " This is a narrow frame through which potential solutions to barriers to inclusion." }, { "start": 4743.9, "end": 4748.94, "text": " It does not address the companies that hire computer science students, the peers responsible" }, { "start": 4748.94, "end": 4753.82, "text": " for promulgating stereotype views or engaging in hostile behavior or the broader social" }, { "start": 4753.82, "end": 4758.58, "text": " conditions that may influence students' success in computer science programs." }, { "start": 4758.58, "end": 4762.179999999999, "text": " Actually the research and even some of the examples they've included of this research" }, { "start": 4762.179999999999, "end": 4764.0599999999995, "text": " addresses all of this." }, { "start": 4764.06, "end": 4773.580000000001, "text": " But the research often addresses the kind of stereotypes and how the peers act and how" }, { "start": 4773.580000000001, "end": 4781.740000000001, "text": " the companies act and also how the companies hire and how people have something to look" }, { "start": 4781.740000000001, "end": 4787.02, "text": " forward to or nothing to look forward to and how that influences their decisions." }, { "start": 4787.02, "end": 4792.1, "text": " Yeah, again, they say the studies are frequently cited by those within corporate environments" }, { "start": 4792.1, "end": 4796.5, "text": " to justify their own lack of diversity as they situate the locus of change outside of" }, { "start": 4796.5, "end": 4799.26, "text": " the corporation itself." }, { "start": 4799.26, "end": 4803.14, "text": " As such pipeline studies are disproportionately emphasized as a part of the broader research" }, { "start": 4803.14, "end": 4805.22, "text": " agenda on diversity and technology." }, { "start": 4805.22, "end": 4810.9800000000005, "text": " Again, they state companies use this to get out and of course, like companies, of course" }, { "start": 4810.9800000000005, "end": 4812.58, "text": " they're going to use this to get out." }, { "start": 4812.58, "end": 4814.58, "text": " I mean, I agree at least with that." }, { "start": 4814.58, "end": 4821.26, "text": " I agree that companies are going to try to use this to get out of responsibility." }, { "start": 4821.26, "end": 4822.26, "text": " Certainly." }, { "start": 4822.26, "end": 4823.26, "text": " All right." }, { "start": 4823.26, "end": 4831.62, "text": " So the last section here is the pipeline dreams after years of research." }, { "start": 4831.62, "end": 4833.820000000001, "text": " Again this is on this pipeline studies." }, { "start": 4833.820000000001, "end": 4843.74, "text": " Basically they say the pipeline research hasn't shown, like hasn't borne fruit." }, { "start": 4843.74, "end": 4850.780000000001, "text": " It hasn't led to meaningful change in the field even though we've researched this." 
}, { "start": 4850.78, "end": 4855.139999999999, "text": " The reason they say the number of reasons they tend to place the owners to solve issues" }, { "start": 4855.139999999999, "end": 4859.86, "text": " of discrimination, Silicon Valley on those who are discriminated against rather than" }, { "start": 4859.86, "end": 4860.86, "text": " the perpetrators." }, { "start": 4860.86, "end": 4863.86, "text": " I find this word choice really interesting." }, { "start": 4863.86, "end": 4865.5, "text": " Perpetrators, right?" }, { "start": 4865.5, "end": 4871.94, "text": " Like again, the group of white men is trying to put down everyone else." }, { "start": 4871.94, "end": 4874.9, "text": " That's the perspective that the article takes." }, { "start": 4874.9, "end": 4879.139999999999, "text": " And it's not even true." }, { "start": 4879.14, "end": 4886.22, "text": " This research, a lot of times it actually says the reason why, for example, women don't" }, { "start": 4886.22, "end": 4892.54, "text": " choose to go into computer science is the male dominated culture within these corporations," }, { "start": 4892.54, "end": 4901.860000000001, "text": " is the perception of this not being a woman friendly environment, is the people here of" }, { "start": 4901.860000000001, "end": 4903.54, "text": " sexual harassment and so on." }, { "start": 4903.54, "end": 4905.46, "text": " So it's not even true." }, { "start": 4905.46, "end": 4910.34, "text": " But moreover, I just wanted to point out the choice of word here, perpetrators." }, { "start": 4910.34, "end": 4917.9800000000005, "text": " I don't know how you get to this word." }, { "start": 4917.9800000000005, "end": 4924.86, "text": " It really shows kind of a worldview of the authors in my opinion." }, { "start": 4924.86, "end": 4927.22, "text": " All right." }, { "start": 4927.22, "end": 4933.22, "text": " So they go on and say, okay, this pipeline studies haven't been beneficial and companies" }, { "start": 4933.22, "end": 4937.26, "text": " haven't done much or hasn't been successful." }, { "start": 4937.26, "end": 4943.14, "text": " They're going to worker led initiatives, which I'm going to skip here." }, { "start": 4943.14, "end": 4950.26, "text": " It's just a kind of a reporting of what happened at companies where the workers themselves" }, { "start": 4950.26, "end": 4951.46, "text": " organized." }, { "start": 4951.46, "end": 4955.9400000000005, "text": " And then the last section here is the pushback against diversity." }, { "start": 4955.94, "end": 4963.379999999999, "text": " So in this section, they're kind of documenting and arguing against people who have basically" }, { "start": 4963.379999999999, "end": 4967.78, "text": " stated counter arguments to their recommendations mainly." }, { "start": 4967.78, "end": 4973.62, "text": " So their recommendations being, let's change the hiring, let's change the promotion, and" }, { "start": 4973.62, "end": 4979.78, "text": " so on to be based on race and gender." }, { "start": 4979.78, "end": 4984.54, "text": " And the pushback here characterized in different ways." }, { "start": 4984.54, "end": 4986.98, "text": " So we'll go through this." }, { "start": 4986.98, "end": 4987.98, "text": " This is the last section." }, { "start": 4987.98, "end": 4990.6, "text": " I know it's a long video already." }, { "start": 4990.6, "end": 4995.1, "text": " If you're still here, like the one person who's still here, hi, I hope you're doing" }, { "start": 4995.1, "end": 4996.1, "text": " well." 
}, { "start": 4996.1, "end": 4997.1, "text": " Good." }, { "start": 4997.1, "end": 4998.1, "text": " Keep hydrated." }, { "start": 4998.1, "end": 4999.1, "text": " Yeah." }, { "start": 4999.1, "end": 5002.22, "text": " So they say, it's a critical time." }, { "start": 5002.22, "end": 5010.62, "text": " We now see diversity itself being weaponized." }, { "start": 5010.62, "end": 5016.9, "text": " So they say this growing awareness accompanied by demands for inclusion and equity has led" }, { "start": 5016.9, "end": 5023.22, "text": " to some change, but there has also been resistance, especially among those implicitly privileged" }, { "start": 5023.22, "end": 5024.54, "text": " by the status quo." }, { "start": 5024.54, "end": 5028.7, "text": " So again, jumping straight to attack on the person." }, { "start": 5028.7, "end": 5033.74, "text": " Like I don't care if who makes an argument against me." }, { "start": 5033.74, "end": 5039.34, "text": " I want to go on the argument and I'm going to go on the content of the argument." }, { "start": 5039.34, "end": 5047.34, "text": " But these people straight, first thing they stayed is that's just by the people who are" }, { "start": 5047.34, "end": 5048.34, "text": " benefiting." }, { "start": 5048.34, "end": 5051.900000000001, "text": " That's just by the white men, basically." }, { "start": 5051.900000000001, "end": 5053.900000000001, "text": " Straight to the identity of the person." }, { "start": 5053.900000000001, "end": 5058.38, "text": " That's dishonesty right there." }, { "start": 5058.38, "end": 5065.66, "text": " So those questioning and even rejecting the idea that racism, misogyny, and harassment" }, { "start": 5065.66, "end": 5070.46, "text": " are problems within the AI field and the tech industry have appropriated the language of" }, { "start": 5070.46, "end": 5077.34, "text": " diversity to argue that efforts to improve inclusion are in fact exclusionary and addressing" }, { "start": 5077.34, "end": 5082.62, "text": " the deeper structural challenges posed by racism, sex and inequity is misguided." }, { "start": 5082.62, "end": 5089.58, "text": " And yes, yes, definitely efforts to improve inclusion can be exclusionary." }, { "start": 5089.58, "end": 5101.1, "text": " Like just because, so this is a thing, just because you're fixing a problem doesn't mean" }, { "start": 5101.1, "end": 5107.98, "text": " the method you're using to fixing it is justified and is itself good." }, { "start": 5107.98, "end": 5115.3, "text": " Methods to improve inclusion can be exclusionary and some that have been proposed are exclusionary." }, { "start": 5115.3, "end": 5117.58, "text": " Definitely it depends on the method." }, { "start": 5117.58, "end": 5121.48, "text": " It doesn't mean these people are against these efforts." }, { "start": 5121.48, "end": 5128.66, "text": " It means that the measures, for example, implementing racist hiring policy, I can definitely see" }, { "start": 5128.66, "end": 5134.0199999999995, "text": " that this is going to lead to more equal representation within the workforce." }, { "start": 5134.0199999999995, "end": 5141.86, "text": " But the tool itself is really bad and exclusionary and discriminating." }, { "start": 5141.86, "end": 5149.5, "text": " So yeah, I would say that it's accurate that it can be exclusionary." 
}, { "start": 5149.5, "end": 5154.98, "text": " I say, for example, some AI researchers greeted the announcement of Black in AI Workshop at" }, { "start": 5154.98, "end": 5159.7, "text": " NRIPS leading machine learning conference by questioning whether the event was necessary," }, { "start": 5159.7, "end": 5162.62, "text": " arguing that it would be discriminatory." }, { "start": 5162.62, "end": 5163.98, "text": " But can't they?" }, { "start": 5163.98, "end": 5166.98, "text": " Can't they question whether the event was necessary?" }, { "start": 5166.98, "end": 5170.42, "text": " Like that would, I would, here I would need a discussion." }, { "start": 5170.42, "end": 5172.06, "text": " What is it for?" }, { "start": 5172.06, "end": 5173.06, "text": " Right?" }, { "start": 5173.06, "end": 5175.64, "text": " Why is this event happening?" }, { "start": 5175.64, "end": 5177.74, "text": " And what is it doing?" }, { "start": 5177.74, "end": 5180.5, "text": " And is it discriminatory?" }, { "start": 5180.5, "end": 5181.62, "text": " It could be." }, { "start": 5181.62, "end": 5183.22, "text": " Any event can be discriminatory." }, { "start": 5183.22, "end": 5190.3, "text": " Does it discriminate based on race or gender or anything?" }, { "start": 5190.3, "end": 5194.74, "text": " Is it, you know, does it do so unjustly and all?" }, { "start": 5194.74, "end": 5198.42, "text": " So I don't, I don't just don't see why." }, { "start": 5198.42, "end": 5199.42, "text": " Could still be wrong." }, { "start": 5199.42, "end": 5203.74, "text": " Like you could question and then you could be wrong." }, { "start": 5203.74, "end": 5206.7, "text": " But you should be taken on your argument." }, { "start": 5206.7, "end": 5216.06, "text": " But the argument here is just already questioning this is already on the wrong side of the argument." }, { "start": 5216.06, "end": 5217.66, "text": " And I don't agree with this." }, { "start": 5217.66, "end": 5221.46, "text": " I don't agree with these people that question this workshop." }, { "start": 5221.46, "end": 5225.74, "text": " Don't have a particular opinion on these things." }, { "start": 5225.74, "end": 5231.82, "text": " But I have the opinion that you have to take arguments at their argument value and not" }, { "start": 5231.82, "end": 5238.54, "text": " just at who makes them or whether or not they're against a particular viewpoint." }, { "start": 5238.54, "end": 5240.66, "text": " All right." }, { "start": 5240.66, "end": 5247.139999999999, "text": " They say such pushback often centers calls for cognitive diversity or viewpoint diversity." }, { "start": 5247.139999999999, "end": 5251.7, "text": " The idea that individual differences in the ways people think and understand the world" }, { "start": 5251.7, "end": 5257.0199999999995, "text": " are distinctions that should be counted alongside or instead of other identity categories such" }, { "start": 5257.0199999999995, "end": 5258.5, "text": " as race and gender." }, { "start": 5258.5, "end": 5266.34, "text": " Well, yes, that's I mean, isn't that isn't that a very reasonable thing to say?" 
}, { "start": 5266.34, "end": 5272.54, "text": " Isn't it very reasonable to say that differences in the ways people think and understand the" }, { "start": 5272.54, "end": 5278.139999999999, "text": " world, their distinctions that should be counted alongside other identity categories such as" }, { "start": 5278.14, "end": 5285.780000000001, "text": " race and gender, they say a dozen white men so long as they were not raised in the same" }, { "start": 5285.780000000001, "end": 5291.02, "text": " household and don't think identical thoughts could be considered diverse." }, { "start": 5291.02, "end": 5295.700000000001, "text": " That's I don't know if this is a sarcastic statement or not, but clearly it's it's kind" }, { "start": 5295.700000000001, "end": 5302.18, "text": " of the counterpoint they're trying to make here that but yes, I would I would totally" }, { "start": 5302.18, "end": 5309.740000000001, "text": " agree with this statement in a way a white man growing up in San Francisco, a white man" }, { "start": 5309.740000000001, "end": 5317.820000000001, "text": " growing up in rural Idaho, a white man growing up in Florida, a white man growing up in Western" }, { "start": 5317.820000000001, "end": 5326.02, "text": " Europe, one in Russia, and one growing up on the road with its circus, his circus parents" }, { "start": 5326.02, "end": 5334.26, "text": " in Mongolia would definitely be that plenty diverse, right?" }, { "start": 5334.26, "end": 5342.02, "text": " I mean, they criticize this here, but this is is actually how can you how can you not" }, { "start": 5342.02, "end": 5343.740000000001, "text": " see this that?" }, { "start": 5343.740000000001, "end": 5348.540000000001, "text": " Yes, these are valid differences, and people are going to think differently, independent" }, { "start": 5348.540000000001, "end": 5351.5, "text": " of how they look, people are going to have different thoughts." }, { "start": 5351.5, "end": 5356.42, "text": " And it's important to recognize other people think differently." }, { "start": 5356.42, "end": 5362.7, "text": " And therefore, you should, you know, include them if it's relevant." }, { "start": 5362.7, "end": 5366.82, "text": " And the counter argument to this is, of course, what the authors here are saying basically" }, { "start": 5366.82, "end": 5379.62, "text": " is that 12, a dozen people, as long as they are don't look the same, could be considered" }, { "start": 5379.62, "end": 5383.98, "text": " diverse, even if they all were raised in the same place, and basically all live in San" }, { "start": 5383.98, "end": 5387.98, "text": " Francisco, and think the exact same thing." }, { "start": 5387.98, "end": 5395.58, "text": " Yeah, that's, I mean, it sounds to me, it sounds as absurd as the other way around." }, { "start": 5395.58, "end": 5396.66, "text": " To me." }, { "start": 5396.66, "end": 5401.46, "text": " So here's, here's my, here's my thoughts on this." }, { "start": 5401.46, "end": 5407.58, "text": " I am not going to pretend that I know what life is like as a woman." }, { "start": 5407.58, "end": 5408.58, "text": " Right?" }, { "start": 5408.58, "end": 5418.0599999999995, "text": " I'm absolutely sure that for areas of life, it is it is definitely valuable to listen" }, { "start": 5418.0599999999995, "end": 5427.5, "text": " to the experience of a woman or multiple women, an aggregate of women, because the life is" }, { "start": 5427.5, "end": 5429.46, "text": " just different as a woman." 
}, { "start": 5429.46, "end": 5431.18, "text": " Life is also different." }, { "start": 5431.18, "end": 5437.5199999999995, "text": " As a black person, I absolutely concede that there are things that I might not be able" }, { "start": 5437.52, "end": 5445.5, "text": " to draw from my life experience, because I am not of that skin color that different problems" }, { "start": 5445.5, "end": 5446.5, "text": " that people face." }, { "start": 5446.5, "end": 5450.5, "text": " And that's why it's important to have an opinion of that at the table." }, { "start": 5450.5, "end": 5461.22, "text": " But I'm also absolutely certain that I have no relation to someone who grew up as a child" }, { "start": 5461.22, "end": 5466.9400000000005, "text": " pop star from the age of 12, and then had that life." }, { "start": 5466.94, "end": 5472.339999999999, "text": " I have no relation to someone growing up under a communist regime." }, { "start": 5472.339999999999, "end": 5480.179999999999, "text": " I have no relation to someone growing up in in kind of a Buddhist religious tradition." }, { "start": 5480.179999999999, "end": 5481.179999999999, "text": " I just don't." }, { "start": 5481.179999999999, "end": 5482.74, "text": " And I don't care how they look." }, { "start": 5482.74, "end": 5485.219999999999, "text": " They have different experiences." }, { "start": 5485.219999999999, "end": 5488.94, "text": " They have different bodies of knowledge to draw on." }, { "start": 5488.94, "end": 5496.219999999999, "text": " And I don't think why we should make the difference along the exact lines of race and gender." }, { "start": 5496.22, "end": 5500.900000000001, "text": " Yeah, but that's what they that's of course what they argue here." }, { "start": 5500.900000000001, "end": 5508.18, "text": " Those arguments work by centering identity while flattening or ignoring power relationships." }, { "start": 5508.18, "end": 5515.34, "text": " Here the VP, the Facebook VP of engineering said that the ultimate goal is cognitive diversity" }, { "start": 5515.34, "end": 5519.62, "text": " and cognitive diversity is correlated with identity diversity." }, { "start": 5519.62, "end": 5525.34, "text": " That means it's not just about getting women in tech, it's about broad voices, broad representation." }, { "start": 5525.34, "end": 5526.34, "text": " Right?" }, { "start": 5526.34, "end": 5537.38, "text": " So the the this is exactly what I would say the reason why we want different the reason" }, { "start": 5537.38, "end": 5542.62, "text": " why we want a woman or a black person at the table is because they have a different knowledge" }, { "start": 5542.62, "end": 5546.38, "text": " is because they have different thoughts because of their different life experience." }, { "start": 5546.38, "end": 5549.34, "text": " They have different thoughts that they can bring in." }, { "start": 5549.34, "end": 5557.860000000001, "text": " So actually, by including these what they call bodies, it is about cognitive diversity," }, { "start": 5557.860000000001, "end": 5559.5, "text": " even in itself." }, { "start": 5559.5, "end": 5562.62, "text": " But the authors here really see this from a different angle." }, { "start": 5562.62, "end": 5568.4400000000005, "text": " They really see this in terms of power relationships between race and gender groups." 
}, { "start": 5568.4400000000005, "end": 5573.5, "text": " And I yeah, the arguments of the authors don't make sense if you don't view it through that" }, { "start": 5573.5, "end": 5574.5, "text": " lens." }, { "start": 5574.5, "end": 5581.54, "text": " That lens to me is just such a it's such a I don't know, it's just sad look on the world." }, { "start": 5581.54, "end": 5585.78, "text": " And also, I think it's a very, very inaccurate look on the world." }, { "start": 5585.78, "end": 5590.22, "text": " And it's, I think, a very dangerous look on the world." }, { "start": 5590.22, "end": 5597.94, "text": " Um, yeah, again, they say instead of looking at historical patterns of marginalization," }, { "start": 5597.94, "end": 5601.34, "text": " calls for cognitive diversity argued that all differences are equal." }, { "start": 5601.34, "end": 5602.42, "text": " No, we're not." }, { "start": 5602.42, "end": 5608.54, "text": " Like, no calls for cognitive diversity or don't argue that all differences are equal." }, { "start": 5608.54, "end": 5614.7, "text": " Well aware that some people have it harder, well aware that some differences are bigger," }, { "start": 5614.7, "end": 5616.9, "text": " worse or better." }, { "start": 5616.9, "end": 5625.26, "text": " That's absolutely well aware all they're saying is that race and gender shouldn't be the like," }, { "start": 5625.26, "end": 5633.74, "text": " only things to consider and shouldn't be in itself be considered diverse." }, { "start": 5633.74, "end": 5639.22, "text": " Just because someone is of a certain skin color, it doesn't mean anything, right?" }, { "start": 5639.22, "end": 5643.3, "text": " It doesn't actually tell you anything about that person." }, { "start": 5643.3, "end": 5650.56, "text": " So why not consider people as individuals and look at what was their life like until" }, { "start": 5650.56, "end": 5655.22, "text": " this point and what could they contribute to the discussion we're having rather than" }, { "start": 5655.22, "end": 5657.860000000001, "text": " looking at the color of their skin." }, { "start": 5657.860000000001, "end": 5663.18, "text": " I mean, if the color of their skin played a role in their life, then obviously that" }, { "start": 5663.18, "end": 5667.22, "text": " would manifest in my suggestion as well." }, { "start": 5667.22, "end": 5673.34, "text": " But to just look at people through this kind of group lens is is so foreign to me." }, { "start": 5673.34, "end": 5681.26, "text": " And yeah, I feel it's it's quite dangerous." }, { "start": 5681.26, "end": 5690.9800000000005, "text": " Yeah, so again, and this this could argue that all differences are equal." }, { "start": 5690.9800000000005, "end": 5697.06, "text": " I mean, the point where you have to start misrepresenting what the counter argument" }, { "start": 5697.06, "end": 5701.62, "text": " is saying, that's really how you know you're dealing with a with not a well intentioned" }, { "start": 5701.62, "end": 5704.46, "text": " person on the other side of the of the discussion." }, { "start": 5704.46, "end": 5706.62, "text": " This is really politics now." }, { "start": 5706.62, "end": 5710.04, "text": " This isn't a well intended argumentation." }, { "start": 5710.04, "end": 5714.7, "text": " It's really someone to trying to achieve some goal, because they have to misrepresent the" }, { "start": 5714.7, "end": 5715.9, "text": " other side." }, { "start": 5715.9, "end": 5719.0599999999995, "text": " And this only gets worse from here." 
}, { "start": 5719.0599999999995, "end": 5727.0199999999995, "text": " They say recently was exemplified in the controversy over Google's appointment of Heritage Foundation" }, { "start": 5727.02, "end": 5733.700000000001, "text": " CEO K calls James to its Advanced Technology External Advisory Council." }, { "start": 5733.700000000001, "end": 5738.540000000001, "text": " Google's reasoning for the appointment of James was ostensibly to ensure diversity of" }, { "start": 5738.540000000001, "end": 5743.3, "text": " thought by including a conservative viewpoint on the council." }, { "start": 5743.3, "end": 5751.18, "text": " Alright, so Google has a technology advisory board, or council, sorry, of external people," }, { "start": 5751.18, "end": 5753.780000000001, "text": " and they've included a conservative." }, { "start": 5753.78, "end": 5760.38, "text": " And she is by all by all metrics, let's say, a standard conservative." }, { "start": 5760.38, "end": 5765.78, "text": " So this is not a far right neo Nazi type." }, { "start": 5765.78, "end": 5766.78, "text": " I don't know." }, { "start": 5766.78, "end": 5774.62, "text": " But this is this is someone who has similar opinions than half the US country and in generally" }, { "start": 5774.62, "end": 5781.38, "text": " in at least in the Western world, generally half of the of the country's population tends" }, { "start": 5781.38, "end": 5784.46, "text": " to be conservative." }, { "start": 5784.46, "end": 5786.3, "text": " More or less, I mean, there's differences." }, { "start": 5786.3, "end": 5792.66, "text": " But yeah, so this this is a this is an opinion that a large portion of the population shares." }, { "start": 5792.66, "end": 5799.46, "text": " So it would be I don't know, it would be suitable to include at least someone of that opinion" }, { "start": 5799.46, "end": 5804.46, "text": " in an external advisory council to to have that on board." }, { "start": 5804.46, "end": 5809.34, "text": " You don't have to listen to her like she's not like she's made king." }, { "start": 5809.34, "end": 5818.22, "text": " It's simply that she will have the opportunity to input her voice representative of kind" }, { "start": 5818.22, "end": 5821.9400000000005, "text": " of that large, very large percentage of people." }, { "start": 5821.9400000000005, "end": 5828.9400000000005, "text": " They go on to say, James is also a black woman, thus adding racial and gender diversity to" }, { "start": 5828.9400000000005, "end": 5830.22, "text": " the panel." }, { "start": 5830.22, "end": 5835.46, "text": " So even further, right, this is it's a conservative black woman." }, { "start": 5835.46, "end": 5841.86, "text": " All right, but the pushback following James's inclusion focused on her policy position," }, { "start": 5841.86, "end": 5849.42, "text": " citing specifically her vocal anti LGBTQ and anti immigrant views and highlighted why cognitive" }, { "start": 5849.42, "end": 5853.1, "text": " diversity is a particularly limited lens." }, { "start": 5853.1, "end": 5861.46, "text": " And the pushback here was very much spearheaded by one of the authors of this article." }, { "start": 5861.46, "end": 5864.46, "text": " So I am this isn't just reporting." }, { "start": 5864.46, "end": 5873.34, "text": " I will also I'll also criticize the the this pushback here since it's, you know, it's kind" }, { "start": 5873.34, "end": 5875.46, "text": " of argued for in this article." 
}, { "start": 5875.46, "end": 5881.86, "text": " It's not just reported and also because the authors are the same." }, { "start": 5881.86, "end": 5887.14, "text": " So here they say they have vocal anti LGBTQ and anti immigrant views." }, { "start": 5887.14, "end": 5891.82, "text": " And I haven't actually gone specifically and looked at what this person particularly has" }, { "start": 5891.82, "end": 5899.179999999999, "text": " said, but given that she's a standard conservative and has been in public office, I believe under" }, { "start": 5899.179999999999, "end": 5909.139999999999, "text": " George W. Bush, she can't like I have trouble believing that she has like extremely hateful" }, { "start": 5909.139999999999, "end": 5915.299999999999, "text": " opinions like these people shouldn't exist or like something like that nature." }, { "start": 5915.3, "end": 5924.22, "text": " Like often people like conservative people have have issues with forcing people to adopt" }, { "start": 5924.22, "end": 5931.38, "text": " certain pronouns for people or issues with which bathrooms do people go in and, you know," }, { "start": 5931.38, "end": 5937.34, "text": " generally are tougher on immigration, especially illegal immigration and so on." }, { "start": 5937.34, "end": 5943.22, "text": " I mean, these are these are views that people hold." }, { "start": 5943.22, "end": 5946.900000000001, "text": " It's a large part of people and these are discussions to be had." }, { "start": 5946.900000000001, "end": 5952.06, "text": " So including this this person would be very sensible move." }, { "start": 5952.06, "end": 5957.26, "text": " But they say in a letter opposing the appointment, a group of Google workers calling themselves" }, { "start": 5957.26, "end": 5964.780000000001, "text": " Googlers against transphobia and hate, transphobia and hate responded to the idea that diversity" }, { "start": 5964.780000000001, "end": 5967.62, "text": " of thought justified James's addition to the council." }, { "start": 5967.62, "end": 5973.66, "text": " This is a weaponization of the language of diversity by appointing James to the ATAC." }, { "start": 5973.66, "end": 5978.86, "text": " Google elevates and endorses her view, implying that hers is a valid perspective worthy of" }, { "start": 5978.86, "end": 5980.86, "text": " inclusions in its decision making." }, { "start": 5980.86, "end": 5981.86, "text": " This is unacceptable." }, { "start": 5981.86, "end": 5989.099999999999, "text": " Here it says again, the author was one of the organizers of that." }, { "start": 5989.099999999999, "end": 5990.86, "text": " And that's what they're saying here." }, { "start": 5990.86, "end": 5996.94, "text": " The views, if you don't have our views, these are unacceptable views, right?" }, { "start": 5996.94, "end": 5999.9, "text": " It's valid perspective worthy of inclusion." }, { "start": 5999.9, "end": 6005.379999999999, "text": " It's what they're saying basically is you don't even talk to these to this person, like" }, { "start": 6005.379999999999, "end": 6009.379999999999, "text": " talking to this person, considering their opinion." }, { "start": 6009.379999999999, "end": 6015.339999999999, "text": " You can still evaluate the opinion, but even considering their opinion is already wrong." }, { "start": 6015.339999999999, "end": 6018.58, "text": " And that given that the person is a black woman." 
}, { "start": 6018.58, "end": 6026.58, "text": " So basically, they are called the author's idea of diversity is people that look different" }, { "start": 6026.58, "end": 6033.42, "text": " that are from race and gender groups that have don't have much power or perceived what" }, { "start": 6033.42, "end": 6035.44, "text": " they call power right now." }, { "start": 6035.44, "end": 6039.94, "text": " As long as they all think exactly as we think, right, then that's fine." }, { "start": 6039.94, "end": 6044.78, "text": " As long as they they share our thoughts, as long as they don't have dissenting opinions," }, { "start": 6044.78, "end": 6049.18, "text": " we want the we want the different looking people." }, { "start": 6049.18, "end": 6053.58, "text": " But don't dare talk to anyone of a different opinion." }, { "start": 6053.58, "end": 6060.3, "text": " Yeah, this, I don't I don't see how I mean, these these authors, in my opinion, they really" }, { "start": 6060.3, "end": 6067.74, "text": " live in in a bubble, they really live in the in a tiny Silicon Valley or Silicon Valley" }, { "start": 6067.74, "end": 6074.34, "text": " influenced spaces, because this is this is half the people they basically saying half" }, { "start": 6074.34, "end": 6083.38, "text": " the people in their greater community in their country aren't even worthy listening to their" }, { "start": 6083.38, "end": 6090.14, "text": " opinions aren't even worthy of inclusion in of consideration." }, { "start": 6090.14, "end": 6102.02, "text": " So yeah, well, well done might as well discredit them at once." }, { "start": 6102.02, "end": 6106.86, "text": " I'm sure I'm sure I'm sure that's gonna fly well with these people." }, { "start": 6106.86, "end": 6109.14, "text": " All right." }, { "start": 6109.14, "end": 6114.700000000001, "text": " Yeah, might might start calling them deplorables and see what they do." }, { "start": 6114.700000000001, "end": 6122.14, "text": " Maybe they'll return the favor and elect a moron just to stick it in your face." }, { "start": 6122.14, "end": 6124.14, "text": " I mean, that's what happened." }, { "start": 6124.14, "end": 6134.780000000001, "text": " So the idea of cognitive diversity is mobilized by some support in support that the AI field" }, { "start": 6134.780000000001, "end": 6139.02, "text": " and the tech industry are already diverse." }, { "start": 6139.02, "end": 6143.1, "text": " Including as far as to support claims that not including identities like white and male" }, { "start": 6143.1, "end": 6145.1, "text": " constitutes discrimination." }, { "start": 6145.1, "end": 6146.9400000000005, "text": " Yes, it can." }, { "start": 6146.9400000000005, "end": 6157.3, "text": " Like if, if you include every single identity except white and male, that constitutes discrimination." }, { "start": 6157.3, "end": 6163.1, "text": " That's I mean, yes, even if they're in the majority is still constitutes discrimination," }, { "start": 6163.1, "end": 6168.9800000000005, "text": " like no one can help being born white and male, no one white and male chose to be born" }, { "start": 6168.98, "end": 6169.98, "text": " like that." }, { "start": 6169.98, "end": 6177.219999999999, "text": " Don't mostly don't choose the melanin content of your skin, you can modulate it a bit by" }, { "start": 6177.219999999999, "end": 6184.62, "text": " going to the sun, which computer science people statistically don't do very often." 
}, { "start": 6184.62, "end": 6187.0599999999995, "text": " So there's not much leeway there." }, { "start": 6187.0599999999995, "end": 6196.74, "text": " So yeah, to not include identities like that, if you include every other one, can constitute" }, { "start": 6196.74, "end": 6197.74, "text": " discrimination." }, { "start": 6197.74, "end": 6199.099999999999, "text": " True." }, { "start": 6199.099999999999, "end": 6205.34, "text": " A July 2017 memo written by James Damore, a software engineer at Google is illustrative" }, { "start": 6205.34, "end": 6210.7, "text": " of such pushback titled Google's ideological echo chamber." }, { "start": 6210.7, "end": 6215.0599999999995, "text": " And published in an internal mailing list, the memo critiqued the company's diversity" }, { "start": 6215.0599999999995, "end": 6220.62, "text": " policies arguing that biological differences between men and women rather than bias and" }, { "start": 6220.62, "end": 6225.26, "text": " discrimination help explain gender disparities at the company." }, { "start": 6225.26, "end": 6230.14, "text": " I feel the you can leave out the rather than here." }, { "start": 6230.14, "end": 6240.06, "text": " I think the memo simply stated that biological differences can help explain the gender disparities." }, { "start": 6240.06, "end": 6244.66, "text": " The most objective writing the memo was to make the case that policies designed to achieve" }, { "start": 6244.66, "end": 6249.14, "text": " equal representation are unfair, divisive and bad for business." }, { "start": 6249.14, "end": 6250.26, "text": " Well some are." }, { "start": 6250.26, "end": 6256.74, "text": " Yes, especially the recommendations that you've given at the beginning, number seven, is unfair," }, { "start": 6256.74, "end": 6264.46, "text": " divisive and I would also argue bad for business." }, { "start": 6264.46, "end": 6272.5, "text": " So supporters for Damore's point of view at times even drew on the rhetoric of the pipeline" }, { "start": 6272.5, "end": 6275.900000000001, "text": " to make the case that diversity initiatives are in fact discriminatory." }, { "start": 6275.9, "end": 6281.299999999999, "text": " They argue incorrectly that if there aren't qualified candidates in the pipeline, then" }, { "start": 6281.299999999999, "end": 6287.0199999999995, "text": " hiring those who are unqualified on the basis of identity discriminates against those who" }, { "start": 6287.0199999999995, "end": 6288.7, "text": " are qualified." }, { "start": 6288.7, "end": 6300.98, "text": " No, I would say hiring anyone on the basis of identity discriminates." }, { "start": 6300.98, "end": 6303.259999999999, "text": " I mean inherently." }, { "start": 6303.26, "end": 6310.18, "text": " So again I think that's the larger argument that these people are making, which is not" }, { "start": 6310.18, "end": 6316.22, "text": " incorrect, is very correct." }, { "start": 6316.22, "end": 6322.5, "text": " So in an update to the memo Damore himself asserted that he values diversity and inclusion," }, { "start": 6322.5, "end": 6326.7, "text": " but his primary concern was cognitive diversity." }, { "start": 6326.7, "end": 6331.54, "text": " He says diversity inclusion is not denying that sexism exists, doesn't endorse using" }, { "start": 6331.54, "end": 6332.900000000001, "text": " stereotypes." 
}, { "start": 6332.9, "end": 6339.74, "text": " And in specific I've read the memo and it directly says these are population level kind" }, { "start": 6339.74, "end": 6344.78, "text": " of statistics and there is more overlap than difference and you absolutely can't say anything" }, { "start": 6344.78, "end": 6348.66, "text": " about an individual by looking at these statistics." }, { "start": 6348.66, "end": 6351.62, "text": " That's almost a quote from this memo." }, { "start": 6351.62, "end": 6359.86, "text": " So he was very much concerned with considering people as individuals, but also if you like" }, { "start": 6359.86, "end": 6362.379999999999, "text": " he was basically making the same argument as earlier." }, { "start": 6362.38, "end": 6370.3, "text": " I told you to remember, hey look this one study that found that women's interests might" }, { "start": 6370.3, "end": 6373.3, "text": " be different and we might shape the curriculum." }, { "start": 6373.3, "end": 6375.22, "text": " That's basically what Damore said." }, { "start": 6375.22, "end": 6380.66, "text": " He said women's interests might be different and we'd have to maybe shape the way we do" }, { "start": 6380.66, "end": 6386.1, "text": " work, like change the way we do software engineering to attract more of them." }, { "start": 6386.1, "end": 6388.9800000000005, "text": " That was one of his points." }, { "start": 6388.98, "end": 6394.86, "text": " So he's exactly the same thing, but of course he's a misogynist because he suggested that" }, { "start": 6394.86, "end": 6400.259999999999, "text": " this could be due partly because of biological differences." }, { "start": 6400.259999999999, "end": 6407.0199999999995, "text": " And the way he was dragged through the mud is just crazy." }, { "start": 6407.0199999999995, "end": 6413.82, "text": " And they shoot here very much against this kind of biological, what they call biological" }, { "start": 6413.82, "end": 6414.82, "text": " determinism." }, { "start": 6414.82, "end": 6417.94, "text": " We'll see this very briefly." }, { "start": 6417.94, "end": 6423.139999999999, "text": " I'd say diversity becomes an empty signifier, stripped of the histories and experiences" }, { "start": 6423.139999999999, "end": 6429.379999999999, "text": " of systemic discrimination, repurposed around ideology rather than bodies." }, { "start": 6429.379999999999, "end": 6436.94, "text": " I'd say diversity has nothing inherently to do with bodies as such." }, { "start": 6436.94, "end": 6449.419999999999, "text": " I think that's only the case if you are already convinced of this." }, { "start": 6449.419999999999, "end": 6453.98, "text": " Within hours of the memo's publication, harassment targeting minority advocates who pushed back" }, { "start": 6453.98, "end": 6460.9, "text": " against the claims in the memo began, with a particular focus on queer and trans workers." }, { "start": 6460.9, "end": 6468.379999999999, "text": " That's bad, but also I think the pushback against people who voiced support was also" }, { "start": 6468.379999999999, "end": 6474.54, "text": " pretty bad because one of them was fired, as you already stated." }, { "start": 6474.54, "end": 6477.62, "text": " Google's vice president of diversity even locked down her Twitter account shortly after" }, { "start": 6477.62, "end": 6483.42, "text": " Demours firing, responding to the barrage of threats describing her as a police Nazi." }, { "start": 6483.42, "end": 6484.74, "text": " Well yeah, if you fire something." 
}, { "start": 6484.74, "end": 6489.759999999999, "text": " I mean undoubtedly Google fired this guy because they thought it was less of a PR disaster" }, { "start": 6489.76, "end": 6492.62, "text": " if they also fired him now." }, { "start": 6492.62, "end": 6501.860000000001, "text": " This probably wasn't an ideological decision, much more a PR decision." }, { "start": 6501.860000000001, "end": 6508.780000000001, "text": " If you fire someone after stating something like this, it very much looks like you're" }, { "start": 6508.780000000001, "end": 6514.3, "text": " firing them because you don't like their ideas and you don't like what they're saying," }, { "start": 6514.3, "end": 6522.860000000001, "text": " which people generally are not in favor of censoring freedom of speech." }, { "start": 6522.860000000001, "end": 6527.5, "text": " But yeah, that being said, harassment is bad, don't harass people." }, { "start": 6527.5, "end": 6540, "text": " Also that being said, criticism isn't always harassment and don't conflate the two." }, { "start": 6540, "end": 6544.7, "text": " Demours' memo also stated that the distribution of preference abilities of men and women differ" }, { "start": 6544.7, "end": 6550.54, "text": " in part due to biological causes and that these differences may explain why we don't" }, { "start": 6550.54, "end": 6556.58, "text": " see equal representation of women in tech and leadership." }, { "start": 6556.58, "end": 6561.42, "text": " This assertion hinges on a flawed assumption that identities like gender and race are essential" }, { "start": 6561.42, "end": 6568.5, "text": " and fixed biological attributes and that inequalities are at least in part the product of such irreducible" }, { "start": 6568.5, "end": 6569.5, "text": " differences." }, { "start": 6569.5, "end": 6576.26, "text": " Well, I mean, if they're not fixed biological attributes, certainly gender and race have" }, { "start": 6576.26, "end": 6582.54, "text": " a 0.99 correlation with biology." }, { "start": 6582.54, "end": 6590.46, "text": " Since your biology is first and it's determined when you're conceived, that demonstrates a" }, { "start": 6590.46, "end": 6594.14, "text": " causal direction." }, { "start": 6594.14, "end": 6600.14, "text": " Even if they're not exactly fixed, they are overwhelmingly fixed." }, { "start": 6600.14, "end": 6607.5, "text": " And to suggest that this is a flawed assumption, that these inequalities are at least part" }, { "start": 6607.5, "end": 6612.860000000001, "text": " the product of such differences, what you'd have to do, they simply state it's a flawed" }, { "start": 6612.860000000001, "end": 6614.18, "text": " assumption." }, { "start": 6614.18, "end": 6621.820000000001, "text": " What you have to do in order to show this is a flawed assumption, you have to show that" }, { "start": 6621.82, "end": 6628.66, "text": " gender and race, as far as they're biologically determined, have no influence whatsoever on" }, { "start": 6628.66, "end": 6629.66, "text": " these differences." }, { "start": 6629.66, "end": 6631.299999999999, "text": " That's what you have to show, right?" }, { "start": 6631.299999999999, "end": 6636.94, "text": " That's the counterclaim because the claim is they have at least in part something to" }, { "start": 6636.94, "end": 6637.94, "text": " do with it." 
}, { "start": 6637.94, "end": 6644.54, "text": " And that's also, I believe, what the more stated and what the predominant opinion like" }, { "start": 6644.54, "end": 6651.179999999999, "text": " is very like all the research points to, for example, there is a large difference in interest" }, { "start": 6651.18, "end": 6657.5, "text": " between genders as far as, for example, career selection goes and so on." }, { "start": 6657.5, "end": 6664.780000000001, "text": " Now, we can talk about why that is, but there's also a large consensus, I believe, that this" }, { "start": 6664.780000000001, "end": 6673.14, "text": " is at least partly determined to however degree, but it is at least partly determined by biology." }, { "start": 6673.14, "end": 6680.12, "text": " In order to show that this is flawed, you need to show that it does not have, it can't" }, { "start": 6680.12, "end": 6682.099999999999, "text": " have any influence, right?" }, { "start": 6682.099999999999, "end": 6688.9, "text": " You have to basically prove them the impossibility of this having an influence, which no one" }, { "start": 6688.9, "end": 6692.94, "text": " has done so far, much to the contrary." }, { "start": 6692.94, "end": 6698.12, "text": " So simply state this is a flawed assumption kind of shows to me that they've already," }, { "start": 6698.12, "end": 6706.22, "text": " they are there, they're in a bubble and they're expecting to speak to people in the same bubble." }, { "start": 6706.22, "end": 6719.66, "text": " Yeah, so they go on and kind of discredit this as called a biological determinism, which" }, { "start": 6719.66, "end": 6728.14, "text": " I don't think that's a correct use of the term biological determinism, but you can judge" }, { "start": 6728.14, "end": 6729.14, "text": " for yourself." }, { "start": 6729.14, "end": 6735.46, "text": " All I think these people are saying that biology might have some influence and we could adjust" }, { "start": 6735.46, "end": 6737.5, "text": " for that." }, { "start": 6737.5, "end": 6739.46, "text": " It's not even right, it's not even." }, { "start": 6739.46, "end": 6741.38, "text": " Yeah, this comes up here." }, { "start": 6741.38, "end": 6745.82, "text": " So conclusion, conclusion, finally, I think it's been two hours." }, { "start": 6745.82, "end": 6746.82, "text": " Sorry." }, { "start": 6746.82, "end": 6747.82, "text": " Conclusion." }, { "start": 6747.82, "end": 6754.38, "text": " Throughout this report, we've outlined the scope and scale of the problem, tracing how" }, { "start": 6754.38, "end": 6759.52, "text": " the diversity crisis in the industry and the problems of bias and AI systems are interrelated" }, { "start": 6759.52, "end": 6762.58, "text": " aspect of the same issue." }, { "start": 6762.58, "end": 6765.24, "text": " No." }, { "start": 6765.24, "end": 6770.36, "text": " In the past, these topics are commonly examined in isolation, but increasing evidence shows" }, { "start": 6770.36, "end": 6772.98, "text": " that they are closely intertwined." }, { "start": 6772.98, "end": 6776.48, "text": " No, you've shown that they're parallel." }, { "start": 6776.48, "end": 6782.84, "text": " You have absolutely not shown that they're interrelated aspects of the same issue and" }, { "start": 6782.84, "end": 6787.86, "text": " you have not shown that one, any one of these causally influences the other, that there" }, { "start": 6787.86, "end": 6789.179999999999, "text": " is any feedback loop." 
}, { "start": 6789.179999999999, "end": 6792.82, "text": " You have not shown that fixing one leads to fixing the other." }, { "start": 6792.82, "end": 6801.86, "text": " I mean, you could also take a company that extremely is focused on, or for some reason" }, { "start": 6801.86, "end": 6808.42, "text": " has a different workforce and then show how their products with the same data sets as" }, { "start": 6808.42, "end": 6814.219999999999, "text": " the previous companies don't end up being biased." }, { "start": 6814.219999999999, "end": 6816.38, "text": " Probably not so easy." }, { "start": 6816.38, "end": 6819.299999999999, "text": " But again, none of that is in the report." }, { "start": 6819.3, "end": 6825.38, "text": " There are many things you could actually do to show what you wanted to show, but it's" }, { "start": 6825.38, "end": 6830.820000000001, "text": " just not the case in this article." }, { "start": 6830.820000000001, "end": 6835.22, "text": " Our analysis surfaced two prominent responses to the diversity crisis." }, { "start": 6835.22, "end": 6840.18, "text": " On one hand, a worker driven movement, which we've skipped." }, { "start": 6840.18, "end": 6846.66, "text": " On the other hand, we observe a small but vocal counter movement that actively resists" }, { "start": 6846.66, "end": 6850.5, "text": " diversity in the industry." }, { "start": 6850.5, "end": 6854.42, "text": " What dishonesty actively resists diversity?" }, { "start": 6854.42, "end": 6861.3, "text": " I mean, the thought that these people stray around like, no, I don't like the other looking" }, { "start": 6861.3, "end": 6862.3, "text": " people." }, { "start": 6862.3, "end": 6864.42, "text": " It's just so absurd." }, { "start": 6864.42, "end": 6871.18, "text": " All they're saying is that either we don't understand the problem in the correct way" }, { "start": 6871.18, "end": 6873.98, "text": " or our tools aren't appropriate to solve the problem." }, { "start": 6873.98, "end": 6881.9, "text": " I think everyone has the same goal of the workplace and the AI systems being as fair" }, { "start": 6881.9, "end": 6887.339999999999, "text": " and as non discriminatory as possible." }, { "start": 6887.339999999999, "end": 6890.9, "text": " Misrepresentation of the other side is something that really bugs me." }, { "start": 6890.9, "end": 6893.419999999999, "text": " And it's something that these authors do a lot." }, { "start": 6893.419999999999, "end": 6900.82, "text": " So yeah, I lose my polite side maybe." }, { "start": 6900.82, "end": 6907.94, "text": " And uses arguments from biological determinism to assert that women are inherently less suited" }, { "start": 6907.94, "end": 6910.5, "text": " to computer science and AI." }, { "start": 6910.5, "end": 6912.179999999999, "text": " What a load of crap." }, { "start": 6912.179999999999, "end": 6919.139999999999, "text": " Sorry, but uses to assert that women are inherently less suited to computer science." }, { "start": 6919.139999999999, "end": 6920.139999999999, "text": " No one." }, { "start": 6920.139999999999, "end": 6925.78, "text": " Okay, not no one, but no one that I know." }, { "start": 6925.78, "end": 6930.179999999999, "text": " Asserts that absolutely no one that makes these arguments." }, { "start": 6930.18, "end": 6931.820000000001, "text": " Sorry, not no one." }, { "start": 6931.820000000001, "end": 6939.700000000001, "text": " You can always find a sexist douchebag that makes that argument." 
}, { "start": 6939.700000000001, "end": 6943.62, "text": " But this is not a serious argument made." }, { "start": 6943.62, "end": 6947.900000000001, "text": " And this is not this counter movement." }, { "start": 6947.900000000001, "end": 6951.46, "text": " Most people in the argument that most people in this counter movement make." }, { "start": 6951.46, "end": 6952.62, "text": " Not at all." }, { "start": 6952.62, "end": 6962.82, "text": " And to represent them as such is just so dishonest that yeah, this this this basically this is" }, { "start": 6962.82, "end": 6968.94, "text": " the it's nice that it's in the conclusion because it finally like at the end it completely" }, { "start": 6968.94, "end": 6975.98, "text": " destroys the credibility of me taking seriously these authors." }, { "start": 6975.98, "end": 6981.74, "text": " I thought they had so that the parts we skipped over I mostly would say I'm mostly okay with" }, { "start": 6981.74, "end": 6989.66, "text": " they mostly show parallels between the that AI systems are biased and they also show that" }, { "start": 6989.66, "end": 6991.3, "text": " there is unequal representation." }, { "start": 6991.3, "end": 6996.0199999999995, "text": " They also show examples of discrimination, harassment and so on." }, { "start": 6996.0199999999995, "end": 7001.38, "text": " Problems in AI companies and universities that all you can read the report for this" }, { "start": 7001.38, "end": 7003.98, "text": " that's it's pretty interesting to read." }, { "start": 7003.98, "end": 7008.94, "text": " But the points I've addressed, I'm not happy with." }, { "start": 7008.94, "end": 7011.78, "text": " Yeah, so that was it for now." }, { "start": 7011.78, "end": 7018.179999999999, "text": " Sorry this was took so long, but I felt that a thorough take was necessary." }, { "start": 7018.18, "end": 7039.22, "text": " Have a nice rest of the day." } ]
PDRtyrVskMU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Chip Placement with Deep Reinforcement Learning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "reinforcement learning", "deep reinforcement learning", "gans", "gan", "deconvolution", "computer chip", "gpu", "tpu", "fpga", "netlist", "constrained", "google" ]
The AI Singularity is here! Computers designing new computers! It takes human experts multiple weeks to design new computer chips. What looks like a large game of Tetris is actually a very complex optimization problem. This paper uses Deep Reinforcement Learning to solve this optimization both faster and better than humans. https://arxiv.org/abs/2004.10746 Abstract: In this work, we present a learning-based approach to chip placement, one of the most complex and time-consuming stages of the chip design process. Unlike prior methods, our approach has the ability to learn from past experience and improve over time. In particular, as we train over a greater number of chip blocks, our method becomes better at rapidly generating optimized placements for previously unseen chip blocks. To achieve these results, we pose placement as a Reinforcement Learning (RL) problem and train an agent to place the nodes of a chip netlist onto a chip canvas. To enable our RL policy to generalize to unseen blocks, we ground representation learning in the supervised task of predicting placement quality. By designing a neural architecture that can accurately predict reward across a wide variety of netlists and their placements, we are able to generate rich feature embeddings of the input netlists. We then use this architecture as the encoder of our policy and value networks to enable transfer learning. Our objective is to minimize PPA (power, performance, and area), and we show that, in under 6 hours, our method can generate placements that are superhuman or comparable on modern accelerator netlists, whereas existing baselines require human experts in the loop and take several weeks. Authors: Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Sungmin Bae, Azade Nazi, Jiwoo Pak, Andy Tong, Kavya Srinivasa, William Hang, Emre Tuncer, Anand Babu, Quoc V. Le, James Laudon, Richard Ho, Roger Carpenter, Jeff Dean Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Chip Placement with Deep Reinforcement Learning by Azalia Mirhoseini, Anna Goldie and a long list of authors that I have no stamina to read down. I'm sorry. So this work is a cool application of reinforcement learning to the real world, and we're gonna go through it. The cool thing about it is that it pulls together parts from so many different areas of machine learning, and also chip engineering.

So what's the fundamental problem? The fundamental problem of chip design is this. You have a canvas, an empty chip, and you want to build a computer chip. What you are given is a so-called netlist. Your netlist is all the parts that you want on that computer chip, along with their shape or their size. You can imagine this a bit like a Tetris game. So here's this netlist: there's this part, and then this part, and then there's maybe this part, and also this part. Many, many parts. As I understand it, there can be thousands of parts, and you can sort of group them together, but still there are a lot of these parts. The netlist also contains information about how they're connected: for each of these parts, you have a list of which other parts it must be connected to. So maybe it says, okay, this part here needs to be connected to those three parts, and for each of those you'd also have a list of how they must be connected. You can represent this as an adjacency matrix, right? But ultimately, this is a graph of these nodes.

Now your goal is to place those things on this board. So for example, we're gonna place this right here, and we're gonna place the second one maybe here, and the third one maybe here. You can imagine, if this is a CPU (look, I have no clue of chip design, but I imagine it like this), this is your clock that you need on there, these are your NAND gates (NAND gates, pretty important for a CPU), and this is your floating point unit, also pretty important, and so on. So you need to place these things, and then you need to connect them using wires. Wires are of course etched into the board, but you need to connect them according to the netlist. Maybe there is a component right here, and according to the netlist they need to be connected like that; maybe the algorithm that came up with the chip told you they need to be connected like this. If you lay them out like this, you can draw the wires. So this is your finished chip: you want to go from the thing on the right to the thing on the left, and your goal, in order to get the fastest possible computer chip, comes down to three things.

First of all, density is important. Density basically just means you can't place stuff on top of other stuff, so you could not place a block right here; that's not possible, because the clock is already there. So that's the first thing: you can't place stuff on top of other stuff. The second thing is the wires, and specifically the length of the wires. You see, for example, this thing here is a pretty short wire, which means the signal travels fast. This thing here is a long wire, so the signal travels more slowly. The faster you want your signal to go, the shorter you have to make your wires. So you want to keep the total amount of wire length as short as possible.
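To make the netlist and the wire-length objective concrete, here is a tiny, self-contained Python sketch. This is my own toy illustration, not code from the paper: the component names, sizes and nets are invented, and the half-perimeter wire length (HPWL) below is just the standard cheap proxy for routed wire length.

# A toy netlist: each component has a (width, height), and each net lists
# the components it connects.
components = {
    "clock": (4, 4),
    "nand":  (2, 3),
    "fpu":   (5, 2),
}
nets = [("clock", "nand"), ("nand", "fpu"), ("clock", "nand", "fpu")]

def hpwl(placement, nets):
    # Half-perimeter wire length: for each net, the half perimeter of the
    # bounding box around its pins approximates the wire needed to route it.
    total = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

placement = {"clock": (0, 0), "nand": (3, 1), "fpu": (1, 4)}  # (x, y) grid cells
print(hpwl(placement, nets))  # smaller is better

The adjacency-matrix view mentioned above carries the same information: entry (i, j) is nonzero exactly when components i and j share a net.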
Then, third, there is what's called congestion. So congestion is when, for example... okay, I actually don't know exactly what it is, but maybe it is when two wires cross, like somewhere here there might be congestion, or maybe some parts actually share their wires. I can imagine (sorry, I'm gonna draw this again) that maybe there's a part here that wants to go up, and then it shares this section of the wire with the other one, and so there's congestion, like with roads or something. In any case, you can measure congestion, it's a bad thing, and you want to lay out your components so as to minimize congestion, minimize the length of the wires, and not have them on top of one another. Easy enough, right? It takes human experts, combined with state-of-the-art algorithms, multiple weeks to design chips like this. That's the fundamental problem, and this paper uses reinforcement learning to solve it in a few hours.

So how does it do this? The reinforcement learning setup is basically a sequential decision-making method: you place one thing at a time. You start off with what they call the chip canvas, which is just empty. This is your state, and the agent gets to decide where to place the next thing. Now, how do you decide what the next thing is? I believe they simply go by size, so they take the largest component of the netlist first and just go down the netlist like this. So you tell the agent: hey agent, I want you to place this thing next. And the agent will tell you: I will place it right here. Then again you tell the agent: agent, I have this thing here, where do you want to place it? And this comes along with all the connections, everything it needs to be connected to, and also the entire remaining list, so the agent can think about what is still to come, what it hasn't placed yet. All of that goes into the decision, and it tells you: okay, I want to place this here. At the end, you end up with this filled board.

Now, this isn't actually the end. After you have placed all your things, there is another method coming in, a so-called force-directed method, placing yet more things. What you actually place with this agent are only the things called macros. I have no idea what those are exactly or how they differ from the standard cells, but apparently you then use a force-directed method, which you can think of as just an algorithm you run, to place the standard cells, and those are these gray blobs here. At the end of all of this, you can finally evaluate how good your design is. So at the end you get a reward that is a mixture of wire length and congestion. It is actually an approximation to wire length and an approximation to congestion that they use, because they need to evaluate it quickly, but essentially it's highly correlated with wire length and congestion. The negative of that is going to be your reward.
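Written out as code, the reward could look roughly like this, reusing the hpwl helper from the sketch above. The 0.5 weighting and the congestion stub are placeholders of mine; the paper's actual proxies and weighting are more involved.

def estimate_congestion(placement):
    # Stub: a real estimator counts routing demand per grid cell and
    # penalizes cells that too many nets want to pass through.
    return 0.0

def episode_reward(placement, nets, congestion_weight=0.5):
    # Returned only once, after the agent has placed all macros AND the
    # force-directed method has placed the standard cells; every
    # intermediate step gets reward zero.
    wirelength = hpwl(placement, nets)
    congestion = estimate_congestion(placement)
    return -(wirelength + congestion_weight * congestion)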
So in terms of a reinforcement learning problem, this is pretty nasty, because as you can see, you get a reward of zero for every step, and only at the very end do you get your true reward. And it is actually worse than that. From here to here, your agent gets to perform actions; this is action time. Usually, when you have these sparse-reward tasks, you get your reward at the end of that action time, but not here. At the end of the action time, an algorithm over which the agent has basically no control, this force-directed method, comes in and does a bunch of things, and only then do you get your reward. So the agent must purposefully leave room for what this algorithm is going to do; it needs to learn that as well. As far as reinforcement learning goes, this is a pretty hard reinforcement learning problem. So now we have an environment, which is the canvas here (you can also consider the force-directed method to be part of the environment, and of course the reward giver, and the netlist is part of the environment too), and we have the agent that can perform actions. Next, we have to go into how the agent performs actions on this.

By the way, maybe this is a bit confusing, because it was a bit confusing for me: for a given reinforcement learning problem here, we'll just start out by saying that the netlist is always the same. You might be coming from a deep learning background where you're used to many, many different training samples; in this case, the netlist, the goal, is always the same. You can think of it like a reinforcement learning agent for the game of chess where it's always the same chess game that you're trying to optimize. This is the difference to, let's say, supervised learning. If you have a label in supervised learning, if you know the solution to a particular data point, you're happy; that data point is no longer interesting, and you want to generalize. Here, even though they do generalize later, you can give it a single problem, and a solution to that problem will already be valuable, because it can be better than any solution humanity has come up with until this point. So always keep in mind that we're now just working on one single netlist, one problem: to optimally place this netlist. An episode is simply to place these things until the end; then you get a reward, then you go back to the beginning and do it all over again, just trying to do better, on the same problem, right?

Okay, so how does this work? By the way, the paper has great technical detail on chip engineering and on how the reward function exactly works and so on; I don't have the expertise to go into that with you beyond what I just described. All right, so here is how the model looks from a deep RL perspective. There are two parts to this model; you can divide it about here. On the right, you have the policy and value networks, and on the left, the feature embeddings. In reinforcement learning (and we won't go much into reinforcement learning now) what you need is basically a way to encode the state. This is the encoder: all the information of the observation, which might come in different modalities, needs to be encoded into, for simplicity let's say, a single vector. That thing here is the state encoding. And then you can employ a policy and a value network in order to do reinforcement learning.
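As a rough PyTorch-style sketch of that actor-critic wiring, with the encoder and policy head passed in as black boxes (a sketch of the policy head follows a bit further down); this is my paraphrase, not the paper's code:

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, encoder, policy_head, state_dim):
        super().__init__()
        self.encoder = encoder          # observation -> single state vector
        self.policy_head = policy_head  # state vector -> distribution over grid cells
        self.value_head = nn.Linear(state_dim, 1)  # the value net really is one FC layer

    def forward(self, observation, occupied_mask):
        state = self.encoder(observation)
        return self.policy_head(state, occupied_mask), self.value_head(state)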
The side on the right comes from standard RL: you have a policy network and a value network, and they do, I believe, PPO with it. This is a standard reinforcement learning setup, an actor-critic architecture. The value net simply tells you the value of the state you're in: given that you have a state embedding, it just takes a fully connected layer to transform it into a single float. That is the value network. The policy is a bit different, because usually in reinforcement learning you just have a list of actions; you just say, I have these 16 buttons on my controller. Here, by contrast, if you look at this chip from above, the question is: where do you want to place the next thing? In order to answer that, they take the embedding of the state and run it through a series of deconvolutions, and deconvolutions have the ability to basically upsample an image. So you see here, you transform this vector into a 4 by 4 by 32 tensor, and that gets deconvolved into fewer and fewer channels but more and more height and width. So from a vector, it produces an image. You might recognize this: a lot of generator architectures, for example when you make GANs for images, have exactly this deconvolution architecture. As I said, pretty cool, it pulls in architectures and methods from different fields: we already have reinforcement learning, and now we have generators for images.

So you come up with an image, and this image, if you imagine looking at the chip from above, is discretized. In principle you could place the thing you have to place at pretty much every single nanometer, but they discretize this into a grid, and for each point in the grid, the network outputs a number: maybe a nine here, a three, a four, and so on, an eight right here. For each of these cells it outputs a number saying how much it would like to place the next thing at that particular location. So this is a distribution over locations. The first thing you have to do is mask out where there already are things; we said the first condition is that things cannot be on top of other things. Maybe you have already placed something here, so you ignore those numbers, and you have already placed something here, so you ignore those numbers as well. That is this masking operation right here. Then you simply look at where your highest number is. Maybe there's an 11 down here somewhere, and you say: ah, this is my highest number, okay, cool. You look at what you need to place (maybe the thing you need to place looks like this), and you say: all right, the 11 marks, say, the top-left corner, I will place it right here. Then you do the same thing again for the next piece: you would mark this cell as blue as well, so you can't place there, you evaluate your network again (of course you'll have a new shape, something like this), and you ask the network where it would like to place this one. You do this step by step until the entire netlist is empty. So this is how we do the reinforcement learning; this is how we decide on an action.
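Here is a minimal sketch of such a deconvolutional policy head with the masking step. The layer sizes and the 32x32 grid are illustrative choices of mine, not the paper's exact architecture:

import torch
import torch.nn as nn

class DeconvPolicyHead(nn.Module):
    # Maps a state vector to a distribution over grid cells.
    def __init__(self, state_dim, grid=32):
        super().__init__()
        self.fc = nn.Linear(state_dim, 32 * 4 * 4)  # vector -> 4x4x32 tensor
        self.deconv = nn.Sequential(                # 4x4 -> 8x8 -> 16x16 -> 32x32
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, state, occupied_mask):
        # occupied_mask: bool tensor (batch, grid, grid), True where a
        # component already sits.
        x = self.fc(state).view(-1, 32, 4, 4)
        logits = self.deconv(x).flatten(1)          # one logit per grid cell
        # masking: occupied cells get probability zero
        logits = logits.masked_fill(occupied_mask.flatten(1), float("-inf"))
        return torch.distributions.Categorical(logits=logits)

Sampling from (or taking the argmax of) the returned distribution gives a cell index, which you decode back into a (row, column) position for the top-left corner of the current macro.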
But how do we actually put the state into this encoding? This pulls in yet another framework from another field of deep learning, namely graph convolutional neural networks. The netlist is a graph: again, if you have your netlist (wow, this is slow today) right here, with each part, its shape or size, and the list of things it needs to be connected to, then this naturally forms a graph. So you can transform it into a graph, with the things that need to be connected joined by an edge, and they run a graph convolutional network across that. In a graph convolutional network, you're trying to take a graph like this and compute embeddings for the edges and the vertices; ultimately you want what's called a graph embedding. In order to get that, you need to propagate information along the graph. Usually, as we said, this is done with a graph convolution. If you've been in machine learning for a while longer, you might also remember things like conditional random fields, or graphical methods generally, which were once popular and are kind of a precursor to this.

The way they do it is in an iterative fashion, with multiple update steps; they describe it right here. So how do they embed a graph? They have nodes in the graph, as we saw before. I'm going to draw this one again: this is maybe vi, vj and vk, and these represent the pieces in the netlist that you have to place. Each of those has a bunch of features. The features might be its size (I believe they list them somewhere here), maybe how much power it uses, and also its x and y coordinates if it is already placed. So you start with a vector like this, and then you iteratively do the following. You compute edge features by taking, for each edge, its two nodes, this is vi and vj, running them through a fully connected layer to embed the features of the nodes, concatenating them, and running the result through another neural network layer. That's how you get embeddings for the edges. Then you update the embeddings of the nodes by taking the mean of the embeddings of their edges. You do this in an iterative fashion: first you compute the edges from the nodes, then the nodes from the edges, and so on. This means that information can propagate through the graph: information from this node propagates into this edge embedding, in the next step that propagates into this node, and then that can propagate into this edge ejk. This is the same idea as, if you're used to something like conditional random fields, propagation over time: in a big graph like this, the information from any particular node will propagate out throughout the graph, and at some point you can sort of reach an equilibrium where everyone in the graph knows about everyone else. I have not found how many times they do it; they simply say 'we repeatedly perform the following updates'. Maybe it's somewhere and I just haven't read it closely enough. I also haven't seen whether or not they then backpropagate through these multiple updates or just through one of them; I'm not entirely sure. But ultimately, they get these edge embeddings out of the graph, and they simply take the mean to get the graph embedding, and that goes into their state embedding. Along with that, they also have the macro embeddings, which are the nodes here, the things to be placed, together with the current macro ID, which says which one you need to place right now. So these are the two things that come out of the graph, vertices and edges, and then which one you need to place right now, which is pretty important: you take the embedding of the one that you need to place, and this also goes into your embedding.
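That iterative edge/node scheme, written out as a sketch. This is my reading of the description, with invented dimensions, a guessed aggregation over both endpoints, and an assumed iteration count, since the paper just says the updates are performed repeatedly:

import torch
import torch.nn as nn

class NetlistGraphEncoder(nn.Module):
    def __init__(self, node_dim, hidden=64):
        super().__init__()
        self.node_fc = nn.Linear(node_dim, hidden)    # embed raw node features
        self.edge_fc = nn.Linear(2 * hidden, hidden)  # concat of two nodes -> edge

    def forward(self, node_feats, edges, iters=3):
        # node_feats: (num_nodes, node_dim), e.g. size, power, x and y if placed
        # edges: (num_edges, 2) long tensor of connected node indices
        h = self.node_fc(node_feats)
        for _ in range(iters):
            src, dst = edges[:, 0], edges[:, 1]
            e = self.edge_fc(torch.cat([h[src], h[dst]], dim=-1))  # edge embeddings
            # node update: mean of the embeddings of all incident edges
            agg, cnt = torch.zeros_like(h), torch.zeros(h.size(0), 1)
            for idx in (src, dst):
                agg = agg.index_add(0, idx, e)
                cnt = cnt.index_add(0, idx, torch.ones(len(e), 1))
            h = agg / cnt.clamp(min=1)
        return e.mean(dim=0)  # graph embedding: mean over the edge embeddings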
And then you have some metadata about the netlist, like how many things there are and so on, and this is also embedded using a fully connected layer. All of that goes into your embedding, so your embedding will contain all of this information, if you've done a good job and if you train it correctly. So this is the model.

Now, they do pre-train this encoder part, and the pre-training is also kind of circular. First, they just generate a giant list of placements: they take this chip here and run a policy network, maybe not a super optimal one, a bunch of times, including intermediate states, and they pre-train the encoder to predict the final reward for each of these placements, or rather the wire length and congestion and so on. That pre-trains the encoder, but ultimately you can train the whole thing with reinforcement learning: you can now let it try to solve this board over and over and over, and it will get better over time.

All right, the last thing they do is transfer learning. Finding a better placement for a single board is already better and faster than the humans, but what is cool is the following. Say you have now trained on this one particular board, sorry, on one particular netlist (where was it... right): we've now trained on this particular netlist, this was our problem, and we've solved it, we have a great solution. Now we get another netlist; here is netlist 2, and it's maybe a bit different, this one is longer and this one is here and so on. Would we have to start again from scratch, training a reinforcement learning agent on netlist 2? Like an RL agent trained on chess: if we now wanted to play Go, we'd need to start over again. Instead, they try to just transfer to the new one, and astonishingly enough, it works, if you train the same RL agent not on one netlist only but on a set of netlists. The biggest set they have is 20, so their dataset size is 20; imagine how small this is compared to supervised learning. Maybe think of it like this: you train on 20 Atari games, and then it will play the 21st one much better than if you started from scratch. Interestingly, even the zero-shot placements tend to be pretty good: they don't optimize for the new netlist at all, and it's already better. You can see that here: if you train a policy from scratch, it takes a long time, but if you fine-tune a pre-trained policy, it takes much less time, and interestingly enough, at the beginning it is already better than the policy trained from scratch. That means the knowledge from one chip transfers over to the other chip, so the problems are sufficiently close. And that basically means that if we now want to design a new AI chip, not only are we better because of RL, we're also faster, because we can transfer-learn. They show that this effect appears once you have a large enough dataset, and again, large here is just 20 blocks. Here you see one of these placements: on the left the zero-shot placement, and on the right the one fine-tuned on that particular netlist. Obviously, me being an expert in chip placement, it is clearly obvious that both are extremely good. Actually, even funnier, I find this one where they compare human experts to their approach, and it says the figures are intentionally blurred because the designs are proprietary. Like, why do you even put them in? I couldn't judge them even if they were super crisp. All right, I guess it's their trade secret.
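Circling back to the supervised pre-training step described a moment ago, a minimal sketch of "pre-train the encoder to predict placement quality" could look like this. The dataset plumbing (pairs of observations and wire-length/congestion labels collected by running a rough policy), the learning rate and the epoch count are my assumptions, not the paper's specification.

import torch
import torch.nn as nn

def pretrain_encoder(encoder, placement_dataset, embed_dim, epochs=10):
    # Supervised warm-up: regress the wire-length and congestion labels of
    # previously generated placements, so the state embedding becomes
    # predictive of placement quality before any RL happens.
    head = nn.Linear(embed_dim, 2)  # -> (predicted wire length, congestion)
    params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(epochs):
        for obs, labels in placement_dataset:
            loss = nn.functional.mse_loss(head(encoder(obs)), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder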
So they compare this with the standard algorithms for these things, and not only are they faster, they are also better on the metrics. Overall, as I said, I find this to be a pretty cool work that pulls in a lot of things from a lot of different fields. At one point they say 'we propose a novel graph convolutional architecture'. I'm not sure that it is novel; maybe it's novel for this problem, but I'm pretty sure graph convolutional networks and things like this have been around for a while. But again, it pulls together things from many different fields and applies them very well. A very well engineered paper, and a step towards the singularity, as now AI can design AI accelerators. How amazing. Yeah, humanity is doomed. All right, I invite you to check out this paper. If you're still here, please subscribe, leave a like and a comment, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 5.48, "text": " Hi there! Today we're looking at Chip Placement with Deep Reinforcement Learning" }, { "start": 5.48, "end": 11.88, "text": " by Azalia Miroszajny, Anna Goldi and a long list of authors that I have no" }, { "start": 11.88, "end": 18.96, "text": " stamina to read down. I'm sorry. So this work is a cool application of" }, { "start": 18.96, "end": 26.36, "text": " reinforcement learning to the real world. And we're gonna go through it and the" }, { "start": 26.36, "end": 31.88, "text": " cool thing about it is it pulls together parts from so many different areas of" }, { "start": 31.88, "end": 37, "text": " machine learning and also here chip engineering. So what's the fundamental" }, { "start": 37, "end": 44.760000000000005, "text": " problem? The fundamental problem of chip design is this. You have a canvas, an" }, { "start": 44.760000000000005, "end": 50.16, "text": " empty chip and you want to build a computer chip. Now what you have given is" }, { "start": 50.16, "end": 57.959999999999994, "text": " a so-called netlist. So your netlist is any parts that you want on that computer" }, { "start": 57.959999999999994, "end": 63.12, "text": " chip and their shape or their size. So you can imagine this like a bit of a" }, { "start": 63.12, "end": 69.64, "text": " Tetris game. So here's this netlist. There's this part and then this part and" }, { "start": 69.64, "end": 75.52, "text": " then there's maybe this part and also this part. Many many parts. Now these as I" }, { "start": 75.52, "end": 80.75999999999999, "text": " understand it can be thousands of parts but you can sort of group them together" }, { "start": 80.75999999999999, "end": 85.8, "text": " but still there are a lot of these parts. And the netlist also contains" }, { "start": 85.8, "end": 90.64, "text": " information about how they're connected. So for each of these parts you would" }, { "start": 90.64, "end": 96.75999999999999, "text": " have a list of which other ones of these parts they must be connected to. So" }, { "start": 96.75999999999999, "end": 101.16, "text": " maybe it says okay this part here needs to be connected to those three parts and" }, { "start": 101.16, "end": 105.32, "text": " for each of those you'd also have like a list of how they must be connected." }, { "start": 105.32, "end": 110.27999999999999, "text": " You can represent this as an adjacency matrix right? But ultimately this is a" }, { "start": 110.27999999999999, "end": 117.55999999999999, "text": " graph of these nodes. Now your goal is to place those things on this board. So for" }, { "start": 117.55999999999999, "end": 122.11999999999999, "text": " example we're gonna place this right here and we're gonna place the second" }, { "start": 122.11999999999999, "end": 127.35999999999999, "text": " one maybe here and the third one maybe here. So you can imagine if this is a" }, { "start": 127.35999999999999, "end": 132.92, "text": " CPU maybe look I have no clue of chip design but I imagine it like this. This" }, { "start": 132.92, "end": 139.23999999999998, "text": " is your clock that you need on there. This is your NAND gates right? NAND gates" }, { "start": 139.23999999999998, "end": 145.07999999999998, "text": " pretty important for a CPU and this is your floating point unit also pretty" }, { "start": 145.07999999999998, "end": 149.67999999999998, "text": " important and so on. 
So I need to place these things and then you need to" }, { "start": 149.67999999999998, "end": 154.51999999999998, "text": " connect them using these using wires. Now wires are of course etched into the" }, { "start": 154.51999999999998, "end": 160.2, "text": " board but you need to connect them according to the maybe there is a" }, { "start": 160.2, "end": 165.92, "text": " component right here. According to the netlist right they need to be connected" }, { "start": 165.92, "end": 172.35999999999999, "text": " like that. Maybe the algorithm that came up with the chip told you they need to" }, { "start": 172.35999999999999, "end": 177.23999999999998, "text": " be connected like this and if you lay them out like this you can draw the" }, { "start": 177.23999999999998, "end": 181.95999999999998, "text": " wires. So this is your finished you you want to go from the thing on the right" }, { "start": 181.95999999999998, "end": 187.23999999999998, "text": " to the thing on the left and your goal here in order to get the fastest" }, { "start": 187.24, "end": 194.8, "text": " possible computer chip is three things. First of all you want first of all the" }, { "start": 194.8, "end": 201.60000000000002, "text": " density is important. By density it basically just means you can't place" }, { "start": 201.60000000000002, "end": 207.92000000000002, "text": " stuff on top of other stuff so you could not place a block right here. Not possible" }, { "start": 207.92000000000002, "end": 212.8, "text": " because the clock is already there. So that's first thing you can't place" }, { "start": 212.8, "end": 221.76000000000002, "text": " stuff on top of other stuff. Then the second thing is the wires and" }, { "start": 221.76000000000002, "end": 228.36, "text": " specifically the length of the wires. So you see for example this thing here is a" }, { "start": 228.36, "end": 233.56, "text": " pretty short wire that means the signal travels fast. This thing here is a long" }, { "start": 233.56, "end": 241.04000000000002, "text": " wire so the signal travels more slowly. Now the lower sorry the the faster you" }, { "start": 241.04, "end": 244.76, "text": " want your signal to go that means you have to make your wires as short as" }, { "start": 244.76, "end": 249.28, "text": " possible. So you want to keep the total amount of wire length as short as" }, { "start": 249.28, "end": 259.28, "text": " possible. Then third is what's called congestion. So congestion is when for" }, { "start": 259.28, "end": 265.36, "text": " example okay I actually don't know what it is but maybe it is when two wires" }, { "start": 265.36, "end": 272.08000000000004, "text": " cross like somewhere here there might be congestion or maybe some parts" }, { "start": 272.08000000000004, "end": 278.16, "text": " actually share their wires. I can imagine sorry I'm gonna draw this again I can" }, { "start": 278.16, "end": 287.08000000000004, "text": " imagine maybe there's a part here that wants to go up and then it maybe shares" }, { "start": 287.08000000000004, "end": 292.88, "text": " sorry shares this part of the wire here with the other one and so there's" }, { "start": 292.88, "end": 298.56, "text": " congestion like it's roads or something. 
In any case you can measure congestion" }, { "start": 298.56, "end": 303.52, "text": " it's a bad thing and you want to lay out your components basically in order to" }, { "start": 303.52, "end": 308.15999999999997, "text": " minimize congestion and also minimize the length of the wires and not have" }, { "start": 308.15999999999997, "end": 314.76, "text": " them on top of one another. Easy enough right? It takes human experts and" }, { "start": 314.76, "end": 320.36, "text": " combined with state-of-the-art algorithms multiple weeks to design chips like this" }, { "start": 320.36, "end": 326, "text": " and that's the fundamental problem and this paper takes reinforcement learning" }, { "start": 326, "end": 333.04, "text": " in order to solve this problem in a few hours. So how does it do this? The" }, { "start": 333.04, "end": 338.8, "text": " reinforcement learning is basically a sequential measure method so you want to" }, { "start": 338.8, "end": 345.16, "text": " do one action at a time. So you start off with what they call the chip canvas" }, { "start": 345.16, "end": 351.6, "text": " which is just empty. This is your state and then the agent here it gets to" }, { "start": 351.6, "end": 355.56, "text": " decide where to place the next thing. Now how do you decide what the next thing is?" }, { "start": 355.56, "end": 361.48, "text": " I believe they simply go by size so they take the largest component first of" }, { "start": 361.48, "end": 367.92, "text": " the netlist first and they just go down the netlist like this. So you tell the" }, { "start": 367.92, "end": 374.12, "text": " agent hey agent I want you to place this this thing here I want you to place this" }, { "start": 374.12, "end": 380.2, "text": " next and the agent will tell you I will place it right here and then again you" }, { "start": 380.2, "end": 385.04, "text": " tell the agent agent I have this this thing here where do you want to place it" }, { "start": 385.04, "end": 389.56, "text": " and along with sorry along with all the connections right along with everything" }, { "start": 389.56, "end": 394.84000000000003, "text": " that it needs to be connected to and also the entire list so the agent must" }, { "start": 394.84000000000003, "end": 400.72, "text": " also think of what what is to come yet what it hasn't placed yet. So it" }, { "start": 400.72, "end": 405.12, "text": " everything goes into this decision and it tells you okay I want to place this" }, { "start": 405.12, "end": 411.32000000000005, "text": " here and then at the end you end up with this filled board. Now this isn't" }, { "start": 411.32000000000005, "end": 418.24, "text": " actually the end. After you have placed all your things there is another method" }, { "start": 418.24, "end": 423.64000000000004, "text": " coming in this is called this force directed method placing yet more things" }, { "start": 423.64000000000004, "end": 430, "text": " so what you actually place with this agent is only the things called macros. 
I" }, { "start": 430, "end": 434.92, "text": " have no idea what those are or how these are different from these standard cells" }, { "start": 434.92, "end": 439.84, "text": " but apparently you must use a force directed method which you can think of" }, { "start": 439.84, "end": 445.48, "text": " just just an algorithm you run to place the standard cells and these are these" }, { "start": 445.48, "end": 453.84, "text": " these gray blobs here and at the end of all of this you can finally evaluate how" }, { "start": 453.84, "end": 459.1, "text": " good your design is. So at the end of all your of this you get a reward that is a" }, { "start": 459.1, "end": 464.28000000000003, "text": " mixture of wire length and congestion. Now this is actually an approximation to" }, { "start": 464.28000000000003, "end": 468.44, "text": " wire length and this is an approximation to congestion that they use because they" }, { "start": 468.44, "end": 473.6, "text": " need to evaluate it quickly but in essentially it's highly correlated" }, { "start": 473.6, "end": 478.96000000000004, "text": " with wire length and congestion so the negative of that is going to be your" }, { "start": 478.96000000000004, "end": 486, "text": " reward. So in terms of a reinforcement learning problem this is pretty nasty" }, { "start": 486, "end": 491.8, "text": " right because as you can see here you get basically a reward of zero for every" }, { "start": 491.8, "end": 498.04, "text": " step until the very end you get your true reward and it is actually worse" }, { "start": 498.04, "end": 505.36, "text": " than that because so from here to here your agent gets to perform actions" }, { "start": 505.36, "end": 512.88, "text": " right this is action time but usually when you have these sparse reward tasks" }, { "start": 512.88, "end": 517.84, "text": " you'll get your reward at the end of that action time but not here at the end" }, { "start": 517.84, "end": 522.76, "text": " of the action time there is an algorithm over which the agent basically has no" }, { "start": 522.76, "end": 529.48, "text": " control that comes in and does a bunch of things this force directed method and" }, { "start": 529.48, "end": 535.68, "text": " only then do you get your reward right so the agent must purposefully sort of" }, { "start": 535.68, "end": 540.6, "text": " leave room here for what this algorithm is going to do so it needs to learn that" }, { "start": 540.6, "end": 546.4, "text": " as well this is a as far as reinforcement learning goes this is a" }, { "start": 546.4, "end": 551.72, "text": " pretty good reinforcement learning problem right so now we have an" }, { "start": 551.72, "end": 557.8000000000001, "text": " environment which is the canvas here and also this you can you can consider this" }, { "start": 557.8000000000001, "end": 562.84, "text": " force directed method to be part of the environment and of course the reward" }, { "start": 562.84, "end": 570.2, "text": " giver and you have also the netlist as part of the environment and you have the" }, { "start": 570.2, "end": 574.76, "text": " agent that can do actions now we have to go into how does the agent perform" }, { "start": 574.76, "end": 583.1600000000001, "text": " actions on this so by the way maybe a bit confusing because it was a bit" }, { "start": 583.1600000000001, "end": 587.6800000000001, "text": " confusing for me for a given reinforcement learning problem we'll" }, { "start": 587.6800000000001, "end": 593.6, "text": " just start out by saying that the netlist is 
always the same right if you" }, { "start": 593.6, "end": 599.9200000000001, "text": " might be coming from a deep learning framework where you're used to many many" }, { "start": 599.92, "end": 606.3199999999999, "text": " different training samples in this case basically the netlist the goal is always" }, { "start": 606.3199999999999, "end": 611.04, "text": " the same you can think of it like a reinforcement learning agent for the" }, { "start": 611.04, "end": 615.76, "text": " game of chess where it's always the same chess game that you're trying to" }, { "start": 615.76, "end": 620.4, "text": " optimize this is the difference to let's say supervised learning if you have a" }, { "start": 620.4, "end": 624.5999999999999, "text": " label in supervised learning if you know the solution to a particular data point" }, { "start": 624.5999999999999, "end": 628.8399999999999, "text": " you're happy right you that data point is no longer interesting you want to" }, { "start": 628.84, "end": 635.08, "text": " generalize here even though they generalize later here you can give it a" }, { "start": 635.08, "end": 641.44, "text": " single problem right and it will already a solution to that problem will be" }, { "start": 641.44, "end": 645.5600000000001, "text": " valuable because it can be a better solution that humanity has come up with" }, { "start": 645.5600000000001, "end": 650.84, "text": " until this point so always think that we're now just working on one single" }, { "start": 650.84, "end": 656.44, "text": " netlist one problem to optimally place this netlist and an episode is simply to" }, { "start": 656.44, "end": 662.4000000000001, "text": " place these things until here and then you get a reward and then you go back to" }, { "start": 662.4000000000001, "end": 666.2800000000001, "text": " the beginning and just do it all over again but just try to do better and then" }, { "start": 666.2800000000001, "end": 671.2, "text": " you go back to the beginning and you do it all over again the same problem right" }, { "start": 671.2, "end": 678.6, "text": " okay so how does this work by the way that the paper has great technical" }, { "start": 678.6, "end": 684.5200000000001, "text": " detail on chip engineering and how the reward function exactly works and so on" }, { "start": 684.52, "end": 691.0799999999999, "text": " I have not the expertise to go into this with you beyond what I just described" }, { "start": 691.0799999999999, "end": 696.12, "text": " alright so here is how the model looks from a deep RL perspective now there's" }, { "start": 696.12, "end": 703.6, "text": " two parts to this model you can divide it about here so on the right you would" }, { "start": 703.6, "end": 707.96, "text": " have what is your policy and value networks and on the left the feature" }, { "start": 707.96, "end": 713.48, "text": " embeddings so in reinforcement learning and we won't go much into reinforcement" }, { "start": 713.48, "end": 718.96, "text": " learning now but what you need are basically a way to encode the state this" }, { "start": 718.96, "end": 724.5600000000001, "text": " is the encoder so all the information of the observation they might be in" }, { "start": 724.5600000000001, "end": 729.08, "text": " different modalities and so on you need to encode this into but in for" }, { "start": 729.08, "end": 733.48, "text": " simplicity let's say a single vector that's thing that this thing here this" }, { "start": 733.48, "end": 745.72, "text": " is the state encoding and then you can employ a 
policy and a value network in" }, { "start": 745.72, "end": 749.6, "text": " order to do reinforcement learning so the side on the right this comes from" }, { "start": 749.6, "end": 754.36, "text": " standard RL you have a policy network and a value network and they do I" }, { "start": 754.36, "end": 760.4, "text": " believe PPO with it this is a standard reinforcement learning architecture" }, { "start": 760.4, "end": 766.84, "text": " it's an actor critic architecture so the value net is simply telling you what's" }, { "start": 766.84, "end": 771.4399999999999, "text": " the value of the state that you're in now given that you have a state embedding" }, { "start": 771.4399999999999, "end": 776.72, "text": " it simply takes a fully connected layer to transform this into a single float" }, { "start": 776.72, "end": 781.24, "text": " that is the value network the policy is a bit different because usually in" }, { "start": 781.24, "end": 784.4, "text": " reinforcement learning you just have a list of actions right you just say I" }, { "start": 784.4, "end": 792.16, "text": " have these 16 buttons on my controller you compress them here we if you run if" }, { "start": 792.16, "end": 798.4, "text": " you look at this chip from above we have a question where do you want to place" }, { "start": 798.4, "end": 805.12, "text": " the next thing so in order to do that we take this embedding of the state and" }, { "start": 805.12, "end": 811.24, "text": " they run it through a series of D convolutions and the D convolutions they" }, { "start": 811.24, "end": 818.2, "text": " have the ability to basically up sample an image so you see here you transform" }, { "start": 818.2, "end": 824.88, "text": " this vector into a 4 by 4 by 32 tensor and that gets D convolved into more and" }, { "start": 824.88, "end": 830.96, "text": " more though less and less channels but more and more height and width images so" }, { "start": 830.96, "end": 836.16, "text": " it kind of from a vector it produces an image right here you might recognize" }, { "start": 836.16, "end": 842.28, "text": " this from a lot of generator architectures for when you make GANs for" }, { "start": 842.28, "end": 849.68, "text": " images have exactly this D convolution architecture so as I said pretty cool" }, { "start": 849.68, "end": 853.68, "text": " it pulls in kind of architectures and methods from different fields we already" }, { "start": 853.68, "end": 860.8, "text": " have reinforcement learning now we have generators for images now so you come up" }, { "start": 860.8, "end": 866.92, "text": " with an image and basically this image if you can imagine this from above is" }, { "start": 866.92, "end": 871.1999999999999, "text": " discretized so you can place the thing you have to place pretty much every" }, { "start": 871.1999999999999, "end": 878.92, "text": " single nanometer but they discretize this into a grid and for each point in" }, { "start": 878.92, "end": 885.4, "text": " the grid the network outputs a number so the number maybe nine here three four" }, { "start": 885.4, "end": 893.0799999999999, "text": " and so on eight right here so for each of these they it outputs a number where" }, { "start": 893.0799999999999, "end": 897.8, "text": " it would or how much it would like to place the next thing at this particular" }, { "start": 897.8, "end": 903.56, "text": " location so this is a distribution over locations so the first thing you have to" }, { "start": 903.56, "end": 906.84, "text": " do is you have to mask out where there 
are already things we said the first" }, { "start": 906.84, "end": 911.16, "text": " condition is things cannot be on top of other things so maybe you already have" }, { "start": 911.16, "end": 914.04, "text": " placed something here so you ignore those numbers and you have already" }, { "start": 914.04, "end": 918.12, "text": " placed something here so you ignore those numbers as well this is this" }, { "start": 918.12, "end": 922.8, "text": " masking operation right here and then you simply look at where is your highest" }, { "start": 922.8, "end": 929.0799999999999, "text": " number and maybe there's maybe there's an 11 down here somewhere say ah this is" }, { "start": 929.0799999999999, "end": 934.14, "text": " my highest number okay cool and you look at what you need to place maybe the" }, { "start": 934.14, "end": 939.68, "text": " thing you need to place looks like this and you say all right I will place maybe" }, { "start": 939.68, "end": 947.76, "text": " the 11 marks the top left corner I will place it right here okay and then you do" }, { "start": 947.76, "end": 953.2399999999999, "text": " the same thing again for the next piece so the next piece you would simply also" }, { "start": 953.2399999999999, "end": 959.3199999999999, "text": " mark this to be blue so you can't place here you evaluate your network again of" }, { "start": 959.3199999999999, "end": 963.8399999999999, "text": " course you'll have a new shape something like this and then you ask the network" }, { "start": 963.8399999999999, "end": 967.64, "text": " where would you like to place this and you do this step by step by step until" }, { "start": 967.64, "end": 976.28, "text": " the entire netlist is empty so this is how we do the reinforcement learning" }, { "start": 976.28, "end": 981.4399999999999, "text": " this is how we decide on an action but how do we actually put the state into" }, { "start": 981.4399999999999, "end": 988.56, "text": " this encoding now this pulls in yet another framework from another field of" }, { "start": 988.56, "end": 994.28, "text": " deep learning namely graph convolutional neural networks so since the netlist is" }, { "start": 994.28, "end": 1003.92, "text": " a graph right the netlist is again if you have your wow this is slow today if" }, { "start": 1003.92, "end": 1008.88, "text": " you have your netlist right here with the part right the shape or size" }, { "start": 1008.88, "end": 1017.12, "text": " whatever and the list of things it needs to be connected to then this forms" }, { "start": 1017.12, "end": 1022.4, "text": " naturally a graph so you can transform this into a graph with the things that" }, { "start": 1022.4, "end": 1027.84, "text": " need to be connected connected by an edge and they run a graph convolutional" }, { "start": 1027.84, "end": 1032.92, "text": " network across that now in a graph convolutional network you're trying to" }, { "start": 1032.92, "end": 1040.72, "text": " take a graph like this and have embeddings for the edges and the" }, { "start": 1040.72, "end": 1048.5, "text": " vertices so ultimately you want what's called a graph embedding in order to do" }, { "start": 1048.5, "end": 1053.92, "text": " that you need to propagate information along the graph usually as we said this" }, { "start": 1053.92, "end": 1059.68, "text": " is done during a graph convolution if you are in machine learning for a while" }, { "start": 1059.68, "end": 1065.48, "text": " longer you might remember also things like conditional random fields or" }, { "start": 
1065.48, "end": 1072.88, "text": " generally graphical methods that were once popular and are kind of a precursor" }, { "start": 1072.88, "end": 1078.72, "text": " to this so the way they do it is they do it in an iterative fashion they have" }, { "start": 1078.72, "end": 1090.7600000000002, "text": " multiple so they say this right here so how do they embed a graph they have" }, { "start": 1090.7600000000002, "end": 1095.6000000000001, "text": " nodes in the graph as we saw before I'm going to draw this one again so this is" }, { "start": 1095.6000000000001, "end": 1102.6000000000001, "text": " maybe vi vj and vk now these represent the pieces in the netlist that you have" }, { "start": 1102.6, "end": 1109.6399999999999, "text": " to place so for each of those it has a bunch of features right so the features" }, { "start": 1109.6399999999999, "end": 1120, "text": " might be its size it's I believe they they have them somewhere here its size" }, { "start": 1120, "end": 1125.6799999999998, "text": " maybe how much power it uses and also its x and y coordinates if it is already" }, { "start": 1125.6799999999998, "end": 1131.2199999999998, "text": " placed right so you start with a vector like this and then you iteratively do" }, { "start": 1131.22, "end": 1140.64, "text": " the following thing you compute edge edge features by running first these" }, { "start": 1140.64, "end": 1149.46, "text": " things so this is vi and vj for an for each edge you take its nodes run them" }, { "start": 1149.46, "end": 1155.64, "text": " through this fully connected layer so you embed the features of the nodes you" }, { "start": 1155.64, "end": 1160.52, "text": " concatenate them and you run it through another neural network layer and that's" }, { "start": 1160.52, "end": 1168.2, "text": " how you get embeddings for edges and then you update the embeddings for the" }, { "start": 1168.2, "end": 1172.96, "text": " nodes again by taking the mean embeddings of the edges so you do this" }, { "start": 1172.96, "end": 1177.48, "text": " in an iterative fashion first you compute the edges from the nodes and" }, { "start": 1177.48, "end": 1185.32, "text": " then you compute the nodes from the edges and so on right so this means that" }, { "start": 1185.32, "end": 1190.2, "text": " information can now propagate through the graph so information from this thing" }, { "start": 1190.2, "end": 1195.8400000000001, "text": " propagates into this edge embedding and then in the next step that will" }, { "start": 1195.8400000000001, "end": 1202.56, "text": " propagate into this and then that can propagate into this EJK and this is the" }, { "start": 1202.56, "end": 1206.96, "text": " same as if you're used to yeah something like a conditional random fields over" }, { "start": 1206.96, "end": 1215.32, "text": " time if you have a big graph like this the information from any particular node" }, { "start": 1215.32, "end": 1220.9199999999998, "text": " will kind of propagate out throughout the graph and at some point you can sort" }, { "start": 1220.9199999999998, "end": 1226.6, "text": " of reach an equilibrium where everyone everyone in the graph knows about" }, { "start": 1226.6, "end": 1236.1599999999999, "text": " everyone else I have not found how many times they do it they simply say we" }, { "start": 1236.1599999999999, "end": 1241.32, "text": " repeatedly perform the following updates maybe that's somewhere and I just" }, { "start": 1241.32, "end": 1246.6, "text": " haven't read it closely enough but also I don't haven't 
seen whether or not they" }, { "start": 1246.6, "end": 1251.24, "text": " then back propagate through this through these multiple updates or whether they" }, { "start": 1251.24, "end": 1259.8, "text": " just back prop through one of them not entirely sure but ultimately they get" }, { "start": 1259.8, "end": 1266.12, "text": " embeddings these edge embeddings out of this graph and they simply take the mean" }, { "start": 1266.12, "end": 1272.12, "text": " to get the graph embeddings and that goes into their state embeddings right" }, { "start": 1272.12, "end": 1279.8, "text": " along with that they also have the macro embeddings which are the nodes here the" }, { "start": 1279.8, "end": 1284.9599999999998, "text": " things to be placed along with the current macro ID this is which one do you" }, { "start": 1284.9599999999998, "end": 1290.28, "text": " need to place right now so this comes out of these are the two things out of" }, { "start": 1290.28, "end": 1295.32, "text": " graphs vertices and edges and then which one you need to place right now it's" }, { "start": 1295.32, "end": 1303.48, "text": " pretty important right so you take the ones the one that you need to place" }, { "start": 1303.48, "end": 1307.6, "text": " this also goes into your embedding and then you have some metadata about the" }, { "start": 1307.6, "end": 1313.9199999999998, "text": " netlist like how many things there are and so on and this is also embedded" }, { "start": 1313.9199999999998, "end": 1318.1599999999999, "text": " using a fully connected layer all of that goes into your embedding right so" }, { "start": 1318.1599999999999, "end": 1322.32, "text": " your embedding will contain all of this information if you've done a good job" }, { "start": 1322.32, "end": 1331.1599999999999, "text": " and if you train it correctly so this is the model now they do pre train this" }, { "start": 1331.1599999999999, "end": 1336.72, "text": " encoder part right here and the encoder part it's also kind of circular first of" }, { "start": 1336.72, "end": 1343.4399999999998, "text": " all they just generate a giant list so they take this chip here and they just" }, { "start": 1343.4399999999998, "end": 1348.3999999999999, "text": " run a policy network that is maybe not super optimal but they just run it a" }, { "start": 1348.4, "end": 1353.2800000000002, "text": " bunch of times in intermediate states and they pre train the encoder to" }, { "start": 1353.2800000000002, "end": 1360.1200000000001, "text": " predict the final reward for each of these placements or sorry the the the" }, { "start": 1360.1200000000001, "end": 1366.6000000000001, "text": " wire length and congestion and so on and that pre trains the encoder but" }, { "start": 1366.6000000000001, "end": 1370.3200000000002, "text": " ultimately you can train this with reinforcement learning you can now let" }, { "start": 1370.3200000000002, "end": 1374.76, "text": " it try to solve this board over and over and over and over and it will get better" }, { "start": 1374.76, "end": 1383.32, "text": " over time all right the last thing they do is they do transfer learning now" }, { "start": 1383.32, "end": 1389.76, "text": " finding a better architecture for a single board is already better and" }, { "start": 1389.76, "end": 1395.76, "text": " faster than the humans but what is cool is that if you have now trained on this" }, { "start": 1395.76, "end": 1403, "text": " one particular board sorry with one particular netlist where was it right" }, { "start": 1403, "end": 
1408.28, "text": " we've we've now trained on this particular netlist this was this was our" }, { "start": 1408.28, "end": 1415.64, "text": " problem and we've solved that we have a great solution can we now when we get a" }, { "start": 1415.64, "end": 1420.84, "text": " netlist another netlist so here is netlist 2 right it's maybe a bit" }, { "start": 1420.84, "end": 1427.56, "text": " different so this one is more longer and this one is here and so on what if so we" }, { "start": 1427.56, "end": 1432.88, "text": " would have to start again from scratch a training reinforcement learning" }, { "start": 1432.88, "end": 1438.2, "text": " agent on the netlist too so maybe a RL agent trained on chess if we now wanted" }, { "start": 1438.2, "end": 1444, "text": " to play go you know we need to start over again but they try to just" }, { "start": 1444, "end": 1450.1000000000001, "text": " transfer this to the new one and astonishingly enough if you train the" }, { "start": 1450.1000000000001, "end": 1456.0400000000002, "text": " same RL agent not only on one netlist but on a set of netlists and the biggest" }, { "start": 1456.0400000000002, "end": 1462.0400000000002, "text": " set they have is 20 so their data set size is 20 imagine how small this is" }, { "start": 1462.04, "end": 1467.8799999999999, "text": " compared to supervised learning but maybe think of this like you train on" }, { "start": 1467.8799999999999, "end": 1474.44, "text": " 20 Atari games and then it will play the 21st one much better than if you started" }, { "start": 1474.44, "end": 1481.24, "text": " from scratch interestingly though even zero shot embeddings tend to be pretty" }, { "start": 1481.24, "end": 1486.28, "text": " good so they don't optimize for the new thing at all and it's already better you" }, { "start": 1486.28, "end": 1493.68, "text": " can see that here so if you train a policy from scratch then you this here" }, { "start": 1493.68, "end": 1501.8799999999999, "text": " then it takes a long time but if you fine-tune a pre-trained policy it's much" }, { "start": 1501.8799999999999, "end": 1509.24, "text": " shorter and interestingly enough at the beginning it is already better than the" }, { "start": 1509.24, "end": 1514.44, "text": " policy from scratch that means the knowledge from one chip transfers over" }, { "start": 1514.44, "end": 1518.76, "text": " to the other chip so the problems are sufficiently close and that basically" }, { "start": 1518.76, "end": 1524.68, "text": " means that if we now want to design a new AI chip not only are we better" }, { "start": 1524.68, "end": 1530.8400000000001, "text": " because of RL we're also faster because we can transfer learn and they show here" }, { "start": 1530.8400000000001, "end": 1536.0800000000002, "text": " that this effect basically appears when you have a large enough data set and" }, { "start": 1536.0800000000002, "end": 1541.8, "text": " again large here is just 20 blocks here you see one of these placements on the" }, { "start": 1541.8, "end": 1545.72, "text": " left the zero shot placement on the right and fine-tuned on that particular" }, { "start": 1545.72, "end": 1550.68, "text": " architecture obviously me being an expert in chip placement is it clearly" }, { "start": 1550.68, "end": 1559.12, "text": " obvious that both are extremely good and yes though actually more funny I find" }, { "start": 1559.12, "end": 1565.8, "text": " this one where they compare human experts to what their approach is and it" }, { "start": 1565.8, "end": 1570.24, 
"text": " says the figures are intentionally blurred as the designs are provided like" }, { "start": 1570.24, "end": 1574.8, "text": " why do you put them that clearly I can't even couldn't even judge if they're" }, { "start": 1574.8, "end": 1582.52, "text": " super crisp I yeah all right I guess it's their trade secret so they compare" }, { "start": 1582.52, "end": 1586.8, "text": " this with the standard algorithms for these things and not only are they" }, { "start": 1586.8, "end": 1595.84, "text": " faster they are they also better on the metrics yeah overall as I said I find" }, { "start": 1595.84, "end": 1600.24, "text": " this to be a pretty cool work that pulls in a lot of things from a lot of" }, { "start": 1600.24, "end": 1605.8799999999999, "text": " different fields at one point they say we propose a novel graph convolutional" }, { "start": 1605.8799999999999, "end": 1611.1599999999999, "text": " architecture I'm not sure that it is novel maybe it's novel for this problem" }, { "start": 1611.1599999999999, "end": 1614.6399999999999, "text": " but I'm pretty sure graph convolutional networks and things like this have been" }, { "start": 1614.6399999999999, "end": 1620.04, "text": " around for a while but again it pulls together things from many different" }, { "start": 1620.04, "end": 1627.92, "text": " fields and applies them very well very well engineered paper and a step towards" }, { "start": 1627.92, "end": 1636.28, "text": " the singularity as now AI can design AI accelerators how amazing yeah humanity" }, { "start": 1636.28, "end": 1639.52, "text": " is doomed all right I invite you to check out this paper if you're still" }, { "start": 1639.52, "end": 1645.52, "text": " here please subscribe leave a like and a comment and I'll see you next time bye" }, { "start": 1645.52, "end": 1650.6399999999999, "text": " bye" } ]
B9PL__gVxLI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
DeepMind's AlphaFold 2 Explained! AI Breakthrough in Protein Folding! What we know (& what we don't)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "deepmind", "deep mind", "alphago", "alphazero", "alphafold", "protein", "dna", "rna", "folding", "casp", "casp14", "alphafold 2", "blog", "hassabis", "biology", "translation", "amino acid", "transformer", "convolution", "residual", "spatial graph", "refine", "gradient descent", "van der waals", "torsion angles", "google ai", "google brain", "nobel prize", "msa", "multiple sequence alignment", "covariation", "evolution", "contact prediction", "distogram" ]
#deepmind #biology #ai This is Biology's AlexNet moment! DeepMind solves a 50-year old problem in Protein Folding Prediction. AlphaFold 2 improves over DeepMind's 2018 AlphaFold system with a new architecture and massively outperforms all competition. In this Video, we take a look at how AlphaFold 1 works and what we can gather about AlphaFold 2 from the little information that's out there. OUTLINE: 0:00 - Intro & Overview 3:10 - Proteins & Protein Folding 14:20 - AlphaFold 1 Overview 18:20 - Optimizing a differentiable geometric model at inference 25:40 - Learning the Spatial Graph Distance Matrix 31:20 - Multiple Sequence Alignment of Evolutionarily Similar Sequences 39:40 - Distance Matrix Output Results 43:45 - Guessing AlphaFold 2 (it's Transformers) 53:30 - Conclusion & Comments AlphaFold 2 Blog: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology AlphaFold 1 Blog: https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery AlphaFold 1 Paper: https://www.nature.com/articles/s41586-019-1923-7 MSA Reference: https://arxiv.org/abs/1211.1281 CASP14 Challenge: https://predictioncenter.org/casp14/index.cgi CASP14 Result Bar Chart: https://www.predictioncenter.org/casp14/zscores_final.cgi Paper Title: High Accuracy Protein Structure Prediction Using Deep Learning Abstract: Proteins are essential to life, supporting practically all its functions. They are large complex molecules, made up of chains of amino acids, and what a protein does largely depends on its unique 3D structure. Figuring out what shapes proteins fold into is known as the “protein folding problem”, and has stood as a grand challenge in biology for the past 50 years. In a major scientific advance, the latest version of our AI system AlphaFold has been recognised as a solution to this grand challenge by the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP). This breakthrough demonstrates the impact AI can have on scientific discovery and its potential to dramatically accelerate progress in some of the most fundamental fields that explain and shape our world. Authors: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Kathryn Tunyasuvunakool, Olaf Ronneberger, Russ Bates, Augustin Žídek, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Anna Potapenko, Andrew J Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Martin Steinegger, Michalina Pacholska, David Silver, Oriol Vinyals, Andrew W Senior, Koray Kavukcuoglu, Pushmeet Kohli, Demis Hassabis. 
Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
It will change everything. DeepMind solves 50-year-old grand challenge. The game has changed. DeepMind's latest AI breakthrough achieves historic new milestone, helps solve how diseases invade cells, improves protein folding prediction. AI breakthrough — it also wipes your butt automatically. It is the newest DeepMind big publication. Actually, it's not a publication yet. So what happened — and I'm sure you've heard this — is that every year there is this competition for protein folding prediction. Proteins are structures that fold in a given way, and we'll go into that in a bit. Basically, every year there is this competition, and the results of this year's competition came out, and they looked something like this: every entry you see here is a team participating in that competition, and there is one team, DeepMind's system AlphaFold 2, which completely dominates all the others, to the point where the problem is now considered to be solved. "Solved" in this case simply means that you're past a certain number on this test set, and if you're past that number, your predictions are useful enough that other scientists can take them and base work on them. That's what it means for this protein folding problem to be solved. Now, we don't have much information on AlphaFold 2 yet, other than that it's really good — plus a blog post and a bunch of advertisement videos by DeepMind. They are writing a paper on it. But today I want to go into this blog post and parse out what we can gather from it, and I also want to go through the AlphaFold 1 paper. As you can see, the performance increased drastically with AlphaFold 2, but the guess is that the system is going to be somewhat similar to AlphaFold 1, of which we do have a paper. So today we'll go into AlphaFold 1 and into some speculations about AlphaFold 2. I can already give you my speculation: it's transformers. It's attention that all of a sudden made this big jump, together with probably a few other improvements to the AlphaFold 1 system. Basically, transformers continuing to dominate the entire field. So where do we start? By the way, if this is not a great meme template, I don't know what is. Just saying. So let's actually start with the problem itself. I realize if you're here, you're probably a machine learning person and might not know too much about protein folding. These things here are computer representations of proteins — they don't really look that way, but sort of similar. A protein essentially is a chain of amino acids. So what is an amino acid? Amino acids are the basic building blocks of life. Proteins are what make the cell do things — they are sort of the workers in the cell. They are used as signaling molecules and receptors; they are parts of your muscles — actually, the parts that move are proteins. They are the work-doers: whenever something needs to do mechanical or chemical work in a cell, proteins are involved, and amino acids are the building blocks of proteins. Each amino acid has a certain common structure, and there are 21 of them, so all the proteins in the world are simply made out of chains of these 21 amino acids. These chains form because each amino acid has this sort of body that can link up to the bodies of other amino acids.
It's very similar to how DNA is structured, if you know that — except in DNA there are four different bases, and here there are 21 amino acids. And each amino acid is a little bit different: each has a tail that hangs off. The tail can look like this, or like that, like a side chain; there's one that's maybe cyclic, I'm not sure; and one can have no tail at all — I think that's the case for glycine. The important part is that depending on this tail, the chemical properties of the amino acids are different. And then what happens next is really interesting. Once this amino acid chain is built — so this is the central dogma of modern biology: you have DNA, the DNA is transcribed, read off and copied, to RNA, which is sort of a DNA clone, and then the RNA is translated into the amino acid chain, with three bases always mapping to one amino acid. This is very much like a compiler. Notably, the interesting part is that these compilation steps are themselves done by proteins. So nature, in a very real sense, is its own compiler: this here you can see as the binary, and this here as the source code. I'll put a little toy sketch of that translation step right below. But what happens once you build this chain of amino acids and set it out into the cell? Because of the different properties of these side chains — they're also called residues — the chain begins to fold. If you know a bit of chemistry: these are atoms linked with covalent bonds, and it can be that one part of the chain is electrically negatively charged while another part is positively charged, in a given place over a given other place, depending also on the surrounding medium. That means that, in this case for example, these two parts will attract. So if you release this amino acid chain, what you get is a bend, where the chain folds such that this tail goes here and that tail goes there. (I don't even know what to call these — pyrene rings or something like this; if there is no amino acid with that, I apologize.) The point is that the two parts attract and form this shape, and this shape is very important. Proteins can consist of hundreds, thousands, tens of thousands of these amino acids in a chain, and a protein's function is, interestingly, largely determined by its structure — by its 3D structure, not necessarily by the actual amino acids. Technically, you can substitute amino acids for each other: this amino acid could be substituted for another one that isn't the same but whose side chain has the same properties, such that if the structure stays the same, the protein performs the same function. That is a very special property of proteins: their 3D structure largely determines their function.
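As promised, here is a toy illustration of that "compilation" step — three bases to one amino acid. This is not anything from the paper; the codon table below is just a tiny illustrative subset of the real genetic code, which has 64 codons.

# Toy illustration of translation: three bases map to one amino acid.
CODON_TABLE = {
    "AUG": "M",  # methionine, the usual start codon
    "UUU": "F", "GGC": "G", "UGC": "C",
    "UAA": "*",  # "*" = stop codon
}

def translate(rna: str) -> str:
    """Read the RNA three bases at a time and emit one-letter amino acids."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE.get(rna[i:i + 3], "?")
        if aa == "*":  # a stop codon terminates the chain
            break
        protein.append(aa)
    return "".join(protein)

print(translate("AUGUUUGGCUGCUAA"))  # -> "MFGC"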
For example, consider the DNA itself: it's this double strand of connected base pairs, and there is this step of DNA replication, where you copy the DNA in mitosis. In order to do that, you need to split the two strands apart, because a protein needs to get in there to actually read the DNA off. For that, there is a specific protein that inserts itself right here to split up the DNA, called a helicase. And it is very important how that protein is shaped: the shape needs to be such that it can pry these bonds apart. So the shape is very, very important for a protein, and conceivably you could build a helicase from many, many different amino acid sequences, as long as it has the same shape. Now, I think something as fundamental as a helicase is probably conserved in the evolutionary tree, but I hope you get the point: the shape is super duper important. And the shape isn't just arbitrary. The amino acid chain itself is called the primary structure. The first thing that happens is that two very distinct kinds of sub-shapes appear — often-repeating shapes. These, I think, are called alpha helices — this is a helix — and these long sheet-like ones are, I think, called beta strands. These are often-repeated sub-sequences. And then the tertiary structure is when the whole thing starts to fold in on itself and gives itself the final structure. This here is part, I guess, of the RNA polymerase, the molecule that reads DNA and outputs RNA — and there are many, many, many proteins. Now, since the shape is so important, it is vital that we know it. And this is why they say the problem is 50 years old: 50 years ago, a Nobel laureate said the following. Since a protein is fully determined by its amino acid chain, and since the amino acid chain determines the structure it is going to take because of these chemical properties, it should be possible to read in the amino acid sequence — or the DNA sequence, since we know which amino acid sequence results from it — and output the shape of the protein. However, this turned out to be an extremely complicated problem, because the interactions are very subtle and not always the same. Somewhere out here there could be some amino acid with some weird side chain, and the whole thing folds on itself all the time, so at some point those come into contact and change the local properties. So this is a very, very difficult problem to solve, and people have tried for a long time — and now, apparently, DeepMind has the first system that does it to such a satisfaction that it's beneficial. So how was shape determination done so far? You had to determine it experimentally: take these proteins, crystallize them, shoot X-rays at them, and infer the structure from the patterns. You can do that with crystallized proteins because crystals are very regular accumulations of the protein.
It's like a snowflake: if we knew nothing about the water molecule — that it's H2O — we could still look at a snowflake and determine its structure, these specific angles, from it. We would just look at snowflakes, and if someone told us they're all the same material, all water, we could infer what the water molecule looks like just by analyzing them, because they're crystals. It's pretty much the same here: you make crystals out of these proteins, you shoot X-rays at them, and then you reason over the patterns that come out. This is very difficult and very expensive, so solving this problem computationally is super important. Now, we'll get to this graphic in a minute — it's sort of the only thing we know about AlphaFold 2 right now, because they have not yet released the paper or any description of the model, as I said. So what we'll do is go into AlphaFold 1. AlphaFold 1 participated in the same competition two years ago and was already dominant there, but not yet dominant to the point of having, quote-unquote, solved the problem — just better than the other systems. So this is the basic structure of AlphaFold 1; let's give ourselves an overview. There are two stages to this algorithm: stage one is over here, and stage two is over here. Maybe it's easiest to start from the interface between them: the output of stage one is this thing right here, a distance and torsion distribution prediction — this matrix that's kind of tilted on its side. What you do in stage one is take the amino acid sequence and line it up against itself, here and here. (A protein is a single chain of these amino acids — there can be multiple chains in a bigger protein conglomerate, but each one is a single chain.) So we're building a pairwise matrix between the sequence and itself, and this pairwise matrix is going to be a distance matrix. We input some features about this sequence of amino acids — that's what we get as input — and we predict, for every pair of residues, how far apart they are. On the diagonal the answer is of course zero: each residue is zero apart from itself. But you might say these two are five apart, these two are seven apart, and these two here are only one apart — so it's reasonable that in the final structure those last two end up close together. We don't worry about the 3D placement just yet; for each pair, we just predict the distance. You can view this as a standard machine learning problem: input sequence, predict the distance matrix — below is a tiny sketch of how such a target matrix relates to 3D coordinates. In fact, here you can see the predicted matrix and the real one — I don't even remember which is which — and the system does a pretty good job at this. There are minute differences: if you look down here you can see a bit of a difference, and over here there is a bit of a difference.
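Here is the promised sketch of what such a target distance matrix is: given 3D coordinates for a chain, the L-by-L matrix of pairwise Euclidean distances. The coordinates below are made up; in practice one would take a representative atom of each residue (e.g. the carbon-beta).

import numpy as np

# Hypothetical 3D coordinates for a 5-residue chain.
coords = np.array([[0.0, 0.0, 0.0],
                   [1.5, 0.0, 0.0],
                   [3.0, 1.0, 0.0],
                   [3.0, 2.5, 1.0],
                   [1.5, 3.0, 2.0]])

# L x L matrix of Euclidean distances: diff[i, j] = coords[i] - coords[j].
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))

print(dist.shape)  # (5, 5): zero on the diagonal, symmetric
print(dist[0, 1])  # 1.5 -- neighbors in the chain are close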
But in general, the system does a pretty good job. So the output of stage one is this matrix — plus some other things, like the torsion angle distributions, but the main thing is the pairwise distances. That's what you take as input to stage two. Stage two builds a model of the molecule, a differentiable geometric model. They say: we parameterize protein structures by the backbone torsion angles of all residues and build a differentiable model of protein geometry to compute the coordinates of all residues, and thus the inter-residue distances. (I don't get these Nature papers — they're split into two parts that largely say the same things; I'm absolutely confused by them, so we're going to jump around a fair bit.) So essentially they build a computer model of these amino acids, parameterized by the torsion angles. A torsion angle is simply the angle between consecutive pieces: this would be a torsion angle of 180 degrees, and if it folds like this, 90 degrees, and so on. You need two torsion angles per residue because you're in 3D, but essentially the torsion angles determine the structure of the protein — it's one way of parameterizing it. Now, the important thing is that they don't do any learning with this differentiable model. Its purpose is that, if you have a differentiable model, you can run gradient descent through it. They pretty much lay it out right here: x is the output of your differentiable geometry — of your torsion angles; call them phi and psi. x goes into the loss function, which simply compares x to the x you predicted in stage one. We start off from some initialization — they actually also predict torsion angles directly and initialize from those, but let's just say we start from a flat chain. Because everything is differentiable, with the loss L = (x − x′)² we can take the derivative of the loss with respect to each torsion angle. So now we know how to change each angle to make the loss smaller: maybe it says, you need to turn this one down, make the angle smaller — we do that, okay, cool, now it's only 90 degrees — and we do it again and again and again. By changing all the angles so that this loss gets smaller, step by step we replicate, in our computer model, the process that happens in nature: what we feed in is how far any two amino acids should be apart, and by running gradient descent — just gradient descent on the torsion angles — we figure out what the angles need to be to make that happen. So first we predict all the distances, and then we figure out how to set the angles so that those distances are fulfilled.
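Here is a minimal toy version of that second stage. To be clear, this is not their geometry — a real backbone needs two torsion angles per residue in 3D plus proper bond lengths and angles — but it shows the trick: make the geometry differentiable, then backpropagate a distance-matching loss into the angles. The 2D unit-length chain and all the hyperparameters below are invented for illustration.

import torch

def pairwise_dist(x):
    # squared differences plus a small eps keep the sqrt differentiable on the diagonal
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return torch.sqrt(d2 + 1e-8)

def chain_coords(angles):
    """Toy differentiable geometry: a 2D chain of unit-length links whose joint
    angles are the parameters (a stand-in for the backbone torsion angles)."""
    headings = torch.cumsum(angles, dim=0)  # absolute direction of each link
    steps = torch.stack([torch.cos(headings), torch.sin(headings)], dim=1)
    return torch.cumsum(steps, dim=0)       # positions of the residues

torch.manual_seed(0)
L = 16
true_angles = torch.rand(L) * 2.0 - 1.0            # some "folded" ground-truth shape
target = pairwise_dist(chain_coords(true_angles))  # stand-in for stage one's predicted distances

angles = torch.zeros(L, requires_grad=True)        # initialize from a flat chain
opt = torch.optim.Adam([angles], lr=0.05)
for _ in range(500):
    x = chain_coords(angles)
    loss = ((pairwise_dist(x) - target) ** 2).mean()  # realized vs. predicted distances
    opt.zero_grad()
    loss.backward()   # d loss / d angle, through the geometry
    opt.step()

print(loss.item())  # should end up small: the chain's angles now realize the target distances

Note that a mirror image of the chain satisfies the same distances, which is fine here; distances alone don't pin down chirality.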
Remember, these are not true distances — they're predicted distances, so everything depends on how well we can predict them. But once we have them, we can replicate in our computers the process as it happens in nature. Except that in nature the whole folding is driven by all these chemical interactions, and here we do none of that: we simply ask how we need to fold in order to make the distances in our computer model — between this residue and this one, and any other pair — agree with the distances we predicted. And you can see that as you run gradient descent, this TM score goes up and the root mean square distance goes down. If you have a test set of structures that people have already figured out, you can compute these metrics and see that you do indeed get the correct folding. It's also pretty interesting to watch the secondary structure: the helices in blue and the strands in red. From a partially folded structure you can already see the substructures emerge — this is a helix, as you can see, and this is maybe a strand — and there are ways to heuristically classify that. If you look at the database entry, this here is a strand, these are helices, this is a strand, and so on. At the beginning, the model doesn't get many of them right, though it does get some, but over time it refines its guesses until at the end it pretty much matches the true sample. And here is simply the distribution of, I guess, confidence about these things, and the torsion angles. So this two-step process is the key here. Now, AlphaFold 2 conceivably changes this a little bit — but again, we're not sure. Step one is the deep learning system; step two is simply a gradient descent procedure that you run at inference time — at training time you can just do step one. So step one is the machine learning bit, and the goal is to output this distance tensor (and there are more things than distances, as we said — torsion angles and so on — but ultimately you want the distance matrix). How do they do it? You can already see it's a deep neural network. You build an input of size L by L, sequence length by sequence length. You don't know the distances yet, but you can collect features that are pairwise between two positions: maybe this one is, I don't know, leucine, and this one is a different amino acid, glycine, and in here you put features for that pair — maybe leucine is at the 100th position in this particular protein and the glycine at the 90th, so positional features; or correlation statistics between these two amino acids in general. You can even put in just single per-position features.
These tiled "L by 1" features are features of the sequence itself, not pairwise features, and what you do is simply replicate them along a given dimension — you always put the same features in every row or every column. This is very common in conv nets. You can even use scalar features: you simply fill an entire plane with that one number. It's just easier to do it like this, because it fits into the convolutional architecture. (A tiny sketch of this tiling follows at the end of this paragraph.) So you provide all kinds of features — and the features they provide are plentiful, and a lot of them introduce domain expertise — and once they have that, they take this image with many, many channels and predict the output image. It's just an image-to-image translation problem, done via a convolutional neural network with 220 residual convolutional blocks. (I assume most viewers of this video are familiar with convolutional neural networks; if not, I'm deeply sorry, but we won't go into that.) They tile this tensor, and during training they tile it differently from instance to instance, as a form of data augmentation; ultimately you slide over this image with a 64-by-64 conv net and produce the image on the right. Here you can see an inherent weakness of these approaches: the network can only ever look at 64 amino acids at a time. Say, for illustration, it were 3 by 3 instead of 64 by 64. On the diagonal, you would consider three amino acids and their any-to-any interactions with each other. Off the diagonal, you would consider these three amino acids and those three, and only the interactions between the two groups — not within each group. So what the network can look at at any point in time is very limited, and the distances it outputs cannot directly depend on, say, this amino acid way over here — you always have this limited, local view of your protein. Now, people argue that's actually enough: to establish, say, the green connections right here, what matters most is the vicinity of this amino acid, the immediate vicinity of that one, and of course the interaction between those two vicinities. But it is quite conceivable that this green thing down here, being so close, actually pushes the two apart — an interaction which, in my understanding, would not be covered by a system like this. And that, I believe, is one point where AlphaFold 2 makes the big gains that it does.
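As promised, a tiny sketch of the feature tiling: genuinely pairwise features, per-residue features replicated along each axis, and a scalar smeared over a whole plane, all concatenated into one L-by-L "image" for the conv net. The channel counts here are made up; the real input has hundreds of channels.

import numpy as np

L = 64                                   # sequence length
pair_feats = np.random.rand(L, L, 10)    # genuinely pairwise features (e.g. MSA statistics)
seq_feats = np.random.rand(L, 5)         # per-residue ("L by 1") features
scalar = 0.37                            # a single scalar feature, e.g. some global property

# Tile the per-residue features along both axes, so every (i, j) cell
# carries the features of residue i and of residue j.
tiled_i = np.broadcast_to(seq_feats[:, None, :], (L, L, 5))
tiled_j = np.broadcast_to(seq_feats[None, :, :], (L, L, 5))

# Fill an entire plane with the scalar, as described above.
scalar_plane = np.full((L, L, 1), scalar)

x = np.concatenate([pair_feats, tiled_i, tiled_j, scalar_plane], axis=-1)
print(x.shape)  # (64, 64, 21): one multi-channel image the 2D conv net can slide over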
Now, one of the more interesting features is this MSA, the multiple sequence alignment. They introduce it like this: in recent years, the accuracy of structure predictions has improved through the use of evolutionary covariation data found in sets of related sequences. Sequences similar to the target sequence are found by searching large data sets of protein sequences derived from DNA sequencing and aligned to the target sequence to generate a multiple sequence alignment. Correlated changes in the positions of two amino acid residues across the sequences of an MSA can be used to infer which residues might be in contact. I've searched out one of the underlying papers, called "Improved contact prediction in proteins: Using pseudolikelihoods to infer Potts models." The entire basis is this: here is your chain of amino acids that you're considering — this is you, the human. (They actually have a very similar graphic in their blog post, but we'll draw this ourselves; I'll just sort of copy it.) Each amino acid can be abbreviated by a single letter — since there are 21, and the holy alphabet creators have given us 26, that fits — so the sequence reads like S, Y, C, M, D and so on. Then you go look into your database — a database of, essentially, all of life — for similar sequences, and there are tools that can search through such databases very quickly and return sequences similar to yours, overlapping in amino acid sequence. So in the fish there is a similar sequence right here; in the — this might be a horsey... no, this is not a horse, let's make an alligator out of it — in the alligator (what sound does an alligator even make? raaa) there might be a sequence too; and so on. (My drawing skills are to be criticized in another video.) So you search for all of these similar sequences just by amino acid sequence, and from the correlations you can derive something. For example, I've already told you that sometimes you can substitute an amino acid and the function of the protein isn't really affected, and this may be what you see here: in the human this position is maybe a C, and in the fish it's a C too, but in the alligator it's a P, and in the cockroach it's a K, and so on. If the alignment is good — if these come from the same protein, or from a protein that does the same thing in these life forms, because life is continuous and these things are often preserved or slightly modified — then we can safely assume that whether there's a K or a P or a C at this particular position doesn't really matter; the shape doesn't seem to be too affected. So that's step one. Now consider this amino acid right here: whether it carries this side chain or that one maybe doesn't matter for the function of the protein. However, if you look at two parts that are in contact, what needs to happen? If my protein here has this side chain and the other part is in contact with it, that means there is a chemical interaction between the two.
So now suppose a mutation happens, and the protein still functions the same way. That must mean the shape is still roughly the same. And since structure is determined by chemical interactions, if one of the two contacting parts changed, the other one has probably changed analogously at the same time — because structure is preserved, function is preserved. Otherwise, the protein would be non-functional and the organism would sort of die. Not always, but this is kind of a statistics game. So what you would expect to see in the statistics is that if one changes, the other changes accordingly: there can be variations, there can be mutations, but a mutation in one of them should come with a corresponding mutation in the other. And this is what you see here: the fish has an S and an H, like the human, but the alligator has an F here and a W there; then in the cockroach you see the S and the H again, and down here you see the F and the W together again. This correlation is an indication that these two residues might be in contact with each other. Now, there have been systems — for example, in that paper right here — that go directly from these statistics to contact predictions; AlphaFold simply takes this stuff in as features. They derive 484 features from the multiple sequence alignment for each residue pair. (I think they say it again down here — as I said, this article is confusing: the article stops, references, the article starts again and says almost the same things, just a little more detailed, but not longer. Thanks.) So in our big L-by-L tensor, each cell already has these 484 MSA features, and then some more. In addition, they provide the network with features that explicitly represent gaps and deletions, plus scalar features: sequence-length features, amino acid type, profiles, HHblits profiles — all these comp-bio, genetics tools — and one of these acts as a positional encoding. So: lots of features in, convolutional network, distance matrix out. And that's that — from the distance matrix, you run gradient descent at inference time to get the protein structure. (Below is a toy version of the covariation signal we just discussed.)
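As a toy illustration of that covariation idea — not what the paper does: the real methods they cite fit Potts models via pseudolikelihoods to separate direct from indirect couplings — here is the crudest version, mutual information between two MSA columns. The little alignment is invented.

import numpy as np
from collections import Counter

# A toy MSA: rows are species, columns are alignment positions.
# Columns 1 and 3 co-vary (S goes with H, F goes with W), hinting at a contact.
msa = ["ASYHC",
       "ASYHC",
       "AFYWC",
       "ASYHC",
       "AFYWC"]

def mutual_information(col_a, col_b):
    """MI between two MSA columns; high MI = correlated mutations."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum(c / n * np.log((c / n) / (pa[a] / n * pb[b] / n))
               for (a, b), c in pab.items())

cols = list(zip(*msa))
print(mutual_information(cols[1], cols[3]))  # high: S<->H and F<->W change together
print(mutual_information(cols[1], cols[4]))  # zero: column 4 never varies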
They also make some pretty cool points in the analysis. They don't just output a single distance prediction per pair: they bin the distances and output a full probability distribution over the bins. In these histograms — this is for one particular row, the extraction for residue number 29 — you see the distribution over distance bins between residue 29 and residues one, two, three, and so on. The black line represents, I think, eight angstroms, which is generally considered the threshold for being in contact or not. The histograms are colored blue if the true pair is not in contact and green if it is — that's the ground truth — and the red bar marks the true distance. And you can see this is pretty accurate: whenever the ground truth is blue, the network's distribution is usually shifted to the right of the black line, and whenever it's green, the distribution is shifted to the left. There are some failure cases, as you can see right here, where the network predicts a larger distance than the truth. What's also pretty interesting is that the most accurate predictions — the highest confidence, the smallest variation in the distribution — are around the diagonal; residue 29 would be in the middle right here, and that's where you find the most accurate predictions, of course, since local distances are much easier. As you go farther away, the model gets less sure. And this is the cool part: here they plot model prediction versus true distance, which fits fairly well, and they also plot the standard deviation of the prediction. You can see that where the distance errors are bigger, the standard deviation is bigger at the same time. So there seems to be a built-in confidence metric: you can look at the standard deviation of the predicted distribution and use it as an estimate of how confident the model is in its prediction. And apparently that's something the AlphaFold 2 model relies upon very, very crucially. (A toy version of reading mean, spread, and contact probability out of such a binned distribution is below.) On the bottom you see one of these residual blocks, and more distance matrices. They do a lot of analysis in this article, which is pretty cool, so you can go into it fairly far. They also look at what the network pays attention to, and it makes a lot of sense: it pays attention to these helices, and then to the interactions between the helices and the parts they're in close contact with, and so on.
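Here is that toy sketch. The binning below is only roughly modeled on the paper (AlphaFold 1 bins distances over a range of a couple to some twenty-odd angstroms), and the "network output" is a fake peaked curve; the point is just how mean, spread, and a contact probability fall out of a binned distribution.

import numpy as np

# Hypothetical output for one residue pair: a probability distribution over distance bins.
bin_centers = np.linspace(2.0, 22.0, 40)
logits = -((bin_centers - 6.0) ** 2) / 4.0      # fake network output, peaked near 6 A
p = np.exp(logits) / np.exp(logits).sum()       # softmax over the bins

mean = (p * bin_centers).sum()
std = np.sqrt((p * (bin_centers - mean) ** 2).sum())  # spread = built-in confidence signal
contact_prob = p[bin_centers < 8.0].sum()             # 8 A as the usual contact threshold

print(f"mean={mean:.1f} A, std={std:.1f} A, P(contact)={contact_prob:.2f}")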
But now we want to go into AlphaFold 2. What we have isn't much: this graphic right here, which is also in the article, and the blog post — and the blog post is kind of a fluff piece saying they're going to publish a paper, which of course they don't have yet, because we've just gotten the results. They have these cool videos, and there are so many Twitter threads going "this is the best thing ever." I'm not usually up for the hype, and I thought, is it really up to me to be the grumpy one here? But then I couldn't find anything to be grumpy about. It's DeepMind — I expect them to maybe not fully release the code; maybe they will. With AlphaFold 1 they released about half the code, which is already pretty cool, so there are open-source implementations based on that. Again, nothing to be grumpy about. Alright, so what can we say? They say a folded protein can be thought of as a spatial graph. That's kind of a new phrase they introduce, but ultimately it's simply the distance matrix we've seen before — a representation of that spatial graph, where the residues are nodes and edges connect residues in close proximity, respectively encode how far apart they are. This graph is important for understanding the physical interactions within proteins as well as their evolutionary history. For the latest version of AlphaFold, used at CASP14 — that's this challenge — they created an attention-based neural network system, trained end-to-end, that attempts to interpret the structure of this graph while reasoning over the implicit graph that it's building. That last part sounds like it could be fluff, maybe, I don't know — but "attention-based": I'm going to guess, for sure, that they've replaced the conv net with attention layers, transformer-style. (A minimal sketch of such a layer, and why it lifts the 64-by-64 locality limit, is right below.) They say it uses evolutionarily related sequences, multiple sequence alignment, and a representation of amino acid residue pairs to refine this graph. This is what we've already seen: use the other sequences, plus a lot of statistics you can gather from the data sets on amino acid pairs, in order to develop this graph — the graph being the distance matrix, or other things, as we'll see in just a second. They say that by iterating this process, the system develops strong predictions of the underlying physical structure of the protein and is able to determine highly accurate structures in a matter of days. Additionally, AlphaFold can predict which parts of each predicted protein structure are reliable, using an internal confidence measure — again, something we've already sort of seen in AlphaFold 1. The "by iterating this process" part could mean that it's no longer just this two-stage approach, but an actually fully cycling approach that goes back to the neural network to refine the structure it's building with the gradient descent procedure. It's entirely possible. So, this is the graphic of AlphaFold 2. At the very beginning you have the protein sequence, and first you have this "embed and outer embed and outer sum," which I'm going to guess is just features for pairs and for individual amino acids — correlation statistics from your data set, chemical properties, whatever: a bunch of features attached to each amino acid in the sequence. The other path here is this "genetic search and embed" — this is what we've already seen with the MSA. I told you they have the same graphic: there's the human, there's the fishy, there's the rabbit. You simply search for similar sequences in your database — they could even be from other humans — and from those you can also derive features.
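Since my guess is attention, here is the promised minimal sketch of plain scaled dot-product self-attention — to be clear, my guess, not anything confirmed about AlphaFold 2's architecture, and the sizes are invented. The point is that every position can attend to every other position, with no 64-by-64 crop limiting the receptive field.

import torch

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a set of embeddings:
    every position mixes in information from all other positions."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

L, d = 128, 32                       # residues (or residue pairs) and embedding size
x = torch.randn(L, d)
wq, wk, wv = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, wq, wk, wv)  # (128, 32): each output row saw all 128 positions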
So here is where I'm a bit confused. You can see they build up this square matrix again — I mean, it already screamed attention before — so I'm going to guess they no longer limit themselves to the 64 by 64. Maybe they do something bigger, maybe they use local attention, who knows. I'm going to guess this here is given by attention layers of some sort — basically, I would guess this is a big transformer right here. The interesting part is that it appears to pass information around much like the original transformer, maybe encoder-decoder style. So this top thing isn't amino-acid-sequence-to-amino-acid-sequence, to itself; it appears to be a matrix built up between the amino acid sequence and the sequences you retrieved. So I would guess they are no longer happy with simply inputting the features of these algorithms that go over the other sequences — now they also want to push these features through steps of learned transformations. Again, I would guess this is an attention layer. And how can we interpret this matrix? As you can see, it relates individual amino acids in the sequence to the other species. So I would guess this square here represents something like: how important is this particular location in the chain — the purple thing in the human — in the chicken; or how related is that position to the chicken, at that particular position or as a whole? I don't know. Probably DeepMind doesn't spell it out — they probably just ship these features in and push them through transformers, passing information around. I don't know whether it's just in this direction and then in that direction, or whether there's conceivably an arrow right here as well. In any case, it seems like they've replaced what was a conv net: no longer friends with ConvNet — new best friend is transformer. And at the end, what they get out is these pairwise distances again. Now, it's also not really clear, because I would expect an arrow going like this if they again use these pairwise distances to predict the structure. I don't know whether that's just a side output; I would guess they still actually use the pairwise distances. And the confidence score — again, it might be something very similar to what we saw before, the standard deviation on the predicted distances, but they could also refine that. The last thing: I don't know if this "iterative process" simply refers to there being multiple layers of this attention and passing-around — the passing-around simply being representations stacked on top of each other — or whether the structure module actually builds part of the structure, then goes back, consults the neural network again, builds some more, and so on. I can't tell right now. It's quite conceivable that the search here is not only gradient descent but is actually informed by the neural network, so that you go back and refine — though there don't seem to be any inputs to the neural network that would represent whatever you could read off a partially built 3D model. So, you know, the boring guess is that part two is a lot of the same.
But there could also be substantial improvements in that part. Alright, I hope this was a good overview. As I said, the paper isn't out yet. If you want to cite this, I guess you can refer to the blog post, where they say: until we've published a paper on this work, please cite "High Accuracy Protein Structure Prediction Using Deep Learning" by these people. I just want to highlight — shout out to Anna, who was educated right here; she was an intern. So in a way, I'm actually saying that this is my discovery and I take full responsibility for it. You're welcome, world. Shout out to Anna, very nice job — and good work to all of these people. Yeah, I hope that was enough. If I got something horribly wrong, please tell me in the comments, and share the video out if you liked it. Other than that, have fun. Bye bye.
[ { "start": 0, "end": 11, "text": " It will change everything. DeepMind solves 50 year old grand challenge. The game has changed." }, { "start": 11, "end": 21, "text": " DeepMind's latest AI breakthrough achieves historic new milestone, helps solve how diseases invade cells," }, { "start": 21, "end": 28, "text": " improve protein folding prediction, AI breakthrough it also wipes your butt automatically." }, { "start": 28, "end": 35, "text": " It is the newest DeepMind big publication. Actually, it's not a publication yet." }, { "start": 35, "end": 47, "text": " But so what happened and I'm sure you've heard this is that every year there is this competition of protein folding prediction." }, { "start": 47, "end": 54, "text": " So proteins are the structures that fold in a given way. And we'll go into that in a bit." }, { "start": 54, "end": 63, "text": " But basically every year there is this competition and the results of this year's competition came out and they looked something like this." }, { "start": 63, "end": 71, "text": " Namely, every entry here you see is a team participating in that competition of protein folding prediction." }, { "start": 71, "end": 81, "text": " And there is one team which is DeepMind's system Alpha Fold 2, which completely dominates all the others." }, { "start": 81, "end": 86, "text": " To the point where the problem is now considered to be solved." }, { "start": 86, "end": 93, "text": " Now solved in this case simply means that you're past a certain number in this test set." }, { "start": 93, "end": 103, "text": " And if you're past that certain number, your predictions are useful enough so that other scientists can basically take them and base work on them." }, { "start": 103, "end": 108, "text": " So that's what it means for this protein folding problem to be solved." }, { "start": 108, "end": 115, "text": " Now we don't have much information on Alpha Fold 2 yet, other than it's really good." }, { "start": 115, "end": 123, "text": " And like a blog post and a bunch of advertisement videos by DeepMind, they are writing a paper on it." }, { "start": 123, "end": 132, "text": " But today I want to go into this blog post and maybe parse out what we can gather from that blog post." }, { "start": 132, "end": 136, "text": " And I also want to go actually through the Alpha Fold 1 paper." }, { "start": 136, "end": 142, "text": " So as you can see, the performance here increased drastically with Alpha Fold 2." }, { "start": 142, "end": 150, "text": " But you know, guesses are high that the system is going to be somewhat similar to Alpha Fold 1, of which we do have a paper." }, { "start": 150, "end": 158, "text": " So today we'll go into Alpha Fold 1. We'll go into some speculations of Alpha Fold 2." }, { "start": 158, "end": 161, "text": " I can already give you my speculation. It's transformers." }, { "start": 161, "end": 171, "text": " It's attention that all of a sudden made this big jump together with probably a few other improvements to the Alpha Fold 1 system." }, { "start": 171, "end": 177, "text": " Basically, transformers continuing to dominate the entire field." }, { "start": 177, "end": 182, "text": " So where do we start? It's probably best." }, { "start": 182, "end": 189, "text": " By the way, if this is not a great meme template, I don't know what is. Just saying. Just saying." }, { "start": 189, "end": 194, "text": " Yeah. So let's actually start with the problem itself." 
}, { "start": 194, "end": 203, "text": " I realize if you're here, you're probably a machine learning person, might not know too much about protein folding." }, { "start": 203, "end": 210, "text": " So these things here are computer representations of proteins." }, { "start": 210, "end": 214, "text": " They don't really look that way, but sort of similar." }, { "start": 214, "end": 219, "text": " A protein essentially is a chain of amino acids." }, { "start": 219, "end": 224, "text": " So an amino acid, where do we have this? Right here." }, { "start": 224, "end": 229, "text": " Amino acids are these what they're called basic building blocks of life." }, { "start": 229, "end": 236, "text": " Since the proteins are what make the cell do things." }, { "start": 236, "end": 239, "text": " So protein are sort of the workers in the cell." }, { "start": 239, "end": 244, "text": " They are used as signaling molecules, receptors." }, { "start": 244, "end": 247, "text": " They are parts of your muscles." }, { "start": 247, "end": 250, "text": " Actually, the parts that move are proteins." }, { "start": 250, "end": 254, "text": " So they are all the work doers." }, { "start": 254, "end": 261, "text": " Whatever something needs to work in a cell to do mechanical or work, proteins are involved." }, { "start": 261, "end": 265, "text": " And amino acids are the building blocks of proteins." }, { "start": 265, "end": 271, "text": " So each amino acid has a given certain common structure." }, { "start": 271, "end": 274, "text": " And there are 21 of them." }, { "start": 274, "end": 281, "text": " So all the proteins in the world are simply made out of chains of these 21 amino acids." }, { "start": 281, "end": 284, "text": " And these chains, they are formed." }, { "start": 284, "end": 291, "text": " And so there's always this sort of body that can link up to other bodies of amino acids." }, { "start": 291, "end": 296, "text": " It's very similar. If you maybe know how DNA is structured, it's a very similar concept." }, { "start": 296, "end": 300, "text": " Except in DNA, there are four different bases." }, { "start": 300, "end": 303, "text": " Here there are 21 amino acids." }, { "start": 303, "end": 306, "text": " And each amino acid is a little bit different." }, { "start": 306, "end": 309, "text": " In each amino acid has like a tail that hangs off." }, { "start": 309, "end": 317, "text": " So the tail can be, you know, look like this, or it can look like this, like with a side chain." }, { "start": 317, "end": 321, "text": " Or there is there one where it's like maybe a cyclic one. I'm not sure." }, { "start": 321, "end": 325, "text": " Maybe you can look out here or it can have sort of no tail at all." }, { "start": 325, "end": 328, "text": " I think that's the case for glycine." }, { "start": 328, "end": 338, "text": " So the important part is depending on this tail, the properties, the chemical properties of the amino acids are different." }, { "start": 338, "end": 342, "text": " And then what happens next is really interesting." }, { "start": 342, "end": 347, "text": " Once this amino acid chain is built in this." }, { "start": 347, "end": 353, "text": " So this is the central dogma of modern biology is that you have DNA." }, { "start": 353, "end": 358, "text": " And DNA is translated to RNA." }, { "start": 358, "end": 362, "text": " Sorry." }, { "start": 362, "end": 364, "text": " And then it's translated to." 
}, { "start": 364, "end": 369, "text": " So it's read off, copied to RNA, which is sort of a DNA clone." }, { "start": 369, "end": 373, "text": " And then the RNA is translated into the amino acid chain." }, { "start": 373, "end": 379, "text": " And there is always three, three pieces of DNA mapped to one amino acid." }, { "start": 379, "end": 381, "text": " This is very much like a compiler." }, { "start": 381, "end": 389, "text": " Notably, the interesting part is that these steps right here, this compilation steps are done by proteins." }, { "start": 389, "end": 392, "text": " So there are proteins that do these things." }, { "start": 392, "end": 397, "text": " So nature in a very real sense is its own compiler." }, { "start": 397, "end": 400, "text": " So this here you can see as like the binary." }, { "start": 400, "end": 402, "text": " And this here is like the source code." }, { "start": 402, "end": 413, "text": " But what happens once you build this chain of amino acid and you set it out into the cell, because of these different properties of these side chains, they're also called residues." }, { "start": 413, "end": 416, "text": " These chain begins to fold." }, { "start": 416, "end": 427, "text": " So this is if you know a bit of chemistry, you might know that these are these are sort of atoms that are linked with covalent bonds in this case." }, { "start": 427, "end": 434, "text": " And it can be that part of this chain is rather like electrically negatively charged." }, { "start": 434, "end": 442, "text": " And here part of this chain might be like electrically positively charged in a given place over a given other place." }, { "start": 442, "end": 446, "text": " And it also depends on the surrounding medium, of course." }, { "start": 446, "end": 451, "text": " And that means that in this case, for example, these two things will attract." }, { "start": 451, "end": 461, "text": " And so if you release this amino acid chain, what you're going to get is sort of a bend where now the chain sort of bends." }, { "start": 461, "end": 466, "text": " And these two, this chain right here, this tail goes like here, this tail goes like here." }, { "start": 466, "end": 474, "text": " I'm sorry, if there is no if there is no if there is no I don't even know what to call it, pyrene rings or something like this." }, { "start": 474, "end": 477, "text": " If there isn't an amino acid with that, I apologize." }, { "start": 477, "end": 484, "text": " But the point is that these two things attract and sort of form this shape." }, { "start": 484, "end": 486, "text": " And this shape is very important." }, { "start": 486, "end": 496, "text": " We know that proteins and proteins consist of it can be hundreds, thousands, tens of thousands of these amino acids in a chain." }, { "start": 496, "end": 508, "text": " The proteins function is, interestingly, largely determined by its structure, by its 3D structure, not necessarily by the actual amino acid." }, { "start": 508, "end": 513, "text": " So technically you can substitute amino acids for each other." }, { "start": 513, "end": 522, "text": " So this amino acid here can be could be substituted for another amino acid that maybe isn't the same," }, { "start": 522, "end": 533, "text": " but is has the same properties of its side chain, such that if the structure is still the same, the protein would perform the same function." 
}, { "start": 533, "end": 542, "text": " So that that is is very special property of proteins, namely their 3D structure largely determines their function." }, { "start": 542, "end": 555, "text": " So, for example, in this step here, when you read off the RNA to the DNA, as you know, the RNA is sorry, the DNA is like this double strand of connected base pairs." }, { "start": 555, "end": 563, "text": " And in order to replicate the DNA or to read it off, there is a there more or let's call it." }, { "start": 563, "end": 569, "text": " There is also this step of DNA replication, right, where you copy the DNA in mitosis." }, { "start": 569, "end": 574, "text": " In order to do that, you need to split off the two strands." }, { "start": 574, "end": 581, "text": " You need to split it up because you want to get like a protein needs to get here to actually read it off." }, { "start": 581, "end": 591, "text": " For that, there is a protein, a specific protein that will insert right here to split up the DNA, which is called a helicase." }, { "start": 591, "end": 598, "text": " And that really is very important how that protein is shaped." }, { "start": 598, "end": 605, "text": " So the shape needs to be actually such that it kind of removes these bonds from each other." }, { "start": 605, "end": 608, "text": " So the shape is very, very important for a protein." }, { "start": 608, "end": 616, "text": " And conceivably, you could build a helicase from many, many different amino acid sequences as long as it has the same shape." }, { "start": 616, "end": 623, "text": " Now, I think something like something like fundamental like a helicase is probably conserved in the evolutionary tree." }, { "start": 623, "end": 625, "text": " But I hope you get the point." }, { "start": 625, "end": 627, "text": " The shape is super duper important." }, { "start": 627, "end": 631, "text": " Now, the shape isn't just arbitrary." }, { "start": 631, "end": 635, "text": " There are some amino acid chain is called the primary structure." }, { "start": 635, "end": 641, "text": " And then the first thing that happens is that two very distinct kind of sub shapes appear." }, { "start": 641, "end": 648, "text": " So often repeating shapes, these things, I think, are called alpha helicase or helix." }, { "start": 648, "end": 649, "text": " This is a helix." }, { "start": 649, "end": 652, "text": " And this here is I don't know what's in English." }, { "start": 652, "end": 654, "text": " It's probably called a strand or something like this." }, { "start": 654, "end": 658, "text": " These are like long sheets like I think they're called beta strands." }, { "start": 658, "end": 661, "text": " And these things form." }, { "start": 661, "end": 662, "text": " These are often repeated sequences." }, { "start": 662, "end": 674, "text": " And then the third tertiary structure is when the whole thing starts to kind of fold on itself and so on and give itself the the final structure." }, { "start": 674, "end": 681, "text": " So this is part, I guess, of the RNA polymerase, which is the molecule that reads DNA and outputs RNA." }, { "start": 681, "end": 685, "text": " And there are many, many, many proteins." }, { "start": 685, "end": 692, "text": " Now, since the shape is so important, it is vital that we know of it." }, { "start": 692, "end": 699, "text": " And technically, technically, this is what why this problem is 50 years old, I guess." }, { "start": 699, "end": 701, "text": " They say it's a 50 year old problem." 
}, { "start": 701, "end": 707, "text": " I think that's due to the fact that 50 years ago, a Nobel laureate said the following." }, { "start": 707, "end": 722, "text": " Since a protein is fully determined by its amino acid chain and since the amino acid chain determines the structure that it's going to do because of these kind of chemical properties," }, { "start": 722, "end": 727, "text": " it should be possible to read in the amino acid sequence or read in the DNA sequence." }, { "start": 727, "end": 732, "text": " We know what amino acid sequence results and output the shape of a protein." }, { "start": 732, "end": 736, "text": " However, this is an extremely complicated problem." }, { "start": 736, "end": 741, "text": " It turned out to be because they're very subtle interactions." }, { "start": 741, "end": 743, "text": " They're not always the same. It depends, right?" }, { "start": 743, "end": 753, "text": " Like somewhere out here, there could be some amino acid with like some weird chain that, you know, everything folds on itself all the time." }, { "start": 753, "end": 759, "text": " So at some point, these get in contact and the changes kind of the local properties here." }, { "start": 759, "end": 763, "text": " So this is a very, very difficult problem to solve." }, { "start": 763, "end": 776, "text": " And people have sort of tried to do this and now apparently deep mind the first system that does this to such a satisfaction that it's beneficial." }, { "start": 776, "end": 780, "text": " All right. Now I lost my train of thought." }, { "start": 780, "end": 790, "text": " Yeah. So the shape prediction, what happened so far is what you have to do is you'd have to sort of do this, determine this experimentally." }, { "start": 790, "end": 798, "text": " So you'd have to take these proteins and crystallize them and then like shoot X-rays at them and then infer the structure." }, { "start": 798, "end": 808, "text": " You can do that from crystallized proteins because I think it's due to crystals or like very regular accumulations of proteins." }, { "start": 808, "end": 816, "text": " So if you look at a snowflake, that is if we knew nothing about the water molecule that it's like H2O," }, { "start": 816, "end": 827, "text": " if we knew nothing of that, we could just look at a snowflake and determine this structure, this specific angles here from the snowflake." }, { "start": 827, "end": 833, "text": " We would just look at the snowflakes and if someone tells us, look, that's all the same material, that's all water," }, { "start": 833, "end": 842, "text": " we could infer what the water molecule looks like just by analyzing snowflakes because they're crystals." }, { "start": 842, "end": 849, "text": " And the pretty much the same here is you build, you make crystals out of these materials, you shoot X-rays at them." }, { "start": 849, "end": 853, "text": " And then you sort of reason over the patterns that come out." }, { "start": 853, "end": 857, "text": " This is very, very difficult, very expensive." }, { "start": 857, "end": 861, "text": " And so to solve this problem computationally is super important." }, { "start": 861, "end": 863, "text": " Now we'll get to this graphic in a minute." }, { "start": 863, "end": 871, "text": " This is sort of the only thing we know about AlphaFold2 is this graphic right now, because they have not yet released." }, { "start": 871, "end": 876, "text": " The paper or any descriptions of the model, as I said." 
}, { "start": 876, "end": 880, "text": " But what we'll do is we'll go into AlphaFold1." }, { "start": 880, "end": 882, "text": " So this is AlphaFold1." }, { "start": 882, "end": 893, "text": " And AlphaFold1 was participating in the same competition two years ago and was already dominant there," }, { "start": 893, "end": 902, "text": " but not yet dominant to the point of having, quote unquote, solved the problem just better than other systems." }, { "start": 902, "end": 908, "text": " So this is the basic structure of AlphaFold1." }, { "start": 908, "end": 911, "text": " So what do you have right here?" }, { "start": 911, "end": 914, "text": " Let's give ourselves an overview." }, { "start": 914, "end": 916, "text": " So the overview is the following." }, { "start": 916, "end": 919, "text": " There are two different stages to this algorithm." }, { "start": 919, "end": 924, "text": " Stage one is over here and stage two is over here." }, { "start": 924, "end": 928, "text": " Maybe it's easiest to start with stage two." }, { "start": 928, "end": 937, "text": " So the output of stage one is this thing right here, a distance and torsion distribution prediction." }, { "start": 937, "end": 943, "text": " So this matrix here that's kind of tilted on its side, I believe there are more down here." }, { "start": 943, "end": 945, "text": " Right. OK." }, { "start": 945, "end": 957, "text": " So what you do right here is you take an amino acid sequence and you line it up right here." }, { "start": 957, "end": 960, "text": " You line it up. This is the amino acid sequence." }, { "start": 960, "end": 966, "text": " It's a bit harder if there's like a split, but let's just say a protein is..." }, { "start": 966, "end": 969, "text": " Actually, there can't be a split. Sorry, that's in the amino acids. I'm dumb." }, { "start": 969, "end": 976, "text": " So a protein is a single chain of these amino acids." }, { "start": 976, "end": 981, "text": " There can be multiple sort of parts to a bigger protein conglomerate." }, { "start": 981, "end": 986, "text": " But there is this chain. You line it up here and here." }, { "start": 986, "end": 993, "text": " So now we're building sort of a pairwise matrix between the sequence and itself." }, { "start": 993, "end": 998, "text": " And this pairwise matrix is going to be a distance matrix." }, { "start": 998, "end": 1005, "text": " So what we are going to do is we're going to input some features about this sequence of amino acids." }, { "start": 1005, "end": 1007, "text": " That's what we get as an input." }, { "start": 1007, "end": 1012, "text": " And we're going to predict for any pair." }, { "start": 1012, "end": 1018, "text": " So we have the sequence and we're going to predict for any pair how far are they apart?" }, { "start": 1018, "end": 1021, "text": " So of course, here the answer is always kind of zero." }, { "start": 1021, "end": 1030, "text": " They're zero apart. But you might say, you know, these two are five apart and these two here are seven apart." }, { "start": 1030, "end": 1033, "text": " But these two here are only one apart." }, { "start": 1033, "end": 1040, "text": " So it's reasonable, you know, that the final structure, these two are close together." }, { "start": 1040, "end": 1042, "text": " We don't worry about close together right now." }, { "start": 1042, "end": 1047, "text": " We just worry about for each two, we'll predict how far they are apart." 
}, { "start": 1047, "end": 1052, "text": " OK, so this is you can view this as, you know, a machine learning problem, right?" }, { "start": 1052, "end": 1057, "text": " You have an input sequence and you simply want to predict the distance matrix." }, { "start": 1057, "end": 1061, "text": " So here you can see that. In fact, you can see the top and bottom." }, { "start": 1061, "end": 1065, "text": " One is the predicted and one is the real." }, { "start": 1065, "end": 1067, "text": " I don't even remember which one's which." }, { "start": 1067, "end": 1071, "text": " You can see that this system does a pretty good job at that." }, { "start": 1071, "end": 1073, "text": " There are minute differences." }, { "start": 1073, "end": 1078, "text": " You really go look like down here, you can see a bit of a difference over here." }, { "start": 1078, "end": 1080, "text": " There is a bit of a difference." }, { "start": 1080, "end": 1084, "text": " But in general, this system does a pretty good job." }, { "start": 1084, "end": 1087, "text": " So this is the output of stage one is this matrix." }, { "start": 1087, "end": 1091, "text": " It's a bunch of other it's like also the torsion angles and so on." }, { "start": 1091, "end": 1096, "text": " But the main thing is you predict the distances between those two." }, { "start": 1096, "end": 1102, "text": " That's what you take as a input to stage two." }, { "start": 1102, "end": 1109, "text": " So what stage two does is stage two builds a model of this molecule." }, { "start": 1109, "end": 1115, "text": " And the model is sort of a differentiable geometrical model." }, { "start": 1115, "end": 1119, "text": " So they say they. Where is it?" }, { "start": 1119, "end": 1122, "text": " I don't get these nature papers like they're split into two parts," }, { "start": 1122, "end": 1125, "text": " but then they are they largely say the same things." }, { "start": 1125, "end": 1129, "text": " I am absolutely confused by them." }, { "start": 1129, "end": 1131, "text": " So we're going to jump around a fair bit." }, { "start": 1131, "end": 1136, "text": " They say we parameterize protein structures by the backbone torsion angles of all residues" }, { "start": 1136, "end": 1142, "text": " and build a differentiable model of protein geometry to compute the coordinates for all residues." }, { "start": 1142, "end": 1145, "text": " And thus the inter residue distances." }, { "start": 1145, "end": 1152, "text": " So what they do is essentially they build a computer model of these amino acids." }, { "start": 1152, "end": 1156, "text": " And these are parameterized by the torsion angles." }, { "start": 1156, "end": 1160, "text": " Now, the torsion angle is simply the angle between any two of them." }, { "start": 1160, "end": 1164, "text": " So this would be like a torsion angle of 180 degrees." }, { "start": 1164, "end": 1170, "text": " And then if it folds like this, it would be torsion angle of 90 degrees and so on." }, { "start": 1170, "end": 1174, "text": " And you need two torsion angles because you're in 3D." }, { "start": 1174, "end": 1180, "text": " But essentially the torsion angles determine the structure of the protein." }, { "start": 1180, "end": 1183, "text": " So it's one way of parameterizing it." }, { "start": 1183, "end": 1191, "text": " So they build a differentiable model, a differentiable model of protein geometry." }, { "start": 1191, "end": 1195, "text": " Now, the important thing is they don't do any learning with this differentiable model." 
}, { "start": 1195, "end": 1201, "text": " The purpose of this differentiable model is such that what you can do now," }, { "start": 1201, "end": 1205, "text": " if you have a differentiable model, you can run gradient descent." }, { "start": 1205, "end": 1209, "text": " So imagine they pretty much lay it out right here." }, { "start": 1209, "end": 1220, "text": " So they have the x, x is the output of your differentiable geometry, right, of your torsion angles." }, { "start": 1220, "end": 1226, "text": " Let's just call it this Greek letter phi, psi, whatever." }, { "start": 1226, "end": 1233, "text": " If x is the output and now x goes into your loss function." }, { "start": 1233, "end": 1238, "text": " So x goes into your loss function and the loss function simply compares x." }, { "start": 1238, "end": 1241, "text": " To the predicted x." }, { "start": 1241, "end": 1252, "text": " So the loss function will take in x and it will compare it to the x that you predicted from this thing here." }, { "start": 1252, "end": 1256, "text": " So we start off with a flat chain, maybe." }, { "start": 1256, "end": 1263, "text": " Actually, I think we start off with some initialization because they also predict the torsion angles directly." }, { "start": 1263, "end": 1265, "text": " Right here, they predict the torsion angles directly." }, { "start": 1265, "end": 1271, "text": " And that's what we initialize from. But let's just say we initialize from the flat chain." }, { "start": 1271, "end": 1282, "text": " And then because this is differentiable, we do so your L is x minus x prime." }, { "start": 1282, "end": 1291, "text": " And what we do is we derive the loss with respect to the angle, to the torsion angle." }, { "start": 1291, "end": 1295, "text": " So we can do this since this is differentiable." }, { "start": 1295, "end": 1302, "text": " So now we know how do we need to change the angle, which is this thing right here, in order to make the loss smaller." }, { "start": 1302, "end": 1309, "text": " And maybe it says, actually, you need to turn it down, right, make the angle smaller." }, { "start": 1309, "end": 1312, "text": " And we do that. Okay, cool. Now it's only 90 degrees." }, { "start": 1312, "end": 1315, "text": " And then we do it again and again and again." }, { "start": 1315, "end": 1326, "text": " And you can see that by changing all the angles such that this loss is smaller, we end up through steps, step, step, step." }, { "start": 1326, "end": 1340, "text": " We in our computer model, we sort of replicate this process that happens in nature, where what we feed in is how far any two amino acids should be apart." }, { "start": 1340, "end": 1354, "text": " And by running gradient descent, just gradient descent on the torsion angles, we figure out what do the angles need to be in order to make this happen." }, { "start": 1354, "end": 1363, "text": " So first, we predict all the distances, and then we figure out how do we need to set the angles such that these distances are fulfilled." }, { "start": 1363, "end": 1366, "text": " These are not true distances. These are predicted distances, right?" }, { "start": 1366, "end": 1370, "text": " So everything depends on how well we can predict these distances." }, { "start": 1370, "end": 1385, "text": " But once we have them, we can sort of replicate in our computers the process as it happens in nature, except in nature, the whole folding is dependent on these all these chemical interactions and so on." 
}, { "start": 1385, "end": 1387, "text": " And now we do none of this." }, { "start": 1387, "end": 1398, "text": " We simply look see how do we need to fold in order to make these distances in our computer model like these like the distance between this and this and this and this." }, { "start": 1398, "end": 1405, "text": " Any two distances may agree with the distances that we have predicted right here." }, { "start": 1405, "end": 1412, "text": " And you can see that over time, this as you run gradient descent, this goes up." }, { "start": 1412, "end": 1429, "text": " This this TM score was up the root mean square distance goes down between and then you of course can compare it if you have a test set with stuff that people have already figured out, you can analyze these metrics and see that indeed, you do get the correct folding." }, { "start": 1429, "end": 1436, "text": " It's also pretty interesting that so here in blue and red, I believe you have." }, { "start": 1436, "end": 1442, "text": " Yeah, exactly. So the the helix in blue and the strands in red." }, { "start": 1442, "end": 1459, "text": " So in this case, you from if you have this folded structure or partially folded structure, you can already see that these sort of substructures emerge like this is a helix, right?" }, { "start": 1459, "end": 1466, "text": " As you can see, and then you sort of made this maybe a strand and so on. There are ways to heuristically classify that." }, { "start": 1466, "end": 1476, "text": " And you can see that if you look at the database, right, you can see that this here is a strand." }, { "start": 1476, "end": 1480, "text": " These are helices, and this is a strand and these are helix." }, { "start": 1480, "end": 1485, "text": " This is a strand and so on. And you can see that the model here is what the model thinks at the beginning." }, { "start": 1485, "end": 1502, "text": " It doesn't get many things correct, though it does some, but then over time, it sort of refines its guesses until at the end, it's pretty much equal to what the database to what the true sample is." }, { "start": 1502, "end": 1512, "text": " And here is simply the distribution of, I guess, confidence about these things and the torsion angles right here." }, { "start": 1512, "end": 1520, "text": " So it, as you can see, this two step process is the key here to do that." }, { "start": 1520, "end": 1525, "text": " Now, Alpha Fold 2 conceivably probably changes this a little bit." }, { "start": 1525, "end": 1535, "text": " But again, we're not sure. The step one right here is a deep learning system." }, { "start": 1535, "end": 1540, "text": " So step two is simply a gradient descent procedure that you run at inference time, right?" }, { "start": 1540, "end": 1544, "text": " This at training, you can you can just do step one." }, { "start": 1544, "end": 1549, "text": " So step one is is the machine learning bit." }, { "start": 1549, "end": 1557, "text": " So the goal is to output this distance, this distance tensor right here." }, { "start": 1557, "end": 1561, "text": " And there are more things than distances, as we said, there are torsion angles and so on." }, { "start": 1561, "end": 1565, "text": " But ultimately, you want to output this distance matrix." }, { "start": 1565, "end": 1569, "text": " And how do they do it? You can already see it's a deep neural network." }, { "start": 1569, "end": 1579, "text": " So you want to build a input data point, let's say, of L by L, which is sequence length by sequence length." 
}, { "start": 1579, "end": 1584, "text": " So you want to collect some features, you don't know the distances yet, right?" }, { "start": 1584, "end": 1591, "text": " But you can collect some features that are either pairwise features between these two things, right?" }, { "start": 1591, "end": 1600, "text": " So here, maybe this is, I don't know, leucine, and this is what's a different amino acid glycine." }, { "start": 1600, "end": 1609, "text": " And in here, you want to put features, maybe it can be features for that position, right?" }, { "start": 1609, "end": 1617, "text": " Maybe leucine here is at the 100th position in the in this particular protein, and this is at the 90th position." }, { "start": 1617, "end": 1623, "text": " So we want to put in some features of that that you can derive from a data set." }, { "start": 1623, "end": 1628, "text": " You can put in correlation statistics in general between these two amino acids." }, { "start": 1628, "end": 1632, "text": " You can even put in just single features." }, { "start": 1632, "end": 1642, "text": " So you have these tiled L by one features, which is just features for the sequence itself, not pairwise features." }, { "start": 1642, "end": 1649, "text": " But what you do is you simply replicate them along along any given dimension right here." }, { "start": 1649, "end": 1654, "text": " You always put the same features. This is very common in conv nets." }, { "start": 1654, "end": 1658, "text": " And you can even do a scalar feature. So there are some scalar features." }, { "start": 1658, "end": 1665, "text": " And what you would do is you would simply fill an entire plane with that scalar feature, all the same number." }, { "start": 1665, "end": 1671, "text": " It's just easier to do it like this because it fits into the convolutional architecture." }, { "start": 1671, "end": 1679, "text": " Well, so you want to provide all kinds of features and the features they provide are, you know, plentiful." }, { "start": 1679, "end": 1685, "text": " And a lot of them do introduce some domain tools, domain expertise and so on." }, { "start": 1685, "end": 1694, "text": " But once they have that, they simply take that sort of image with many, many channels and they predict this image if you want." }, { "start": 1694, "end": 1701, "text": " So it's just an image to image translation problem. And they do this via a convolutional neural network." }, { "start": 1701, "end": 1706, "text": " As you can see, there are 220 residual convolutional blocks." }, { "start": 1706, "end": 1712, "text": " Now, I assume that most of the viewers of this video are familiar what convolutional neural networks are." }, { "start": 1712, "end": 1716, "text": " If not, I'm deeply sorry, but we'll not go into that." }, { "start": 1716, "end": 1725, "text": " But you can see they sort of they tile this tensor right here and they tile it differently from from from instance to instance." }, { "start": 1725, "end": 1729, "text": " So they tile it in the training procedure. They always tile it differently." }, { "start": 1729, "end": 1741, "text": " That's a form of data augmentation. But ultimately, you slide over this image with this 64 by 64 ConvNet and you produce the image on the right." }, { "start": 1741, "end": 1752, "text": " Here you can see an inherent weakness of these approaches, namely that this thing can only ever look at 64 amino acids at a time." 
}, { "start": 1752, "end": 1759, "text": " So now that can that can be the same if you're on the diagonal of this." }, { "start": 1759, "end": 1763, "text": " Let's say let's say this is not 64 by 64, but three by three." }, { "start": 1763, "end": 1771, "text": " If you're on the diagonal, you would only consider three amino acids and their interactions with each other." }, { "start": 1771, "end": 1774, "text": " Right. Any to any interactions with each other." }, { "start": 1774, "end": 1781, "text": " If you're off the diagonal, what you would consider is maybe these three amino acids and these three amino acids." }, { "start": 1781, "end": 1792, "text": " And you would only consider you consider features for maybe for those three, but interactions only in between like the these not interactions" }, { "start": 1792, "end": 1795, "text": " actually within the same amino acids." }, { "start": 1795, "end": 1802, "text": " So you're the thing that you can look at any point in time is going to be very limited." }, { "start": 1802, "end": 1813, "text": " Right. And these so these distances that you get out here, they necessarily cannot directly depend on, let's say, this amino acid right here." }, { "start": 1813, "end": 1818, "text": " You always have this limited view of your protein that sort of local." }, { "start": 1818, "end": 1826, "text": " Now, people argue that that's actually enough if you look at maybe the green connections right here in order to establish them." }, { "start": 1826, "end": 1834, "text": " What's most important is the vicinity of these of this amino acid and the immediate vicinity of this amino acid." }, { "start": 1834, "end": 1838, "text": " And, of course, the interaction between those two vicinities." }, { "start": 1838, "end": 1845, "text": " But it is quite conceivable that this green thing down here being so close will actually sort of push the two apart" }, { "start": 1845, "end": 1853, "text": " and sort of do this interaction, which, in my understanding, would not be covered by a system like this." }, { "start": 1853, "end": 1861, "text": " And that's where alpha fold two, I believe, is is one point where it makes the big gains that it does." }, { "start": 1861, "end": 1869, "text": " Now, the features that go in here, as I said, they are they're quite plentiful." }, { "start": 1869, "end": 1876, "text": " One of the more interesting features is this MSA, these multiple sequence alignment." }, { "start": 1876, "end": 1881, "text": " And I believe they're they're up right here." }, { "start": 1881, "end": 1886, "text": " Yeah, sequences. So here they introduce them in recent years." }, { "start": 1886, "end": 1895, "text": " The accuracy of structure predictions has improved through the use of evolutionary covariation data that are found in sets of related sequences." }, { "start": 1895, "end": 1903, "text": " Sequences that are similar to the target sequence are found by searching large data sets of protein sequences derived from DNA sequencing" }, { "start": 1903, "end": 1908, "text": " and aligned to the target sequence to generate a multiple sequence alignment." }, { "start": 1908, "end": 1918, "text": " Correlated changes in the positions of two amino acid residues across the sequences of MSA can be used to infer which residues might be in contact." 
}, { "start": 1918, "end": 1931, "text": " So what what this I've searched out one of the papers right here, and this is from a paper called improved contact prediction proteins using pseudo likelihoods to infer POTS models." }, { "start": 1931, "end": 1937, "text": " The entire basis here is that here is your chain of amino acid that you're considering." }, { "start": 1937, "end": 1939, "text": " And this is you. This is the human." }, { "start": 1939, "end": 1948, "text": " And they actually have one like a very similar graphic in their blog post. But we'll draw this ourselves." }, { "start": 1948, "end": 1954, "text": " I'll just kind of sort of copy it. And what you do is you go and look into your database." }, { "start": 1954, "end": 1956, "text": " Right. This this is the amino acid sequence." }, { "start": 1956, "end": 1962, "text": " And each amino acid can actually be abbreviated by a single letter since they're 21." }, { "start": 1962, "end": 1969, "text": " And luckily, the holy alphabet creators have given us what 26." }, { "start": 1969, "end": 1978, "text": " So that fits. So each of these can be done by like S Y C M D and so on." }, { "start": 1978, "end": 1985, "text": " Can be then you go look into your database and your database is of sort of all of life." }, { "start": 1985, "end": 1996, "text": " And you go look for similar sequences and there are tools that you can very quickly see through databases and get out similar sequences to yours." }, { "start": 1996, "end": 2002, "text": " And those are sequences that are overlapping in amino acid sequence." }, { "start": 2002, "end": 2005, "text": " Right. So you could find in the fish." }, { "start": 2005, "end": 2010, "text": " This is an alpha. This is not a fish in the fish." }, { "start": 2010, "end": 2019, "text": " There is a similar sequence right here in the iron. Like this is OK in the whatever this is." }, { "start": 2019, "end": 2025, "text": " This might be a horsey. No, this is not a horse. Let's make an alligator out of this." }, { "start": 2025, "end": 2030, "text": " So in the alligator, raw does the alligator have?" }, { "start": 2030, "end": 2038, "text": " There might be a sequence and so you get the point. My drawing skills are to be criticized in another video." }, { "start": 2038, "end": 2047, "text": " So you search for all of these similar sequences just by amino acid sequence and from the correlations, you can derive something." }, { "start": 2047, "end": 2058, "text": " For example, I've already told you that sometimes you can substitute an amino acid and the sort of function of the protein isn't really affected." }, { "start": 2058, "end": 2066, "text": " And this may be what you can see right here. So in the human, this is maybe a D, but or sorry, maybe this here." }, { "start": 2066, "end": 2075, "text": " It's a C, but in the in the let's call this an M in the fish, it's a C2." }, { "start": 2075, "end": 2081, "text": " But, you know, in the alligator, it's a P and in the cockroach, it's K and so on." }, { "start": 2081, "end": 2094, "text": " You can see that maybe if the alignment is good, right, this is sort of from the same protein or from a protein that does maybe the same thing in these life forms, because life is continuous." }, { "start": 2094, "end": 2102, "text": " Often these things are preserved or slightly modified. So here there are variations that happen in life, right?" }, { "start": 2102, "end": 2115, "text": " Mutations, variations. 
And so we can safely maybe assume that, you know, a K, whether there's a K or a P or a C in this particular point, it doesn't really matter." }, { "start": 2115, "end": 2120, "text": " The shape doesn't seem to be too affected. So that's step one." }, { "start": 2120, "end": 2133, "text": " And now so this might be this protein, this amino acid right here, you see, whether it's this chain or whether it's this chain, maybe doesn't really matter for the function of the protein." }, { "start": 2133, "end": 2139, "text": " However, if you look at two proteins that are in contact, what needs to happen?" }, { "start": 2139, "end": 2152, "text": " So if my protein here has this chain and the other protein has sort of is in contact, that means there is like a chemical interaction between the two." }, { "start": 2152, "end": 2167, "text": " So now if a mutation happens, if a mutation happens and the protein is still functioning the same way, but the mutation happened, let's say it's now this right here," }, { "start": 2167, "end": 2183, "text": " that must mean the shape is still the same sort of. And that must mean that probably if one of them changed, the other one probably changed sort of analogously at the same time because structure is preserved, function is preserved." }, { "start": 2183, "end": 2189, "text": " So structure is preserved. And since structure is determined by chemical interactions, one of the parts changed." }, { "start": 2189, "end": 2197, "text": " That means probably the other part has changed as well. So maybe now this is sort of this chain right here." }, { "start": 2197, "end": 2206, "text": " So what you would expect to see in the statistics is that if one changes, the other one changes accordingly." }, { "start": 2206, "end": 2209, "text": " So there can be variations, right? There can be mutations." }, { "start": 2209, "end": 2219, "text": " But if the mutation happens in one of them, a corresponding mutation should happen in the other one as well." }, { "start": 2219, "end": 2224, "text": " Otherwise, the protein would be non-functional and the organism would sort of die." }, { "start": 2224, "end": 2227, "text": " Not always, but you know, this is kind of a statistics game." }, { "start": 2227, "end": 2234, "text": " And this is what you see here. Like the fish has an S like the human and an H right here." }, { "start": 2234, "end": 2241, "text": " But the alligator has an F and a W right here. And then in the cockroach, you see the S and the H again, and so on." }, { "start": 2241, "end": 2244, "text": " And here down here, you see the F and the W again." }, { "start": 2244, "end": 2254, "text": " And this is an indication that these the correlation here is an indication that these two things might be in contact with each other." }, { "start": 2254, "end": 2265, "text": " Now, there have been systems, for example, in this paper right here, that directly go from these statistics to contact predictions and so on." }, { "start": 2265, "end": 2278, "text": " Alpha Fold simply takes in this stuff as features. So this right here, all of this, there can be, I think they derive 488 features from this." }, { "start": 2278, "end": 2282, "text": " So this goes down here. I think they say it again." }, { "start": 2282, "end": 2288, "text": " As I said, this is confused. Like here, article stops, references, article starts again. Thanks." }, { "start": 2288, "end": 2294, "text": " And they like say almost the same things. 
It's just a little bit more detailed, but it's not longer." }, { "start": 2294, "end": 2303, "text": " So here they derive 484 features from these multiple sequence alignment for each residue pair." }, { "start": 2303, "end": 2314, "text": " Right. So in our big tensor right here, right here, each dot, each thing right here already now has 400." }, { "start": 2314, "end": 2323, "text": " So each one of these already has 484 features and then some more." }, { "start": 2323, "end": 2327, "text": " Right. This is already this is from the MSA, but then more features." }, { "start": 2327, "end": 2333, "text": " So they incorporate lots of features right here." }, { "start": 2333, "end": 2337, "text": " Where are we at? Here. They incorporate lots of features." }, { "start": 2337, "end": 2343, "text": " In addition, we provide the network with features that explicitly represent gaps and deletions." }, { "start": 2343, "end": 2346, "text": " They also represent scalar features and so on." }, { "start": 2346, "end": 2354, "text": " So here you can see they have scalar features, sequence length features, amino acid type, profiles, HH blitz profiles." }, { "start": 2354, "end": 2360, "text": " These are all sort of these comp bio tools, these genetic tools and so on." }, { "start": 2360, "end": 2367, "text": " You also have sequence length features. These are these 484 features and so on." }, { "start": 2367, "end": 2373, "text": " So these are all akin. There are some positional. One of these acts as positional encodings and so on." }, { "start": 2373, "end": 2381, "text": " So lots of features, input, convolutional network, output, the distance matrix." }, { "start": 2381, "end": 2388, "text": " And that's that. Right. So there you have the inputs, the distance matrix from the distance matrix." }, { "start": 2388, "end": 2393, "text": " You can run gradient descent to get the protein structure at inference time." }, { "start": 2393, "end": 2404, "text": " And they make some pretty cool points. Not only do they compare the distance matrices, but they here is the not only the single prediction for the distance," }, { "start": 2404, "end": 2411, "text": " but they, of course, output a probability distribution. They bin all of these distances. They output a probability distribution." }, { "start": 2411, "end": 2417, "text": " And you can see that the black line in these histograms. So this is this is for a particular thing." }, { "start": 2417, "end": 2424, "text": " This is for this this red line, this red row right here." }, { "start": 2424, "end": 2433, "text": " It's the extraction. So it's for one of the amino acid, the distribution of probabilities of distance bins." }, { "start": 2433, "end": 2443, "text": " With each of the other ones. So this is number 29. And we look at the distance between number 29 and one, two, three, and so on." }, { "start": 2443, "end": 2453, "text": " The black line represent the represents, I think, eight angstroms, which is generally considered the barrier for being in contact or not being in contact." }, { "start": 2453, "end": 2465, "text": " And here it's colored in blue if not in contact and in green if in contact. And the red bar represents the true distance." }, { "start": 2465, "end": 2475, "text": " And you can see this is pretty accurate. So whenever the network predicts blue, usually the red line is on the right of the black line." }, { "start": 2475, "end": 2483, "text": " And if the network predicts no, sorry, these green and blue is the ground truth." 
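To make the binned output concrete, here is a sketch of reading a mean distance, a spread, and a contact call out of one such per-pair histogram; the bin layout and the probabilities are invented for illustration:

```python
import numpy as np

# Turning a predicted per-pair histogram over distance bins into a mean
# prediction and a spread that can serve as a confidence proxy.
bin_centers = np.linspace(2.0, 22.0, 64)            # distances in angstroms
probs = np.exp(-0.5 * ((bin_centers - 8.5) / 1.5) ** 2)
probs /= probs.sum()                                # a fake, peaked prediction

mean_dist = (probs * bin_centers).sum()
std_dist = np.sqrt((probs * (bin_centers - mean_dist) ** 2).sum())
in_contact = probs[bin_centers < 8.0].sum() > 0.5   # 8 angstrom threshold
print(mean_dist, std_dist, in_contact)
```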
}, { "start": 2483, "end": 2488, "text": " So whenever it's blue, the network's distribution is usually shifted towards the right." }, { "start": 2488, "end": 2492, "text": " And whenever it's green, the network's distribution is shifted towards the left." }, { "start": 2492, "end": 2503, "text": " There are some failure cases, as you can see right here, the network predicts a higher distance than the than the the truth." }, { "start": 2503, "end": 2517, "text": " Right. You can also see what's pretty interesting is that the most accurate predictions sort of the highest confidence, the smallest variation in distribution are around here, which is exactly around." }, { "start": 2517, "end": 2527, "text": " So 29 would be in the middle right here. And that's where you find the most accurate predictions, of course, since local local distances are much more easier." }, { "start": 2527, "end": 2537, "text": " And then as you go farther away, you get less sure. And this is a cool thing. So here you can see model prediction versus true distance fits fairly well." }, { "start": 2537, "end": 2543, "text": " But you can also see that here they plot the standard deviation of their prediction." }, { "start": 2543, "end": 2558, "text": " And you can see that the the means are very close, but the higher the sort of standard deviation, the less sure the model is." }, { "start": 2558, "end": 2567, "text": " So there seems to be a there seems to be like a built in confidence metric. Right." }, { "start": 2567, "end": 2581, "text": " So you can see the distance error it makes here are bigger and also its standard deviation is bigger at the same time, which means that you can sort of look at the standard deviation of this distribution right here." }, { "start": 2581, "end": 2598, "text": " And that is an estimate for how sure how confident the model is in its prediction. And apparently that's something that in Alpha Fold 2, the the model relies upon very, very crucially." }, { "start": 2598, "end": 2606, "text": " So here you these are just on the bottom, you see one of these residual blocks here, more distance matrices." }, { "start": 2606, "end": 2613, "text": " They do a lot of analysis in this article, which is pretty cool. So you can go into it fairly far." }, { "start": 2613, "end": 2616, "text": " They also have look at what the network pays attention to." }, { "start": 2616, "end": 2629, "text": " And it makes a lot of sense like it pays attention to kind of these these helices and then these interactions between the helices and the parts where it's close in close contact with and so on." }, { "start": 2629, "end": 2634, "text": " But now we want to go into Alpha Fold 2. Alpha Fold 2." }, { "start": 2634, "end": 2643, "text": " Now the what we have isn't much we have this graphic right here, which is also in the article." }, { "start": 2643, "end": 2651, "text": " It's probably better we go to the blog post to the blog post is like a fluff piece saying we they are going to publish a paper." }, { "start": 2651, "end": 2658, "text": " But of course, they don't have it yet because we've just gotten the results." }, { "start": 2658, "end": 2665, "text": " Yeah, they have they have these these cool these videos were like, ah, so good." }, { "start": 2665, "end": 2680, "text": " As I said, I like there's so many Twitter threads with. I'm not usually up for the hype, but this is the best thing and so on and everyone's everyone's hyping and I thought, is it really up to me to be the grumpy one here." 
}, { "start": 2680, "end": 2688, "text": " But then I couldn't find anything to be grumpy about. So this is what we what we get." }, { "start": 2688, "end": 2691, "text": " Let's see. It's it's deep mind." }, { "start": 2691, "end": 2696, "text": " I expect them to not fully maybe release the code. Maybe they will." }, { "start": 2696, "end": 2702, "text": " But in Alpha Fold 1, they've released like half the code, which is already pretty cool." }, { "start": 2702, "end": 2710, "text": " So there are open source implementations based on that. So again, nothing to be grumpy about." }, { "start": 2710, "end": 2714, "text": " All right. So what can we what can we say?" }, { "start": 2714, "end": 2719, "text": " They say a folded, folded protein can be thought of as a spatial graph." }, { "start": 2719, "end": 2723, "text": " And then this is kind of a new word they introduced." }, { "start": 2723, "end": 2730, "text": " But ultimately, it's simply this distance matrix that we've seen before is a representation of that spatial graph." }, { "start": 2730, "end": 2739, "text": " Right. It's simply a graph of nodes and the edges say whether or not they're in contact or respectively how far they are apart," }, { "start": 2739, "end": 2744, "text": " where the residues are nodes and edges connect the residues in close proximity." }, { "start": 2744, "end": 2751, "text": " This graph is important for understanding the physical interactions within proteins as well as their evolutionary history." }, { "start": 2751, "end": 2755, "text": " For the latest version of Alpha Fold used at CAS 14, that's this challenge." }, { "start": 2755, "end": 2767, "text": " We created an attention based neural network system trained end to end that attempts to interpret the structure of this graph while reasoning over the implicit graph that it's building." }, { "start": 2767, "end": 2772, "text": " I look this it's sound like this." }, { "start": 2772, "end": 2776, "text": " This is fluff. Maybe. I don't know." }, { "start": 2776, "end": 2779, "text": " But this here attention based. OK." }, { "start": 2779, "end": 2794, "text": " So I'm going to guess for sure that they've replaced this convent with and with a transformer style with an attention attention layer or multiple attention layers." }, { "start": 2794, "end": 2798, "text": " They say it uses evolutionary evolutionarily related sequences," }, { "start": 2798, "end": 2804, "text": " multiple sequence alignment and the representation of amino acid residue pairs to refine this graph." }, { "start": 2804, "end": 2809, "text": " This is this is what we've already seen." }, { "start": 2809, "end": 2819, "text": " So use these other sequences plus like a lot of stats that you can gather from the data sets on amino acid pairs in order to develop this this graph." }, { "start": 2819, "end": 2826, "text": " And the graph is distance, the distance matrix or other things we'll see in just a second." }, { "start": 2826, "end": 2837, "text": " They say by iterating this process, the system develops strong predictions of the underlying physical structure of the protein and is able to determine highly accurate structures in a matter of days." }, { "start": 2837, "end": 2845, "text": " Additionally, Alpha Fold can predict which parts of each predicted protein structure are reliable using an internal confidence measure." 
}, { "start": 2845, "end": 2852, "text": " Again, this is something that we've already sort of seen in Alpha Fold 1 that there is sort of an internal confidence measure." }, { "start": 2852, "end": 2861, "text": " And the part here is they say by iterating this process, which could mean that it's no longer just this two stage approach," }, { "start": 2861, "end": 2873, "text": " but it could be an actually fully cycling approach that sort of goes back to the neural network to refine the structure that it's building with the gradient descent procedure." }, { "start": 2873, "end": 2878, "text": " It's entirely possible. So this is the graphic of Alpha Fold 2." }, { "start": 2878, "end": 2882, "text": " You can see at the very beginning, you have protein sequence." }, { "start": 2882, "end": 2898, "text": " And at first you have this embed and outer embed and outer sum, which I'm going to guess this is just kind of features for pairs or individual amino acids." }, { "start": 2898, "end": 2902, "text": " This this is correlation statistics from your data set." }, { "start": 2902, "end": 2906, "text": " It can be chemical properties, whatever." }, { "start": 2906, "end": 2914, "text": " It's just a bunch of features that you can attach to each of these amino acids in the sequence." }, { "start": 2914, "end": 2918, "text": " The other path here is this genetic search and embed." }, { "start": 2918, "end": 2921, "text": " So this is what we've already seen with the MSA." }, { "start": 2921, "end": 2923, "text": " I told you they have the same graphic." }, { "start": 2923, "end": 2926, "text": " So there's human, there's fishy, there's rabbit." }, { "start": 2926, "end": 2930, "text": " And you simply search for sequences in your database." }, { "start": 2930, "end": 2934, "text": " It could even be from other humans that are similar." }, { "start": 2934, "end": 2939, "text": " And from that from those, you can also derive features." }, { "start": 2939, "end": 2941, "text": " So here is where I'm a bit confused." }, { "start": 2941, "end": 2946, "text": " You can see they build up this again, this square matrix right here." }, { "start": 2946, "end": 2950, "text": " I mean, this it already screamed attention before." }, { "start": 2950, "end": 2957, "text": " Right. So I'm going to guess they no longer limit themselves to the maybe maybe to the 64 by 64." }, { "start": 2957, "end": 2960, "text": " Maybe they do something bigger." }, { "start": 2960, "end": 2962, "text": " Maybe they use local attention. Who knows?" }, { "start": 2962, "end": 2979, "text": " I'm going to guess they use attention to and these this here is simply given by an attention layer of some sort to go into the next to just this is basically I would guess this is a big transformer right here." }, { "start": 2979, "end": 2989, "text": " The interesting part is that it appears to interact much like much like the original transformer, maybe encoder decoder here." }, { "start": 2989, "end": 2991, "text": " They pass information around." }, { "start": 2991, "end": 3005, "text": " So this top thing isn't amino acid sequence to amino acid sequence like to itself, but it appears to be a matrix that you build up between the amino acid sequence and these sequences you built." }, { "start": 3005, "end": 3016, "text": " So I would guess that they are no longer, let's say happy with simply inputting the features of these algorithms that go over these other sequences." 
}, { "start": 3016, "end": 3025, "text": " But now they also want to sort of put these features through through steps of transformations." }, { "start": 3025, "end": 3028, "text": " So again, I would guess this is an attention layer." }, { "start": 3028, "end": 3030, "text": " And how can we interpret this matrix?" }, { "start": 3030, "end": 3038, "text": " As you can see, this matrix relates individual amino acids in the sequence to other species." }, { "start": 3038, "end": 3053, "text": " So I would guess that this square here represents something like how important is this particular location in the chain, which is a purple thing in the human." }, { "start": 3053, "end": 3067, "text": " How important is that in the in the in the chicken or how related is that to the chicken at that particular position or as a whole?" }, { "start": 3067, "end": 3070, "text": " I don't know. Probably DeepMind doesn't know." }, { "start": 3070, "end": 3073, "text": " Like they probably just ship these features in here, right?" }, { "start": 3073, "end": 3077, "text": " And then they just ship it through transformers." }, { "start": 3077, "end": 3079, "text": " They pass information around." }, { "start": 3079, "end": 3087, "text": " I don't know whether it's just in this direction and then in this direction or whether there's like an arrow right here conceivably." }, { "start": 3087, "end": 3094, "text": " But in any case, it seems like they've replaced what was a conv net." }, { "start": 3094, "end": 3097, "text": " So no longer friends with ConvNet." }, { "start": 3097, "end": 3102, "text": " New best friend is transformer." }, { "start": 3102, "end": 3109, "text": " And then at the end, you see what they get out is these pairwise distances again." }, { "start": 3109, "end": 3114, "text": " Now, it's also not really clear because I would expect maybe an arrow going like this." }, { "start": 3114, "end": 3119, "text": " If they again use these pairwise distances to predict the structure." }, { "start": 3119, "end": 3120, "text": " I don't know." }, { "start": 3120, "end": 3121, "text": " OK." }, { "start": 3121, "end": 3123, "text": " Or if that's just a side output." }, { "start": 3123, "end": 3128, "text": " I would guess they still actually use the pairwise distances and the confidence score." }, { "start": 3128, "end": 3138, "text": " Again, you can it might be something very similar that we saw again being the sort of standard deviation on the predicted distances." }, { "start": 3138, "end": 3140, "text": " But they could also refine that." }, { "start": 3140, "end": 3152, "text": " And then the last thing is I don't know if this iterative process is simply referring to there being multiple layers of this attention and passing around." }, { "start": 3152, "end": 3158, "text": " So the passing around will simply be like you stack the representations on top of each other." }, { "start": 3158, "end": 3169, "text": " I don't know if this is the iterative procedure or if there is actually like the structure module actually sort of builds the structure and then goes back." }, { "start": 3169, "end": 3175, "text": " And then you consult the neural network again and then you build some more of the structure and so on." }, { "start": 3175, "end": 3186, "text": " I can't tell right now. It's quite conceivable that they they do like that the search here is not only gradient descent, but is actually informed by the neural network." 
}, { "start": 3186, "end": 3190, "text": " So you sort of go back and refine, though I don't know." }, { "start": 3190, "end": 3202, "text": " There doesn't seem to be any features in the neural networks that would represent that would represent whatever you could read from a partially built 3D model." }, { "start": 3202, "end": 3209, "text": " So, you know, the boring guess is that the part two is very is a lot of the same." }, { "start": 3209, "end": 3213, "text": " But there could also be substantial improvements in that part." }, { "start": 3213, "end": 3221, "text": " All right. I hope this was this was sort of a good overview." }, { "start": 3221, "end": 3224, "text": " So, as I said, the paper isn't out yet." }, { "start": 3224, "end": 3237, "text": " If you want to cite this, I guess you can you can refer to the blog post and here they say until we've published a paper on this work, please cite high accuracy instruction prediction using deep learning by these people." }, { "start": 3237, "end": 3245, "text": " I just want to highlight shout out to to Anna, who was educated right here." }, { "start": 3245, "end": 3253, "text": " She was an intern. So in a way, I'm actually saying that this is my discovery and I take full responsibility for it." }, { "start": 3253, "end": 3257, "text": " You're welcome. World shout out to Anna." }, { "start": 3257, "end": 3262, "text": " Very nice job. Good work. Good work to all of these people." }, { "start": 3262, "end": 3265, "text": " Yeah, I hope that was enough." }, { "start": 3265, "end": 3273, "text": " If I got something horribly wrong, please tell me in the comments and share the video out if you liked it." }, { "start": 3273, "end": 3283, "text": " Other than that, have fun. Bye bye." } ]
kOy49NqZeqI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
[ "Science & Technology" ]
[ "machine learning", "ml", "ai", "artificial intellgence", "deepmind", "reinforcement learning", "deep rl", "a2c", "a3c", "actor", "critic", "distributed", "scale", "bias", "off-policy", "policy gradient", "deepmind lab", "vtrace" ]
Policy Gradient RL on a massively distributed scale with theoretical guarantees! Abstract: In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach. Authors: Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu https://arxiv.org/abs/1802.01561 https://github.com/deepmind/scalable_agent
Hi there! Today we're looking at IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures, by Lasse Espeholt, Hubert Soyer, Remi Munos and others. This paper deals with a new architecture for deep reinforcement learning, specifically distributed deep reinforcement learning: settings where you go beyond one single machine, or beyond one single accelerator like a GPU. I want to introduce this by showing you this task here. This is called DeepMind Lab, and DeepMind Lab is a kind of 3D environment, as you can see here. These are screenshots with very different goals: some, as you can see, are labyrinth-style tasks where you have to collect apples, some are platformers where you, I guess, have to jump around, or where you have to find objects. DeepMind introduced this as a reinforcement learning environment, and the agent, as you can see here, has a camera: it perceives pixels, and it can get rewards for performing actions. The actions it can perform: it can walk back and forth, it can jump, it can crouch, it can rotate. So it has a limited set of actions, but it can move around in this 3D world, and it needs to achieve some goals. This is a good setting for reinforcement learning, and this paper doesn't do a whole lot of new things in terms of reinforcement learning itself, but it does a lot of things to make it work in a distributed setting. Usually what you would like to do is something like A2C. A2C is advantage actor-critic learning, a very successful algorithm in reinforcement learning. We won't go into it much here, but its basic elements are two things. First, you have a policy, usually called pi, into which you input your current state, i.e. your current observation at time t, and an action a that you want to score. As we saw before, you can walk left, walk right and so on, so you might have ten actions or so. So in here you would put action one, or action two, or action three, and for these, each time with the same state, you would get a probability distribution over the actions, something like this; so here you should probably go with action three. That's your policy function: the policy pi tells you, in this particular state, which action to take and how often, in the form of a distribution. The second thing you want is what's called a value function. The value function V (capital V, usually) takes your state as input and outputs the value of that state, usually written as a lowercase v. To see what the value of a state means: say you're in a maze (I'm going to draw a maze from the top here; here is the goal) and you are right here, the green dot. You have the choice of going forward, to the left, or to the right. This is where your policy comes in: you would ask your policy, and a1 would maybe be go forward, a2 go to the left, a3 go to the right, so your policy would decide what to do.
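To make those two objects concrete, here is a minimal actor-critic sketch in PyTorch. This is purely illustrative, not the paper's implementation; all sizes and names here are made up. It shows a shared torso with a policy head pi(a|s) and a value head V(s).

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Toy actor-critic: shared torso, policy head pi(a|s), value head V(s)."""

    def __init__(self, obs_dim: int = 16, n_actions: int = 10):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.policy_head = nn.Linear(64, n_actions)  # logits over actions
        self.value_head = nn.Linear(64, 1)           # scalar state value

    def forward(self, obs: torch.Tensor):
        h = self.torso(obs)
        pi = torch.distributions.Categorical(logits=self.policy_head(h))
        v = self.value_head(h).squeeze(-1)
        return pi, v

# Usage: sample an action from pi and read off the state's value.
model = ActorCritic()
obs = torch.randn(1, 16)        # stand-in for the (flattened) pixel observation
pi, v = model(obs)
action = pi.sample()            # "which action should I take?"
print(action.item(), v.item())  # v: "how good is this state?"
```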
Your value function, however, assigns a value to each of the states: where you are, plus where you could go. Basically, for each state in the system it gives you a value. In this case it would probably give a very high value here (this is a good spot, because you're very close to the goal), this is probably not such a good spot, and this is a very bad spot, because you're heading into a corner and actually moving farther away from the goal. So if your value function is trained well, you can use it to assess your situation: for each state s it gives you a numerical value of how good that state is in terms of reaching your goal. The A2C algorithm deals with the interplay of these two; it actually uses each one to teach the other, and this interplay makes for a very successful reinforcement learning algorithm. Now, the way A2C does it, as you can see here (there are two variants, synced step and synced trajectories), is that in essence it has to run episodes, and these here are steps in the episodes; let's say an episode is four steps. Only then can it do the learning part, which is the orange thing here. Once it has done a step of learning, it has to run episodes again, and then it can learn again. That's because of a limitation of this approach, which is called on-policy learning. In on-policy learning you always want your update step (the orange part) to be fed with fresh data: all of these steps here go into the update step, and it's necessary that the steps you make the updates from are computed with the most current version of the agent. Maybe I should explain. The agent is this box, and the agent has this policy, and with this policy, as we saw, it will interact with the world outside of itself. The world gives back observations, and it interacts again. First it says "move a step forward", and the world replies "you are no longer here, you've moved here"; then it says "I want to move to the left", and the world says "okay, you're no longer here, you've moved one to the left". On the right here are the observations, and on the left here are the actions. For A2C it is necessary that a current version of the policy generates these steps in order to be able to learn from them, and the next steps also need to be current to be learned from. Now, there have been attempts to decentralize this, and that is exactly what IMPALA does. IMPALA splits this into multiple workers; you can think of them as different machines. So there is a split here: these are called actors, and this is called a learner. The actors go ahead and run episodes on their own, and occasionally they communicate those episodes to the learner, while the learner continuously learns. So these orange learning steps can happen in much quicker succession, and they don't have to be synchronized as in A2C.
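Schematically, the decoupling looks like the sketch below, with Python threads standing in for separate machines and a toy version bump standing in for a real gradient update. The actual system is distributed across many machines; every name here is invented for illustration.

```python
import queue
import threading
import copy
import random

experience_q = queue.Queue()     # actors -> learner: finished trajectories
latest_params = {"version": 0}   # learner -> actors: newest policy (toy stand-in)

def run_episode(params):
    """Stub: pretend to act in the environment with the given (possibly stale) policy."""
    return [("obs", random.randrange(3), 1.0) for _ in range(5)]  # (state, action, reward)

def actor(n_episodes):
    for _ in range(n_episodes):
        params = copy.deepcopy(latest_params)  # sync the weights once, then run freely
        trajectory = run_episode(params)       # may be stale by the time the learner sees it
        experience_q.put((params["version"], trajectory))

def learner(n_updates):
    for step in range(1, n_updates + 1):
        behaviour_version, trajectory = experience_q.get()
        # The trajectory came from an older policy ("mu"); the learner's current
        # policy ("pi") has moved on. This lag is exactly what V-trace corrects.
        latest_params["version"] = step        # "learning" = bumping the version

threads = [threading.Thread(target=actor, args=(4,)) for _ in range(3)]
threads.append(threading.Thread(target=learner, args=(12,)))  # 3 actors x 4 episodes
for t in threads:
    t.start()
for t in threads:
    t.join()
```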
Here is another way of seeing this, over here; we'll just concentrate on the left part. There is a learner, and there are actors, and every now and then an actor syncs its model from the learner. These are different machines, so this happens over the network: every now and then the actor gets an update of the newest policy network, and then the actor just goes ahead and runs episodes using that policy (episode, episode, episode; step, step, step) without interfering with anything else. Once it has run an episode, or multiple ones, it communicates them back to the learner, and if all the actors do this, the learner gets a whole bunch of these episodes and can learn from all of them simultaneously, in very fast succession, as you see here. So the work is split. Of course, you run into a problem: as we saw with the A2C algorithm, this type of reinforcement learning basically requires that you always run the episodes with the current model, and that's not the case here. The actors do sync the parameters once in a while, but then they run their episodes, and in the meantime the learner has continually been updating the model while the actor still holds an old one. So these episodes are run with an old model, and if the learner tries to learn from them, it must correct for this fact. The big theoretical contribution of this paper is how to correct for the fact that the data you learn from comes from an outdated policy, and this is what's called the V-trace correction. Without going too much into the details, V-trace correction works as follows. You define what are called V-trace targets, which are basically the targets you train your value function towards (the value function, as we discussed before, being the thing that tells you how good each state is; and by the way, you also use these V-trace corrections in the policy updates). They are defined as follows: the V-trace target for step s is the value function at step s plus a correction term. I want to break this down some more. The target v_s is whatever your value function says at state s, plus a sum over all future steps of the trajectory, where each summand carries a discount factor and a kind of delta from one step to the next. So you're in an episode, you've made some steps, and let's say we are here; this is s. Your little v_s will be the value function at s, plus a correction for each step you take going into the future. The main part of each summand is this term here, which is basically the reward at that step plus the difference of the value functions of the adjacent steps. What V-trace now introduces is this extra bit here, and these c_i are computed as shown; all of this is very nested, there's a big product in here, but at the very core of it you can see the following: these V-trace corrections are a ratio between pi and mu, where pi is the policy of the learner, that is, the current policy, and mu is the policy that was used to generate the episode. The ratio is truncated by taking a minimum, and usually the truncation level c-bar is one.
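Written out (this is the definition from the paper, which the video only gestures at on the slide), the n-step V-trace target is

```latex
v_s = V(x_s) + \sum_{t=s}^{s+n-1} \gamma^{t-s} \Big( \prod_{i=s}^{t-1} c_i \Big) \delta_t V,
\qquad \delta_t V = \rho_t \big( r_t + \gamma V(x_{t+1}) - V(x_t) \big),
```

with the truncated importance ratios

```latex
c_i = \min\!\Big( \bar{c},\ \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)} \Big),
\qquad
\rho_t = \min\!\Big( \bar{\rho},\ \frac{\pi(a_t \mid x_t)}{\mu(a_t \mid x_t)} \Big).
```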
So let's consider what happens. Say mu is higher than pi for a given action a_i; what does that mean? It means that in the past you ran an episode: you are in this maze, you're here, the goal, let's say, is down here, and the action under consideration goes over here. Now mu, the old policy that the actor synced at some point, might say this action is very good, because it moves you towards the goal. But the learner has been learning since the actor synchronized its weights, and pi might say: wait, since you decided this, I have actually learned that it might not be such a good move, because there's a wall here, and I'd rather go down here and then over here. So since pi is low and mu is higher, V-trace will down-weight this action. This is how you correct for the fact that there are old weights: by down-weighting wherever the old policy rated an action higher than the new policy does. You make up for the staleness by assuming the new policy knows better, because it has learned more, and therefore giving lower weight to the data points where the two policies have diverged a lot. That's the core of it. You can also think of it like this: maybe at this step here, the old policy that the actor holds says we should do action one, but the new policy that the learner has in the meantime trained says we should now do action two. If that is the case, then the whole rest of the episode is down-weighted, because it no longer reflects current knowledge. And this is not just a heuristic: they actually prove that it comes with some guarantees. In particular, it reduces to the classic reinforcement learning algorithms if you assume that mu always equals pi, i.e. the behavior policy is the current policy, so you're back in the on-policy setting. All right, that was a bit of a lengthy explanation of the math behind it. At the end, what you do is the following. You train your value function using this update; you can see it's simply the gradient of the value function, scaled by a term that contains the V-trace target. Then you update your policy in this direction, which is the classic REINFORCE-style policy update: the gradient of the log-policy, weighted by a reward term, which here is the reward plus the discounted V-trace target of the next step, minus the value baseline, a variance-reducing term. The final piece is an entropy penalty, where you push the entropy of your policy up so that the agent is biased towards exploring rather than just exploiting, if you know the classic exploration-exploitation dilemma. So that's what you do: compute these V-trace targets, and update your value function and policy according to these equations, and there you go.
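As a compact, self-contained sketch of these computations, here is my own NumPy paraphrase of the equations above, with toy data; this is not the official code, which lives in the scalable_agent repository mentioned at the end.

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap, log_pi, log_mu,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace targets v_s for one trajectory of length T.

    rewards, log_pi, log_mu: shape (T,); log_pi / log_mu are log-probs of the
    taken actions under the learner's and the actor's policy respectively.
    values: shape (T,), the learner's V(x_s); bootstrap: scalar V at the horizon.
    """
    ratios = np.exp(log_pi - log_mu)                 # pi / mu
    rhos = np.minimum(rho_bar, ratios)               # truncated with rho-bar
    cs = np.minimum(c_bar, ratios)                   # truncated with c-bar
    values_next = np.append(values[1:], bootstrap)
    deltas = rhos * (rewards + gamma * values_next - values)  # delta_t V

    # Backward recursion: v_s = V(x_s) + delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    vs = np.zeros(len(rewards))
    acc = 0.0
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * cs[t] * acc
        vs[t] = values[t] + acc
    return vs, rhos

# Toy usage with random numbers standing in for a real trajectory.
rng = np.random.default_rng(0)
T = 5
vs, rhos = vtrace_targets(rng.normal(size=T), rng.normal(size=T), 0.0,
                          np.log(rng.uniform(0.1, 1, T)), np.log(rng.uniform(0.1, 1, T)))
# The value loss then pushes V(x_s) towards v_s; the policy gradient is weighted
# by rho_s * (r_s + gamma * v_{s+1} - V(x_s)); an entropy bonus is added on top.
```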
So what does IMPALA do specifically? On DeepMind Lab they have two architectures, a small one and a large one, and they try these out and measure how many frames per second they can process. You see that on a single machine, compared to A3C, they bring in a lot more frames per second; and that's just on a single machine. In the distributed setting, the scale-up they reach is also very significant. That's because nothing has to wait for anything else: everything basically runs at full speed and in parallel, and the fact that some of the information is old is corrected for by V-trace. The last thing I want to show is the wall-clock time; I think this is the important plot. On DeepMind Lab, over all the tasks, plotting score against wall-clock time, you can see that A3C does increase over time, but the IMPALA variants up here increase much, much faster in wall-clock time. So that's the paper. They have a lot of proofs in the appendix, which I'm not going to go over. If you want to give it a try: on GitHub it is not called Impala, it is called, I think, scalable_agent, but you'll find it if you search for "Impala github" or something like this. Other than that, thanks for listening, and see you next time!
[ { "start": 0, "end": 5.88, "text": " Hi there! Today we're looking at Impala, scalable distributed deep RL with" }, { "start": 5.88, "end": 11.48, "text": " importance-weighted actor learner architectures by Lasse Espejolt, Hubert" }, { "start": 11.48, "end": 18.48, "text": " Sawyer, Remy Munoz and Al. So this paper deals with a new architecture for deep" }, { "start": 18.48, "end": 23.44, "text": " reinforcement learning, specifically distributed deep reinforcement learning." }, { "start": 23.44, "end": 29.88, "text": " So that means settings where you go beyond one single machine or beyond one" }, { "start": 29.88, "end": 35.4, "text": " single accelerator like a GPU. So I want to introduce this by showing you this" }, { "start": 35.4, "end": 41.6, "text": " task here. This is called the DeepMind lab and the DeepMind lab is a kind of a" }, { "start": 41.6, "end": 47.44, "text": " 3D environment as you can see here. These are screenshots where they're very" }, { "start": 47.44, "end": 51.239999999999995, "text": " different goals but some of this as you can see are kind of labyrinth style" }, { "start": 51.239999999999995, "end": 56.96, "text": " things where you have to collect apples, some are platformers where you I guess" }, { "start": 56.96, "end": 62.120000000000005, "text": " have to jump around and so on or find objects. So the DeepMind introduced this" }, { "start": 62.120000000000005, "end": 69.76, "text": " as kind of a as an reinforcement learning environment and what you can do" }, { "start": 69.76, "end": 75.76, "text": " the agent as you can see here has a camera it perceives pixels and it can" }, { "start": 75.76, "end": 81.6, "text": " get rewards for performing actions. The actions it can perform is it can you" }, { "start": 81.6, "end": 86.92, "text": " know walk back and forth, it can jump, it can crouch, it can rotate. So this is" }, { "start": 86.92, "end": 91.12, "text": " kind of a limited set of actions that it can do but it can move around in this" }, { "start": 91.12, "end": 97.28, "text": " 3D world and it needs to achieve some goals. So that usually this is" }, { "start": 97.28, "end": 103.96000000000001, "text": " kind of a good setting for reinforcement learning and this paper doesn't" }, { "start": 103.96000000000001, "end": 109.04, "text": " do a whole lot of new things in terms of reinforcement learning but it does a lot" }, { "start": 109.04, "end": 115.88, "text": " of things to kind of make it work in a distributed setting. So usually what you" }, { "start": 115.88, "end": 121.96, "text": " would like to do is something like A2C. A2C is advantage actor critic learning" }, { "start": 121.96, "end": 128.04, "text": " and it's a very successful algorithm in reinforcement learning. We won't go into" }, { "start": 128.04, "end": 135.2, "text": " this much here but basic elements of it is you have are two things you have" }, { "start": 135.2, "end": 141.07999999999998, "text": " a policy and usually this is called PI, sorry about that, usually this is called" }, { "start": 141.08, "end": 146.36, "text": " PI policy that you input your current state so your current observation at" }, { "start": 146.36, "end": 156.52, "text": " time t and you want to score an action right action A. Now you might have maybe" }, { "start": 156.52, "end": 160.8, "text": " as we saw before you can walk left walk right and so on so you might have ten" }, { "start": 160.8, "end": 169.52, "text": " actions or so. 
So in here you would put action one or action two or action three" }, { "start": 169.52, "end": 174.84, "text": " and for this you would get probability distributions over each action so maybe" }, { "start": 174.84, "end": 182.24, "text": " in this particular state so each time with the same state. So you would get a" }, { "start": 182.24, "end": 188.76000000000002, "text": " distribution something like this right so here you should probably go with" }, { "start": 188.76000000000002, "end": 195.44, "text": " action three. That's your policy function. Policy function PI tells you in this" }, { "start": 195.44, "end": 200.44, "text": " particular state which action should you take how often kind of gives you" }, { "start": 200.44, "end": 206.35999999999999, "text": " distribution. The second thing you want is a what's called a value function so" }, { "start": 206.35999999999999, "end": 212.72, "text": " the value function V, capital V usually, you input your state and it will output" }, { "start": 212.72, "end": 220.4, "text": " it will output what the value is of that state and that's usually termed kind of" }, { "start": 220.4, "end": 227.76000000000002, "text": " as a lowercase V. The value of the state is given if you're in a maze right I'm" }, { "start": 227.76000000000002, "end": 236.08, "text": " gonna draw maze from the top here right you can't reach there like here so here" }, { "start": 236.08, "end": 245.04000000000002, "text": " is the goal and let's say you are oops you're right here the green right and" }, { "start": 245.04, "end": 252.2, "text": " you have the choice of going forward to the right or to the left. Now this" }, { "start": 252.2, "end": 257.84, "text": " would be your policy here. You would ask your policy and A1" }, { "start": 257.84, "end": 264.12, "text": " would maybe be go forward A2 go to the left A3 to the right so your policy" }, { "start": 264.12, "end": 270.12, "text": " would decide what to do. 
Your value function however would decide in each of" }, { "start": 270.12, "end": 275.04, "text": " the states so where you are plus where you could go here here here so basically" }, { "start": 275.04, "end": 279.8, "text": " for each state in the system it would give you a value in particular in this" }, { "start": 279.8, "end": 285.88, "text": " case it would probably give you a very very high value here like yeah this is" }, { "start": 285.88, "end": 290.64, "text": " a good point because you're very close to the goal right here this is probably" }, { "start": 290.64, "end": 296.16, "text": " not so good a point and this is a very bad point because you're you're going to" }, { "start": 296.16, "end": 300.66, "text": " corner you're actually moving farther away from the goal so if your value" }, { "start": 300.66, "end": 306.68, "text": " function is trained well then you can you can use that also to assess your" }, { "start": 306.68, "end": 313.04, "text": " situation so the value function for each state s it will give you a numerical" }, { "start": 313.04, "end": 320.20000000000005, "text": " value of how good that state is in terms of reaching your goal and the A2C" }, { "start": 320.2, "end": 326.47999999999996, "text": " algorithm now deals with the interplay of these the A2C uses actually both of" }, { "start": 326.47999999999996, "end": 334.52, "text": " these in an interplay so it will use one to teach the other one right and this" }, { "start": 334.52, "end": 340, "text": " interplay between those gives makes for a very successful reinforcement learning" }, { "start": 340, "end": 347.2, "text": " algorithm now the way A2C does it is as you can see here what it does is it has" }, { "start": 347.2, "end": 352.64, "text": " to there are two variants here think synced step and synced trajectories but" }, { "start": 352.64, "end": 358.24, "text": " in essence it has to run these episodes and these here are steps in the episodes" }, { "start": 358.24, "end": 363.4, "text": " and let's say an episode as four steps before it can do the learning part and" }, { "start": 363.4, "end": 367.48, "text": " the learning part is here the orange thing once it has done a step of" }, { "start": 367.48, "end": 372.91999999999996, "text": " learning it has to run episodes again and then it has can do learning again" }, { "start": 372.92, "end": 377.96000000000004, "text": " and that's because of a limitation of this which is called on policy learning" }, { "start": 377.96000000000004, "end": 384.36, "text": " so on in on policy learning you always want to have your update step which is" }, { "start": 384.36, "end": 390.88, "text": " the orange part to be fed with data so the this all of these app all of these" }, { "start": 390.88, "end": 396.24, "text": " steps here go into this update steps and it's necessary that the steps that you" }, { "start": 396.24, "end": 402.84000000000003, "text": " make the updates from are computed with kind of the most current version of the" }, { "start": 402.84, "end": 407.71999999999997, "text": " of the agent right so that the agent will go into the world make some steps" }, { "start": 407.71999999999997, "end": 413.67999999999995, "text": " using its neural network maybe I should explain so that the agent right is this" }, { "start": 413.67999999999995, "end": 420.12, "text": " box and the agent has this policy right and with this policy as we saw it will" }, { "start": 420.12, "end": 425.64, "text": " go and it will interact with the world right outside of itself 
and it will kind" }, { "start": 425.64, "end": 430.55999999999995, "text": " of the world will give back observations and it will then interact again so you" }, { "start": 430.56, "end": 435.6, "text": " can move a step forward right first first thing is move the step step forward" }, { "start": 435.6, "end": 441.12, "text": " and then the world gives it back a high you are now no longer here you've moved" }, { "start": 441.12, "end": 445.56, "text": " here right and then it's on I want to move to the left and the world says okay" }, { "start": 445.56, "end": 450.72, "text": " so you're no longer here you've moved one to the left and this on the right" }, { "start": 450.72, "end": 455.76, "text": " here are the observations and is on the left here are the actions and for the a" }, { "start": 455.76, "end": 459.64, "text": " to see is kind of necessary that we always have a current version of the" }, { "start": 459.64, "end": 466.44, "text": " policy generating these steps in order to be able to learn from them and then" }, { "start": 466.44, "end": 472.15999999999997, "text": " the next steps also need to be kind of current to be learned now there have" }, { "start": 472.15999999999997, "end": 478.59999999999997, "text": " been attempts to decentralize this and is exactly what impala does impala" }, { "start": 478.59999999999997, "end": 486.8, "text": " splits this into multiple workers you can think of this as different machines" }, { "start": 486.8, "end": 492.92, "text": " so there is a split here and these are called actors and this is called a" }, { "start": 492.92, "end": 498.92, "text": " learner now the actors they will go ahead and they will run episodes on" }, { "start": 498.92, "end": 505.08000000000004, "text": " their own right occasionally or they will run episodes and they will" }, { "start": 505.08000000000004, "end": 510.36, "text": " communicate those episodes to the learner and the learner will continuously" }, { "start": 510.36, "end": 515.64, "text": " here learn so these orange steps can be made in much more quick success" }, { "start": 515.64, "end": 523.76, "text": " succession and don't have to be like synchronized as in the a to see here is" }, { "start": 523.76, "end": 527.88, "text": " another way of seeing this over here and we'll just concentrate on this on this" }, { "start": 527.88, "end": 534.24, "text": " left thing here so there is a learner and there are actors and every now and" }, { "start": 534.24, "end": 539.6, "text": " then the actor sinks its model from the learner these are different machines so" }, { "start": 539.6, "end": 543.52, "text": " this can happen over the network every now and then the actor gets like an" }, { "start": 543.52, "end": 549, "text": " update of the newest policy network and then the actor will just go ahead and" }, { "start": 549, "end": 555.88, "text": " run episodes using that policy right episode episode episode episode step" }, { "start": 555.88, "end": 561.06, "text": " steps without interfering with anything else and then once it has run an episode" }, { "start": 561.06, "end": 565.72, "text": " or multiple ones it will communicate this back to the learner and if all the" }, { "start": 565.72, "end": 571.1999999999999, "text": " actors do this all right the learner gets a whole bunch of these episodes and" }, { "start": 571.2, "end": 577.5200000000001, "text": " then can learn from all of them simultaneously and it can do so in kind" }, { "start": 577.5200000000001, "end": 583.36, "text": " of with in in kind of 
very fast succession as you see here so the work" }, { "start": 583.36, "end": 589.9200000000001, "text": " is split of course you run into a problem namely as we saw in the a to see" }, { "start": 589.9200000000001, "end": 596.84, "text": " algorithm this type of reinforcement learning requires basically that you" }, { "start": 596.84, "end": 602.32, "text": " always run the episode with the current model and that's not the case here right" }, { "start": 602.32, "end": 608.88, "text": " the actor may sink the parameters they sink the parameters once in a while but" }, { "start": 608.88, "end": 614.12, "text": " then it will run these episodes right when it runs these episodes here it has" }, { "start": 614.12, "end": 621.84, "text": " no idea or it the learner in the meantime has continually been updating" }, { "start": 621.84, "end": 627.2800000000001, "text": " the model while the actor kind of has an old model so these episodes here are run" }, { "start": 627.2800000000001, "end": 633.12, "text": " with an old model so the learner if it tries to learn from this must kind of" }, { "start": 633.12, "end": 638.4, "text": " correct for this fact and the big kind of theoretical contribution of this" }, { "start": 638.4, "end": 644.88, "text": " paper is how to correct for the fact that the data you learn from comes from" }, { "start": 644.88, "end": 653.76, "text": " an outdated policy model and this is what's called V trace correction so" }, { "start": 653.76, "end": 663.32, "text": " without going too much into the into the details here V trace correction happens" }, { "start": 663.32, "end": 669.76, "text": " as as fall it happens as follows so what you define are what's called V trace" }, { "start": 669.76, "end": 676.08, "text": " targets and these V trace targets are basically the targets that you train" }, { "start": 676.08, "end": 684.36, "text": " your value function towards right so the the the value function as we discussed" }, { "start": 684.36, "end": 690.76, "text": " before that is a that is the thing that tells you how good each state is and the" }, { "start": 690.76, "end": 696.16, "text": " targets you train this towards and you're also by the way using this V V" }, { "start": 696.16, "end": 704.52, "text": " trace corrections in policy updates but these are defined as follows so the V" }, { "start": 704.52, "end": 712.04, "text": " trace target for step s is the value function at step s plus this correction" }, { "start": 712.04, "end": 720.0799999999999, "text": " thing and the the correction thing basically well I've I want to break this" }, { "start": 720.08, "end": 730.5200000000001, "text": " down some more so the V at current s is your value function plus and this is a" }, { "start": 730.5200000000001, "end": 738.12, "text": " sum over all future steps over and this is a discount factor and this is kind of" }, { "start": 738.12, "end": 743.0600000000001, "text": " a delta from one step to the next so you're in an episode and you've made" }, { "start": 743.06, "end": 754.4, "text": " some steps right and let's say we are here right this is s and so your your" }, { "start": 754.4, "end": 766.8, "text": " little V s will be whatever your value function says of s plus kind of a" }, { "start": 766.8, "end": 773.4, "text": " correction for each step that you make go into the future like this and the" }, { "start": 773.4, "end": 781.3599999999999, "text": " main part of these is is this here which is basically the reward at the step plus" }, { "start": 781.3599999999999, 
"end": 787.8, "text": " the difference of the value functions of the steps after it and what V trace" }, { "start": 787.8, "end": 798.76, "text": " introduces now is this bit here and these CI again are computed as such so" }, { "start": 798.76, "end": 802.68, "text": " all of this kind of is very nested so there is a there's a big multiplication" }, { "start": 802.68, "end": 808.24, "text": " here it's a very nested thing but in the very very very core of it you can see" }, { "start": 808.24, "end": 816.3599999999999, "text": " the following these V trace corrections are a ratio between pi and mu and pi is" }, { "start": 816.36, "end": 824.32, "text": " the policy of the learner that is the current policy and mu is the policy that" }, { "start": 824.32, "end": 832.88, "text": " has been used to generate the to generate the episode and this is truncated" }, { "start": 832.88, "end": 838.64, "text": " by a minimum and usually the C bar is one so let's consider what happens here" }, { "start": 838.64, "end": 848.56, "text": " what happens is let's say that mu is higher than pi for a given pair of AI" }, { "start": 848.56, "end": 855.76, "text": " index a what does it mean it means that in the past you run an episode you come" }, { "start": 855.76, "end": 867.96, "text": " you are in this maze right such to them and you're here right now the and the" }, { "start": 867.96, "end": 879.2800000000001, "text": " goal let's say the goal is down here and the action is going over here that" }, { "start": 879.2800000000001, "end": 886.2, "text": " does the action that you're considering here now your mu which is your old" }, { "start": 886.2, "end": 892.52, "text": " policy that the actor has synced at some point mu might say this is very good" }, { "start": 892.52, "end": 901.84, "text": " right because it moves you towards the goal more but then your your pie the" }, { "start": 901.84, "end": 906.1999999999999, "text": " learner has been learning since the eight since the agent the actor has" }, { "start": 906.1999999999999, "end": 910.8, "text": " synchronized the weights the learner has been learning and the learner might know" }, { "start": 910.8, "end": 916.96, "text": " wait wait since you have decided this I have actually learned that this might" }, { "start": 916.96, "end": 922.24, "text": " not be such a good move because you know there's a wall here and I'd rather go" }, { "start": 922.24, "end": 930.4, "text": " down here and then over here so what it will do it will since pi is low and mu" }, { "start": 930.4, "end": 935.96, "text": " is higher it will down weigh this action and this is how you correct for the fact" }, { "start": 935.96, "end": 942.36, "text": " that there are old weights by basically down weighing wherever the old policy" }, { "start": 942.36, "end": 948.12, "text": " thought of an action as being worth more than the new policy does and this is how" }, { "start": 948.12, "end": 951.76, "text": " you make up for the fact that the new policy you assume it knows better" }, { "start": 951.76, "end": 956.68, "text": " because it has learned more and thereby you you give lower weight to the data" }, { "start": 956.68, "end": 963.36, "text": " points where the the policies have diverged a lot so that's at the core of" }, { "start": 963.36, "end": 974.24, "text": " it and you can think of in terms of here you can think of it as maybe here at" }, { "start": 974.24, "end": 980.2, "text": " this step you're at a point where the old policy that the actor has has" }, { "start": 
980.2, "end": 988.5200000000001, "text": " updated itself to says we should do action one right but the new policy that" }, { "start": 988.5200000000001, "end": 993.84, "text": " the learner has in the meantime has learned more says now we should do action" }, { "start": 993.84, "end": 1002.84, "text": " two and if this is the case then this whole rest of the episode is down weight" }, { "start": 1002.84, "end": 1009.24, "text": " because it is no longer current knowledge right and this is not just" }, { "start": 1009.24, "end": 1014.5600000000001, "text": " kind of a heuristic but they actually do prove that this this this comes with" }, { "start": 1014.5600000000001, "end": 1018.24, "text": " some guarantees especially reduces to kind of the classic reinforcement" }, { "start": 1018.24, "end": 1023.84, "text": " algorithms if you assume that mu is always pi so that current policy is the" }, { "start": 1023.84, "end": 1027.76, "text": " old policy and therefore you're in the old setting alright so this was a bit of" }, { "start": 1027.76, "end": 1035.04, "text": " a lengthy explanation of the math behind it and at the end what you do is" }, { "start": 1035.04, "end": 1043.1599999999999, "text": " following you train your value function using this update and you can see here" }, { "start": 1043.1599999999999, "end": 1048.24, "text": " it's simply the gradient of the value function scaled by the thing that" }, { "start": 1048.24, "end": 1055.1599999999999, "text": " contains this V trace target right you then you update your policy in this" }, { "start": 1055.1599999999999, "end": 1060.68, "text": " direction and this is the classic reinforcement learning reinforce style" }, { "start": 1060.68, "end": 1068.72, "text": " policy update where here you have the gradient of the of the policy and here" }, { "start": 1068.72, "end": 1076.44, "text": " you have the weighing by the reward and specifically here it is the reward plus" }, { "start": 1076.44, "end": 1084.44, "text": " this V trace target and this thing here is a bias correction or a bias reducing" }, { "start": 1084.44, "end": 1092.92, "text": " sorry variance reducing bias that was terrible the final form is what's called" }, { "start": 1092.92, "end": 1099.28, "text": " an entropy penalty where you want to push the entropy of your policy up such" }, { "start": 1099.28, "end": 1105.92, "text": " that the agent kind of is biased towards exploring more than exploiting if you" }, { "start": 1105.92, "end": 1110, "text": " know of the classic exploration exploitation dilemma so that's that's" }, { "start": 1110, "end": 1115.72, "text": " what you do compute these V trace targets update your value and policy" }, { "start": 1115.72, "end": 1122.84, "text": " according to these equations and there you go so what do what does Impala do" }, { "start": 1122.84, "end": 1127.76, "text": " specifically in this deep mind lab they have two architectures first of all they" }, { "start": 1127.76, "end": 1132.32, "text": " have this they have this small architecture second they have this large" }, { "start": 1132.32, "end": 1138.24, "text": " architecture and they just kind of try it out on these and they measure how" }, { "start": 1138.24, "end": 1143.32, "text": " many frames per second they can get in and you see here compared to on single" }, { "start": 1143.32, "end": 1150.92, "text": " machine compared to a 3c they bring in a lot more frames per second this is just" }, { "start": 1150.92, "end": 1157.08, "text": " on a single machine but then on 
distributed setting the scale up also is" }, { "start": 1157.08, "end": 1163.52, "text": " very significant that they reach that's because they don't have to wait for" }, { "start": 1163.52, "end": 1168.44, "text": " other things they can just go ahead everything runs at full speed basically" }, { "start": 1168.44, "end": 1174.2, "text": " and everything runs in parallel and the fact that that some of the information" }, { "start": 1174.2, "end": 1182.4, "text": " is old is corrected by V trace and the last thing I want to show is the wall" }, { "start": 1182.4, "end": 1187.6, "text": " clock time I think this is the important plot in this deep mind lab on over all" }, { "start": 1187.6, "end": 1195.1599999999999, "text": " the tasks the wall clock time compared to the score you can see a 3c while it" }, { "start": 1195.1599999999999, "end": 1201.1999999999998, "text": " does you know increase over time the Impala variants up here increase in much" }, { "start": 1201.1999999999998, "end": 1210.6399999999999, "text": " much faster wall clock time so that's the that's the paper they have a lot of" }, { "start": 1210.6399999999999, "end": 1215.2199999999998, "text": " proofs in the appendix which I'm not gonna go over if you want to give it a" }, { "start": 1215.22, "end": 1223.04, "text": " try then it is it is not called Impala on github it is called I think scalable" }, { "start": 1223.04, "end": 1237.04, "text": " agent so on github it is called scalable agent I think but you'll find it if you" }, { "start": 1237.04, "end": 1242.96, "text": " if you search for Impala github or something like this yeah other than that" }, { "start": 1242.96, "end": 1247.16, "text": " thanks for listening and see you next time" } ]
RrvC8YW0pT0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reinforcement Learning Upside Down: Don't Predict Rewards -- Just Map Them to Actions
[ "Science & Technology" ]
[ "rl", "reinforcement learning", "ai", "artificial intelligence", "udrl", "schmidhuber", "policy", "value", "reward" ]
Schmidhuber thinking outside the box! Upside-Down RL turns RL on its head and constructs a behavior function that uses the desired reward as an input. The new paradigm shows surprising performance compared to classic RL algorithms. Abstract: We transform reinforcement learning (RL) into a form of supervised learning (SL) by turning traditional RL on its head, calling this Upside Down RL (UDRL). Standard RL predicts rewards, while UDRL instead uses rewards as task-defining inputs, together with representations of time horizons and other computable functions of historic and desired future data. UDRL learns to interpret these input observations as commands, mapping them to actions (or action probabilities) through SL on past (possibly accidental) experience. UDRL generalizes to achieve high rewards or other goals, through input commands such as: get lots of reward within at most so much time! A separate paper [61] on first experiments with UDRL shows that even a pilot version of UDRL can outperform traditional baseline algorithms on certain challenging RL problems. We also introduce a related simple but general approach for teaching a robot to imitate humans. First videotape humans imitating the robot's current behaviors, then let the robot learn through SL to map the videos (as input commands) to these behaviors, then let it generalize and imitate videos of humans executing previously unknown behavior. This Imitate-Imitator concept may actually explain why biological evolution has resulted in parents who imitate the babbling of their babies. Author: Juergen Schmidhuber https://arxiv.org/abs/1912.02875 https://arxiv.org/abs/1912.02877
He did it! The crazy son of a bitch did it again! What am I talking about? Jürgen Schmidhuber: reinforcement learning, upside down! A new paper just dropped on the verge of the NeurIPS conference, being presented at a workshop there, presenting upside-down reinforcement learning. I am pumped for this one, can you tell? It says: we transform reinforcement learning into a form of supervised learning by turning traditional RL on its head, calling this... well, in the paper the letters "RL" are literally printed upside down; we'll just call it UDRL, upside-down reinforcement learning. Okay, let's just check out how it works. So, let's say you have a reinforcement learning problem, for example an Atari game. In an Atari game you usually have a screen, and let's say you're playing this submarine commander game. There's water here, here's your boat (a little boat), there might be a bunch of opponents right here (fishy fish opponents), and there are a bunch of gold coins, like this big gold coin here. And you're also supposed to, I think, go get air; you have some air meter over here, whatever. So there's this Atari game, and you're supposed to get the reward, which is maybe this coin here, and stay alive as long as possible, and so on. A classic reinforcement learning problem, and there are various techniques for it; we've looked at a couple of them. And what upside-down reinforcement learning does is basically this: you transform this input into a new representation, which, let me get this right, so there's this over here, and then there's a little fishy here, and there's a coin right here. What you do is turn this input on its head, like, upside down, so that this way is now kind of up, or down, or whatever, in the new representation. And if you then learn on this new representation with pretty much the same techniques, it works much better than in the classic RL setting. And this is not only for Atari games; this appears to hold throughout the RL space. In robotics, if you have a robot (this is a robot, it has a square head, as you can tell) that is supposed to open a door, you've seen the DARPA challenge, this doesn't work, right? But if you just transform this and actually turn the robot upside down, the robot will be able to open the door just fine. Even if you have a chessboard with a bunch of pieces on it: the problem in this case is that you have to simulate the chessboard, and if you turn it around, all the pieces fall off, so you need a simulator that implements a magnetic chessboard such that the pieces don't fall off. It's a bit of programming effort, but if you do that... All right, I'm kidding. This is a new paradigm for RL, but unfortunately not that one. Someone should still try the magnetic chessboard simulator. Upside-down RL is a new paradigm for RL where the notions of inputs and outputs of the RL algorithm are switched around a bit. The basic idea is that you have an RL algorithm that is also fed with a bunch of commands. To see what that means, let's go back to this Atari game here.
In classic RL, an RL algorithm gets the Atari screen as an input and is asked to predict a bunch of outputs. In classic Atari these are eight actions; I'm going to draw three here: go to the left, go to the right, or press the button to shoot. These are the actions you have, and the algorithm is tasked, in different versions, with different outputs. In policy methods (policy gradient methods), the algorithm typically has to output a distribution over these actions. In other methods, like value learning or Q-learning, the algorithm has to assign each of these actions a value: in this situation, going to the left might be worth 3 in the future, going to the right worth -1, and shooting worth 0, so you might want to go with the left action here. So in classic RL we had the observation going into the model, and the model coming up with a value estimate for the different actions. In upside-down reinforcement learning, you have the observation and something else going into the model, and the model comes up with an action. And that something else is the key: what you input there is your desire, your future desire; in this paper they call it a command. So you have a command as an input, together with the observation. You basically say: here's my state, and I would like to achieve, let's say, five reward in the next two time steps; make this happen. That is the command going into the model, and the model will then try to find actions such that in the next two time steps you get five reward. You can easily see that a model that learns this will be able to do various things, including the classic RL objectives (get as much reward as possible in a given, or in the shortest, amount of time), but it can also do much more. And in a general sense, the difference lies in how this is trained. When you train this model, it is not trained with only maximizing reward in mind; it is trained to be a much more general kind of understanding of the world, learning: what do I need to do to achieve a variety of goals? Specifically, to train it you do the following. Say you have some method of moving in the world and collecting traces: you go from state s1 to s2 to s3 to s4, with actions a1, a2, a3, and at each step you get rewards r1, r2, r3. In classic RL, if you consider this to be an episode, it essentially gives you one training example: the episode run with this sequence of actions. In upside-down RL, you can actually consider it as many, many training examples, and here's what I mean. If you start at state s1, you can say: aha, within one time step, I went to state s2 and achieved r1 reward by doing action a1. So this can now be an input to your model. Your model can learn: as the observation you get s1, and as the command you get "I want to achieve r1 reward in one time step". Both go into the model, and the model is trained to output a1, because if I am in s1 and do a1, I will achieve exactly that. So you train the model to give a1 as an output, and this is a valid example, because in the past you have observed that going from s1 using a1 leads to this kind of reward in this kind of time.
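To make the interface concrete, here is a minimal sketch of such a behavior function B(observation, command) -> action as a small PyTorch classifier. This is my own toy rendering of the idea, not the architecture from the papers; all sizes and names are invented.

```python
import torch
import torch.nn as nn

class BehaviorFunction(nn.Module):
    """B(obs, command) -> action logits, where command = (desired return, horizon)."""

    def __init__(self, obs_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2, 64), nn.ReLU(),  # +2 for the two command scalars
            nn.Linear(64, n_actions),               # logits over actions
        )

    def forward(self, obs, desired_return, horizon):
        cmd = torch.stack([desired_return, horizon], dim=-1)
        return self.net(torch.cat([obs, cmd], dim=-1))

# Trained exactly like a supervised classifier on (state, command) -> action pairs
# harvested from past episodes; at evaluation time you just ask for more reward.
b = BehaviorFunction()
obs = torch.randn(1, 8)
logits = b(obs, torch.tensor([5.0]), torch.tensor([2.0]))  # "5 reward in 2 steps"
target_action = torch.tensor([1])                # the action actually taken back then
loss = nn.CrossEntropyLoss()(logits, target_action)
```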
And you can do all of these single steps; they will all provide individual training examples for your model. But you can also consider a two-step example: I'm in state s1, and in two time steps I achieved r1 + r2 reward by doing actions a1 and then a2, where I put a2 in parentheses, because what you always want as the target is the action that comes right after where you are now. So your training sample, let me draw this up here, would be the following. I am in state s1; that is my observation. My command is: I would like to achieve r1 + r2 reward in two time steps. Both go into the model; you tell the model "please, given the state s1, achieve this reward in this time", and the model is supposed to output a1, saying: aha, in the past I was in this state, and I achieved this goal by starting with that action. So the model learns to achieve different goals. And this means you can not only train on the good episodes; you can train on any episode. Usually, in classic RL, you want to focus on the good episodes, because you want to maximize your reward. Here, if you've done something particularly stupid, say here in s3 the action a3 was particularly stupid and gave you a really bad reward r3, like negative five billion trillion, you can actually still train the model on it: hey, look, if you are in s3 and within one time step you want to achieve negative five billion trillion reward, all you have to do is action a3. And the cool thing is that at evaluation time, when you actually want the big reward, you simply plug in a different command: I'm still in state s3, and in one time step I want to achieve, say, plus three reward, not negative a lot. The model will have learned that a3 leads to a situation with a lot of negative reward, so the model will be like: I'm for sure not going to do a3, I'm going to do something else here, because I have learned to map a3 to this really low reward.
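Here is a sketch of how one short episode fans out into many supervised examples under this scheme (the tuple layout and the triangular enumeration are my own choices for illustration; the papers leave room for other ways of slicing episodes):

```python
def episode_to_examples(states, actions, rewards):
    """Turn one trace s_0, a_0, r_0, s_1, ... into (obs, command, target action) tuples.

    A trace with T actions yields T*(T+1)/2 examples: every start index is
    paired with every horizon that still fits inside the episode.
    """
    examples = []
    T = len(actions)
    for t in range(T):                    # start state
        for k in range(1, T - t + 1):     # horizon in time steps
            desired_return = sum(rewards[t:t + k])
            command = (desired_return, k)
            examples.append((states[t], command, actions[t]))  # first action only
    return examples

# The 4-state / 3-action trace from the video:
examples = episode_to_examples(["s1", "s2", "s3", "s4"],
                               ["a1", "a2", "a3"],
                               [1.0, 2.0, -5e9])
# e.g. ("s1", (3.0, 2), "a1"): "from s1, to get 3 reward in 2 steps, start with a1"
print(len(examples))  # 6 examples from a single 3-step episode
```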
In essence, this has connections to things like hindsight experience replay and universal value functions, where you also kind of learn to go from any state to any other state, but none of those have this command (as Schmidhuber calls it here) as an input to the model, and I think inputting it is actually really valuable. With universal value functions, consider a simple grid world: your agent is here, and it needs to reach a goal down here, but it might not be able to learn that, because the reward is super sparse. What you can do instead is learn to reach this position, and this position, and this position from various other positions: go from here to here, learn to go from here to here, and in essence you would like it to eventually generalize to all the fields. So you basically learn to go from any position to any other position with these universal value or universal policy functions and their sub-goals. But during that phase, where they learn to go from anything to anything, they don't necessarily include the reward as an input; it's more that the reward is either a sub-goal, or, as in the usual value function, simply the quantity being approximated, whereas in this technique we actually do policy learning: we output an action. As for hindsight experience replay (we might do a video on this in the future), in the same situation it would do the following: you're here, you try, and your agent actually ends up over here instead. What you can do is simply say "oh well, actually, this was my goal all along", and then train your model as if that end point had been your goal all along, and not this one, treating it as a positive reward. At least that's how I understand it. Both of these things are quite different from having this command as an input, and I do like it. So I think these are very much the basics. It gets extrapolated to noisy inputs and noisy environments and so on, but this is the basic gist of it. So here you see what you learn: to map "all", which is your representation of the input (the screen, for example, or the chessboard, and I think also the last action and the reward you got in this step), plus your horizon and desire (in how much time you would like to achieve how much reward), plus optionally some extra goals that you have, to an action. Basically any episode that you've run in the past gives you valid training examples for this; your model simply learns to match the previous experience with the goals that were achieved in that experience. There are lots of generalizations here, for example in how exactly these things are represented: the time horizon can be a high-dimensional object, the desire can, as I understand it, also be a somewhat higher-dimensional object, and the extra commands can be conditionals on these two things. It gets very complicated, but I want to jump ahead to a different paper, since this paper basically just describes the algorithm, and the next paper does experiments with it. Let's scroll past here. All right: "Training Agents using Upside-Down Reinforcement Learning", released on the same day, by a different set of authors (Schmidhuber, who was also on the first paper, is on this one too), who have used this to implement a variant of the algorithm. And here you see again what I was trying to explain. In traditional RL, Q-learning especially, you have this function which gets an observation as input, and, in Q-learning specifically, also the action, and you're supposed to say: for the given observation, this particular action has this expected value as a return. That's what I explained at the beginning; that's the value-based kind of reinforcement learning. Whereas the behavior function here, which is the upside-down RL version, gets the observation and a command and maps them to an action. And here again is what we've gone over, with a bit of a twist: this agent has apparently run two different episodes.
At one point it did this sequence of actions, and at the other point, from the same starting state, it did this sequence of actions, and you can see here on the right all the training samples we can derive from this. So we can say: from state s0, if I want two return in one time step, I have experienced this in the past, right; to get two return in one time step, all I have to do is take action a1. But if I want one return in one time step, I have to take action a2. And you teach your behavior function to learn these things, to learn to output these actions with these things here as inputs. And then what you hope, of course, is that this will generalize, that it will learn to generalize such that you can say, now give me more reward than I have ever seen before, right. And it will kind of learn which things correspond to lower reward, which things correspond to higher reward, and will be able to extrapolate which things will correspond to even higher reward. So they have two algorithms, and this is reminiscent of the old RL kind of world, where one algorithm is continuously learning from the experience gathered by another algorithm. So you have one set of algorithms, and even in modern RL this is how it's done, right. You have two different boxes. Actually, you have probably one box learning the model, I'm going to represent this here as the learner, right. And the learner distributes the model to many, many machines interacting with the simulators, and these machines, all they do is run episodes with the learned model, and they will send back their experience here. And then the learner can learn from it and at the end send the model out again. All right, here we go. So in each step, what we do in order to generate a new episode: we don't always want to just execute one given policy. What we do is we sample from the end of the replay buffer, and the replay buffer is sorted by returns, right. So the highest-return episodes are on top. So we sample the highest-return episodes, then we say, maybe some of them are ten steps long, maybe some of them are five steps long, and so on. So we set the horizon to be the mean of the lengths of these, and we set the desired return, how much return should be achieved in this time, to be sampled from the uniform distribution between M and M plus S, where M is the mean and S is the standard deviation of the returns of the selected episodes. So what this means is: here is a bunch of episodes that start at the same time. Here's a bunch of episodes that I ran, right, from here, time zero, and then time goes on, that had really high returns, right. Now I'm going to take the mean time that these episodes ran, like this, this is maybe five time steps. So in five time steps I want to achieve... now, how much reward? Now you look at all the rewards that were achieved. This is maybe a distribution that has some mean here, like so, and then you say: I want to achieve a reward between here and one standard deviation higher than here. And this would be the reward you want to achieve. So what you do is you kind of push your learned model to just go a bit beyond what it has seen so far. You basically say, look, I know you can do this, but you can just do a bit more in the same amount of time, please do this, and you hope the model has learned to generalize to do this. And if so, you will execute these episodes, and then these episodes will go back to the learner, right.
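A rough sketch of that exploratory command sampling could look like the following. The buffer layout and the choice of how many top episodes to keep are my own assumptions for illustration, not exact details from the paper:

```python
import random
import statistics

def sample_exploration_command(replay_buffer, k=25):
    """Pick a (horizon, desired_return) command slightly beyond past experience.

    replay_buffer: list of episodes, each with .length and .total_return,
    kept sorted by total_return in descending order.
    """
    top = replay_buffer[:k]  # the highest-return episodes
    horizon = round(statistics.mean(ep.length for ep in top))
    returns = [ep.total_return for ep in top]
    m = statistics.mean(returns)
    s = statistics.stdev(returns) if len(returns) > 1 else 0.0
    # Ask for a little more return than the mean of the best episodes so far.
    desired_return = random.uniform(m, m + s)
    return horizon, desired_return
```

The uniform draw from [M, M + S] is exactly the "do what you did, plus a bit more, in the same amount of time" push described above.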
They go back to the learner here, and the learner will learn from them, and hopefully then you can generalize even more, and then you can say, I now know how to achieve this bit more reward; now, if I run the episode, I will achieve even more reward; I can push the model even further, right. So at eval time you can always ask the model to produce as much reward as possible in the given time. And of course every episode sent back here is not only one training example, as we saw, but many, many training examples can be derived from these episodes, even beyond what's in this paper. All right. So I think this was a good first shot at describing this algorithm. I hope you get the gist of it. I enjoy this. A bit of a criticism for me would be: it still kind of doesn't... so it doesn't touch the exploration dilemma. It again deals with kind of incrementally getting better, whereas I feel this can easily get stuck in some minimum where it's not possible to do this incremental generalization of the model, where you really need a new approach. And that's why games like Montezuma's Revenge are solved using algorithms like Go-Explore and not any of the classic algorithms. That being said, they have experiments where they show that especially in sparse-reward environments they do better than classic RL algorithms. So if you, for example, take lunar lander here, where A2C beats upside-down RL, and I guess you didn't get matplotlib to do the upside down... Well, in other environments upside-down RL clearly beats the classic algorithms. And what I like here is they took lunar lander, where basically at every time step you get a reward, and they hypothesized: okay, this is really good for these classic algorithms that do reward maximization instead of kind of learning this general behavior function. And what they did is they modified the game such that all the reward is given at the end of the episode. And then you see that upside-down RL will actually outperform the classic things here, where it's exactly the same game, you just get the reward at the end. So upside-down RL kind of learns the structure of the world, learns that you get this reward at the end after such and such many time steps. So it will learn: please get me zero reward in 50 time steps, like, no problem; but please get me a thousand reward in a hundred time steps, no problem, I just go to the end of the episode, right. Whereas these pure reward maximization techniques somehow have a harder time doing that. I like this investigation. I like the thinking outside the box, the Schmidhuberism of the paper. It's just all great. It's a great time to be alive. Check this out, and I'll see you. Bye bye.
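To round this off, the learner side reduces to ordinary supervised learning on the derived examples, and evaluation just swaps in an ambitious command. Here is a minimal toy PyTorch sketch under those assumptions; the class and function names are mine, and this is not the authors' implementation:

```python
import torch
import torch.nn as nn

class BehaviorFunction(nn.Module):
    """Maps (observation, horizon, desired_return) to logits over actions."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, horizon, desired_return):
        command = torch.stack([horizon, desired_return], dim=-1)
        return self.net(torch.cat([obs, command], dim=-1))

def train_step(model, optimizer, obs, horizon, desired_return, action):
    """One supervised step: predict the action that achieved the command."""
    logits = model(obs, horizon, desired_return)
    loss = nn.functional.cross_entropy(logits, action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def act(model, obs, steps_left, ambitious_return):
    """At eval time, simply command a high return in the remaining time."""
    h = torch.tensor([float(steps_left)])
    r = torch.tensor([float(ambitious_return)])
    logits = model(obs.unsqueeze(0), h, r)
    return logits.argmax(dim=-1).item()
```

At evaluation, `ambitious_return` would be set using the command-sampling idea from before, asking for as much reward as possible in the given time.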
[ { "start": 0, "end": 6.4, "text": " He did it! Crazy son of a bitch did it again!" }, { "start": 6.4, "end": 12.8, "text": " What am I talking about? Jürgen Schmidhuber reinforcement learning upside down!" }, { "start": 12.8, "end": 20.6, "text": " New paper just dropped on the verge of the NeurIPS conference being presented at a workshop here." }, { "start": 20.6, "end": 26.2, "text": " Presenting upside down reinforcement learning. I am pumped for this one, can you tell?" }, { "start": 26.2, "end": 35.6, "text": " It says we transform reinforcement learning into a form of supervised learning by turning traditional RL on its head." }, { "start": 35.6, "end": 42.4, "text": " Calling this RL-lar. What do we call this? We'll just call it lar." }, { "start": 42.4, "end": 45.4, "text": " Upside down reinforcement learning." }, { "start": 45.4, "end": 52.6, "text": " And so this is upside down. Never mind." }, { "start": 52.6, "end": 56.6, "text": " Okay, let's just check out how it works." }, { "start": 56.6, "end": 62.400000000000006, "text": " So I'm going to give a brief overview before we go into this paper." }, { "start": 62.400000000000006, "end": 69.8, "text": " Alright, so let's say you have a reinforcement learning problem. Let's say an Atari game for example." }, { "start": 69.8, "end": 73, "text": " And in an Atari game you usually have a screen, right?" }, { "start": 73, "end": 79.4, "text": " And let's just say you're playing this marine commander. So there's water here, right?" }, { "start": 79.4, "end": 84.60000000000001, "text": " And there might be a bunch of... Here's your boat, right?" }, { "start": 84.60000000000001, "end": 88.4, "text": " There's a boat, a little boat. There might be a bunch of opponents right here." }, { "start": 88.4, "end": 92.60000000000001, "text": " Fishy fish opponents, fishy fish opponents and so on." }, { "start": 92.60000000000001, "end": 96.9, "text": " And there are a bunch of gold coins like here. That's a big gold coin, right?" }, { "start": 96.9, "end": 101.80000000000001, "text": " And you're kind of supposed to, I think you're supposed to like go get air." }, { "start": 101.80000000000001, "end": 104.30000000000001, "text": " You have some air meter over here. Whatever." }, { "start": 104.30000000000001, "end": 106.80000000000001, "text": " So there's this Atari game, right?" }, { "start": 106.8, "end": 111.2, "text": " You're supposed to get the reward which is this maybe this coin here." }, { "start": 111.2, "end": 114, "text": " And stay alive as long as possible and so on." }, { "start": 114, "end": 116.5, "text": " So this is a classic reinforcement learning problem." }, { "start": 116.5, "end": 120.7, "text": " And there are various techniques for this. We've looked at a couple of them." }, { "start": 120.7, "end": 128.7, "text": " And what upside down reinforcement learning does is basically what you do is you want to transform this input to a new representation." }, { "start": 128.7, "end": 138.6, "text": " Which basically, well, if I can, maybe I can... Let me get this correctly." }, { "start": 138.6, "end": 145.7, "text": " So then there's this over here and then there's a little fishy, a little fishy here." }, { "start": 145.7, "end": 147.89999999999998, "text": " And there's a coin right here." }, { "start": 147.89999999999998, "end": 153.39999999999998, "text": " So what you want to do is basically turn this input on its head like upside down." 
}, { "start": 153.4, "end": 159.1, "text": " And so this way is kind of up or down or whatever in this new representation." }, { "start": 159.1, "end": 166.4, "text": " And if you actually learn on this new representation with pretty the same techniques," }, { "start": 166.4, "end": 169.70000000000002, "text": " it works much better than the classic RL setting." }, { "start": 169.70000000000002, "end": 172.3, "text": " And this is not only for like these Atari games." }, { "start": 172.3, "end": 177.5, "text": " Like this appears to hold throughout the RL space." }, { "start": 177.5, "end": 181.70000000000002, "text": " So in robotics, like if you have a robot or whatever, this is a robot." }, { "start": 181.7, "end": 184.79999999999998, "text": " It has a square head, as you can tell." }, { "start": 184.79999999999998, "end": 186.89999999999998, "text": " You know, it's supposed to like open a door." }, { "start": 186.89999999999998, "end": 190.6, "text": " You've seen this DARPA challenge. This doesn't work, right?" }, { "start": 190.6, "end": 198.7, "text": " But if you just transform this and actually turn the robot upside down," }, { "start": 198.7, "end": 202.1, "text": " the robot will be able to open the door just fine." }, { "start": 202.1, "end": 207.6, "text": " And even like if you have a chessboard and there's like a bunch of pieces on it." }, { "start": 207.6, "end": 212.7, "text": " The problem in this case is you have to simulate this chessboard." }, { "start": 212.7, "end": 218.1, "text": " And if you turn this around now, basically all the pieces will fall off." }, { "start": 218.1, "end": 224, "text": " So what you need to do is you need to have a simulator that encodes a magnetic chessboard" }, { "start": 224, "end": 226.7, "text": " such that the pieces don't fall off." }, { "start": 226.7, "end": 230.4, "text": " So it's a bit of programming effort. But if you do that..." }, { "start": 230.4, "end": 234.1, "text": " All right, I'm kidding." }, { "start": 234.1, "end": 240.29999999999998, "text": " This is a new paradigm for RL, but it's unfortunately not as good." }, { "start": 240.29999999999998, "end": 244.4, "text": " Someone should try the magnetic chessboard simulator." }, { "start": 244.4, "end": 254, "text": " Upside down RL is a new paradigm for RL where basically the kind of notion of inputs" }, { "start": 254, "end": 259.5, "text": " and outputs of the RL algorithm are switched around a bit." }, { "start": 259.5, "end": 272.4, "text": " So basic ideas here is that you have an RL algorithm that is also fed with a bunch of commands." }, { "start": 272.4, "end": 275.2, "text": " So in classic RL what you'll have..." }, { "start": 275.2, "end": 279.3, "text": " Let's actually go back to this Atari game here, right?" }, { "start": 279.3, "end": 285.8, "text": " In classic RL, an RL algorithm will get the Atari game as a screen as an input" }, { "start": 285.8, "end": 290.3, "text": " and is asked from this to predict a bunch of outputs." }, { "start": 290.3, "end": 293.5, "text": " So in classic Atari, these are eight actions." }, { "start": 293.5, "end": 300.3, "text": " I'm going to draw three here, like go to the left, go to the right, or press the button for shoot, right?" }, { "start": 300.3, "end": 305.5, "text": " These are the actions you have and the algorithm is tasked." }, { "start": 305.5, "end": 307.5, "text": " And there are different versions of this." 
}, { "start": 307.5, "end": 312, "text": " In policy methods, policy gradient methods, typically the algorithm is tasked" }, { "start": 312, "end": 316, "text": " with outputting a distribution over these actions." }, { "start": 316, "end": 323.5, "text": " In other methods like value learning, Q learning, the algorithm is tasked with assigning each of these actions a value." }, { "start": 323.5, "end": 330, "text": " So in this situation, going to the left will be worth three in the future." }, { "start": 330, "end": 336.2, "text": " Going to the right will be worth negative one and shooting will be worth zero." }, { "start": 336.2, "end": 342.2, "text": " So you might want to go with this action here." }, { "start": 342.2, "end": 349.2, "text": " Now in upside-down reinforcement learning, we've had observation going into the model" }, { "start": 349.2, "end": 355.4, "text": " and the model coming up with the value estimation of the different actions." }, { "start": 355.4, "end": 363.09999999999997, "text": " In upside-down reinforcement learning, you'll have the observation and something else going into the model" }, { "start": 363.1, "end": 366.70000000000005, "text": " and the model coming up with an action." }, { "start": 366.70000000000005, "end": 368.8, "text": " And this something else is the key." }, { "start": 368.8, "end": 374.1, "text": " What you input here is your desire, your future desire." }, { "start": 374.1, "end": 377.1, "text": " And in this paper, they call it a command." }, { "start": 377.1, "end": 380.40000000000003, "text": " So you'll have a command as an input together with the observations." }, { "start": 380.40000000000003, "end": 386.90000000000003, "text": " You basically say, here's my state and I would like to achieve," }, { "start": 386.9, "end": 393.4, "text": " let's say five reward in the next five reward in the next two time steps, right?" }, { "start": 393.4, "end": 394.59999999999997, "text": " Make this happen." }, { "start": 394.59999999999997, "end": 400.59999999999997, "text": " Right. This is this is your command going into the model and the model will then try to find actions" }, { "start": 400.59999999999997, "end": 406.09999999999997, "text": " such that in the next two time steps, you'll get five reward." }, { "start": 406.09999999999997, "end": 413, "text": " You can easily see a model that learns this will actually be able to, you know, do various things," }, { "start": 413, "end": 418.9, "text": " including doing the classic RL things like get as much reward as possible in given" }, { "start": 418.9, "end": 424.4, "text": " or in the shortest amount of time, but can also do much more." }, { "start": 424.4, "end": 429.5, "text": " And in the general sense, the difference is how this is trained now." }, { "start": 429.5, "end": 436.8, "text": " This model, when you train it, as you can see, you don't it's not trained with in my having in mind" }, { "start": 436.8, "end": 440.1, "text": " kind of only to get the maximum reward." }, { "start": 440.1, "end": 445.20000000000005, "text": " It is trained to be much more a general kind of understanding of the world." }, { "start": 445.20000000000005, "end": 452.40000000000003, "text": " I mean, learning what do I need to do to achieve a variety of goals?" }, { "start": 452.40000000000003, "end": 457.90000000000003, "text": " Specifically, what you want to do to train this is the following." 
}, { "start": 457.90000000000003, "end": 464.90000000000003, "text": " Say you have a method of of moving in the world and collecting traces, right?" }, { "start": 464.9, "end": 472.79999999999995, "text": " So you go from state, state one, state two, state three." }, { "start": 472.79999999999995, "end": 478.29999999999995, "text": " You go with like your action one, action two." }, { "start": 478.29999999999995, "end": 481.79999999999995, "text": " Let's draw action three." }, { "start": 481.79999999999995, "end": 484.5, "text": " The state four." }, { "start": 484.5, "end": 487.5, "text": " And in each of these, you get a you get rewards, right?" }, { "start": 487.5, "end": 492, "text": " Reward one reward to reward three." }, { "start": 492, "end": 498.9, "text": " Now, this in classic RL, this will kind of give you one training example, right?" }, { "start": 498.9, "end": 508.1, "text": " So this is if you consider this to be an episode, this will give you one training example to to run this sequence of actions." }, { "start": 508.1, "end": 513.6, "text": " Upside down RL, you can actually consider this as many, many training examples." }, { "start": 513.6, "end": 515, "text": " And here's what I mean." }, { "start": 515, "end": 529.5, "text": " So if you, for example, start at state one, you can say, aha, within one time step, one one time step," }, { "start": 529.5, "end": 537.3, "text": " I go to state two and I have achieved our one rewards by doing action a one." }, { "start": 537.3, "end": 538.8, "text": " Right." }, { "start": 538.8, "end": 541.9, "text": " So this now can be an input to your model." }, { "start": 541.9, "end": 552, "text": " Your model could learn if you get as an observation, remember the previous thing as an observation, you get s one as a command." }, { "start": 552, "end": 557.5, "text": " You get I want to achieve in one time step." }, { "start": 557.5, "end": 560.4, "text": " Are one reward." }, { "start": 560.4, "end": 570.6999999999999, "text": " Right. And you train this goes into the model and the model is trained to say a one given if I am in s one" }, { "start": 570.7, "end": 574.3000000000001, "text": " and I do a one, I will achieve that." }, { "start": 574.3000000000001, "end": 578, "text": " Right. So you train the model to give a one as an output." }, { "start": 578, "end": 590.8000000000001, "text": " And this is valid because in the past you've observed going from s one using a one to a state where you get this this kind of reward in this kind of time." }, { "start": 590.8000000000001, "end": 594, "text": " But you can also so you can do all of these single steps." }, { "start": 594, "end": 598.2, "text": " They will all provide individual training examples to your model." }, { "start": 598.2, "end": 601.5, "text": " Right. But then also you can consider a two step thing." }, { "start": 601.5, "end": 609, "text": " So you can say I'm in state s one and I go I go in two time steps." }, { "start": 609, "end": 618.4000000000001, "text": " I have achieved our one plus our two reward by doing actions a one then a two." }, { "start": 618.4000000000001, "end": 627.6, "text": " Right. And a two I'm going to do in parents here because what you want to do is you want to always always consider the action that comes right after where you are now." }, { "start": 627.6, "end": 631.9, "text": " So again your training sample let me draw this up here." }, { "start": 631.9, "end": 635.2, "text": " Maybe your training sample would be the following." 
}, { "start": 635.2, "end": 637.1, "text": " I am in state s one." }, { "start": 637.1, "end": 638.7, "text": " This would be my observation." }, { "start": 638.7, "end": 646.6, "text": " My command would be I would like to achieve in two time steps reward r one plus r two reward." }, { "start": 646.6, "end": 650.3000000000001, "text": " Right. This reward this both goes into the model." }, { "start": 650.3000000000001, "end": 656.2, "text": " Right. You tell the model please given the state s one achieve this reward in this time." }, { "start": 656.2, "end": 666.3000000000001, "text": " And the model is supposed to output a one saying ha in the past I was in this state and I did achieve this goal by using that." }, { "start": 666.3000000000001, "end": 670.2, "text": " So the model is supposed to learn to achieve different goals." }, { "start": 670.2, "end": 674, "text": " Right. So now you can not only train from good episodes right." }, { "start": 674, "end": 683.8000000000001, "text": " You can train for any episode any episode usually in classic or you kind of want to focus on the good episodes because you want to maximize your reward." }, { "start": 683.8, "end": 694.3, "text": " But here you can tell the model hey if you've done something particularly stupid let's say here in s three you done something the a three was particularly stupid gave you." }, { "start": 694.3, "end": 700.8, "text": " So r three here was really bad reward like a negative five billion trillion." }, { "start": 700.8, "end": 713.6999999999999, "text": " And you can actually train the model to recognize this can be a hey look if you are in s three and within one time step you want to achieve negative five billion billion billion trillion." }, { "start": 713.7, "end": 717.7, "text": " Reward you all you have to do is action a three right." }, { "start": 717.7, "end": 730.6, "text": " And then the cool thing now is if you are at evaluation time you actually want the big reward what you'll do is you simply plug in a different command simply in one time step still I'm in state s three in one time step." }, { "start": 730.6, "end": 736.3000000000001, "text": " I want to achieve actually three reward not negative a lot right." }, { "start": 736.3, "end": 744.4, "text": " And the model will have learned that a three will lead to a situation where you get a lot of negative reward." }, { "start": 744.4, "end": 750.3, "text": " So the model will be like I'm for sure not going to do a three right." }, { "start": 750.3, "end": 757.4, "text": " I'm going to do something else here because I have learned to map a three to like this really low reward." }, { "start": 757.4, "end": 772.6, "text": " So in essence this has connections to things like hindsight experience replay and kind of universal value function where you kind of learn to go from any state to any other state in this." }, { "start": 772.6, "end": 781.5, "text": " But none of these do none of these have this kind of command what Schmidhuber calls command here as an input to the model." }, { "start": 781.5, "end": 794.2, "text": " And I think actually this is this is really positive to input this because usually in universal value functions what you would say is let's consider a simple grid world right." }, { "start": 794.2, "end": 801.3, "text": " Whatever your agent is here and you need to you need to reach a goal that's down here." }, { "start": 801.3, "end": 805.3, "text": " But you might not be able to learn it because it's super sparse reward and so on." 
}, { "start": 805.3, "end": 814.8, "text": " But what you can do is you can learn to reach this position and this position and this position from various positions like go here go from here to here." }, { "start": 814.8, "end": 816.5, "text": " You can learn to go from here to here." }, { "start": 816.5, "end": 822, "text": " And you know in essence you would like it eventually to generalize to all the fields." }, { "start": 822, "end": 832.4, "text": " So you basically learn to go from any position to any other position with your agent with these universal value or universal policy functions having sub goals." }, { "start": 832.4, "end": 842.1999999999999, "text": " But they during that phase where they learn to go from anything to anything they don't they don't necessarily include this reward thing as a as an input." }, { "start": 842.1999999999999, "end": 853.8, "text": " It's more like kind of either a sub goal or like the usual value function will simply approximate the reward." }, { "start": 853.8, "end": 862.3, "text": " Whereas whereas in this technique we actually have a policy learning we actually output a an action value." }, { "start": 862.3, "end": 867.6999999999999, "text": " Also hindsight experience replay what hindsight experience replay would do in the same situation right." }, { "start": 867.6999999999999, "end": 869.9, "text": " You're here." }, { "start": 869.9, "end": 872.5999999999999, "text": " We might do a videos on this in the future." }, { "start": 872.5999999999999, "end": 875.4, "text": " You're here and you try right." }, { "start": 875.4, "end": 879.6999999999999, "text": " And your agent actually it ends up here right ends up right here." }, { "start": 879.6999999999999, "end": 892.1999999999999, "text": " What you can do is you can simply say oh well actually this this was my goal all along and then simply train train your model as if as if this thing here was your goal all along." }, { "start": 892.2, "end": 899, "text": " And not this thing here and treat it as kind of a positive reward for this." }, { "start": 899, "end": 901.5, "text": " At least that's how I understand it." }, { "start": 901.5, "end": 902.9000000000001, "text": " Right." }, { "start": 902.9000000000001, "end": 910.5, "text": " And both of these things are quite different than here where we have this command as input and I do I do like it." }, { "start": 910.5, "end": 918.9000000000001, "text": " So I think this this is very much the basic things here." }, { "start": 918.9, "end": 927.5, "text": " This it is extra extrapolated to kind of noisy inputs and noisy environments and so on." }, { "start": 927.5, "end": 933.1999999999999, "text": " But this is the basic the basic gist of it." }, { "start": 933.1999999999999, "end": 944.4, "text": " So here you see your you what you will learn is to map all and all is your representation of your input." }, { "start": 944.4, "end": 947, "text": " So the screen for example or the chessboard." }, { "start": 947, "end": 953.2, "text": " And I think also kind of the last action and there were you get in this step plus your horizon and desire." }, { "start": 953.2, "end": 962, "text": " So in how much time you would like to achieve how much reward and then you can also get input some extra goals that you have." }, { "start": 962, "end": 972.1, "text": " And so you can see basically any any episode that you've run in the past will give you a valid training example for this." 
}, { "start": 972.1, "end": 982.9, "text": " Your model will simply learn to match the previous experience with the goals that were achieved in the previous experience." }, { "start": 982.9, "end": 988.9, "text": " So there is lots of lots of generalizations here like how exactly these things are represented." }, { "start": 988.9, "end": 991.8000000000001, "text": " This this time horizon can be a high dimensional object." }, { "start": 991.8000000000001, "end": 996.2, "text": " The desire can be as I understand it somewhat a dimensional object." }, { "start": 996.2, "end": 1000.7, "text": " The extra commands can be like conditionals on these two things." }, { "start": 1000.7, "end": 1012.2, "text": " It gets very complicated, but I want to jump ahead to a different paper where so this paper is basically just describing the algorithm." }, { "start": 1012.2, "end": 1017.4000000000001, "text": " And then the next paper is doing experiments with this." }, { "start": 1017.4000000000001, "end": 1019, "text": " Let's scroll past here." }, { "start": 1019, "end": 1019.4000000000001, "text": " All right." }, { "start": 1019.4000000000001, "end": 1025.8, "text": " So this paper training agents using up that down reinforcement learning released on the same day," }, { "start": 1025.8, "end": 1038.1, "text": " but different authors that have used also made who was also here but have used this to implement a variant of this." }, { "start": 1038.1, "end": 1041.3999999999999, "text": " And here you see again what I was trying to to explain." }, { "start": 1041.3999999999999, "end": 1051, "text": " So in traditional RL, this especially here Q learning, you'll have this function which gets an observation as input and then Q learning especially." }, { "start": 1051, "end": 1062.6, "text": " So you also get the action as an input and you're supposed to say for the given observation this particular action has this expected value as a return." }, { "start": 1062.6, "end": 1062.9, "text": " Right." }, { "start": 1062.9, "end": 1064.4, "text": " That's what I explained at the beginning." }, { "start": 1064.4, "end": 1068.4, "text": " That's kind of value based reinforcement learning." }, { "start": 1068.4, "end": 1079.8, "text": " Whereas the behavior function here, which would be upside down reinforcement learning gets the observation and a command and will map that to an action." }, { "start": 1079.8, "end": 1082, "text": " And here again is what we've gone over." }, { "start": 1082, "end": 1083.6, "text": " This is a bit of a different thing." }, { "start": 1083.6, "end": 1087.3999999999999, "text": " So this agent has apparently run two different episodes." }, { "start": 1087.3999999999999, "end": 1101.5, "text": " One point it did this sequence of actions and at the other point from the same starting state it did this sequence of action and you can see here on the right all the training samples we can we can derive from this." }, { "start": 1101.5, "end": 1106.5, "text": " So we can say from state s 0 right." }, { "start": 1106.5, "end": 1114.3, "text": " If I want to return in one time step, I have experienced this in the past right to return in one time step." }, { "start": 1114.3, "end": 1117.5, "text": " All I have to do is take action a one." }, { "start": 1117.5, "end": 1133.5, "text": " But if I want one return in one time step, I have to take action a two and you teach your behavior function to learn these things to learn to output these actions with these things here as inputs." 
}, { "start": 1133.5, "end": 1144.7, "text": " And then what you hope of course is that this will generalize that it will learn to generalize that you can say now give me more reward than I have ever seen before right." }, { "start": 1144.7, "end": 1158.5, "text": " And it will kind of learn which things correspond to lower reward, which things correspond to higher award and will be able to extrapolate which things will correspond to even higher report reward." }, { "start": 1158.5, "end": 1159.6, "text": " Sorry." }, { "start": 1159.6, "end": 1180.5, "text": " So they have two algorithms and this is kind of this is reminiscent of the old of the old RL kind of world where you do kind of one algorithm is continuously learning from the experience gathered by another algorithm." }, { "start": 1180.5, "end": 1185.6999999999998, "text": " So you have one set of algorithms and this even in modern RL this this this is how it's done right." }, { "start": 1185.6999999999998, "end": 1188.3, "text": " You have two different boxes right." }, { "start": 1188.3, "end": 1195.7, "text": " Actually you have probably one box learning the model like this is I'm going to represent this here learner right." }, { "start": 1195.7, "end": 1211.3999999999999, "text": " And the learner distributes the model to many many machines interacting with the simulators and these machines all they do is run episodes with the learned model and they will send back their experience here." }, { "start": 1211.3999999999999, "end": 1216.6, "text": " And then the learner can learn from it and then at the end send it again." }, { "start": 1216.6, "end": 1226.6, "text": " So so." }, { "start": 1226.6, "end": 1230.1, "text": " All right here we go." }, { "start": 1230.1, "end": 1242.6, "text": " So in each step what we do in order to to generate a new episode we don't always want to want to kind of execute one given policy." }, { "start": 1242.6, "end": 1249.3999999999999, "text": " What we do is we sample from the end of the replay buffer and the replay buffer is sorted by returns right." }, { "start": 1249.3999999999999, "end": 1252.1, "text": " So the highest return episodes are on top." }, { "start": 1252.1, "end": 1261.8, "text": " So we want to sample the highest return episodes then we want to say maybe some of them are 10 steps long maybe some of them are five steps long and so on." }, { "start": 1261.8, "end": 1286.1, "text": " So we set the horizon to be the mean of the length of these right and we set the desired return how much return should be achieved in this time to be the unit to sample from the uniform distribution between M and M plus S and M is the mean and S is the standard deviation of the selected episode." }, { "start": 1286.1, "end": 1292.6, "text": " So so what this means is is like here is a bunch of episodes from the start at the same time." }, { "start": 1292.6, "end": 1305, "text": " Here's a bunch of episodes that I ran right from here is time zero and then time goes on that I ran that had really high returns right." }, { "start": 1305, "end": 1310.8, "text": " Now I'm going to take the mean time that these episodes ran like this." }, { "start": 1310.8, "end": 1321.8, "text": " This is maybe five time steps. So in five time I want to achieve now how much reward now you look at all the rewards that were achieved." 
}, { "start": 1321.8, "end": 1334.5, "text": " This is maybe a distribution that has some mean here like so and then you say I want to achieve a reward between here and one standard deviation higher than here." }, { "start": 1334.5, "end": 1353.3, "text": " So right and this this would be the reward you want to achieve. So what you do is you kind of push your learned model to just go a bit beyond what it has seen so far is basically say look I you can do this but you can just do a bit more in the same amount of time." }, { "start": 1353.3, "end": 1357.4, "text": " Please do this and you hope the model has learned to kind of generalize to do this." }, { "start": 1357.4, "end": 1365, "text": " And if so you will execute these episodes and then these episodes will go back to the learner right." }, { "start": 1365, "end": 1377.8000000000002, "text": " I'll go back to the learner here and the learner will learn from them and hopefully then you can like generalize even more and then you can say I now know how to achieve this bit more reward." }, { "start": 1377.8000000000002, "end": 1381.5, "text": " Now I can if I run the episode I will achieve even more reward." }, { "start": 1381.5, "end": 1391, "text": " I can push the model even further right. So at eval time you can always ask the model to produce as much reward as possible in the given time." }, { "start": 1391, "end": 1404.9, "text": " And of course every episode sent back here is not only one training example as we saw but many many training examples can be derived from these models even beyond what's in what's in this paper." }, { "start": 1404.9, "end": 1413.7, "text": " All right. So I think this was a good first shot at describing this algorithm. I hope you get the gist of it." }, { "start": 1413.7, "end": 1419.6000000000001, "text": " I enjoy this a bit of a criticism for me would be it's still kind of doesn't it." }, { "start": 1419.6000000000001, "end": 1422.7, "text": " So it doesn't touch the exploration dilemma." }, { "start": 1422.7, "end": 1440.7, "text": " So it again deals with kind of incremental incrementally getting better whereas I feel this can easily get stuck in some minimum where it's not possible to do this incremental generalization of the model where you really need a new approach." }, { "start": 1440.7, "end": 1449.6000000000001, "text": " And that's why games like Montezuma's Revenge are solved using algorithms like Go Explore and not any of the classic algorithms." }, { "start": 1449.6, "end": 1459.3999999999999, "text": " That being said they have experiments where they show that especially in sparse reward environments they do better than classic or algorithms." }, { "start": 1459.3999999999999, "end": 1476.3, "text": " So if you for example here take the lunar lander where A to C beats upside down RL and I guess you didn't get Matt Ploidlip to do the upside down." }, { "start": 1476.3, "end": 1483.5, "text": " Well the in other in other environments upside down RL clearly beats the classic algorithms." }, { "start": 1483.5, "end": 1492.3, "text": " And what I like here is they took a lunar lander and which basically at every time step you get a reward in lunar lander and they hypothesized." }, { "start": 1492.3, "end": 1499.3999999999999, "text": " Okay this is really good for these classic algorithms that do reward maximization instead of kind of learning this general behavior function." 
}, { "start": 1499.3999999999999, "end": 1504.8, "text": " And what they did is they modified the game such that all the reward is given at the end of the episode." }, { "start": 1504.8, "end": 1515.8999999999999, "text": " And then you see that upside down RL will actually outperform here the classic things where it's exactly the same game you just get the reward at the end." }, { "start": 1515.8999999999999, "end": 1523.5, "text": " So upside down RL kind of learns the structure of the world learns that you get this reward at the end after such and such many time steps." }, { "start": 1523.5, "end": 1529.6, "text": " So you can it will learn please get me zero reward in 50 time steps like no problem." }, { "start": 1529.6, "end": 1532.5, "text": " But please get me a thousand rewards in a hundred time steps." }, { "start": 1532.5, "end": 1536.9, "text": " No problem. I just go to the end of the episode right." }, { "start": 1536.9, "end": 1543.4, "text": " Whereas these pure reward maximization techniques they don't they somehow have a harder time to do that." }, { "start": 1543.4, "end": 1548.1, "text": " I like this investigation. I like the thinking outside the box." }, { "start": 1548.1, "end": 1552.4, "text": " The Schmidhuber ism of the paper. It's just all great." }, { "start": 1552.4, "end": 1562.7, "text": " It's a great time to be alive and check this out and I'll see you. Bye bye." } ]
hsOMCwvFv80
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
I'm out of Academia
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper" ]
#machinelearning #ai #phd Done with my PhD in Machine Learning at ETH Zurich. On to new lands! Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Howdy diddly doo. Hi everyone. If you're wondering what the ridiculous thing on my head is, then that is my official graduation slash successful-defense hat. I'm not yet allowed to technically use the title Doctor, but let's be honest, who gives a crap anyway about titles. I'm a huge fan of this hat. My lab mates made this for me, and I thought I'd share a little bit of what's going on right here. So everything on here is kind of like a meme and therefore has to do with me in some way. First of all, you see my name, which is made up out of letters from our lab homepage picture, which is like the cringiest lab homepage picture you've ever seen, where everybody just kind of forms a letter, and it's just... it's very... I love cringe, by the way, cringe is the best. There's obviously the meme of me being a youtuber and having followed, or not followed, my own advice. There is me as Schmidhuber, in Schmidhuber attire; I went to his talk dressed in his style to honor him. There is 2 plus 2 equals 5, which I made an extensive video about. I made the first neural network in Minecraft. Not technically true: I made the first analog neural network in vanilla Minecraft that could also do backprop and weight updates. It's very specific, but it's the first. There is the hugging face, that's a transformer, I don't know if you can see this, that's a... I don't know which one that is. That might be a Decepticon. There is the ASVZ, which is my kind of side occupation as a fitness instructor. There are the sunglasses. I also like cats. There is... I'm always shilling for Vim as an editor, though I use Neovim. Also the pronouns, you know, gotta have them, I'm, you know, happy they're here. There is crypto, because I'm also always shilling for crypto, sometimes for the wrong ones, but you know, you can't always win. There is cheese and chocolate, which is my standard lunch depending on the season. If I'm doing keto it's no chocolate, but you know, recently, yeah... I'm Swiss after all. There is, yeah, there is the skeleton and the sword from Minecraft, again due to my extensive research into the technicalities of redstone. Illy coffee: five years, five years of that coffee will, you know, get you through a PhD, hopefully. There are the tweets that got me into trouble. Yeah, there's also trigger happy Gandhi asking, you earn 80k just for a PhD? Yes, yeah, we are like the best paid PhD students on the planet. It's fantastic, can recommend. There is a DeepJudge logo, which is the thing I'm going to do next, which is a legal tech startup. If you need legal tech, please buy our stuff. And on the inside you'll see Joe and obviously the Donald. Oh, I'm gonna have to reattach that again. Yeah, so, because I have lost a bit of money betting: I bet on the, you know, the really old dude, and it turned out the really old dude won, so I lost. Yeah, so this is sort of a bunch of memes throughout my PhD. I'm gonna reattach the Vim, you know, you don't want... that dropped. So yeah, thanks to all my lab mates, this is really cool, and yeah, I'll see you around the corner. Bye bye.
[ { "start": 0, "end": 6.32, "text": " Howdy diddly doo. Hi everyone. If you're wondering what the ridiculous thing on my head is," }, { "start": 6.88, "end": 15.76, "text": " then that is my official graduation slash successful defense hat. I'm not yet allowed to" }, { "start": 15.76, "end": 21.52, "text": " technically use the title Doctor but let's be honest who gives a crap anyway about titles." }, { "start": 22.64, "end": 28.96, "text": " I'm a huge fan of this hat my lab mates made this for me and I thought I'd share a little bit what's" }, { "start": 28.96, "end": 35.6, "text": " going on right here. So the everything on here is kind of like a meme and therefore that that has" }, { "start": 35.6, "end": 43.120000000000005, "text": " to do with me in some way. First of all you see my name which is made up out of letters of our lab" }, { "start": 43.120000000000005, "end": 50.96, "text": " homepage picture which is like the cringiest lab homepage picture you've ever seen where everybody's" }, { "start": 50.96, "end": 57.120000000000005, "text": " just kind of made the whole the letter and it's just it's very I love cringe by the way cringe is" }, { "start": 57.12, "end": 64.24, "text": " the best. There's obviously the meme of me being a youtuber and having followed or not followed my" }, { "start": 64.24, "end": 75.28, "text": " own advice. There is me as Schmidhuber in Schmidhuber attire. I went to his talk dressed in his style to" }, { "start": 75.28, "end": 84.64, "text": " to honor him. There is 2 plus 2 equals 5 which I made an extensive video about. I made the first" }, { "start": 84.64, "end": 91.2, "text": " neural network in Minecraft not technically true I made the first analog neural network in vanilla" }, { "start": 91.2, "end": 97.76, "text": " Minecraft that could also do back prop and weight updates. It's very specific but it's the first." }, { "start": 99.12, "end": 105.76, "text": " There are the hugging face that's a transformer I don't know if you can see this that's a I don't" }, { "start": 105.76, "end": 115.04, "text": " know which one that is. That might be a Decepticon. There is the Asfazette which is my kind of side" }, { "start": 115.04, "end": 123.04, "text": " occupation as a fitness instructor. There are the sunglasses I also like cats. There is I'm always" }, { "start": 123.04, "end": 133.04000000000002, "text": " chilling for Vin as an editor though I use Niovin. Also the pronouns you know gotta have them I'm" }, { "start": 133.04, "end": 138.32, "text": " you know happy they're here. There is crypto because I'm also always chilling for crypto" }, { "start": 138.32, "end": 144.95999999999998, "text": " sometimes for the wrong ones but you know you can't always win. There is cheese and chocolate" }, { "start": 144.95999999999998, "end": 152.23999999999998, "text": " which is my standard lunch depending on the season. If I'm doing keto it's no chocolate but you know" }, { "start": 152.23999999999998, "end": 159.68, "text": " recently yeah just I'm Swiss after all. There is yeah there is the skeleton and the sword from" }, { "start": 159.68, "end": 168.08, "text": " Minecraft again due to my extensive research into the technicalities of redstone. Ili Cafe" }, { "start": 168.88, "end": 174.48000000000002, "text": " five years five years of that coffee will you know get you through a PhD hopefully." }, { "start": 175.28, "end": 185.20000000000002, "text": " There are the tweets who that got me into trouble. 
Yeah there's also trigger happy Gandhi" }, { "start": 185.2, "end": 192.23999999999998, "text": " asking you earn 80k just for a PhD. Yes yeah we are like the best paid PhD students on the planet." }, { "start": 192.23999999999998, "end": 199.11999999999998, "text": " It's fantastic can recommend. There is a Deep Judge logo which is the thing I'm going to do next" }, { "start": 199.11999999999998, "end": 204.95999999999998, "text": " which is a legal tech startup. If you need legal tech please buy our stuff." }, { "start": 204.96, "end": 212.56, "text": " And so on the inside you'll see Joe and obviously the Donald." }, { "start": 214.56, "end": 221.60000000000002, "text": " Oh I'm gonna have to reattach that again. Yeah so because I have lost a bit of money betting." }, { "start": 221.60000000000002, "end": 228.8, "text": " I bet on the you know the really old dude and it turned out the really old dude won so I lost." }, { "start": 228.8, "end": 235.92000000000002, "text": " Yeah so this is this is sort of a bunch of memes throughout my PhD. I'm gonna reattach the the Vim" }, { "start": 236.8, "end": 243.60000000000002, "text": " you know you don't want to that dropped. So yeah I you know thanks to to all my lab mates" }, { "start": 243.6, "end": 259.44, "text": " that this is this is really cool and yeah I'll see you around the corner bye bye." } ]
X4S8F3bwuuw
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Author Interview: SayCan - Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
[ "Science & Technology" ]
[]
#saycan #robots #ai This is an interview with the authors Brian Ichter, Karol Hausman, and Fei Xia. Original Paper Review Video: https://youtu.be/Ru23eWAQ6_E Large Language Models are excellent at generating plausible plans in response to real-world problems, but without interacting with the environment, they have no abilities to estimate which of these plans are feasible or appropriate. SayCan combines the semantic capabilities of language models with a bank of low-level skills, which are available to the agent as individual policies to execute. SayCan automatically finds the best policy to execute by considering a trade-off between the policy's ability to progress towards the goal, given by the language model, and the policy's probability of executing successfully, given by the respective value function. The result is a system that can generate and execute long-horizon action sequences in the real world to fulfil complex tasks. OUTLINE: 0:00 - Introduction & Setup 3:40 - Acquiring atomic low-level skills 7:45 - How does the language model come in? 11:45 - Why are you scoring instead of generating? 15:20 - How do you deal with ambiguity in language? 20:00 - The whole system is modular 22:15 - Going over the full algorithm 23:20 - What if an action fails? 24:30 - Debunking a marketing video :) 27:25 - Experimental Results 32:50 - The insane scale of data collection 40:15 - How do you go about large-scale projects? 43:20 - Where did things go wrong? 45:15 - Where do we go from here? 52:00 - What is the largest unsolved problem in this? 53:35 - Thoughts on the Tesla Bot 55:00 - Final thoughts Paper: https://arxiv.org/abs/2204.01691 Website: https://say-can.github.io/ Abstract: Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. 
The project's website and the video can be found at this https URL Authors: Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
So today we're here with three of the authors of this paper, with, I have to say, a lot of authors. It seems like a giant work, just from what I could gather from the paper itself and the data collection and the evaluation and so on. So this was a huge thing, but the results are pretty cool. So here with me today are Fei Xia, Brian Ichter, and Karol Hausman, who are three of the authors of this work. Welcome to the channel, everyone. Thanks. Thank you for having us. It's great to have you here. I love the title, because it's a bit of a twist on the mantra "do as I say, not as I do", which is kind of the other way around right here. And this idea of connecting robots and language, it seems pretty natural, I have to say. I've seen a number of papers attempt to do something like this, like, can we maybe translate what the language model says into the space of what the robot understands, and things like this. But this here, it seems like a bit of a new approach. Why did you attempt to do this? Like, why does this seem promising, and why did no one else do this thing yet? Yeah, I think, to start, with the prior work on using a language model to kind of translate it down: I think we first started out with sort of playing around with that, and realized how much information is imbued in these language models and how well they're able to reason over sequences and remember what they've done. But when we really started thinking about applying it to the world, it was sort of odd that there's no way to basically make sure that whatever it's saying actually makes sense for the environment it's in. And so I think, after playing around with that for a while, we were sort of stuck there, like, okay, we have these interesting plans, but they don't actually make sense for everything that the robot can do, and so we started kind of shifting towards that problem. Yeah, I think also, separately, we've been trying to get robots to do many things and learn multiple skills, and this is a very difficult problem, and we were debating kind of the best way to do this, whether we should predefine the skills up front or whether we should just demonstrate kind of anything that comes to mind and label it afterwards. And just connecting these two dots, the language models with the skills that we already have on the robots, seems like a nice way of factorizing this problem. So you have this robot in this environment, and, if I understood correctly, maybe here is a good demonstration of that, so you have the robot in these two environments, and these are the environments that exist, do I understand this correctly? So it's only these two environments, there's no generalization across environments? Yeah, so we've been collecting data in both environments. These are the two environments that we use for evaluation. We also have a separate environment, right next to the environment that is marked as B here, where robots are practicing, but it looks fairly similar; at least the stations that the robots practice on are fairly similar to the stations that you see here. The backgrounds are changing, the objects that we practice with are changing, and things like that. We also use simulation as an additional environment that we then try to make look similar to the real world. But we don't really focus in this paper on generalization to completely new environments.
We rather try to focus on having a robot do as many things as possible in a single environment.

When we talk about robots practicing things, I guess that's where your method starts: with robots practicing things. And by "things" I guess we mean a bunch of very low-level, let's call them unitary, skills, like here, for example: find a Coke can, pick up the Coke can, bring it to you, something like this. So these could be things that conceivably we could learn with something like behavior cloning. How did you decide on what actions are possible for these robots to do on their own, like as a unit?

Some of it is based on what the robot is capable of, some of it is what gives us an easy reward function, and some of it was motivated by what composes well into long-horizon behaviors that you really want to do in the world. If we have a robot operating in a kitchen, what would I ask it to do, what's required of it to do that, and how would I break down the task? That was part of the motivation: really, how is this robot going to operate in the world.

Yeah, and it's also interesting to see how this picture came out. Initially we had to come up with these, and we had to think up front: what would a person ask a robot to do? But now that we have something running, we can actually ask people, see how they interact with the robot, and decide which skills we should be learning next based on that.

Sorry, I want to add that at the beginning we chose pick and place, because these are two fundamental skills that can unlock a large number of instructions that we are able to solve. But it is also very easy to add new skills into the picture: we only need a language description for the skill, and we also need a policy and a value function. These are the three things you need to import a new skill into the SayCan framework.

What I like here is that you said you need a policy and a value function. That policy doesn't even have to be a neural-network-based policy; conceivably, one skill can be a very classic control problem. I believe when you pick up things, is it correct that you classically control where the actuator should go, and when you move the robot, you kind of plan in space? So not everything is reinforcement-learned or behavior-cloned?

Yeah, different skills are learned differently. In this case, pick was learned through behavior cloning on real data, but, for instance, moving around is not trained with reinforcement learning or behavior cloning. So yeah, you can compose, you can have different algorithms train different skills.

And these skills, just to round out the picture right here: the input is whatever the camera sees, plus kind of all the states of the actuators. So conceivably, there's an apple in front of you and the task is "pick up an apple", and that would be the state from where you operate?

That's right. Yeah, the image and the state of the actuators, so that's the input.

And the value function describes how likely you are to fulfill that task?

That's right. Yeah, the input to the policy is the image that the robot sees, which you get after every action. We actuate the arm by doing end-effector position control. These are the inputs and outputs.

And also there's a terminate action, right?
Sorry, so the robot can say itself when it's done?

Yes, one of the actions that the robot can command is terminate, which basically means: I'm done, now we can move on to the next one.

Okay, so now I guess this is one part of the puzzle. You have robots, you have all these policies for all the little things that the robots can do. These little things were developed by you, by the community. Conceivably, you could also use the large language model itself to suggest new things to train, right? On a basic level, you could ask GPT-3: what would you do right here? And then the little steps you could conceivably train into little actions. But you have this library of things, and now the question is: how do you compose them? And that's where the large language model comes in. Do you want to comment a little bit on how that looks in a basic way? How do we combine the knowledge of language models with these skills that the robots can do?

Yeah. I guess at a high level, the language model already has so much knowledge about the world, and how to do things in order, and memory, and things like that. And the way to get it to really speak in a way that is amenable to the robot: first, we show it a few prompt examples. So we show it solving, you know, about ten problems, breaking each one down from the query into the sequence of steps that it would take to solve it. It turns out you can actually not use that and you still get some level of performance, maybe like half the performance. So the language model comes out of the box with a pretty good understanding of these tasks, and showing it these examples brings it into the right frame of thought. But if you do that and you ask for something new, it doesn't fully constrain the output in a way that the robot will be able to understand. Our tasks, along with the image and the states that we mentioned before, also take in a task ID, which says something like "pick up the apple". So really what we need it to do is output "pick up the apple"; it can't say "pick up the fruit", because the low-level policies are not generalizing to that.

So to make sure that everything we output is something the robot can do, instead of taking the generative output of the language model, we use what's called a scoring model. When a language model outputs some text, that text also comes with a probability that the model would output it. So instead, we can force it to only respond in these fixed ways, and ask how likely it is to respond in each of them. In this case we get a score for, say, "pick up the apple" or "put the apple somewhere": these are the things I'd be likely to respond, these are the things there's no way I would respond. And this gives us some probability that the language model thinks each option is really useful for the downstream task. On the other side, we have these value functions and policies that we've talked about; the value functions output how likely it is to achieve a task (I think there's actually one more slide down), basically saying: these actions are possible from this state. So on one hand we have a language model saying "this seems really useful for the task", and on the other hand we have the value function saying "this seems possible", and together they give some probability that this is what you want to do to accomplish the high-level instruction.
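As a rough sketch of the decision rule just described (not the authors' actual implementation; `lm_log_prob`, `value_functions`, and the `skill` objects are hypothetical stand-ins), the combination might look like this:

```python
import numpy as np

# A rough sketch of combining LM usefulness with value-function feasibility.
# `lm_log_prob(prompt, text)` is a hypothetical helper returning the LM's
# log-probability of `text` as a continuation of `prompt`; `value_functions`
# maps skill names to functions estimating success probability from a state.
def select_skill(instruction, history, skills, lm_log_prob, value_functions, state):
    scores = []
    for skill in skills:
        prompt = f"{instruction}\n{history}"
        # "Say": how likely the LM is to continue the plan with this skill,
        # e.g. scoring the text "pick up the apple" as the next step.
        usefulness = np.exp(lm_log_prob(prompt, skill.description))
        # "Can": the skill's value function estimates its success
        # probability from the current state (the affordance).
        feasibility = value_functions[skill.name](state)
        scores.append(usefulness * feasibility)
    return skills[int(np.argmax(scores))]
```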
I have a number of questions. Okay, let's just start at the beginning, at the language model level. I see the high-level picture: you ask both the language model and the value functions what they think should happen next, and the combination of the two is what you then really do, which makes a lot of sense. When you ask these language models what to do, you said you essentially ask the language model for the likelihood of an output, instead of letting it generate the output. Was this your first try? Because one could also imagine saying something like "of the following options, which one would you pick?" and then listing all the options, which would conceivably be more general, because you could add options over time. I guess you could do that here as well, but was this your first attempt, or did you have some prompt-engineering attempts before that?

Yeah, at first we tried just prompt engineering, to see what the generative model would output. Our initial thinking was that we just want the generative model to plan as much as we can, but that runs into two problems. One is that it doesn't constrain the output fully. So if I give it all these examples and then ask "how would you put a fruit on the table" instead of "an apple on the table", the generative model will actually respond with "number one, find a fruit; number two, pick up the fruit", and then you need to figure out how to take that and project it onto the final thing that the robot can actually handle. You can project this in some sort of embedding space, and that works sort of well, but you actually lose some context on the overall query. So I guess the way that we do it is a little more well-founded, so to speak. The other really nice benefit of scoring is that it gives us scores for everything, which is really interpretable: it lets us see the trade-off between the options. In your example, if I just said "here are your options, pick one", the language model would probably pick one, but then you only know that this is its favorite option. You don't know the probability it assigns to it; maybe it's actually almost as happy with the next three options. So this gives us an interpretable score that we can then combine with the value functions.

Yeah, there are some caveats to this, I feel. For example, we know that by definition longer outputs are less likely, right? I guess it's not too much of a problem for you, because most of yours are like three or four words. But have you noticed any effects of how these probabilities are constructed, as multiplications of softmax outputs? That's got to bring its own bias into the picture. Have you observed any of that, had problems with any of that, or was it generally okay?

Yeah, it's definitely a little bit of an issue. It's also very particular: if you were to misspell a word in there, or have an "a" versus an "an", the scoring is not particularly robust to those in the options. It is robust in the query, in what the user might say, but not when you're scoring these options, because if one word is off, then the multiplication over the words just tanks the entire score. So we did have to be somewhat careful with what we had. One way to get around this a little bit is to have an end-of-statement token: if the model would add extra words at the end, the end token basically normalizes the rest of it, since you can't end a statement early. The other thing we did try is to normalize the scores, knowing that one option is longer, so perhaps we need to upweight it or put some normalization on the language output. But we found that it wasn't particularly consistent; there wasn't a constant effect one way or the other, and it depended on the way you referred to the query. So at the end of the day we just took the outputs as they were. It was an issue, but it wasn't a huge one.
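To make the length bias concrete: sequence likelihood under an autoregressive model is a product of per-token softmax probabilities, so in log space every extra token adds a non-positive term. A small illustrative sketch (`token_log_probs` is a hypothetical helper, not a real API):

```python
# Illustration of the length bias discussed above. `token_log_probs` is a
# hypothetical helper returning one log-probability per token of `text`
# continued from `prompt`.
def option_score(prompt, text, token_log_probs):
    # log p(text | prompt) = sum of per-token log-probs; each term is <= 0,
    # so longer options can only accumulate an equal or lower score, even
    # when every individual token is likely.
    return sum(token_log_probs(prompt, text))
```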
I imagine there's another factor here. For example, if you say, as you said before, "pick up a fruit" or "please bring me a fruit", you're essentially relying on the ability of the large language model to recognize that an apple is a fruit, and to interpret that the way a human would. Did you find that the model's estimates of how close things are in language generally agree with how humans see it? I'm wondering about this notion of how close things in language are together. Also, what happens if you have, for example, an apple and an orange in your scene? These two things would be quite close together, so even if you said "please pick up an apple", the "pick up an orange" option would conceivably score quite high in the language model, which might perturb things. I can sort of make out that you have an ideal environment right here, in that you probably picked objects that are distinct from each other, and locations that are fairly distinct from each other, such that there's a nice semantic gap between the things. Do you think this is applicable to a real-world setting, or what kind of hurdles could there be in connecting language models to a set of actions in this way?

So I guess the first question was about whether these similarities align with what you would expect, and that was actually one of the first things I looked at: how well do these scores match up to what you think they're going to be? It turns out that apples and oranges and bananas all score quite highly when you're asking for a fruit. If you ask for a snack, all the food options score highly; similarly for a drink, a soda, any category like that. It performs about as you would expect as a human, which is good. But then, yeah, there is this problem of: what if there's an apple and an orange, or what if there's an orange but not an apple? And that's where the value functions come in. This is actually one of the key reasons why we have to do this value-function grounding. Because if you just asked a regular language model that doesn't know what's there, how does it make that decision? Maybe it uses the wrong one, and then your plan isn't really correct, and the policies may not actually work. But the value function tells you: if there is an apple in the scene and no orange, then you're going to see a high value on "pick apple", because the pick-apple command could work, whereas the orange command is going to score quite low. And so that actually lets you disambiguate this.
So in figure B, if there were a Red Bull, and you said "bring me a drink" and there's a Red Bull but no water, it's going to pick up the Red Bull, because that's actually what's there.

And if not, then the instruction itself is ambiguous, right? If you say "pick up a drink" and there are two drinks, and both are affordable according to the value function?

Yeah, then we think either is completely fine. I think it's also interesting, because then the robot is making the trade-off itself, depending maybe on the value function. For instance, if you ask for a fruit and there's an orange and an apple, but it's much better at picking up apples, maybe it will pick up the apple, because the value function will tip the scale. So it will make some errors in that sense, but since this is interpretable and you can look back and see why it decided that way, it can also inform us as to which skill we should train a little bit more, or which value functions are a little underfitted, and things like that. So it will make some mistakes, but maybe that's acceptable.

I think one really nice feature of that, too, is that it's not necessarily always that it's better at picking up oranges or apples: these objects can be in different locations, and one may be better for the policy than the other. So we're going to end up doing the one that's a little more robust and a little more likely to succeed, as long as it still fulfills the high-level query.

Yeah, I like the fact that you have success probability as the ultimate score, because I also thought one failure mode here is that some tasks are inherently harder than others, so naturally the value function would be lower, and you could be misled just by that fact. This is me, the procrastinator: this thing seems really hard, so we'll do this other easy thing instead. It's almost too human how the robot would act in this way.

What I like here as well is that you have the bank of value functions on one hand and the language model on the other hand, and they are never, if I understand correctly, trained together, right? In fact, the language model is probably just frozen. So they're never trained together, which means that you could conceivably just add a skill to the robot, train its value function for it, and just plug it in and go.

Yeah, we can scale this fairly easily, so we can continue adding skills. We can also change the underlying algorithm for how we train the skills, or how we train a particular skill that we want to add. If suddenly there is a really good script that allows the robot to, I don't know, swipe the floor or something like that, we can add that too, as long as we have a value function for it. And at the same time, if the language model becomes better, we can swap out the language model and get improvements through that.

I want to add that our current value function is one way that we instantiate affordance, but there are many other ways we can instantiate it; for example, we can directly do prediction.
We can also use classical motion planning, to calculate, for example, the length of the trajectory, or the probability of success if you do sampling-based motion planning. So there are many ways to arrive at the affordance, and the method is really flexible: you can plug in any type of affordance.

I guess a big topic, maybe more in the space of blockchains and things like this, is agents that take an action for you but also optimize, for example, for cost or for resources. This could directly flow into that: you could tell the robot, do whatever fulfills my task, but also costs very little. If this flows directly into the affordance (there might be a normalization issue), you could tune the knobs on these functions fairly easily.

So this is the full algorithm, I guess. We haven't talked yet about how you extend this to multiple steps, but as far as I can tell it's fairly easy, in that you do this in a stepwise fashion. First you ask your language model and your value functions, at the current state and the current camera position, what should be done. Then you execute whatever should be done according to both scores combined, and after you execute it, you ask the same thing again, but now the prompt changes. Here the prompt is essentially "I would first", and then the first action is decided; once you go on, the prompt says "I would first [whatever was decided on], and then second", and then it's the same thing with the next action. Did I get this approximately correct? And do you pay any attention to whether or not the task was fulfilled successfully?

So right now we don't; we assume it will successfully execute. Some things could happen: if it fails at a navigation task, say it was trying to navigate to an apple and it doesn't get there, then the value functions at that next state are going to be quite low, so you're not going to be able to pick something up or whatever. So maybe then you end up selecting "navigate to the apple" again, or "navigate to a table" instead. But we don't have any explicit success detection. I think this is one area that we're pretty interested in: basically finishing the job, closing the loop entirely on whether, when you tried to do something, you succeeded, telling the language model, and then having the language model adapt accordingly.
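A rough sketch of that stepwise loop, reusing the hypothetical `select_skill` helper from the earlier sketch (`env` and the "done" skill name are illustrative assumptions, not the authors' API):

```python
# A rough sketch of the stepwise planning loop described above.
def execute_instruction(instruction, skills, lm_log_prob, value_functions,
                        env, max_steps=10):
    history = "I would first"
    state = env.observe()
    for step in range(1, max_steps + 1):
        skill = select_skill(instruction, history, skills,
                             lm_log_prob, value_functions, state)
        if skill.name == "done":      # the LM decides the plan is complete
            break
        env.execute(skill)            # run the low-level policy until it terminates
        state = env.observe()         # score the next step from the new state
        history += f" {step}. {skill.description}, and then"
    return history
```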
I want to show one video from your website which, if I got this right, confused me a little bit. This thing right here, if you see it, kind of looks around, sees all the things, and then it scores the actions: pick apple, I can't do that; pick sponge, okay; bring you a sponge, no; go to trash can, no; place the sponge, place the sponge is good. And "place the sponge" kind of outweighs "bring you a sponge". What's going on right here? Because in my estimation, the robot shouldn't even look around initially; the robot should just have its camera position fixed, and in the first instance it should probably figure out "find a sponge" or something like this, and then it would move, and then it would see and consider these next actions. What is this video supposed to show?

Yeah, I think your understanding is completely correct. This is more of a conceptual video, where we wanted to get across that it can accomplish longer tasks. But you're right that the way it would actually happen is that it would look at the current image, then decide that it first needs to find a sponge, or maybe pick up the sponge if the sponge is already available, then append that to the prompt and continue. We just wanted to make it short, so that you still get that idea across with only a single image. So it might be a little confusing; it doesn't, I think, fully depict how the method works.

Yeah, I think we just got excited by the visual of a language model sort of seeing nothing and then waking up and saying: oh, I'm a robot, okay, here's my history of what I've done before, okay, depending on that, what I thought made a lot of sense doesn't make any sense anymore. So it was more excitement than anything else.

It does look pretty sweet, especially the effects, the zoom, seeing what's around. You use, by the way, we've not shown this yet, these Everyday Robots constructions, which look semi-creepy but also quite cool, especially when they pick up stuff: they hold it behind their back like a mixture of a butler and someone who just has a knife and wants to stab you. But pretty sweet, and it works surprisingly well.

So maybe we can talk about the results a little bit next, because my next question would be: how well does it actually work in the environments where you tested it? Do you want to comment a little bit on what the general results were? And then you have some ablations. Fei, do you want to take this?

Yeah, I think I can take this. So we tested this on two environments: one is a real office kitchen, and the other one is a kind of mock office kitchen, shown in figure five, I think. Here are the test environments: A is the real kitchen and B is the mock kitchen. There are 15 objects that we focus on, and also five semantic locations. These locations are semantically meaningful, like table, trash can, close counter, far counter, and a robot operator location, which is where, when we say "bring it back to you", it is supposed to bring things back to. We test on 101 instructions from six or seven categories, mainly to test different capabilities of the robot. For example, can it understand synonyms, like noun synonyms or verb synonyms? What does "throw away" mean? "Throw away" means bring something to the trash can; "recycle" also means bring something to the trash can. We also test structured language, which is just verb-noun compositions. And we test embodiment, which means we test whether the robot understands what its current embodiment is; for example, if I've already picked something up, I shouldn't try to find it again, because I already have it.
We also test on crowdsourced queries, which are basically unstructured human queries, from coworkers for example, and on long-horizon tasks, which include some really challenging instructions, such as "I spilled my Coke on the table, how would you throw it away and then bring me something to clean?" That's a really challenging task: the robot needs to understand what "spill" means, and what tools you can use to clean up a spill. So these are the instructions that we tested, and overall I think we achieved a 71 percent planning success rate and a 66 percent execution success rate. The hardest ones are the long-horizon tasks; I think we only have about a 30 or 40 percent success rate there, and we are working on improving the success rate on those. Brian, if you have anything to add.

Yeah, the only thing I was going to say is that the long-horizon ones are particularly challenging, both from the reasoning and the language side. But a lot of the issue comes from compounding errors: even if you have a 90 percent success rate manipulation policy, which is still quite high, every step you take reduces the probability that your overall plan is going to succeed; chaining five such steps, for example, already drops the expected overall success rate to about 0.9^5, roughly 59 percent. So that's a big challenge, and we want to get our manipulation policies, and each of our low-level skills, better and better. But also having some sort of closed loop, so that the language model knows to retry, would be really helpful here.

And I saw in the results, which was pretty interesting, that you did ablate a lot of these things. For example, you ablated what happens if you drop the language model, and what happens if you drop the scoring model, and generally the results were much worse in both cases, which was pretty cool to see, and not always the same. Except in this one, just to understand this correctly: if you drop the scoring and use the generative model, which uses a large language model and projects its output to the nearest skill via an embedding, that is actually better than your original approach in this verbs category. Is that just noise, or is there something behind it?

My guess is that it's more noise than anything else. But there were definitely times where we saw it really fail in certain circumstances. So embodiment, because there's no value function there to tell the model that it can't do something, is a real issue for it, and there were a lot of failures for anything that didn't have a value function. I think we saw some pretty interesting differences between the no-value-function version, which is the scoring model only, and the generative model. Some of the issues with the generative model came around nouns, for instance, and this is because of the projection. Say I said "I just worked out, I want a snack": the generative model's plan gets projected to "bring me a snack". But really what I want is a snack to help me recover from my workout, and that little bit of information is enough to say it's probably not potato chips, but maybe something healthier. Similarly, a drink would lose a lot of its information. So on the noun ones we saw that the projection ended up losing this information, and that cost a lot of the success rate, whereas the scoring model did okay across the board, though maybe not as smoothly in the verb category.
Another really fascinating thing here, at least in my opinion, is the sheer scale of data collection in this project. I made a few notes, and at one point it says something like: you use a lot of human labelers, for example, for the success rate of these little policies. So even when you train these small, let's call them unit, policies, you use humans to see whether they're correct or not, with three human raters per execution, and you give one single sparse reward if two out of three agree. This scale seems immense. How did you determine this was the best way to spend the human time, and not, maybe, gathering noisier but three times more labels, or something like this? How did this come to be?

Yeah, this is a good question. I think we are still figuring out a lot of these questions, like how to spend human time in the most efficient way, the way that helps the policies the most. There is the question of crowd labeling, as you mentioned: how much noise can you tolerate in the reward function, compared to the throughput you get? Also, how much time should you spend collecting human demonstrations, versus having humans just supervise robots collecting data autonomously? How much time should we spend developing assets and policies in simulation and transferring them to the real world? We are still trying to find the trade-offs between all of these; I don't think we have very good answers right now.

As for the labeling itself, we noticed in previous projects that noise on the reward signal can have a big influence on performance. That's why we decided to have three labelers, two of whom have to agree to mark the reward. We also had additional questions, such as: was the behavior undesirable, or unsafe? These are sometimes quite ambiguous, so it actually helps quite a lot to have multiple people look at the video and tell us what they think.

Did you always have these additional things in? So, as you say, and you also wrote this down somewhere: unsafe, undesirable, or infeasible. Did you always have this in, or was this a development that happened over time, where you realized, oh crap, we're asking people how likely the robot is to pick up an apple, but there is no apple in sight, and things like this?

Yeah, some of them we added. Initially we knew that safety is a big problem, so we started with that question. Then we noticed that sometimes the robot would do something that isn't necessarily unsafe, but we still don't want it to do it; for instance, it would touch an object that it wasn't supposed to touch, or it would poke something and it would fall off the table. So then we added "undesirable", which has a slightly different definition, and which we can also optimize for differently in the reward function. And then, regarding the last one, infeasibility: this is something we noticed with reinforcement learning algorithms. If you add a lot of data where the task wasn't feasible, then even though the data is technically correct (the robot didn't accomplish the task, so it got reward zero), it seems to influence the RL algorithms in a bad way. So we added this in addition, to prevent that, and to potentially filter for this data, or see how we can change the RL algorithms to handle that kind of data better.
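A hedged sketch of how such a labeling rule could be encoded; the field names and the filtering helper are assumptions for illustration, not the authors' actual pipeline:

```python
from dataclasses import dataclass

# Hypothetical per-rater annotation of one episode.
@dataclass
class Rating:
    success: bool
    unsafe: bool = False
    undesirable: bool = False
    infeasible: bool = False

def episode_reward(ratings):
    # Single sparse reward: 1 only if at least two of three raters agree
    # that the task was accomplished.
    assert len(ratings) == 3
    return 1.0 if sum(r.success for r in ratings) >= 2 else 0.0

def keep_for_rl(ratings):
    # Drop infeasible episodes: zero-reward data for impossible tasks was
    # reported to hurt the RL algorithms.
    return not any(r.infeasible for r in ratings)
```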
And why do you only give a single reward? Presumably a human watching a video like this could, every couple of frames, say: yeah, good job robot, that's the right way; oh no, don't do that. Essentially like the game "warmer, warmer, colder, colder", which would give a much denser label space. Was this a technical limitation, or did you consciously choose to say: no, it's one single reward, one when you fulfill the task and zero everywhere else?

Yeah, I think there are a few reasons for this. First, the ambiguity that comes with it. It's already sometimes difficult to decide whether the task was accomplished correctly, or whether it was undesirable or not. If in addition you have to add a continuous signal of whether the robot is going in the right direction, that can be fairly ambiguous, depending on what the robot is doing. Secondly, we made a decision some time ago that optimizing for sparse-reward tasks would be more scalable for the future. There are tasks where it's quite difficult to say whether the robot is going in the right direction, and sometimes it accomplishes a task in a surprising way; we don't necessarily want to eliminate that and introduce the human bias of "well, I think it should go that way". So the RL algorithms we've been developing have also been optimized for the sparse-reward setting. That was another factor we thought about when considering the reward function.

So, speaking about doing it like humans: there's yet another set of data collection in this project, which is that not only do you collect labels, you also do a considerable amount of behavior cloning, essentially learning from human demonstrations, with another set of data gathered from what you call teleoperator sessions. How can we imagine such a teleoperator session? How many of these kitchens and robots do you have, and how long does it take to gather a data set that you could conceivably do behavior cloning from?

Yeah, I think we specified in the paper that we gathered, at that point, around 70,000 demonstrations for all these different tasks, across 11 robots, I believe. We built little stations for the robots, like the stations that you can see in the picture here, where the robots can practice these things and people can demonstrate how to do them. I think we are still trying to see, if we filter the data set, for instance, how much we can filter it and still get really high results; we don't have very good answers to that yet. But this is something we're looking into: the trade-offs between how many demonstrations you collect, how much autonomous data, and so on.

And because this is at Google, which is a company, and sure, there's a cash cow that generates infinite money, but there's got to be some kind of constraint on you. How does this work?
What is robotics at Google? What is your mission there, and how do you pitch such a thing to management? Essentially, you want to collect 70,000 teleoperated sessions; every time, a human, presumably not a random human, because they would just crash the robot out of spite, but a trained, trusted human, needs to sit down and spend their time, and the robots are quite slow as of now. There's got to be a considerable budget behind all of this data collection and labeling. Do you have to make a case for that, or are you relatively free in doing this? How does your work look from the business perspective?

Yeah, I think in any company you have to make a case, and even in academia you have to make a case for your project: why you think this is how the money should be spent, and where the resources should go. Usually the way we justify it is by showing step-by-step results, and showing where this is going to go if we extrapolate. We've done some projects previously where we showed reinforcement learning at scale with six robots, or behavior cloning at scale with just two or three robots, and we started seeing that with the amount of data we collected there, we already get some interesting results. And now, if we want to get these robots to do many more things, we need more robots and more data. This is one big bet that we have in robotics at Google: that large-scale machine learning could be a way to really help robotics. So we want to de-risk some of those questions for the community: if we can actually buy a lot of robots and provide a lot of demonstrations, how does it scale? How does it work?

I think one of the figures in the appendix actually shows the way that we built up these skills one by one. I don't know what page it's on, but it's a little higher than that. Yeah, this one shows how these skills were built up over time: how more and more skills were added, and more and more data was collected each time, seeing signs of life for the algorithms' performance and improving upon that. You can see that from time to time there's a new skill being added, so that goes from zero up; in the meantime, the underlying code is also changing. So it's improvements over time.

So this goes up and to the right, which is what we all love. And were there major downturns in this project, times where things didn't seem to work out, or you didn't exactly know what the problem was, things like this? Could you give us a bit of a look behind the scenes into when things go wrong?

No problem. There's quite a lot; I'm just trying to think which one to tell you. There's quite a lot, also from previous projects, but one thing that was quite surprising to me personally, and that I think we are still working on, is this: if you classify approaches into, let's say, imitation learning and reinforcement learning, then if you spend enough time and data on either of them, you can get them to work. Most of the results that you see here are from behavior cloning, but we can achieve very comparable results with reinforcement learning, either by transferring policies from simulation, continuing to collect with that policy, and fine-tuning it to high performance,
or by just bootstrapping from real data and improving upon that. But what is quite surprising is that combining these two has been quite tricky: having a single algorithm that can digest all of that data, the demonstrations as well as the autonomous data, the data we collect in simulation, and so on, and that has all the desired properties, meaning it performs at least as well as behavior cloning but can also improve autonomously. This has been quite surprising and tricky.

I want to make a bit of an outlook right here, because it seems we have a pretty cool way to go from skills that are described by language. But you have to define them; let's just scroll to one of them. You have to define them ahead of time, right? You have to define "pick up the Coke can", "bring it to you", "find the Coke can", and so on. You have to design these, and even though they're described by language, they're a pretty fixed set. Now, the first thing one might think about is how to extend that set, and not necessarily just linearly extend the data. But I'm thinking of something else: when I say "please clean up the table", you might not know what's on the table. So we need a concept of almost a variable, or an unknown, so the plan could be "go to the table and then decide what to do next". The language model would have to get feedback, either from the value functions or from the picture itself. Is that anything that's on your radar? What if I have to adjust my plan on the fly to the state that I'm going to encounter? How could this model be extended to handle that? Let's say all the actions are in your action space, but you just don't know at the beginning which ones you're going to take.

Yeah, I guess right now we count on the value functions to collapse whatever the plan is into the thing that is actually possible in the world. One of the most straightforward ways to do it, though maybe not straightforward in practice, is to use things like visual transformers, or structured scene representations, that actually tell the language model what's possible, so that it can start reasoning over it earlier on. The other option is to add in something like success detectors, which say: okay, you tried to do this and it wasn't possible; maybe you tried to find an apple and that was impossible; perhaps the next thing to do is to try to find an orange, which may actually be in the scene. So there's some combination of value functions giving it feedback about the scene. But right now we don't have anything that has the language model really reasoning over the steps, because the value functions take care of that interaction. One could fine-tune it on some data that allows it to do that; that is probably the most straightforward way, but whether it works is an open question.

I guess the other thing, and this would really close one of the loops, is if I also had a model that could take any visual input and describe what's happening in it. So I'd give it a video of something picking up a Coke can, and the model would come up with a label for it, like "this video shows: pick up a Coke can".
Then I'd have almost limitless possibilities. I could essentially let a robot move at random, let this model describe what it's doing, feed that to the language model, and so on. So instead of designing the actions it should train, I could just let it do stuff, have a model describe that stuff, and then use that. Is that a plan, or is there a major hurdle on the way there? Because that would result in an almost autonomously learning system, especially if you give it a good language model; the language model could even prompt what to try next, right? The language model could say: okay, what should I learn next? I should probably learn to pick up an orange. And then you just let the robot move around until the description model says: this looks like picking up an orange.

I can say something first, and then I will hand over to Karol, because he and Brian have previously worked a little bit on learning from play data, and what you describe is kind of similar to that. What I want to mention is that we find language to be a great kind of state abstraction, because people invented language to abstract over states; every word, every sentence is meaningful. There is some work showing that using language abstraction can improve exploration; for example, you can use it to guide your exploration and to summarize current states. So that's one potential direction we can go.

Yeah, I think there are multiple ways you can push this to an extreme. One small step in that direction would be, rather than having these predefined skills, to label everything in hindsight, as I think you're describing as well, and train policies based on the hindsight labels. So it's not just "pick up an apple", but however the person that looked at that video described it; that's the skill that the robot was performing. And then maybe you don't have to constrain the language model to pick across the skills that you trained; maybe you can just take the generative output and see how that works.

I think there is also potential research to be done in how much language can actually take away from the robotics problem, and how much it can help in solving it. Right now we are operating at a certain level of abstraction: you command things like "pick up the Coke can", and then the language model operates on that. But you can also imagine operating at a much lower level, which is just "move in this direction" or "that direction", and the language model commands all of that. You can choose where in that abstraction you want to be, and I think it's quite interesting that we can at least contrive things like this, because of how good language models are today.

Yeah, and to that, there are also works on using language to predict rewards over states. So that's one way to hook it all together; we have this general framework.

What's the biggest hurdle, the biggest unsolved problem, in pushing these sorts of everyday robots, not the company, but the expression, the robots that help us do our tasks? Where's the biggest roadblock in getting these to a point where they could actually be usable?

I think right now, given how much time we spend on different parts of the system, it's the skills themselves.
The bottleneck is still the robot actually doing the thing that you ask it to do. Even though these skills are simple, getting them to the place where they generalize to any environment, can pick up any object, even objects they weren't trained on, and do these tasks with a large diversity of objects and environments to very high performance, is still really hard. So if we get much better underlying skills, we will have made a big step towards this actually being very useful.

I was going to say that, along with those skills, the way that we use the value functions means that as a skill improves, so does the value function's estimate of what it can do. So it's nicely positioned to both use these skills and improve the overall algorithm by having a better estimate of the success probability. I think SayCan itself is at least set up in a good way to scale along as this bottleneck is relieved.

Last question from my side: what do you think of the Tesla Bot? And let me give you the short pro briefly: it is the ultimate platform, because the world is designed for humans, right? So if you have a humanoid robot, conceivably it could do anything a human can, at least mechanically. Does this sound good to you, or is there major skepticism? No comment. You can wager bets right now.

One thing that I'm maybe excited to see is that Tesla has the ability to scale things up quite well; they seem to be a really good hardware company. So it would be interesting to see how some of the problems change. This is also something we are researching: how the problems and the solutions change when you have many, many of these robots. So I would be excited to see whether they have any good insights there.

Are there last things that we haven't touched on yet that you would like people to know? Here, just for visuals, I'm showing some of the successful episodes at the end, which are quite impressive. This is just one robot; this is a collage, but very multi-step things. And I think that's just really impressive: very long-horizon planning, down to the individual actions.

Yeah, that's pretty cool. Any last thing you want to let people know? How can they get started? Where can they find out more information?

I just want to mention that we have the website; on the website we have a couple of videos demonstrating how the robot works, how the inference process works, the decision process, all the scores we have calculated, along with the robot execution. So if anyone is interested in how our algorithm works, definitely check that out.

I guess what I'm most excited about with it is how interpretable it is: you can actually see how the decision is being reached by the robot, you can see that the language model likes these things, and that the affordance model understands whether these tasks make sense or do not make sense in a given embodied environment. I think it's nice that it scales really well to adding in new tasks as we go. And then, towards how people would use it: to start, the paper and the website are a good place to go. I think we're planning to open-source a version of it in a more kind of toy environment in the coming months.
So hopefully that'll be an exciting, easy way to get in the mix with both this and language models. I think there's a lot of power in leveraging language models and giving them these hands and eyes to execute real-world tasks.

I also think you had a point earlier about how we use affordances, but really it's just a value function, and this value function doesn't necessarily have to map to an affordance. I think that's a really powerful idea: we're basically taking all the knowledge in a language model and then applying it with a value function that isn't even necessarily normalized to "can you do this or not"; it's more "what's helpful, what's possible" for whatever the RL-trained policy is doing. I think that's a really open space.

Yeah, I'm also quite excited about how language can chip away a little bit at the robotics problem. I think that's something we hadn't really thought about that much before, and we see that we can handle much longer-horizon commands, abstract commands and so on, while keeping the policies fairly simple. So I think it's quite exciting to see how much further we can push that direction.

Yeah, I think representations, and especially task representations, have always been such a challenge for robotics, and language has provided this really nice interface to interact with the robot, and then have the robot interact with the world.

Excellent. Well, Karol, Brian, Fei, thank you very much for being here. This was a lot of fun, and I hope to see you again soon.

Thank you. Thank you for having us.
[ { "start": 0, "end": 6.140000000000001, "text": " So today we're here with three of the authors of this paper with I have to say a lot of authors" }, { "start": 6.3, "end": 9.06, "text": " It seems like a giant work just from what I could gather" }, { "start": 9.540000000000001, "end": 14.46, "text": " From the from the paper itself and the data collection and the evaluation and so on" }, { "start": 14.46, "end": 21.46, "text": " So this was a huge thing, but the results are pretty cool. So here with me today are Faye Xia" }, { "start": 21.46, "end": 29.1, "text": " Brian Ictor and Karol Hausmann who are three of the authors of this work. Welcome to the channel everyone" }, { "start": 29.9, "end": 32.620000000000005, "text": " Thanks. Thank you for having us. It's great to have you here" }, { "start": 33.620000000000005, "end": 35.620000000000005, "text": " The I like I love the title" }, { "start": 36.14, "end": 41.58, "text": " Because it's a bit of a mantra on the do as I do as I say not as I do which is kind of the other" }, { "start": 41.58, "end": 43.86, "text": " Way around right here and this idea of" }, { "start": 44.7, "end": 47.7, "text": " Connecting robots and language. It seems pretty natural" }, { "start": 47.7, "end": 52.620000000000005, "text": " I have to say I've I've seen a lot number of paper attempt to do something like this" }, { "start": 52.620000000000005, "end": 60.06, "text": " Like can we maybe translate what the language model says into the space of what the robot understands and things like this?" }, { "start": 60.42, "end": 63.42, "text": " But this here it seems like a bit of a new approach" }, { "start": 64.02000000000001, "end": 68, "text": " Why why did you try? Why did you attempt to do this?" }, { "start": 68, "end": 74.30000000000001, "text": " Like why does this seem promising and why did no one else do this thing yet?" 
}, { "start": 74.30000000000001, "end": 76.46000000000001, "text": " Yeah, I think to start like the" }, { "start": 76.46, "end": 82.22, "text": " To I guess like prior work on like using a language model to kind of translate it down" }, { "start": 82.22, "end": 87.86, "text": " I think we first started out with sort of like playing around with that and and realized I guess how much information is" }, { "start": 88.25999999999999, "end": 93.02, "text": " Embued in these language models and how well they're able to reason over sequences and remember what they've done" }, { "start": 93.46, "end": 97.33999999999999, "text": " But when we really like started thinking about applying it to the world" }, { "start": 97.33999999999999, "end": 100.58, "text": " It was sort of like odd that there's no way to basically" }, { "start": 101.22, "end": 104.82, "text": " Make sure that whatever it's saying actually makes sense for the environment that was in" }, { "start": 104.82, "end": 109.1, "text": " And so I think like after playing around that for a while we were sort of like stuck there like okay" }, { "start": 109.1, "end": 111.05999999999999, "text": " We have these like interesting plans" }, { "start": 111.05999999999999, "end": 118.69999999999999, "text": " But they don't actually make sense for everything that the the robot can do and so we started kind of like shifting towards towards that problem" }, { "start": 118.74, "end": 125.46, "text": " Yeah, I think also separately we've been trying to get robots to do many things and learn multiple skills and" }, { "start": 126.3, "end": 128.29999999999998, "text": " This is a very difficult problem" }, { "start": 128.3, "end": 134.70000000000002, "text": " and we were debating kind of the the best way to do this whether we should predefined the skills up front or whether we should just" }, { "start": 135.54000000000002, "end": 140.14000000000001, "text": " demonstrate kind of anything that comes to mind and label it afterwards and" }, { "start": 140.86, "end": 145.64000000000001, "text": " Just connecting these two dots the language models with the skills that we already have on the robots" }, { "start": 145.9, "end": 149.18, "text": " Seems like a nice way of factorizing this problem" }, { "start": 149.22000000000003, "end": 155.86, "text": " Did you always could you so you have this robot in this environment and is if I understood correctly?" }, { "start": 155.86, "end": 162.46, "text": " Maybe here is a good demonstration of that. So you have the robot in these two environments and" }, { "start": 163.26000000000002, "end": 168.74, "text": " These are the environments that exist to understand this correctly. So it's only these two environments. There's no" }, { "start": 169.38000000000002, "end": 171.38000000000002, "text": " generalization across environments" }, { "start": 171.98000000000002, "end": 178.02, "text": " Yeah, so we've been collecting data in beautiful environments. 
These are the two environments that we use for evaluation" }, { "start": 178.82000000000002, "end": 183.98000000000002, "text": " We also have a separate environment that is right next to the environment that it's a" }, { "start": 183.98, "end": 187.26, "text": " Mark this be here where robots are practicing" }, { "start": 187.78, "end": 195.22, "text": " But it looks fairly similar to to at least the stations that the robots practice on are fairly similar to the stations that you see here" }, { "start": 196.22, "end": 202.14, "text": " The backgrounds are changing the the objects are changing that we practice with and things like that. We also use" }, { "start": 202.73999999999998, "end": 208.89999999999998, "text": " Simulation as an additional environment that we then try to make look similar to the real world" }, { "start": 208.9, "end": 213.06, "text": " But we don't really focus in this paper on generalization to" }, { "start": 213.58, "end": 219.46, "text": " Completely new environment. We rather try to focus on kind of having a robot do as many things" }, { "start": 220.98000000000002, "end": 227.78, "text": " In a single environment when we talk about robot practicing things, I guess that's where your methods starts with robots" }, { "start": 228.3, "end": 232.26, "text": " Practicing things and by things I guess we mean a bunch of very" }, { "start": 232.86, "end": 235.74, "text": " Low-level let's call them unit unitary" }, { "start": 235.74, "end": 242.78, "text": " Skills like here. For example find a coke can pick up the coke can bring it to you something like this" }, { "start": 242.78, "end": 244.78, "text": " So these these could be things that" }, { "start": 245.22, "end": 251.66, "text": " Conceivably we could learn with something like behavior cloning or something like this. How did you?" }, { "start": 252.82000000000002, "end": 258.74, "text": " Decide on what actions are possible for these robots to do on their own like as a unit" }, { "start": 259.46000000000004, "end": 262.46000000000004, "text": " Some of it is based on what the robots capable of some of it's like what?" }, { "start": 262.46, "end": 265.38, "text": " Gives us a like a easy reward function" }, { "start": 266.18, "end": 269.38, "text": " And some of it was sort of motivated by what?" }, { "start": 269.65999999999997, "end": 275.21999999999997, "text": " Composes well into long horizon behaviors that you really want to do in the world like if we have a robot operating in a kitchen" }, { "start": 275.21999999999997, "end": 279.78, "text": " What would I ask it to do what what's required of it to do that?" }, { "start": 279.78, "end": 286.14, "text": " And how would I break down the task? I think was like part of the motivation like really how this robot is gonna operate in the world" }, { "start": 286.85999999999996, "end": 290.02, "text": " Yeah, and also it's interesting to see how this picture came out" }, { "start": 290.02, "end": 294.74, "text": " So initially we kind of have to come up with these and we kind of have to think up front" }, { "start": 294.74, "end": 296.74, "text": " What would that person ask a robot to do?" 
}, { "start": 297.09999999999997, "end": 302.58, "text": " But now that we have something running we can actually ask people and see how they interact with the robot and" }, { "start": 302.97999999999996, "end": 305.65999999999997, "text": " Decide on which skills we should be learning next based on that" }, { "start": 308.97999999999996, "end": 314.09999999999997, "text": " Sorry, I want to add that at the beginning we choose pick and place because these are" }, { "start": 314.1, "end": 319.94, "text": " Two fundamental skills that can unlock a large number of instructions that we are able to solve" }, { "start": 319.94, "end": 325.54, "text": " But it is also very easy to add new skills into the picture like we only need to" }, { "start": 326.26000000000005, "end": 332.42, "text": " Have a have a language language description for the skill and we also need a policy and value function" }, { "start": 332.42, "end": 338.42, "text": " So these are all the three three things you need to import a new skill into the second framework" }, { "start": 338.42, "end": 344.18, "text": " What I like here is that you said you need a policy and a value function that policy doesn't even have to be like" }, { "start": 344.58000000000004, "end": 351.06, "text": " Neural network based policy conceivably one skill can be a very classic control problem" }, { "start": 351.06, "end": 353.06, "text": " I believe when you pick up things" }, { "start": 353.70000000000005, "end": 362.34000000000003, "text": " You is is that correct that you classically control where the actuator should go and when you move the robot you kind of plan in space" }, { "start": 362.34, "end": 369.38, "text": " So not everything is like reinforcement learned or behavior cloned" }, { "start": 369.38, "end": 375.85999999999996, "text": " Yeah, so different skills are learned differently in this case pick was learned through behavior cloning on real data" }, { "start": 376.9, "end": 380.82, "text": " But yeah, for instance for instance moving around this is not" }, { "start": 381.38, "end": 386.73999999999995, "text": " Trained with reinforcement learning or behavior cloning. So yeah, you can compose you can have different algorithms" }, { "start": 386.74, "end": 395.7, "text": " Train different skills and these skills just to to round out the picture right here the input is" }, { "start": 396.26, "end": 401.54, "text": " Whatever the camera sees plus, you know kind of all the states of the actuators" }, { "start": 402.02, "end": 406.26, "text": " So that conceivably there's an apple in front of you and the task is pick up an apple" }, { "start": 406.74, "end": 410.58, "text": " And that that would be kind of the state from from where you operate. That's right" }, { "start": 410.58, "end": 414.26, "text": " Yeah, we are in the state of the actuators. So that's the input" }, { "start": 414.26, "end": 422.02, "text": " From where you operate. That's right. Yeah, we are going the value function. The value function describes kind of how likely you are to fulfill that task" }, { "start": 422.58, "end": 427.38, "text": " That's right. 
Yeah, so the input to the policy is the image that the robot sees, which you get" }, { "start": 428.09999999999997, "end": 430.09999999999997, "text": " after every action" }, { "start": 430.65999999999997, "end": 434.18, "text": " We actuate the arm by doing end effector position control" }, { "start": 437.06, "end": 439.53999999999996, "text": " Yeah, these are the inputs and outputs" }, { "start": 440.65999999999997, "end": 443.46, "text": " And also there's a terminate action, right?" }, { "start": 443.46, "end": 447.62, "text": " Sorry, so that the robot can say itself when it's done" }, { "start": 448.18, "end": 455.38, "text": " Yes, so one of the actions that the robot can command is terminate, which basically means I'm done, now we can move on to the next one" }, { "start": 457.14, "end": 460.97999999999996, "text": " And okay, so now I guess that this is one part of the puzzle" }, { "start": 460.97999999999996, "end": 466.58, "text": " You have robots, you have all these policies for all the little things that the robots could do. These little things" }, { "start": 466.58, "end": 472.82, "text": " Were developed by you, by the community. Conceivably you could also use the large language model itself" }, { "start": 472.82, "end": 477.46, "text": " To suggest new things to train, right? On the basic level, you could ask GPT-3" }, { "start": 477.94, "end": 481.86, "text": " What would you do right here, and then the little steps you could conceivably" }, { "start": 482.41999999999996, "end": 486.65999999999997, "text": " Make into, like train into, little actions. But you have this library of things," }, { "start": 486.65999999999997, "end": 492.5, "text": " And now the question is how do you compose them, and that's where the large language model comes in" }, { "start": 492.5, "end": 498.66, "text": " Do you want to comment maybe a little bit on like how does that look in a basic way?" }, { "start": 498.66, "end": 504.1, "text": " How do we combine the knowledge of language models with these skills that the robots can do?" }, { "start": 504.5, "end": 508.34, "text": " Yeah, I guess at a high level, so the language model already has" }, { "start": 508.98, "end": 510.98, "text": " So much knowledge about the world" }, { "start": 511.78, "end": 516.74, "text": " And how to do things in order and memory and things like that" }, { "start": 516.74, "end": 522.58, "text": " And the way to get it to like really speak in the way that is amenable to the robot, first" }, { "start": 522.58, "end": 528.9, "text": " We show it a few like prompt examples. So we show it solving, you know, like about 10 problems" }, { "start": 529.46, "end": 534.1, "text": " And breaking it down from the query into the sequence of steps that it would take to solve that" }, { "start": 534.9, "end": 538.1800000000001, "text": " It turns out you can actually not use that and you still actually get like" }, { "start": 538.98, "end": 544.42, "text": " Some level of performance, maybe like half the performance. 
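To picture what such few-shot prompting could look like, here is a minimal sketch; the example tasks and the exact prompt wording are assumptions for illustration, not the actual prompts from the paper.

```python
# Hedged sketch of the kind of few-shot prompt described above; the real prompts
# and their wording are not public in this form, so treat this as illustrative.
EXAMPLES = [
    ("bring me a sponge",
     ["find a sponge", "pick up the sponge", "bring it to you", "done"]),
    ("throw away the coke can",
     ["find the coke can", "pick up the coke can", "go to the trash can", "done"]),
]

def build_prompt(query: str) -> str:
    lines = []
    for task, steps in EXAMPLES:  # the interview mentions roughly 10 solved problems
        plan = ", ".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
        lines.append(f"Human: {task}\nRobot: I would first {plan}.")
    lines.append(f"Human: {query}\nRobot: I would first")  # model continues from here
    return "\n".join(lines)
```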
So the language model just like comes out of the box" }, { "start": 544.42, "end": 546.42, "text": " With pretty good understanding of these tasks" }, { "start": 546.42, "end": 550.8199999999999, "text": " We then show it these examples, and this kind of brings it into the right frame of thought" }, { "start": 550.8199999999999, "end": 557.9399999999999, "text": " But if you do that and you ask for something new, it doesn't like fully constrain it in a way that the robot will be able to understand it" }, { "start": 557.9399999999999, "end": 564.9, "text": " So our policies, along with the image and the states that we mentioned before, also take in like a task ID" }, { "start": 564.9, "end": 568.9, "text": " So it says like, pick up the apple" }, { "start": 568.9, "end": 572.0999999999999, "text": " So really what we need it to do is like output pick up the apple" }, { "start": 572.1, "end": 578.34, "text": " It can't say like pick up the fruit, because the low level policies are not generalizing to that" }, { "start": 578.9, "end": 586.26, "text": " So to make sure that every time we actually output things you can do, instead of like taking the generative output of the language model," }, { "start": 586.4200000000001, "end": 588.26, "text": " We use what's called a scoring model" }, { "start": 588.26, "end": 594.4200000000001, "text": " So when a language model outputs some text, it also comes with a probability that it would output that text" }, { "start": 594.74, "end": 601.3000000000001, "text": " And so instead we can just like force it to only respond in these ways, and say basically how likely it is to respond in that way" }, { "start": 601.3, "end": 607.6999999999999, "text": " So in this case we get like a score of, if I were going to pick up the apple or put the apple somewhere," }, { "start": 607.6999999999999, "end": 609.6999999999999, "text": " These are the things I'd be likely to respond with," }, { "start": 609.6999999999999, "end": 611.6999999999999, "text": " These are the things there's no way I would respond with" }, { "start": 611.6999999999999, "end": 616.9, "text": " And this gives us some like probability that the language model thinks this is really useful to the downstream task" }, { "start": 616.9, "end": 620.9, "text": " On the other side we have these value functions and policies that we've talked about" }, { "start": 620.9, "end": 625.6999999999999, "text": " The value functions are actually outputting how likely it is to achieve a task" }, { "start": 625.6999999999999, "end": 630.0999999999999, "text": " I think actually there's another slide, like one more down" }, { "start": 630.1, "end": 636.1, "text": " Um, but this is basically, yeah, this is saying basically these are possible from this state" }, { "start": 636.1, "end": 641.3000000000001, "text": " And so on one hand we have a language model saying this seems really useful for the task" }, { "start": 641.3000000000001, "end": 645.3000000000001, "text": " And on the other hand we have the value function saying this seems possible" }, { "start": 645.3000000000001, "end": 650.9, "text": " And together they give some probability that this is what you want to do to basically accomplish the high level instruction" }, { "start": 652.34, "end": 658.34, "text": " I have a number of questions. Okay. 
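As a hedged illustration of what was just described, combining the language model's score for each admissible skill description with the value function's success estimate, here is a minimal sketch; `lm_logprob` and the value functions are assumed interfaces, not the actual implementation.

```python
# Minimal sketch of the combined scoring just described (not the authors' code).
# Assumed interfaces: lm_logprob(prompt, option) returns the summed token
# log-probability of `option` as a continuation of `prompt`; each skill's value
# function maps the current observation to an estimated success probability.
import math

def select_skill(prompt, skills, observation, lm_logprob):
    """skills: dict mapping a skill description, e.g. 'pick up the apple',
    to a value function value_fn(observation) -> success probability."""
    best_skill, best_score = None, -math.inf
    for description, value_fn in skills.items():
        usefulness = lm_logprob(prompt, description)   # "is this useful for the task?"
        affordance = value_fn(observation)             # "is this possible right now?"
        score = usefulness + math.log(max(affordance, 1e-8))  # combine in log space
        if score > best_score:
            best_skill, best_score = description, score
    return best_skill
```

Forcing the model to score a fixed option set, rather than generate freely, is what guarantees every chosen step maps onto a skill the robot actually has.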
Let's just start at the beginning, at the language model level" }, { "start": 658.34, "end": 664.1, "text": " I see the high level picture: you ask both the language model and the value functions" }, { "start": 664.1, "end": 672.1, "text": " What they think should happen next, and the combination of the two is what then you really do" }, { "start": 672.1, "end": 680.1, "text": " Which makes a lot of sense. When you do ask these language models what to do, right here" }, { "start": 680.1, "end": 690.1, "text": " You said you use, essentially, you ask the language model for the likelihood of an output instead of letting it generate the output" }, { "start": 690.1, "end": 698.1, "text": " Was this your first try? Because one could also imagine, you know, saying something like: of the following options," }, { "start": 698.1, "end": 708.1, "text": " Which one would you pick, right? And then you list all the options, which would conceivably be more general, because you could add options over time" }, { "start": 708.1, "end": 720.1, "text": " And stuff, like I guess you could do that here as well, but was this your first attempt, or did you have some prompt engineering attempts before that?" }, { "start": 720.1, "end": 732.1, "text": " Yeah, I think at first we tried just like prompt engineering to see basically what the generative model would output. I think like our initial thinking was we just want the generative model to basically plan as much as we can" }, { "start": 732.1, "end": 740.1, "text": " But that runs into two problems. One is that it doesn't constrain the output fully, so if I give it all these examples and then I said" }, { "start": 740.1, "end": 750.1, "text": " How would you put a fruit on the table, instead of an apple on the table, the generative model will actually respond with like number one find a fruit, number two pick up the fruit" }, { "start": 750.1, "end": 758.1, "text": " And then you need to figure out how to like take that and project it into the final thing that the robot can actually handle" }, { "start": 758.1, "end": 772.1, "text": " You can project this in some sort of like embedding space and that works sort of well, but you actually lose some context on the overall query, so I guess the way that we do it is a little bit more well founded, so to speak" }, { "start": 772.1, "end": 781.1, "text": " But the other really nice benefit of this is it gives us scores for everything, which is really interpretable, it lets us like see the trade off between these two options" }, { "start": 781.1, "end": 791.1, "text": " So in your example you said, you know, what if I just said here are your options, pick one, and the language model would probably pick one, but now you only know that this is its favorite option" }, { "start": 791.1, "end": 802.1, "text": " You don't know the probability that it would have assigned otherwise, maybe it's actually okay with the next three options, so this gives us this like interpretable score that we can then combine with the value functions" }, { "start": 802.1, "end": 831.1, "text": " Yeah, there are some caveats to this I feel, in that for example we know that by definition longer outputs are less likely, right? So I guess it's not too much of a problem for you because most of yours are like three or four words, but have you noticed any of kind of these effects of just how these probabilities are constructed as kind of multiplications of softmax outputs? Like that's got to
bring its own bias into the picture" }, { "start": 831.1, "end": 839.1, "text": " Have you observed any of that, have you had problems with any of that, or was it generally okay?" }, { "start": 839.1, "end": 857.1, "text": " Yeah, it's definitely a little bit of an issue. I mean I think in general it's also very particular to these... like if you were to misspell a word in there, or like have an 'a' versus an 'an', it's not particularly robust to those in the options. It is in the query, like to what the user might say," }, { "start": 857.1, "end": 886.1, "text": " but not when you're scoring these options, because if one word is off then this like multiplication of each word just kind of tanks the entire score, so we did have to be somewhat careful with that. One way to kind of like get around this a little bit is if you have some like end of statement token, and if it adds extra words on the end, then if there's like more to come, that end of statement token will basically kind of normalize the rest of it, like you can't end a word" }, { "start": 886.1, "end": 889.1, "text": " or a statement early" }, { "start": 889.1, "end": 902.1, "text": " Yeah, and the other thing that we did try to do is like potentially normalize them, so knowing that this query is longer, perhaps we need to upweight it or have some normalization on the language output," }, { "start": 902.1, "end": 923.1, "text": " but we found that it wasn't particularly consistent and there wasn't like just a constant effect across one or the other, and it depends on the way you like referred to the query, and so at the end of the day we just took the outputs as they were. So it was an issue, but it wasn't like a huge one" }, { "start": 923.1, "end": 944.1, "text": " I imagine that there's another factor here. For example if you say, you said before, pick up a fruit or please bring me a fruit or something of this, you're essentially relying on the ability of the large language model to sort of recognize that apple is a fruit and kind of interpret that in the same way and so on" }, { "start": 944.1, "end": 973.1, "text": " so the kind of closeness, as the language model estimates how close the things are, did you find this generally in agreement with how humans find how close the things are? And maybe, yeah, I'm just wondering about this notion of how close things in language are together. Also what happens if you for example have an apple and an orange in your scene, these two things would be quite close together, so even if you said, you know," }, { "start": 973.1, "end": 1000.1, "text": " please pick up an apple, the pick up an orange thing would conceivably score quite high in the language model, which might perturb your things. So I can kind of make out that you have an ideal environment right here, in that you probably picked objects that are distinct from each other, locations that are fairly distinct from each other, right, such that there's a nice semantic gap between the things" }, { "start": 1000.1, "end": 1030.1, "text": " what, like, do you think this is well applicable to a real world setting, or what kind of hurdles could there be with connecting language models and the set of actions in this way? So I guess the first question was about, do these scores kind of align with what you would expect, and that was actually one of the first things that I was looking at, was like how well do these scores sort of match up to what you think it's going to be
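To make the length bias discussed above concrete: a candidate's score is a product of per-token probabilities, so longer options are systematically penalized. A small runnable sketch of the raw score versus the mean-per-token normalization that, as just described, was tried but not kept:

```python
# Length bias in sequence scoring: a hedged, self-contained illustration.
import math

def sequence_logprob(token_logprobs):
    return sum(token_logprobs)                        # raw score: longer -> lower

def length_normalized_logprob(token_logprobs):
    return sum(token_logprobs) / len(token_logprobs)  # mean per-token log-prob

short = [-0.5, -0.5]                    # 2-token option
long_ = [-0.5, -0.5, -0.5, -0.5]        # 4-token option, same per-token quality
assert sequence_logprob(short) > sequence_logprob(long_)          # biased
assert math.isclose(length_normalized_logprob(short),
                    length_normalized_logprob(long_))             # debiased
```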
so yeah it turns out that like apples are" }, { "start": 1030.1, "end": 1059.1, "text": " apples and orange and banana are all going to score quite highly when you're asking for a fruit. If you ask for a snack, all the food options are going to score highly, similarly drink, soda, any category like that. It performs about as you would expect as a human, which is good. But then yeah, it comes to this problem of what if there's an apple and an orange, or what if there's an orange but not an apple, and that's where these value functions come in. This is actually like one of the key reasons why we have to do this value function grounding." }, { "start": 1060.1, "end": 1071.1, "text": " Because if you just asked a regular language model that doesn't know what's there, then how does it make that decision? Maybe it uses the wrong one, then your plan isn't really correct and our policies may not actually work." }, { "start": 1072.1, "end": 1087.1, "text": " But the value function tells you: if there is an apple in the scene and no orange, then you're going to see a high value function on the apple, because the pick apple command could work, versus the orange command is going to be quite low, and so that actually lets you sort of disambiguate this. So" }, { "start": 1087.1, "end": 1096.1, "text": " in figure B, it had a pick up the Red Bull: if you said bring me a drink and there's a Red Bull but no water, it's going to pick up the Red Bull, because that's actually what's there." }, { "start": 1097.1, "end": 1107.1, "text": " And if not, then the instruction itself is ambiguous, right? If you say pick up a drink and there's two drinks and both are affordable according to the value function, yeah." }, { "start": 1107.1, "end": 1122.1, "text": " Yeah, then we think like either is completely fine. I think it's also interesting because then the robot is making the trade off itself, dependent maybe on the value function. So for instance if you ask for a fruit and there's an orange and an apple, but it's much better at picking up apples," }, { "start": 1123.1, "end": 1127.1, "text": " Maybe it will pick up the apple, because the value function will just tip the scale." }, { "start": 1127.1, "end": 1145.1, "text": " So it will make some errors in that sense, but since this is interpretable and you can kind of look back and see why it decided for that, it can also inform us as to which skill we should train a little bit more, or which value functions are a little underfitted, and things like that. So it will make some sort of mistakes." }, { "start": 1146.1, "end": 1150.1, "text": " But maybe that's okay, maybe that's acceptable." }, { "start": 1150.1, "end": 1161.1, "text": " I think one like really nice feature of that too is it's not necessarily always like it's better at picking up oranges or apples, but you can see like these objects are in different locations, one may be better for the policy than the other." }, { "start": 1162.1, "end": 1168.1, "text": " So we're going to end up doing the one that's a little more robust and a little more likely to succeed, as long as it still fulfills the high level query." 
}, { "start": 1168.1, "end": 1189.1, "text": " Yeah I like the fact that you have success probability as sort of the ultimate score because I was I also thought one failure mode here is that some tasks are inherently harder than others right and so naturally your value function would be lower and therefore you can misinterpret just by the fact like well like this this this is me the procrastinator like this thing seems really hard and we'll do this other thing that I'm not sure how to do." }, { "start": 1189.1, "end": 1197.1, "text": " But it's really easy so it's almost it's almost too human how the robot would act in this way." }, { "start": 1198.1, "end": 1209.1, "text": " So yeah you have these what I like here as well is that you have to bank of value functions on one hand the language model on the other hand and the language model on the other hand is the one that's the most difficult to do." }, { "start": 1209.1, "end": 1227.1, "text": " So yeah you have these what I like here as well is that you have to bank of value functions on one hand the language model on the other hand and they are never if I understand correctly trained together right there never in fact the language model is probably just frozen." }, { "start": 1227.1, "end": 1239.1, "text": " So they're never trained together which means that you could conceivably just add a skill to the robot train its value function for it and just plug it in and and go." }, { "start": 1240.1, "end": 1248.1, "text": " Yeah we can scale this fairly easily so we can continue adding skills we can also change the underlying algorithm how we train the skills." }, { "start": 1248.1, "end": 1258.1, "text": " Or how we train the particular skill that we want to add if we if suddenly there is a really good script that allows to I don't know swipe the floor or something like that." }, { "start": 1259.1, "end": 1271.1, "text": " We can we can also add that as long as we have a value function for it and also at the same time if the language model becomes better we can also swap out the language model and get improvements through that." }, { "start": 1271.1, "end": 1284.1, "text": " I want to add that so our current value function is one way that we instantiate affordance but there are many other ways that we can instantiate affordance like for example we can directly do prediction." }, { "start": 1285.1, "end": 1295.1, "text": " We can also use classical motion planning like to calculate for example length of the trajectory is also or the probability of success if you do like sampling based motion planning." }, { "start": 1295.1, "end": 1302.1, "text": " So there are many ways that we can come to the affordance and the method is really flexible to plug in any type of affordance." }, { "start": 1304.1, "end": 1317.1, "text": " I guess a big topic in maybe maybe it's more the space of blockchains and things like this is agents that do an action for you but also optimize for example for cost or for resources or something like this." }, { "start": 1317.1, "end": 1339.1, "text": " This could directly flow into that where you can tell the robot you know do whatever fulfills my task but also costs very little and this could if this directly flows into affordance there might be a normalization issue but if this directly flows in you'd have you could tune the knobs on these on these functions fairly easily." 
}, { "start": 1339.1, "end": 1362.1, "text": " So this is the full algorithm I guess we haven't talked yet about how you extend this to multiple steps but it is as far as I can tell fairly easy in that you do this in sort of a stepwise fashion so first you ask your language model your value functions at the current state and the current camera position where what should be done." }, { "start": 1362.1, "end": 1389.1, "text": " Then you try to whatever should be done according to both scores combined you execute that and after you execute it you ask the same thing again but now the prompt changes and it's simply that you so here the prompt is essentially I would first and then first action is decided and once you go on the prompt now says I would first." }, { "start": 1389.1, "end": 1401.1, "text": " The prompt now says I would first and then whatever was decided on and then second and then it's simply the same thing with the next action did I get this approximately correct." }, { "start": 1403.1, "end": 1408.1, "text": " Do you pay any attention to whether or not the task was fulfilled successfully." }, { "start": 1408.1, "end": 1437.1, "text": " So right now we don't we assume it will successfully execute I think some things could happen like if it fails at a navigation task say it was trying to navigate to an apple and the and it doesn't get there then the value functions at that next state are going to be quite low so you're not going to be able to basically pick something up or whatever so maybe then you end up selecting navigate to the apple again or navigate to a table instead but we don't have any like explicit success." }, { "start": 1438.1, "end": 1451.1, "text": " Detection I think this is like one area that we're like pretty interested in going basically like finishing the job closing the loop entirely on when you try to do something did you succeed telling the language model and then having a language model adapt accordingly." }, { "start": 1451.1, "end": 1474.1, "text": " I want to show one video from from your website which in this case if I got this right it confused me I guess a little bit because this thing right here if you see it kind of looks around sees all the things right like looks and sees and then it kind of scores the actions." }, { "start": 1474.1, "end": 1482.1, "text": " And like this so pick apple I can't do that pick sponge okay." }, { "start": 1482.1, "end": 1511.1, "text": " Bring you a sponge no not go to trash can place the sponge place the sponge is good and that's the place the sponge kind of up ways to bring you a sponge or like what's going on right here because in my in my estimation the robot shouldn't even look around initially the robot should just have its camera position fixed and then it in first instance." }, { "start": 1512.1, "end": 1524.1, "text": " It should probably figure out like find a sponge or something like this and then it would move and then it would see consider these next actions like what is what is this video supposed to to show." }, { "start": 1524.1, "end": 1547.1, "text": " Yeah I think you're understanding is completely correct so this is more like a conceptual video where we wanted to kind of across that it can accomplish longer tasks but you're right that the way it would happen is that it would look at the current image then it would decide that at first needs to find a sponge or maybe pick up the sponge if the sponge is already available then append that to prompt and continue." 
}, { "start": 1547.1, "end": 1555.1, "text": " So we just wanted to make it short so that you can still get to get that idea across but only by having a single image." }, { "start": 1556.1, "end": 1562.1, "text": " Yeah so it might be a little bit confusing that doesn't I think depict fully how the method works." }, { "start": 1562.1, "end": 1582.1, "text": " Yeah I think we just got excited by the visual of a language model sort of seeing nothing and then waking up and saying oh I'm a robot. Okay here's my history of what I've done before. Okay depending on that what I thought I made a lot of sense doesn't make any sense anymore so it's more like excitement than anything else." }, { "start": 1582.1, "end": 1592.1, "text": " It does look pretty sweet like it looks pretty cool especially like the effects on like the zoom seeing what's around." }, { "start": 1592.1, "end": 1613.1, "text": " You use by the way we've not shown this yet you use these everyday robots constructions which look semi creepy but also quite cool especially when they pick up stuff they like hold it behind their back like it's like a mixture of a butler and someone who just has a knife and wants to stab you." }, { "start": 1613.1, "end": 1628.1, "text": " But pretty sweet and it works surprisingly well. So maybe we can talk about the results of a little bit next because my next question would sort of be okay how well does it actually work in the environments where you tested on." }, { "start": 1628.1, "end": 1643.1, "text": " Do you maybe want to comment a little bit on what was what were the general results and then you have some ablations." }, { "start": 1643.1, "end": 1661.1, "text": " If a do you want to take this or do you. Yeah I think I can take this. So we tested this on two environments. One is the real office kitchen and another one is a kind of a mock office kitchen showing in figure five I think and we tested on a hundred and one in standard." }, { "start": 1661.1, "end": 1675.1, "text": " From like six categories. Yeah so here here are the test environment that the A is a real kitchen and B is a mock kitchen. There are 15 objects that we focus on and also five semantic semantic locations." }, { "start": 1675.1, "end": 1691.1, "text": " Like these locations are semantically meaningful like table trash can close counter far counter and a robot operator location where we define like bring back to you. That's where it is supposed to bring it back to." }, { "start": 1691.1, "end": 1707.1, "text": " We test on a hundred and one instructions from six or seven categories if you scroll down a little bit. It's mainly to test different capabilities of the robot for example can it understand synonyms like non synonyms or verb synonyms like what does that mean." }, { "start": 1707.1, "end": 1723.1, "text": " Throw away means bring something to the trash can like recycle means bring something to the trash can and also structure language which is just like verb non compositions. And also we test embodiment which means we test if the robot is not in the trash can." }, { "start": 1723.1, "end": 1740.1, "text": " And also we test embodiment which means we test if the robot understands what its current embodiment is. For example if I already pick up something I shouldn't try to find it again because I already have it. Also we test on crowdsourced basically it's unstructured human queries from like coworkers for example and long source." 
}, { "start": 1740.1, "end": 1762.1, "text": " And also we test embodiment which means we test if the robot understands what its current embodiment is. For example if I already pick up something I shouldn't try to find it again because I already have it. Also we test on crowdsourced basically it's unstructured human queries from like coworkers for example and long horizon tasks which are some of the really really challenging instructions such as I spilled my coke on the table how would you throw it away and then bring me something to clean." }, { "start": 1762.1, "end": 1780.1, "text": " So that's a really challenging task the robot need to understand what does spill mean like what tools you can use to clean up a spill. So these are the instructions that we tested and overall I think we achieved 71 percent planning success rate and 66 percent execution success rate." }, { "start": 1781.1, "end": 1790.1, "text": " And it's the hardest question is do the longer horizon tasks. So I think we only have about like 30 or 40 percent success rate." }, { "start": 1790.1, "end": 1800.1, "text": " And yeah we are working on improving those like other success rate on those other questions. Ryan if you have anything to add." }, { "start": 1801.1, "end": 1808.1, "text": " Yeah the only thing I was going to say is that the long horizon ones are particularly challenging both from like reasoning and language side." }, { "start": 1808.1, "end": 1820.1, "text": " But a lot of the issue comes with if you have like a 90 percent success rate manipulation policy which is still quite high. Every time you do this you reduce the probability that your overall plan is going to succeed." }, { "start": 1821.1, "end": 1829.1, "text": " And so that starts to like both it's a big challenge and we want to get our manipulation policies better and better and each of our low level skills better and better." }, { "start": 1830.1, "end": 1835.1, "text": " But also having some sort of like closed loop that so the language model knows to retry would be really helpful here." }, { "start": 1835.1, "end": 1846.1, "text": " And you I saw I saw in the results that it was pretty interesting in that you did ablate a lot of these things." }, { "start": 1847.1, "end": 1853.1, "text": " For example you did ablate what for example if we don't have the language model and these are the overall success rate." }, { "start": 1853.1, "end": 1867.1, "text": " You ablate what if we don't have the language model and what if we don't have the scoring model and generally they were worse much worse in both cases which was pretty cool to see and not always the same." }, { "start": 1867.1, "end": 1884.1, "text": " Except in this one it is one thing to understand this correctly if you drop the generative model on a generative uses it uses a large language on a projects the nearest to the nearest skill via an embedding." }, { "start": 1885.1, "end": 1893.1, "text": " That is actually better than your original policy. Is that just noise or is there something behind it if you use this verbs category." }, { "start": 1893.1, "end": 1899.1, "text": " My guess is I think it's more noise than than anything else." }, { "start": 1900.1, "end": 1905.1, "text": " But there were definitely times where so we see it like really fail in certain circumstances." }, { "start": 1906.1, "end": 1910.1, "text": " So embodiment because there's no value function there to tell it that it can't do something." 
}, { "start": 1911.1, "end": 1916.1, "text": " There's a real issue for it. And so there are a lot of failures for anything that didn't have a value function there." }, { "start": 1916.1, "end": 1923.1, "text": " I think we saw some like some pretty interesting differences between the no value function." }, { "start": 1924.1, "end": 1929.1, "text": " So this is the scoring model only without a value function and the generative model." }, { "start": 1930.1, "end": 1934.1, "text": " And so some of the issues with the general model came around with like nouns for instance." }, { "start": 1935.1, "end": 1937.1, "text": " And this is because when you do this projection." }, { "start": 1937.1, "end": 1947.1, "text": " So the say I said I just worked out I want a snack it then projects to or then these the plan will say bring me a snack." }, { "start": 1948.1, "end": 1950.1, "text": " But really what I want is a snack to help me recover from my workout." }, { "start": 1951.1, "end": 1957.1, "text": " And so that like a little bit of information is enough to say it's probably not like potato chips but maybe something like healthier." }, { "start": 1958.1, "end": 1960.1, "text": " Similarly like a drink there would lose a lot of its information." }, { "start": 1960.1, "end": 1966.1, "text": " And so on the noun ones we saw that it ended up like losing this information and that cost a lot of the success rate." }, { "start": 1967.1, "end": 1972.1, "text": " Whereas the scoring model did OK across the board but maybe not as like smoothly in the verb category." }, { "start": 1974.1, "end": 1983.1, "text": " Another really fascinating thing here is at least in my opinion just the scale of data collection in this project." }, { "start": 1983.1, "end": 1996.1, "text": " I have I have made a few notes and at one point it says something like you use a lot of human labelers for for example the success rate of these little policies." }, { "start": 1997.1, "end": 2006.1, "text": " So even when when you train these little or small unit let's call them unit policies you use humans to see whether they're correct or not." }, { "start": 2006.1, "end": 2017.1, "text": " And you use three human raters per execution and you get it you get give it one single sparse reward if two out of three agree." }, { "start": 2018.1, "end": 2021.1, "text": " So like this scale seems immense." }, { "start": 2022.1, "end": 2034.1, "text": " Is this really like how did you determine this was the best way to spend the human time and not maybe together more noisy but three times more like." }, { "start": 2034.1, "end": 2037.1, "text": " Noisy but three times more labels or something like this." }, { "start": 2038.1, "end": 2039.1, "text": " How did this come to be." }, { "start": 2040.1, "end": 2041.1, "text": " Yeah this is a good question." }, { "start": 2042.1, "end": 2043.1, "text": " I think we are still figuring this out." }, { "start": 2044.1, "end": 2051.1, "text": " A lot of these questions and how to spend how to spend human time in the most efficient way that that helps the policies the most." }, { "start": 2052.1, "end": 2056.1, "text": " And I think there is a question of crowd labeling as you as you mentioned." }, { "start": 2056.1, "end": 2063.1, "text": " So how much noise can you tolerate in the reward function compared to like the throughput of that." 
}, { "start": 2064.1, "end": 2073.1, "text": " Also how much time you should spend collecting human demonstrations versus how much time humans maybe should be just supervising robots collecting data autonomously." }, { "start": 2074.1, "end": 2080.1, "text": " How much should we be spending time developing assets and policies in simulation and transferring them to the real world." }, { "start": 2080.1, "end": 2085.1, "text": " So we are still kind of trying to find the trade-offs between all of these." }, { "start": 2086.1, "end": 2089.1, "text": " I don't think we have any any very good answers right now." }, { "start": 2090.1, "end": 2103.1, "text": " As for labeling itself we noticed in previous projects that the noise on the on the reward signal is going to be really can have a big influence on performance." }, { "start": 2103.1, "end": 2112.1, "text": " So that's why we decided to have three labor laborers to to agree on the two of which we have to agree to to market the reward." }, { "start": 2113.1, "end": 2118.1, "text": " And we also had additional questions such as was the behavior undesirable or unsafe." }, { "start": 2119.1, "end": 2120.1, "text": " And these are sometimes quite ambiguous." }, { "start": 2121.1, "end": 2127.1, "text": " So it's actually it helps quite a lot to have multiple people look at the video and and tell us what they think." }, { "start": 2127.1, "end": 2131.1, "text": " Did you always have these additional things in." }, { "start": 2132.1, "end": 2140.1, "text": " So you have as you say and also wrote this down somewhere a unsafe undesirable or infeasible." }, { "start": 2141.1, "end": 2154.1, "text": " Did you always have this in or was this kind of a development that happened over time that you realized oh crap we're asking people how likely is the robot to pick up an apple but there is no apple in sight and things like this." }, { "start": 2154.1, "end": 2156.1, "text": " Yeah so some of them we added." }, { "start": 2157.1, "end": 2160.1, "text": " So initially we knew that safety is a is a big problem." }, { "start": 2161.1, "end": 2169.1, "text": " So we started with with that question and we noticed that sometimes the robot would do something that isn't necessarily unsafe but we still don't want it to do it." }, { "start": 2170.1, "end": 2176.1, "text": " For instance it will touch the object that it wasn't supposed to touch or it will just poke something and it will fall off the table." }, { "start": 2176.1, "end": 2185.1, "text": " So then then we added the undesirable which is like has a slightly different definition and we can also optimize for it differently in the reward function." }, { "start": 2186.1, "end": 2202.1, "text": " And then regarding the the last one the infeasibility this is something that we noticed with reinforcement learning algorithms that if you add a lot of data where the task wasn't feasible even though the data is technically correct." }, { "start": 2202.1, "end": 2210.1, "text": " The robot didn't accomplish the task it got reward zero but it seems to be influencing the real algorithms in a bad way." }, { "start": 2211.1, "end": 2219.1, "text": " So we added this in addition to prevent that and potentially filter for this data or see how we can change the real algorithms to handle that kind of data better." }, { "start": 2220.1, "end": 2224.1, "text": " And why do you only give a single reward." 
}, { "start": 2224.1, "end": 2234.1, "text": " I mean presumably a human watching a video like this could be you know every couple of frames could be like yeah good job robot yeah that's the right way yeah oh no don't do that." }, { "start": 2235.1, "end": 2243.1, "text": " Like essentially like Peter Pan or like you know warmer warmer warmer colder colder which would give sort of a much more dense label space." }, { "start": 2243.1, "end": 2257.1, "text": " Was this is like a technical limitation or did you also consciously choose to say no we got it's one single reward and that's only it's one when you fulfill the task and zero everywhere else." }, { "start": 2258.1, "end": 2262.1, "text": " Yeah so there's I think a few reasons for this first I think the ambiguity that comes with it." }, { "start": 2263.1, "end": 2268.1, "text": " You know it's already sometimes difficult to decide whether the task was accomplished correctly or whether it was undesirable or not." }, { "start": 2268.1, "end": 2277.1, "text": " If in addition to this you have to add this continuous signal whether the robot is going in the right direction I think it can be fairly ambiguous depending on what the robot is doing." }, { "start": 2278.1, "end": 2289.1, "text": " Secondly we made a decision some time ago that optimizing for sparse reward tasks would be just more scalable for the future." }, { "start": 2289.1, "end": 2305.1, "text": " There are some tasks where it's quite difficult to say whether the robot is actually going in the in the right direction and sometimes that accomplishes a task in a surprising way and we don't necessarily want to eliminate that and introduce human bias of like well I think it should go that way." }, { "start": 2306.1, "end": 2312.1, "text": " So our real algorithm is that we've been developing have also been optimized for the sparse reward setting." }, { "start": 2312.1, "end": 2319.1, "text": " So that was kind of another factor that we that we thought about when when considering the reward function." }, { "start": 2320.1, "end": 2335.1, "text": " So speaking about doing it like humans there's a yet another set of data collection in this project and that is that not only do you collect the labels but you also do quite a considerable amount of behavior cloning." }, { "start": 2335.1, "end": 2345.1, "text": " From essentially learning from demonstrations from humans with another set of data gathered from you call it teleoperated teleoperator sessions." }, { "start": 2346.1, "end": 2362.1, "text": " How can we how can we imagine such a teleoperator session like how many of these kitchens and robots do you have and how long does this take to gather a data set that you could conceivably use to collect data from humans." }, { "start": 2362.1, "end": 2366.1, "text": " Gather a data set that you could conceivably do behavior cloning from." }, { "start": 2367.1, "end": 2377.1, "text": " Yeah so I think we specified in the paper that we gathered at that point around 70,000 demonstrations for all these different tasks." }, { "start": 2378.1, "end": 2385.1, "text": " This is across 11 robots I believe we built a little we built little stations for the robots like the stations that you can see in the picture here." }, { "start": 2385.1, "end": 2391.1, "text": " Where the robots can can practice these things and people can demonstrate how to how to how to do things." 
}, { "start": 2392.1, "end": 2403.1, "text": " I think we are still kind of trying to see how much of this if we if we filter the data set for instance how much can we filter it and still get really high result." }, { "start": 2404.1, "end": 2407.1, "text": " So I think we we don't have very good answers to that yet." }, { "start": 2407.1, "end": 2416.1, "text": " Yeah but this is something we're looking into kind of the trade-offs between how much demonstration how many demonstrations you're collecting how much autonomous data and so on." }, { "start": 2417.1, "end": 2435.1, "text": " Where is this just because this is at Google which is a company and sure there's like a cash cow that generates infinite money but there's got to be some kind of constraint on you just or how do you how do you how does this work maybe." }, { "start": 2435.1, "end": 2454.1, "text": " What robotics at Google what is your mission there and how do you pitch such a thing to to management like yeah essentially we want to collect 70,000 sessions of tele operated things every time a human presumably not a random human because they would just crash the robot out of spite." }, { "start": 2454.1, "end": 2464.1, "text": " But like a trained trusted human needs to sit down and spend their time and there's robots are quite slow as of now." }, { "start": 2465.1, "end": 2470.1, "text": " There's got to be a considerable budget behind all of this data collection and labeling and so on." }, { "start": 2471.1, "end": 2475.1, "text": " How do you do you have to make a case for that or are you relatively free in doing this." }, { "start": 2476.1, "end": 2482.1, "text": " How does how does your work in the same in the business perspective look like." }, { "start": 2482.1, "end": 2495.1, "text": " Yeah I think in any company you kind of have to make a case or even in in academia you have to make a case for your project why you think this is how the money should be spent and where the resources should go." }, { "start": 2496.1, "end": 2506.1, "text": " So usually the way we we kind of justify it as by showing kind of step by step results and showing if we extrapolate this where this is going to go." }, { "start": 2506.1, "end": 2516.1, "text": " So we we we've done some projects previously where we showed reinforcement learning at scale with six robots or behavior cloning at scale with just two or three robots." }, { "start": 2517.1, "end": 2528.1, "text": " And then we start seeing that with the amount of data that we collected there we already can see some interesting results and now if we want to get these robots to do many more things we need more robots we need more data." }, { "start": 2528.1, "end": 2540.1, "text": " And this is kind of one big bet that we that we have in robotics at Google is that this large scale machine learning could be a way to really help robotics." }, { "start": 2541.1, "end": 2547.1, "text": " So we want to we want to be able to be risk some of those questions for the for the community right." }, { "start": 2548.1, "end": 2552.1, "text": " Like if we can actually buy a lot of robots and provide a lot of demonstrations how does it scale." }, { "start": 2553.1, "end": 2554.1, "text": " How does it work." }, { "start": 2554.1, "end": 2565.1, "text": " I think one of the sides or one of the figures in the appendix actually has somewhat like the way that we built up these skills one by one it's maybe I don't know what page it's on but it's a little higher than that." 
}, { "start": 2566.1, "end": 2580.1, "text": " Yeah this one sort of shows like how these were built up over time and and how more one more and more skills were added more and more data was collected each time seeing signs of life for the algorithms and performance and improving upon that." }, { "start": 2580.1, "end": 2589.1, "text": " And you can see that from time to time there's a new skill being added so that kind of goes from zero up in the meantime there's also the underlying code is changing." }, { "start": 2590.1, "end": 2592.1, "text": " So it's kind of like improvements over time." }, { "start": 2595.1, "end": 2599.1, "text": " So this goes it goes up and to the right which is what we all love." }, { "start": 2599.1, "end": 2612.1, "text": " And was there was there major downturns in this project like times where you know things didn't seem to work out or you didn't exactly know what the problem was things like this." }, { "start": 2612.1, "end": 2617.1, "text": " Could you get us a bit behind the scenes into when when things go wrong." }, { "start": 2624.1, "end": 2625.1, "text": " No problem." }, { "start": 2625.1, "end": 2628.1, "text": " There's quite a lot I'm just trying to think which one to tell you." }, { "start": 2630.1, "end": 2652.1, "text": " There's quite a lot also from previous projects but I think one thing that was quite surprising to me personally and I think we are still kind of working on that is that if you spend in if you classify approaches into let's say imitation learning and reinforcement learning." }, { "start": 2652.1, "end": 2657.1, "text": " If you spend enough time and data on either of them you can get them to work." }, { "start": 2658.1, "end": 2677.1, "text": " So we some of the results that you see here most of them are from behavioral calling but we can achieve very comparable results with reinforcement learning either by transferring policies from simulation and then continue collecting with that policy and kind of fine tuning it to a high performance." }, { "start": 2677.1, "end": 2681.1, "text": " Or by just bootstrapping from real data and improving upon that." }, { "start": 2681.1, "end": 2688.1, "text": " But what is quite surprising is that combining these these two have have has been quite tricky." }, { "start": 2688.1, "end": 2704.1, "text": " So kind of having a single algorithm that can digest all of that data that can digest all of the demonstrations as well as the autonomous data that was collected data that we collect in simulation and so on and have it have all the properties that fit into the data." }, { "start": 2704.1, "end": 2709.1, "text": " So it performs at least as good as behavioral cloning but it can also improve autonomously and so on." }, { "start": 2709.1, "end": 2712.1, "text": " This has been this has been quite surprising and tricky." }, { "start": 2718.1, "end": 2730.1, "text": " I want to maybe have a bit of an or make a bit of an outlook right here because it seems we have a pretty cool way to go from skills that are described by language." }, { "start": 2730.1, "end": 2733.1, "text": " But you have to define them." }, { "start": 2733.1, "end": 2735.1, "text": " Let's just scroll to one of them." }, { "start": 2735.1, "end": 2737.1, "text": " You have to define them ahead of time." }, { "start": 2737.1, "end": 2738.1, "text": " Right." }, { "start": 2737.1, "end": 2742.1, "text": " You have to define pick up the Coke can bring it to you find the Coke can and so on." 
}, { "start": 2742.1, "end": 2747.1, "text": " You have to just you have to design these even though they're described by language." }, { "start": 2747.1, "end": 2749.1, "text": " They're pretty fixed set." }, { "start": 2749.1, "end": 2757.1, "text": " Now the first thing that maybe one can think about is how to extend that set and not necessarily extend the data." }, { "start": 2757.1, "end": 2760.1, "text": " Just linearly." }, { "start": 2760.1, "end": 2764.1, "text": " But I'm thinking of something when I say please clean up the table." }, { "start": 2764.1, "end": 2767.1, "text": " You might not know what's on the table." }, { "start": 2767.1, "end": 2772.1, "text": " So we need this kind of a concept of like almost like a variable or an unknown." }, { "start": 2772.1, "end": 2781.1, "text": " You know like so the plan could be go to the table and then kind of decide what to do next." }, { "start": 2781.1, "end": 2792.1, "text": " So the language model could get even or has to get a feedback almost from either the value functions or from the picture itself." }, { "start": 2792.1, "end": 2805.1, "text": " Is that anything that's on your your radar sort of what if I don't what if I have to adjust my plan on the fly to the state that I'm going to encounter." }, { "start": 2805.1, "end": 2811.1, "text": " How could this model be extended to to handle that." }, { "start": 2811.1, "end": 2814.1, "text": " Let's say all the actions are in your action space." }, { "start": 2814.1, "end": 2819.1, "text": " But you just don't know at the beginning which ones you're going to take." }, { "start": 2819.1, "end": 2829.1, "text": " Yeah I guess right now we kind of like count on the value functions to sort of like collapse whatever your plan is into the thing that is actually possible in the world." }, { "start": 2829.1, "end": 2850.1, "text": " I think like one of the most I guess straightforward ways to do it though maybe not straightforward in practice is to use things like visual transformers or like structured scene representations that actually tell the language model what's possible so that they can start like reasoning over it earlier on." }, { "start": 2850.1, "end": 2861.1, "text": " The other thing is to add in something like these success rates success detectors that say OK you tried to do this and it wasn't possible. So maybe you tried to find an apple that was impossible." }, { "start": 2861.1, "end": 2865.1, "text": " Perhaps the next thing to do is try to find an orange that may actually be in the scene." }, { "start": 2865.1, "end": 2872.1, "text": " So there's some like combination of value functions giving it feedback about the scene." }, { "start": 2872.1, "end": 2881.1, "text": " But right now we don't have anything that like has the language model really really reasoning over the steps because the value functions takes it take care of that like interaction." }, { "start": 2881.1, "end": 2889.1, "text": " But one could fine tune it on some data that allows it to do that is probably the most straightforward way to do it." }, { "start": 2889.1, "end": 2894.1, "text": " But whether that works is open question." }, { "start": 2894.1, "end": 2910.1, "text": " I guess the other thing is and this would really also close the loop or close one of the loops is if I imagine that I also had a model that could take any visual input and then kind of describe that describe what's happening in the visual input." 
}, { "start": 2910.1, "end": 2922.1, "text": " So I'm going to give it a video of pick up the of something picking up the Coke can and the thing would come up with like a label for it like this video shows pick up a Coke can." }, { "start": 2922.1, "end": 2926.1, "text": " Then I'd have almost limitless possibilities." }, { "start": 2926.1, "end": 2938.1, "text": " I could just let a robot move at random essentially let the language model or let this model describe what it's doing then kind of feed that to the language model and so on." }, { "start": 2938.1, "end": 2950.1, "text": " So instead of you designing the actions that it should train I could just let it do stuff and then have a model describe that stuff and then use that." }, { "start": 2950.1, "end": 2962.1, "text": " Is is that a plan or is there like a major hurdle on the way there because that would kind of result in a almost autonomously learning system." }, { "start": 2962.1, "end": 2969.1, "text": " If you give it a good language model the language model could even also prompted what to try next right." }, { "start": 2969.1, "end": 2972.1, "text": " But the language model could be like OK what should I learn next." }, { "start": 2972.1, "end": 2983.1, "text": " I should probably learn to pick up an orange and then you just ran them around until the thing the description model says this looks like picking up an orange." }, { "start": 2983.1, "end": 2991.1, "text": " I guess I can say something first and then I will ask like Carol because he has previously worked current Brian worked a little bit on like learning from play data." }, { "start": 2991.1, "end": 2994.1, "text": " So what you describe kind of similar to that." }, { "start": 2994.1, "end": 3004.1, "text": " What I want to mention is that we find language is a great kind of state obstruction because people invent language because they obstruct some states right." }, { "start": 3004.1, "end": 3007.1, "text": " Like every every every word every sentence is meaningful." }, { "start": 3007.1, "end": 3015.1, "text": " So there are some work in language showing that using language obstruction can improve exploration." }, { "start": 3015.1, "end": 3024.1, "text": " For example you can use that to guide your exploration and summarize current states. So that's one potential direction that we can go." }, { "start": 3024.1, "end": 3032.1, "text": " Yeah I think there is kind of multiple ways you can see pushing this to an extreme." }, { "start": 3032.1, "end": 3042.1, "text": " I think like one small step in the direction would be rather than having these predefined skills label everything in hindsight as I think you're describing as well." }, { "start": 3042.1, "end": 3046.1, "text": " And and train policies based on the hindsight labels." }, { "start": 3046.1, "end": 3051.1, "text": " So it's not just pick up an apple but you know kind of however the person that looked at that video described it." }, { "start": 3051.1, "end": 3054.1, "text": " That's the skill that the robot was performing." }, { "start": 3054.1, "end": 3060.1, "text": " And then you maybe don't have to constrain the language model to pick across the skills that you train." }, { "start": 3060.1, "end": 3064.1, "text": " But maybe you can just take the generative output and see how that works." }, { "start": 3064.1, "end": 3078.1, "text": " I think there is also a potential potential research to be done in how much can language actually take from the robotics problem and how much can it help solving it." 
}, { "start": 3078.1, "end": 3086.1, "text": " So right now we are operating at a certain level of abstraction like you command things like pick up the coke can and then the language model can operate on that." }, { "start": 3086.1, "end": 3093.1, "text": " But you can also imagine operating on much lower level which is just like you know move this direction or that direction or something like that." }, { "start": 3093.1, "end": 3096.1, "text": " And the language model commands all of that." }, { "start": 3096.1, "end": 3100.1, "text": " And you kind of you can choose where in that abstraction you want to be." }, { "start": 3100.1, "end": 3107.1, "text": " And I think it's quite interesting that we at least can contrive things like this because of how good language models are today." }, { "start": 3107.1, "end": 3115.1, "text": " Yeah and I think I guess to that there's also works on using language basically to predict rewards like over states." }, { "start": 3115.1, "end": 3119.1, "text": " And so that's like one way to kind of like hook it all together." }, { "start": 3119.1, "end": 3121.1, "text": " We have this like general framework." }, { "start": 3121.1, "end": 3138.1, "text": " What's the biggest hurdle like what's the biggest let's say unsolved problem to push push these sort of everyday robots not the company but like the the expression the robots that help us doing our tasks." }, { "start": 3138.1, "end": 3145.1, "text": " What where's the like the biggest roadblock in getting these to a point where they could actually be usable." }, { "start": 3145.1, "end": 3151.1, "text": " I think right now given kind of how much time we spend on different parts of the system." }, { "start": 3151.1, "end": 3153.1, "text": " It's the skills themselves." }, { "start": 3153.1, "end": 3157.1, "text": " The ball neck is still the robot actually doing the thing that you ask it to do." }, { "start": 3157.1, "end": 3169.1, "text": " Even though these skills are simple to get them to the place where they generalize to any environment can kind of pick up any object even the object that wasn't trained on and do these tasks." }, { "start": 3169.1, "end": 3177.1, "text": " And with large diversity of objects environments and so on to very high performance this is still really really hard." }, { "start": 3177.1, "end": 3189.1, "text": " So I think if if we get much better skills underlying skills then well would have made a big step towards this actually being very useful." }, { "start": 3189.1, "end": 3200.1, "text": " I was going to say the along with those skills like the way that we use the value functions is that as the skill improves so does the like value functions estimate of what it can do." }, { "start": 3200.1, "end": 3208.1, "text": " So it's kind of nice where like position both to use these skills but it also improve the overall algorithm by having a better estimate of a success probability." }, { "start": 3208.1, "end": 3215.1, "text": " So I think we're like I think sake and itself is at least set up in a good way to sort of like scale along with as this bottleneck is relieved." }, { "start": 3215.1, "end": 3220.1, "text": " Last question from from my side what do you think of the Tesla bought." }, { "start": 3220.1, "end": 3230.1, "text": " And when I give you the short pro in in in briefly in that it is the ultimate platform because the world is designed for designed for humans right." 
}, { "start": 3230.1, "end": 3239.1, "text": " So if you have the humanoid robot conceivably it could do anything the human can at least mechanically." }, { "start": 3239.1, "end": 3251.1, "text": " Do you does this sound good to you or is there like major skepticism." }, { "start": 3251.1, "end": 3256.1, "text": " No comments." }, { "start": 3256.1, "end": 3259.1, "text": " You can you can wager wager bets right now." }, { "start": 3259.1, "end": 3271.1, "text": " I think one one thing that is maybe that I'm excited to see is I think Tesla has the ability to scale things up quite well." }, { "start": 3271.1, "end": 3275.1, "text": " They seem to be a really good hardware company." }, { "start": 3275.1, "end": 3280.1, "text": " And so it would be interesting to see how some of the problems change." }, { "start": 3280.1, "end": 3289.1, "text": " This is also things that we are researching as well how problems change and how solutions change when you have many many of these robots." }, { "start": 3289.1, "end": 3295.1, "text": " So I would be I would be excited to see they have any any good insights there." }, { "start": 3295.1, "end": 3303.1, "text": " Is there last things that we maybe haven't touched on yet that you would like people to know here just for visuals." }, { "start": 3303.1, "end": 3309.1, "text": " I'm showing what some of the successful episodes at the end which are quite impressive like very multi." }, { "start": 3309.1, "end": 3311.1, "text": " So there's just one robot." }, { "start": 3311.1, "end": 3315.1, "text": " This is this is a collage but very multi-step things." }, { "start": 3315.1, "end": 3323.1, "text": " And I think that's just really impressive very long horizon planning things down to these individual actions." }, { "start": 3323.1, "end": 3325.1, "text": " Yeah that's that's pretty cool." }, { "start": 3325.1, "end": 3331.1, "text": " Anything any last thing you want to want to let people know how can they get started." }, { "start": 3331.1, "end": 3334.1, "text": " Where can they find out more information." }, { "start": 3334.1, "end": 3347.1, "text": " I just want to mention that we have the website on the website we have a couple of videos demo demonstrating how the robot works and how the inference process works along with the decision process." }, { "start": 3347.1, "end": 3351.1, "text": " All the scores we have calculated along with the robot execution." }, { "start": 3351.1, "end": 3359.1, "text": " So if there are anyone interested in like how our algorithm works check definitely check that out." }, { "start": 3359.1, "end": 3379.1, "text": " I think like I guess what I'm most excited about with it is like how interpretable it is that you can actually see how the decision is being reached by the robot that you can see that the language model likes these things and that the affordance model understands that these tasks make sense or do not make sense in a given world embodied environment." }, { "start": 3379.1, "end": 3387.1, "text": " I think it's like nice that it scales really well to adding in new tasks as we go." }, { "start": 3387.1, "end": 3390.1, "text": " And then I guess towards how people would use it I think to start." }, { "start": 3390.1, "end": 3393.1, "text": " Yeah I mean the paper and the website is a good place to go." }, { "start": 3393.1, "end": 3400.1, "text": " I think we're planning to open source a version of it on a more kind of toy environment in the coming months." 
}, { "start": 3400.1, "end": 3406.1, "text": " So hopefully that'll be like an exciting like easy way to sort of like get in the mix with both this and language models." }, { "start": 3406.1, "end": 3416.1, "text": " I think there's a lot of power in in leveraging language models and kind of giving them these like hands and eyes to execute real world tasks." }, { "start": 3416.1, "end": 3425.1, "text": " I also think you had a point earlier about basically like we use affordances but really it's just a value function." }, { "start": 3425.1, "end": 3428.1, "text": " It's this value function doesn't necessarily have to map to an affordance." }, { "start": 3428.1, "end": 3438.1, "text": " And I think that's a really powerful idea that we're basically taking all the knowledge in a language model and then hopefully applying it with a value function that isn't even necessarily normalized to can you do this or not." }, { "start": 3438.1, "end": 3443.1, "text": " It's sort of what's helpful what's possible for whatever the RL train policy is doing." }, { "start": 3443.1, "end": 3448.1, "text": " I think that's like a really I don't know open space." }, { "start": 3448.1, "end": 3455.1, "text": " Yeah I'm also quite excited about how language can kind of chip away a little bit from the robotics problem." }, { "start": 3455.1, "end": 3460.1, "text": " I think that's something that we haven't really thought about that much before." }, { "start": 3460.1, "end": 3468.1, "text": " And we see that we can handle much more much longer horizon commands abstract commands and so on while keeping the policies fairly simple." }, { "start": 3468.1, "end": 3474.1, "text": " So it's I think it's quite exciting to see how much further we can we can push that direction." }, { "start": 3474.1, "end": 3481.1, "text": " Yeah I think representations have always been such a challenge for especially like task representations are such a challenge for robotics." }, { "start": 3481.1, "end": 3491.1, "text": " And I think language has provided this like really nice interface to interact with the robot and then have the robot interact with the world." }, { "start": 3491.1, "end": 3492.1, "text": " Excellent." }, { "start": 3492.1, "end": 3496.1, "text": " Well Carl, Brian, Faye thank you very much for being here." }, { "start": 3496.1, "end": 3499.1, "text": " This was a lot of fun and I hope to see you again soon." }, { "start": 3499.1, "end": 3527.1, "text": " Thank you. Thank you for having us." } ]
qgUegkefocg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Fastformer: Additive Attention Can Be All You Need (Machine Learning Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "attention mechanism", "attention is all you need", "fastformer", "fast former", "nlp", "natural language processing", "linear attention", "linear transformer", "query key value", "additive attention", "elementwise product", "fast transformer", "faster transformer", "transformer memory", "attention quadratic memory", "fastformer explained" ]
#attention #transformer #fastformer Transformers have become the dominant model class in the last few years for large data, but their quadratic complexity in terms of sequence length has plagued them until now. Fastformer claims to be the fastest and most performant linear attention variant, able to consume long contexts at once. This is achieved by a combination of additive attention and elementwise products. While initial results look promising, I have my reservations... OUTLINE: 0:00 - Intro & Outline 2:15 - Fastformer description 5:20 - Baseline: Classic Attention 10:00 - Fastformer architecture 12:50 - Additive Attention 18:05 - Query-Key element-wise multiplication 21:35 - Redundant modules in Fastformer 25:00 - Problems with the architecture 27:30 - Is this even attention? 32:20 - Experimental Results 34:50 - Conclusion & Comments Paper: https://arxiv.org/abs/2108.09084 Abstract: Transformer is a powerful model for text understanding. However, it is inefficient due to its quadratic complexity to input sequence length. Although there are many methods on Transformer acceleration, they are still either inefficient on long sequences or not effective enough. In this paper, we propose Fastformer, which is an efficient Transformer model based on additive attention. In Fastformer, instead of modeling the pair-wise interactions between tokens, we first use additive attention mechanism to model global contexts, and then further transform each token representation based on its interaction with global context representations. In this way, Fastformer can achieve effective context modeling with linear complexity. Extensive experiments on five datasets show that Fastformer is much more efficient than many existing Transformer models and can meanwhile achieve comparable or even better long text modeling performance. Authors: Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there! Today we'll look at Fastformer: Additive Attention Can Be All You Need by Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. So this paper definitely wins out in the category of most innovative paper titles of the last few months, as apparently we've gone from Is All You Need to Can Be All You Need. So a big win on this front. As you might have guessed from this title, the paper is introducing a new kind of attention mechanism. If you don't know what an attention mechanism is, and you're in machine learning, you might want to find out. I have a video on Attention Is All You Need. So the new attention here is additive attention, which is supposed to be a much, much, much faster way of doing attention, thus the name Fastformer. This additive attention circumvents the quadratic bottleneck that we usually have in the attention mechanism. Instead of doing sort of multiplicative attention, they do what they call additive attention. Now, the naming, in my opinion, is a bit confusing, and the whole concept is a bit confusing. So on a high level, that's what they do: they design a new attention mechanism. My opinion of the paper is that it's kind of deceptively naming things to make it appear like it's an attention mechanism, where in reality, it seems to be just sort of a feed-forward-ish layer type of thing that they propose, maybe not even that. So you know, we'll go into that. Their promises are that, of course, by circumventing this quadratic bottleneck of attention, you can input much longer sequences into the context of a transformer, and you can do it also much faster for the same length of sequences, since everything is just additive and not multiplicative. We're gonna find that out. They claim they have a lot of experimental evidence. And yeah, if you like content like this, you know, don't hesitate to subscribe if you haven't done so already. So the abstract reads: transformers are very powerful. Okay. However, the attention mechanism is inefficient due to the quadratic complexity to input sequence length. They say although there are many methods on transformer acceleration, they are still either inefficient on long sequences or not effective enough. By effective, I guess, they mean that their performance suffers too much. So they say they propose Fastformer, an efficient transformer model based on additive attention. So instead of modeling the pairwise interactions between tokens, which is what attention does, we first use an additive attention mechanism to model global contexts and then further transform each token representation based on its interaction with the global context representations. Now, if this sounds confusing to you, it does so to me too. They go a little bit into more detail right here. They say they have this additive attention, which is linear complexity instead of quadratic as in usual transformers. So here is a bit more detail: we use additive attention to summarize the input attention query matrix into a global query vector. Then we model the interaction between the attention key and the global query vector via element wise product to learn the global context aware key matrix. We further summarize it into a global key vector via additive attention. Then we use element wise product to aggregate the global key and attention value, which are further processed by a linear transformation to compute the global context aware attention value. Finally, we add together the original attention query and the global context aware attention value to form the final output.
You know, still, after this paragraph, it doesn't make too much sense to me. So we'll go to the diagram in just one second. But here is essentially what they promise. Okay, they propose an additive attention based transformer named Fastformer. To our knowledge, Fastformer is the most efficient transformer architecture. So that's one: they propose the most efficient transformer architecture. Second, we propose to model the interaction between global context and token representations via element wise product, which can help fully model context information in an efficient way. Okay, so the element wise product seems to be the second component. So there's additive attention, there is element wise product. And then lastly, they say, you know, our experimental data sets validate our approach. All right, so here is the coveted diagram of the Fastformer. It's a little bit complicated. But I want to go back a little bit to the regular attention mechanism. I know I've done this a lot. But I think in this context, it is really worth discussing. So in a regular attention mechanism, what do you have? You have some sort of an input sequence. Each one of these things can be a vector, some sort of an embedding vector or something like this, but it's a sequence. Essentially, it's a set, but we think of it as a sequence of, let's say, tokens in natural language. And we want to transform the sequence of one layer into a sequence of equal length of the next layer. So if we stack many of these layers together, we sort of want to improve the representations of these tokens layer by layer, such that we can at the end of the transformer understand what each token means in the context of all other tokens. So if this is a sentence, my house is very green, then at the beginning, each word is just an isolated piece of data. At the end of these transformations, we want sort of all the tokens to be aware of all the other tokens in the input, and sort of capture their in-context meaning. Now, what we need to do is we need to transform one set of representations into the next one. The way we do this is by the attention mechanism. So the attention mechanism, essentially, from each of the tokens, it derives three different things. One is called a key. So the key is a vector for each token. And that vector describes kind of like what the content of this token is so far. Okay, so one vector is the key, which allows the token to advertise what it has to offer. The other one is the query, which is also derived from the same token, but I'm going to draw it up here. The query means: what does this token want to know about the other tokens in the sequence? So this can be different from its content. So as you see, the query and the key might be different. There are variants where they're the same, but usually you derive two different values from each token. And then what we do is we route by inner product. So for every single query, you aggregate across the entire input sequence, you aggregate by inner product, which means that this would get routed here by a lot, this one maybe too, these ones not so much, and so on. So you aggregate essentially the inner product, which for each query gives you a histogram across the sequence saying, okay, this information here is mildly relevant, this one is more relevant, this one is slightly relevant, these ones aren't relevant at all for me.
This histogram, you then normalize via a softmax operation. And that gives you, I mean, that gives you a real distribution over the input. So with the query and the key, you decide how you want to aggregate the information in the input sequence for one particular element in the output sequence. You do this for every element. So for every element, you get a distribution of how you want to aggregate. And then in the last step, every single item also emits what's called a value. And the value is yet another vector. And the value, I guess you don't even have to actually transform anything, you can just take the information itself of the token if you want. But essentially, the value is ultimately what you multiply together with this distribution. And then that becomes your next layer representation for this particular token. Right. So the whole query key attention mechanism is simply to decide how do I want to aggregate the different values of the input sequence for any given token in the next layer. All right. Okay, I hope this is clear. So the key advertises what the contents are, which is kind of like the value, the value is the actual contents. But the key is more like an addressable representation of the content. And the query emits what do I want to know about the others. So you match the queries of myself with the keys of the others. And that aggregates. Now, in that context, let's look at the Fastformer. So we said there are two elements: first of all, there is this additive attention. And that's what you can see kind of down here. So you see, there's the input, and the input gets transformed into three different things: into queries, keys and values. That is just like a regular attention mechanism. These are linear transformations that each token independently goes through. So this token independently produces this query, this key and this value. And with the same transformation, this token produces this query, this key, and this value. So there's no interaction, every token goes through the same transformation. Then you can see, instead of now considering the interactions between each of the queries and each of the keys, sorry, that should probably be up here, instead of considering this interaction, we don't do that. What we do first is we say, well, this really becomes quadratic if we consider the interaction between each query and each key. Therefore, let's simply construct one global query, okay, one global query. And then we consider the interaction of that global query with each of the keys, instead of doing everything with everything. So here you can see how the linearity, instead of the quadraticity, of this approach comes to be: instead of considering pairwise interactions, we simply construct a single query vector. By the way, this is all one head. So this is one head. Usually a transformer has multiple heads. So over here, you would have like head number two, and so on, head number three, head number four, but in a single head, we make one query vector. Yeah, and you immediately see what the shortcomings are here. Whereas previously, every token could sort of dynamically decide how it wants to aggregate information, and every token could do that, you know, sort of by itself, now it's only the sequence as a whole that gets to decide how it wants to aggregate information, because it needs to come up with a combined query vector.
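To make the baseline concrete before moving on, here is a minimal numpy sketch of the regular attention aggregation just described. All names and shapes are my own illustration, not code from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(X, Wq, Wk, Wv):
    # X: (n, d) token embeddings; Wq, Wk, Wv: (d, d) per-token linear maps
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n): every query against every key
    A = softmax(scores, axis=-1)             # one aggregation distribution per token
    return A @ V                             # weighted sum of the values, (n, d)
```

The (n, n) score matrix is exactly the quadratic bottleneck the paper wants to remove; the single global query replaces its n rows with one.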
So I'm going to guess this thing here might work quite well for tasks that have sort of a single-minded output, sort of topic classification or something like this, where, you know, usually the global information is necessary, whereas tasks that might be more, you know, nuanced and language relevant, like considering specific interactions between individual tokens, and so on, those might fall a lot short in this approach. Okay, but how does this single query vector come to be? Now, this single query vector is constructed purely, as you can see, from the queries of the individual token elements. Now there's this funny construction here, where you have, you can see, this is the query vector right here, and then it itself goes here and here, so it's used twice. Okay, so what we do is we construct this alpha value for each query vector, and then we multiply that alpha value by the query vector itself, and then we add, this is an addition here, we add it all together at the end. So essentially, this query vector here, the global one, is a weighted sum across all of the individual query vectors. Now the question is, you know, how do we decide on the weight? And that's where these alpha values come in. So let's see, here is the formula for the alpha values. So each query vector q_i will produce its own alpha_i. How is that computed? As you can see right here, this should be familiar to you. This is the softmax formula. It's also the formula for logistic regression, if you squint a little bit. So essentially, the alpha_i's are the result of a softmax operation across the queries. So you have query one, query two, query three, right? It's a softmax across not the queries themselves, but this quantity right here: the query multiplied by some sort of a transformation. And this now really looks like logistic regression. This w here is a vector that is learned, this is a learned parameter vector, right? I take the inner product with each of the queries, and that gives me like a number, right? And then what I do is I simply normalize this by all the numbers of all the queries. Okay, so every one of these gets multiplied by this w, which gives me one number, and then I simply normalize, I push it through the exponential function, then I normalize it. This is essentially a logistic regression with the w being the feature vector. Okay, now what does this mean? Okay, we construct the final query vector as an aggregate across all query vectors, with the weightings being dependent on like a softmax or a logistic regression with respect to this learned vector w, which is always the same, right, for every one of those queries. I can make sense of that if I think, okay, the w here is essentially, you know, in logistic regression, you classify, so the w vector is sort of the classification boundary of, you know, the one class versus the other class. So this here, I think, is essentially a little classifier that cares about one particular thing that is learned. So this can be some intermediate feature that is useful, that is learned via backpropagation in this w vector. And the weighting of this particular head in this particular layer is then according to that feature. So in here, there is somewhere a w vector, and that w vector in this particular layer for this particular head refers to some kind of useful feature, like, I don't know, like, is there a name of a country somewhere in the sentence? Okay.
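As a hedged sketch, the pooling just described would look something like this, reusing the softmax helper from the sketch above. The name additive_pool is mine, and I leave out any extra scaling the paper might apply inside the softmax:

```python
def additive_pool(M, w):
    # M: (n, d) vectors, here the queries; w: (d,) learned parameter vector
    alpha = softmax(M @ w)  # (n,): the logistic-regression-style weights
    return alpha @ M        # (d,): one global vector, a weighted sum of the rows
```

Usage would be q_global = additive_pool(Q, w_q). Note that w is fixed after training, so, as discussed, the selection criterion is static per head and layer; only the inputs change between samples.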
And that's what we use as a weight to aggregate the queries. So you can immediately see that if a, you know, a token's query sort of contains country information, this classifier would, you know, say, well, that particular query has a lot of the information that I particularly look for in this layer, therefore the inner product will be high, therefore the alpha will be high, therefore that particular query would be represented greatly in the global query vector. So the global query vector, essentially, you can think of it as: I select among all the query vectors the ones that I care about in this particular layer in this particular head. However, what you care about is static: it's statically learned, it's the same for every single sample. Okay. All right. So this is sort of a weighing by a particular feature. Now, once we have the global query vector right here, how do we let it interact with the key vectors? So usually what we do is we do an inner product of the query and the key, and then that defines sort of our aggregation distribution. However, since we only have a single query, you know, that will in fact not give us an n-length sequence as here, that will only give us a sequence of length one in the next layer. So we can't really do that. So what they do is they almost do an inner product, except they don't sum, right, they simply do element wise multiplications of the queries and the keys. Now element wise multiplication, if you think of it, kind of means: if both elements are small, the result is very small, and if both are high, the result is very high. So there's some nonlinear dynamics going on within the same dimension, right? There's no aggregation across dimensions. And yeah, so they do element wise multiplication right here in order to obtain these P vectors. Every P vector P_i is equal to the element wise multiplication of the i-th key vector with the global query vector. Okay, and the query vector itself is, of course, a weighted sum across all of the queries. So if I pull the k in, you can see that I still have, okay, alpha_j, I still have this quadratic thing here: I have n P vectors, and for each one, I have also n Q vectors, and I consider products of the form i j. So I still have the quadratic products in here. However, I don't have quadratic complexity. Why? Because I don't have a softmax in between aggregating the queries and aggregating the keys. And therefore, you know, the commutative, associative rule applies, and I can simply get away with first aggregating the query and then multiplying it as a whole by the keys. Now, of course, those are two linear operations in sequence, whereas in the normal attention mechanism, I have a linear operation, then a nonlinear one with the softmax, and then again a linear one. And arguably, the nonlinearities are what bring the whole power to deep learning. So, you know, here you can see how it really circumvents the quadratic bottleneck by simply saying, well, if everything's linear, then, you know, we can just add it all together. Yeah, that's the trick, essentially. Now, then you realize we're not done yet. Okay, what do we do with the P vectors?
Well, this seems familiar, right? Again, we do another one of these additive attentions. So they call this thing additive attention. You can see, from each P_i, we produce a beta value, exactly the same way as the alpha values, I suppose, at least, yes, you can see that right here, the beta values are exactly the same. For each P, we multiply it by a learned feature vector, which is w_k right here, and then we normalize by all of them, you know, after the exponential function, and then we aggregate the global key via, again, a weighted sum of all of these P vectors. So this is again additive attention, in order to have a global key vector. And now, exactly the same trick: we use the global key vector, element wise multiplied by the value vectors, which gives us these u vectors right here, and these apparently go through another linear transformation to give us the R vectors. You know, you can stack as many linear transformations as you want. And then we're still not done, right? We're still not done. So essentially, what we've done in the end is we take the values, which is the information we want to forward propagate, and for each value, we element wise multiply it with this k vector. And this k vector is a result of the keys and also this query vector, and that's a result of the queries. So essentially, there is no aggregation of information as there is in the regular transformer. I don't aggregate the values from the sequence in a weighted fashion, I simply leave each value as it is. You know, as I said, these are transformations that don't depend on the other sequence elements. So v1 purely depends on e1. And the only way that information from the other tokens can come into any token is via these aggregation methods right here, in the normalization constant, right, in the aggregation that happens via the normalization. You know, for example, the key n could be represented more in this global key, and then that's multiplied here to my vector one. So that's how other information comes into any particular token. And as I said, we're still not done. After we obtain these R vectors, we then add to them this thing right here: the query vectors again. Now why? I don't know, but we just do. So we simply add the query vectors to the R vectors that we have here, and that's going to be our final output (there's a compact code sketch of this whole flow at the end of this transcript). So this is stupidly complex, and I don't think for any particular reason. So there are multiple problems right here. For example, this transformation right here is a linear transformation. Okay, maybe it makes sense, but it seems like you just had a linear transformation here, and this whole sum here is sort of a linear aggregation. Ergo, yeah, okay, maybe you can justify that. But second of all, this connection right here, right? If this is not ablated in experiment, like I don't believe squat here. Like, I want to know how much this does. This is clearly not something you do from the beginning, this is clearly something you add after the other stuff doesn't work. So I want to see an experiment where this connection is missing, and I want to see an experiment where only this connection happens, to decide, you know, where the actual work is going here. Then another thing: you can see this here, the middle column is entirely useless.
Like, this right here: the upper part here is simply a repetition from the left. So these two things are repeating. And then the lower part is repeated here, right? And in fact, you can stack as many of these columns as you want. They just call them query, key, and value. Well, if I just call them column one, column two, and here, this is like the final column, right? I can, in fact, insert column three, column four, column five, I can insert as many as I want, because it's just repeated, right? There's no qualitative difference that differentiates the queries from the keys in this model, right? Only the values are a bit different, because at the end, they're not aggregated into this global vector with this additive attention thing. But in essence, you know, you could do away completely with, for example, the key column, and directly multiply the queries into the values, completely possible. So the key column is completely unnecessary. Now, you might think, okay, if the key column is unnecessary, or if I can introduce 50 key columns in between that always take the last whatever global vector and multiply it in and do additive attention, is this really an attention mechanism? And the answer is kind of, but not in the way you expect. It's a bit sneaky, honestly. See, attention is when I have, well, arguably, right, who am I to define this, but arguably, attention is when I create one of these things in a dynamic way. And these things are: how do I aggregate information, how do I weigh information from an input sequence? Okay, that is, in essence, an attention mechanism dynamically creating this weighting. So the only place this actually really happens right here is in this w thing, right? So this here is, in fact, the attention mechanism, not this; this is just a weighted sum. Like, this here is the hidden attention mechanism; it's essentially a self attention mechanism, right? You can see. So the alphas are how we aggregate information. And then, okay, I guess, yeah, this belongs to the attention mechanism. But the keys and the values are both what they call q, right? What I aggregate here, those are essentially the values, the things to be addressed, these are essentially the keys. So the query is essentially this thing right here, that's the query. Now the query, as you can see, is not dynamic. The query is just statically learned, which makes this essentially into, like, a feed forward network, or at best an attention mechanism with a single learned query. So instead of having n queries, now we have one query per head. And that's why I said the thing at the very beginning: if this is applied to a task that largely relies on, you know, single-minded, global information tasks, and so on, such as sequence classification or something like this, it can be that I only need a couple of really different intermediate features per layer, after all, they are vector valued. Which means that if I have eight heads, which have eight different w vectors, and you know, there are two w vectors per layer, to be fair, there is a w here, and there's also a w again in this thing right here, so every column gives me essentially a new feature to extract, right?
So the number of heads times the number of these columns I have is essentially the number of static features I can extract from such a sequence. And as I said, for global information tasks, that might in fact be enough. And in that case, you know, good, I can get around. However, I could have done the same thing, probably, by simply constructing fewer queries than keys and reducing the sequence length or something like this. I mean, there are many ways of doing this. But I think the thing here is framed in terms of the words of an attention mechanism, where the actual attention mechanism is simply like the thing here that happens inside the queries. It's essentially a self attention mechanism on top of the queries with not a dynamic but one single fixed query. The same goes for column two, and then column three is just kind of weird. Like, it's kind of a weird residual connection, or something where there's this product here with something that's incoming. It's kind of like a feed forward layer again, like a dynamic feed forward layer per token. Yeah. So yes, that's why I find the name a bit deceptive right here, also to formulate it as query, key, and value here, and their whole talk about how we model the interaction between something, something, something. Yeah. Okay. But what about experiments? Their experiments I find to be relatively lacking. They do have a lot of baseline comparisons, which is respectable. Their data sets, however, appear to be, yeah, things like sentiment classification, topic classification tasks. And, you know, they do perform well. I am, you know, experimental results are experimental results. And then, you know, the best numbers are achieved by ensembles, which is also fine, right. But even the regular numbers right here appear to be quite competitive. So I don't exactly know. Yeah, the complexity right here is also a bit shaky, because they sort of leave away the linear operations and so on. And, as I said, there are no ablations of most of the things. So there are no ablations, for example, of this residual connection where you just randomly add the query. Like, why would you do that? Why would you add the query? Like, that doesn't even make sense. If you call this thing a query, then by itself it should carry no information to pass on, by nature of being a query. Right. So, you know, why do you add it up there? You know, what's the effect of the individual columns, how many there are, right? You know, there are many things to ablate here to really show why this model performs well. What they do is they compare sort of the runtime, the runtime as the sequence length increases. And as you can see, they're quite fast right here, which, I guess, fast transformer here is this Fastformer, versus the regular transformer, and they also are like a constant factor faster than others. But, you know, are you a constant factor faster because you actually don't do any sort of attention? I don't know. So yeah, those are my two cents to this paper. Again, this might be a neat model for certain tasks. It's certainly fast, it certainly doesn't make you run out of memory as a regular transformer would. For a given set of tasks, it might in fact work better than a transformer. My main problem here is with the whole framing in terms of attention.
In terms of the sort of same language, trying to pass this off as attention, to pass this off as a faster transformer, which it is not. Alright, let me know what you think in the comments, and thanks for listening. Bye bye.
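Putting the walkthrough together, here is a hedged end-to-end sketch of one Fastformer head as I read the description above, reusing the helpers from the earlier sketches. Every name and the exact placement of the final linear map are my interpretation, not the authors' code:

```python
def fastformer_head(X, Wq, Wk, Wv, w_q, w_k, Wr):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv  # (n, d) each, per-token transformations
    q_global = additive_pool(Q, w_q)  # (d,) global query via the alphas
    P = K * q_global                  # (n, d) element wise product, no summing
    k_global = additive_pool(P, w_k)  # (d,) global key via the betas
    U = V * k_global                  # (n, d) element wise product again
    R = U @ Wr                        # the extra linear transformation
    return R + Q                      # add the original queries back, the odd residual
```

Every step touches each token only once, so the head is O(n d) rather than O(n^2 d), which is the whole linear complexity argument; whether the result still deserves the name attention is exactly the question raised above.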
[ { "start": 0, "end": 6.16, "text": " Hello there! Today we'll look at Fastformer Additive Attention Can Be All You Need by" }, { "start": 6.16, "end": 14.120000000000001, "text": " Chuan Wu, Fang Zhao Wu, Tao Qi, and Yongfeng Huang. So this paper definitely wins out in the category" }, { "start": 14.120000000000001, "end": 22.8, "text": " of most innovative paper titles of the last few months, as apparently we've gone from Is All You" }, { "start": 22.8, "end": 29.96, "text": " Need to Can Be All You Need. So a big win on this front. As you might have guessed from this title," }, { "start": 29.96, "end": 37.120000000000005, "text": " the paper is introducing a new kind of attention mechanism. If you don't know what an attention" }, { "start": 37.120000000000005, "end": 42.8, "text": " mechanism is, and you're in machine learning, you might want to find out. I have a video on" }, { "start": 42.8, "end": 50.08, "text": " attention is all you need. So the new attention here is additive attention, which is supposed" }, { "start": 50.08, "end": 57.84, "text": " to be a much, much, much faster way of doing attention, thus the name Fastformer. This" }, { "start": 57.84, "end": 63.56, "text": " additive attention circumvents this quadratic bottleneck that we usually have in the attention" }, { "start": 63.56, "end": 70, "text": " mechanism. Instead of doing sort of multiplicative attention, they do what they call additive" }, { "start": 70, "end": 76.32000000000001, "text": " attention. Now, the naming, in my opinion, is a bit confusing, and the whole concept is a bit" }, { "start": 76.32000000000001, "end": 82.34, "text": " confusing. So on a high level, that's what they do. They design a new attention mechanism. My" }, { "start": 82.34, "end": 88.68, "text": " opinion of the paper is that it's kind of deceptively naming things to make it appear like" }, { "start": 88.68, "end": 95.56, "text": " it's an attention mechanism, where in reality, it seems to be sort of just sort of a feed forward" }, { "start": 95.56, "end": 103.16, "text": " ish layer type of thing that they propose, maybe not even. So you know, we'll go into that. Their" }, { "start": 103.16, "end": 110, "text": " promises are that of course, circumventing this quadratic bottleneck of attention, you can input" }, { "start": 110, "end": 118, "text": " much longer sequences into the context of a transformer. And you can do it also much faster" }, { "start": 118, "end": 123.2, "text": " for the same length of sequences, since everything is just additive and not multiplicative. We're" }, { "start": 123.2, "end": 129.16, "text": " gonna find that out. They claim they have a lot of experimental evidence. And yeah, if you like" }, { "start": 129.16, "end": 136.44, "text": " content like this, you know, don't hesitate to subscribe if you haven't done so already. So the" }, { "start": 136.44, "end": 145.96, "text": " abstract reads transformer are very powerful. Okay. However, the attention mechanism is inefficient" }, { "start": 145.96, "end": 152.32, "text": " due to the quadratic complexity to input sequence length. They say although there are many methods" }, { "start": 152.32, "end": 158.12, "text": " on transformer acceleration, they are still either inefficient on long sequences or not effective" }, { "start": 158.12, "end": 165.56, "text": " enough by effective, I guess, they mean that their performance suffers too much. 
So they say they" }, { "start": 165.56, "end": 171.92000000000002, "text": " propose fast former an efficient transformer model based on additive attention. So instead of" }, { "start": 171.92000000000002, "end": 178.4, "text": " modeling the pairwise interactions between tokens, which is what attention does, we first use additive" }, { "start": 178.4, "end": 184.36, "text": " attention mechanism to model global contexts and then further transform each token representation" }, { "start": 184.36, "end": 191.24, "text": " based on its interaction with the global context representations. Now, if this sounds confusing to" }, { "start": 191.24, "end": 198.68, "text": " you, it does so to me too. They go a little bit into more detail right here, they say they have" }, { "start": 198.68, "end": 206.96, "text": " this additive attention, which is linear complexity instead of quadratic as in usual transformers. So" }, { "start": 206.96, "end": 214.04000000000002, "text": " here is a bit more detail, we use additive attention to summarize the input attention query matrix into" }, { "start": 214.04000000000002, "end": 219.48000000000002, "text": " a global query vector. Then we model the interaction between the attention key and the global query" }, { "start": 219.48, "end": 225.95999999999998, "text": " vector via element wise product to learn the global context aware key matrix. We further summarize" }, { "start": 225.95999999999998, "end": 232.56, "text": " it into a global key vector via additive attention. Then we use element wise product to aggregate the" }, { "start": 232.56, "end": 239.92, "text": " global key and attention value, which are further processed by a linear transformation to compute" }, { "start": 239.92, "end": 246.51999999999998, "text": " the global context aware attention value. Finally, we add together the original attention query and" }, { "start": 246.52, "end": 252.76000000000002, "text": " the global context aware attention value to form the final output. You know, still after this paragraph" }, { "start": 252.76000000000002, "end": 260.6, "text": " doesn't make too much sense to me to understand. So we'll go to the diagram in just one second. But" }, { "start": 260.6, "end": 266.04, "text": " here is essentially what they promise. Okay, they propose an additive attention based transformer" }, { "start": 266.04, "end": 272.2, "text": " named fast former to our knowledge, fast former is the most efficient transformer architecture. So" }, { "start": 272.2, "end": 278.08, "text": " that's one they propose the most efficient transformer architecture. Second, we propose to" }, { "start": 278.08, "end": 282.28, "text": " model the interaction between global context and token representations via element wise product," }, { "start": 282.28, "end": 289.15999999999997, "text": " which can help fully model context information in an efficient way. Okay, so they the element wise" }, { "start": 289.15999999999997, "end": 296, "text": " product seems to be the second component. So there's additive attention, there is element wise product." }, { "start": 296, "end": 303.6, "text": " And then lastly, they say, you know, our experimental data sets valid validate our approach. All right," }, { "start": 303.6, "end": 311.08, "text": " so here is the coveted diagram of the fast former. It's a little bit complicated. But I want to go" }, { "start": 311.08, "end": 316.64, "text": " back a little bit to the regular attention mechanism. I know I've done this a lot. 
But I" }, { "start": 316.64, "end": 323.52, "text": " think in this context, it is really worth discussing. So in a regular attention mechanism," }, { "start": 323.52, "end": 330.64, "text": " what do you have, you have some sort of an input sequence, each one of these things can be a vector," }, { "start": 330.64, "end": 335.35999999999996, "text": " some sort of an embedding vector or something like this, but it's a, it's a sequence, essentially," }, { "start": 335.35999999999996, "end": 340.68, "text": " it's a set, but we think of it as a sequence of, let's say tokens in natural language. And we want" }, { "start": 340.68, "end": 349.2, "text": " to transform the sequence of one layer into a sequence of equal length of the next layer. So if" }, { "start": 349.2, "end": 354.59999999999997, "text": " we stack many of these layers together, we sort of want to improve the representations of these" }, { "start": 354.59999999999997, "end": 361.44, "text": " tokens layer by layer by layer, such that we can at the end of the transformer understand what each" }, { "start": 361.44, "end": 370.88, "text": " token means in the context of all other tokens. So if this is a sentence, my house is very green," }, { "start": 370.88, "end": 378.64, "text": " then at the at the beginning, each word is just an isolated piece of data. At the end of these" }, { "start": 378.64, "end": 385.76, "text": " transformations, we want sort of all the tokens to be aware of all the other tokens in the input," }, { "start": 385.76, "end": 393.76, "text": " and sort of capture their in context meaning. Now, what we need to do is we need to transform" }, { "start": 393.76, "end": 400.56, "text": " one set of representations into the next one. The way we do this is by the attention mechanism. So" }, { "start": 400.56, "end": 406.8, "text": " the attention mechanism, essentially, from each of the tokens, it derives three different things." }, { "start": 406.8, "end": 414.8, "text": " One is called a key. So the key is a vector. So the key is a vector for each token. And that" }, { "start": 414.8, "end": 421.84000000000003, "text": " vector describes kind of like what the content of this token is so far. Okay, so one vector is the" }, { "start": 421.84000000000003, "end": 428.40000000000003, "text": " key, which allows the token to advertise what it has to offer. The other one is the query," }, { "start": 429.28000000000003, "end": 434.16, "text": " which allows each token and that's also derived from the same token, but I'm going to draw it" }, { "start": 434.16, "end": 442.72, "text": " up here. The query means what does this token want to know about the other tokens in the sequence." }, { "start": 442.72, "end": 448.08000000000004, "text": " So this can be different from its content. So as you see the query and the key, they might be" }, { "start": 448.08000000000004, "end": 452.96000000000004, "text": " different. There are variants where there's the same, but usually you derive two different" }, { "start": 452.96000000000004, "end": 460.40000000000003, "text": " values from each token. And then what we do is we route by inner product. So you for every single" }, { "start": 460.4, "end": 468, "text": " query, you aggregate across the entire input sequence, you aggregate by inner product," }, { "start": 468, "end": 475.67999999999995, "text": " which means that this would get routed here by a lot. 
This one may be two, these ones not so much," }, { "start": 475.67999999999995, "end": 482.15999999999997, "text": " and so on. So you aggregate essentially the inner product, which for each query gives you a histogram," }, { "start": 482.15999999999997, "end": 488.88, "text": " a histogram across the sequence saying, okay, this information here is mildly relevant. This one" }, { "start": 488.88, "end": 496.24, "text": " is more relevant. This one is slightly relevant. These ones aren't relevant at all for me. This" }, { "start": 496.24, "end": 503.04, "text": " histogram, you then normalize via a softmax operation. And that gives you, I mean, that gives" }, { "start": 503.04, "end": 509.04, "text": " you a real distribution over the input. So with the query and the key, you decide how you want to" }, { "start": 509.04, "end": 517.84, "text": " aggregate the information in the input sequence for one particular element in the output sequence." }, { "start": 517.84, "end": 521.2800000000001, "text": " You do this for every element. So for every element, you get a distribution of how you want" }, { "start": 521.2800000000001, "end": 528.24, "text": " to aggregate. And then in the last step, every single item also emits what's called a value." }, { "start": 528.24, "end": 533.6800000000001, "text": " And the value is yet another vector. And the value, I guess you don't even have to actually" }, { "start": 534.32, "end": 540.1600000000001, "text": " transform anything, the value, you can just take the information itself of the token if you want." }, { "start": 540.1600000000001, "end": 546.08, "text": " But essentially, the value is ultimately what you multiply together with this distribution." }, { "start": 546.08, "end": 551.2, "text": " And then that becomes your next layer representation for this particular token." }, { "start": 552, "end": 558.24, "text": " Right. So the whole query key attention mechanism is simply to decide how do I want to aggregate the" }, { "start": 559.6, "end": 568.8000000000001, "text": " different values of the input sequence for any given token in the next layer. All right. Okay," }, { "start": 568.8000000000001, "end": 575.84, "text": " I hope this is clear. So the query, the key advertises what the contents are, which is kind" }, { "start": 575.84, "end": 581.36, "text": " of like the value, the value is the actual contents. But the key is more like an addressable" }, { "start": 581.36, "end": 587.9200000000001, "text": " representation of the content. And the query emits what do I want to know about the others." }, { "start": 587.9200000000001, "end": 592.88, "text": " So you match the queries of myself with the key of the others. And that aggregates. Now," }, { "start": 593.44, "end": 599.6800000000001, "text": " in that context, let's look at the fast former. So we said there are two elements there is," }, { "start": 599.6800000000001, "end": 604.24, "text": " first of all, there is this additive attention. And that's what you can see kind of down here." }, { "start": 604.24, "end": 609.2, "text": " So you see, there's the input, and the input gets transformed into three different things into" }, { "start": 609.2, "end": 615.76, "text": " queries, keys and values. That is just like a regular attention mechanism. These are linear" }, { "start": 616.32, "end": 623.36, "text": " transformations that each token independently goes through. 
So this token independently produces" }, { "start": 623.36, "end": 629.6, "text": " this, this query, this key and this value. And with the same transformation, this token produces" }, { "start": 629.6, "end": 635.2, "text": " this query, this key, and these this value. So there's no interaction, every token goes through" }, { "start": 635.2, "end": 644, "text": " the same transformation, then you can see instead of now considering the interactions between each" }, { "start": 644, "end": 649.28, "text": " of the queries and each of the keys, sorry, that should probably be up here. Instead of considering" }, { "start": 649.28, "end": 656.08, "text": " this interaction, we don't do that. What we do first is we say, well, this really becomes quadratic" }, { "start": 656.08, "end": 663.2, "text": " if we do if we consider interaction between each query and each key. Therefore, let's simply" }, { "start": 663.2, "end": 670.24, "text": " construct one global query, okay, one global query. And then we consider the interaction of" }, { "start": 670.24, "end": 678.64, "text": " that global query with each of the keys instead of instead of doing everything with everything." }, { "start": 678.64, "end": 684.88, "text": " So here is you work here, you can see how the linearness instead of the quadraticness of this" }, { "start": 684.88, "end": 690.88, "text": " approach comes to be instead of considering pairwise interactions, we simply construct a" }, { "start": 690.88, "end": 698.64, "text": " single query vector. By the way, this is all this is one head. So this is one head. Usually a" }, { "start": 698.64, "end": 704.08, "text": " transformer has multiple heads. So over here, you would have like, head number two, and so on head" }, { "start": 704.08, "end": 711.68, "text": " number three, head number four, but in a single head, we make one query vector. Yeah, and you" }, { "start": 711.68, "end": 719.5999999999999, "text": " immediately see what the shortcomings are here. Whereas previously, every token could sort of" }, { "start": 719.5999999999999, "end": 724.9599999999999, "text": " dynamically decide how it wants to aggregate information, and every token could do that," }, { "start": 725.8399999999999, "end": 732.64, "text": " you know, in a in a sort of by itself. Now, it's only the sequence as a whole that gets to decide" }, { "start": 732.64, "end": 738, "text": " how it wants to aggregate information, because it needs to come up with a combined query vector." }, { "start": 738, "end": 745.2, "text": " So I'm going to guess this thing here works might work quite well for tasks that have sort of" }, { "start": 745.2, "end": 751.12, "text": " a single single minded output sort of topic classification or something like this, where" }, { "start": 751.12, "end": 757.84, "text": " you simply, you know, the global information is necessary usually, whereas tasks that might be" }, { "start": 757.84, "end": 763.52, "text": " more, you know, nuanced and language relevant, like considering specific interactions between" }, { "start": 763.52, "end": 771.84, "text": " individual tokens, and so on, those might fall a lot short in this approach. Okay, but how how does" }, { "start": 771.84, "end": 778.96, "text": " this single query vector come to be? Now, this single query vector is constructed purely, as you" }, { "start": 778.96, "end": 786.72, "text": " can see from the queries of the individual token elements. 
How there's this funny construction here," }, { "start": 786.72, "end": 793.84, "text": " where you have you can see this is the query vector right here. And then it itself goes here." }, { "start": 794.4, "end": 802.4, "text": " And here, so it's used twice. Okay, so we what we do is we construct this alpha value for each query" }, { "start": 802.4, "end": 809.2, "text": " vector. And then we multiply that alpha value by the query vector itself. And then we add this is" }, { "start": 809.2, "end": 817.12, "text": " an addition here, we add all together at the end. So essentially, this query vector here, the global" }, { "start": 817.12, "end": 824.08, "text": " one is a weighted sum across all of the individual query vectors. Now the question is, you know, how" }, { "start": 824.08, "end": 830.24, "text": " do we decide decide on the weight? And that's where these alpha values come in. So let's see," }, { "start": 830.24, "end": 840.48, "text": " I here is the formulas for the alpha value. So each query vector qi will produce the its own" }, { "start": 840.48, "end": 846.48, "text": " alpha i, how is that computed? As you can see right here, this should be familiar to you. This" }, { "start": 846.48, "end": 856.8, "text": " is the softmax formula. So what we do is we it's also the formula for logistic regression, if you" }, { "start": 856.8, "end": 867.04, "text": " squint a little bit. So essentially, the alpha i's are the result of a softmax operation across the" }, { "start": 867.04, "end": 874.64, "text": " queries. So you have query one, query two, query three, right? It's a softmax across not the queries" }, { "start": 874.64, "end": 882.4, "text": " itself, but this quantity right here, the query multiplied by some sort of a transformation. And" }, { "start": 882.4, "end": 889.68, "text": " this now really looks like logistic regression. This w here is a vector that is learned, this is" }, { "start": 889.68, "end": 897.4399999999999, "text": " a learned parameter vector, right? I take the inner product with each of the queries. And that gives" }, { "start": 897.4399999999999, "end": 905.92, "text": " me like a number, right? And then what I do is I simply normalize this by all the numbers of all" }, { "start": 905.92, "end": 914.3199999999999, "text": " the queries. Okay, so every one of these gets multiplied by this w, which gives me one number," }, { "start": 914.3199999999999, "end": 921.4399999999999, "text": " and then I simply normalize, I push it through the exponential function, then I normalize it." }, { "start": 921.4399999999999, "end": 927.92, "text": " This is essentially a logistic regression with the w being the feature vector." }, { "start": 927.92, "end": 934.7199999999999, "text": " Okay, now what does it mean? What does this mean? Okay, like we construct the final query vector" }, { "start": 934.7199999999999, "end": 943.1999999999999, "text": " as an aggregate across all query vectors with the weightings being dependent on like a softmax or" }, { "start": 943.1999999999999, "end": 948.4, "text": " a logistic regression with respect to this learned vector w, this is always the same right for for" }, { "start": 948.4, "end": 958.72, "text": " every one of those queries. 
I can make sense of that if I think okay, this is the w here is essentially" }, { "start": 960.24, "end": 965.76, "text": " you know, in logistic regression, you classify so the w vector me is the sort of the classification" }, { "start": 965.76, "end": 975.52, "text": " boundary of, you know, the one class versus the other class. So this here, I think is essentially" }, { "start": 975.52, "end": 983.12, "text": " a little classifier that cares about one particular thing that is learned. So this can be" }, { "start": 983.12, "end": 990.96, "text": " some intermediate feature that is useful that is learned via backpropagation in this w vector." }, { "start": 991.68, "end": 998.64, "text": " And the the weighting of this particular head in this particular layer is then according to that" }, { "start": 998.64, "end": 1005.4399999999999, "text": " feature. So in here, there is somewhere there is a w vector, and that w vector in this particular" }, { "start": 1005.4399999999999, "end": 1012.4, "text": " layer for this particular head refers to some kind of useful feature, like, I don't know, like," }, { "start": 1012.4, "end": 1021.04, "text": " is there a name of a country somewhere in the sentence? Okay. And that's what we use as a weight" }, { "start": 1021.04, "end": 1029.92, "text": " to aggregate the queries. So you can immediately see that if a term, if a, you know, a token," }, { "start": 1031.12, "end": 1039.68, "text": " it's if it's query sort of contains a country information, this classifier would, you know," }, { "start": 1040.32, "end": 1047.2, "text": " say, well, that particular query has a lot of the information that I am particularly look for in" }, { "start": 1047.2, "end": 1051.92, "text": " this layer, therefore, the inner product will be high, therefore, the alpha will be high, therefore," }, { "start": 1051.92, "end": 1059.52, "text": " that particular query would be represented greatly in the global query vector. So the global query" }, { "start": 1059.52, "end": 1068.0800000000002, "text": " vector, essentially, you can think of, I select among all the query vectors, the ones that I care" }, { "start": 1068.0800000000002, "end": 1075.52, "text": " about in this particular layer in this particular head. However, what you care about is the" }, { "start": 1075.52, "end": 1082.48, "text": " static. It's statically learned, it's the same for every single sample. Okay. All right. So" }, { "start": 1082.48, "end": 1088.8, "text": " this is sort of a weighing by particular feature. Now, once we have the global query vector right" }, { "start": 1088.8, "end": 1095.2, "text": " here, how do we let it interact with the key vector? So usually what we do is we do an inner" }, { "start": 1095.2, "end": 1101.36, "text": " product of the query and the key. And then that defines sort of our aggregation distribution." }, { "start": 1101.36, "end": 1107.12, "text": " However, since we only have a single query, you know, that will not give us that will in fact," }, { "start": 1107.12, "end": 1116.24, "text": " not give us an n dimensional seek, sorry, an n length sequence as here, that will only give us" }, { "start": 1116.24, "end": 1121.6799999999998, "text": " a sequence of length one in the next layer. So we can't really do that. 
So what they do is they" }, { "start": 1121.6799999999998, "end": 1128.6399999999999, "text": " almost do an inner product, except they don't sum, right, they do simply element wise, multi" }, { "start": 1128.64, "end": 1135.6000000000001, "text": " They do simply element wise multiplications of the queries and the keys. Now element wise" }, { "start": 1135.6000000000001, "end": 1144.16, "text": " multiplication, it kind of means so it means, you know, like the element wise multiplication," }, { "start": 1144.16, "end": 1150.16, "text": " if you think of it, if both elements are small, the result is very small. If and if both are high," }, { "start": 1150.16, "end": 1155.68, "text": " the result is very high. So there's some nonlinear dynamics going on within the same dimension," }, { "start": 1155.68, "end": 1165.2, "text": " right? There's no aggregation across dimensions. And yeah, so they do element wise multiplication" }, { "start": 1165.2, "end": 1171.44, "text": " right here in order to obtain these P vectors and the P vectors, they are now the integration," }, { "start": 1172.16, "end": 1182.4, "text": " every P vector, P vector, so P i is equal to the element wise multiplication of the i of key vector" }, { "start": 1182.4, "end": 1193.92, "text": " with the global query vector. Okay, so yeah, and the query, the query vector itself is, of course," }, { "start": 1194.5600000000002, "end": 1204.8000000000002, "text": " a sum across a weighted sum across all of the queries. So if I pull the K in, you can see that" }, { "start": 1204.8, "end": 1214.1599999999999, "text": " I still have, okay, alpha j, I still have this quadratic thing here, I still have for you know," }, { "start": 1215.04, "end": 1223.52, "text": " I get I have n P vectors. And for each one, I have also n Q vectors, and I consider products" }, { "start": 1223.52, "end": 1230.48, "text": " of the form i j. So I still have the quadratic products in here. However, I don't have quadratic" }, { "start": 1230.48, "end": 1238.8, "text": " complexity. Why? Because I don't have a softmax in between aggregating the queries and aggregating" }, { "start": 1238.8, "end": 1246.32, "text": " the keys. And therefore, you know, the what is the commutative associative rule applies, and I can" }, { "start": 1246.32, "end": 1253.44, "text": " simply get away with first aggregating the query and then multiplying it as a whole by the keys." }, { "start": 1253.44, "end": 1259.68, "text": " Now, of course, that are those are two linear operations in sequence. Whereas in the normal" }, { "start": 1259.68, "end": 1265.6000000000001, "text": " attention mechanism, I have a linear operation, then a nonlinear one with the softmax, and then" }, { "start": 1265.6000000000001, "end": 1272.48, "text": " again, a linear one. And arguably, the nonlinearities is what brings the whole power to deep learning." }, { "start": 1272.48, "end": 1279.76, "text": " So, you know, this essentially, here, you can see how it really circumvents the quadratic bottlenecks" }, { "start": 1279.76, "end": 1286, "text": " by simply saying, well, if everything's linear, then there, you know, we can we can just add all" }, { "start": 1286, "end": 1294.24, "text": " together. Yeah, that's the trick, essentially. Now, then you realize we're not done yet. Okay," }, { "start": 1294.24, "end": 1301.52, "text": " what do we do with the P vectors? Well, this seems familiar, right? 
Again, we do another one of these" }, { "start": 1301.52, "end": 1306.48, "text": " additive attentions. So they call this thing additive attention, you can see from each P one," }, { "start": 1306.48, "end": 1312.72, "text": " we produce a beta value, the beta value exactly the same way as the alpha values, I suppose," }, { "start": 1312.72, "end": 1318.96, "text": " at least yes, you can see that right here, right, the beta values exactly the same. For each P," }, { "start": 1319.52, "end": 1329.52, "text": " we multiply it by a learned feature vector, which is WK right here. And then we normalize by all of" }, { "start": 1329.52, "end": 1335.84, "text": " them. And, you know, after the exponential function, and then we aggregate the global key via, again," }, { "start": 1335.84, "end": 1344.24, "text": " a weighted sum of all of these P vectors. So this is again, additive attention in order, in order" }, { "start": 1344.24, "end": 1351.9199999999998, "text": " to have a global key vector. And now, exactly the same trick, we use the global key vector," }, { "start": 1351.9199999999998, "end": 1359.36, "text": " element wise multiplied by the value vectors, which gives us these u vectors right here," }, { "start": 1359.36, "end": 1367.6, "text": " that these apparently go through another linear transformation to give us the R vectors. You know," }, { "start": 1367.6, "end": 1374.4799999999998, "text": " you can, you can stack as many linear transformations as you want. And then we're" }, { "start": 1374.4799999999998, "end": 1380.6399999999999, "text": " still not done, right? We're still not done. So essentially, what we've done in the end is we" }, { "start": 1380.64, "end": 1388.5600000000002, "text": " have we we take the values, which is the information we want to forward propagate. And for each value," }, { "start": 1388.5600000000002, "end": 1398.4, "text": " we element wise multiply it with this K vector. And this K vector is a result of the keys and" }, { "start": 1398.4, "end": 1404.3200000000002, "text": " also this query vector. And that's a result of the the queues. So essentially," }, { "start": 1404.32, "end": 1412.24, "text": " there is no aggregation of information as is there in the regular transformer, I don't aggregate" }, { "start": 1412.24, "end": 1419.28, "text": " the values from the sequence in a weighted fashion, I simply leave each value as it is," }, { "start": 1419.28, "end": 1423.84, "text": " you know, these are, as I said, these are transformations that don't depend on the other" }, { "start": 1423.84, "end": 1432.96, "text": " sequence elements. So V1 purely depends on E1. And the only way the only way that token information" }, { "start": 1432.96, "end": 1439.92, "text": " from the other tokens can come into any token is via this aggregation methods, right here," }, { "start": 1439.92, "end": 1448.56, "text": " in, in that in the normalization constant, right in in the aggregation that happens via the" }, { "start": 1448.56, "end": 1456.72, "text": " normalization, you know, for example, the key n could be represented more in this global key," }, { "start": 1456.72, "end": 1466.08, "text": " and then that's multiplied here to my vector one. So that's how other information comes into any" }, { "start": 1466.08, "end": 1474.56, "text": " particular token. And as I said, we're still not done. 
After we obtained these R vectors, we then" }, { "start": 1474.56, "end": 1485.52, "text": " add to them, this thing right here, we add to them, the query vectors again, now why I don't add" }, { "start": 1485.52, "end": 1495.76, "text": " why, I don't know, but we just do. So we simply add the query vectors to the R vectors that we" }, { "start": 1495.76, "end": 1504.8, "text": " have here. And that's going to be our final output. So this is stupidly complex. And I don't think for" }, { "start": 1504.8, "end": 1511.6, "text": " any particular reason. So there are multiple problems right here. For example, this transformation" }, { "start": 1511.6, "end": 1519.84, "text": " right here is a linear transformation. Okay, maybe it makes sense. But it seems like you just had a" }, { "start": 1519.84, "end": 1528.6399999999999, "text": " linear transformation here. And this whole sum here is sort of a linear aggregation. Ergo, yeah," }, { "start": 1528.6399999999999, "end": 1535.9199999999998, "text": " okay, maybe you can justify that. But second of all, this connection right here, right? If this is" }, { "start": 1535.92, "end": 1545.2, "text": " not ablated in experiment, like I don't believe squat here. Like, I want to know how much this" }, { "start": 1545.2, "end": 1549.8400000000001, "text": " this is clearly not something you do from the beginning, this is clearly something you add" }, { "start": 1549.8400000000001, "end": 1557.28, "text": " after the other stuff don't doesn't work. So I want to see an experiment where this connection" }, { "start": 1557.28, "end": 1563.6000000000001, "text": " is missing, and to decide and I want to see an experiment where only this connection happens to" }, { "start": 1563.6, "end": 1571.9199999999998, "text": " decide, you know, where the actual work is going here. Then another thing, you can see this here," }, { "start": 1571.9199999999998, "end": 1579.4399999999998, "text": " the middle column is entirely useless. Like, like this, this right here, it's simply it's simply the" }, { "start": 1579.4399999999998, "end": 1586.6399999999999, "text": " lower part is a repetition from sorry, the upper part here is a repetition from the left. So these" }, { "start": 1586.64, "end": 1595.2800000000002, "text": " two things are repeating. And then the lower part is repeated here, right? And in fact, you can" }, { "start": 1595.2800000000002, "end": 1601.6000000000001, "text": " stack as many of these columns, they just call them query key, and value. Well, if I just call" }, { "start": 1601.6000000000001, "end": 1609.5200000000002, "text": " them column one, column two, and here, this this is like the final column, fine f cf, right? I can," }, { "start": 1609.5200000000002, "end": 1615.2, "text": " in fact, insert column three, column four, column five, I can insert as many as I want, because it's" }, { "start": 1615.2, "end": 1622.16, "text": " just repeated, right? That there's no qualitative difference that differentiates the queries from" }, { "start": 1622.16, "end": 1627.68, "text": " the keys in this model, right? Only the values are a bit different, because at the end, they're not" }, { "start": 1627.68, "end": 1634.88, "text": " aggregated into this global vector with this additive attention thing. 
But in essence, you know," }, { "start": 1634.88, "end": 1641.76, "text": " you could do away completely with for example, with the key column and directly do the query" }, { "start": 1641.76, "end": 1649.2, "text": " multiplying them into the values completely possible. So completely unnecessary key column." }, { "start": 1649.2, "end": 1654.96, "text": " Now, you might think, okay, if the key column is unnecessary, or if I can introduce 50 keys in" }, { "start": 1654.96, "end": 1662, "text": " between 50 key columns that always take the last whatever global vector and multiply it in and do" }, { "start": 1662, "end": 1668.72, "text": " additive attention. Is this really an attention mechanism? And the answer is kind of but not in" }, { "start": 1668.72, "end": 1679.04, "text": " the way you expect. It's a bit sneaky, honestly. See, attention is when I have, well, arguably," }, { "start": 1679.04, "end": 1685.1200000000001, "text": " right? Who am I to define this? But arguably, attention is when I create one of these things" }, { "start": 1685.1200000000001, "end": 1692.32, "text": " in a dynamic way. They and these things are how do I aggregate information? How do I weigh" }, { "start": 1692.32, "end": 1699.6, "text": " information from an input sequence? Okay, that is, in essence, an attention mechanism dynamically" }, { "start": 1699.6, "end": 1707.6799999999998, "text": " creating this waiting. So the only way this actually really happens right here is where we're" }, { "start": 1707.6799999999998, "end": 1716.1599999999999, "text": " in this W thing, right? So this here is in fact, the attention mechanism, not the not the not this," }, { "start": 1716.16, "end": 1724.16, "text": " this is just a weighted sum. Like, this here is the the hidden attention mechanism with," }, { "start": 1724.72, "end": 1730.96, "text": " it's essentially a self attention mechanism, right? You can you can see. So the alpha is" }, { "start": 1730.96, "end": 1738.48, "text": " are how do we aggregate information? And then, okay, I guess, yeah, this belongs to the attention" }, { "start": 1738.48, "end": 1748.16, "text": " mechanism. But the keys and the queries, sorry, the keys and the values are both what they call" }, { "start": 1748.16, "end": 1757.68, "text": " q, right? What I aggregate here, those are essentially the values, the things to be addressed," }, { "start": 1757.68, "end": 1764.08, "text": " these are essentially the keys. So the query is essentially this thing right here. That's" }, { "start": 1764.08, "end": 1770.8799999999999, "text": " that's the query. Now the query, as you can see, is not dynamic, the query is just statically" }, { "start": 1770.8799999999999, "end": 1777.6, "text": " learned, which makes this essentially into a, like a feed forward network, or at best an" }, { "start": 1777.6, "end": 1786.24, "text": " attention mechanism with a single learned query. So instead of having n queries, now we have one" }, { "start": 1786.24, "end": 1795.28, "text": " query per head. 
And that's why I said the thing at the very beginning, if, if this is applied to a" }, { "start": 1795.28, "end": 1802.64, "text": " task that largely relies on, you know, single minded task, global global information task," }, { "start": 1802.64, "end": 1809.36, "text": " and so on, such as sequence classification, or something like this, it can be that I only need" }, { "start": 1809.36, "end": 1816.24, "text": " a couple of intermediate really different features per layer, after all, they are vector valued. So," }, { "start": 1817.6799999999998, "end": 1824.6399999999999, "text": " which means that if I have eight heads, which have eight different w vectors, and you know," }, { "start": 1824.6399999999999, "end": 1830.56, "text": " there are two w vectors per layer, to be fair, there is a w here. And there's also a w again," }, { "start": 1830.56, "end": 1837.6, "text": " in this thing right here. So every column gives me essentially a new feature to extract, right?" }, { "start": 1837.6, "end": 1842.7199999999998, "text": " So the number of heads times the number of these columns I have is essentially the number of" }, { "start": 1842.7199999999998, "end": 1849.6, "text": " features I can have static features I can extract from such a sequence. And as I said, for global" }, { "start": 1849.6, "end": 1856.32, "text": " information tasks, that might in fact be enough. And in that case, you know, good, I can I can get" }, { "start": 1856.32, "end": 1866.3999999999999, "text": " around. However, I could have done the same thing, probably by Yeah, but by simply constructing less" }, { "start": 1866.4, "end": 1872.96, "text": " queries than keys and reducing the sequence length or something like this. I mean, there are" }, { "start": 1872.96, "end": 1880.16, "text": " there are many ways of this. But I think the thing here is framed in terms of the words of" }, { "start": 1880.16, "end": 1885.8400000000001, "text": " an attention mechanism, where the actual attention mechanism is simply like the thing here that" }, { "start": 1885.8400000000001, "end": 1891.3600000000001, "text": " happens inside the queries, it's essentially a self attention mechanism on top of the queries" }, { "start": 1891.36, "end": 1898.24, "text": " with not a dynamic but one single fixed query. The same goes for column two, and then column three" }, { "start": 1898.24, "end": 1907.04, "text": " is just kind of like weird. Like, it's kind of a weird residual connection, or something where" }, { "start": 1907.04, "end": 1912.56, "text": " there's this product here with something that's incoming. It's kind of like a feed forward layer" }, { "start": 1912.56, "end": 1924.6399999999999, "text": " again, like a dynamic feed forward layer per token. Yeah. So yes, that's that's why I find the name" }, { "start": 1924.6399999999999, "end": 1931.6799999999998, "text": " a bit deceptive right here also to formulate as query key and value here and, and their whole" }, { "start": 1931.6799999999998, "end": 1938.72, "text": " talk about who we model the interaction between something, something, something. Yeah. Okay. But" }, { "start": 1938.72, "end": 1947.2, "text": " what about experiments? They're experiments I find to be relatively lacking. They do have a lot of" }, { "start": 1947.2, "end": 1955.52, "text": " baseline comparisons, which is respectable. Their data sets, however, appear to be yeah, things like" }, { "start": 1955.52, "end": 1964.24, "text": " sentiment classification, topic classification tasks. 
And, you know, they do perform well." }, { "start": 1964.24, "end": 1971.6, "text": " I am, you know, experimental results are experimental results. And then, you know," }, { "start": 1971.6, "end": 1976.88, "text": " the best numbers are achieved by ensembles, which is which is also fine, right. But even" }, { "start": 1976.88, "end": 1986.24, "text": " the regular numbers right here appear to be quite competitive. So I don't exactly know." }, { "start": 1986.24, "end": 1994.32, "text": " Yeah, the complexity right here is also a bit shaky, because they sort of leave away the linear" }, { "start": 1994.32, "end": 2003.2, "text": " operations and so on like, yeah. And, as I said, there are no ablations of most of the things. So" }, { "start": 2003.2, "end": 2008.88, "text": " there are no ablations, for example, of this residual connection where you just randomly add" }, { "start": 2008.88, "end": 2014.56, "text": " the query, like, why would you do that? Why would you do that? Why would you do that? Why would you" }, { "start": 2014.56, "end": 2020.48, "text": " query? Like, why would you do that? Like, that doesn't even make sense. If you call this a query," }, { "start": 2021.6, "end": 2030.72, "text": " this thing, then by itself, it should carry no information to pass on by nature of being a query." }, { "start": 2030.72, "end": 2036.48, "text": " Right. So, you know, why do you why do you add it up there? You know, what's the effect of the" }, { "start": 2036.48, "end": 2044.3999999999999, "text": " individual columns, how many there are, right? You know, there are many things to ablate here to" }, { "start": 2044.4, "end": 2051.6800000000003, "text": " really show why this model performs well. What they do is they compare sort of the runtime and the" }, { "start": 2052.48, "end": 2059.36, "text": " the runtime as the sequence length increases. And as you can see, they're quite fast right here," }, { "start": 2060.4, "end": 2067.36, "text": " which I guess fast transfer is this fast former, I guess fast transformer is fast former." }, { "start": 2067.36, "end": 2073.44, "text": " So and and the regular transformer, and they also are like a constant factor faster than others." }, { "start": 2074.4, "end": 2080.48, "text": " But you know, are like, are you a constant factor faster, because you actually don't do" }, { "start": 2081.04, "end": 2090.1600000000003, "text": " any sort of attention? I don't I don't know. So yeah, that those are my my two cents to this" }, { "start": 2090.16, "end": 2097.2799999999997, "text": " paper. Again, this might be a neat model for certain tasks. It's certainly fast, it certainly" }, { "start": 2097.7599999999998, "end": 2102.64, "text": " doesn't make you run out of memory as a regular transformer for a given set of tasks, it might" }, { "start": 2102.64, "end": 2110.64, "text": " in fact work better than a transformer. My main problem here is with with the whole framing in" }, { "start": 2110.64, "end": 2118.96, "text": " terms of attention. In terms of the sort of same languages, trying to pass this off as a function," }, { "start": 2118.96, "end": 2126.16, "text": " pass this off as a faster transformer, which it is not. Alright, let me know what you think" }, { "start": 2126.16, "end": 2152.96, "text": " in the comments. And thanks for listening. Bye bye." } ]
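To make the additive attention mechanism discussed in the transcript above concrete, here is a minimal single-head sketch in PyTorch. This is my own reconstruction from the description, not the authors' code; the parameter names (w_alpha, w_beta, Wr) are made up, and the multi-head plumbing and some details of the paper's figure are omitted.

```python
import torch

def fastformer_head(x, Wq, Wk, Wv, Wr, w_alpha, w_beta):
    """Single-head sketch of Fastformer-style additive attention.

    x: (n, d) token representations; Wq/Wk/Wv/Wr: (d, d) projections;
    w_alpha/w_beta: (d,) learned scoring vectors (the little "classifiers"
    that decide which tokens count in this layer and head).
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.size(-1)

    # Additive attention over the queries: one scalar score per token
    # (an inner product with w_alpha, like logistic regression), then softmax.
    alpha = torch.softmax(q @ w_alpha / d ** 0.5, dim=0)        # (n,)
    global_q = (alpha.unsqueeze(-1) * q).sum(dim=0)             # (d,) one global query

    # Element-wise product instead of an inner product, so the output
    # keeps length n instead of collapsing to a single vector.
    p = k * global_q                                            # (n, d)

    # A second additive attention pools a single global key from the p vectors.
    beta = torch.softmax(p @ w_beta / d ** 0.5, dim=0)          # (n,)
    global_k = (beta.unsqueeze(-1) * p).sum(dim=0)              # (d,)

    # Each value is modulated element-wise; note there is no per-token
    # weighted aggregation of values as in full quadratic attention.
    u = v * global_k                                            # (n, d)
    return u @ Wr + q   # final linear transform plus the query skip-connection

n, d = 8, 16
x = torch.randn(n, d)
out = fastformer_head(x, *(torch.randn(d, d) for _ in range(4)),
                      torch.randn(d), torch.randn(d))           # (n, d)
```

Every step is a linear map or an element-wise product, which is where the linear rather than quadratic complexity comes from, exactly as argued above.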
_c6A33Fg5Ns
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
DeBERTa: Decoding-enhanced BERT with Disentangled Attention (Machine Learning Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "deep learning tutorial", "huggingface", "huggingface transformers", "microsoft", "microsoft research", "bert", "roberta", "deberta", "nlp", "natural language processing", "glue", "superglue", "state of the art", "transformers", "attention", "attention mechanism", "disentanglement", "disentangled representation", "positional encodings", "position embeddings", "masked language modelling", "pretraining", "open source" ]
#deberta #bert #huggingface DeBERTa by Microsoft is the next iteration of BERT-style Self-Attention Transformer models, surpassing RoBERTa in State-of-the-art in multiple NLP tasks. DeBERTa brings two key improvements: First, they treat content and position information separately in a new form of disentangled attention mechanism. Second, they resort to relative positional encodings throughout the base of the transformer, and provide absolute positional encodings only at the very end. The resulting model is both more accurate on downstream tasks and needs fewer pretraining steps to reach good accuracy. Models are also available in Huggingface and on Github. OUTLINE: 0:00 - Intro & Overview 2:15 - Position Encodings in Transformer's Attention Mechanism 9:55 - Disentangling Content & Position Information in Attention 21:35 - Disentangled Query & Key construction in the Attention Formula 25:50 - Efficient Relative Position Encodings 28:40 - Enhanced Mask Decoder using Absolute Position Encodings 35:30 - My Criticism of EMD 38:05 - Experimental Results 40:30 - Scaling up to 1.5 Billion Parameters 44:20 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.03654 Code: https://github.com/microsoft/DeBERTa Huggingface models: https://huggingface.co/models?search=deberta Abstract: Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve models' generalization. We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural language generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transformer layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline by a decent margin (90.3 versus 89.8). 
Authors: Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at DeBERTa, decoding enhanced BERT with disentangled attention, by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen of Microsoft. This paper is an improvement on BERT, the language model, and the RoBERTa variant of it. Specifically, it suggests two improvements. The first is this disentangled attention, where they disentangle positional information and content information of the individual tokens in the attention mechanism. And the second improvement kind of results from the first improvement, as this decoding enhanced decoder, I guess, enhanced mask decoder, where, because they only use relative positional information in the transformer part of the model, they have to re-feed the absolute positional information at the end, which gives them another bit of improvement. Altogether with this, they reach state of the art in various NLP tasks. And this model, DeBERTa, is now available in Hugging Face for you to download for all of your NLP needs. So we're going to go through the paper and look at the two improvements and what they give, and then see if that's relevant. As always, if you like content like this, don't hesitate to share it out to all of your friends and leave a like and a comment. I still read all the comments, so give me your opinion. And please also give me your opinions on the new recording setup. There should be a title somewhere here, a picture somewhere here. I absolutely want to hear feedback, because I have no idea what I'm doing. So yeah. All right. Let's dive into DeBERTa. Or DeBERT-a, or De-BERT-a, I don't know; I think it's DeBERTa, because it's from decoding enhanced. DeBERTa is a new model architecture, they say here: we propose a new model architecture, DeBERTa, decoding enhanced BERT with disentangled attention, that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among the words are computed using disentangled matrices on their contents and relative positions, respectively. Okay, we'll look at that first. So what they mean is, when you have a multi-head attention layer, what we want to do is transform one sequence of token representations into the next sequence of token representations. Now usually, every token, let's say these are our tokens, and this could be a sentence in a language, like I am hungry, and here is this CLS classification token that we always add when we train BERT. Every one of these tokens is represented by a vector. Like, this is a vector, this is a vector, it has many entries, this is a vector. Some of the vectors are thicker than others; I mean, that's just... this one just hasn't eaten enough. So every one of these tokens is represented by a vector. And what a multi-head attention layer does is simply transform this, via means of the attention mechanism, into a series of vectors again. So we put in a series of vectors, and we end up with another series of vectors. If you want to know what multi-head attention does in detail, please go look at my video Attention Is All You Need, where that's explained. Specifically, it is sort of an information routing algorithm that sees how information needs to be routed from tokens to tokens using queries, keys, values, and so on. If you haven't seen the video, it's a beautiful mechanism, but I'm not going to explain it again right here.
I'm sorry. Alright, so in this, what you usually do is you transform vectors into vectors. And because of how the multi-head attention mechanism works, the mechanism has no way to discern where in a sentence a given token is, so it cannot differentiate between this sentence here, I am hungry, and the sentence Am I hungry? With just multi-head attention it's just not possible, because it treats the incoming sentence as like a bag of words, which is not the case in, for example, a recurrent neural network. A recurrent neural network would go one by one over these word representations, and it has kind of a mechanism to see what a sequence is; however, multi-head attention doesn't. So what people usually do is they augment these representations with position encodings. So that's at the beginning, you know, where you might ask, where do these vectors come from in the very first layer? Of course, they come from the last layer, but the very first vectors you put in come from a table, and these are your classic word vectors. So at some point, you have a big table, and the big table has your entire vocabulary in it. So every word in the language that you consider, so there's I and there's am and there is you and there is Apple and there is hungry, and there is even the CLS token, all of them have a table entry, and all of them have a vector associated with them. Now, these vectors are trainable, so the neural network can decide itself what goes into these vectors, but every word has a fixed vector in there. And in the very first layer, because you don't have a last layer to draw from, you simply look at what token it is, you go to the table right here, you retrieve this vector, and you put it here, and that's your start. And then you transform up the layers, of course, every time from the last layer, but at the beginning you have embeddings. Now, the same thing you do for positions, okay? So you also have a second table, usually. In the original transformer paper, by the way, these were fixed vectors, but nowadays, I think most of them are also trained. So you label the positions. So that's position one, that's position two, three, and four. So for every position, two, three, four, and maybe you have also five and six (there is a maximum length, but right now we consider sentences of length three with the CLS token appended, so these are length four), every position also has a vector. And I'm going to actually draw these vectors in this color. So every position has a vector, irrespective of what word there is. Right now, we just have vectors for words irrespective of where they are, and we have vectors for positions irrespective of what words there are. And what you do is the same: you look at what position it is, you go to the table, you retrieve that embedding, and you somehow also put it here. Now I've made a bit of a mess here with this thing, sorry. So now you have two vectors all of a sudden per word. So you have one that is the position, and you have one that represents the word itself. And the neural network needs both in order to understand the sentence, right? If every word has these two vectors at the beginning, now it can understand: aha, this is the word I that is at the beginning of the sentence, so it's probably the subject of the sentence. However, if the word am was at the beginning, it could be: oh, it's probably a question, because it starts with a verb, am I hungry? Okay.
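To make these two lookup tables concrete, here is a minimal sketch (mine, not from the paper) of classic BERT-style input embeddings in PyTorch; the token ids below are made-up examples.

```python
import torch
import torch.nn as nn

vocab_size, max_len, d_model = 30522, 512, 768

# One trainable vector per word, irrespective of position.
word_table = nn.Embedding(vocab_size, d_model)
# One trainable vector per position, irrespective of word.
pos_table = nn.Embedding(max_len, d_model)

token_ids = torch.tensor([[101, 146, 1821, 7555, 102]])   # illustrative ids, e.g. "[CLS] I am hungry [SEP]"
positions = torch.arange(token_ids.size(1)).unsqueeze(0)  # 0, 1, 2, 3, 4

# Classic BERT-style input: the two lookups are added element-wise,
# so the network receives a single mixed vector per token.
x0 = word_table(token_ids) + pos_table(positions)          # (1, 5, 768)
```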
And it can also evaluate the relative distances of things to each other, and so on. So given this information, the neural network has all the tools it sort of needs to understand the sentence as a sequence. Now, you have basically two ways of combining the two things. First of all, you can concatenate them, which means, I'm going to do it in this... you just put... no, that's terrible, I'm not too skilled yet with this new thing. You put this on top here; imagine these are the same length, and you just concatenate the vectors. So now the vector is longer. Of course, that also increases your dimensionality, computational issues, and so on. So what a lot of people do is they simply, you know, line them up if they're the same size, and they add them together element-wise. And, you know, in the worst case, the neural network can now decide, because both of these are trained, right? The neural network can absolutely decide that, you know, in the top part here it simply learns a bunch of zeros in some of the dimensions, and in the bottom part here it learns a bunch of zeros in the other dimensions. So essentially, it's a concatenation. That's the worst case. In the best case, the neural network can actually do some kind of information combining already in this addition step down here. Okay, so you give both encodings to the neural network as a single vector, right? So what goes into the multi-head attention mechanism is a single vector. This paper says that is not ideal, because the positions are too much mixed with the signal of the content of the words, and we'd rather have this in a disentangled representation, such that the network can sort of reason about the words in one line, and it can reason about the position of the words in another line. So their goal is to disentangle these two vectors and basically design a new attention mechanism that always treats the content and the position as separate things. So the new attention mechanism they propose is right here. Of course, they can't stay separate, right? But they can be disentangled through the layers. So their new algorithm sort of is here; the way they obtain the attention matrix is the following. So how do you usually obtain the attention matrix? You have your input x here, this is your sequence, and you produce two values from it, Q and K. So these are matrices. So if x is a sequence, then every single sequence element emits one key, which is a vector, right, one key; and then every single one also emits one query, like this, like this. And the key is sort of supposed to say what information is in this token, and the query is kind of supposed to say what information it requests from other tokens. So now you route the information wherever the inner products line up; for example, probably this thing would be routed here, and so on. It's not a hard routing, it's a soft routing. So by transforming x by linear transformations into keys and queries, you obtain your attention matrix by multiplying together queries and keys, such that you have sort of the inner product between each of these vectors. And this is quadratic, and this is the big bottleneck in transformers. But you have the inner product between each of the two, you get a giant matrix, and the giant matrix basically says: how much does token two attend to token three? That's the position two-three of that matrix.
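For reference, the standard quadratic attention matrix just described can be sketched in a few lines; this is a bare-bones illustration, not any particular library's implementation.

```python
import torch

def standard_attention(x, Wq, Wk, Wv):
    # x: (n, d). Every token emits a query and a key via the same linear
    # transformations, with no interaction between tokens at this point.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.size(-1)
    # (n, n) matrix: entry (i, j) is the inner product of query i and key j,
    # i.e. how much token i attends to token j. This is the quadratic part.
    scores = q @ k.T / d ** 0.5
    return torch.softmax(scores, dim=-1) @ v

n, d = 5, 64
x = torch.randn(n, d)
out = standard_attention(x, torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
```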
And that element is going to be the inner product of the query of token two with the key of token three. So that's how you do the attention matrix. And these vectors right here, if you do regular BERT, are always everything at the same time. So you feed content and position somewhere down the layers, you feed that in, you add it together, and the network is supposed to figure out itself how to use these two pieces of information. This paper says: no, wait, we can do better. What we can do is, for us, each sequence element does not only produce one key and one query; actually, we think it should be made up of two vectors. So each of these things has two different components: one is this kind of H component, which is the content information, and one is the P component, which is the positional information. So here, how should token i attend to token j? They say, well, that is going to be the inner product between the query of token i and the key of token j. Okay. However, now the queries and keys are made up of two different parts. One is the content part, one is the position part, and the position, as you can see from this delta of i and j here, is going to be a relative position. So if you have your sequence right here, what each token would do is it would emit one vector... oh, sorry, it would emit one vector that is the content of the token, like before, and then another vector would come in from the position. So the same as we did at the beginning, but now in each layer, this positional information comes in irrespective of what word there is, right? Irrespective of what word is in the position, the position gets an encoding right here. And then the interesting thing is, we don't add the two together; we treat them actually separately. So here, the keys are two vectors, and the queries are also two vectors. So I'm just going to draw one up here. So the query is going to be a vector, and the query for the position is also going to be a vector, and that also depends only on the position and not on the incoming signal. Okay. So now, how do we route information? Now we have four different routings. First, we only consider dark blue to dark blue. So this is kind of the classic attention, right? This and this, they match really well, so that goes here; that one probably doesn't go there, and so on. So this is what they call content-to-content routing. But then we also have content-to-position, position-to-content, and position-to-position routing. And in all of these, so for example, in content-to-position (there's a 50-50 chance I'm going to mix this up, and I'm sure I'm going to), what we're going to do is we're going to look at this vector right here, which is the content vector of the query that is produced from the token, right? The content is produced from the token. And we're going to attend to the position vector of the key. So we're going to attend to the light blue things. So essentially, this part is like the classic attention part. It is: I am the word am, I'm requesting all information from all the nouns in the sentence, because I'm a verb, and I would like to know who are the nouns in the sentence.
Then the content-to-position encoding is: I am the verb am, I would like to know what is around me. The positions are relative positions, so I can request the vector for, you know, the plus-one position from me, or the plus-two. So the word can attend to its surroundings. So given that it's the word am, it might be particularly interested; maybe it has already figured out it's not a question, right, from the previous layers, so it's particularly interested in what's before it. Because, you know, for am, that actually probably isn't particularly interesting, because it's always going to be I; so actually, maybe it's exactly a counterexample, where it wouldn't want information from there. But it can sort of attend, it can say: I want to attend to things after myself, because I already have figured out that before me must be an I. I want to attend to things after me, like one position after me: what's right after me, what's two words after me, and so on. Position-to-content is exactly the opposite. It is saying, so the token can say: well, I am in position plus four relative to you; what kind of information do I want to send to things that are four away from me, right, irrespective of what the content is? So here, we simply consider what position the token is in with respect to its neighbors, and what kind of information it wants to aggregate from each of the words. It is a bit weird, right? It says, like: for a word that is two words after me, what kind of information do I want to get from it? And since it's attending to content, that can be dependent on what word there is, but not on its position. And then position-to-position is simply: well, what kind of information do I, in position three, want to send to something in position seven? Which would be useful, but this is relative position encoding, which simply means I am always kind of in the middle, and so this isn't really helpful, so they decide to leave this away. So we end up with the three different attention mechanisms, so to say. There's this one, there's this one, and there's this one, okay, corresponding to the three out of four different ways we can combine the dark blue and the light blue keys and queries. Now you can see right here, that's what they do. And their final attention matrix is simply the addition of all of those together. So we construct one attention like the classic attention, we construct one attention that is content-to-position, we construct one attention that is position-to-content, and we would construct one that is position-to-position, but then we leave that away, because we deal with relative positions, so it would sort of be the same for every token, and that's not particularly helpful. I'm going to repeat it again: the H information contains actual signal from the last layer, while the P has no idea about the signal; it simply contains information about the position of the tokens. So you can decide to send information to a word that's two positions ahead of you, or to request information from a word that's three positions behind you, depending on what word you yourself are. Okay, so that's the content-to-position and position-to-content attention. These things are all added together, and that makes up the final attention matrix. So a final entry in the attention matrix could be influenced by multiple ones of them.
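Putting the three remaining routings together, a sketch of the disentangled attention score for one head could look like the following. This is my paraphrase of the paper's equation, not the official implementation; the scaling by the square root of 3d follows the paper's choice for summing three attention terms.

```python
import torch

def disentangled_attention_scores(h, p, Wq, Wk, Wqr, Wkr, delta):
    """Sketch of DeBERTa-style disentangled attention scores (one head).

    h:     (n, d)  content vectors from the previous layer
    p:     (2k, d) shared relative-position embedding table (same every layer)
    delta: (n, n)  long tensor with the bucketed relative position delta(i, j)
    """
    qc, kc = h @ Wq, h @ Wk      # content queries and keys
    qr, kr = p @ Wqr, p @ Wkr    # position queries and keys (content-free)

    c2c = qc @ kc.T                          # content-to-content, (n, n)
    c2p = (qc @ kr.T).gather(1, delta)       # c2p[i, j] = qc[i] . kr[delta(i, j)]
    p2c = (kc @ qr.T).gather(1, delta).T     # p2c[i, j] = kc[j] . qr[delta(j, i)]

    d = h.size(-1)
    # Position-to-position is dropped, so three terms remain; the paper
    # scales by sqrt(3d) because three attention matrices are summed.
    return (c2c + c2p + p2c) / (3 * d) ** 0.5

n, d, k = 6, 16, 4
h = torch.randn(n, d)
p = torch.randn(2 * k, d)
idx = torch.arange(n)
delta = (idx[:, None] - idx[None, :]).clamp(-k, k - 1) + k   # bucketed relative positions
scores = disentangled_attention_scores(
    h, p, *(torch.randn(d, d) for _ in range(4)), delta)
```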
It could say, you know: I am the word am, I'm in position two, I request a lot of information from other nouns; if any noun is here, I want information. But I also want information from things that are one or two positions ahead of me. So that is, and you know, since I'm the word am, and also since I'm in position number two, I am very interested to know what the subject of the sentence is. Now we have all of it. Okay. All right. And the rest is just like classic attention. Okay. So these P and H matrices are obtained by, sorry, the queries and the keys for this are obtained by linear transformations. So you see, this is the incoming signal; you send it through a linear transformation to obtain the queries, and you also send it through a linear transformation to obtain the keys. So the H is the same, but these matrices here are learned weights to produce queries and keys. And then you multiply them together; that defines your attention matrix. You run that through a softmax to make a distribution out of each row, and then you multiply it together with the values. So this part here is kind of like the routing table, and the values are the information to be routed. The values are obtained from the input signal. As we said, we're going to amend that, so this over here is the classic queries, keys and values. And then we augment that by two new things: there are the queries and the keys for the position. And you can see that the difference here is that, again, they're learned weights, but now there is this P thing right here. And the P is positional encodings, and that comes exactly out of this table we saw up here. So the positional encodings come from this. And it's important to see that this here is H and these are the P values, but this is only H0, right? H is actually transformed to H1 by the transformer, the first layer, to H2 by the second layer, and so on. The P always stays the same. So you would feed the P into this layer, and you would feed it again into this layer, and you would feed it again into this layer. So you can see it's only positional information; it's not content information. And by feeding the position each time and doing this in this disentangled way, the model can sort of keep the content and position information separate. I actually think it doesn't really keep the information separate, because after layer one, you certainly have position information in your H, right? You can see that from this path here, from actually feeding position information into the transformer layer: H1 is already going to be a conglomerate of H0, which is pure content, plus the position somehow. This plus is not a real addition, but somehow the information is intermingled there. And if we weren't to feed in these things right here, it would just be like the classic BERT, right, what they criticize. Now, by continuously feeding the positional information, that is one advantage. You can actually do that with BERT; you can just add the position information each time. I'm not sure if that would work super well, but you can do that. It just gives the model a bit more side information to work with. And then by keeping it separate, yeah, as I said, I'm not sure it's actually separate.
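The point that H evolves layer by layer while P is re-injected unchanged can be shown schematically. The DisentangledLayer below is a trivial stand-in for the real layer, purely to illustrate the data flow; its internals are made up.

```python
import torch
import torch.nn as nn

class DisentangledLayer(nn.Module):
    """Stand-in for one transformer layer that takes the fixed relative
    position table p alongside the evolving content stream h."""
    def __init__(self, d):
        super().__init__()
        self.mix = nn.Linear(2 * d, d)

    def forward(self, h, p):
        # Placeholder mixing; a real layer would run the disentangled
        # attention from the sketch above instead of this averaging.
        pos = p.mean(dim=0, keepdim=True).expand_as(h)
        return self.mix(torch.cat([h, pos], dim=-1))

d, n, k, num_layers = 16, 8, 4, 3
h = torch.randn(n, d)          # H_0: pure content embeddings
p = torch.randn(2 * k, d)      # shared relative-position table

layers = [DisentangledLayer(d) for _ in range(num_layers)]
for layer in layers:
    h = layer(h, p)            # p is re-injected, unchanged, at every layer
```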
It's just that you keep feeding in position information layer after layer, therefore giving the model sort of more information every time it makes a transformation, because otherwise it would have to carry the position information through all the layers, just from the very first layer. So in this mechanism, you can see it's true that the position encoding is kept separate, because it comes in fresh every layer, but the content certainly has position information in it from the last layer. I hope you can see that. So, as I said, they do relative position encoding. What does that mean? So that means that the position encoding depends on where you look from. So what I've drawn at the beginning, like this here, this isn't entirely correct. You have to look at each token individually. So for this middle token here, for example, the positions look like this: they look like negative two, negative one, zero, one, two. And you would have kind of a table, not with absolute positions, but you'd actually have a table with negative two, negative one, zero, plus one, plus two, and so on, and you would retrieve those vectors. And then, when you consider the next token, this one right here, it would look different: here this would be zero, this minus one, minus two, and so on. So they do two things. First of all, they truncate at some point. They simply say, well, our context window is two, so instead of going negative three here, we simply keep it at negative two. So everything beyond negative two gets also the vector for negative two. So that vector here is going to be just plugged in here and in here for this token, right? And for the previous token, it is only going to be plugged in here, and nowhere else. There are ways to efficiently implement this, and that's this algorithm right here. I don't want to go too much into it, but just so you're aware, you don't have to consider each token really individually during attention; that would be prohibitively expensive. So you can do one big matrix multiply and then sort of pick and choose together from the matrix that results, especially with this truncation. This is this algorithm; they call it efficient implementation. Alright, so that is this position-enhanced or disentangled information. Why is it disentangled again? Because in every layer, they have a side input. This piece right here is the side input that they sort of feed on top of this information. And they specifically construct the attention matrix out of the three things, right? It's almost like two contributions. The one contribution is: hey, let's feed in position information in each layer, and I think that has been tried before, that's pretty simple. The second thing is that we don't simply add the two vectors when we input them into the attention, but we're going to construct basically three attention matrices and then add those together once we determine the inner products between each of those. So this is one of the improvements, and that already helps a lot. But then they run into a problem. And this is not necessarily a problem with their method, but this is a problem in general when you use relative position encodings. They say: given a sentence, a new store opened beside a new mall, right, that's a sentence, the words store and mall are masked. So let's say you do this masked language model pre-training, right?
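The truncation just described corresponds to the paper's bucketing function delta(i, j), which can be written down directly; a small sketch:

```python
import torch

def delta(n, k):
    """Bucketed relative positions: delta(i, j) in [0, 2k), per the paper.

    i - j is clipped to the window [-k, k) and shifted to a valid table
    index, so everything further than k away shares one position vector.
    """
    i = torch.arange(n).unsqueeze(1)   # query positions
    j = torch.arange(n).unsqueeze(0)   # key positions
    rel = (i - j).clamp(-k, k - 1)     # truncate the context window at k
    return rel + k                      # shift into [0, 2k)
```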
You mask out the words store and mall, and you ask the model to reconstruct them using only the local context; e.g., relative positions and surrounding words are insufficient for the model to distinguish store and mall in this sentence, since both follow the word new with the same relative positions. So from the word new, you know, relatively, it's always plus one, oopsie, it's plus one to this word. So the model cannot distinguish the two. So there is a need for absolute position encodings, because if you had absolute position encodings, you could maybe make sense of it. You know, since, I mean, you could figure out, like, store is probably kind of a smaller thing and mall is kind of a bigger thing, so it's more likely that the store opened beside the new mall than the mall opened beside the new store. So that means we need absolute position encodings, or something like this, right? And especially, we could have relative position encodings, but if this is a very long sentence and we truncate them somewhere, again, these two things are not in range of one another, and they're not going to know how far apart they are; each one by itself is just plus one away from new. So how do we solve the problem? We feed in absolute position encodings. However, that's exactly what they criticize. They say: no, relative position encodings are much better than absolute ones for learning. And that's kind of the same reasoning why a convolution is better than a fully connected layer: because you kind of slide the transformation over, and it's simply data relative to each other. So relative positioning makes a lot of sense when every word can do computation not based on where exactly it is in the sentence, but on how it is in relation to other words. Otherwise, if you have absolute position encodings, what you would have to do is you would have to say: well, if I'm the word am and I'm in position two, I need to learn to attend to position three; however, if I'm the word am and I'm in position three, I need to learn to attend to position four; and if I'm in position four, I need to learn to attend to position five. These are all different things you need to learn. However, if you have relative encodings, what you can do is you can simply say: I want to attend to the word that's right after me. Easy. But we do need absolute position encodings for some things, namely to disambiguate between cases like this. So they feed in absolute position information, but instead of doing it at the beginning, they do it at the end. So at the beginning, we have the word vectors, right? They go in here. And then we have position information, 1, 2, 3, 4, 5. We have that at every single layer of the transformer; we feed it in again and again and again. We feed in the same P vectors, okay? They have different, sorry, different transformations in each layer. So the actual transformations that make the keys and the queries of the position information are different in each layer, but the vectors are the same every time. And then at the very top, so these are the relative position encodings; so this is, sorry, yeah, I mixed up, this is the negative two, negative one, zero, one, two for the middle token. And then at the end, we're going to feed in absolute position encodings. So here we have, you know, let's start at one, let's be good MATLAB people: here we have 1, 2, 3, 4, 5 that we're going to now combine with the vectors that come out of here.
So the reasoning is, they say: there are two methods of incorporating absolute positions. The BERT model incorporates absolute positions in the input layer. In DeBERTa, we incorporate them right after all the transformer layers, but before the softmax layer for masked token prediction, as shown in figure two. I've looked at figure two, and it's not really helpful, honestly. So that is this figure in the appendix, where they say: okay, in BERT, you have the absolute position encodings somewhere down here, they go through all the transformer layers, and then you have this classification layer at the top that does the language model decoding. However, in their model, you have all the transformer layers down here, and then you have the absolute position encodings that come in through the side here. And the last transformer layer now has access to these absolute positions, or the last n layers do; I think n in their case is one or two. So in the last layer, or the last layers, the transformer has access to the absolute positions, and before that it's just relative positions at each step. And they reason that this helps because the transformer part learns to deal with relative positions. In this way, they say, DeBERTa captures the relative positions in all the transformer layers and only uses the absolute positions as complementary information when decoding the masked words; thus they call DeBERTa's decoding component an enhanced mask decoder. And they compare the two, and they observe that this EMD works much better. So feeding absolute positions at the end works better than feeding them at the beginning. We conjecture that the early incorporation of absolute positions used by BERT might undesirably hamper the model from learning sufficient information of relative positions. In addition, EMD also enables us to introduce other useful information in addition to positions, yada, yada, yada, we leave it for future work. So they say you could also feed in other information; I guess that's the case in every single neural network ever. But the point is, they feed in the absolute positions at the end, and that's their conjecture. Now, I'm not a fan of this. This is like saying: okay, if we only feed it in at the end right here, the absolute position, then we sort of limit the model. Right now, the model has the same information as it would have if we fed it in at the beginning, but we limit it to only one layer of transformation, so all it can do with it is kind of a little linear transformation in there. Whereas if we fed it in at the beginning, the model could use it, or not, in any way it wants. And that's just not a good enough reason for me. So I think, you know, regularization has its place, bottleneck layers have their place, restricting the capacity has its place, and so on. But I'm not a fan of hampering the model in this way, kind of restricting it.
And, you know, just because it makes your number better, there's not really a reason why the same information should be worse if you give the model more steps to compute with. If you feed it in at the beginning, then technically, if you train the model correctly, it should learn to use that information at least as well as if you feed it in at the end. At the very least, that tells me we haven't really figured out how to train these models correctly yet with regard to positional encodings. And again, I'm not a fan of simply saying, well, we only feed it in at the end, because then the question immediately is: how many layers at the end? How many at the beginning? When is it too powerful? It just doesn't make a lot of sense to me to give the model information but not let it do its best with that information, unless you have a specific reason, and this one is just not good enough for me. Not that this is a criticism of the results; obviously it's better, as they observe, and all these arguments can be invalidated by "but it's better", right? That's deep learning. So all respect to them for trying it out and actually finding that it's better. Pretty cool. They also do scale-invariant fine-tuning: when they fine-tune, which is where you take this model trained with masked language modeling and then fine-tune it on NLP tasks, they have a bunch of tricks there, like virtual adversarial training and normalizing the embeddings before they do that, and apparently that helps a lot (a loose sketch of what that might look like follows at the end of this passage). But they also say they leave the comprehensive study of this for future work; for now, they just want to get the good number, which is understandable, because that's how you get published. Alright, so here you can see... actually, we can skip most of the tables: they are better, they are better, they are better. They are better in language modeling too, which is interesting: so you can do kind of BERT-style denoising, but apparently you can also do autoregressive language modeling with it, which is pretty cool. Then they do an ablation study of the different components, where one time they remove this enhanced mask decoder, one time they remove the content-to-position attention mechanism, and one time they remove the position-to-content attention mechanism. And in the table it is sort of a wash; it depends on the task how you look at it, but each of the components gets you some kind of a benefit, or a hit when you take it away. So it's not really clear that one of the components gives you all the boost; the combination of them is obviously the best. And it's really cool when papers do these kinds of ablations, rather than just throwing a bunch of stuff at you and leaving it to you to figure out which of that stuff is important. They also compare to RoBERTa in terms of accuracy after pre-training, so how much pre-training you need before fine-tuning, and DeBERTa, as you can see in these graphs, outperforms RoBERTa. So potentially you need fewer pre-training steps to reach the same accuracy on the fine-tuning task, which is cool; it also means that if you train for the same amount of time, you reach a higher accuracy. And now for their big thing: they scale it up, and they have a bunch of tricks for that. Pretty cool.
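Before getting to the scaling tricks, here is that loose sketch of the scale-invariant fine-tuning idea as I understand it: normalize the word embeddings first, then apply the perturbation to the normalized vectors, so that the perturbation size no longer depends on the scale of the embeddings. This is my own guess at the mechanics, not the paper's procedure; in particular, the random noise below merely stands in for the actual gradient-based (virtual) adversarial perturbation, and the function name is made up.

```python
import torch

def perturb_normalized(emb, eps=1e-2):
    # LayerNorm-style normalization over the feature dimension, so the
    # perturbation below is relative to unit-scale vectors.
    norm = (emb - emb.mean(-1, keepdim=True)) / (emb.std(-1, keepdim=True) + 1e-6)
    noise = torch.randn_like(norm)
    noise = eps * noise / noise.norm(dim=-1, keepdim=True)
    return norm + noise  # fed into the encoder during the adversarial step

x = torch.randn(2, 5, 16)  # (batch, seq, dim) word embeddings
print(perturb_normalized(x).shape)  # torch.Size([2, 5, 16])
```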
I just want to highlight one trick. They say: we optimize the model architecture as follows. First, we share the projection matrices of the relative position embeddings with the content projection matrices. Okay, so they share the position matrices with the content matrices. So, for example, here is the query projection for the content and the key projection for the content, and here is the query projection for the position and the key projection for the position. My battery is almost over, so let's speed up. The content right here and the position right here give rise to queries and keys by means of these learned weights: the matrix that generates the queries from the content, the matrix that generates the keys from the content, the matrix that generates the queries from the position, and the matrix that generates the keys from the position. So now you want to share this one with that one, and also this one with that one. And at the end, the resulting attention terms are added, right? You multiply these things together, and then the terms are added. And in my mind, honestly, here is what that results in. Let's just see: before, if we simply multiply query times key transposed for the content side, that gives something like c_i W_q^c (W_k^c)^T c_j^T. And let's also consider the position-to-position term that they actually leave away, because it makes the algebra easiest: p_i W_q^p (W_k^p)^T p_j^T. Now, if the matrices are shared, so we don't care about c and p anymore and W_q^c = W_q^p = W_q and W_k^c = W_k^p = W_k, then the sum of all four terms simply ends up being the addition of content and position passed through the two shared matrices: (c_i + p_i) W_q (W_k)^T (c_j + p_j)^T. And that is just like the old-school attention mechanism. Now, the paper does leave the position-to-position term away, so the correspondence isn't exact, and maybe that difference influences something. But it gets closer and closer back to the old mechanism, where you simply add the encodings and don't consider them in a disentangled way. If you share the matrices of the disentangled representations, it essentially reduces to feeding the position information into each layer of a traditional transformer. So I'm not sure how important the disentanglement itself really is, or whether it's just more important that this positional information is actually available at each step. But, you know, I might be wrong here; I haven't actually checked all the terms in detail.
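Here is a quick numeric check of that argument, my own sanity check rather than anything from the paper: with shared projections, and with the position-to-position term kept in, the four disentangled terms sum exactly to classic attention on the added embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
c_i, c_j, p_i, p_j = rng.normal(size=(4, d))  # content / position vectors
Wq, Wk = rng.normal(size=(2, d, d))           # shared projection matrices

def score(a, b):
    # a W_q (b W_k)^T: the inner product of a query and a key.
    return (a @ Wq) @ (b @ Wk)

disentangled = (score(c_i, c_j) + score(c_i, p_j)     # c2c + c2p
                + score(p_i, c_j) + score(p_i, p_j))  # p2c + p2p
classic = score(c_i + p_i, c_j + p_j)  # attention on the added embeddings
print(np.allclose(disentangled, classic))  # True

# Dropping the p2p term, as the paper does, leaves a difference of exactly
# score(p_i, p_j), so the collapse to classic attention is only approximate.
```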
Yeah, so that's the paper. They also have a depiction of attention matrices down here, where they show that their model does something kind of different from other models in terms of where it attends: it has fewer of these global attention patterns than RoBERTa has right here, except for the very first one, which is the CLS vector, which makes sense; otherwise it has a rather diagonal attention matrix. That's pretty sensible, though you can also make the case that sometimes there are just really important words in a sentence that everything should attend to. I don't know. But it is state of the art, and it is a cool algorithm, and it is worth considering if you build your next model. All right, with that, I thank you for listening. Subscribe if you haven't. I'll see you next time. Bye bye.
[ { "start": 0, "end": 7.5200000000000005, "text": " Hi there, today we'll look at DeBURTA, decoding enhanced BERT with disentangled attention," }, { "start": 7.5200000000000005, "end": 14.36, "text": " by Peng Cheng He, Xia Dong Liu, Zhang Feng Gao, and Wai Ju Chen of Microsoft." }, { "start": 14.36, "end": 19.88, "text": " This paper is an improvement on BERT, the language model and the Roburta variant of" }, { "start": 19.88, "end": 20.88, "text": " it." }, { "start": 20.88, "end": 27.82, "text": " Specifically, it suggests two improvements, namely, first is this disentangled attention" }, { "start": 27.82, "end": 33.76, "text": " where they disentangle positional information and content information of the individual" }, { "start": 33.76, "end": 36.480000000000004, "text": " tokens in the attention mechanism." }, { "start": 36.480000000000004, "end": 41.8, "text": " And the second improvement kind of results from the first improvement as this decoding" }, { "start": 41.8, "end": 48.82, "text": " enhanced decoder, I guess, enhanced decoder, where because they only use relative positional" }, { "start": 48.82, "end": 56.2, "text": " information in the transformer part of the model, they have to re-feed the absolute positional" }, { "start": 56.2, "end": 60.84, "text": " information at the end, which gives them another bit of improvement." }, { "start": 60.84, "end": 65.52000000000001, "text": " Altogether with this, they reach state of the art in various NLP tasks." }, { "start": 65.52000000000001, "end": 71.84, "text": " And this model DeBURTA is now available in hugging face for you to download for all of" }, { "start": 71.84, "end": 74.48, "text": " your NLP needs." }, { "start": 74.48, "end": 79.56, "text": " So we're going to go through the paper and look at the two improvements and what they" }, { "start": 79.56, "end": 81.24000000000001, "text": " give." }, { "start": 81.24000000000001, "end": 84.4, "text": " Let's then see if that's relevant." }, { "start": 84.4, "end": 88.76, "text": " As always, if you like content like this, don't hesitate to share it out to all of your" }, { "start": 88.76, "end": 92.12, "text": " friends and leave a like and a comment." }, { "start": 92.12, "end": 93.52000000000001, "text": " I still read all the comments." }, { "start": 93.52000000000001, "end": 96.18, "text": " So give me your opinion." }, { "start": 96.18, "end": 100.14, "text": " And please also give me your opinions on the new recording setup." }, { "start": 100.14, "end": 104.4, "text": " There should be a title somewhere here, a picture somewhere here." }, { "start": 104.4, "end": 109.42, "text": " I absolutely want to hear feedback because I have no idea what I'm doing." }, { "start": 109.42, "end": 110.42, "text": " So yeah." }, { "start": 110.42, "end": 112, "text": " All right." }, { "start": 112, "end": 115.84, "text": " Let's dive in DeBURTA or DeBURTA or DeBURTA." }, { "start": 115.84, "end": 116.84, "text": " I don't know." }, { "start": 116.84, "end": 119.88, "text": " I think it's DeBURTA because it's from decoding enhanced." }, { "start": 119.88, "end": 124.48, "text": " DeBURTA is a new model architecture, they say here." }, { "start": 124.48, "end": 130.92000000000002, "text": " We propose a new model architecture DeBURTA, decoding enhanced BERT with disentangled attention" }, { "start": 130.92000000000002, "end": 136.16, "text": " that improves the BERT and ROBERTA models using two novel techniques." 
}, { "start": 136.16, "end": 141.28, "text": " The first is the disentangled attention mechanism where each word is represented using two vectors" }, { "start": 141.28, "end": 145.16, "text": " that encode its content and position respectively." }, { "start": 145.16, "end": 150.36, "text": " And the attention weights among the words are computed using disentangled matrices on" }, { "start": 150.36, "end": 152.8, "text": " their contents and relative positions respectively." }, { "start": 152.8, "end": 156.92000000000002, "text": " Okay, we'll look at that first." }, { "start": 156.92000000000002, "end": 166.76, "text": " So what they mean is when you have a multi-head attention layer, what we want to do is we" }, { "start": 166.76, "end": 172.95999999999998, "text": " want to transform one sequence of tokens of token representations into the next sequence" }, { "start": 172.95999999999998, "end": 174.6, "text": " of token representations." }, { "start": 174.6, "end": 179.48, "text": " Now usually every token, let's say these are our tokens and this could be a sentence in" }, { "start": 179.48, "end": 183.6, "text": " a language like I am hungry." }, { "start": 183.6, "end": 192.92, "text": " And here is like this see this classification token that we always add when we train BERT." }, { "start": 192.92, "end": 198.11999999999998, "text": " Every one of these tokens is represented by a vector." }, { "start": 198.11999999999998, "end": 203.1, "text": " Like this is a vector, this is a vector, it has many entries, this is a vector." }, { "start": 203.1, "end": 205.79999999999998, "text": " Some of the vectors are thicker than others." }, { "start": 205.79999999999998, "end": 211.27999999999997, "text": " I mean that's just a this one just hasn't eaten enough." }, { "start": 211.27999999999997, "end": 215.11999999999998, "text": " So every one of these tokens is represented by a vector." }, { "start": 215.11999999999998, "end": 220.92, "text": " And what a multi-head attention layer does is it simply transforms this via means of" }, { "start": 220.92, "end": 228.76, "text": " the attention mechanism into a series of vectors again, so we put in a series of vectors, and" }, { "start": 228.76, "end": 232.35999999999999, "text": " we end up with another series of vectors." }, { "start": 232.35999999999999, "end": 238.26, "text": " If you want to know what a multi-head attention does in detail, please go look at my video" }, { "start": 238.26, "end": 242.64, "text": " attention is all you need where that's explained." }, { "start": 242.64, "end": 249.76, "text": " Specifically it is a attention it is sort of an information routing algorithm that sees" }, { "start": 249.76, "end": 257.36, "text": " that sees how information needs to be routed from tokens to tokens using queries, keys," }, { "start": 257.36, "end": 258.4, "text": " values and so on." }, { "start": 258.4, "end": 264.2, "text": " If you haven't seen the video, it's a beautiful mechanism, but I'm not going to explain it" }, { "start": 264.2, "end": 265.4, "text": " again right here." }, { "start": 265.4, "end": 267.12, "text": " I'm sorry." }, { "start": 267.12, "end": 275.8, "text": " Alright, so in this what usually do is you transform vectors into vectors." 
}, { "start": 275.8, "end": 282.72, "text": " And because of how the multi-head attention mechanism works, the mechanism has no way" }, { "start": 282.72, "end": 288.88, "text": " to discern where in a sentence, for example, a given token is so it cannot differentiate" }, { "start": 288.88, "end": 293.32, "text": " between this sentence here and the sentence Am I hungry?" }, { "start": 293.32, "end": 298.24, "text": " If it's just multi-head attention is just not possible for it because it treats the" }, { "start": 298.24, "end": 303.12, "text": " incoming sentence as like a bag of words, which is not the case in for example, a recurrent" }, { "start": 303.12, "end": 304.12, "text": " neural network." }, { "start": 304.12, "end": 310.28000000000003, "text": " A recurrent neural network would go one by one over these word representations." }, { "start": 310.28000000000003, "end": 317.16, "text": " And it has kind of a mechanism to see what a sequence is, however multi-headed attention" }, { "start": 317.16, "end": 318.16, "text": " doesn't." }, { "start": 318.16, "end": 324.7, "text": " So what people usually do is they augment these representations with position encodings." }, { "start": 324.7, "end": 329.36, "text": " So that's at the beginning, you know, where you might ask, where do these vectors come" }, { "start": 329.36, "end": 331.22, "text": " from the very first?" }, { "start": 331.22, "end": 334.84000000000003, "text": " Of course, they come from the last layer, but the very first vectors you put in come" }, { "start": 334.84000000000003, "end": 336.12, "text": " from a table." }, { "start": 336.12, "end": 338.82000000000005, "text": " And these are your classic word vectors." }, { "start": 338.82000000000005, "end": 343.18, "text": " So at some at some point, you have a big table." }, { "start": 343.18, "end": 346.36, "text": " And the big table has your entire vocabulary in it." }, { "start": 346.36, "end": 351.32000000000005, "text": " So every word in the language that you consider so there's I and there's am and there is you" }, { "start": 351.32000000000005, "end": 355.24, "text": " and there is Apple and there is hungry." }, { "start": 355.24, "end": 360.36, "text": " And there is even the CLS token, all of them have a table entry, and all of them have a" }, { "start": 360.36, "end": 362.72, "text": " vector associated with them." }, { "start": 362.72, "end": 364, "text": " Now these vectors are trainable." }, { "start": 364, "end": 368.64, "text": " So the neural network can decide itself what goes into these vectors." }, { "start": 368.64, "end": 372.86, "text": " But every word has a fixed vector in there." }, { "start": 372.86, "end": 377.04, "text": " And in the very first layer, because you don't have a last layer to draw from, you simply" }, { "start": 377.04, "end": 384.12, "text": " look at what token it is, you go to the table right here, you retrieve this vector, and" }, { "start": 384.12, "end": 385.12, "text": " you put it here." }, { "start": 385.12, "end": 386.12, "text": " And that's your start." }, { "start": 386.12, "end": 390, "text": " And then you transform up the layers, of course, every time from the last layer." }, { "start": 390, "end": 391.96, "text": " But at the beginning, you have embeddings." 
}, { "start": 391.96, "end": 399.08, "text": " Now the same thing you do for positions, okay, so you also have a second table usually, and" }, { "start": 399.08, "end": 404.12, "text": " the original transformer paper, by the way, these were fixed vectors." }, { "start": 404.12, "end": 407.54, "text": " But nowadays, I think most of them are also trained." }, { "start": 407.54, "end": 409, "text": " So you label the positions." }, { "start": 409, "end": 414.08, "text": " So that's position, that's position one, that's position two, three, and four." }, { "start": 414.08, "end": 418.72, "text": " So for every position, two, three, four, and maybe you have also five and six, there is" }, { "start": 418.72, "end": 420.28000000000003, "text": " a maximum length." }, { "start": 420.28000000000003, "end": 426.72, "text": " But right now we consider sentences of length three with the CLS token appended." }, { "start": 426.72, "end": 428.58000000000004, "text": " So these are length four." }, { "start": 428.58000000000004, "end": 431.72, "text": " So every position also has a vector." }, { "start": 431.72, "end": 436.36, "text": " And I'm going to actually draw these vectors in this color." }, { "start": 436.36, "end": 442.68, "text": " So every position has a vector, irrespective of what word there is, okay, right now, we" }, { "start": 442.68, "end": 445.72, "text": " just have vectors for words irrespective of where they are." }, { "start": 445.72, "end": 449.52000000000004, "text": " And we have vectors of positions irrespective of what words there are." }, { "start": 449.52000000000004, "end": 456.84000000000003, "text": " And what you do is same, you look at what position is here, you go to the table, you" }, { "start": 456.84000000000003, "end": 462.52000000000004, "text": " retrieve that embedding, and you somehow also put it here." }, { "start": 462.52000000000004, "end": 469.12, "text": " Now I've made a bit of a mess here with this thing, sorry." }, { "start": 469.12, "end": 474.94000000000005, "text": " So how do you now you have two vectors all of a sudden per word." }, { "start": 474.94, "end": 481.04, "text": " So you have one, that is a position, and you have one that is the kind of the word itself" }, { "start": 481.04, "end": 483.56, "text": " that represents the word itself." }, { "start": 483.56, "end": 488.42, "text": " And the neural network needs both in order to understand the sentence, right?" }, { "start": 488.42, "end": 494.64, "text": " If every word has these two vectors at the beginning, now it can understand, aha, this" }, { "start": 494.64, "end": 497.3, "text": " is the word I that is at the beginning of the sentence." }, { "start": 497.3, "end": 500.12, "text": " So it's probably the subject of a sentence." }, { "start": 500.12, "end": 506.36, "text": " However, if the word am was at the beginning, it could be, oh, it's probably a question" }, { "start": 506.36, "end": 509.84000000000003, "text": " because it starts with a verb, am I hungry?" }, { "start": 509.84000000000003, "end": 510.84000000000003, "text": " Okay." }, { "start": 510.84000000000003, "end": 515.32, "text": " And it can also evaluate the relative distances of things to each other and so on." }, { "start": 515.32, "end": 519.96, "text": " So given this information, the neural network has all the tools it sort of needs to understand" }, { "start": 519.96, "end": 522.72, "text": " that sentence as a sequence." 
}, { "start": 522.72, "end": 529.48, "text": " Now, what you have, you have basically two ways of combining the two things." }, { "start": 529.48, "end": 533.28, "text": " First of all, you can concatenate them, which means that I'm going to do it in this you" }, { "start": 533.28, "end": 537.32, "text": " just put no, that's terrible." }, { "start": 537.32, "end": 542.44, "text": " You just put the I'm not too skilled yet with this new thing." }, { "start": 542.44, "end": 546.32, "text": " You put this on top here, imagine this is the same length and you just concatenate the" }, { "start": 546.32, "end": 547.32, "text": " vector." }, { "start": 547.32, "end": 548.76, "text": " So now the vector is longer." }, { "start": 548.76, "end": 553.08, "text": " Of course, that also increases your dimensionality, computational issues and so on." }, { "start": 553.08, "end": 557.2, "text": " So what a lot of people do is they simply, you know, line them up if they're the same" }, { "start": 557.2, "end": 560.72, "text": " size and they add them together element wise." }, { "start": 560.72, "end": 564.76, "text": " And you know, in the worst case, the neural network now can decide because both of these" }, { "start": 564.76, "end": 566.08, "text": " are trained, right?" }, { "start": 566.08, "end": 570.9200000000001, "text": " So the neural network can absolutely decide that, you know, in the top part here, it simply" }, { "start": 570.9200000000001, "end": 572.6, "text": " learns a bunch of zeros." }, { "start": 572.6, "end": 575.32, "text": " And then the bottom part here, it simply learns a bunch of zeros here." }, { "start": 575.32, "end": 577.9200000000001, "text": " So essentially, it's a concatenation." }, { "start": 577.9200000000001, "end": 579.1600000000001, "text": " That's the worst case." }, { "start": 579.1600000000001, "end": 584.1400000000001, "text": " In the best case, the neural network can actually do some kind of information combining already" }, { "start": 584.14, "end": 587.4399999999999, "text": " in this addition step down here." }, { "start": 587.4399999999999, "end": 594.4, "text": " Okay, so the you you give both encodings to the neural network as a single vector, right?" }, { "start": 594.4, "end": 597.68, "text": " So what goes into the multi added attention mechanism is a single vector." }, { "start": 597.68, "end": 606.4399999999999, "text": " This paper says that is not ideal, because the positions are too much mixed with the" }, { "start": 606.4399999999999, "end": 609.78, "text": " with the signal of the content of the words." }, { "start": 609.78, "end": 614.1999999999999, "text": " And we'd rather have this in a disentangled representation, such that the network can" }, { "start": 614.1999999999999, "end": 621.52, "text": " sort of reason about the words in one line, and it can reason about the position of the" }, { "start": 621.52, "end": 624.0799999999999, "text": " words in another line." }, { "start": 624.0799999999999, "end": 629.92, "text": " So their goal is to disentangle these two vectors and basically design a new attention" }, { "start": 629.92, "end": 637.1, "text": " mechanism that always treats the content and the position as separate things." }, { "start": 637.1, "end": 640.28, "text": " So the new attention mechanism they propose is right here." }, { "start": 640.28, "end": 643.5600000000001, "text": " Of course, they're not they can't stay separate, right?" 
}, { "start": 643.5600000000001, "end": 648.84, "text": " But they they can be disentangled through the layers." }, { "start": 648.84, "end": 655.76, "text": " So their new algorithm sort of is here, the way they obtain the attention matrix is due" }, { "start": 655.76, "end": 658.08, "text": " to the following thing." }, { "start": 658.08, "end": 664.28, "text": " So how do you usually obtain the attention matrix, you have your input x here, this is" }, { "start": 664.28, "end": 671, "text": " your sequence, and you produce two values from it q and k." }, { "start": 671, "end": 673.12, "text": " So these are matrices." }, { "start": 673.12, "end": 680.9599999999999, "text": " So if x is a sequence, then every single sequence element emits one key, which is a vector," }, { "start": 680.9599999999999, "end": 686.86, "text": " right, one key, and then every single one also emits one query." }, { "start": 686.86, "end": 693.56, "text": " So like this, like this, and the key sort of the key is supposed to say, what is in" }, { "start": 693.56, "end": 699.16, "text": " what information is this token about, and the query is kind of supposed to say, what" }, { "start": 699.16, "end": 702.4, "text": " information does it request from other tokens." }, { "start": 702.4, "end": 707.4599999999999, "text": " So now you route the information wherever the inner products line up, for example, probably" }, { "start": 707.4599999999999, "end": 711.16, "text": " this thing would go to be routed here and so on." }, { "start": 711.16, "end": 713.5, "text": " It's not a hard routing, it's a soft routing." }, { "start": 713.5, "end": 722.2199999999999, "text": " So by transforming x by linear transformations into keys and queries, you obtain your attention" }, { "start": 722.22, "end": 729, "text": " matrix by multiplying together queries and keys, such that you have sort of the inner" }, { "start": 729, "end": 732.8000000000001, "text": " product between each of these vectors." }, { "start": 732.8000000000001, "end": 736.22, "text": " And this is quadratic, and this is the big bottleneck in transformers." }, { "start": 736.22, "end": 739.88, "text": " But you have the inner product between each of the two, you get a giant matrix, and the" }, { "start": 739.88, "end": 746.36, "text": " giant matrix basically says how much does token two attend to token three, that's the" }, { "start": 746.36, "end": 749.0400000000001, "text": " position two, three of that matrix." }, { "start": 749.04, "end": 755.56, "text": " And that's that seek that element is going to be the inner product of the query of token" }, { "start": 755.56, "end": 759.16, "text": " two with the key of token three." }, { "start": 759.16, "end": 761.98, "text": " So that's how you do the attention matrix." }, { "start": 761.98, "end": 767.12, "text": " And these vectors right here, they if you do regular bird, they always have, they're" }, { "start": 767.12, "end": 768.64, "text": " always everything at the same time." }, { "start": 768.64, "end": 774.78, "text": " So you feed, you feed content and position somewhere down the layers, you feed that in," }, { "start": 774.78, "end": 778.92, "text": " you add it together, and the network is supposed to figure out itself how to use these two" }, { "start": 778.92, "end": 781.16, "text": " pieces of information." }, { "start": 781.16, "end": 784.16, "text": " This paper says no, wait, we can do better." 
}, { "start": 784.16, "end": 792.36, "text": " What we can do is for us, each sequence element, it does not only produce one key and one query," }, { "start": 792.36, "end": 799.24, "text": " it actually, we think it should be contained, it should be made up of two vectors." }, { "start": 799.24, "end": 805.7199999999999, "text": " So each of these things has two different, two different components." }, { "start": 805.72, "end": 817.6, "text": " One is this kind of H component, which is the which is the content, content information," }, { "start": 817.6, "end": 822.12, "text": " and one is the P component, which is the positional information." }, { "start": 822.12, "end": 831.58, "text": " So here, how should how should token I attend to token j, they say, well, that is going" }, { "start": 831.58, "end": 833.14, "text": " to be it's going to be the same thing." }, { "start": 833.14, "end": 842.36, "text": " It's going to be the inner product between the between the this is the query of token" }, { "start": 842.36, "end": 846.76, "text": " I, and this is the key of token j." }, { "start": 846.76, "end": 847.76, "text": " Okay." }, { "start": 847.76, "end": 854.12, "text": " However, now the queries and keys are made up of two of two different parts." }, { "start": 854.12, "end": 858.8, "text": " One is the content part, one is the position part, and the position, as you can see, maybe" }, { "start": 858.8, "end": 864.4799999999999, "text": " as j condition, the neither position is going to be a relative positioning." }, { "start": 864.4799999999999, "end": 871.24, "text": " So if you have your sequence right here, what each token would do is it would emit one vector," }, { "start": 871.24, "end": 881.92, "text": " oh, sorry, it would emit one vector that is the content of the token, like before, and" }, { "start": 881.92, "end": 886.4399999999999, "text": " then another vector would come in from the position." }, { "start": 886.44, "end": 892.72, "text": " So the same we did at the beginning, but now in each layer, this positional information" }, { "start": 892.72, "end": 898.5200000000001, "text": " comes in irrespective of what word there is, right, irrespective of what word is in the" }, { "start": 898.5200000000001, "end": 902.7600000000001, "text": " position, the position gets an encoding right here." }, { "start": 902.7600000000001, "end": 907.2, "text": " And then the interesting thing is we don't add the two together, we treat them actually" }, { "start": 907.2, "end": 908.2, "text": " separately." }, { "start": 908.2, "end": 912.96, "text": " So here, the keys are two vectors, and the queries are also two vectors." }, { "start": 912.96, "end": 915.3800000000001, "text": " So I'm just going to draw one up here." }, { "start": 915.38, "end": 918.6, "text": " So the query is going to be a vector." }, { "start": 918.6, "end": 921.8, "text": " And the query for the position is also going to be a vector." }, { "start": 921.8, "end": 926.56, "text": " And that also it depends only on the position and not on the incoming signal." }, { "start": 926.56, "end": 928.56, "text": " Okay." }, { "start": 928.56, "end": 931.88, "text": " So now, how do we route information?" }, { "start": 931.88, "end": 936.04, "text": " Now we have four different routings." }, { "start": 936.04, "end": 938.86, "text": " First we only consider dark blue, dark blue." }, { "start": 938.86, "end": 941.7, "text": " So this is kind of the classic attention, right?" 
}, { "start": 941.7, "end": 947.1, "text": " This and this, they match really well, so that goes here." }, { "start": 947.1, "end": 950.2, "text": " That one probably doesn't go there, and so on." }, { "start": 950.2, "end": 956.08, "text": " But then we also, so this is what they call content to content routing." }, { "start": 956.08, "end": 962.0400000000001, "text": " But then we also have content to position, position to content, and position to position" }, { "start": 962.0400000000001, "end": 963.36, "text": " routing." }, { "start": 963.36, "end": 969.12, "text": " And in all of these, so for example, in content to position, I'm sure I'm going to, there's" }, { "start": 969.12, "end": 974.08, "text": " a 50-50 chance I'm going to mix this up, and I'm sure I'm going to, but in content to position," }, { "start": 974.08, "end": 978.8, "text": " what we're going to do is we're going to look at this vector right here, which is the content" }, { "start": 978.8, "end": 983.28, "text": " vector of the query that is produced from the token, right?" }, { "start": 983.28, "end": 986.0600000000001, "text": " The content is produced from the token." }, { "start": 986.0600000000001, "end": 991.4, "text": " And we're going to attend to the position vector of the key." }, { "start": 991.4, "end": 995.4, "text": " So we're going to attend to the light blue things." }, { "start": 995.4, "end": 1000.88, "text": " So essentially, the, this part is like the classic attention part." }, { "start": 1000.88, "end": 1008.28, "text": " It is, I am the word am, I'm requesting all information from all the nouns in the sentence," }, { "start": 1008.28, "end": 1013.1999999999999, "text": " because I'm a verb, and I would like to know who are the nouns in the sentence." }, { "start": 1013.1999999999999, "end": 1022.72, "text": " Then the content to position encodings is, I am the verb am, I would like to know what" }, { "start": 1022.72, "end": 1023.8, "text": " is around me." }, { "start": 1023.8, "end": 1026.2, "text": " The positions are relative positions." }, { "start": 1026.2, "end": 1032.34, "text": " So I can request the vector for, you know, the plus one position of me or the plus two." }, { "start": 1032.34, "end": 1036.32, "text": " So the word can attend to its surroundings." }, { "start": 1036.32, "end": 1041.32, "text": " So given that it's the word am, it might be particularly interested, maybe it has already" }, { "start": 1041.32, "end": 1046.46, "text": " figured out it's not a question, right?" }, { "start": 1046.46, "end": 1047.56, "text": " From the previous layers." }, { "start": 1047.56, "end": 1050.3, "text": " So it's particularly interested in what's before it." }, { "start": 1050.3, "end": 1055.76, "text": " So because, you know, am actually, it probably isn't particularly interesting, because it's" }, { "start": 1055.76, "end": 1057.1599999999999, "text": " always going to be I." }, { "start": 1057.1599999999999, "end": 1063.12, "text": " So actually, maybe it's exactly a counterexample, where it wouldn't want information from there." 
}, { "start": 1063.12, "end": 1069.28, "text": " But it can sort of attend, it can say, I want to attend to things after myself, because" }, { "start": 1069.28, "end": 1075.12, "text": " I already have figured out that before me must be an I, I want to attend to things after" }, { "start": 1075.12, "end": 1079.3999999999999, "text": " me, like one position after me, what's right after me, what's two words after me, and so" }, { "start": 1079.4, "end": 1080.5, "text": " on." }, { "start": 1080.5, "end": 1083.48, "text": " Position to content is exactly the opposite." }, { "start": 1083.48, "end": 1092.5600000000002, "text": " It is, it is saying so the token can say, well, I am in I am in a I am in position plus" }, { "start": 1092.5600000000002, "end": 1098.98, "text": " four to you know, what kind of information do I want to send to things that are four" }, { "start": 1098.98, "end": 1101.1200000000001, "text": " away from me, right?" }, { "start": 1101.1200000000001, "end": 1103.16, "text": " Irrespective of what the content is." }, { "start": 1103.16, "end": 1110.92, "text": " So here, we simply consider what position is the token with respect to its neighbors," }, { "start": 1110.92, "end": 1116.52, "text": " and what kind of information doesn't want to aggregate from each of the words." }, { "start": 1116.52, "end": 1118.88, "text": " It is a bit, it's a bit weird, right?" }, { "start": 1118.88, "end": 1125.0400000000002, "text": " So it says, it says, like, I, I am in in position." }, { "start": 1125.0400000000002, "end": 1132.24, "text": " A word that is two words after me, what kind of information do I want to get from it?" }, { "start": 1132.24, "end": 1139.16, "text": " And since it's attending to content that can be dependent on that can be dependent on what" }, { "start": 1139.16, "end": 1145.24, "text": " word there is, but not its position and then position to position is simply, well, what" }, { "start": 1145.24, "end": 1149.42, "text": " kind of information do I in position, you know, three, you want to send to something" }, { "start": 1149.42, "end": 1152.82, "text": " in position seven, which would be useful." }, { "start": 1152.82, "end": 1158.48, "text": " But this is relative position encoding, which simply means I am always kind of in the middle." }, { "start": 1158.48, "end": 1163.58, "text": " And so this isn't really helpful, so they decide to leave this away." }, { "start": 1163.58, "end": 1171.08, "text": " So we end up with the three different attention mechanisms, so to say, we end up so there's" }, { "start": 1171.08, "end": 1177.76, "text": " this one, there's this one, and there's this one, okay, corresponding to the three out" }, { "start": 1177.76, "end": 1185.52, "text": " of four different ways we can combine the dark blue and the light blue keys and queries." }, { "start": 1185.52, "end": 1188.76, "text": " Now you can see right here, that's what they do." }, { "start": 1188.76, "end": 1193.82, "text": " And their final attention matrix is simply the addition of all of those together." 
}, { "start": 1193.82, "end": 1199.94, "text": " So we construct one attention from like the classic attention, we construct one attention" }, { "start": 1199.94, "end": 1205.12, "text": " that is content to position, we construct one attention that is position to content," }, { "start": 1205.12, "end": 1210.44, "text": " and we construct one that is position to position, but then we leave it away because it's we" }, { "start": 1210.44, "end": 1215.42, "text": " deal with relative position, so it would sort of be the same for every token." }, { "start": 1215.42, "end": 1219.3600000000001, "text": " And that's not particularly helpful." }, { "start": 1219.3600000000001, "end": 1225.4, "text": " I'm going to repeat it again, the H information contains actual signal from the last layer," }, { "start": 1225.4, "end": 1231.3600000000001, "text": " while the P has no idea about the signal, it simply contains information about the position" }, { "start": 1231.3600000000001, "end": 1233.16, "text": " of the tokens." }, { "start": 1233.16, "end": 1239.04, "text": " So you can decide to send information to a word that's two positions ahead of you, or" }, { "start": 1239.04, "end": 1245.84, "text": " to request information from where that's three positions behind you, depending on what word" }, { "start": 1245.84, "end": 1247.32, "text": " you yourself are." }, { "start": 1247.32, "end": 1252.72, "text": " Okay, so that's the content to position and position to content attention." }, { "start": 1252.72, "end": 1254.84, "text": " These things are all added together." }, { "start": 1254.84, "end": 1257.58, "text": " And that makes up the final attention matrix." }, { "start": 1257.58, "end": 1263.1599999999999, "text": " So a final entry in the attention matrix could be influenced by multiple ones of them." }, { "start": 1263.16, "end": 1269.48, "text": " It could say, you know, I am the word, I'm the word am I'm in position to, I request" }, { "start": 1269.48, "end": 1274.72, "text": " a lot of information from other nouns, if any noun is here, I want information, but" }, { "start": 1274.72, "end": 1280.76, "text": " I also want information from things that are one or two positions ahead of me." }, { "start": 1280.76, "end": 1287.2, "text": " So that, that is, and you know, since I'm the word am, and also since I'm in position" }, { "start": 1287.2, "end": 1294.6000000000001, "text": " number two, I am very interested to know what the subject of the sentence is." }, { "start": 1294.6000000000001, "end": 1297.44, "text": " Now we have all of it." }, { "start": 1297.44, "end": 1298.44, "text": " Okay." }, { "start": 1298.44, "end": 1299.44, "text": " All right." }, { "start": 1299.44, "end": 1304.24, "text": " And the rest is, is just like classic attention." }, { "start": 1304.24, "end": 1305.24, "text": " Okay." }, { "start": 1305.24, "end": 1314.6000000000001, "text": " Now you, you simply, so these P and H matrices are obtained by, sorry, the queries and the" }, { "start": 1314.6, "end": 1318.24, "text": " keys for this are obtained by linear transformation." }, { "start": 1318.24, "end": 1320.1599999999999, "text": " So you see, this is the incoming signal." }, { "start": 1320.1599999999999, "end": 1323.8999999999999, "text": " You send it through a linear transformation to obtain the queries." }, { "start": 1323.8999999999999, "end": 1328.26, "text": " And you also send it through a linear transformation to obtain the keys." 
}, { "start": 1328.26, "end": 1333.28, "text": " So the H is the same, but the, these matrices here, these are learned weights to produce" }, { "start": 1333.28, "end": 1335.54, "text": " key queries and keys." }, { "start": 1335.54, "end": 1338.1599999999999, "text": " And then you multiply them together." }, { "start": 1338.1599999999999, "end": 1340.54, "text": " That defines your attention matrix." }, { "start": 1340.54, "end": 1344.6, "text": " You run that through a soft max to make a distribution out of each row, and then you" }, { "start": 1344.6, "end": 1346.96, "text": " multiply it together with the values." }, { "start": 1346.96, "end": 1351.68, "text": " So this part here is kind of like the routing table and the values are the information to" }, { "start": 1351.68, "end": 1352.68, "text": " be routed." }, { "start": 1352.68, "end": 1358.26, "text": " The values are obtained from these input signal." }, { "start": 1358.26, "end": 1365.36, "text": " As we said, we're going to amend that by, so this over here is the classic key queries," }, { "start": 1365.36, "end": 1366.36, "text": " keys and values." }, { "start": 1366.36, "end": 1369, "text": " Sorry, that's too much." }, { "start": 1369, "end": 1372.84, "text": " The classic queries, keys and values." }, { "start": 1372.84, "end": 1379.66, "text": " And then we augment that by two new, so there is the queries and the keys for the position." }, { "start": 1379.66, "end": 1385.4, "text": " And you can see that the difference here is that again, it's learned weights, but now" }, { "start": 1385.4, "end": 1387.88, "text": " there is this P thing right here." }, { "start": 1387.88, "end": 1390.58, "text": " And the P is positional encodings." }, { "start": 1390.58, "end": 1396.04, "text": " And that comes exactly out of this table we saw up here." }, { "start": 1396.04, "end": 1400.44, "text": " So the positional encodings come from this." }, { "start": 1400.44, "end": 1405.68, "text": " And it's important to see that this here is H and this is the P values, but this is only" }, { "start": 1405.68, "end": 1407.58, "text": " H0, right?" }, { "start": 1407.58, "end": 1414.68, "text": " H is actually transformed to H1 by the transformer, the first layer, to H2 by the second layer," }, { "start": 1414.68, "end": 1415.68, "text": " and so on." }, { "start": 1415.68, "end": 1418.32, "text": " The P always stays the same." }, { "start": 1418.32, "end": 1423.92, "text": " So you would feed the P into this layer, and you would feed it again into this layer, and" }, { "start": 1423.92, "end": 1425.92, "text": " you would feed it again into this layer." }, { "start": 1425.92, "end": 1429.2, "text": " So you can see it's only positional information." }, { "start": 1429.2, "end": 1431.6200000000001, "text": " It's not content information." }, { "start": 1431.6200000000001, "end": 1441.5600000000002, "text": " And by feeding the position each time and doing this in this disentangled way, the model" }, { "start": 1441.5600000000002, "end": 1445.68, "text": " can sort of keep the content and position information separate." }, { "start": 1445.68, "end": 1451.72, "text": " I actually think it doesn't really keep the information separate because after layer one," }, { "start": 1451.72, "end": 1454.94, "text": " you certainly have position information in your H, right?" 
}, { "start": 1454.94, "end": 1461.38, "text": " You can see that from this path here, from the actually feeding position information" }, { "start": 1461.38, "end": 1468.6000000000001, "text": " into the transformer layer, H1 is already going to be a conglomerate of H0, which is" }, { "start": 1468.6000000000001, "end": 1472.4, "text": " pure content plus the position somehow." }, { "start": 1472.4, "end": 1477.18, "text": " This plus is not a real addition, but somehow the information is intermingled there." }, { "start": 1477.18, "end": 1483.72, "text": " And if we weren't to feed in these things right here, it would just be like the classic" }, { "start": 1483.72, "end": 1485.52, "text": " BERT, right, what they criticize." }, { "start": 1485.52, "end": 1491.98, "text": " Now by continuously feeding the positional information, that is one advantage." }, { "start": 1491.98, "end": 1493.32, "text": " You can actually do that with BERT." }, { "start": 1493.32, "end": 1495.6000000000001, "text": " You can just add the position information each time." }, { "start": 1495.6000000000001, "end": 1500.24, "text": " I'm not sure if that would work super well, but you can do that." }, { "start": 1500.24, "end": 1504.84, "text": " Just gives the model a bit more side information to work with." }, { "start": 1504.84, "end": 1511.78, "text": " And then by keeping it separate, yeah, as I said, I'm not sure it's actually separate." }, { "start": 1511.78, "end": 1517.16, "text": " It's just that you keep feeding in position information layer after layer, therefore giving" }, { "start": 1517.16, "end": 1522.12, "text": " the model sort of more information every time it makes a transformation, because otherwise" }, { "start": 1522.12, "end": 1528.44, "text": " it would have to carry through the position information through all the layers, just from" }, { "start": 1528.44, "end": 1532.06, "text": " the very first layer." }, { "start": 1532.06, "end": 1537.78, "text": " So in this mechanism, you can see it's true that the position encoding is kept separate" }, { "start": 1537.78, "end": 1544.36, "text": " because it comes in fresh every layer, but I don't see that the content certainly has" }, { "start": 1544.36, "end": 1547.28, "text": " position information in it from the last layer." }, { "start": 1547.28, "end": 1549.92, "text": " I hope you can see that." }, { "start": 1549.92, "end": 1554.6, "text": " So as I said, they do relative position encoding." }, { "start": 1554.6, "end": 1555.72, "text": " What does that mean?" }, { "start": 1555.72, "end": 1564.04, "text": " So that means that the position encoding depends on where you look from." }, { "start": 1564.04, "end": 1569.3999999999999, "text": " So what I've drawn at the beginning, like this here, this isn't entirely correct." }, { "start": 1569.3999999999999, "end": 1571.52, "text": " You have to look at each token individually." }, { "start": 1571.52, "end": 1576.32, "text": " So for this middle token here, for example, the positions look like this." }, { "start": 1576.32, "end": 1580.92, "text": " They look like negative two, negative one, zero, one, two, and you would have kind of" }, { "start": 1580.92, "end": 1585.96, "text": " a table not with absolute positions, but you'd actually have a table with negative two, negative" }, { "start": 1585.96, "end": 1590.86, "text": " one, zero, one plus one plus two, and so on." }, { "start": 1590.86, "end": 1592.32, "text": " And you would retrieve those vectors." 
}, { "start": 1592.32, "end": 1597.6799999999998, "text": " And then you when you consider the next vector, this one right here, it would look different." }, { "start": 1597.6799999999998, "end": 1601.9199999999998, "text": " It would write this would be zero, this minus one minus two, and so on." }, { "start": 1601.9199999999998, "end": 1603.84, "text": " So they do two things." }, { "start": 1603.84, "end": 1607.72, "text": " First of all, they truncate at some point, they simply say, well, our context window" }, { "start": 1607.72, "end": 1608.76, "text": " is two." }, { "start": 1608.76, "end": 1613.56, "text": " So instead of going negative three here, we simply keep it at negative two." }, { "start": 1613.56, "end": 1617.3999999999999, "text": " So everything beyond negative two gets also the vector for negative two." }, { "start": 1617.4, "end": 1624.64, "text": " So that vector here is going to be just plugged into here and into here for this token, right." }, { "start": 1624.64, "end": 1629.8000000000002, "text": " And for this token, for the previous token, it is only going to be plugged here and if" }, { "start": 1629.8000000000002, "end": 1633.0400000000002, "text": " and nowhere else." }, { "start": 1633.0400000000002, "end": 1635.92, "text": " There are ways to efficiently implement this." }, { "start": 1635.92, "end": 1637.68, "text": " And that's this algorithm right here." }, { "start": 1637.68, "end": 1639.7800000000002, "text": " Don't want to go too much into it." }, { "start": 1639.7800000000002, "end": 1645.96, "text": " But just so you're aware, you don't have to consider each token really individually during" }, { "start": 1645.96, "end": 1647.0400000000002, "text": " it attention." }, { "start": 1647.04, "end": 1649.24, "text": " That would be prohibitively expensive." }, { "start": 1649.24, "end": 1654.52, "text": " So you can do one big matrix multiply and then sort of pick and choose together from" }, { "start": 1654.52, "end": 1659.7, "text": " your from the matrix that results, especially with this truncation." }, { "start": 1659.7, "end": 1662.32, "text": " This is this algorithm." }, { "start": 1662.32, "end": 1664.28, "text": " So they call it efficient implementation." }, { "start": 1664.28, "end": 1672.6, "text": " Alright, so that is this position, position enhanced or disentangled information." }, { "start": 1672.6, "end": 1674.7, "text": " Why is it disentangled again?" }, { "start": 1674.7, "end": 1678.92, "text": " Because in every layer, they have a side input." }, { "start": 1678.92, "end": 1687.04, "text": " This piece right here is the side input that they sort of feed on top of this information." }, { "start": 1687.04, "end": 1692.32, "text": " And they specifically construct the attention matrix out of the three things, right?" }, { "start": 1692.32, "end": 1693.92, "text": " It's almost like two contributions." }, { "start": 1693.92, "end": 1698.0800000000002, "text": " The one contribution is, hey, let's feed in position information in each layer." }, { "start": 1698.0800000000002, "end": 1700.4, "text": " And I think that has been tried before." }, { "start": 1700.4, "end": 1701.4, "text": " That's pretty simple." 
}, { "start": 1701.4, "end": 1707.3600000000001, "text": " The second thing is that we don't we don't simply add the two vectors when we input it" }, { "start": 1707.3600000000001, "end": 1713.88, "text": " into the attention, but we're going to construct basically three attention matrices and then" }, { "start": 1713.88, "end": 1719.92, "text": " add those together once we determine the inner products between each of those." }, { "start": 1719.92, "end": 1723.5600000000002, "text": " So this is one of the improvements." }, { "start": 1723.5600000000002, "end": 1725.5600000000002, "text": " And that already helps a lot." }, { "start": 1725.5600000000002, "end": 1727.8400000000001, "text": " But then they run into a problem." }, { "start": 1727.8400000000001, "end": 1731.0800000000002, "text": " And this is not necessarily a problem with their method." }, { "start": 1731.08, "end": 1735.24, "text": " But this is a problem in general when you use relative positioning codings." }, { "start": 1735.24, "end": 1741.24, "text": " So they say, given a sentence, a new store opened beside a new mall, right?" }, { "start": 1741.24, "end": 1742.36, "text": " That's a sentence." }, { "start": 1742.36, "end": 1746.06, "text": " The words store and mall are mass." }, { "start": 1746.06, "end": 1749.24, "text": " So let's say you do this mask language model pre training, right?" }, { "start": 1749.24, "end": 1755.52, "text": " You mask out the words store and mall and you ask the model to reconstruct them using" }, { "start": 1755.52, "end": 1760.6, "text": " only the local context, e.g. relative position and surrounding words is insufficient for" }, { "start": 1760.6, "end": 1765.84, "text": " the model to distinguish store and mall in this sentence, since both follow the word" }, { "start": 1765.84, "end": 1769.52, "text": " new with the same relative positions." }, { "start": 1769.52, "end": 1775.8, "text": " So from the word new, you know, relatively, it's always plus one, oopsie." }, { "start": 1775.8, "end": 1778.76, "text": " It's plus one to this word." }, { "start": 1778.76, "end": 1781.3799999999999, "text": " So the model cannot distinguish the two." }, { "start": 1781.3799999999999, "end": 1787.7199999999998, "text": " So there is a need for absolute position and codings, because if you had absolute position" }, { "start": 1787.72, "end": 1792.84, "text": " and codings, you could maybe make sense, though." }, { "start": 1792.84, "end": 1796.96, "text": " You know, since I mean, you could figure out like store is probably kind of a smaller thing" }, { "start": 1796.96, "end": 1799.76, "text": " and mall is kind of a bigger thing." }, { "start": 1799.76, "end": 1805.76, "text": " So it's more likely that the store opened beside the new mall than the mall opened beside" }, { "start": 1805.76, "end": 1807.96, "text": " the new store." }, { "start": 1807.96, "end": 1814.84, "text": " So that means we need absolute position and codings or something like this, right?" }, { "start": 1814.84, "end": 1819.48, "text": " And especially, we could have relative position and codings, but if this is a very long sentence" }, { "start": 1819.48, "end": 1824.72, "text": " and we truncate them somewhere, again, these two things are not in range of one another." }, { "start": 1824.72, "end": 1829.48, "text": " And they're not going to know how far you know, they are apart and each each one by" }, { "start": 1829.48, "end": 1832.04, "text": " itself is just plus one apart." 
}, { "start": 1832.04, "end": 1835.4199999999998, "text": " So how do we solve the problem?" }, { "start": 1835.4199999999998, "end": 1838.04, "text": " We feed in absolute position and codings." }, { "start": 1838.04, "end": 1840.52, "text": " However, that's exactly what they criticize." }, { "start": 1840.52, "end": 1845.96, "text": " They say no, relative position and codings are much better than absolute for learning." }, { "start": 1845.96, "end": 1849.8, "text": " And that's kind of the same reasoning why a convolution is better than a fully connected" }, { "start": 1849.8, "end": 1856.68, "text": " layer because you kind of slide the transformation over and it's simply data relative to each" }, { "start": 1856.68, "end": 1857.68, "text": " other." }, { "start": 1857.68, "end": 1862.96, "text": " So relative positioning makes a lot of sense if when every word can do computation, not" }, { "start": 1862.96, "end": 1868.68, "text": " based on where exactly it is in the sentence, but how it is in relation to other words." }, { "start": 1868.68, "end": 1872.64, "text": " Otherwise, if you have absolute positioning codings, what you would have to do is you" }, { "start": 1872.64, "end": 1878.3200000000002, "text": " would have to say, well, if I'm the word M, and I'm in position two, I need to learn to" }, { "start": 1878.3200000000002, "end": 1879.88, "text": " attend to position three." }, { "start": 1879.88, "end": 1884.1200000000001, "text": " However, if I'm the word M and I'm in position three, I need to learn to attend to position" }, { "start": 1884.1200000000001, "end": 1885.1200000000001, "text": " four." }, { "start": 1885.1200000000001, "end": 1888.0800000000002, "text": " And if I'm in position four, I need to learn to attend in position five." }, { "start": 1888.0800000000002, "end": 1890.24, "text": " These are all different things you need to learn." }, { "start": 1890.24, "end": 1896.04, "text": " However, if you have relative encoding, what you can do is you can simply say I want to" }, { "start": 1896.04, "end": 1899.8, "text": " attend to the word that's right after me easy." }, { "start": 1899.8, "end": 1904.8799999999999, "text": " But we do need absolute positioning coding for some things, namely disambiguate between" }, { "start": 1904.8799999999999, "end": 1906.36, "text": " tasks like this." }, { "start": 1906.36, "end": 1909.56, "text": " So they feed in absolute position information." }, { "start": 1909.56, "end": 1914.72, "text": " But instead of doing it at the beginning, they do it at the end." }, { "start": 1914.72, "end": 1918.48, "text": " So at the beginning, we have the word vectors, right?" }, { "start": 1918.48, "end": 1920.8799999999999, "text": " They go in here." }, { "start": 1920.8799999999999, "end": 1923.1599999999999, "text": " And then we have position information." }, { "start": 1923.1599999999999, "end": 1925.8, "text": " 12345." }, { "start": 1925.8, "end": 1929.6, "text": " We have that at every single layer of the transformer." }, { "start": 1929.6, "end": 1932.48, "text": " We feed it in again and again and again." }, { "start": 1932.48, "end": 1935.24, "text": " We feed in the same P vectors, okay?" }, { "start": 1935.24, "end": 1937.52, "text": " They have different different of these." }, { "start": 1937.52, "end": 1940.68, "text": " Sorry, if these transformations in each layer." 
}, { "start": 1940.68, "end": 1945.36, "text": " So the actual transformation that makes the keys and the values, sorry, the keys and the" }, { "start": 1945.36, "end": 1951.3999999999999, "text": " queries of the position information are different, but the vectors are the same every time." }, { "start": 1951.3999999999999, "end": 1953.76, "text": " And then at the very top." }, { "start": 1953.76, "end": 1956.8, "text": " So these are P relative." }, { "start": 1956.8, "end": 1961.8, "text": " So this is sorry, yeah, I mixed up this is the this is this negative two negative one," }, { "start": 1961.8, "end": 1964.92, "text": " zero, one, two for the middle token." }, { "start": 1964.92, "end": 1971.08, "text": " And then at the end, we're going to feed in absolute position encodings." }, { "start": 1971.08, "end": 1975, "text": " So here we have, you know, your let's start at one." }, { "start": 1975, "end": 1977.84, "text": " Let's be good math lab people." }, { "start": 1977.84, "end": 1984.28, "text": " Here we have 12345 that we're going to now combine with the vectors that come out of" }, { "start": 1984.28, "end": 1986.32, "text": " here." }, { "start": 1986.32, "end": 1993.58, "text": " So the reasoning is they say there are two methods of their two methods of incorporating" }, { "start": 1993.58, "end": 1998.36, "text": " absolute position, the BERT model incorporates absolute position in the input layer." }, { "start": 1998.36, "end": 2002.8, "text": " In the BERT, we incorporate them right after all the transformer layers, but before the" }, { "start": 2002.8, "end": 2007.8, "text": " softmax layer for mask token prediction, as shown in figure two, I've looked at figure" }, { "start": 2007.8, "end": 2011.76, "text": " two, it's, it's not really helpful, honestly." }, { "start": 2011.76, "end": 2019.2, "text": " So that is this figure in the appendix, where they say, okay, so in the BERT late in the" }, { "start": 2019.2, "end": 2023.72, "text": " BERT, you have the absolute position encoding somewhere down here, it goes through all the" }, { "start": 2023.72, "end": 2025.36, "text": " transformer layers." }, { "start": 2025.36, "end": 2030.84, "text": " And then you have this classification layer at the top that does the language model decoding." }, { "start": 2030.84, "end": 2036.04, "text": " However, in their model, what you'd have is you have all the transformer layers here," }, { "start": 2036.04, "end": 2042.1599999999999, "text": " down here, and then you have the absolute position encodings that come in through the" }, { "start": 2042.1599999999999, "end": 2043.92, "text": " side here." }, { "start": 2043.92, "end": 2049.88, "text": " And kind of the last transformer layer now has access to these absolute layers or the" }, { "start": 2049.88, "end": 2056.62, "text": " last n, I think n in their case is two, or one, one or two." }, { "start": 2056.62, "end": 2062.36, "text": " So in the last layer, or the last layers, now the transformer has access to the absolute" }, { "start": 2062.36, "end": 2067.92, "text": " positions, and before it's just relative position at each step." }, { "start": 2067.92, "end": 2076, "text": " And they reason that that helps because the transformer part learns to deal with relative" }, { "start": 2076, "end": 2077.32, "text": " positions." 
}, { "start": 2077.32, "end": 2083.36, "text": " Okay, in this way, they say here, the BERT captures the relative positions in all the" }, { "start": 2083.36, "end": 2087.36, "text": " transformer layers and only uses the absolute position as complementary information when" }, { "start": 2087.36, "end": 2089.76, "text": " decoding the masked words." }, { "start": 2089.76, "end": 2095.2000000000003, "text": " Thus we call the BERT as decoding component an enhanced masked decoder." }, { "start": 2095.2000000000003, "end": 2099.92, "text": " And they compare the two, and they observe that EMD works much better." }, { "start": 2099.92, "end": 2108.32, "text": " So feeding absolute positions at the end works better than feeding them at the beginning." }, { "start": 2108.32, "end": 2113.2400000000002, "text": " We conjecture that the early incorporation of absolute positions used by BERT might undesirably" }, { "start": 2113.2400000000002, "end": 2117.5600000000004, "text": " hamper the model from learning sufficient information of relative position." }, { "start": 2117.56, "end": 2122.4, "text": " In addition, EMD also enables us to introduce other useful information, addition to positions," }, { "start": 2122.4, "end": 2124.48, "text": " yada, yada, yada, we leave it for future." }, { "start": 2124.48, "end": 2126.52, "text": " So they say you could also feed in other information." }, { "start": 2126.52, "end": 2130.16, "text": " I guess that's the case in every single neural network ever." }, { "start": 2130.16, "end": 2136.04, "text": " Yeah, but the point is they feed in the absolute position at the end and their conjecture." }, { "start": 2136.04, "end": 2138.32, "text": " So I'm not sure I'm not a fan of this." }, { "start": 2138.32, "end": 2145.2799999999997, "text": " I'm here, you know, this is this is like saying, okay, if we only feed it in at the end right" }, { "start": 2145.28, "end": 2150.44, "text": " here, this is position absolute, then we sort of limit the model." }, { "start": 2150.44, "end": 2156.0800000000004, "text": " Like right now, the model has the same information as it had before, as if we were to feed it" }, { "start": 2156.0800000000004, "end": 2157.92, "text": " at the beginning." }, { "start": 2157.92, "end": 2162.0800000000004, "text": " But we sort of limit it to only one layer of transformation." }, { "start": 2162.0800000000004, "end": 2167.88, "text": " So all it can do is sort of have kind of a little linear transformation in in there." }, { "start": 2167.88, "end": 2169.36, "text": " And yeah." }, { "start": 2169.36, "end": 2175.2400000000002, "text": " And so if we don't feed that in here, whereas we do feed it in, the model can use it or" }, { "start": 2175.24, "end": 2177.12, "text": " not any way it wants." }, { "start": 2177.12, "end": 2180.64, "text": " And that's just not a good enough reason for me." }, { "start": 2180.64, "end": 2187.2, "text": " So I think, you know, regularization has its place, bottleneck layer has its place and" }, { "start": 2187.2, "end": 2191.08, "text": " so on, restricting the capacity, and so on." }, { "start": 2191.08, "end": 2196.58, "text": " But I'm not a fan of hampering the model in this way kind of restricting it." 
}, { "start": 2196.58, "end": 2201.4799999999996, "text": " And I, you know, just because it makes your your number better, there's not really a reason" }, { "start": 2201.48, "end": 2209.12, "text": " why the same information should be worse if you give the model more steps to compute to" }, { "start": 2209.12, "end": 2213.36, "text": " compute with, you know, if you feed it in at the beginning, technically, if you train" }, { "start": 2213.36, "end": 2219.48, "text": " the model correctly, it should learn to use that information in at least as good a way" }, { "start": 2219.48, "end": 2226.08, "text": " as if you feed it in at the end, right, at least that tells me that the model that we" }, { "start": 2226.08, "end": 2231.08, "text": " haven't really figured out how to train these models correctly yet, with regards to positional" }, { "start": 2231.08, "end": 2232.6, "text": " encodings." }, { "start": 2232.6, "end": 2237.88, "text": " And again, I'm not a fan of simply saying, well, we only feed it in at the end, because" }, { "start": 2237.88, "end": 2241.2, "text": " then the question immediately is, well, how many layers at the end?" }, { "start": 2241.2, "end": 2242.84, "text": " How many layers at the beginning?" }, { "start": 2242.84, "end": 2245.96, "text": " Or when, you know, when is it too powerful?" }, { "start": 2245.96, "end": 2253.04, "text": " It's just, yeah, I don't think it's, it's, it makes a lot of sense to simply give the" }, { "start": 2253.04, "end": 2259.08, "text": " model information, but not let it do its best with that information, unless you have a specific" }, { "start": 2259.08, "end": 2265.16, "text": " kind of reasoning why this is just not good enough for me here." }, { "start": 2265.16, "end": 2270.12, "text": " Not a criticism of the, you know, obviously, it's better, like they observe, like, you" }, { "start": 2270.12, "end": 2275.84, "text": " know, all the in all the information, sorry, all the arguments can be invalidated by, but" }, { "start": 2275.84, "end": 2277.6, "text": " it's better, right?" }, { "start": 2277.6, "end": 2278.6, "text": " That's deep learning." }, { "start": 2278.6, "end": 2284.6, "text": " So yeah, all respect for them for trying it out, and actually realizing it's better." }, { "start": 2284.6, "end": 2285.7599999999998, "text": " Pretty cool." }, { "start": 2285.76, "end": 2290.92, "text": " So they also do scale invariant fine tuning where if they fine tune, which is where you" }, { "start": 2290.92, "end": 2294.84, "text": " take kind of this, this model you train with mask language modeling, and then you fine" }, { "start": 2294.84, "end": 2300.8, "text": " tune it to NLP tasks, they have a bunch of tricks there like virtual adversarial training" }, { "start": 2300.8, "end": 2305.36, "text": " and normalizing the embeddings before they do that." }, { "start": 2305.36, "end": 2307.4, "text": " And that apparently helps a lot." }, { "start": 2307.4, "end": 2312.32, "text": " But they also say they leave the comprehensive study of this for future work." }, { "start": 2312.32, "end": 2318.56, "text": " For now, they just want to get the good number, which is understandable because you get published." }, { "start": 2318.56, "end": 2326.48, "text": " Alright, so here you can see, actually, we can we can skip most of the tables, they are" }, { "start": 2326.48, "end": 2327.48, "text": " better." }, { "start": 2327.48, "end": 2328.48, "text": " They are better." 
}, { "start": 2328.48, "end": 2329.48, "text": " They are better." }, { "start": 2329.48, "end": 2333.54, "text": " They are better in language modeling, too, which is interesting." }, { "start": 2333.54, "end": 2340.4, "text": " So you can do kind of bird style denoising, but in classification, you can also do actually" }, { "start": 2340.4, "end": 2343.96, "text": " order regressive language model, which is pretty cool." }, { "start": 2343.96, "end": 2350.12, "text": " So here they do an ablation study of the different components where they remove this enhanced" }, { "start": 2350.12, "end": 2351.38, "text": " the decoder." }, { "start": 2351.38, "end": 2358.48, "text": " And one time they remove the position content to position encodings, sorry, attention mechanism." }, { "start": 2358.48, "end": 2363.1, "text": " And one time they reduce the position to content tension mechanism." }, { "start": 2363.1, "end": 2366.92, "text": " And in the table, it is sort of a wash." }, { "start": 2366.92, "end": 2372.64, "text": " Depends on the task of how you look at but each of the components here gets you some" }, { "start": 2372.64, "end": 2377.36, "text": " kind of a benefit or a hit when you take it away." }, { "start": 2377.36, "end": 2383.4, "text": " So yeah, it's not really clear that one of the components gives you all the boost." }, { "start": 2383.4, "end": 2387.08, "text": " The combination of them is obviously the best." }, { "start": 2387.08, "end": 2391.92, "text": " And it's really cool when papers do these kinds of ablations rather than just throw" }, { "start": 2391.92, "end": 2400.32, "text": " a bunch of stuff at you and you it's on you to figure out which of that stuff is important." }, { "start": 2400.32, "end": 2406.36, "text": " They compare it to Roberta in terms of training of accuracy after training." }, { "start": 2406.36, "end": 2411.96, "text": " So how much do you need pre training for a fine tuning and the deeper to as you can see" }, { "start": 2411.96, "end": 2414.12, "text": " in these graphs outperforms Roberta." }, { "start": 2414.12, "end": 2421.64, "text": " So potentially, you need less pre training steps to reach the same accuracy in fine tuning" }, { "start": 2421.64, "end": 2423.2599999999998, "text": " task, which is cool." }, { "start": 2423.2599999999998, "end": 2427.7599999999998, "text": " Also means that if you train for longer, you reach or if you train for the same amount" }, { "start": 2427.7599999999998, "end": 2430.8799999999997, "text": " of time, you reach a higher accuracy." }, { "start": 2430.8799999999997, "end": 2435.72, "text": " And now for you know, their big thing they build, they scale it up." }, { "start": 2435.72, "end": 2439.52, "text": " And they have a bunch of tricks here." }, { "start": 2439.52, "end": 2440.52, "text": " And you know, pretty cool." }, { "start": 2440.52, "end": 2441.52, "text": " They scale it up." }, { "start": 2441.52, "end": 2444.3199999999997, "text": " I just want to highlight one trick." }, { "start": 2444.3199999999997, "end": 2448.44, "text": " We optimize the model architecture as well as first we share the projection matrices" }, { "start": 2448.44, "end": 2450.6, "text": " of relative position embeddings." }, { "start": 2450.6, "end": 2451.6, "text": " Okay." }, { "start": 2451.6, "end": 2457.8399999999997, "text": " So they share the projection matrices of the relative position embeddings with each other." 
}, { "start": 2457.8399999999997, "end": 2465.24, "text": " Okay, so they share the position matrices with the content matrices." }, { "start": 2465.24, "end": 2471.52, "text": " So now instead of for example, so here is the query of the content, the key of the content." }, { "start": 2471.52, "end": 2481.72, "text": " Here is the query of the projection and the key of the sorry, position position." }, { "start": 2481.72, "end": 2485.04, "text": " My battery is soon over to speed up." }, { "start": 2485.04, "end": 2492.28, "text": " So the content right here, and the position right here give rise to these matrices by" }, { "start": 2492.28, "end": 2496.92, "text": " means of these help of these learned weights, right?" }, { "start": 2496.92, "end": 2508.08, "text": " So here is WC, here is W sorry, WKC, WKC, sorry, W. That's the matrix that generates" }, { "start": 2508.08, "end": 2512.38, "text": " the queries from the content that generates the keys from the content, the matrix that" }, { "start": 2512.38, "end": 2518.44, "text": " generates the queries from the position and the matrix that generates the keys from the" }, { "start": 2518.44, "end": 2519.84, "text": " position." }, { "start": 2519.84, "end": 2525.62, "text": " So if you now share, you now want to share this and that." }, { "start": 2525.62, "end": 2527.8399999999997, "text": " And also you want to share this and that." }, { "start": 2527.8399999999997, "end": 2530.88, "text": " So if and at the end, they are added, right?" }, { "start": 2530.88, "end": 2534.44, "text": " So you multiply these things, and then they are added." }, { "start": 2534.44, "end": 2545.8399999999997, "text": " And in my mind, honestly, what what that results in, because before, let's just, let's just" }, { "start": 2545.8399999999997, "end": 2546.8399999999997, "text": " see." }, { "start": 2546.8399999999997, "end": 2553.46, "text": " So before you had something like, if you if we simply multiply query times key transposed" }, { "start": 2553.46, "end": 2558.92, "text": " for the context site, that would give you sort of context WQ." }, { "start": 2558.92, "end": 2560.2, "text": " And now we share them." }, { "start": 2560.2, "end": 2563.84, "text": " So we don't care about C and P anymore." }, { "start": 2563.84, "end": 2567.52, "text": " WK transposed K transposed." }, { "start": 2567.52, "end": 2571.32, "text": " And sorry." }, { "start": 2571.32, "end": 2574.36, "text": " Of course, context, this transposed." }, { "start": 2574.36, "end": 2577.2, "text": " And now we add them to something else." }, { "start": 2577.2, "end": 2581.88, "text": " And let's just say we have these position to position encodings that they leave away." }, { "start": 2581.88, "end": 2584.54, "text": " But you know, we're going to consider them because it's easiest." }, { "start": 2584.54, "end": 2589.2000000000003, "text": " So it's position WQ WK." }, { "start": 2589.2000000000003, "end": 2593.76, "text": " Yeah, transposed position transposed." }, { "start": 2593.76, "end": 2601.1600000000003, "text": " You know, if these matrices are shared, this simply ends up to be being the addition of" }, { "start": 2601.1600000000003, "end": 2608.6800000000003, "text": " the position and content times these two matrices times the again, this." }, { "start": 2608.6800000000003, "end": 2611.76, "text": " So and this is just like the old school attention mechanism." 
}, { "start": 2611.76, "end": 2615.36, "text": " Now I see there's these cross terms and maybe they influence something." }, { "start": 2615.36, "end": 2621.28, "text": " But it gets closer and closer back to the old mechanism where you simply add the encodings" }, { "start": 2621.28, "end": 2626.5200000000004, "text": " and don't consider them in a in a disentangled way, right?" }, { "start": 2626.5200000000004, "end": 2632.5200000000004, "text": " If you do, if you dis if you like share the matrices of the disentangled representations," }, { "start": 2632.5200000000004, "end": 2639, "text": " it simply refers back to as if you were to feed the position in each layer of a traditional" }, { "start": 2639, "end": 2640.6400000000003, "text": " transformer." }, { "start": 2640.64, "end": 2647.68, "text": " So yeah, I'm not sure how much really the disentanglement is super important or whether" }, { "start": 2647.68, "end": 2652.72, "text": " or not it's just more important that this positional information is actually available" }, { "start": 2652.72, "end": 2653.72, "text": " at each step." }, { "start": 2653.72, "end": 2656.48, "text": " But, you know, I might be wrong here with the cross terms." }, { "start": 2656.48, "end": 2660.04, "text": " I haven't actually looked entirely at that." }, { "start": 2660.04, "end": 2664.7599999999998, "text": " Yeah, so that's the paper, they have kind of a discussion depiction of attention matrices" }, { "start": 2664.76, "end": 2671, "text": " down here, where they show that their model, you know, does some does something kind of" }, { "start": 2671, "end": 2675.2400000000002, "text": " different from other models in terms of where it attends and it has less of these global" }, { "start": 2675.2400000000002, "end": 2680.36, "text": " attention patterns like Roberta has right here." }, { "start": 2680.36, "end": 2684.6000000000004, "text": " Except for the very first one, which is the CLS vector, which makes sense." }, { "start": 2684.6000000000004, "end": 2687.44, "text": " And otherwise, it has a rather diagonal attention matrix." }, { "start": 2687.44, "end": 2692.3, "text": " So that's, it's pretty sensible, though you can also make the case that sometimes there" }, { "start": 2692.3, "end": 2698.1600000000003, "text": " are just really important words in a sentence that everything should attend to." }, { "start": 2698.1600000000003, "end": 2703.8, "text": " I don't know, but it is state of the art and it is a cool algorithm and is worth considering" }, { "start": 2703.8, "end": 2705.96, "text": " if you build your next model." }, { "start": 2705.96, "end": 2709.8, "text": " All right, with that, I thank you for listening." }, { "start": 2709.8, "end": 2710.8, "text": " Subscribe if you haven't." }, { "start": 2710.8, "end": 2711.8, "text": " I'll see you next time." }, { "start": 2711.8, "end": 2727.86, "text": " Bye bye." } ]
69IjNZaoeao
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
LeDeepChef 👨‍🍳 Deep Reinforcement Learning Agent for Families of Text-Based Games
[ "Science & Technology" ]
[ "ml", "machine learning", "reinforcement learning", "recipe", "text-based games", "text games", "natural language processing", "nlp", "actor", "critic", "GRU", "embedding", "pretraining", "artificial intelligence", "ai", "competition", "microsoft" ]
The AI cook is here! This agent learns to play a text-based game where the goal is to prepare a meal according to a recipe. Challenges? Many! The number of possible actions is huge, ingredients change and can include ones never seen before, you need to navigate rooms, use tools, manage an inventory and sequence everything correctly and all of this from a noisy textual description that the game engine throws at you. This paper mixes supervised explicit training with reinforcement learning in order to solve this task. Abstract: While Reinforcement Learning (RL) approaches lead to significant achievements in a variety of areas in recent history, natural language tasks remained mostly unaffected, due to the compositional and combinatorial nature that makes them notoriously hard to optimize. With the emerging field of Text-Based Games (TBGs), researchers try to bridge this gap. Inspired by the success of RL algorithms on Atari games, the idea is to develop new methods in a restricted game world and then gradually move to more complex environments. Previous work in the area of TBGs has mainly focused on solving individual games. We, however, consider the task of designing an agent that not just succeeds in a single game, but performs well across a whole family of games, sharing the same theme. In this work, we present our deep RL agent--LeDeepChef--that shows generalization capabilities to never-before-seen games of the same family with different environments and task descriptions. The agent participated in Microsoft Research's "First TextWorld Problems: A Language and Reinforcement Learning Challenge" and outperformed all but one competitor on the final test set. The games from the challenge all share the same theme, namely cooking in a modern house environment, but differ significantly in the arrangement of the rooms, the presented objects, and the specific goal (recipe to cook). To build an agent that achieves high scores across a whole family of games, we use an actor-critic framework and prune the action-space by using ideas from hierarchical reinforcement learning and a specialized module trained on a recipe database. Authors: Leonard Adolphs, Thomas Hofmann https://arxiv.org/abs/1909.01646
Hi there. Today we're looking at LeDeepChef, a deep reinforcement learning agent for families of text-based games, by Leonard Adolphs and Thomas Hofmann. So this is a paper about engineering an agent for a particular family of tasks. This is different from reinforcement learning agents that are, for example, just good at one game, let's say Pong or whatnot, and I guess even things like StarCraft, though this kind of depends on what you mean by game. So what are we talking about here? The following is a text-based game where the goal is to cook recipes. So let's just jump in and see what goes on. The game starts by telling you: you are hungry, let's cook a delicious meal, and so on. So the objective is basically always the same. It's: find the cookbook, read the recipe that's in it, then collect all the things that are in the recipe, prepare them in certain ways that are also specified by the recipe, and then at the end you have a meal, and then you can eat the meal, and that will give you points. But it's a text-based game, and the input doesn't come structured; it comes as natural text. So the game tells you, for example, kitchen. So basically you're in the kitchen. You are now in the kitchen. I guess you better just go and list everything you see here. You hear a noise, you spin around. So you see that the kind of input you get from the game is very playful and has a lot of descriptive elements. Sometimes it's like: you see a closed oven. You make out a table. Then on the counter you can make out a sliced fried red hot pepper, and so on. So it's very much not trivial to parse this in a traditional way. If you were to go about this by simply writing an algorithm extracting things, it's very hard, because for example you might see that there's an oven, but it's a closed oven. You make out a table. So this is kind of a synonym for you see a table, but, you know, there is a table. You can make out a sliced fried red hot pepper, and here it's important not only do you need to realize that there is a red hot pepper, but also that its state is sliced and fried. This is important because you need all ingredients in a certain state. Right? You examine, here you examine the stove, so there is a stove. Right? So all these things you need to kind of understand. So if you now look, there is a recipe book in here. Or no, there isn't a recipe. You can examine the recipe. I guess there is a recipe book in that room. If there is a recipe book, then you can examine the recipe, and that's the command. So the arrows here always indicate that that's a user command. And these you have to type. That's like the next thing that your agent needs to do. You can't select from a predefined set of actions. You actually need to type in the things you want to do. Right? And these are a lot. Like, there are a lot of possibilities of what you could type in. Even if you restrict it to what you know the game accepts, there are still so many actions. It's way different than, for example, Atari games. They always have eight actions; like, there are eight buttons you could possibly press, and that's it. And here there are combinatorially many things you can do, like you can prepare and take all the ingredients, and you don't know which ingredients come. So here you examine the recipe. Let's look at a recipe. It says: you open the recipe, start reading. Recipe number one. Here are the ingredients: red hot pepper. Here, for right now, that's just one ingredient. Then there are directions. So what do you need to do?
Slice the red hot pepper, fry the red hot pepper, and prepare the meal. Those are the directions of the recipe. You also have this inventory command, which tells you what you're carrying. Next difficulty: the inventory is finite. So you can't carry everything. At some point you have to drop things that are unnecessary. You can't just take everything. Here you see the command take red hot pepper. That only works if there's a red hot pepper in the room. And here it says: you take the red hot pepper from the counter. Your score has just gone up by one point. And then if you type inventory, it says you're carrying a sliced fried red hot pepper. Again, here it gives the state of the ingredient. So the ingredient is the red hot pepper, and the state is sliced and fried. And then you can prepare the meal, and then you can eat the meal, and then it says your score has just gone up by one point. And these are the scores you collect. So there are a lot of difficulties that are actually not shown in this example. For example, there are different rooms. You may have noticed here you're in the kitchen, but there could be other rooms, and you start in a random room. You also need to navigate through the rooms. The doors to the rooms could be closed, and then you need to open them, and so on. For example, if this pepper here weren't already sliced and fried, you need to find... You can only slice it if there is a knife in the room. You can only fry it if there is a frying pan or an oven or a stove in the room. So then you'd have to notice that there is a knife. If there is no knife, you need to take the red hot pepper, bring it to a new room with a knife, and then slice it. So this is a vastly difficult game. The last difficulty is actually that in the test set there will be ingredients that you haven't seen during training. So there, too, your agent needs to generalize. That's why it says a family of text-based games: the objective is always the same, to kind of cook the recipe, but the things you have to do and the things that appear and so on, those change basically from episode to episode. And the test set will be different from the training set; there will be unseen data. Alright, so how does this paper go about solving this problem? This paper basically does the following, and we are going here from high level to low level. On the highest level it's a reinforcement learning agent, and that is sort of how you would imagine an RL agent to work. So here at the end you have a policy, and the policy predicts an action. If you don't know what a policy and an action are, these are basic RL concepts; we'll kind of skip them here, and I'll assume everyone knows what they are. But essentially a policy specifies which action you take next, given the current game state. So the policy scores different actions. At each step there are k actions available. And for these k actions, as I said before, there are almost infinitely many actions that you could take. The first difficulty, and that's the thing that actually comes in here, is to reduce all of the possible actions, which you can't even list, to just k commands. So we'll go into that later, how this is done. But basically one of the main contributions of this paper is how do you even specify what would be reasonable to do in the current situation. And then the policy over here only has to decide among those reasonable actions, not among all actions.
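As a toy illustration of this pruning, here is a short Python sketch with made-up names; the paper's actual command generation is driven by the learned modules described below:

```python
def candidate_commands(room_items, inventory, missing, unnecessary):
    # Hedged toy stand-in for the action-space pruning described above:
    # instead of every possible string, only propose commands that make
    # sense in the current state.
    cmds = ["look", "inventory"]
    cmds += [f"take {item}" for item in room_items if item in missing]
    cmds += [f"drop {item}" for item in inventory if item in unnecessary]
    return cmds

print(candidate_commands(
    room_items={"red hot pepper", "knife"},
    inventory={"water"},
    missing={"red hot pepper"},
    unnecessary={"water"},
))  # ['look', 'inventory', 'take red hot pepper', 'drop water']
```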
But given that you have k reasonable commands, you see here command one through command k, these are embedded and then fed into GRUs, which are recurrent neural networks. So for each of these commands you'll get a 32-dimensional vector. These 32-dimensional vectors are here C1 through Ck. Each is combined with an encoding of the current state. So these 32-dimensional vectors are combined with an encoding of the current state, which is 256-dimensional, and then fed into a neural network that will output a probability distribution over these actions. This is pretty classic in deep reinforcement learning. So you have the action encoding and the state encoding, and the policy decides based on that. The state encoding, you'll see here, is the same everywhere, of course, because the current game state is the current game state. This comes from this model up here. What this does is: over here you have what you would call the state, the current observation. The current observation is composed of many things, specifically the following eight things. The first one is actually called observation. From an RL perspective I would call all of this together the current observation, but the first field is the actual observation text: whatever the big text was that you saw before. Like, you are in the kitchen, it looks like this, it smells like this, you turn around, and so on. This would be the observation. It's what the game engine says at the current time step. This is just a piece of text. Second, missing items. Third, unnecessary items. Now for these you might wonder, okay, how do I know which items are missing and unnecessary? These come from another model that this paper trains, and we'll get into that later. But basically they have a method of specifying which items are still missing and which are unnecessary, and they list those here. Then description, which is the output of the last look command. So in each room you can type look, and then it'll give you a description of the room and what's in there. The previous commands: this is often used in RL, either explicitly or implicitly through a recurrent network, in order to give the agent an idea of what happened in the previous steps, or what it did, so that it learns to not repeat actions unnecessarily. Required utilities: again, this is a model that's kind of trained to predict what utilities are required to perform some actions. So, as I said before, if you want to slice the red hot pepper, you need a knife. If you want to fry it, you need a stove. Discovered locations: as I said, there are different rooms, and you actually don't know what rooms there are before you actually go in there. Only when you go through a door do you reach another room. So the list of previously discovered and visited locations is there, and then the name of the current location is also there. So these are the eight things that make up the current observation. These eight things are just strings of text, and each one of them, from observation to location, is embedded and also fed into an RNN. So for each of these eight things you'll obtain a 32-dimensional vector, and these are all concatenated to make up one big 256-dimensional vector. So this 256-dimensional vector will contain all the necessary information: about the current room, what's in there, what items are you still missing, what items do you have in your inventory, which ones are unnecessary, and so on.
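Here is a hedged PyTorch sketch of this encoder-plus-policy construction. The sizes (32 per component, 8 x 32 = 256 for the state) are from the talk; everything else (names, vocabulary, and the omitted history RNN and value head) is my own simplification:

```python
import torch
import torch.nn as nn

class PolicySketch(nn.Module):
    # Each of the 8 observation strings is GRU-encoded to 32 dims and
    # concatenated into a 256-dim state; each candidate command is
    # GRU-encoded to 32 dims; an MLP scores every (command, state) pair.
    def __init__(self, vocab, emb=32, h=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.obs_rnn = nn.GRU(emb, h, batch_first=True)
        self.cmd_rnn = nn.GRU(emb, h, batch_first=True)
        self.score = nn.Sequential(nn.Linear(8 * h + h, 128), nn.ReLU(),
                                   nn.Linear(128, 1))

    def forward(self, obs_tokens, cmd_tokens):
        # obs_tokens: (8, T) one row per observation component
        # cmd_tokens: (k, T) one row per candidate command
        _, ho = self.obs_rnn(self.embed(obs_tokens))   # (1, 8, h)
        state = ho.reshape(-1)                         # (8*h,) = 256 dims
        _, hc = self.cmd_rnn(self.embed(cmd_tokens))   # (1, k, h)
        hc = hc.squeeze(0)                             # (k, h)
        x = torch.cat([hc, state.unsqueeze(0).expand(hc.size(0), -1)], dim=-1)
        return torch.softmax(self.score(x).squeeze(-1), dim=0)  # policy over k

policy = PolicySketch(vocab=1000)
probs = policy(torch.randint(0, 1000, (8, 12)), torch.randint(0, 1000, (5, 6)))
print(probs.shape, float(probs.sum()))  # torch.Size([5]) 1.0
```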
So if you train this correctly, this 256-dimensional vector will describe the current game state as it is relevant to your agent; every relevant piece of information will be encoded in this vector. Now, this vector isn't the final state encoding yet. What you'll have is, you feed this into an RNN that takes as input the previous time steps. You have to imagine, at the last time step there was already an observation, blah blah blah; this entire thing (I'm just copying this box over here) was already done at the last step and already fed into an RNN. So this is an RNN that actually goes over time, and whatever the output here is will be fed to the next step. And this is a trick often done in reinforcement learning as well: you actually have a recurrent neural network over the time steps. So at each time step you have a certain observation, you encode it and so on, you get a description of that, and then you feed this into an RNN. What the RNN can learn to do is react not only to the current observation but to the current observation conditioned on the history of previous observations. So it can learn: before, I was in this room, and now I'm in this new room, so I actually haven't, you know, taken all the items from this room yet, because I just came into this room, and so on. So the component where you are able to look at the past and what happened in the past is captured by this RNN here. It's a fairly complicated architecture, but this state encoding that is conditioned also on the history then goes into here. That's the vector that goes in here and is combined with each action, so with all of these k actions, and this is all fed through a neural network, and that will give you the policy. This is a fairly complicated thing, but if you look at it, it's not too difficult actually. So what you'll do is you will take your observations here, the whole observation; it will be encoded and combined with the history in order to give you an encoding of the current state. On the other hand, you'll take all of the possible commands that you could perform right now, encode each one separately into an embedding, and then you combine each one of those with this encoding you computed previously, and from that you make your decision which action to take next. And the action that's output here is the action you take next, sampled from this policy. The last thing you need is a value network, and this is just important for reinforcement learning. It tells you, from this state here (so, I'm getting weird with colors here, from this state here, which is the same as this one, so you'd simply transfer this over), how valuable is that, what's my value of the state? And the value is: if I'm in this state and I act as I normally act, what are all my future rewards going to be, combined? So it basically gives you a value of this state. You can think of this, for example, in terms of chess: if you had this in chess, then this here, this HT, would be a description of the chessboard, and the value would be how valuable this position is for you. So if you're very much ahead in material and position and so on, this value would be very high; if you're behind, this value would be very low. And this is a neural network simply trying to predict that value. So with all of this, you now have a very good basis to do reinforcement
learning. You have a policy, you have a value network, and from that you can train an RL agent. This is done classically in an actor-critic way, where you do advantage learning: here's the advantage, and the policy you train weighted by the advantage; the value network you train to be close to the reward; and then you have an entropy penalty. If you don't know what these things are, the video would get a bit too long if I were to go over these reinforcement learning concepts, but they are very standard in reinforcement learning. What this does is it allows you to train these neural networks in the absence of labeled training data, because you don't know what the best action is in each step, right? There's no one telling you; you just have a reward, and sometimes you get a point, but you don't know which actions led to that. So these things will actually allow you to train these neural networks by using just the reward, without knowing which exact actions were right and wrong. And that's the core of reinforcement learning, obviously. Alright, so one of the core ingredients actually is this recipe manager, and the recipe manager is a sub-model that does the following: it takes as input the cookbook here, it also takes as input the inventory, and it outputs something like this, which is a table representation of what it outputs. It will output all the ingredients that you need for the recipe, whether or not each ingredient is currently missing from your inventory, and the actions that still need to be performed (a toy sketch of this table structure follows at the end of this transcript). So let's look at this example. The recipe tells you the ingredients are: a carrot, a red hot pepper, and a white onion. And the inventory says you're carrying a white onion and a carrot, right? So down here you see: aha, we do actually have a carrot, so the carrot isn't missing, you have it in your inventory. The red hot pepper is missing; we don't have it in the inventory, but we need it for the recipe. The white onion we need for the recipe, but it's not missing. Then, for each of the ingredients, this recipe model is also supposed to tell you which actions you still need to perform on it. So here it says slice the carrot, roast the carrot, and you simply have a carrot; it doesn't say sliced or roasted, which means it's not sliced and roasted. So the recipe model is supposed to output that you still need to slice and roast the carrot. Here, for example, for the white onion, the recipe says fry the white onion, and as you can see in the inventory, it says you're carrying a fried white onion. So for the white onion, you see, we don't need to do anything anymore. So the recipe model is basically trying to make this table here, and this table you can see as an intermediate step in order to do all the other things. And the difference here to a pure RL method, and this is important: this intermediate table representation is done explicitly. So the recipe model really produces a table like this. In other RL methods, people would go about it and make this recipe model output some sort of, you know, let's say 200-dimensional vector that's supposed to encompass all of this information. And that doesn't appear to work as well: often, if you simply train this end-to-end, it will not pick up on the important information, because the training signal tends to be way too weak. You have to imagine, you already have this
really, really big model construction here, and you're trying to learn it from a tiny reward signal that you get at the end, right? This is a very noisy signal. Now if you're trying to say, well, the inputs to these things (this command here, and, as we also saw, the inputs to these, which depend on this recipe model) are now also whatever giant neural network construction, and we'll train it all end-to-end, and these will actually not be text but some sort of latent vectors, that will often fail, because you're now just trying to extract information from too noisy of a reward signal. So the authors here do actually a pretty neat separation of that, and they train this recipe model with an augmented data set: they go to Freebase and get more food items, and then they construct a data set that resembles this and train it in a supervised way to output tables like this. So this is pretty smart, and I think it's a good lesson if you ever attempt something like this: really important information such as this, if you can train it in a supervised way as kind of a pre-processing step to your RL procedure, that's extremely helpful. Here you can see how this is then used. By combining this table that was output from the recipe model with your inventory and the output of this look command, you can then generate these commands. So before, we said it's important to reduce everything you could do, which is infinitely many things, to everything that is reasonable to do currently, and this model here does that. Given that, and given the description of what's currently in the room, you can now generate these commands. For example, take knife, if you have to slice something: because you see a knife is in the room, you could conceivably take the knife, right? You can construct these commands. But also, since you know what's in your inventory and since you know which things are still missing, you can generate commands like take the white onion, or drop the water, because you don't need the water, right? The authors also group these things into what they call high-level commands, where, for example, take all required items simply means take everything that's in the room that is not in the inventory but that you need. For an RL agent it makes sense to group these things together, because it doesn't make sense to have them as separate things: if you need several items, take all of them; if you don't need them and they're in your inventory, drop all of them. So that makes sense; that's a small optimization that apparently brought some gains. But the overarching message here is that once you have this information from the recipe model, you can then use it in many useful ways in order to make life for your RL agent easier. Alright, so that kind of is the entire model. It's quite convoluted, but basically you start with this recipe manager, which outputs this table down here: which ingredients are in the recipe, are they still missing, and which actions do we need to perform. You then combine it with the information about the current room and your inventory in order to come up with a set of commands that are conceivable to do here. You combine these commands with some commands that are always available, so commands that are always available are things like look, inventory, prepare meal. You add
that one if the recipe manager does not output any missing items and the agent's location is the kitchen. So you can add these other items, and also, we're not even going to get into that, you add navigational commands, because there are doors in these rooms and you need to navigate around. So they actually train another model, here you see, to detect directions that you could move into, and they add an open door command for every closed door in the room. So that's another challenge that the agent needs to overcome: they have to build an entire model to predict which doors are there, are they closed, do you need to open them. So if there are doors and you can move through them, these commands are also added to the set of commands that are reasonable. So now we have a set of commands that are reasonable over here; then you describe the room here, you put both into this embedding, and then finally your policy outputs an action. That's the entire process. Very convoluted, very big; very astonishing that this works with RL. But in order to get it to work, you actually need to do this supervised training, and the experimental evidence here is quite solid, in that they compare to baseline systems that use classic techniques, and they do some ablations over their individual parts. And they get second place, I think, in a competition about these text-based games, so that's pretty good. And that was it for me. Check it out, and bye bye.
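As referenced above, here is a toy Python sketch of the table structure the recipe manager is trained to produce. The real component is a learned model trained on augmented recipe data from Freebase, so this hard-coded version is only illustrative:

```python
def recipe_table(recipe, inventory):
    # recipe: ingredient -> list of required preparation actions
    # inventory: ingredient -> set of already-completed actions (absent if
    # the ingredient isn't carried at all)
    table = []
    for ingredient, needed in recipe.items():
        done = inventory.get(ingredient)   # None means not in the inventory
        table.append({
            "ingredient": ingredient,
            "missing": done is None,
            "actions": [a for a in needed if done is None or a not in done],
        })
    return table

# The example from the transcript: carrot carried raw, onion already fried,
# pepper not carried at all.
for row in recipe_table(
    recipe={"carrot": ["slice", "roast"],
            "red hot pepper": ["slice", "fry"],
            "white onion": ["fry"]},
    inventory={"carrot": set(), "white onion": {"fry"}},
):
    print(row)
# {'ingredient': 'carrot', 'missing': False, 'actions': ['slice', 'roast']}
# {'ingredient': 'red hot pepper', 'missing': True, 'actions': ['slice', 'fry']}
# {'ingredient': 'white onion', 'missing': False, 'actions': []}
```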
[ { "start": 0, "end": 5.4, "text": " Hi there. Today we're looking at Le Deep Chef, deep reinforcement learning agent" }, { "start": 5.4, "end": 11.28, "text": " for families of text-based games by Leonard Adolfs and Thomas Hoffmann. So" }, { "start": 11.28, "end": 18.400000000000002, "text": " this is a paper about engineering an agent for a particular family of tasks." }, { "start": 18.400000000000002, "end": 22.400000000000002, "text": " This is different from reinforcement learning agents that for example are" }, { "start": 22.4, "end": 30.24, "text": " just good at one game, let's say Pong or whatnot and even I guess even things" }, { "start": 30.24, "end": 39.08, "text": " like Starcraft. Though this kind of depends on what you mean by game. So what" }, { "start": 39.08, "end": 45.32, "text": " are we talking about here? The following is a text-based games where the goal is" }, { "start": 45.32, "end": 55.7, "text": " to cook recipes. So let's just jump in and see what goes on. The game" }, { "start": 55.7, "end": 62.16, "text": " starts by telling you, you are hungry. Let's cook a delicious meal and so on." }, { "start": 62.16, "end": 68.52, "text": " So the objective is basically always the same. It's find the cookbook, read the" }, { "start": 68.52, "end": 75.16, "text": " recipe that's in it, then collect all the things that are in the recipe, prepare" }, { "start": 75.16, "end": 80.47999999999999, "text": " them in certain ways that are also specified by the recipe and then at the" }, { "start": 80.47999999999999, "end": 84.75999999999999, "text": " end you have a meal and then you can eat the meal and that will give you points." }, { "start": 84.75999999999999, "end": 91.52, "text": " But since it's a text-based games and the input doesn't come structured but it" }, { "start": 91.52, "end": 98.52, "text": " comes in natural text. So the game tells you for example kitchen. So basically" }, { "start": 98.52, "end": 102.72, "text": " you're in the kitchen. You are now in the kitchen. I guess you better just go and" }, { "start": 102.72, "end": 107.8, "text": " list everything you see here. You hear a noise, you spin around. So you see that" }, { "start": 107.8, "end": 113.84, "text": " the kind of input you get from the game is very playful, has a lot of descriptive" }, { "start": 113.84, "end": 123.6, "text": " elements. Sometimes it's like you see a closed oven. You make out a table. Then" }, { "start": 123.6, "end": 130.04, "text": " you can see on the counter you can make out a sliced fried red hot pepper and so" }, { "start": 130.04, "end": 136.92, "text": " on. So it's very much not trivial to kind of parse this in a traditional way." }, { "start": 136.92, "end": 141.84, "text": " If you were to go about this by simply writing an algorithm extracting things" }, { "start": 141.84, "end": 147.32, "text": " it's very hard because for example you might see that there's an oven but it's" }, { "start": 147.32, "end": 153.07999999999998, "text": " a closed oven. You make out a table. So this is kind of a synonym for you see a" }, { "start": 153.08, "end": 160.76000000000002, "text": " table but you see like there is a table. You can make out a sliced fried red hot" }, { "start": 160.76000000000002, "end": 164.48000000000002, "text": " pepper and here it's important not only do you need to realize that there is a" }, { "start": 164.48000000000002, "end": 170.8, "text": " red hot pepper but also that its state is sliced and fried. 
This is important" }, { "start": 170.8, "end": 179, "text": " because you need all ingredients in a certain state. Right? You examine here you" }, { "start": 179, "end": 186.96, "text": " examine the stove so there is a stove. Right? So all these things you need to" }, { "start": 186.96, "end": 193.48, "text": " kind of understand. So if you now look there is a recipe book in here." }, { "start": 193.48, "end": 200.2, "text": " Or no there isn't a recipe. You can examine recipe. I guess there is a recipe" }, { "start": 200.2, "end": 206.84, "text": " book in that room. If there is a recipe book then you can examine the recipe and" }, { "start": 206.84, "end": 211.32, "text": " that's the command. So the arrows here always indicate that that's a user" }, { "start": 211.32, "end": 217.16, "text": " command. And these you have to type. That's like the next thing that" }, { "start": 217.16, "end": 223.64000000000001, "text": " your agent needs to do. You can't select from a predefined set of actions." }, { "start": 223.64000000000001, "end": 228.96, "text": " You actually need to type in the things you want to do. Right? And these are a" }, { "start": 228.96, "end": 233.32, "text": " lot. Like there are a lot of possibilities of what you could type in." }, { "start": 233.32, "end": 237.48, "text": " Even if you restrict it to kind of what you know the game accepts there are" }, { "start": 237.48, "end": 243.4, "text": " still so many actions. It's way different than for example Atari games." }, { "start": 243.4, "end": 246.51999999999998, "text": " They always have eight actions. Like there's eight buttons you could" }, { "start": 246.51999999999998, "end": 252.64, "text": " possibly press and that's it. And here there are like combinatorically many" }, { "start": 252.64, "end": 259.03999999999996, "text": " things you can do. Like you can prepare and take and all the ingredients. You" }, { "start": 259.04, "end": 264.92, "text": " don't know which ingredients come. So here you examine the recipe." }, { "start": 264.92, "end": 269.6, "text": " Let's look at a recipe. It says you open the recipe. Start reading. Recipe number" }, { "start": 269.6, "end": 275.6, "text": " one. Here are the ingredients. Red hot pepper. Here for right now that's just one" }, { "start": 275.6, "end": 280.08000000000004, "text": " ingredient. Then there are directions. So what do you need to do? Slice the red" }, { "start": 280.08000000000004, "end": 285.32000000000005, "text": " hot pepper. Fry the red hot pepper and prepare the meal. Those are" }, { "start": 285.32, "end": 291.2, "text": " the directions of the recipe. You also have this inventory command which" }, { "start": 291.2, "end": 298.4, "text": " tells you which you're carrying. Next difficulty. The inventory is finite. So" }, { "start": 298.4, "end": 302.68, "text": " you can't carry everything. At some points you have to drop things that are" }, { "start": 302.68, "end": 308.6, "text": " unnecessary. You can't just take everything. Here you see the command take" }, { "start": 308.6, "end": 313.08, "text": " red hot pepper. That only works if there's a red hot pepper in the room. And" }, { "start": 313.08, "end": 318.12, "text": " here says you take the red hot pepper from the counter. Your score has just gone" }, { "start": 318.12, "end": 322.44, "text": " up by one point. And then if you type inventory it says you're carrying a" }, { "start": 322.44, "end": 330.08, "text": " sliced fried red hot pepper. Again here it says the state of the ingredient." 
}, { "start": 330.08, "end": 336.44, "text": " So the ingredient is the red hot pepper and the state is sliced and fried. And" }, { "start": 336.44, "end": 340.32, "text": " then you can prepare meal and then you can eat meal and then it says your" }, { "start": 340.32, "end": 345.92, "text": " score has just gone up by one point. And these are the scores you collect. So" }, { "start": 345.92, "end": 349.52, "text": " there are a lot of difficulties that are actually not shown in this example. For" }, { "start": 349.52, "end": 354.2, "text": " example there are different rooms. You may have noticed here you're in the" }, { "start": 354.2, "end": 359.48, "text": " kitchen. But there could be other rooms and you start in a random room. You also" }, { "start": 359.48, "end": 364.32, "text": " need to navigate through the rooms. Close the doors to the rooms could be" }, { "start": 364.32, "end": 373.08, "text": " closed and then you need to open them and so on. You can only for example if" }, { "start": 373.08, "end": 382.68, "text": " this pepper here weren't already sliced and fried you need to find..." }, { "start": 382.68, "end": 389.2, "text": " You can only slice it if there is a knife in the room. You can only fry" }, { "start": 389.2, "end": 395.12, "text": " it if there is a frying pan or an oven or a stove in the room." }, { "start": 395.12, "end": 402.59999999999997, "text": " So and then you'd have to notice that there is a knife. If there is no knife" }, { "start": 402.59999999999997, "end": 407.24, "text": " you need to take the red hot pepper bring it to a new room with a knife and" }, { "start": 407.24, "end": 415.2, "text": " then slice it. So this is vastly difficult game. The last difficulty is" }, { "start": 415.2, "end": 422.47999999999996, "text": " actually that in the test set there will be ingredients that you haven't seen" }, { "start": 422.47999999999996, "end": 428.92, "text": " during training. So also that there. Your agent needs to generalize. That's why it" }, { "start": 428.92, "end": 432.88, "text": " says a family of text-based games. Because the objective always the same to" }, { "start": 432.88, "end": 436.36, "text": " kind of cook the recipe. But the things you have to do and the things that" }, { "start": 436.36, "end": 443.36, "text": " appear and so on those are those change basically from episode to episode. And" }, { "start": 443.36, "end": 448.88, "text": " the test set will be different than the training set or kind of there will be" }, { "start": 448.88, "end": 454.84000000000003, "text": " unseen data. Alright so how does this paper go about solving this problem?" }, { "start": 454.84000000000003, "end": 465.2, "text": " This paper basically does the following and we are going here from high level to" }, { "start": 465.2, "end": 471.84000000000003, "text": " low level. On the highest level it's a reinforcement learning agent and that is" }, { "start": 471.84, "end": 481.64, "text": " sort of how you would imagine an RL agent to work. So here at the end you have" }, { "start": 481.64, "end": 487.71999999999997, "text": " a policy and the policy predicts an action. If you don't know what a kind of" }, { "start": 487.71999999999997, "end": 492.32, "text": " a policy and an action things are in RL these are basic RL concept and we'll" }, { "start": 492.32, "end": 498.2, "text": " kind of skip them here and I'll assume everyone knows what they are. 
But" }, { "start": 498.2, "end": 503.64, "text": " essentially a policy specifies which action you take next given the current" }, { "start": 503.64, "end": 511.64, "text": " game state. So the policy is made up, scores different actions. So at each step" }, { "start": 511.64, "end": 519.2, "text": " there are k actions available. And these k actions I foresaid there are almost" }, { "start": 519.2, "end": 524.84, "text": " infinitely many actions that you could take. The first difficulty and that's the" }, { "start": 524.84, "end": 534.2800000000001, "text": " thing that actually comes in here is to reduce all of the possible actions that" }, { "start": 534.2800000000001, "end": 541.48, "text": " you can't even list to just k commands. So we'll go into that later how this is" }, { "start": 541.48, "end": 547.52, "text": " done. But basically one of the main contributions of this paper is how do" }, { "start": 547.52, "end": 553.9200000000001, "text": " you even specify what is reasonable, what would be reasonable to do in the current" }, { "start": 553.92, "end": 559.92, "text": " situation. And then the policy over here only has to decide among those reasonable" }, { "start": 559.92, "end": 566.16, "text": " actions, not among all actions. But given that you have k reasonable commands" }, { "start": 566.16, "end": 572.64, "text": " you see here command one command, these are embedded and then fed into GRUs which are" }, { "start": 572.64, "end": 578.4399999999999, "text": " recurrent neural networks. So for each of these commands you'll get a 32" }, { "start": 578.44, "end": 588.48, "text": " dimensional vector. This 32 dimensional vector is here C1 through Ck. Each are" }, { "start": 588.48, "end": 596.36, "text": " combined with an encoding of the current state. So these 32 dimensional" }, { "start": 596.36, "end": 601.6800000000001, "text": " vector are combined with encoding of the current state which is 256 dimensional" }, { "start": 601.6800000000001, "end": 607.9200000000001, "text": " and then fed into a neural network that will output a probability distribution" }, { "start": 607.92, "end": 613.4, "text": " over these actions. This is pretty classic in deep reinforcement learning." }, { "start": 613.4, "end": 619.4399999999999, "text": " So you have action encoding and the state encoding and the policy decides on that." }, { "start": 619.4399999999999, "end": 623.52, "text": " The state encoding you'll see here it's the same everywhere of course because" }, { "start": 623.52, "end": 628.7199999999999, "text": " the current game state is the current game state. This comes from this model up" }, { "start": 628.7199999999999, "end": 636.3199999999999, "text": " here. What this does is over here you have the what you would call the state" }, { "start": 636.32, "end": 643.9200000000001, "text": " the current observation. The current observation is composed of many" }, { "start": 643.9200000000001, "end": 649.08, "text": " things. Specifically the following eight things. The first one is actually" }, { "start": 649.08, "end": 655.2800000000001, "text": " called observation which is I would call all of this the current observation" }, { "start": 655.2800000000001, "end": 661.08, "text": " from an RL perspective. But the first is actually observation. It's whatever you" }, { "start": 661.08, "end": 665.12, "text": " saw the big text you saw before. 
Like you were in the kitchen it looks like this" }, { "start": 665.12, "end": 669.16, "text": " it smells like this you turn around and so on. This would be the observation." }, { "start": 669.16, "end": 673.28, "text": " It's what the game engine says at the current time step. This is just a piece of" }, { "start": 673.28, "end": 683.64, "text": " text. Second missing items. Third unnecessary items. Now these things you" }, { "start": 683.64, "end": 688.28, "text": " might wonder okay how do I know what what items are missing and unnecessary." }, { "start": 688.28, "end": 695.3199999999999, "text": " These things come from another model that this paper trains and we'll get" }, { "start": 695.3199999999999, "end": 700.0799999999999, "text": " into that later. But basically they have a method of specifying which items are" }, { "start": 700.0799999999999, "end": 708.3199999999999, "text": " still missing which are unnecessary and they list those here. Then description" }, { "start": 708.3199999999999, "end": 713.36, "text": " which is the output of the last look command. So in each room you can look you" }, { "start": 713.36, "end": 717.4399999999999, "text": " can type look and then it'll give you a description of the room and what's in" }, { "start": 717.44, "end": 725.9200000000001, "text": " there. The previous commands this is often used in RL either explicitly or" }, { "start": 725.9200000000001, "end": 732.84, "text": " implicitly through a recurrent network in order to give the agent an idea what" }, { "start": 732.84, "end": 737.8800000000001, "text": " what happened in the in the previous steps or what it did so that it doesn't" }, { "start": 737.8800000000001, "end": 743.9200000000001, "text": " repeat actions unnecessarily or so it learns to not repeat actions" }, { "start": 743.92, "end": 750.8, "text": " unnecessarily. Required utilities. Again this is a model that's kind of trained" }, { "start": 750.8, "end": 757.8399999999999, "text": " to predict what utilities are required to perform some actions. So as I said" }, { "start": 757.8399999999999, "end": 762.52, "text": " before if you want to slice the red hot pepper you need a knife. If you want to" }, { "start": 762.52, "end": 770, "text": " fry it you need a stove. Discovered locations. As I said there are different" }, { "start": 770, "end": 776, "text": " rooms you actually don't know what rooms there are before you actually go in in" }, { "start": 776, "end": 782.08, "text": " there. So before you go through a door you reach another room. So the list of" }, { "start": 782.08, "end": 787.76, "text": " previously discovered and visited locations is there and then the name of" }, { "start": 787.76, "end": 795.04, "text": " the current location it is also there. So these are eight things that make up the" }, { "start": 795.04, "end": 801.24, "text": " current observation. These eight things are just strings of text and these eight" }, { "start": 801.24, "end": 807, "text": " things are each one as you can see here these are that the eight things from" }, { "start": 807, "end": 813.3199999999999, "text": " observation to location each one are embedded and fed also into an RNN. So for" }, { "start": 813.3199999999999, "end": 818.52, "text": " each of these eight things you'll obtain a 32 dimensional vector and these are" }, { "start": 818.52, "end": 824.88, "text": " all concatenated to make up one big 256 dimensional vector. 
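To make the encoding concrete, here is a minimal PyTorch sketch of the scheme described above: each of the eight observation strings is run through a GRU to get a 32-dimensional vector, the eight vectors are concatenated into a 256-dimensional state, and each candidate command is scored against that state. This is an illustrative reconstruction, not the authors' code; all names and sizes beyond the stated 32/256 dimensions are assumptions, and the temporal RNN over past steps (described next) is omitted:

```python
import torch
import torch.nn as nn

class CommandScoringPolicy(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, enc_dim=32, n_fields=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.obs_gru = nn.GRU(emb_dim, enc_dim, batch_first=True)
        self.cmd_gru = nn.GRU(emb_dim, enc_dim, batch_first=True)
        # scorer sees the 8*32 = 256-dim state plus one 32-dim command encoding
        self.scorer = nn.Sequential(
            nn.Linear(n_fields * enc_dim + enc_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def encode(self, token_ids, gru):
        _, h = gru(self.embed(token_ids))   # final GRU state: (1, batch, enc_dim)
        return h.squeeze(0)

    def forward(self, obs_fields, commands):
        # obs_fields: 8 token-id tensors (one per observation string)
        # commands: k token-id tensors (one per admissible command)
        state = torch.cat([self.encode(f, self.obs_gru) for f in obs_fields], dim=-1)
        scores = [self.scorer(torch.cat([state, self.encode(c, self.cmd_gru)], dim=-1))
                  for c in commands]
        return torch.softmax(torch.cat(scores, dim=-1), dim=-1)  # distribution over k commands

policy = CommandScoringPolicy(vocab_size=100)
obs = [torch.randint(0, 100, (1, 5)) for _ in range(8)]   # dummy token ids
cmds = [torch.randint(0, 100, (1, 4)) for _ in range(3)]  # 3 candidate commands
print(policy(obs, cmds))  # shape (1, 3), sums to 1
```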
So this 256" }, { "start": 824.88, "end": 830.4, "text": " dimensional vector will contain all the necessary information about the current" }, { "start": 830.4, "end": 835.52, "text": " room what's in there what what items are you still missing what items do you have" }, { "start": 835.52, "end": 839.96, "text": " in your inventory which ones are unnecessary and so on. So if you train" }, { "start": 839.96, "end": 846.4, "text": " this correctly this 256 dimensional vector will describe the current game" }, { "start": 846.4, "end": 851.76, "text": " state as it is relevant to your agent like everything about it every" }, { "start": 851.76, "end": 857.24, "text": " relevant information that's in here will be encoded in this vector. Now this" }, { "start": 857.24, "end": 863.4399999999999, "text": " vector isn't the final state encoding yet what you'll have is you feed this into" }, { "start": 863.4399999999999, "end": 869.92, "text": " an RNN that takes as input the last time steps you have to imagine the last time" }, { "start": 869.92, "end": 876.4, "text": " step already there was observation blah blah blah this entire thing was I'm just" }, { "start": 876.4, "end": 883.52, "text": " copying I'm just copying this box over here so this entire thing was already" }, { "start": 883.52, "end": 890.28, "text": " done last step and already fed into an RNN so this this is an RNN that actually" }, { "start": 890.28, "end": 896.8, "text": " goes over time and the last whatever the output here is it will be fed to the" }, { "start": 896.8, "end": 902.1999999999999, "text": " next step and this is a trick often done in reinforcement learning as well that" }, { "start": 902.2, "end": 908.44, "text": " you actually have a recurrent neural network over the time steps so each" }, { "start": 908.44, "end": 912.8000000000001, "text": " time step you have a certain observation you encode it and so on you get a" }, { "start": 912.8000000000001, "end": 917.88, "text": " description of that and then you feed this into an RNN what the RNN can learn" }, { "start": 917.88, "end": 925.84, "text": " to do is it can learn to react to different not only to the current" }, { "start": 925.84, "end": 929.96, "text": " observation but to the current observation conditioned on the history" }, { "start": 929.96, "end": 935.88, "text": " of previous observations so it can learn before I was in this room now I'm in this" }, { "start": 935.88, "end": 942.2800000000001, "text": " new room so I actually haven't you know taken all the items from this room yet" }, { "start": 942.2800000000001, "end": 949, "text": " because I just came into this room and so on so the the kind of component where" }, { "start": 949, "end": 954.52, "text": " you are able to look at the past and what happened in the past is in captured" }, { "start": 954.52, "end": 965.24, "text": " by this RNN here so it's fairly complicated architecture but this here" }, { "start": 965.24, "end": 973, "text": " this state encoding that is conditioned on the also on the history then goes into" }, { "start": 973, "end": 980.72, "text": " this into here that's it that's the vector that goes in here is combined" }, { "start": 980.72, "end": 988.1600000000001, "text": " with each action so all of these actions here these K actions and this is all fed" }, { "start": 988.1600000000001, "end": 994.64, "text": " through a neural network and that will give you the policy this is a fairly" }, { "start": 994.64, "end": 1000.48, "text": " complicated thing but if you look at it 
it's not too" }, { "start": 1000.48, "end": 1010.48, "text": " difficult actually so what you'll do is you will take your observations here this" }, { "start": 1010.48, "end": 1016.28, "text": " is all observation it will be encoded and combined with the history in order" }, { "start": 1016.28, "end": 1022.6, "text": " to give you an encoding of the current state on the" }, { "start": 1022.6, "end": 1027.2, "text": " other hand you'll take all of the possible commands that you could" }, { "start": 1027.2, "end": 1033.1200000000001, "text": " perform right now encode each one separately into an embedding and" }, { "start": 1033.1200000000001, "end": 1039.6, "text": " then you combine each one of those with this encoding you specified previously" }, { "start": 1039.6, "end": 1046.9199999999998, "text": " and from that you make your decision which action to take next and" }, { "start": 1046.9199999999998, "end": 1052.76, "text": " the action that's output is the action you take next sampled from" }, { "start": 1052.76, "end": 1060.6799999999998, "text": " this policy the last thing you need is a value network and this is just important" }, { "start": 1060.6799999999998, "end": 1068.3999999999999, "text": " for reinforcement learning which tells you from this state here so I'm getting" }, { "start": 1068.4, "end": 1075.3600000000001, "text": " weird with colors here from this state here which is the same as this one so" }, { "start": 1075.3600000000001, "end": 1079.8000000000002, "text": " you'd simply transfer this over from this state how valuable is that what's" }, { "start": 1079.8000000000002, "end": 1085.3600000000001, "text": " my value of the state and the value is if I'm in this state and I act as I" }, { "start": 1085.3600000000001, "end": 1091.3200000000002, "text": " normally act what are all my future rewards going to be combined so it" }, { "start": 1091.3200000000002, "end": 1096.2800000000002, "text": " basically gives you a value of this state you can think of this for" }, { "start": 1096.28, "end": 1102.48, "text": " example in terms of chess if you had this in chess then this here would" }, { "start": 1102.48, "end": 1108, "text": " be a description of the chessboard this HT and the value would be how valuable" }, { "start": 1108, "end": 1111.92, "text": " this position is for you so if you're very much ahead in material and" }, { "start": 1111.92, "end": 1116.92, "text": " position and so on this value would be very high if you're behind this value" }, { "start": 1116.92, "end": 1121.16, "text": " would be very low and this is a neural network simply trying to predict that" }, { "start": 1121.16, "end": 1130.3600000000001, "text": " value so with all of this you now have a very good basis to do reinforcement" }, { "start": 1130.3600000000001, "end": 1137.0400000000002, "text": " learning you have a policy you have a value network and from that you can" }, { "start": 1137.0400000000002, "end": 1142.52, "text": " train an RL agent and this is done classically in an actor critic way where" }, { "start": 1142.52, "end": 1149.8400000000001, "text": " you do advantage learning here the advantage and the policy you train" }, { "start": 1149.84, "end": 1155, "text": " weighted by the advantage then the value network you train to be close to the" }, { "start": 1155, "end": 1159.56, "text": " reward and then you have an entropy penalty if you don't know what these" }, { "start": 1159.56,
"end": 1164.12, "text": " things are the video will get bit too long if I were to go over these" }, { "start": 1164.12, "end": 1169.04, "text": " reinforcement learning concepts but these are very standard in reinforcement" }, { "start": 1169.04, "end": 1175.6799999999998, "text": " learning so you can train these you can basically train what it does is you can" }, { "start": 1175.68, "end": 1181.1200000000001, "text": " train these neural networks in absence of label training data because you don't" }, { "start": 1181.1200000000001, "end": 1185.44, "text": " know what the best action is in each step right there's no one telling you" }, { "start": 1185.44, "end": 1189.64, "text": " you just have a reward you just sometimes you get a point and you don't" }, { "start": 1189.64, "end": 1195.64, "text": " know which actions led to that so these things will actually allow you to train" }, { "start": 1195.64, "end": 1201.52, "text": " these neural networks by using just the reward without knowing which exact" }, { "start": 1201.52, "end": 1206.52, "text": " actions were right and wrong and that's the core of reinforcement learning" }, { "start": 1206.52, "end": 1216, "text": " obviously alright so the the core one of the core ingredients actually is this" }, { "start": 1216, "end": 1225.48, "text": " recipe manager and the recipe manager is a sub model that does the following so" }, { "start": 1225.48, "end": 1234.64, "text": " here it takes as an input the cookbook here and it also takes as an input the" }, { "start": 1234.64, "end": 1241.72, "text": " inventory and it outputs something like this and this this is a this is a table" }, { "start": 1241.72, "end": 1248.32, "text": " representation of what it outputs it will output all the ingredients that you" }, { "start": 1248.32, "end": 1256.12, "text": " need for the recipe whether or not this input that this ingredient is currently" }, { "start": 1256.12, "end": 1265.72, "text": " missing from your inventory and action to perform so which actions still need" }, { "start": 1265.72, "end": 1272.56, "text": " to be performed so let's look at the following let's look at this example the" }, { "start": 1272.56, "end": 1276.56, "text": " recipe tells you you need the ingredients are a carrot a red hot pepper" }, { "start": 1276.56, "end": 1283.9199999999998, "text": " and a white onion and the inventory says you care you're carrying a white onion" }, { "start": 1283.9199999999998, "end": 1295.44, "text": " and a carrot right so down here you see aha we we do actually have we do" }, { "start": 1295.44, "end": 1301.6, "text": " actually have a carrot so it's not missing the carrot isn't missing you" }, { "start": 1301.6, "end": 1305.48, "text": " have it in your inventory the red hot pepper is missing we don't have it in" }, { "start": 1305.48, "end": 1309.56, "text": " the inventory but we need it for the recipe the white onion we need for the" }, { "start": 1309.56, "end": 1317.52, "text": " recipe but it's not missing then it also is for each of the ingredients is" }, { "start": 1317.52, "end": 1322.58, "text": " supposed to tell you this recipe model which of the what you still need to" }, { "start": 1322.58, "end": 1327.52, "text": " perform on it so here it says slice the carrot roast the carrot and you simply" }, { "start": 1327.52, "end": 1331.48, "text": " have a carrot it doesn't say slice the roast that means it's not sliced and" }, { "start": 1331.48, "end": 1336.16, "text": " roasted so the recipe is supposed to output you still need to 
slice and roast" }, { "start": 1336.16, "end": 1342.64, "text": " the carrot here for example for the white onion says fry the white onion and" }, { "start": 1342.64, "end": 1352.8, "text": " as you can see in the inventory it says you're carrying a fried white onion so" }, { "start": 1352.8, "end": 1358.6, "text": " for the white onion you see we don't need to do anything anymore so that the" }, { "start": 1358.6, "end": 1366.9599999999998, "text": " recipe model is basically trying to to make this table here and this table you" }, { "start": 1366.9599999999998, "end": 1372.9599999999998, "text": " can see as an intermediary step in order to do all the other things and the" }, { "start": 1372.9599999999998, "end": 1378.6, "text": " difference here to a pure RL method and this is important the difference is that" }, { "start": 1378.6, "end": 1384.4399999999998, "text": " this representation this intermediate table representation is done explicitly" }, { "start": 1384.44, "end": 1391.3600000000001, "text": " so the recipe model really produces a table like this and not just in other RL" }, { "start": 1391.3600000000001, "end": 1397.4, "text": " methods people go about and make this recipe model output some sort of you" }, { "start": 1397.4, "end": 1402.76, "text": " know let's say a 200 dimensional vector that's supposed to encompass all of this" }, { "start": 1402.76, "end": 1410.16, "text": " information and that doesn't appear to work as well like often that if you" }, { "start": 1410.16, "end": 1415.28, "text": " simply train this end-to-end that will not pick up on the important information" }, { "start": 1415.28, "end": 1420.2, "text": " because the training signal tends to be way too weak you have to imagine you" }, { "start": 1420.2, "end": 1426.0800000000002, "text": " already have this really really big model construction here and you're" }, { "start": 1426.0800000000002, "end": 1431.4, "text": " trying to learn it you're trying to learn it from a tiny reward signal that" }, { "start": 1431.4, "end": 1437.28, "text": " you get at the end right this is very noisy signal now if if you're now trying" }, { "start": 1437.28, "end": 1443.36, "text": " to say well the inputs to these things right this command here and we also saw" }, { "start": 1443.36, "end": 1448.36, "text": " the inputs to these these depend on this recipe model also now are whatever" }, { "start": 1448.36, "end": 1454.16, "text": " giant neural network construction here and we'll all train this end-to-end and" }, { "start": 1454.16, "end": 1458.48, "text": " these will actually not be text these will actually be some sort of latent" }, { "start": 1458.48, "end": 1464.32, "text": " vectors that will often fail because you're now just trying to extract" }, { "start": 1464.32, "end": 1469.36, "text": " information from too noisy of a reward signal so the authors here do actually" }, { "start": 1469.36, "end": 1477.52, "text": " pretty neat separation of that and they train this recipe model with actually an" }, { "start": 1477.52, "end": 1482.24, "text": " augmented data set so they go to freebase and get more food items and" }, { "start": 1482.24, "end": 1488.76, "text": " then they construct a data set that resembles this and train it in a" }, { "start": 1488.76, "end": 1496.56, "text": " supervised way to output tables tables like this so this is is pretty smart and" }, { "start": 1496.56, "end": 1503.28, "text": " I think it's a good lesson if you ever attempt something like this that really" }, { "start": 1503.28, 
"end": 1507.42, "text": " really important information such as this one if you can train it in a" }, { "start": 1507.42, "end": 1512.32, "text": " supervised way as a kind of a pre-processing step to your RL" }, { "start": 1512.32, "end": 1520.56, "text": " procedure that's extremely helpful here you can you can see how this is then" }, { "start": 1520.56, "end": 1528.28, "text": " used so by combining this table that was output from the recipe model and your" }, { "start": 1528.28, "end": 1537.4399999999998, "text": " inventory and the output of this look command you can then generate these" }, { "start": 1537.44, "end": 1542.56, "text": " commands so before we said it's important to reduce the everything you could do" }, { "start": 1542.56, "end": 1548.6000000000001, "text": " which is infinite things to everything that is reasonable to do currently and" }, { "start": 1548.6000000000001, "end": 1556.2, "text": " this model here does that so given this given that and given the description of" }, { "start": 1556.2, "end": 1563.04, "text": " what's currently in the room you can now generate these commands and for example" }, { "start": 1563.04, "end": 1567.44, "text": " take knife if you have to slice something because you see a knife is in" }, { "start": 1567.44, "end": 1573.8, "text": " the room and you could conceivably take the knife right you can construct these" }, { "start": 1573.8, "end": 1580.12, "text": " commands but also since you know right since you know what's since you know" }, { "start": 1580.12, "end": 1585.8, "text": " what's in your inventory and since you know which things are still missing you" }, { "start": 1585.8, "end": 1591.56, "text": " can generate commands like take the white onion or drop the water because" }, { "start": 1591.56, "end": 1597.9199999999998, "text": " you don't need the water right so um the the offers also group these things here" }, { "start": 1597.9199999999998, "end": 1602.04, "text": " in this what they call high-level commands which take all required items" }, { "start": 1602.04, "end": 1607.6399999999999, "text": " from here simply means take everything that's in the room that is not in the" }, { "start": 1607.6399999999999, "end": 1612.76, "text": " inventory but you need it so these things which for an RL agent it makes" }, { "start": 1612.76, "end": 1618.44, "text": " sense to group these things together because it doesn't make sense to have" }, { "start": 1618.44, "end": 1623.56, "text": " them as two separate things if you need both take both if you don't need any" }, { "start": 1623.56, "end": 1628.88, "text": " what if you have a new entry drop all of these things so that makes sense that's" }, { "start": 1628.88, "end": 1636.04, "text": " a small optimization that apparently brought some gains but the kind of the" }, { "start": 1636.04, "end": 1641.72, "text": " the overarching message here is that once you have a once you have this" }, { "start": 1641.72, "end": 1647.52, "text": " information from the recipe model you can then use it in many useful ways in" }, { "start": 1647.52, "end": 1656.44, "text": " order to make life for your RL agent easier alright so that kind of is the" }, { "start": 1656.44, "end": 1661.74, "text": " entire model that's very it's quite convoluted but basically you start with" }, { "start": 1661.74, "end": 1666.84, "text": " this here this recipe manager you decide you output this table down here which" }, { "start": 1666.84, "end": 1674.48, "text": " ingredients are in the recipe are they still missing 
and which actions we need" }, { "start": 1674.48, "end": 1679.64, "text": " to perform you then combine it with this information here the information about" }, { "start": 1679.64, "end": 1684.64, "text": " the current room and your inventory in order to come up with a set of commands" }, { "start": 1684.64, "end": 1690.32, "text": " that are conceivable to do here you combine these commands with some" }, { "start": 1690.32, "end": 1697.72, "text": " commands that are always available so commands that are always available are" }, { "start": 1697.72, "end": 1705.84, "text": " things like look inventory prepare meal you add that one if the" }, { "start": 1705.84, "end": 1711.68, "text": " recipe manager does not output any missing ingredients and the agent's location is the" }, { "start": 1711.68, "end": 1718.56, "text": " kitchen so you can add these other items and also we're not even gonna get into" }, { "start": 1718.56, "end": 1722.76, "text": " that you add navigational items because there are doors in these rooms and you" }, { "start": 1722.76, "end": 1728.8, "text": " need to navigate around so they actually train another model here you see to" }, { "start": 1728.8, "end": 1738.32, "text": " detect directions that you could move into and open doors for every" }, { "start": 1738.32, "end": 1742.18, "text": " closed door in the room so that's another challenge that the agent needs to" }, { "start": 1742.18, "end": 1746.84, "text": " overcome they have to build an entire model to predict which doors are there" }, { "start": 1746.84, "end": 1752.04, "text": " and are they closed do you need to open them so if there are" }, { "start": 1752.04, "end": 1757.08, "text": " doors and if you can move through them these commands are also added to this" }, { "start": 1757.08, "end": 1761.24, "text": " set of commands that are reasonable so now we have a set of commands that are" }, { "start": 1761.24, "end": 1768.8799999999999, "text": " reasonable over here then you describe the room here you put both into this" }, { "start": 1768.8799999999999, "end": 1775.44, "text": " embedding and then finally your policy outputs an action that's the" }, { "start": 1775.44, "end": 1781.72, "text": " entire process very convoluted very big very astonishing that this works with" }, { "start": 1781.72, "end": 1788.2, "text": " RL but in order to get it to work you actually need to do this supervised" }, { "start": 1788.2, "end": 1794.3600000000001, "text": " training and the experimental evidence here is quite solid in that they compare" }, { "start": 1794.3600000000001, "end": 1803.48, "text": " to baseline systems that use classic techniques and they do some" }, { "start": 1803.48, "end": 1811.2, "text": " ablation over their individual parts and they get second place I think" }, { "start": 1811.2, "end": 1817.56, "text": " in a competition about these text-based games so that's pretty good and that was" }, { "start": 1817.56, "end": 1847.1599999999999, "text": " it for me and check it out and bye bye" } ]
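As a companion to the recipe-manager description above, here is a minimal sketch of the kind of table it is said to produce, using the carrot/pepper/onion example from the video. This is an illustrative reconstruction under assumed data structures, not the actual model (which predicts this table with a supervised network trained on augmented Freebase data):

```python
# Illustrative only: derive the recipe-manager table from a recipe
# (ingredient -> required directions) and an inventory
# (ingredient -> directions already applied to the carried item).
def recipe_table(recipe, inventory):
    table = []
    for item, needed in recipe.items():
        done = set(inventory.get(item, []))
        table.append({
            "ingredient": item,
            "missing": item not in inventory,
            "actions": [a for a in needed if a not in done],
        })
    return table

recipe = {"carrot": ["slice", "roast"],
          "red hot pepper": ["slice", "fry"],
          "white onion": ["fry"]}
inventory = {"carrot": [], "white onion": ["fry"]}  # carrying a carrot and a fried white onion
for row in recipe_table(recipe, inventory):
    print(row)
# {'ingredient': 'carrot', 'missing': False, 'actions': ['slice', 'roast']}
# {'ingredient': 'red hot pepper', 'missing': True, 'actions': ['slice', 'fry']}
# {'ingredient': 'white onion', 'missing': False, 'actions': []}
```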
_PyusGsbBPY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Stochastic RNNs without Teacher-Forcing
[ "Science & Technology" ]
[ "NeurIPS2018", "NIPS2018", "NLP", "deep learning", "RNN" ]
We present a stochastic non-autoregressive RNN that does not require teacher-forcing for training. The content is based on our 2018 NeurIPS paper: Deep State Space Models for Unconditional Word Generation https://arxiv.org/abs/1806.04550
Hi everybody, my name is Florian and Yannic was nice enough to host me here as a guest to talk about Stochastic RNNs without teacher forcing. This is based on recent work, Deep State Space Models for Unconditional Word Generation, which we presented at this year's NeurIPS. And if you'd like any more details, please check out the paper. We focus on a de facto standard training hack for any RNNs that generate text. It's called teacher forcing and it's used in any model, whether unconditional or conditional, such as in a sentence autoencoder or in a translation model. To understand where teacher forcing comes from, we first need to understand where text generation comes from. For the good or the bad, and here we will focus on the bad, text generation has its roots in language modeling. So language modeling is the problem of predicting the next word, given all the previous words. People used to use n-gram models for this, but today people use recurrent neural networks to do that. Such recurrent neural networks or RNNs factorize the joint observation probability of a sequence that I here depict as W into independent softmax distributions over individual tokens. So for every time step, there's a softmax function. And the softmax is conditioned on a hidden state. And all the magic of the RNN goes into the function that gives you the new state, given the old hidden state. Usually this is called a transition function, F, and as an input it gets the last state and the last word. So F could be a GRU function or an LSTM function. Just like any other language model, you can turn this into a generative model of text. Let's look at the dependencies that you would have at test time. There's an initial hidden state H1. We sample a new word. We use our transition function F and it gives us the new state H2. Then we can sample a new word W2, feed it back, get a new state, sample a new word, feed it back. It's important to note that all the stochasticity in the output is solely due to the stochasticity in the sampling process, because the transition function is deterministic. So far there's nothing to complain about. But so far I've only talked about test time. At training time there is a catch. This is where teacher forcing kicks in. It turns out that you can't learn this model by basing the evolution of the hidden states on your own predictions. You have to use teacher forcing and that means you substitute your own prediction by the ground truth. So at training time there's no sampling loop. You just take the ground truth token and feed it into your state transition function. So that feels unintuitive because at test time we do something else than we do at training time. And it's also known in the literature for a few years to cause biases. So why is that problematic? Remember we come from language modeling. In language modeling we could argue that if our only goal is to predict one word given the previous words, then of course we can use the ground truth context, the ground truth previous words. But if we're interested in generating longer sequences, then we need to learn what to memorize. And in particular we need to become robust against our own predictions because we might make mistakes at test time and there's no ground truth at test time. Just to get this confirmed by somebody who has worked in the field for years, at the NeurIPS representation learning workshop Alex Graves mentioned teacher forcing as one of the big three problems for autoregressive models.
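The two regimes contrasted above differ in a single line of the loop. Here is a minimal illustrative PyTorch sketch (my own, not the paper's code; all sizes and names are made up): teacher forcing feeds the ground-truth token into the transition, while free-running generation feeds back the model's own sample:

```python
import torch
import torch.nn as nn

vocab, emb_dim, hid = 50, 16, 32
embed = nn.Embedding(vocab, emb_dim)
cell = nn.GRUCell(emb_dim, hid)      # the deterministic transition f
readout = nn.Linear(hid, vocab)      # parameterizes the per-step softmax

def step(h, token):
    h = cell(embed(token), h)
    return h, torch.distributions.Categorical(logits=readout(h))

# Teacher forcing (training): the transition always sees the ground truth.
ground_truth = torch.randint(0, vocab, (10, 1))
h, loss = torch.zeros(1, hid), 0.0
for t in range(9):
    h, dist = step(h, ground_truth[t])              # feed w_t from the data
    loss = loss - dist.log_prob(ground_truth[t + 1]).mean()

# Free running (test): the transition sees the model's own sample.
h, token = torch.zeros(1, hid), torch.zeros(1, dtype=torch.long)
for t in range(9):
    h, dist = step(h, token)
    token = dist.sample()                            # feed back own prediction
```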
And in his own words, teacher forcing might lead to predicting one step ahead, not many, and potentially brittle generation and myopic representations. How have people addressed teacher forcing so far? There are approaches that try to mitigate the problem. For example, by blending together these two views, training time and test time, so that sometimes you use your own prediction during training, but sometimes you use the ground truth. We believe for a rigorous model of text generation, we need a rigorous model of uncertainty. This should be an integral part of any generative model and therefore it should be the same model both at training time and test time without any hacks. We propose a fundamentally different approach by proposing a new transition function. The new transition function is non-autoregressive. That means it depends on the last state, ht-1, but it doesn't depend on the last word. That means teacher forcing is not an option anymore, but it also means teacher forcing is not a problem anymore. Instead, the transition function accepts a white noise vector as the second input. Now you might wonder why do we need noise at all as an input to the transition function? Well, for a given prefix, there might be different continuations. So we need some source of entropy to model the entropy in different continuations. The rest of the paper pretty much focuses on the following two questions. A. Which function f is powerful enough to turn the most simple noise source, just a standard Gaussian vector, into something that is powerful enough to replace the autoregressive feedback mechanism of a standard RNN? And the second question is, of course, how do we train this? What framework do we train this in? And it will turn out that variational flows are suitable functions f and variational inference is the right framework to train them. So here's the roadmap to complete the model. First, we need to cast the generative model as a probabilistic model because so far I've only sketched a procedure that involves sampling some noise and then applying some function and then predicting observations. Then we need to propose a variational inference model so that we can do maximum likelihood training. We will derive an ELBO, which is our objective. Then in the paper, we also describe how the tightness of the ELBO can be improved. And here I will finish by talking a bit about the evaluation and what we do to inspect the model. Since this work is based a lot on variational flows, let me give you a quick summary of variational flows. A variational flow is a diffeomorphism f, which maps from what I will call a simple noise space, Xi, to a complex noise space, H. And here I'm already using the notation for our sequence model. Simply by the change of variable formula, we know that the probability of an event H in the complex space is simply the probability of the event in the simple space Xi as given by the inverse of f, times a Jacobian term with respect to f evaluated at Xi. How can we use this in our sequential setting? First, let me fix some notation because sequential models are pretty prone to overloaded notation. I'll write time as t running from 1 to capital T. And whenever I talk about a sequence of variables like w, I don't index them. I just write w without an index. And only when I need a specific element, I'll write it as wt. Let's formalize the generative model. We start out with the probability of observing a sequence w.
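For reference, the change-of-variables identity sketched verbally above can be written out as follows; this is my rendering of the standard formula in the speaker's notation (f maps simple noise ξ to a state h), not a quote from the paper:

```latex
p_H(h) \;=\; p_\Xi\!\left(f^{-1}(h)\right)\,
\left|\det \frac{\partial f^{-1}(h)}{\partial h}\right|
```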
And since we use the latent variable model, we marginalize out the latent variables H. And then we will assume that the overall dependencies between hidden states H and observations w follow like an HMM type of dependency. That means the new state only depends on the last state and the current observation only depends on the current state. And now the question is how do we model these transitions? I've so far pitched the ideas of sampling noise and then using some transition function f. And we have seen flows already. Now we are ready to combine the two. We propose a transition function fg, which has the signature as I mentioned before. It gets a hidden state and noise vector as an input. And it gives you a new state as an output. This can be seen as a conditional flow because any ht minus 1, any last state, inserted as the first argument into fg, induces a flow which maps from the simple noise distribution to the space of new hidden states. And as I've said before, for the prior distribution in the simple noise space, we simply assume it's a standard Gaussian. Let's look at this graphically, because in the end this is a graphical model. I copied over the formulas from the last slide. And at the bottom you see the graphical model. First we have a sequence of stochastic variables Xi. Those deterministically induce via the transition function f, via the flow, a sequence of hidden states. And those independently predict the observations. All the magic is in the transition. So let me sketch this process here in the big circle. How do we get from the last state h2 to the new state h3? Let's say h2 encodes a prefix and there are two possible continuations. They're equally likely in the corpus, so there are two potential new states. The blue state h3 and the yellow state h3. I've sketched the standard Gaussian noise distribution at the top. There are yellow samples and there are blue samples. The flow realizes a mapping that takes any yellow sample and maps it to the yellow hidden state. And it maps any blue sample to the blue hidden state. So with probability one half in this situation, we either get a blue or a yellow sample from the simple noise distribution. And it will induce new states, blue h3 or the yellow h3. So far we have proposed the generative model. Now the question is how do we train it if we don't know the hidden states? The answer is variational inference and in particular, amortized variational inference. The key idea of variational inference is to introduce a parameterized approximate inference model. How do we propose such a model? Well, a good recipe is to first look at a true posterior. The probability of a state sequence given an observation sequence. The true posterior turns out to factorize into individual components, which give us the probability of a state given the last state and the future observations. It turns out that we can formulate this inference model using two ingredients that should be familiar. First, we use a transition function Fq, which induces a flow. It has the same signature as Fg for the generative model. And we use a noise source q. But now the noise source isn't uninformative anymore. In variational inference, the inference network is informed about the data. So there's a base distribution q of Xi t, which is allowed to look at the data Wt. Now compare this to teacher forcing. In teacher forcing, we substitute our own predictions by inserting ground truth information into the generative model. 
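The generative process just described (noise in, state out, token out, nothing fed back) fits in a few lines. A minimal illustrative sketch follows, with one caveat: for readability the stand-in for f_g below is a plain MLP, whereas the paper needs f_g to be an invertible flow in its second argument so the density of the states stays tractable; all names and sizes are assumptions:

```python
import torch
import torch.nn as nn

hid, vocab, T = 32, 50, 10
readout = nn.Linear(hid, vocab)
# Stand-in for the conditional flow f_g(h_{t-1}, xi_t): a plain MLP here,
# purely for readability; a real implementation uses an invertible flow in xi.
f_g = nn.Sequential(nn.Linear(2 * hid, hid), nn.Tanh())

h, tokens = torch.zeros(1, hid), []
for t in range(T):
    xi = torch.randn(1, hid)                 # xi_t ~ N(0, I), the only entropy source
    h = f_g(torch.cat([h, xi], dim=-1))      # h_t = f_g(h_{t-1}, xi_t), no word fed back
    w = torch.distributions.Categorical(logits=readout(h)).sample()
    tokens.append(w)                         # w_t depends on h_t alone
print(torch.stack(tokens).squeeze())         # a sampled sequence of 10 token ids
```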
In variational inference, it's very clear how to use the data. The data enters through the inference model and it enters in the form of future observations, because the past observations we want to store in the hidden state. It remains to derive an ELBO, which is the usual evidence lower bound objective used for variational inference. Any ELBO, whether it's in a sequential setting or not, factorizes into two parts, a reconstruction loss and a model mismatch term. Here, reconstruction loss means the probability of an observation given a state. And model mismatch is between the generative model P and the inference model q. This is what is usually written as a KL divergence. To derive our ELBO, we follow the literature on flows. In the first step, we introduce the flow of the inference model Fq. We turn the expectation with respect to the complex state space H into an expectation with respect to the simple noise distribution. And then, of course, at the same time, the flow appears inside the expectation. And we get the log-determinant terms that I've mentioned before. In the second step, we introduce the generative flow Fg using the same change of variable technique. It's possible to write out the ELBO in a way so that there's only one Jacobian term for both flows and so that the generative model always appears as the inverse concatenated with the inference flow. In a second, I'll show you what the interpretation of that is. Let's quickly recap what we've seen so far. There's a generative model. It consists of a generative flow Fg and an uninformed noise source. There's an inference model, which contains an inference flow Fq and a simple base distribution across the noise variables q of xi. In the ELBO, the two flows appear concatenated, and we can interpret this in the following way. The inference model q proposes a noise vector, xi t, that is informed about the future. The inference flow maps this to a hidden state. At the hidden state, the reconstruction loss lives. This is where we pay a price for making a bad prediction. However, the inference model cannot encode all the possible information about the future into the hidden state, ht, because the mapping continues to the simple noise space of the generative model. And the inference model must make sure that the proposal also covers significant probability mass under the uninformed prior. This trade-off between reconstruction and model mismatch is common to all ELBOs. But here we highlight the special situation where we have two flows, one for the inference model and one for the generative model. In our paper, we also show how we can use the recently proposed importance weighted autoencoder to improve the tightness of our bound, but I'll skip those steps here. Instead, let's quickly talk about evaluation. We apply our model to unconditional generation. So why in hell would somebody look into unconditional generation? Well, actually, it turns out it's harder than conditional generation. If you know what the French sentence looks like, it's much easier to continue a partial English translation. But it's not only harder, it's also more interesting to inspect which information a sequence model needs to store and which information it can forget. We use two metrics to evaluate our model. First, we look at sequence cross entropy. So we compare the model's sequence distribution to the data sequence distribution. Usually estimating the data distribution is impossible.
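Schematically, the decomposition described above can be written as below. This is my transcription of the generic sequential ELBO in the talk's notation, not the paper's exact equation (which additionally rearranges the Jacobian terms of the two flows):

```latex
\log p(w) \;\ge\;
\underbrace{\mathbb{E}_{q(\xi \mid w)}\Big[\sum_{t=1}^{T} \log p\big(w_t \mid h_t\big)\Big]}_{\text{reconstruction}}
\;-\;
\underbrace{\mathrm{KL}\big(q(\xi \mid w)\,\big\|\,p(\xi)\big)}_{\text{model mismatch}},
\qquad h_t = f_q\big(h_{t-1}, \xi_t\big)
```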
You don't want to say that the probability of a sentence is how many times the sentence has appeared in the training data. However, for words, we can use unigram frequencies of words in a corpus as a pretty reliable estimate. Also, we can get an estimate of our model's probability assigned to a sequence by using MC sampling. We take the marginal likelihood, sample k trajectories, and assess the probability that the trajectories assign to the given sequence. Since our model is not autoregressive, the sequence isn't tied to an observation. So we can actually use the same sequences of hidden states to evaluate probabilities for all the words in the vocabulary. Since we've pitched our noise model as the key contribution to our generative model, we want to empirically verify that the model is being used. Working with a clean probabilistic model allows us to use tools from probability theory to assess that. We use the mutual information between a noise vector at time t and the observation at time t. So this measures how much information in the output is actually due to the noise model. Before showing you the numbers, let's quickly go across the parameterization of our model. For the flows, we look at shift-scaling transformations. And if the scaling g is lower triangular, we can compute the Jacobian determinant efficiently. We also look at Real NVP and we compose flows by concatenation. The base distribution of our inference model depends on the future observations, which we summarize using a GRU RNN. The base distribution itself is a diagonal Gaussian. We use a state size of 8 and also run some experiments for 16 and 32. All the numbers are in the paper, so here are just the take-home messages. We are on par or better than a stochastic RNN with teacher forcing trained at the same state size. Also, we observed that a powerful generative flow is essential to achieve good performance. Furthermore, we can confirm that the importance weighted ELBO improved the results. This is the first model applying generative flows to sequence modeling. So naturally, we are interested in comparing the expressiveness of fg and fq. Our paper has a table that compares four choices for both flows. Our findings are that the generative flow should be powerful and the inference flow should be slightly less powerful. To understand our noise model, we look at the mutual information at every time step and show a box plot for all of them. Initially, the mutual information is highest, which means the initial character is most important to remember. The noise model is never being ignored and we see increased variance in the remaining time steps because we are averaging here across different sequences. A non-autoregressive model needs to have lower entropy in the observation model, because any entropy under the observation model is forgotten, as there is no feedback. The purple line shows you the observation model entropy during training. The dashed red line shows you the entropy of the observation model of a baseline. So indeed, we have lower entropy in the observation model and at the same time, in green, you see the mutual information increasing. Let's summarize our findings. Using variational flows, non-autoregressive modeling of sequences is possible and teacher forcing is not necessary. At the same time, we get a noise model that is the driving factor of the sequence model and is easy to interpret. For any details, please check out the paper and for any questions, shoot me an email.
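The shift-and-scale transformation mentioned above is simple to sketch. In the illustrative version below the scaling is diagonal (a special case of lower triangular), so the log-determinant of the Jacobian is just a sum; conditioning the shift and scale on the previous state, and all names, are my assumptions rather than the paper's exact parameterization:

```python
import torch
import torch.nn as nn

class ShiftScaleFlow(nn.Module):
    """h = mu(h_prev) + g(h_prev) * xi, with elementwise scale g > 0."""
    def __init__(self, hid):
        super().__init__()
        self.net = nn.Linear(hid, 2 * hid)   # predicts shift and log-scale

    def forward(self, h_prev, xi):
        mu, log_g = self.net(h_prev).chunk(2, dim=-1)
        h = mu + log_g.exp() * xi            # invertible in xi for fixed h_prev
        logdet = log_g.sum(dim=-1)           # log|det dh/dxi| of a diagonal Jacobian
        return h, logdet

flow = ShiftScaleFlow(hid=8)                 # state size 8, as in the talk
h_prev, xi = torch.zeros(1, 8), torch.randn(1, 8)
h, logdet = flow(h_prev, xi)
print(h.shape, logdet.shape)                 # torch.Size([1, 8]) torch.Size([1])
```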
[ { "start": 0, "end": 6, "text": " Hi everybody, my name is Florian and Janik was nice enough to host me here as a guest to talk about" }, { "start": 6, "end": 14, "text": " Stochastic RNNs without teacher forcing. This is based on recent work, deep state space models for" }, { "start": 14, "end": 21, "text": " unconditional word generation, which we presented at this year's New RIPs. And if you feel like any more details," }, { "start": 21, "end": 29, "text": " please check out the paper. We focus on a de facto standard training hack for any RNNs that generate" }, { "start": 29, "end": 37, "text": " text. It's called teacher forcing and it's used in any model, whether unconditional or conditional," }, { "start": 37, "end": 45, "text": " such as in a sentence autoencoder or in a translation model. To understand where teacher forcing comes from," }, { "start": 45, "end": 52, "text": " we first need to understand where text generation comes from. For the good or the bad, and here we will focus on the bad," }, { "start": 52, "end": 60, "text": " text generation has its roots in language modeling. So language modeling is the problem of predicting the next word," }, { "start": 60, "end": 69, "text": " given all the previous words. People used to use ANGRA models for this, but today people use recurrent neural networks to do that." }, { "start": 69, "end": 78, "text": " Such recurrent neural networks or RNNs factorize the joint observation probability of a sequence that I here depict as W" }, { "start": 78, "end": 86, "text": " into independent softmax distributions over individual tokens. So for every time step, there's a softmax function." }, { "start": 86, "end": 93, "text": " And the softmax is conditioned on a hidden state. And all the magic of the RNN goes into the function that gives you the new state," }, { "start": 93, "end": 101, "text": " given the old hidden state. Usually this is called a transition function, F, and as an input it gets the last state and the last word." }, { "start": 101, "end": 111, "text": " So F could be a GUO function or an LSTM function. Just like any other language model, you can turn this into a generative model of text." }, { "start": 111, "end": 118, "text": " Let's look at the dependencies that you would have at test time. There's initial hidden state H1. We sample a new word." }, { "start": 118, "end": 126, "text": " We use our transition function F and it gives us the new state H2. Then we can sample a new word W2, feed it back," }, { "start": 126, "end": 135, "text": " get a new state, sample a new word, feed it back. It's important to note that all the stochasticity in the output is solely due to the stochasticity" }, { "start": 135, "end": 142, "text": " in the sampling process, because the transition function is deterministic. So far there's nothing to complain about." }, { "start": 142, "end": 149, "text": " But so far I've only talked about test time. At training time there is a catch. This is where teacher forcing kicks in." }, { "start": 149, "end": 156, "text": " It turns out that you can't learn this model by basing the evolution of the hidden states on your own predictions." }, { "start": 156, "end": 161, "text": " You have to use teacher forcing and that means you substitute your own prediction by the ground truth." }, { "start": 161, "end": 168, "text": " So at training time there's no sampling loop. You just take the ground truth token and feed it into your state transition function." 
}, { "start": 168, "end": 173, "text": " So that feels unintuitive because at test time we do something else than we do at training time." }, { "start": 173, "end": 179, "text": " And it's also known in the literature for a few years to cause biases. So why is that problematic?" }, { "start": 179, "end": 188, "text": " Remember we come from language modeling. In language modeling we could argue that if our only goal is to predict one word given the previous words," }, { "start": 188, "end": 192, "text": " then of course we can use the ground truth context to ground truth previous words." }, { "start": 192, "end": 198, "text": " But if we're interested in generating like longer sequences, then we need to learn what to memorize." }, { "start": 198, "end": 207, "text": " And in particular we need to become robust against our own predictions because we might make mistakes at test time and there's no ground truth at test time." }, { "start": 207, "end": 215, "text": " Just to get this confirmed by somebody who has worked in the field for years, at the NeurIPS representation learning workshop Alex Grave mentioned" }, { "start": 215, "end": 220, "text": " teacher forcing as one of the big three problems for autoregressive models." }, { "start": 220, "end": 230, "text": " And in his own words, teacher forcing might lead to predict one step ahead, not many and potentially brittle generation and myopic representations." }, { "start": 230, "end": 235, "text": " How have people addressed teacher forcing so far? There are approaches to try to mitigate the problem." }, { "start": 235, "end": 242, "text": " For example, by blending together these two views, training time and test time, so that sometimes you use your own prediction during training," }, { "start": 242, "end": 249, "text": " but sometimes you use the ground truth. We believe for a rigorous model of text generation, we need a rigorous model of uncertainty." }, { "start": 249, "end": 258, "text": " This should be an integral part of any generative model and therefore it should be the same model both at training time and test time without any hacks." }, { "start": 258, "end": 263, "text": " We propose a fundamentally different approach by proposing a new transition function." }, { "start": 263, "end": 273, "text": " The new transition function is non autoregressive. That means it depends on the last stage, ht-1, but it doesn't depend on the last word." }, { "start": 273, "end": 279, "text": " That means teacher forcing is not an option anymore, but it also means teacher forcing is not a problem anymore." }, { "start": 279, "end": 284, "text": " Instead, the transition function accepts a white noise vector as the second input." }, { "start": 284, "end": 289, "text": " Now you might wonder why do we need noise at all as an input to the transition function?" }, { "start": 289, "end": 293, "text": " Well, for a given prefix, there might be different continuations." }, { "start": 293, "end": 298, "text": " So we need some source of entropy to model the entropy in different continuations." }, { "start": 298, "end": 303, "text": " The rest of the paper pretty much focuses on the following two questions." }, { "start": 303, "end": 311, "text": " A. Which function f is powerful enough to turn the most simple noise source, just the standard Gaussian vector," }, { "start": 311, "end": 317, "text": " into something that is powerful enough to replace the autoregressive feedback mechanism of a standard RNN?" 
}, { "start": 317, "end": 322, "text": " And the second question is, of course, how do we train this? What framework do we train this in?" }, { "start": 322, "end": 331, "text": " And it will turn out that variational flows are suitable functions f and variational inference is the right framework to train them." }, { "start": 331, "end": 334, "text": " So here's the roadmap to complete the model." }, { "start": 334, "end": 340, "text": " First, we need to cast the generative model as a probabilistic method because so far I've only sketched a procedure" }, { "start": 340, "end": 346, "text": " that involves sampling some noise and then applying some function and then predicting observations." }, { "start": 346, "end": 351, "text": " Then we need to propose a variational inference model so that we can do maximum likelihood training." }, { "start": 351, "end": 354, "text": " We will derive an elbow, which is our objective." }, { "start": 354, "end": 359, "text": " Then in the paper, we also describe how the tightness of the elbow can be improved." }, { "start": 359, "end": 365, "text": " And here I will finish by talking a bit about the evaluation and what we do to inspect the model." }, { "start": 365, "end": 372, "text": " Since this work is based a lot on variational flows, let me give you a quick summary of variational flows." }, { "start": 372, "end": 382, "text": " A variational flow is a diffeomorphism f, which maps from what I will call a simple noise space, Xi, to a complex noise space, H." }, { "start": 382, "end": 386, "text": " And here I'm already using the notation for our sequence model." }, { "start": 386, "end": 395, "text": " Simply by the change of variable formula, we know that the probability of an event H in the complex space is simply the probability of the event" }, { "start": 395, "end": 404, "text": " in the simplest space Xi as given by the inverse of f times a Jacobian term with respect to f evaluated at Xi." }, { "start": 404, "end": 407, "text": " How can we use this in our sequential setting?" }, { "start": 407, "end": 413, "text": " First, let me fix some notation because sequential models are pretty prone to overloaded notation." }, { "start": 413, "end": 418, "text": " I'll write time as t running from 1 to capital T." }, { "start": 418, "end": 425, "text": " And whenever I talk about a sequence of variables like w, I don't index them. I just write w without an index." }, { "start": 425, "end": 431, "text": " And only when I need a specific element, I'll write it as wt." }, { "start": 431, "end": 434, "text": " Let's formalize the generative model." }, { "start": 434, "end": 438, "text": " We start out with the probability of observing a sequence w." }, { "start": 438, "end": 443, "text": " And since we use the latent variable model, we marginalize out the latent variables H." }, { "start": 443, "end": 453, "text": " And then we will assume that the overall dependencies between hidden states H and observations w follow like an HMM type of dependency." }, { "start": 453, "end": 459, "text": " That means the new state only depends on the last state and the current observation only depends on the current state." }, { "start": 459, "end": 462, "text": " And now the question is how do we model these transitions?" }, { "start": 462, "end": 467, "text": " I've so far pitched the ideas of sampling noise and then using some transition function f." }, { "start": 467, "end": 472, "text": " And we have seen flows already. Now we are ready to combine the two." 
}, { "start": 472, "end": 478, "text": " We propose a transition function fg, which has the signature as I mentioned before." }, { "start": 478, "end": 481, "text": " It gets a hidden state and noise vector as an input." }, { "start": 481, "end": 484, "text": " And it gives you a new state as an output." }, { "start": 484, "end": 494, "text": " This can be seen as a conditional flow because any ht minus 1, any last state, inserted as the first argument into fg," }, { "start": 494, "end": 502, "text": " induces a flow which maps from the simple noise distribution to the space of new hidden states." }, { "start": 502, "end": 510, "text": " And as I've said before, for the prior distribution in the simple noise space, we simply assume it's a standard Gaussian." }, { "start": 510, "end": 514, "text": " Let's look at this graphically, because in the end this is a graphical model." }, { "start": 514, "end": 517, "text": " I copied over the formulas from the last slide." }, { "start": 517, "end": 519, "text": " And at the bottom you see the graphical model." }, { "start": 519, "end": 523, "text": " First we have a sequence of stochastic variables Xi." }, { "start": 523, "end": 530, "text": " Those deterministically induce via the transition function f, via the flow, a sequence of hidden states." }, { "start": 530, "end": 533, "text": " And those independently predict the observations." }, { "start": 533, "end": 536, "text": " All the magic is in the transition." }, { "start": 536, "end": 540, "text": " So let me sketch this process here in the big circle." }, { "start": 540, "end": 545, "text": " How do we get from the last state h2 to the new state h3?" }, { "start": 545, "end": 549, "text": " Let's say h2 encodes a prefix and there are two possible continuations." }, { "start": 549, "end": 554, "text": " They're equally likely in the corpus, so there are two potential new states." }, { "start": 554, "end": 558, "text": " The blue state h3 and the yellow state h3." }, { "start": 558, "end": 562, "text": " I've sketched the standard Gaussian noise distribution at the top." }, { "start": 562, "end": 565, "text": " There are yellow samples and there are blue samples." }, { "start": 565, "end": 570, "text": " The flow realizes a mapping that takes any yellow sample and maps it to the yellow hidden state." }, { "start": 570, "end": 574, "text": " And it maps any blue sample to the blue hidden state." }, { "start": 574, "end": 580, "text": " So with probability one half in this situation, we either get a blue or a yellow sample from the simple noise distribution." }, { "start": 580, "end": 586, "text": " And it will induce new states, blue h3 or the yellow h3." }, { "start": 586, "end": 589, "text": " So far we have proposed the generative model." }, { "start": 589, "end": 593, "text": " Now the question is how do we train it if we don't know the hidden states?" }, { "start": 593, "end": 598, "text": " The answer is variational inference and in particular, amortized variational inference." }, { "start": 598, "end": 604, "text": " The key idea of variational inference is to introduce a parameterized approximate inference model." }, { "start": 604, "end": 606, "text": " How do we propose such a model?" }, { "start": 606, "end": 610, "text": " Well, a good recipe is to first look at a true posterior." }, { "start": 610, "end": 614, "text": " The probability of a state sequence given an observation sequence." 
}, { "start": 614, "end": 619, "text": " The true posterior turns out to factorize into individual components," }, { "start": 619, "end": 625, "text": " which give us the probability of a state given the last state and the future observations." }, { "start": 625, "end": 631, "text": " It turns out that we can formulate this inference model using two ingredients that should be familiar." }, { "start": 631, "end": 636, "text": " First, we use a transition function Fq, which induces a flow." }, { "start": 636, "end": 639, "text": " It has the same signature as Fg for the generative model." }, { "start": 639, "end": 642, "text": " And we use a noise source q." }, { "start": 642, "end": 646, "text": " But now the noise source isn't uninformative anymore." }, { "start": 646, "end": 650, "text": " In variational inference, the inference network is informed about the data." }, { "start": 650, "end": 656, "text": " So there's a base distribution q of Xi t, which is allowed to look at the data Wt." }, { "start": 656, "end": 659, "text": " Now compare this to teacher forcing." }, { "start": 659, "end": 666, "text": " In teacher forcing, we substitute our own predictions by inserting ground truth information into the generative model." }, { "start": 666, "end": 669, "text": " In variational inference, it's very clear how to use the data." }, { "start": 669, "end": 675, "text": " The data enters through the inference model and it enters in the form of future observation" }, { "start": 675, "end": 679, "text": " because the past observation we want to store in the hidden state." }, { "start": 679, "end": 686, "text": " It remains to derive an elbow, which is the usual evidence lower bound objective used for variational inference." }, { "start": 686, "end": 691, "text": " Any elbow, whether it's in a sequential setting or not, factorizes into two parts," }, { "start": 691, "end": 694, "text": " a reconstruction loss and a model mismatch term." }, { "start": 694, "end": 699, "text": " Here, reconstruction loss means probability of observation given a state." }, { "start": 699, "end": 704, "text": " And model mismatch is between the generative model P and the inference model q." }, { "start": 704, "end": 708, "text": " This is what is usually written as a KL divergence." }, { "start": 708, "end": 713, "text": " To derive our elbow, we follow the literature on flows." }, { "start": 713, "end": 718, "text": " In the first step, we introduced the flow on the inference model Fq." }, { "start": 718, "end": 727, "text": " We turn the expectation with respect to the complex state space H into an expectation with respect to the simple noise distribution." }, { "start": 727, "end": 732, "text": " And then, of course, at the same time, the flow appears inside the expectation." }, { "start": 732, "end": 736, "text": " And we get the log-determinant terms that I've mentioned before." }, { "start": 736, "end": 743, "text": " In the second step, we introduced the generative flow Fg using the same change of variable technique." }, { "start": 743, "end": 748, "text": " It's possible to write out the elbow in a way so that there's only one Jacobian term for both flows" }, { "start": 748, "end": 754, "text": " and so that the generative model always appears as the inverse concatenated with the inference flow." }, { "start": 754, "end": 757, "text": " In a second, I'll show you what the interpretation of that is." }, { "start": 757, "end": 760, "text": " Let's quickly recap what we've seen so far." 
}, { "start": 760, "end": 762, "text": " There's a generative model." }, { "start": 762, "end": 767, "text": " It consists of a generative flow Fg and an uninformed noise source." }, { "start": 767, "end": 772, "text": " There's an inference model, which contains an inference flow Fq" }, { "start": 772, "end": 777, "text": " and a simple base distribution across the noise variables q of xi." }, { "start": 777, "end": 783, "text": " In the elbow, the two flows appear concatenated, and we can interpret this in the following way." }, { "start": 783, "end": 789, "text": " The inference model q proposes a noise vector, xi t, that is informed about the future." }, { "start": 789, "end": 792, "text": " The inference flow maps this to a hidden state." }, { "start": 792, "end": 796, "text": " At the hidden state, the reconstruction loss lives." }, { "start": 796, "end": 799, "text": " This is where we pay a price for making a bad prediction." }, { "start": 799, "end": 806, "text": " However, the inference model cannot encode all the possible information about the future into the hidden state, ht," }, { "start": 806, "end": 811, "text": " because the mapping continues to the simple noise space of the generative model." }, { "start": 811, "end": 818, "text": " And the inference model must make sure that the proposal also covers significant probability mass under the uninformed prior." }, { "start": 818, "end": 823, "text": " This trade-off between reconstruction and model mismatch is common to all elbows." }, { "start": 823, "end": 830, "text": " But here we highlight the special situation where we have two flows, one for the inference model and one for the generative model." }, { "start": 830, "end": 839, "text": " In our paper, we also show how we can use the recently proposed important weighted autoencoder to improve the tightness of our bound, but I'll skip those steps here." }, { "start": 839, "end": 843, "text": " Instead, let's quickly talk about evaluation." }, { "start": 843, "end": 846, "text": " We apply our model to unconditional generation." }, { "start": 846, "end": 849, "text": " So why in hell would somebody look into unconditional generation?" }, { "start": 849, "end": 853, "text": " Well, actually, it turns out it's harder than conditional generation." }, { "start": 853, "end": 859, "text": " If you know what the French sentence looks like, it's much easier to continue a partial English translation." }, { "start": 859, "end": 869, "text": " But it's not only harder, it's also more interesting to inspect which information does a sequence model need to store and which information can it forget." }, { "start": 869, "end": 871, "text": " We use two metrics to evaluate our model." }, { "start": 871, "end": 873, "text": " First, we look at sequence cross entropy." }, { "start": 873, "end": 879, "text": " So we compare the model's sequence distribution to the data sequence distribution." }, { "start": 879, "end": 883, "text": " Usually estimating the data distribution is impossible." }, { "start": 883, "end": 889, "text": " You don't want to say that the probability of a sentence is how many times the sentence has appeared in the training data." }, { "start": 889, "end": 895, "text": " However, for words, we can use unigram frequencies of words in a corpus as a pretty reliable estimate." }, { "start": 895, "end": 902, "text": " Also, we can get an estimate of our model's probability assigned to a sequence by using MC sampling." 
}, { "start": 902, "end": 910, "text": " We take the marginal likelihood, sample k trajectories, and assess the probability that the trajectories assigned to the given sequence." }, { "start": 910, "end": 914, "text": " Since our model is not autoregressive, the sequence isn't tied to an observation." }, { "start": 914, "end": 921, "text": " So we can actually use the same sequences of hidden states to evaluate probabilities for all the words in the vocabulary." }, { "start": 921, "end": 930, "text": " Since we've pitched our noise model as the key to contribution to our generative model, we want to empirically verify that the model is being used." }, { "start": 930, "end": 936, "text": " Working with a clean probabilistic model allows us to use tools from probability theory to assess that." }, { "start": 936, "end": 942, "text": " We use the mutual information between a noise vector at time t and the observation of time t." }, { "start": 942, "end": 947, "text": " So this measures how much information in the output is actually due to the noise model." }, { "start": 947, "end": 952, "text": " Before showing you the numbers, let's quickly go across the parameterization of our model." }, { "start": 952, "end": 956, "text": " For the flows, we look at shift scaling transformations." }, { "start": 956, "end": 962, "text": " And if the scaling g is lower triangular, we can compute efficiently the Jacobian determinant." }, { "start": 962, "end": 967, "text": " We also look at real NVP and we compose flows by concatenation." }, { "start": 967, "end": 974, "text": " The base distribution of our inference model depends on the future observations, which we summarize using a GRU RNN." }, { "start": 974, "end": 977, "text": " The base distribution itself is a diagonal Gaussian." }, { "start": 977, "end": 982, "text": " We use a state size of 8 and also run some experiments for 16 and 32." }, { "start": 982, "end": 986, "text": " All the numbers are in the paper, so here are just the take-home messages." }, { "start": 986, "end": 992, "text": " We are on par or better than a domestic RNN with teacher forcing trained at the same state size." }, { "start": 992, "end": 997, "text": " Also, we observed that a powerful generative flow is essential to achieve good performance." }, { "start": 997, "end": 1003, "text": " Furthermore, we can confirm that important weightless elbow improved the results." }, { "start": 1003, "end": 1007, "text": " This is the first model applying generative flows to sequence modeling." }, { "start": 1007, "end": 1012, "text": " So naturally, we are interested in comparing the expressiveness of fg and fq." }, { "start": 1012, "end": 1016, "text": " Our paper has a table that compares four choices for both flows." }, { "start": 1016, "end": 1024, "text": " Our findings are that the generative flow should be powerful and the inference flow should be slightly less powerful." }, { "start": 1024, "end": 1031, "text": " To understand our noise model, we look at the mutual information at every time step and show a box spot for all of them." }, { "start": 1031, "end": 1037, "text": " Initially, the mutual information is highest, which means the initial character is most important to remember." }, { "start": 1037, "end": 1046, "text": " The noise model is never being ignored and we see increased variance in the remaining time steps because we are averaging here across different sequences." 
}, { "start": 1046, "end": 1057, "text": " A non-autoregressive model needs to have lower entropy in the observation model because any underentropy under the observation model is being forgotten because there is no feedback." }, { "start": 1057, "end": 1062, "text": " The purple line shows you the observation model entropy during training." }, { "start": 1062, "end": 1067, "text": " The dashed red line shows you the entropy on the observation model of a baseline." }, { "start": 1067, "end": 1075, "text": " So indeed, we have lower entropy in the observation model and at the same time in green, you see the mutual information increasing." }, { "start": 1075, "end": 1078, "text": " Let's summarize our findings." }, { "start": 1078, "end": 1085, "text": " Using variational flows, non-autoregressive modeling of sequences is possible and teacher forcing is not necessary." }, { "start": 1085, "end": 1092, "text": " At the same time, we get a noise model that is the driving factor of the sequence model and is easy to interpret." }, { "start": 1092, "end": 1120, "text": " For any details, please check out the paper and for any questions, shoot me an email." } ]
-YiMVR3HEuY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reinforcement Learning with Unsupervised Auxiliary Tasks
[ "Science & Technology" ]
[ "machine learning", "artificial intelligence", "ai", "deep learning", "unsupervised learning", "research", "academia", "paper", "review", "agents", "tasks" ]
https://arxiv.org/abs/1611.05397 Abstract: Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10× and averaging 87\% expert human performance on Labyrinth. Authors: Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
Hi there, today we're looking at reinforcement learning with unsupervised auxiliary tasks by Google. So in this paper the authors consider a reinforcement learning task and I can show you what it looks like. It looks like this kind of a maze, or this is an example that they give, where you have to navigate the maze. It's 3D and you have to navigate from pixel inputs, you have to collect apples and reach the goal, and this gives you rewards. So on the left you can see what the agent is actually seeing, on the right you can see it from a top-down view. The problem is of course that the reward is very sparse, meaning that you have to navigate a lot of maze before you even get a single point. So reinforcement learning has big trouble with this because it relies on constant reward to notice what actions are good and what actions are bad. So what the authors propose is, in addition to the regular loss that you would have, so your reward which is this thing, you would also have an additional set of auxiliary tasks, and here C goes over the auxiliary control tasks that you specify. Each of those has a reward and you're also trying to maximize these, each with some kind of a weight here. And the thing is that the parameters that you maximize over control all of the different tasks, so they are partly shared between the tasks. So what you're hoping is that by kind of learning to do one thing you also learn to do another thing. Now we've seen kind of work like this before, where you do it more in an autoencoder setting. So for example, the agent sees the input on the left here and it kind of tries to predict what the next input will be, what the next frame will be. The thought behind this is, if you can accurately predict what the next frame will be, maybe you learn something useful about the environment. In this work it's different because now we couple a reward to these tasks, and I can show you here what the authors propose as additional rewards. Sorry, they're further on top. Let me go there. Basically they consider here these two auxiliary control tasks. So pixel changes, which means that the agent actually tries to actively change pixels. So it gets a reward for changing the pixels in the input, so it tries to maximize this. It needs to learn: what do I need to do to maximize my pixel changes? And probably that will be moving around. So it will learn to kind of move around, not move against the wall, because if it moves against the wall the pixels won't change. So it will kind of learn to move along, like how a regular human agent would also move: not into a wall, not into a dead end or something, such that the pixels always change. Of course it's not perfect. You can also change your pixels quite a bit by simply spinning around in a circle. But this is one auxiliary task that they augment the agent with. The other one is network features. So it's kind of meta learning here. You actually reward the agent for changing its own internal activations. So the hope is that it kind of learns something about itself: how can I activate my internal neural network units? And it gets rewarded for that. So it might want to activate a lot of them and want to learn how they're activated.
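To make the pixel-change idea concrete, here is a toy Python sketch of such an auxiliary pseudo-reward. Hedged: the paper defines its pixel-control task over average pixel changes in cells of a cropped observation and learns a separate Q-function per cell; the sketch below only illustrates the reward signal itself, and the cell size of 4 and the 84x84 frames are assumptions of this example.

# A toy sketch of a pixel-change pseudo-reward: the agent is rewarded
# for how much the observation changes between consecutive frames.
import numpy as np

def pixel_change_reward(frame_prev, frame_next, cell=4):
    """Mean absolute pixel change, pooled over cell x cell regions.
    frames: (H, W, C) float arrays with H and W divisible by `cell`."""
    diff = np.abs(frame_next - frame_prev).mean(axis=-1)  # per-pixel change
    h, w = diff.shape
    pooled = diff.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))
    return pooled  # one pseudo-reward per spatial cell

prev = np.zeros((84, 84, 3))
next_ = np.ones((84, 84, 3)) * 0.5             # everything changed a lot
print(pixel_change_reward(prev, next_).mean())  # high intrinsic reward

Note how spinning in a circle would also score highly here, which is exactly the imperfection mentioned above.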
So with this kind of self-introspection, you also hope that it kind of leads to a network that does more sophisticated tasks, or that by nature of trying to get the most pixel changes and the most network feature activations, you also learn something useful for the actual task. So these are the two tasks they propose. In addition, they also do, and they have a drawing of this over here, they also do a lot of other things. Namely on the top left, you can kind of see here we have a base agent. This is an A3C agent, meaning that it's an actor critic. So you learn a policy and you learn a value network. We might go over this in a future video. So just consider this a standard reinforcement learning agent. You feed its experience into a replay buffer. And out of the replay buffer, you do many things. So for one, you try to learn these auxiliary tasks. Note that these are shared parameters between all of these networks. That's why the auxiliary tasks actually help. But you also try to better learn your value function. They call this off-policy learning because you kind of pause the base agent training for a while and then you train the value function some more, just because that helps. You also try reward prediction from here. And the way they do it, as they explain, is kind of in a skewed sampling way. So out of all the situations you can be in, the agent will have a reward very, very few times. So what they do is they simply sample out of the replay buffer, out of all the experiences they've had so far, and they sample more frequently the experiences where they have actually gotten a reward. That way the hope is, of course, that if you look at the experience here where you actually get an apple, then the agent might learn a lot faster: oh, there's some kind of apple there and I move towards it to get a reward. So that's the hope, that you instantly recognize high reward situations and kind of are not so interested in non-reward situations. Of course, it does introduce bias into your sampling and you might decide for yourself if that's good or bad. But here it seems to work. So they have a lot of experiments in this task, this labyrinth task, and, as is usual with research, they reach state of the art, they're much better than anything else. No, I mean they don't boast this much, so it's actually fair comparisons. The criticisms that I have (they also evaluate on Atari games, by the way) are twofold. First of all, the choice of auxiliary tasks is, of course, completely up to the implementer, which means that I have to decide as an implementer of this algorithm what my auxiliary tasks will be. And here, pixel changes and network features seem like fairly general tasks that you could apply to a lot of these kinds of problems, but it always kind of comes down to how much knowledge about the task you would like to code into the actor. And here, I mean, you can see it makes sense to get at least the pixel changes as an auxiliary task, but it's questionable how much domain knowledge this already encodes. So the choice of these is certainly something that you have to decide as a human. And I think these are good choices. They're not too domain specific, but they do correspond to some kind of visual moving-around game task. And the other kind of criticism, not really a criticism, it's just a remark, is that they do a lot of things.
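The skewed sampling itself is simple to sketch. Below is a minimal Python illustration, assuming transitions are (observation, action, reward) tuples; the paper balances rewarding and non-rewarding frames roughly equally, but the exact mechanics (and the p_rewarding=0.5 default here) are assumptions of this sketch.

# A sketch of skewed replay sampling: draw experiences that carried
# reward far more often than the rare frequency at which they occur.
import random

def sample_skewed(replay_buffer, p_rewarding=0.5):
    """With probability p_rewarding, sample a transition with nonzero
    reward; otherwise sample uniformly from the whole buffer."""
    rewarding = [t for t in replay_buffer if t[2] != 0]
    if rewarding and random.random() < p_rewarding:
        return random.choice(rewarding)
    return random.choice(replay_buffer)

buffer = [("s%d" % i, 0, 0.0) for i in range(999)] + [("apple", 1, 1.0)]
hits = sum(sample_skewed(buffer)[2] > 0 for _ in range(1000))
print(hits)  # roughly 500 of 1000 draws see the single rewarding transition

This is exactly the bias mentioned above: the one apple experience is seen about half the time, instead of one time in a thousand.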
So their paper is about the auxiliary tasks, but they also then do this skewed sampling and the off-policy value learning and so on. And of course, you can kind of argue, yeah, this is all done in other reinforcement learning works. That's why it's a fair comparison. I guess it's a philosophical question. If you want to reach state of the art, of course, you have to first of all get a better method here. This will be the auxiliary tasks. This is the new idea. And then implement all the tricks that the other people have discovered, which is good because you kind of reach the highest performance you can get. But the problem is also that you make it harder to compare, you make it harder to see where the improvement is coming from. Have you simply chosen better hyperparameters for the reward prediction and things? Is there maybe an interaction between the auxiliary tasks and the skewed sampling part? All of these kinds of things wash out and it's not really clear where the improvement is coming from. On the other hand, if you simply take a basic, basic, basic algorithm, like just A3C here on the top left, and you augment it with nothing but these auxiliary tasks on the bottom left, and then you see an improvement, you can be relatively sure it's due to your new idea. But of course, you won't reach any state of the art numbers, because everyone that does A3C also does these tricks. No question here. I'm standing more on the side of not doing the tricks, or maybe doing both. Yeah, but decide for yourself and have a nice day.
[ { "start": 0, "end": 6.48, "text": " Hi there, today we're looking at reinforcement learning with unsupervised auxiliary tasks" }, { "start": 6.48, "end": 9.64, "text": " by Google." }, { "start": 9.64, "end": 14.6, "text": " So in this paper the authors consider a reinforcement learning task and I can show you what it looks" }, { "start": 14.6, "end": 16.92, "text": " like." }, { "start": 16.92, "end": 22.64, "text": " It looks like this kind of a maze or this is an example that they give where you have" }, { "start": 22.64, "end": 27.64, "text": " to navigate the maze, it's 3D and you have to navigate from pixel inputs, you have to" }, { "start": 27.64, "end": 31.52, "text": " collect apples and reach the goal and this gives you rewards." }, { "start": 31.52, "end": 36, "text": " So on the left you can see what the agent is actually seeing, on the right you can see" }, { "start": 36, "end": 38.68, "text": " it from a top down view." }, { "start": 38.68, "end": 45.72, "text": " The problem is of course that the input is very, or the reward is very sparse, meaning" }, { "start": 45.72, "end": 52.78, "text": " that you have to navigate a lot of maze before you even get a single point." }, { "start": 52.78, "end": 58.96, "text": " So reinforcement learning has a big trouble with this because it relies on constant reward" }, { "start": 58.96, "end": 62.5, "text": " to notice what actions are good and what actions are bad." }, { "start": 62.5, "end": 71.2, "text": " So what the authors propose is in addition to the regular loss that you would have, so" }, { "start": 71.2, "end": 79.72, "text": " your reward which is this thing, you would also have an additional set of auxiliary tasks" }, { "start": 79.72, "end": 86.4, "text": " and here C goes over the auxiliary control tasks that you specify." }, { "start": 86.4, "end": 92.44, "text": " Each of those has a reward and you're also trying to maximize these each with some kind" }, { "start": 92.44, "end": 94.4, "text": " of a weight here." }, { "start": 94.4, "end": 99.84, "text": " And the thing is that the parameters that you maximize over control all of the different" }, { "start": 99.84, "end": 104.22, "text": " tasks so they are partly shared between the tasks." }, { "start": 104.22, "end": 109.08, "text": " So what you're hoping is that by kind of learning to do one thing you also learn to do another" }, { "start": 109.08, "end": 111.12, "text": " thing." }, { "start": 111.12, "end": 118.72, "text": " So the difference between this and let's say, you might have, so we've seen kind of work" }, { "start": 118.72, "end": 125, "text": " of it like this before where you do it more like an autoencoder setting." }, { "start": 125, "end": 130.88, "text": " So for example you can't, the agent sees the input on the left here and it kind of tries" }, { "start": 130.88, "end": 135.2, "text": " to predict what the next input will be, what the next frame will be." }, { "start": 135.2, "end": 139.32, "text": " The thought behind this is if you can accurately predict what the next frame will be maybe" }, { "start": 139.32, "end": 142.64, "text": " learn something useful about the environment." }, { "start": 142.64, "end": 150.79999999999998, "text": " In this work it's different because now we couple a reward to these tasks and I can show" }, { "start": 150.79999999999998, "end": 155.67999999999998, "text": " you here what the authors propose as additional rewards." 
}, { "start": 155.67999999999998, "end": 158.72, "text": " Sorry, they're further on top." }, { "start": 158.72, "end": 161.67999999999998, "text": " Let me go there." }, { "start": 161.68, "end": 167.04000000000002, "text": " Basically they consider here these two auxiliary control tasks." }, { "start": 167.04000000000002, "end": 176.72, "text": " So pixel changes which means that the agent actually tries to actively change pixels." }, { "start": 176.72, "end": 181.56, "text": " So it gets a reward for changing the pixels in the input." }, { "start": 181.56, "end": 183.8, "text": " So it tries to maximize this." }, { "start": 183.8, "end": 189.44, "text": " It needs to learn what do I need to do to maximize my pixel changes and probably that" }, { "start": 189.44, "end": 191.24, "text": " will be moving around." }, { "start": 191.24, "end": 195.64000000000001, "text": " So it will learn to kind of move around, not move against the wall because if it moves" }, { "start": 195.64000000000001, "end": 199.08, "text": " against the wall the pixels won't change." }, { "start": 199.08, "end": 208.60000000000002, "text": " So it will kind of learn to move along the, like how a regular human agent would also" }, { "start": 208.60000000000002, "end": 214.56, "text": " move not into a wall, not like into a dead end or something such that the pixels always" }, { "start": 214.56, "end": 215.56, "text": " change." }, { "start": 215.56, "end": 217.32000000000002, "text": " Of course it's not perfect." }, { "start": 217.32, "end": 223.51999999999998, "text": " You can also change your pixels quite a bit by simply spinning around in a circle." }, { "start": 223.51999999999998, "end": 227.6, "text": " But this is one auxiliary tasks that they augment the agent with." }, { "start": 227.6, "end": 229.68, "text": " The other one is network features." }, { "start": 229.68, "end": 233.12, "text": " So it's kind of a meta learning here." }, { "start": 233.12, "end": 244.76, "text": " You actually reward the agent for changing its own internal activations." }, { "start": 244.76, "end": 249.79999999999998, "text": " So the hope is that it kind of learns about something by itself." }, { "start": 249.79999999999998, "end": 256.12, "text": " How can I activate my internal neural network units?" }, { "start": 256.12, "end": 257.48, "text": " And it gets rewarded for that." }, { "start": 257.48, "end": 261.92, "text": " So it might want to activate a lot of them and want to learn how they're activated." }, { "start": 261.92, "end": 268.84, "text": " So this kind of self-interspection, you also hope that it kind of leads to a network that" }, { "start": 268.84, "end": 278.47999999999996, "text": " does more sophisticated tasks or that by nature of trying to get most pixel changes and the" }, { "start": 278.47999999999996, "end": 284.35999999999996, "text": " most network feature activations that you also learn something useful for the actual" }, { "start": 284.35999999999996, "end": 286.88, "text": " task." }, { "start": 286.88, "end": 290.32, "text": " So these are the two tasks they propose." }, { "start": 290.32, "end": 296.84, "text": " In addition, they also do, and they have a drawing of this over here." }, { "start": 296.84, "end": 303.84, "text": " They also do a lot of other things, namely on the top left, you can kind of see here" }, { "start": 303.84, "end": 307.23999999999995, "text": " we have a database agent." 
}, { "start": 307.23999999999995, "end": 313.2, "text": " This is an A3C agent, meaning that it's an actor critic." }, { "start": 313.2, "end": 316.23999999999995, "text": " So you learn a policy and you learn a value network." }, { "start": 316.23999999999995, "end": 318.88, "text": " We might go over this in a future video." }, { "start": 318.88, "end": 322.96, "text": " So just consider this a standard reinforcement learning agent." }, { "start": 322.96, "end": 326.59999999999997, "text": " You feed its experience into a replay buffer." }, { "start": 326.6, "end": 329.96000000000004, "text": " And out of the replay buffer, you do many things." }, { "start": 329.96000000000004, "end": 335.96000000000004, "text": " So for one, you try to learn these auxiliary tasks." }, { "start": 335.96000000000004, "end": 340.24, "text": " Note that these are shared parameters between all of these networks." }, { "start": 340.24, "end": 343.6, "text": " That's why the auxiliary tasks actually help." }, { "start": 343.6, "end": 347.28000000000003, "text": " But you also try to better learn your value function." }, { "start": 347.28000000000003, "end": 356.12, "text": " They call this off policy learning because you kind of pause the base agent training" }, { "start": 356.12, "end": 362.28000000000003, "text": " for a while and then you train the value function some more, just because that helps." }, { "start": 362.28000000000003, "end": 366.4, "text": " You also try a reward prediction from here." }, { "start": 366.4, "end": 371.48, "text": " And the way they do it, as they explain, is kind of in a skewed sampling way." }, { "start": 371.48, "end": 380.04, "text": " So out of all the situations you can be in, the agent will have a reward very, very few" }, { "start": 380.04, "end": 381.28000000000003, "text": " times." }, { "start": 381.28, "end": 386.64, "text": " So what they do is they simply sample out of the replay buffer, out of all the experiences" }, { "start": 386.64, "end": 393.76, "text": " they've had so far, they sample more frequently the experiences where they have actually gotten" }, { "start": 393.76, "end": 395.14, "text": " a reward." }, { "start": 395.14, "end": 405.91999999999996, "text": " That way the hope is, of course, the agent, if you look at the experience here where you" }, { "start": 405.92, "end": 412.32, "text": " actually get an apple, then the agent might learn a lot faster, oh, there's some kind" }, { "start": 412.32, "end": 416.68, "text": " of apple there and I move towards it to get a reward." }, { "start": 416.68, "end": 424.04, "text": " So that's the hope that you instantly recognize high reward situations and kind of are not" }, { "start": 424.04, "end": 426.44, "text": " so interested in non-reward situations." }, { "start": 426.44, "end": 432.44, "text": " Of course, it does introduce biased near sampling and you might decide for yourself if that's" }, { "start": 432.44, "end": 433.44, "text": " good or bad." }, { "start": 433.44, "end": 436.6, "text": " But here it seems to work." }, { "start": 436.6, "end": 446.04, "text": " So they have a lot of experiments in this task, this labyrinth task, and they, of course," }, { "start": 446.04, "end": 451.08, "text": " as is with research, they reach state of the art, they're much better than anything else." }, { "start": 451.08, "end": 453.64, "text": " No, I mean they don't boast this much." }, { "start": 453.64, "end": 457.84, "text": " So it's actually fair comparisons." 
}, { "start": 457.84, "end": 464.47999999999996, "text": " The criticisms, so they also evaluate on Atari games, the criticisms that I have are twofold." }, { "start": 464.47999999999996, "end": 472.84, "text": " First of all, the choice of auxiliary tasks is, of course, completely up to the implementer," }, { "start": 472.84, "end": 479.59999999999997, "text": " which means that I have to decide as an implementer of this algorithm what my auxiliary task will" }, { "start": 479.59999999999997, "end": 480.59999999999997, "text": " be." }, { "start": 480.59999999999997, "end": 485.15999999999997, "text": " And here, pixel changes and network features, they seem like fairly general tasks that you" }, { "start": 485.16, "end": 491.08000000000004, "text": " could apply to a lot of these kind of problems, but it always kind of comes down to how much" }, { "start": 491.08000000000004, "end": 497.48, "text": " knowledge about the task would you like to code into the actor." }, { "start": 497.48, "end": 504.40000000000003, "text": " And here, I mean, you can see it makes sense to get at least the pixel changes as an auxiliary" }, { "start": 504.40000000000003, "end": 511.40000000000003, "text": " task, but it's questionable how much of kind of domain knowledge this already encodes." }, { "start": 511.4, "end": 519.68, "text": " So the fact, the choice of these are certainly something that you have to decide as a human." }, { "start": 519.68, "end": 521.9599999999999, "text": " And I think these are good choices." }, { "start": 521.9599999999999, "end": 528.64, "text": " So they're not too domain specific, but also they do correspond to like some kind of visual" }, { "start": 528.64, "end": 532.68, "text": " moving around game task." }, { "start": 532.68, "end": 540.88, "text": " And the other kind of criticisms, not really criticisms, it's just a remark, is that they" }, { "start": 540.88, "end": 542.84, "text": " do a lot of things." }, { "start": 542.84, "end": 549.4, "text": " So their paper is about the auxiliary tasks, but they also then do these skewed sampling" }, { "start": 549.4, "end": 552.56, "text": " and the off-policy value learning and so on." }, { "start": 552.56, "end": 559.52, "text": " And of course, you can kind of argue, yeah, this is all done in other reinforcement learning" }, { "start": 559.52, "end": 560.52, "text": " tasks." }, { "start": 560.52, "end": 562.72, "text": " That's why it's a fair comparison." }, { "start": 562.72, "end": 566.16, "text": " I guess it's a philosophical question." }, { "start": 566.16, "end": 572.3199999999999, "text": " If you want to reach state of the art, of course, you have to first of all, get a better" }, { "start": 572.3199999999999, "end": 573.6, "text": " method here." }, { "start": 573.6, "end": 575.04, "text": " This will be the auxiliary tasks." }, { "start": 575.04, "end": 576.48, "text": " This is the new idea." }, { "start": 576.48, "end": 585.04, "text": " And then implement all the tricks that the other people have discovered, which is good" }, { "start": 585.04, "end": 588.04, "text": " because you kind of reach the highest performance you can get." }, { "start": 588.04, "end": 596.48, "text": " But also the problem is you make it harder to compare, you make it harder to see where" }, { "start": 596.48, "end": 598.1999999999999, "text": " the improvement is coming from." }, { "start": 598.1999999999999, "end": 605.76, "text": " Have you simply chosen better hyperparameters for the reward predictions of things?" 
}, { "start": 605.76, "end": 611.4, "text": " Is there an interaction maybe between the auxiliary tasks and the skewed sampling part?" }, { "start": 611.4, "end": 615.16, "text": " All of these kind of things wash out and it's not really clear where the improvement is" }, { "start": 615.16, "end": 616.16, "text": " coming from." }, { "start": 616.16, "end": 623, "text": " On the other hand, if you simply take a basic, basic, basic algorithm, like just A3C here" }, { "start": 623, "end": 630.9599999999999, "text": " on the top left, and you augment it with nothing but these auxiliary tasks on the bottom left," }, { "start": 630.9599999999999, "end": 635.52, "text": " and then you see an improvement, you can be relatively sure it's due to your new idea." }, { "start": 635.52, "end": 640.48, "text": " But of course, you won't reach any state of the art numbers because everyone that does" }, { "start": 640.48, "end": 645.16, "text": " A3C also does these tricks." }, { "start": 645.16, "end": 647.12, "text": " No question here." }, { "start": 647.12, "end": 653.12, "text": " I'm standing more on the side of not doing the tricks or maybe doing both." }, { "start": 653.12, "end": 676.08, "text": " Yeah, but decide for yourself and have a nice day." } ]
xTzFJIknh7E
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
TransCoder: Unsupervised Translation of Programming Languages (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper" ]
Code migration between languages is an expensive and laborious task. To translate from one language to the other, one needs to be an expert at both. Current automatic tools often produce illegible and complicated code. This paper applies unsupervised neural machine translation to source code of Python, C++, and Java and is able to translate between them, without ever being trained in a supervised fashion. OUTLINE: 0:00 - Intro & Overview 1:15 - The Transcompiling Problem 5:55 - Neural Machine Translation 8:45 - Unsupervised NMT 12:55 - Shared Embeddings via Token Overlap 20:45 - MLM Objective 25:30 - Denoising Objective 30:10 - Back-Translation Objective 33:00 - Evaluation Dataset 37:25 - Results 41:45 - Tokenization 42:40 - Shared Embeddings 43:30 - Human-Aware Translation 47:25 - Failure Cases 48:05 - Conclusion Paper: https://arxiv.org/abs/2006.03511 Abstract: A transcompiler, also known as source-to-source translator, is a system that converts source code from a high-level programming language (such as C++ or Python) to another. Transcompilers are primarily used for interoperability, and to port codebases written in an obsolete or deprecated language (e.g. COBOL, Python 2) to a modern one. They typically rely on handcrafted rewrite rules, applied to the source code abstract syntax tree. Unfortunately, the resulting translations often lack readability, fail to respect the target language conventions, and require manual modifications in order to work properly. The overall translation process is timeconsuming and requires expertise in both the source and target languages, making code-translation projects expensive. Although neural models significantly outperform their rule-based counterparts in the context of natural language translation, their applications to transcompilation have been limited due to the scarcity of parallel data in this domain. In this paper, we propose to leverage recent approaches in unsupervised machine translation to train a fully unsupervised neural transcompiler. We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy. Our method relies exclusively on monolingual source code, requires no expertise in the source or target languages, and can easily be generalized to other programming languages. We also build and release a test set composed of 852 parallel functions, along with unit tests to check the correctness of translations. We show that our model outperforms rule-based commercial baselines by a significant margin. Authors: Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, Guillaume Lample Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! So the paper we're looking at today can take the code on the left, which is written in Python, and can output the code on the right, which is written in C++. Now the point here is that the code on the right does the same thing as the code on the left, so it is implementing the same function. The surprising thing here is that this model that takes the Python as an input has never been explicitly trained to output C++. So this is an unsupervised translation model. And the cool thing about this paper is that by having no target, having no supervised signal at translating source code languages into one another, it can perform pretty well at the task nonetheless. So we're going to look at this paper. It's called Unsupervised Translation of Programming Languages by Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot and Guillaume Lample at Facebook AI Research. As always, if you like content like this, consider sharing it out and leaving a like, and also leaving a comment if you have something to say about it. They say a transcompiler, also known as a source-to-source translator, is a system that converts source code from a high-level programming language such as C++ or Python to another. They say transcompilers are primarily used for interoperability and to port code bases written in an obsolete or deprecated language such as COBOL or Python 2 to a modern one. So for Python 2, you might know this tool that's called 2to3. So 2to3 is a tool that ships with the Python 3 standard library, I believe, that allows you to take Python 2 code and produce Python 3 code. And that is to kind of push people to convert their old code bases of Python 2 to the modern Python 3. Now 2to3 is a handwritten program. It has specific rules built in that the programmers know: if we modify Python 2 like this, Python 3 comes out. For example, the print statement in Python 2 requires no brackets, so we make a rule that whenever there's a print statement with no brackets, we'll add the brackets such that it's Python 3 compliant. Most of these tools will transform the source code first into an abstract syntax tree, modify that, apply specific rules to that, and then output the language from the abstract syntax tree. Now the problem here is manifold. First of all, there can only be so much translation as there are rules. So every one of these rules has to be coded as a modification to the abstract syntax tree, and every one of these rules is handcrafted and therefore needs sort of human ingenuity. Humans need to go and write these rules of how to transform one language into another. And oftentimes, even though you can write these rules, whatever comes out is sort of a bit of a cryptic source code, because you kind of have to make sure that your rules cover all the possible things, and the source code that comes out is oftentimes very cryptic and a bit hard to understand, because it's been sort of expanded and formalized to make sure that it still does the same thing as the original source code. Now for Python 2 to Python 3, this is still easy, right? These languages are extremely similar, because it's not that big of a step to Python 3, except if you use very low-level language constructs or language features which have been obsoleted with Python 3.
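To give a flavor of this rule-based, AST-level rewriting, here is a minimal Python sketch. It is not how 2to3 is actually implemented (2to3 has its own fixer machinery), the raw_input-to-input rule is just one toy example of a handcrafted rewrite, and ast.unparse requires Python 3.9 or newer.

# A toy handcrafted rewrite rule, applied on the abstract syntax tree:
# parse source code, transform the tree, and emit source code again.
import ast

class RenameRule(ast.NodeTransformer):
    """Rewrite rule: replace calls to `raw_input` (Python 2 style)
    with calls to `input` (Python 3)."""
    def visit_Call(self, node):
        self.generic_visit(node)  # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "raw_input":
            node.func.id = "input"
        return node

tree = ast.parse("name = raw_input()")
print(ast.unparse(RenameRule().visit(tree)))  # -> name = input()

Every such rule has to be written by hand by someone who knows both the source and the target dialect, which is exactly the scaling problem described above.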
On the other hand, if there's something like COBOL, so a lot of this old banking code or insurance code or government agency code or whatnot is written in these really old programming languages, and they've been kept alive by these old-school programmers that are slowly but surely all retiring now, and there are just not many new programmers around that can support these languages. And the languages themselves aren't really updated that much anymore, so you would like to transform COBOL into something like... I don't want to call Java a modern programming language, but it is used in modern times. I'd rather not call it modern per se. Java itself is a beast that's been sort of supported since forever, but in any case, you would like to transform something like COBOL to something like Java, where you have a lot of programmers that can develop and further develop your code. This is much harder. COBOL and Java are much further away from each other than Python 2 and Python 3 are. So what you would like to do is to have a tool that is like 2to3, but now if you want a tool like this, you need someone that's really proficient in COBOL and Java in order to write a tool like this, and you need lots of them, and they need to invest lots of time. What you would rather like to do is you would like to learn a system that translates from one language into another, such that the meaning is conserved. And this of course is exactly the domain of natural language machine translation, except it's source code. So we all know, we've all realized in the last years, that things like Google Translate have become extremely good at translating. So they say right here, although neural models significantly outperform their rule-based counterparts in the context of natural language translation (Google Translate is all learned right now), they say their applications to transcompilation have been limited due to the scarcity of parallel data in this domain. So what's the problem with just going and saying, oh, we can build really good neural machine translation models, let's just apply them to source code? The problem is, if you build a neural machine translation model, say something that transforms English to German, so you have the word hello and you output hallo, you can do it not with just one word, but with entire sentences and so on. These models, in the classical sense, what they need are parallel corpora, which means that you have documents that are written in many languages and you can guarantee that they mean the same thing. So this is a supervised signal. One example of this is, let's say, press releases of the United Nations. So the United Nations will make some press release and they will then have professional translators translate that press release into all of the different, or into many different, languages. And so you can pretty much guarantee that these mean the same thing. So these pairs of documents, or triplets or whatnot, they are supervised training data for a machine translation model that translates from one language into the other. And the neural machine translation models rely heavily on these parallel corpora. For source code, you just don't have that as much. You don't have big code bases in great numbers where the exact same thing is implemented in one language and in the other language. There's just not that much data available.
It is the case that sometimes, let's say in the case of Torch, it started in Lua and then it went to PyTorch, and the developers had to translate the code from Torch to Python. But in the same step, they've also made improvements. They sort of re-engineered and reinvented the framework and made it better. And so you can't really say these are the same things. And likewise, there's not a lot of code available where the same thing is implemented in two languages. So we just don't have these parallel corpora for the source code translation. So rather, what this paper does is it goes into unsupervised machine translation. Now what does unsupervised machine translation mean? In unsupervised machine translation, you imagine I have just a big database of documents, and these documents, I know they're all in English. And I have this other big database, and I know that they are all in German. I just know their documents are in German. But they don't correspond to each other. They're just German documents, and over here, they're just English documents. I don't say that these two here are somehow the same. No, I just have a bunch of German, a bunch of English. They don't even have to correspond. They're just text. And now what I want to do is I want to learn a shared embedding space. I sort of want to learn a shared space of embeddings for these two languages, such that similar things are mapped to a similar place. So if these two documents just happen to talk about the same thing, I want them to be mapped to similar spaces in this shared embedding space. So I'm going to have one model, a single model, where I input the text and it goes into this shared embedding space. Okay, now this is unusual, because usually in machine translation, if you translate from English to German, then you'll have your dedicated model that takes English as an input and German as an output. And that would be a different model than one that takes German as an input and English as an output, or French as an input and German as an output. In this case, this process right here and this process right here are the same model. And then the decoder that actually translates to a language, so now we have the encoder embedding, and then the decoder that actually translates to a language, is also going to be the same model. So the same model translates to English as translates to German. So first of all, how do we make the same model? Let's say we have the perfect encoder, right? This is E, the encoder, the same encoder for all languages. Let's say we have the perfect encoder, and whenever a sentence means the same thing in different languages, it can completely map it to the same point in embedding space, irrespective of the language it comes from. Now, how do we tell the decoder, which is also the same model, like, how does it know what to do? This is a little trick where you basically take this embedding, so you take the input, you put it into your model, and then your output is going to be autoregressive, right? So you decode one token at a time. So you decode this token and then you feed it back into the model, and then you decode this token, you feed that back into the model, and so on. This is an autoregressive language model. And the trick here is that the very first token is a special token that describes the language you want to output.
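Here is a small Python sketch of that trick, purely illustrative: the decoder_step function, the token names like <python> and </s>, and the canned outputs are all made up for the example; the real model is a trained transformer decoder conditioning on the shared encoder embedding.

# A sketch of decoding with a target-language token: the shared decoder
# is told which language to produce via the first token it conditions on.
def greedy_decode(decoder_step, memory, lang_token, max_len=50, eos="</s>"):
    """memory: the shared encoder embedding of the source sequence.
    lang_token: e.g. "<python>" or "<java>", prepended to steer output."""
    output = [lang_token]
    for _ in range(max_len):
        next_tok = decoder_step(memory, output)  # autoregressive: feed back
        if next_tok == eos:
            break
        output.append(next_tok)
    return output[1:]  # strip the language tag

# Tiny fake decoder so the sketch runs: always emits a fixed "program".
canned = {"<python>": ["print", "(", '"hi"', ")", "</s>"]}
def fake_step(memory, prefix):
    return canned[prefix[0]][len(prefix) - 1]

print(greedy_decode(fake_step, memory=None, lang_token="<python>"))

During training, producing anything in the wrong language after the tag simply incurs a loss, which is how the decoder learns to respect it.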
So here you say, I want German, and then you let the model decode its thing. And by conditioning on this token right here, it knows it should now produce German. During training, you will simply put the token here, and if it produces something other than German, that's a loss, right? So it will learn to produce German after you produce this tag. Alright, so what we need is an encoder and a decoder, such that in the encoder we can put in any language's text and it will map the things that mean the same things to the same space, and we need a decoder that can produce any language given this first token. Now the decoder should be fairly easy, right? If we have a shared vocabulary between the languages, and we always put this token, the decoder is not a problem. You can just learn the decoder in a straightforward way. But the encoder is going to be a problem. So how does the encoder map the different languages to the same space, such that the same things are ending up in the same place? It seems a bit counterintuitive, right? Because it doesn't know which things correspond to which things. Now the first thing you need is a shared vocabulary. Since we are in a shared space right here, what you need is a shared vocabulary. So you tokenize all of the text with a shared vocabulary. And this vocabulary is going to consist of sort of word pieces. Now if you don't know what word pieces are: in a word piece tokenization, what you would do is you would split words into so-called word pieces. So for example, hello right here might be split into two word pieces. The first word piece might be HE, and the second word piece might be LLO. There is usually some kind of indicator here that this is the end of a word and so on, but we'll simplify. And hallo right here would be HA for the first token and then LLO for the second token. And these kinds of word piece encodings, since the smallest units are going to be the characters themselves, ensure that everything is always in vocabulary. You have no out-of-vocabulary tokens. But here you can already see that if we tokenize the languages like this and then we use the same encoder, so the same encoder will pop them into this shared space, that means that to the model, this and this look like the same thing. It is the same thing, right? It's the same input token in different languages. Now as you can see, this comes from the same word: the LLO at the end in English and the LLO at the end in German come from the same word. So you know, it's fair to assume that since it's the same input, it's going to be mapped into the same place in embedding space. Or, since these things are usually context-dependent, we can say into a similar embedding space, or a close embedding space, but certainly the initial vectors are the same. That is already half the task, right? So by tokenizing in this way, we have already mapped part of our languages, even though they're different languages, to the same space: we have mapped the same word to the same space. And this relies on the fact that, in this case, for example English and German, and for example French, have significant overlap in their words as such. So the word hello and the word hallo, they are almost the same word, as letters, as word pieces. And these shared embedding techniques abuse sort of the fact that these languages are close. There will be some word pieces that are going to be the same in these languages, and naturally, because they're the same, they'll end up in the same place in embedding space.
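A toy Python illustration of why this tokenization already aligns the languages, with a made-up three-piece vocabulary (real BPE vocabularies are learned from the joint corpus and are much larger):

# A toy shared word-piece vocabulary: English "hello" and German "hallo"
# share the piece "llo", so that piece gets one ID in the shared vocab,
# anchoring the two languages in the same embedding space.
vocab = {"he": 0, "ha": 1, "llo": 2}  # hypothetical shared word pieces

def wordpiece(word, vocab):
    """Greedy longest-match-first tokenization into known pieces."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                pieces.append(vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError("out of vocabulary")
    return pieces

print(wordpiece("hello", vocab))  # [0, 2]
print(wordpiece("hallo", vocab))  # [1, 2]  same trailing ID: shared "llo"

The shared ID for "llo" means both words start from partially identical input vectors, which is the anchoring effect described above.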
So what these embedding techniques do is they simply figure out the statistical relations between the word pieces. So if two things appear together often in the same context, they'll be mapped into the same space as well. So it would realize there are a lot of times where HA and HE appear in front of this LLO thing, so I should probably map the HA and the HE to the same location in embedding space: the same relation to the LLO. So they would end up at the same place. So now you see, even though these word pieces are different, like they get different IDs, they'll be mapped to the same place in embedding space, because their relation to the LLO is the same, and the LLOs themselves are being mapped to the same place because they are actually the same. So you can see that this partial overlap between word pieces in the different languages, combined with the shared embedding pre-training of these tokens across the languages, results in an alignment of the embeddings. So naturally, the things that mean the same things are going to be in the same places in embedding space, either because they are the same, or because their statistical relation to the things that are the same is the same. It is sort of like the HA and the HE are synonyms in this shared language. So if you jumble all the English and German text together, the model thinks HA and HE are synonyms and therefore maps them to the same space. This would happen exactly the same way if you only had one language with two true synonyms. Alright, so now we have different languages. We have a single encoder where we can input any of those languages, mapping it to the shared space. The decoder can be trained by simply giving it this indicator token right here to decode the appropriate language. So now the question is, how exactly do we train this such that this happens? There's one caveat: in programming languages, of course, we still have to check whether this overlap is the same or not. And we know that in a lot of programming languages, for example, the word if is the same. So if you tokenize Java or Python or C++, the word if is the same, and likewise there is a lot of overlap between the different programming languages, and that is exactly this correspondence here. These models use the parts that are overlapping, either the tokens themselves, or this can also be grammatical constructs and so on; they can also be overlapping, and therefore map to the same space. So if a construct is used in the same way, this can be in higher layers, this can induce the same effect. And the result will be that the similar things in these languages will be mapped to similar spaces in embedding space. Now this makes this example a bit weaker, because that means this method would work exceptionally well for something like Python 2 to Python 3, because they of course have a lot of overlap of syntax and keywords and constructs, whereas for something like COBOL to Java, it's more, let's say, doubtful that it will work so well. In this paper here, they've chosen C++, Python and Java, which do have significant overlap, but especially something like Python to Java, of course, there is a lot of difference, or Python to C++ as well: Python is not typed. And so you can see a bit of the difficulty already in the paper here, but you have to be aware that this works less and less, the less this shared overlap is given.
Alright, so how do you train these models? Remember, we don't have parallel corpora; we simply rely on having big repositories of Python code, C++ code and Java code, and they don't correspond to each other. As I understand it you can do these things in parallel, but there are three different objectives that achieve three different things in these models. The first objective is cross-lingual masked language model pre-training. The models here are going to be transformer models with encoders and decoders; that comes from the Attention Is All You Need paper and various other papers like it, and I've done videos on those if you want to see that. The masked language model pre-training, however, is from the BERT paper; if you don't know what that is, I've also done a video on that. This objective trains the encoder. What you do is you input text with tokens, say hello there. You then mask some of the tokens, for example the llo and maybe the entire word there; you scrap those, you put the input through your encoder, which is this transformer model like BERT, and then BERT is supposed to reconstruct hello there. It doesn't see the masked tokens; you ask it what you crossed out, and it needs to reconstruct that. The research on BERT and other models has shown that if you train with this objective, the encoder learns about the structure of the input: it learns which tokens and which constructs often appear together, and therefore whatever comes out at the top is a good, meaningful embedding that tells you something about the statistical co-occurrence of tokens. And of course we do this with all the languages: the Python goes in, the C++ goes in, the Java goes in, without telling the model what it is; you just tokenize it and throw it in there. You see an example right here: this if right here is C++, but in Python it would also be if, and since the model needs to learn a single encoder for all of these languages, and since the tokens overlap partially, this results in exactly what we want, namely a shared embedding space where similar things are mapped to similar places even though the inputs come from different languages. So, the masked language model pre-training very quickly: you take a piece of code, like here on the left, you mask out some of the tokens, you can see them in this mask, and you simply ask the encoder to reconstruct those tokens. This is just for the encoder; as far as I understand it, the encoder doesn't see the answers back here, it simply sees the masked input, and you tell it: please tell me which tokens I clipped out. It's supposed to tell you that the first one is if, the second one is int, and the third one is the i.
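As a rough sketch of this masking step (a simplified BERT-style recipe; the exact masking ratio and BERT's replace-with-random tricks are left out here):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="<mask>"):
    """Mask ~15% of the tokens; the originals become prediction targets."""
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            inputs.append(mask_token)
            targets.append(tok)    # the encoder must reconstruct this
        else:
            inputs.append(tok)
            targets.append(None)   # no loss on unmasked positions
    return inputs, targets

code = ["if", "(", "i", "<", "n", ")", "{", "i", "++", ";", "}"]
masked_input, labels = mask_tokens(code)
```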
Now consider what the encoder has to do here. If you see this, you can pretty clearly guess that it is an if; of course it's not a hundred percent, but this is just pre-training, so you train it to output if here. For the int you have to do a little more inference: maybe you've seen this for-construct a bunch of times, and you can see that this variable is compared here and added to here, so probably it's an integer. The last one is even more complicated: you don't see the i, so you somehow have to guess what it is. It's not clear, but you can reason: okay, there's a local variable i right here, and probably it's going to be used somewhere in this block; this slot here isn't filled, and I don't see i anywhere else, so probably i goes in here, which makes sense because it's an integer, prime is an array, and integers index arrays. So this is the first thing the model learns. The second thing is that we need to train the decoder somehow. How do we train the decoder? In a very similar way: we make the decoder do denoising auto-encoding. Before, we just asked the encoder to reconstruct single tokens. The encoder is this box right here, and the actual part that predicts those words is just one classification layer on top that predicts the word at each position. That is just for pre-training; after the pre-training you scrap that layer and attach whatever comes out of the encoder to a decoder, and the decoder outputs one token after another in an autoregressive way: it outputs a token, you feed that token back in, saying okay, here's what you've produced so far, now produce the next token, and so on. As I said, I'm not exactly sure, but I think they're doing all of these objectives at the same time, so the classification layer would still be here and the information would just be routed in two different ways; or maybe they do it one after another. It doesn't really matter. What matters is that in this objective you now train the decoder; you train it jointly with the encoder, but you also involve the decoder. You do this by something very similar: you corrupt a piece of code, and you get corrupted code. Part of this corruption is masking, like before, but part of it is scrambling: you jumble some of the tokens around a bit, as you can see here, and you also drop tokens, as you can see where the 1 is dropped. You input this corrupted code into the encoder, and then you ask the decoder to give you back the original code, without showing it the original code. So the task for the entire encoder-decoder model is: here is corrupted code, I have corrupted it in various ways, please tell me what I originally had. For the masked parts it does the same inference as before, but it's harder now: before, with the masking, you at least told the model where the errors are; now you don't even tell it where the errors are. It needs to recognize: this here is probably correct, this here probably isn't, I'm going to rewrite this to that. And it does this one token at a time, producing the correct code from the start. I hope the difference to the masked language modeling is clear: that objective did not involve the decoder, while this one does.
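Here is a minimal sketch of such a corruption function, combining masking, token dropping and a light local shuffle; the probabilities and the shuffle window are assumptions for illustration, not the paper's exact values:

```python
import random

def corrupt(tokens, mask_p=0.1, drop_p=0.1, shuffle_window=3):
    out = []
    for tok in tokens:
        r = random.random()
        if r < drop_p:
            continue                                  # drop the token entirely
        out.append("<mask>" if r < drop_p + mask_p else tok)
    # Local shuffle: jitter each position by a small random offset and
    # re-sort, so tokens only move within a small window.
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(out))]
    return [tok for _, tok in sorted(zip(keys, out))]

original = ["int", "i", "=", "0", ";", "i", "+=", "1", ";"]
noisy = corrupt(original)
# Training pair: the encoder sees `noisy`, the decoder must emit `original`.
```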
This is also the first time the language token comes in: you prepend this Java token right here. As you can see, the task still goes from a language to the same language, but this is where you train the decoder to output a given language. Again, this is the same decoder for all three languages; the only difference is that every time you simply provide the special token at the beginning to tell it which language it should decode right now. So now we have an encoder that maps all the languages to a shared space, and a decoder that, conditioned on a token like this, can output valid code in that language, assuming what it got was corrupted code. Since the encoder is shared, it should map the same kind of corrupted code from the different languages to the same place in the embedding space, and therefore this would already be enough to have the model we desire: we can input some code, and it doesn't actually even have to be corrupted; we can just input code in one language and ask the decoder to output the other language. And this works, but it doesn't work super well. So here the authors go for another idea from the unsupervised machine translation literature, which is back-translation. Back-translation is a technique where you can tune an unsupervised machine translation model in the way you would tune a supervised one, except of course you don't have supervised data. So what's the plan? You produce the data yourself, using your own model. The plan is pretty simple; it's actually contained in the name back-translation. If you have a piece of code, you first use your model to translate it to another language of your choice. Now you have no clue whether this output is correct, and no way of assessing it, because you don't have ground truth. What you can do is use your model again, or a second model that you train in parallel (I believe in this case they could use the same model, but that can be unstable), to translate it back to your original language. And for whatever comes out of that, you do know the ground truth: it's whatever you started with. So now you can compare what comes out to what you started with. The difficulty, of course, is that if there is a mistake, you don't know which of the two directions made it: it could be that your original translation made a mistake, or it could be that your back-translation made a mistake. You have to find a loss function that punishes both somewhat equally, or you simply keep one direction constant and loss-free and train the other one. And because there are also going to be samples where you have C++ as the input and Python as the intermediate language, all of the directions get trained once as a source-to-target translator and once as a target-to-source translator. I hope the objective is clear from the name. With back-translation you now actually train the models to go from one language to another language, and that's the final goal, even though you do it without supervised data. You now have a model that can encode things into a shared space, that can decode into any of the languages, and that is attuned to translating from one language to another.
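A minimal sketch of one back-translation step, at pseudocode level; translate and train_step are hypothetical stand-ins for the real model calls, and in practice the first translation is typically generated without gradients so that only the back direction is trained on:

```python
def back_translation_step(python_code, translate, train_step):
    # 1) Translate Python -> C++ with the current model. There is no
    #    ground truth for this output, and we don't need one.
    cpp_guess = translate(python_code, src="python", tgt="cpp")
    # 2) Translate the guess back. For *this* direction we do know the
    #    target: it is exactly what we started from.
    train_step(src_code=cpp_guess, src_lang="cpp",
               tgt_code=python_code, tgt_lang="python")
```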
So that's how this all works. Now for evaluation, the question is of course how you evaluate models like this. For evaluation they go to this website called GeeksforGeeks, an online platform with computer science and programming articles; it gathers many coding problems and presents solutions in several programming languages. So it's a website that teaches you to code: it will pose an exercise, please do this, and then provide solutions in the different languages, and they show an example right here. Why is that cool? Because not only can you be relatively sure that these different functions do the same thing, you can also be relatively sure that they are implemented in a similar way, because what this website is trying to do is teach people how to code up an algorithm they think up in their head. So not only is the solution correct and the same, it is implemented in the same way: as you can see here, the if-construct is everywhere, the else-if is everywhere. Even though some of the languages might have special features for implementing some algorithms, these are really the same expression of algorithmic thought in the different languages. So that is a perfect parallel data set. The problem, of course, is that there isn't that much of it: it is good enough as a test set, but not as a training set. But given that it's a test set, you can just input the C++ and see whether or not the right Java comes out. The remaining problem is that, even so, there are still many variations in how you can express the same algorithmic thought. Metrics from natural language processing like BLEU just aren't going to be very good, because they look at n-gram overlap, and you can write this function with very different n-grams and still be perfectly valid and correct. Exact match isn't really a gold standard here either. So what they do is they create a set of unit tests: for each of these functions, they check the input types, randomly generate a set of inputs, and look at what comes out; if the same thing comes out of all of the reference solutions, they consider this a good unit test for that function. Whenever your model now produces, say, a C++ function from a Python input, you simply run these unit tests through the C++ function you produced, and if it gives the same outputs as the original Python function on the same inputs, you consider the unit tests to succeed and the function to be correct. Now this isn't the super-duper gold standard, especially with random inputs, because usually what you want to test are corner cases, but it's better than anything else so far. I've long been a skeptic of unit tests, honestly, because whenever a human writes a unit test, they have usually already implemented the function itself, so they're probably going to make the same mistakes, or just replicate the thinking of the function in the unit test itself, and then it doesn't really get you anything. I guess in large organizations you write unit tests so that someone else doesn't screw up your code. But in this case it would actually be cool, because as a human you could simply write a bunch of unit tests and then let your transcompiler do the heavy lifting, and you simply check whether or not the output is good.
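A minimal sketch of this evaluation idea; for simplicity both functions are plain Python callables here, whereas in the paper the translated C++ or Java is compiled and executed, and the input types are inferred from the function signature:

```python
import random

def passes_unit_tests(reference_fn, translated_fn, n_tests=10):
    """Compare the two functions on random integer inputs."""
    for _ in range(n_tests):
        x = random.randint(-100, 100)
        if reference_fn(x) != translated_fn(x):
            return False
    return True

# Example: a correct translation must agree on all sampled inputs.
assert passes_unit_tests(lambda x: x * x, lambda x: x ** 2)
```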
Alright, so how does this do? You can see they have some baselines: the C++-to-Java translator, as I understand it, is a commercial system, and the Java-to-Python one is an open-source system. Both are rule-based systems made by human experts who wrote down how to translate code into the other language. What the authors have is TransCoder with beam 1, which means a beam size of one. If you don't know beam search, very shortly: when you decode from your language model, you can either always take the next token with the highest probability, which is greedy decoding, equivalently a beam size of one, or you can keep the top-n hypotheses of the most likely output in memory, say the top five, and always decode all five, sort of like a mini-batch of five sequences, always keeping the best five in memory. At the end of decoding you have five different variants of the same decoded output, and you can then decide which one you like best; usually you output the one with the highest total probability, which is not the same as greedy, because sometimes one next token looks very good in a greedy way, but you'd better take the second most likely, because the token after that makes up for it and makes the entire sequence even more likely. So a bigger beam size basically means you can keep more hypotheses of the output in memory until the end. If you just do greedy decoding, you already get fairly close to these baselines, which is very cool and very interesting, and if you up the beam size, you surpass these baselines. Now the way they up the beam size here I find a bit, let's call it cheaty: when they say beam 5, what they mean is that they keep the five hypotheses, and at the end, as I understand it, if any of the five hypotheses passes the unit tests, they count it as correct. So they basically give themselves the freedom to say: whichever of the five outputs is best, that's the one we count. And of course that's not really a fair match against the commercial or baseline systems, which output a single program. It may be a good practical application, where you give the human five options to choose from and let them decide which one they like best, but it is a bit wonky. What I like more is the beam 10, top 1 setting, which is what you would actually do: keep 10 hypotheses during decoding, and at the end output the most likely one. As you can see, that is better than greedy, but worse than giving yourself the freedom to output multiple candidates. They do say, though, that most of the errors this top-1 setting makes come from compilation errors when the target language is Java or C++, and they suggest that the beam-n top-1 metric could easily be improved; they leave this to future work. This I again find valid: if your method is, I'm going to keep the top 10 hypotheses until the end, then go from the top, compile them, and output the first one that compiles, that's not cheating; that's a valid procedure. So in that way I can understand what they're saying right here.
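That suggested metric is easy to sketch: keep the beam hypotheses sorted by model score and return the first one the compiler accepts; compiles here is a stand-in for invoking the target-language compiler.

```python
def select_hypothesis(beam_hypotheses, compiles):
    """Return the best-scoring hypothesis that actually compiles."""
    for code in beam_hypotheses:        # assumed sorted by score, best first
        if compiles(code):
            return code
    return beam_hypotheses[0]           # fall back to the top-scoring one
```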
Okay, so they give some examples, some of which I find very interesting. But first, one correction: I said earlier that the tokenizer is shared like it is between the natural languages; they make a little tweak here, in that they tokenize the different languages with their respective language-specific tokenizers. This will still tokenize most of the shared keywords, like all the if statements, into the same tokens, even though, for example, printing is print in Python and println in Java; it's simply not viable to parse Python with a C++ tokenizer. Okay, so we've looked at the results; now one of the analyses. They look at their shared embedding space, and this is a t-SNE plot, a 2D projection of this shared embedding space, and you can see that the alignment is actually happening: null, NULL and None are mapped to similar locations, println and cout are mapped to similar locations in this space. This is exactly what we want; it's a verification that this method of embedding the different languages into the same space really turns out such that whatever means the same thing is mapped to the same place. You can see here that catch and except, two very different tokens, are mapped to the same place, simply because they're used in the same sort of constructs across the languages. Very cool. One of the examples here is quite impressive and shows the difference between this and rule-based translation. In this function right here, you have a C++ function that takes a character pointer called str as an input. In C++, at least in older versions, strings are handled as character arrays: a string is indistinguishable from a character array, and usually you don't pass in the array itself, because that would cause a copy; you pass a pointer to the array, and that defines the string. So the type of this parameter is simply a character pointer. If you translate this with the TransCoder system into Java: in Java there is a native type called String (it's handled a bit specially in the JVM, but in any case it exists), so the system recognizes, ah, you mean a string, therefore I'm going to put a String here, and it uses all the string methods, like String.length(), String.charAt() and so on, whereas in C++ this is just an array with plain array accesses. Now they take the same C++ function and change exactly one thing: the name of the parameter. Everything else is the same, but now the character array is called arr. They put it through the same system, and the system now outputs a function that takes a character array called arr instead of a String, and it uses, you know, the .length property and array accesses instead of the charAt method. So simply by changing the name, the output changes, and this is somewhere I believe these systems can have an advantage over rule-based systems, because what the model does here is it simply says: oh, I've seen a lot of humans in my training corpus use str as a variable name, and that usually means the constructs around it are like the constructs where Java people use Strings,
and I've seen other places where people use names like arr, and that is usually used in the same contexts where Java people use character arrays. In programming, it's not only important what the code actually does; a lot of programming goes via the naming of things. Other programmers will read your code, and by reading str right here they will assume this is a string, whereas if they read arr right here, they will assume you're a pirate, er, that you are referring to a character array, and they will treat the code accordingly; the code means something different to them. These neural machine translation systems can actually pick up on that part, because they do statistical inference on code that humans wrote. If you change the name back to something like str, then again it outputs a String and uses all the string functions. That's fairly impressive in my mind, and definitely an advantage over rule-based systems. Of course, the disadvantage compared to rule-based systems is that with rule-based systems you can sometimes even guarantee that the code does the same thing; here you can't. They also give some examples of failed translations, where you run into the problem that the min function in Python is overloaded: it can either give you the minimum of a sequence or the minimum of two things. This is translated to Java right here, and Math.min is not overloaded in Java; it only gives you the minimum of two things, not the minimum of an array, and the model still outputs Math.min on the array. Given enough data it could probably learn this, because these things are all context-dependent, but this is one of the failure cases of these models. Alright, so that was this paper. I've read that the code and the unit tests will be put online at some point; they are not right now, but if I hear about it, I'll link to it or let you know. Let me know what you think of this paper in the comments, share it out, subscribe if you haven't yet, and bye bye!
[ { "start": 0, "end": 5.24, "text": " Hi there! So the paper we're looking at today can take the code on the left," }, { "start": 5.24, "end": 9.8, "text": " which is written in Python, and can output the code on the right, which is" }, { "start": 9.8, "end": 15.16, "text": " written in C++. Now the point here is that the code on the right does the same" }, { "start": 15.16, "end": 20.16, "text": " thing as the code on the left, so it is implementing the same function. The" }, { "start": 20.16, "end": 25.6, "text": " surprising thing here is that this model that takes the Python as an input has" }, { "start": 25.6, "end": 32.36, "text": " never been explicitly trained to output C++. So this is an unsupervised" }, { "start": 32.36, "end": 37.56, "text": " translation model. And the cool thing about this paper is that by" }, { "start": 37.56, "end": 43.72, "text": " having no target, having no supervised signal at translating source code" }, { "start": 43.72, "end": 48.040000000000006, "text": " languages into one another, it can perform pretty well at the task" }, { "start": 48.040000000000006, "end": 53.760000000000005, "text": " nonetheless. So we're going to look at this paper. It's called" }, { "start": 53.76, "end": 59.32, "text": " Unsupervised Translation of Programming Languages by Marie-Anne Lachaud, Baptiste" }, { "start": 59.32, "end": 67.44, "text": " Rosier, Loïc Chanusso and Jérôme Lomple at Facebook AI Research. As always, if you" }, { "start": 67.44, "end": 73.24, "text": " like content like this, consider sharing it out and leaving a like and also" }, { "start": 73.24, "end": 78.75999999999999, "text": " leaving a comment if you have something to say about it. They say a trans" }, { "start": 78.75999999999999, "end": 82.7, "text": " compiler, also known as a source-to-source translator, is a system" }, { "start": 82.7, "end": 86.8, "text": " that converts source code from a high-level programming language such as" }, { "start": 86.8, "end": 94.32000000000001, "text": " C++ or Python to another. They say trans compilers are primarily used for" }, { "start": 94.32000000000001, "end": 99.64, "text": " interoperability and to port code bases written in an obsolete or deprecated" }, { "start": 99.64, "end": 106.12, "text": " language such as COBOL or Python 2 to a modern one. So for Python 2, you might" }, { "start": 106.12, "end": 112, "text": " know this tool that's called 2-to-3. So 2-to-3 is a tool that ships with the" }, { "start": 112, "end": 117.16, "text": " Python 3 standard library, I believe, that allows you to take Python 2 code" }, { "start": 117.16, "end": 123.32, "text": " and produce Python 3 code. And that is to kind of push people to convert their" }, { "start": 123.32, "end": 129.76, "text": " old code bases of Python 2 to the modern Python 3. Now 2-to-3 is a" }, { "start": 129.76, "end": 136.32, "text": " handwritten program. It has specific rules built in that the programmers know" }, { "start": 136.32, "end": 140.48, "text": " if we modify Python 2 like this, Python 3 gets out. For example, the print" }, { "start": 140.48, "end": 145.84, "text": " statement in Python 2 requires no brackets, so we make a rule that whenever" }, { "start": 145.84, "end": 148.95999999999998, "text": " there's a print statement with no brackets, we'll add the brackets such that" }, { "start": 148.95999999999998, "end": 155, "text": " it's Python 3 compliant. 
Most of this code will transform the" }, { "start": 155, "end": 160.51999999999998, "text": " source code first into an abstract syntax tree, modify that, apply specific" }, { "start": 160.51999999999998, "end": 166.04, "text": " rules to that, and then output the language from the abstract syntax tree." }, { "start": 166.04, "end": 172.84, "text": " Now the problem here is many-fold. First of all, there can only be so much" }, { "start": 172.84, "end": 178.28, "text": " translation as there are rules. So every one of these rules has to be coded as a" }, { "start": 178.28, "end": 182.79999999999998, "text": " modification to the abstract syntax tree, and every one of these rules is" }, { "start": 182.79999999999998, "end": 188.35999999999999, "text": " handcrafted and therefore needs sort of human ingenuity. Humans need to go and" }, { "start": 188.35999999999999, "end": 195.2, "text": " write these rules of how to transform one language into another. And often" }, { "start": 195.2, "end": 200.11999999999998, "text": " times, even though you can write these rules," }, { "start": 200.11999999999998, "end": 205.35999999999999, "text": " often times whatever comes out is sort of a bit of a cryptic source code, because" }, { "start": 205.35999999999999, "end": 209.6, "text": " you kind of have to make sure that your rules cover all the possible things, and" }, { "start": 209.6, "end": 214.35999999999999, "text": " the source code that comes out is oftentimes very cryptic and a bit hard" }, { "start": 214.35999999999999, "end": 219.2, "text": " to understand, because it's been sort of expanded and formalized to make sure" }, { "start": 219.2, "end": 225.6, "text": " that it still does the same thing as the original source code. Now for Python 2 to" }, { "start": 225.6, "end": 230.11999999999998, "text": " Python 3, this is still easy, right? These languages are extremely similar," }, { "start": 230.11999999999998, "end": 236.39999999999998, "text": " because it's not that big of a step to Python 3, except if you use very low" }, { "start": 236.39999999999998, "end": 242.07999999999998, "text": " level language constructs or language features which have been" }, { "start": 242.07999999999998, "end": 248.07999999999998, "text": " obsoleted with Python 3. On the other hand, if there's something like Cobol, so a" }, { "start": 248.08, "end": 253.44000000000003, "text": " lot of this old banking code or insurance code or government agency code" }, { "start": 253.44000000000003, "end": 257.40000000000003, "text": " whatnot is written in these really old programming languages, and they've been" }, { "start": 257.40000000000003, "end": 262.2, "text": " kept alive by these old-school programmers that are slowly but surely" }, { "start": 262.2, "end": 267.24, "text": " all retiring now, and there are just not many new programmers around that can" }, { "start": 267.24, "end": 270.72, "text": " support these languages, and the languages themselves aren't really" }, { "start": 270.72, "end": 275.32, "text": " updated that much anymore, so you would like to transform Cobalt into something" }, { "start": 275.32, "end": 282.12, "text": " like... I don't want to call Java modern programming language, but it is used in" }, { "start": 282.12, "end": 287.56, "text": " modern times. I'd rather not call it modern per se. 
Java itself is a beast" }, { "start": 287.56, "end": 294.15999999999997, "text": " that's been sort of supported since forever, but in any case, you would like" }, { "start": 294.15999999999997, "end": 298.6, "text": " to transform something like Cobalt to something like Java, where you have" }, { "start": 298.6, "end": 303.71999999999997, "text": " a lot of programmers that can develop and further develop your code. This is" }, { "start": 303.72, "end": 309.42, "text": " much harder. Cobalt and Java are much more away from each other than are Python 2" }, { "start": 309.42, "end": 314.52000000000004, "text": " and Python 3. So what you would like to do is you would like to have a tool that" }, { "start": 314.52000000000004, "end": 320.6, "text": " is like 2 to 3, but humans... because now if you want a tool like this, you need" }, { "start": 320.6, "end": 325, "text": " someone that's really proficient in Cobalt and Java in order to write a tool" }, { "start": 325, "end": 329.52000000000004, "text": " like this, and you need lots of them, and they need to invest lots of time. What" }, { "start": 329.52, "end": 333.71999999999997, "text": " you would rather like to do is you would like to learn a system that translates" }, { "start": 333.71999999999997, "end": 338.56, "text": " from one language into another, such that the meaning is conserved. And this of" }, { "start": 338.56, "end": 343.68, "text": " course is exactly the domain of natural language machine translation, except it's" }, { "start": 343.68, "end": 349.59999999999997, "text": " source code. So we all know that we've all realized in the last years that" }, { "start": 349.59999999999997, "end": 356.64, "text": " things like Google Translate have become extremely good at translating. So they" }, { "start": 356.64, "end": 361.56, "text": " say right here, although neural models significantly outperform their rule-based" }, { "start": 361.56, "end": 365.96, "text": " counterparts in the context of natural language translation, which is Google" }, { "start": 365.96, "end": 370.91999999999996, "text": " Translate is all learned right now, they say their applications to trans" }, { "start": 370.91999999999996, "end": 375.59999999999997, "text": " compilation have been limited due to the scarcity of parallel data in this domain." }, { "start": 375.59999999999997, "end": 380.88, "text": " So what's the problem with just going and saying, oh, we can build" }, { "start": 380.88, "end": 384.88, "text": " really good neural machine translation models, let's just apply them to source" }, { "start": 384.88, "end": 388.56, "text": " code. The problem is if you build a neural machine translation model, say" }, { "start": 388.56, "end": 394.4, "text": " something that transforms English to German, so you have the word hello and" }, { "start": 394.4, "end": 401.4, "text": " you output hello. You can do it not with just one word, but with entire sentences" }, { "start": 401.4, "end": 406.71999999999997, "text": " and so on. These models, they usually, or in the classical sense, what they need" }, { "start": 406.71999999999997, "end": 411.92, "text": " are parallel corpora, which means that you have documents that are" }, { "start": 411.92, "end": 418.64000000000004, "text": " written in many languages and you can guarantee that they mean the same thing." }, { "start": 418.64000000000004, "end": 424.44, "text": " So this is a supervised signal. 
One example of this is, let's say press" }, { "start": 424.44, "end": 428.40000000000003, "text": " releases of the United Nations. So the United Nations will make some press" }, { "start": 428.40000000000003, "end": 433.84000000000003, "text": " release and they will then have professional translators translate that" }, { "start": 433.84000000000003, "end": 439.18, "text": " press release into all of the different or into many different languages. And so" }, { "start": 439.18, "end": 443.32, "text": " you can pretty much guarantee that these mean the same thing. So these pairs of" }, { "start": 443.32, "end": 449.08, "text": " documents or triplets or whatnot, they are supervised training data for a" }, { "start": 449.08, "end": 453.04, "text": " machine translation model that translates from one language into the" }, { "start": 453.04, "end": 458.28000000000003, "text": " other. And the neural machine translation models rely heavily on these parallel" }, { "start": 458.28000000000003, "end": 464.4, "text": " corpora. For source code, you just don't have that as much. You don't have big" }, { "start": 464.4, "end": 470.88, "text": " code bases in great numbers where the exact same thing is implemented in one" }, { "start": 470.88, "end": 474.79999999999995, "text": " language and in the other language. There's just not that much data" }, { "start": 474.79999999999995, "end": 482.03999999999996, "text": " available. It is the case that sometimes, sorry, that sometimes, let's say in the" }, { "start": 482.03999999999996, "end": 489.15999999999997, "text": " case of Torch, it started as Lua and then it went to PyTorch and the developers" }, { "start": 489.16, "end": 496.12, "text": " had to translate the code from Torch to Python. But in the same in the same step," }, { "start": 496.12, "end": 500.12, "text": " they've also made improvements. They sort of re-engineered and reinvented the" }, { "start": 500.12, "end": 504.56, "text": " framework and made it better. And so you can't really say these are the same" }, { "start": 504.56, "end": 510.08000000000004, "text": " things. And likewise, there's not a lot of code available where the same thing is" }, { "start": 510.08000000000004, "end": 515.28, "text": " implemented in two languages. So we just don't have these parallel corpora for" }, { "start": 515.28, "end": 521.24, "text": " the source code translation. So rather what this paper does is this paper goes" }, { "start": 521.24, "end": 526.48, "text": " into unsupervised machine translation. Now what does unsupervised machine" }, { "start": 526.48, "end": 532, "text": " translation mean? Unsupervised machine translation, you imagine I have just a" }, { "start": 532, "end": 538.92, "text": " big database of documents. And these documents, I know they're all in" }, { "start": 538.92, "end": 545.1999999999999, "text": " English. And I have this other big database and I know that they are all" }, { "start": 545.2, "end": 549.9200000000001, "text": " in German. I just know their documents are in German. But they don't correspond" }, { "start": 549.9200000000001, "end": 553.48, "text": " to each other. They're just German documents. And over here, they're just" }, { "start": 553.48, "end": 558.4000000000001, "text": " English documents. They don't, I don't say that these two here are somehow the" }, { "start": 558.4000000000001, "end": 562.6400000000001, "text": " same. No, I just have a bunch of German, a bunch of English. 
They don't even have" }, { "start": 562.6400000000001, "end": 567.44, "text": " to correspond. They're just text. And now what I want to do is I want to learn a" }, { "start": 567.44, "end": 574.96, "text": " shared embedding space. I sort of want to learn a shared space of embeddings" }, { "start": 574.96, "end": 580.24, "text": " for these two languages, such that similar things are mapped to the similar" }, { "start": 580.24, "end": 584.4000000000001, "text": " place. So if these two documents just happen to talk about the same thing, I" }, { "start": 584.4000000000001, "end": 590.6800000000001, "text": " want them to be mapped to similar spaces in this shared embedding space. So I'm" }, { "start": 590.6800000000001, "end": 597.02, "text": " gonna have one model, a single model, where I input the text and it goes into" }, { "start": 597.02, "end": 601.84, "text": " this shared embedding space. Okay, now this is unusual because usually in" }, { "start": 601.84, "end": 607.8000000000001, "text": " machine translation, if you translate from here from English to German, then" }, { "start": 607.8000000000001, "end": 612.84, "text": " you'll have your dedicated model that takes as English as an input and German" }, { "start": 612.84, "end": 617.6, "text": " as an output. And that would be a different model than one that takes even" }, { "start": 617.6, "end": 621.5600000000001, "text": " German as an input and English as an output or French as an input and German" }, { "start": 621.5600000000001, "end": 627.6, "text": " as an output. In this case, we have this process right here and this process" }, { "start": 627.6, "end": 633.76, "text": " right here is the same model. And then the decoder that translates this, of" }, { "start": 633.76, "end": 637.32, "text": " course, now we have the encoder embedding, and then the decoder that actually" }, { "start": 637.32, "end": 642.4, "text": " translates to a language is also going to be the same model. So it's the same" }, { "start": 642.4, "end": 648.48, "text": " model that translates to English, then it translates to German. So first of all, how" }, { "start": 648.48, "end": 652.96, "text": " do we make the same model? Let's say we have the perfect encoder, right?" }, { "start": 652.96, "end": 656.82, "text": " This is E, the encoder, the same encoder for all languages. Let's say we have the" }, { "start": 656.82, "end": 662.2, "text": " perfect encoder and it can map the... whenever a sentence means the same thing" }, { "start": 662.2, "end": 665.4000000000001, "text": " in different languages, we can completely map it to the same point in" }, { "start": 665.4000000000001, "end": 670.44, "text": " embedding space, irrespective of the language it comes from. Now, how do we tell" }, { "start": 670.44, "end": 676.24, "text": " the decoder, which is also the same model, like how does it know what to do? This is" }, { "start": 676.24, "end": 682.1600000000001, "text": " a little trick where you basically take this embedding, so you take the input," }, { "start": 682.1600000000001, "end": 686.5600000000001, "text": " you put it into your model, I don't even know how you do this, and then you're" }, { "start": 686.56, "end": 690.92, "text": " output is going to be autoregressive, right? So you decode one token at a time." 
}, { "start": 690.92, "end": 696.8399999999999, "text": " So you decode this token and then you feed it back into the model, and then you" }, { "start": 696.8399999999999, "end": 700.4799999999999, "text": " decode this token, you feed that back into the model, and so on. This is an" }, { "start": 700.4799999999999, "end": 705.1999999999999, "text": " autoregressive language model. And the trick here is that the very" }, { "start": 705.1999999999999, "end": 708.8399999999999, "text": " first token is a special token that describes the language you want to" }, { "start": 708.8399999999999, "end": 714.2399999999999, "text": " output. So here you say, I want German, and then you let the model decode its" }, { "start": 714.24, "end": 719.4, "text": " thing. And by conditioning on this token right here, it knows it should" }, { "start": 719.4, "end": 726.12, "text": " now produce German. During training, you will simply put the token here," }, { "start": 726.12, "end": 730.64, "text": " and if it produces something other than German, that's a loss, right? So it will" }, { "start": 730.64, "end": 736.24, "text": " learn to produce German after you produce this tag. Alright, so what we need" }, { "start": 736.24, "end": 743.6, "text": " is an encoder and a decoder, such that in the encoder we can put in any language" }, { "start": 743.6, "end": 750.44, "text": " text, and it will map to the same space the things that mean the same things, and" }, { "start": 750.44, "end": 757.5600000000001, "text": " we need a decoder that can produce any language given this first thing." }, { "start": 757.5600000000001, "end": 761.32, "text": " Now the decoder should be fairly easy, right? If we have a shared vocabulary" }, { "start": 761.32, "end": 768.96, "text": " between the languages, and we always put this token, the decoder is not a" }, { "start": 768.96, "end": 773.12, "text": " problem. You can just learn the decoder in a straightforward way, but the encoder" }, { "start": 773.12, "end": 779.2, "text": " is going to be a problem. So how does the encoder map the different languages to" }, { "start": 779.2, "end": 784.76, "text": " the same space, such that the same things are ending up in the same place? It seems" }, { "start": 784.76, "end": 791.2, "text": " a bit counterintuitive, right? Because it doesn't know which things" }, { "start": 791.2, "end": 795.36, "text": " correspond to which things. Now the first thing you need is a shared vocabulary" }, { "start": 795.36, "end": 800.12, "text": " here. Since we are in a shared space right here, what you need is a" }, { "start": 800.12, "end": 807.5600000000001, "text": " shared vocabulary. So you tokenize all of the text with a shared vocabulary. And" }, { "start": 807.5600000000001, "end": 812.48, "text": " this vocabulary is going to consist of sort of word pieces. Now if you don't" }, { "start": 812.48, "end": 817.72, "text": " know what word pieces are, in a word piece tokenization what you would" }, { "start": 817.72, "end": 822.48, "text": " do is you would split words into so-called word pieces. So for example" }, { "start": 822.48, "end": 828.36, "text": " hello right here might be split into two word pieces. The first word piece might" }, { "start": 828.36, "end": 835.64, "text": " be he, and the second word piece might be LLO. There is usually some kind of" }, { "start": 835.64, "end": 840.08, "text": " indicator here that this is the end of a word and so on, but we'll simplify. 
And" }, { "start": 840.08, "end": 846.8000000000001, "text": " hello right here would be a HA for the first token and then LLO for the second" }, { "start": 846.8000000000001, "end": 850.96, "text": " token. And these kind of word piece encodings, since the smallest units are" }, { "start": 850.96, "end": 855.96, "text": " going to be the characters themselves, they ensure that you always have" }, { "start": 855.96, "end": 860.64, "text": " everything is in vocabulary. You have no out-of-vocabulary tokens. But here you" }, { "start": 860.64, "end": 867.36, "text": " can already see that if we tokenize the languages like this and then we use the" }, { "start": 867.36, "end": 873.84, "text": " same encoder, so the same encoder will pop them into this shared space, that" }, { "start": 873.84, "end": 881.08, "text": " means that to the model this and this looks like the same thing. It is the same" }, { "start": 881.08, "end": 886.72, "text": " thing, right? It's the same input token in different languages. Now as you can see" }, { "start": 886.72, "end": 892, "text": " this comes from the same word, the LLO at the end in English and the LLO at the" }, { "start": 892, "end": 898.5200000000001, "text": " end in German. It comes from the same word. So you know it's fair to assume" }, { "start": 898.5200000000001, "end": 903.8000000000001, "text": " that since it's the same input it's going to be mapped into a same embedding" }, { "start": 903.8000000000001, "end": 908.24, "text": " space right here. Or since these things are usually context-dependent we can say" }, { "start": 908.24, "end": 916.28, "text": " in a similar embedding space or a close embedding space, but certainly the" }, { "start": 916.28, "end": 922.2, "text": " initial vectors are the same. That is already half the task, right? So by" }, { "start": 922.2, "end": 929.44, "text": " tokenizing in this way we have already mapped part of our languages, even though" }, { "start": 929.44, "end": 933.4, "text": " they're just the different languages, we have mapped the same word to the same" }, { "start": 933.4, "end": 938.68, "text": " space. And this relies on the fact that in this case for example English and" }, { "start": 938.68, "end": 944.68, "text": " German and for example French they have significant overlap in their words" }, { "start": 944.68, "end": 950.4399999999999, "text": " as such. So the word hello and the word hollow, they are almost the same" }, { "start": 950.4399999999999, "end": 956.88, "text": " word as letters, as word pieces. And these shared embedding techniques abuse" }, { "start": 956.88, "end": 961.84, "text": " sort of the fact that these languages are close. They'll say there will be" }, { "start": 961.84, "end": 966.1600000000001, "text": " some word pieces that are going to be the same in these languages and" }, { "start": 966.1600000000001, "end": 969.8000000000001, "text": " naturally because they're the same they'll end up in the same place in" }, { "start": 969.8000000000001, "end": 976.2, "text": " embedding space. And because that now you have the... so what these" }, { "start": 976.2, "end": 981.32, "text": " embedding techniques do is they simply figure out the statistical relations" }, { "start": 981.32, "end": 986.76, "text": " between the word pieces. So if two things appear together often in the same" }, { "start": 986.76, "end": 992.2, "text": " context they'll be mapped into the same space as well. 
So it would realize there" }, { "start": 992.2, "end": 998.56, "text": " is a lot of times where ha and he appear in front of this low thing. So I should" }, { "start": 998.56, "end": 1004.28, "text": " probably map the ha and the he to the same location in embedding space. So the" }, { "start": 1004.28, "end": 1009.16, "text": " same relation to the low. So they would end up at the same place. So now you see" }, { "start": 1009.16, "end": 1013.72, "text": " even though these word pieces are different, like they get different" }, { "start": 1013.72, "end": 1018.6800000000001, "text": " IDs, they'll be mapped to the same place in embedding space because their" }, { "start": 1018.6800000000001, "end": 1024.08, "text": " relation to the low is the same and the low themselves are being mapped to the" }, { "start": 1024.08, "end": 1029.28, "text": " same place because they are actually the same. So you can see that this" }, { "start": 1029.28, "end": 1035.56, "text": " partial overlap between word pieces in the different languages combined with" }, { "start": 1035.56, "end": 1042.88, "text": " the shared embedding pre-training results in these token across the" }, { "start": 1042.88, "end": 1049.5200000000002, "text": " languages results in an alignment of the embeddings. So naturally the things that" }, { "start": 1049.5200000000002, "end": 1053.88, "text": " mean the same things are going to be in the same places in embedding space, either" }, { "start": 1053.88, "end": 1059.1200000000001, "text": " because they are the same or because their statistical relation to the things" }, { "start": 1059.1200000000001, "end": 1065.3200000000002, "text": " that are the same is the same. It is sort of like these ha and the he are" }, { "start": 1065.3200000000002, "end": 1070.44, "text": " like synonyms in this shared language. So if you jumble all the English and" }, { "start": 1070.44, "end": 1075.2, "text": " German text together, the model thinks ha and he are synonyms and therefore" }, { "start": 1075.2, "end": 1079.72, "text": " maps it to the same space. This happens exactly the same in if you only have one" }, { "start": 1079.72, "end": 1085.48, "text": " language, two true synonyms. So this would exactly be the same thing in" }, { "start": 1085.48, "end": 1091.8400000000001, "text": " this case. Alright, so now we have different languages. We have a single" }, { "start": 1091.8400000000001, "end": 1097.04, "text": " encoder where we can input any of those languages mapping it to the shared space." }, { "start": 1097.04, "end": 1102.76, "text": " The decoder can be trained by simply giving it this indicator token right" }, { "start": 1102.76, "end": 1108.2, "text": " here to decode the appropriate language. So now the question is how exactly do we" }, { "start": 1108.2, "end": 1114.48, "text": " train this such that this happens? 
There's one caveat in" }, { "start": 1114.48, "end": 1119.8799999999999, "text": " programming languages of course we still have to check whether that's the same or" }, { "start": 1119.8799999999999, "end": 1123.6, "text": " not and we know that in programming languages a lot of programming languages" }, { "start": 1123.6, "end": 1129.6, "text": " for example the word if is the same right so if you tokenize Java or Python" }, { "start": 1129.6, "end": 1136.56, "text": " or C++ the word if is the same and likewise there is a lot of overlap" }, { "start": 1136.56, "end": 1141.04, "text": " between the different programming languages and that exactly is this" }, { "start": 1141.04, "end": 1147.36, "text": " correspondence to here. These models use the parts that are overlapping either" }, { "start": 1147.36, "end": 1152.84, "text": " tokens themselves or this can also be grammatical constructs and so on they" }, { "start": 1152.84, "end": 1157.1999999999998, "text": " can also be overlapping and therefore map to the same space so if a construct" }, { "start": 1157.1999999999998, "end": 1161.76, "text": " is used in the same way this can be in higher layers this can induce the same" }, { "start": 1161.76, "end": 1167.24, "text": " effect and they'll use that to map the and sorry they'll use that the result" }, { "start": 1167.24, "end": 1171.8, "text": " will be that the similar things in the same in these languages will be" }, { "start": 1171.8, "end": 1175.84, "text": " mapped to the similar spaces in embedding space. Now this makes this" }, { "start": 1175.84, "end": 1180.28, "text": " example a bit weaker because that means this method would work exceptionally" }, { "start": 1180.28, "end": 1184.28, "text": " well for something like Python 2 to Python 3 because they of course have" }, { "start": 1184.28, "end": 1190.12, "text": " like a lot of overlap of syntax and keywords and constructs whereas" }, { "start": 1190.12, "end": 1195.72, "text": " something like cobalt to Java it's more let's say doubtable that they will" }, { "start": 1195.72, "end": 1202.12, "text": " work so well. In this paper here they've chosen C++ Python and Java which" }, { "start": 1202.12, "end": 1207.08, "text": " do have significant overlap but especially something like Python to Java" }, { "start": 1207.08, "end": 1212.48, "text": " of course there is a lot of a difference or Python to C++ as well" }, { "start": 1212.48, "end": 1218.6799999999998, "text": " Python is not typed and so you can see a bit of the difficulty is already in the" }, { "start": 1218.6799999999998, "end": 1223.76, "text": " paper here but you have to be aware that this works less and less the less this" }, { "start": 1223.76, "end": 1230.36, "text": " shared overlap is given. Alright so how do you train these models and remember" }, { "start": 1230.36, "end": 1236.72, "text": " we don't have parallel corpora we are simply reliant on having databases from" }, { "start": 1236.72, "end": 1243.4, "text": " having big repositories of Python code C++ code and Java code and they don't" }, { "start": 1243.4, "end": 1247.6000000000001, "text": " correspond to each other so the first thing you do and as I understand it you" }, { "start": 1247.6000000000001, "end": 1250.72, "text": " can do these things in parallel but there are three different objectives" }, { "start": 1250.72, "end": 1256.24, "text": " that achieve three different things in these models. 
The first objective is the" }, { "start": 1256.24, "end": 1260.6000000000001, "text": " cross-lingual masked language pre-training. Now the models here are" }, { "start": 1260.6000000000001, "end": 1265.76, "text": " going to be transformer models with encoders and decoders and that's comes" }, { "start": 1265.76, "end": 1269.84, "text": " from the attention is all you need papers and various other papers like" }, { "start": 1269.84, "end": 1275.76, "text": " this I've done videos on those if you want to see that. This masked language" }, { "start": 1275.76, "end": 1280.68, "text": " model pre-training however is from the BERT paper so BERT if you don't know" }, { "start": 1280.68, "end": 1286.28, "text": " what that is I've also done a video on that. This simply trains the encoder so" }, { "start": 1286.28, "end": 1292.12, "text": " this is to train the encoder. What you would do is you would input code so" }, { "start": 1292.12, "end": 1296.9199999999998, "text": " usually in masked language model if you train the encoder you input code with" }, { "start": 1296.9199999999998, "end": 1308.9199999999998, "text": " tokens like hello there you would then so this is your input you would then mask" }, { "start": 1308.9199999999998, "end": 1315.7199999999998, "text": " some of the tokens for example here the low and maybe the entire word there you" }, { "start": 1315.7199999999998, "end": 1320.32, "text": " would scrap that you would put it through your encoder which is this" }, { "start": 1320.32, "end": 1326.6399999999999, "text": " transformer model like BERT and then BERT is supposed to reconstruct hello" }, { "start": 1326.6399999999999, "end": 1333.24, "text": " there. BERT is supposed to reconstruct these two tokens like it doesn't see" }, { "start": 1333.24, "end": 1338.8, "text": " them and you ask it what did I cross out and it needs to reconstruct that so you" }, { "start": 1338.8, "end": 1345.1599999999999, "text": " train the model to reconstruct these masked tokens and the research on BERT" }, { "start": 1345.1599999999999, "end": 1349.72, "text": " and other things has shown that if you train with this objective the encoder" }, { "start": 1349.72, "end": 1355.92, "text": " sort of learns about the structure of code it learns about it learns which" }, { "start": 1355.92, "end": 1360.64, "text": " tokens and which constructs often appear together and therefore it learns" }, { "start": 1360.64, "end": 1366, "text": " something about the structure of the input and that means it can create" }, { "start": 1366, "end": 1371.6000000000001, "text": " whatever is up here is a good and meaningful embedding for these things" }, { "start": 1371.6000000000001, "end": 1377.1200000000001, "text": " that tells you something about the statistical coexistence of tokens and of" }, { "start": 1377.12, "end": 1382.4799999999998, "text": " course since we're doing this with all the languages so the Python goes in here" }, { "start": 1382.4799999999998, "end": 1387.36, "text": " C++ goes in here without telling the model what it is you just throw it in" }, { "start": 1387.36, "end": 1393.9199999999998, "text": " there right Java goes in there by tokenizing it and you see an example" }, { "start": 1393.9199999999998, "end": 1401.12, "text": " right here so if this right here is C++ but in Python this would also be if and" }, { "start": 1401.12, "end": 1407.08, "text": " since it needs to learn a single encoder for all of these languages and since the" }, { "start": 1407.08, "end": 1412.28, "text": " 
tokens overlap partially it is going to result in exactly what we want namely a" }, { "start": 1412.28, "end": 1416.6399999999999, "text": " shared embedding space where even though the input comes from different" }, { "start": 1416.6399999999999, "end": 1421.28, "text": " languages it is mapped similar things are mapped to similar places in the" }, { "start": 1421.28, "end": 1426.1599999999999, "text": " embedding space right so the mask language model pre training very quickly" }, { "start": 1426.1599999999999, "end": 1431.9199999999998, "text": " as you take a piece of code like here on the left you mask out some of the tokens" }, { "start": 1431.92, "end": 1437.44, "text": " here you can see them in this mask and you simply ask the model the encoder to" }, { "start": 1437.44, "end": 1443.52, "text": " reconstruct those things okay so this is just for the encoder as far as I" }, { "start": 1443.52, "end": 1447.44, "text": " understand it the encoder doesn't it doesn't see the thing back here it" }, { "start": 1447.44, "end": 1454.24, "text": " simply sees this and you tell it please reconstruct please tell me which words" }, { "start": 1454.24, "end": 1460.4, "text": " or which tokens I I clipped out here and it's supposed to tell you okay the first" }, { "start": 1460.4, "end": 1466.6000000000001, "text": " one is if the second one is int and the third one is the I now if you consider" }, { "start": 1466.6000000000001, "end": 1471.8000000000002, "text": " what the encoder has to do here so if you were to see this then that pretty" }, { "start": 1471.8000000000002, "end": 1477.5600000000002, "text": " clearly you know you could you could guess that that is an if of course it's" }, { "start": 1477.5600000000002, "end": 1482.0400000000002, "text": " not a hundred percent but this is just pre training right so you train it to" }, { "start": 1482.0400000000002, "end": 1486, "text": " output if here now here you have to do a little bit more inference maybe you've" }, { "start": 1486, "end": 1491.04, "text": " seen this for construct a bunch of times and you can see that this is compared" }, { "start": 1491.04, "end": 1496.28, "text": " here and this is added so probably it's an integer and then in the last thing" }, { "start": 1496.28, "end": 1500.12, "text": " this is even more complicated if you don't see that the eye is here you" }, { "start": 1500.12, "end": 1505.32, "text": " somehow have to guess that what it is it's not clear right but you can guess" }, { "start": 1505.32, "end": 1509.94, "text": " that okay there's a local variable I right here and probably it's going to be" }, { "start": 1509.94, "end": 1516.24, "text": " used somewhere in this block now this here isn't I and I don't see I anywhere" }, { "start": 1516.24, "end": 1520.3200000000002, "text": " else so probably I goes in here which makes sense because it's an integer and" }, { "start": 1520.3200000000002, "end": 1527.2, "text": " prime is an array and and it integers index arrays so on okay so this is what" }, { "start": 1527.2, "end": 1531.92, "text": " the model does first the second thing is we need to train the decoder somehow" }, { "start": 1531.92, "end": 1536.88, "text": " how do we train the decoder in a very similar way we make the decoder do" }, { "start": 1536.88, "end": 1543.8000000000002, "text": " denoising auto encoding now before we just had single tokens we just asked the" }, { "start": 1543.8000000000002, "end": 1551.64, "text": " encoder to reconstruct tokens so the encoder is this box right here 
this" }, { "start": 1551.64, "end": 1556.96, "text": " colors this box is the encoder and the actual part that's going to predict" }, { "start": 1556.96, "end": 1563.1200000000001, "text": " these words is going to be one sort of one classification layer on top that is" }, { "start": 1563.12, "end": 1568.8, "text": " going to predict for each position the individual word now did this is just for" }, { "start": 1568.8, "end": 1573.28, "text": " pre-training after the pre-training you scrap that and you attach it to a" }, { "start": 1573.28, "end": 1580, "text": " decoder so you attach whatever you got out of the encoder to a decoder and the" }, { "start": 1580, "end": 1586.4799999999998, "text": " decoder will output in an autoregressive way one token after another I did I put" }, { "start": 1586.4799999999998, "end": 1591.4799999999998, "text": " a it output a token right here and it feed that token back into the decoder" }, { "start": 1591.48, "end": 1595.72, "text": " saying okay here's what I've produced now produce the next thing you produce" }, { "start": 1595.72, "end": 1603.24, "text": " the next token and so on so it would produce token after token the output and" }, { "start": 1603.24, "end": 1607.56, "text": " now as I said I'm not exactly sure I think they're doing all of these things" }, { "start": 1607.56, "end": 1612.52, "text": " at the same time so this would still be here but the information would just be" }, { "start": 1612.52, "end": 1617.32, "text": " routed in two different ways or maybe they do it one after another it doesn't" }, { "start": 1617.32, "end": 1623.04, "text": " really matter but what matters is in this thing here you now train the decoder" }, { "start": 1623.04, "end": 1628.4399999999998, "text": " I mean you train it jointly with the encoder but you also involve the decoder" }, { "start": 1628.4399999999998, "end": 1635.12, "text": " and you do this by doing something very similar you corrupt a piece of code and" }, { "start": 1635.12, "end": 1641.12, "text": " you get corrupted code now you can see part of this corruption is masking like" }, { "start": 1641.12, "end": 1645.6, "text": " you did before but also part of the corruption is like here you scramble" }, { "start": 1645.6, "end": 1650.8, "text": " some of the tokens right this was it was this over here you just jumble some of" }, { "start": 1650.8, "end": 1656, "text": " them around a bit and then you here you also drop a token as you can see that" }, { "start": 1656, "end": 1661.6, "text": " the one is dropped and you simply so you don't show this to the encoder or the" }, { "start": 1661.6, "end": 1667.48, "text": " decoder you input this corrupted code into the decode into the first the" }, { "start": 1667.48, "end": 1674.6399999999999, "text": " encoder and then you ask the decoder to give you back the original code without" }, { "start": 1674.64, "end": 1679.44, "text": " showing it the original code so the the task for the decoder for the encoder" }, { "start": 1679.44, "end": 1684.2800000000002, "text": " decoder for the entire model here is if you're if you see this here is corrupt" }, { "start": 1684.2800000000002, "end": 1691.2800000000002, "text": " the code I have corrupted it in various ways please tell me what I originally" }, { "start": 1691.2800000000002, "end": 1698.96, "text": " had now it can the masking it does the same as before it sort of infers it this" }, { "start": 1698.96, "end": 1705.64, "text": " thing here it says well probably I probably this isn't really correct 
you" }, { "start": 1705.64, "end": 1709.08, "text": " don't even tell it where the errors are right before with the masking you at" }, { "start": 1709.08, "end": 1712.8400000000001, "text": " least told it where the errors are now you don't even tell it where the where" }, { "start": 1712.8400000000001, "end": 1717.08, "text": " the errors are so it needs to recognize this here is probably correct this isn't" }, { "start": 1717.08, "end": 1723.48, "text": " this I'm gonna rewrite this to that okay and it does this one token at a time so" }, { "start": 1723.48, "end": 1729.32, "text": " it first goes into the dirt and it needs to output the correct thing this is I" }, { "start": 1729.32, "end": 1733.92, "text": " hope the difference is clear to the masked language modeling which involved" }, { "start": 1733.92, "end": 1740.48, "text": " the involved the decoder and here also is the first time where in the encoder" }, { "start": 1740.48, "end": 1745.98, "text": " you you prepend this Java token now this as you can see it still goes from the" }, { "start": 1745.98, "end": 1750.48, "text": " same language to the same language but this is where you train the decoder to" }, { "start": 1750.48, "end": 1757.52, "text": " output a given language so here with the token again this is the same decoder for" }, { "start": 1757.52, "end": 1761.52, "text": " all the three languages the only difference here is every time you simply" }, { "start": 1761.52, "end": 1765.96, "text": " provided with the special token at the beginning to tell it which language it" }, { "start": 1765.96, "end": 1772.6, "text": " should decode right now so this this now we have an encoder that maps all the" }, { "start": 1772.6, "end": 1776.52, "text": " languages to a shared space and we have a decoder that conditioned on a token" }, { "start": 1776.52, "end": 1783.48, "text": " like this can output a valid code in that thing assuming this here was" }, { "start": 1783.48, "end": 1790, "text": " corrupted code now since the encoder is shared it should map the same kind of" }, { "start": 1790, "end": 1793.6, "text": " corrupted code of the different languages to the same place in the" }, { "start": 1793.6, "end": 1798, "text": " embedding space and therefore this would also this would already be enough to" }, { "start": 1798, "end": 1803.2, "text": " have this model that we desire we can input some code it doesn't actually have" }, { "start": 1803.2, "end": 1807.16, "text": " to be corrupted right we can just input some code in one language and ask the" }, { "start": 1807.16, "end": 1812.1200000000001, "text": " decoder to output the other language and this works but it doesn't work super" }, { "start": 1812.1200000000001, "end": 1817.68, "text": " well and here the authors go for another idea from the unsupervised machine" }, { "start": 1817.68, "end": 1822.48, "text": " translation literature which is back translation so back translation is a" }, { "start": 1822.48, "end": 1827.98, "text": " technique where you can tune an unsupervised machine translation model" }, { "start": 1827.98, "end": 1832.8, "text": " in a way that you would tune a supervised one but of course you don't" }, { "start": 1832.8, "end": 1838.6399999999999, "text": " have supervised data so what's the plan you will produce the data yourself using" }, { "start": 1838.6399999999999, "end": 1843.24, "text": " your own model so the plan is pretty simple it's actually contained in the" }, { "start": 1843.24, "end": 1848.6399999999999, "text": " back translation 
name so if you have a piece of code what you would do is you" }, { "start": 1848.6399999999999, "end": 1853.8799999999999, "text": " would first use your model to translate this to another language any of your" }, { "start": 1853.8799999999999, "end": 1858.6399999999999, "text": " choice now you have no clue whether this thing here is correct or not you have no" }, { "start": 1858.6399999999999, "end": 1861.96, "text": " clue and you have no way of assessing it because you don't have ground truth" }, { "start": 1861.96, "end": 1867.64, "text": " what you can do is use your model again or actually use a second model that you" }, { "start": 1867.64, "end": 1873.16, "text": " train in parallel now I believe in this case they could use the same model but" }, { "start": 1873.16, "end": 1878.72, "text": " you can that could be instable and so on but in any case you can use your system" }, { "start": 1878.72, "end": 1884.04, "text": " again to translate it back to your original language your system can do" }, { "start": 1884.04, "end": 1889.3600000000001, "text": " that right and here whatever you get as an output you know the ground truth it's" }, { "start": 1889.36, "end": 1893.7199999999998, "text": " whatever you started with so now you can compare what comes out to what you" }, { "start": 1893.7199999999998, "end": 1898.36, "text": " started with the difficulty of course is if there is a mistake you don't know" }, { "start": 1898.36, "end": 1906.6399999999999, "text": " which of the two models made a mistake and you so it could be could be that" }, { "start": 1906.6399999999999, "end": 1910.76, "text": " your original translation model made a mistake and or it could be that your" }, { "start": 1910.76, "end": 1917.12, "text": " back translation model made a mistake and you have to find a loss function that" }, { "start": 1917.12, "end": 1924.04, "text": " kind of punishes both equally or you simply keep one sort of constant and" }, { "start": 1924.04, "end": 1929.28, "text": " loss free and train the other one because there there's going to be a" }, { "start": 1929.28, "end": 1932.9599999999998, "text": " sample where you have C++ as an input and then the intermediate language is" }, { "start": 1932.9599999999998, "end": 1938.8799999999999, "text": " Python so all of the models sort of get trained once as an as a source to target" }, { "start": 1938.8799999999999, "end": 1944, "text": " translator and once as a target to source translator but I hope the the" }, { "start": 1944, "end": 1947.68, "text": " objective is clear from the back translation so now with the back" }, { "start": 1947.68, "end": 1954.8, "text": " translation you actually you train the models to go from one language to" }, { "start": 1954.8, "end": 1959.92, "text": " another language okay and that's the that's the final goal even though you do" }, { "start": 1959.92, "end": 1966.28, "text": " it without supervised data you now have a model that can encode things into a" }, { "start": 1966.28, "end": 1970.28, "text": " shared space that can decode into a language and that is attuned to" }, { "start": 1970.28, "end": 1978.2, "text": " translating from one language to another language so that's that's it how this is" }, { "start": 1978.2, "end": 1984.24, "text": " all how does this work now for evaluation the question is of course how" }, { "start": 1984.24, "end": 1989.3999999999999, "text": " do you evaluate models like this for evaluation they go to this website" }, { "start": 1989.3999999999999, "end": 1997.6399999999999, 
"text": " called geeks for geeks and this is a an online platform with computer science" }, { "start": 1997.64, "end": 2002.5200000000002, "text": " and programming articles it gathers many coding problems and presents solution in" }, { "start": 2002.5200000000002, "end": 2007.48, "text": " several programming languages okay so this is a website that teaches you to" }, { "start": 2007.48, "end": 2012.68, "text": " code and it will have like an exercise please do this and then it will provide" }, { "start": 2012.68, "end": 2017.6200000000001, "text": " solutions in the different languages now why is that cool and they have an" }, { "start": 2017.6200000000001, "end": 2025.5200000000002, "text": " example they have an example right here why is that cool because not only can" }, { "start": 2025.52, "end": 2031.36, "text": " you be relatively sure that these different functions that you have here" }, { "start": 2031.36, "end": 2036.24, "text": " do the same thing but you can also relatively be relatively sure that they" }, { "start": 2036.24, "end": 2041.4, "text": " are implemented in the similar way right because you what this website is trying" }, { "start": 2041.4, "end": 2048.6, "text": " to do is it's trying to teach the people how to how to code up an algorithm that" }, { "start": 2048.6, "end": 2051.84, "text": " they think up in their head and therefore not only is the solution" }, { "start": 2051.84, "end": 2056.4, "text": " correct and the same it is implemented in the in the same way as you can see" }, { "start": 2056.4, "end": 2061, "text": " here the construct there's this if construct is everywhere the else if is" }, { "start": 2061, "end": 2066.36, "text": " everywhere so even though some of the languages might have specialty things" }, { "start": 2066.36, "end": 2071.96, "text": " for implementing some algorithms these are really the same algorithmic the same" }, { "start": 2071.96, "end": 2076.32, "text": " expression of algorithmic thought in the different languages so that is a perfect" }, { "start": 2076.32, "end": 2081.08, "text": " parallel data set the problem of course is that there is not that many so it is" }, { "start": 2081.08, "end": 2087.16, "text": " good enough as a test set it is not good enough as a training set but given that" }, { "start": 2087.16, "end": 2092.96, "text": " it's a test set you can just have these as test set and then you can input the" }, { "start": 2092.96, "end": 2098.2, "text": " C++ and see whether or not the Java comes out the problem here of course is" }, { "start": 2098.2, "end": 2103.44, "text": " that even though this is very clear there are still you know sort of many" }, { "start": 2103.44, "end": 2107.92, "text": " variations of how you can implement that to even express the same algorithmic" }, { "start": 2107.92, "end": 2114.2000000000003, "text": " thought so metrics from natural language processing like blue just aren't going" }, { "start": 2114.2000000000003, "end": 2118.16, "text": " to be very good because they look at n-gram overlap and you can write this" }, { "start": 2118.16, "end": 2124.76, "text": " function with very different n-grams and still be very very valid and correct and" }, { "start": 2124.76, "end": 2130.36, "text": " also exact match is not going to be really the the gold standard here so" }, { "start": 2130.36, "end": 2135.16, "text": " what they do is they create a set of unit tests where for each of these" }, { "start": 2135.16, "end": 2141.52, "text": " functions they go they check their input types 
they randomly generate input" }, { "start": 2141.52, "end": 2147.7999999999997, "text": " randomly generate a set of inputs look whatever comes out and if the same thing" }, { "start": 2147.7999999999997, "end": 2153.3199999999997, "text": " comes out in all of their test functions that they consider this a good unit test" }, { "start": 2153.3199999999997, "end": 2158.3199999999997, "text": " for that function so whenever you your model now produces let's say you input" }, { "start": 2158.3199999999997, "end": 2164, "text": " Python it produces a C++ you simply put these unit tests through the C++" }, { "start": 2164, "end": 2169.16, "text": " function you produce and if they produce the same output as the Python the" }, { "start": 2169.16, "end": 2175, "text": " original Python function when on the same inputs then you consider the unit" }, { "start": 2175, "end": 2180.96, "text": " test to succeed and you consider the function to be correct that this of" }, { "start": 2180.96, "end": 2186.44, "text": " course this isn't this isn't the super duper gold standard especially with" }, { "start": 2186.44, "end": 2191.8, "text": " random inputs because usually what you want to do is test sort of corner cases" }, { "start": 2191.8, "end": 2197.36, "text": " but it's better than anything else so far I've been a long" }, { "start": 2197.36, "end": 2202, "text": " dis-advocate of unit tests honestly because I think whenever a human writes" }, { "start": 2202, "end": 2207.5600000000004, "text": " a unit test then they're probably since they have already implemented the" }, { "start": 2207.5600000000004, "end": 2212.7200000000003, "text": " function itself they're probably going to make the same mistakes or they're" }, { "start": 2212.7200000000003, "end": 2216.88, "text": " probably just going to replicate the code and thinking of the function in the" }, { "start": 2216.88, "end": 2222.44, "text": " unit test itself and therefore it doesn't really get you anything I guess" }, { "start": 2222.44, "end": 2226.32, "text": " in large organizations you write unit tests so that someone else doesn't screw" }, { "start": 2226.32, "end": 2232.2400000000002, "text": " up your code but in this case it would actually be cool because now as a human" }, { "start": 2232.2400000000002, "end": 2237.6400000000003, "text": " you could simply write a bunch of unit tests and then let your let your trans" }, { "start": 2237.6400000000003, "end": 2241.92, "text": " compiler do the heavy lifting and you simply check whether or not the" }, { "start": 2241.92, "end": 2249.12, "text": " output is good alright so how does this do here you can see they have some" }, { "start": 2249.12, "end": 2253.52, "text": " baselines the C++ to Java as I understand it is a commercial system and" }, { "start": 2253.52, "end": 2259.52, "text": " the Java to Python is an open source system both are human experts that make" }, { "start": 2259.52, "end": 2265, "text": " up these rule-based systems on how to trans on how to translate code into" }, { "start": 2265, "end": 2271.4, "text": " other languages now the if you do what they have here is trans coder beam one" }, { "start": 2271.4, "end": 2275.76, "text": " which means a beam size of one so if you don't know a beam search is very shortly" }, { "start": 2275.76, "end": 2280.88, "text": " beam search is like if you decode from your language model you can either" }, { "start": 2280.88, "end": 2285.2000000000003, "text": " always take the next token that has the highest probability this would 
be greedy" }, { "start": 2285.2000000000003, "end": 2291.08, "text": " decoding or a beam size of one or you can sort of always keep the top n" }, { "start": 2291.08, "end": 2297.2000000000003, "text": " hypotheses of what the of what the most likely output is as you can keep that as" }, { "start": 2297.2, "end": 2304.96, "text": " a you can keep the top five in memory and always decode these five on sort of" }, { "start": 2304.96, "end": 2309.48, "text": " like you have a mini batch of five sequences and you always keep the top" }, { "start": 2309.48, "end": 2314.4399999999996, "text": " five in memory so at the end of the decoding you're going to have five" }, { "start": 2314.4399999999996, "end": 2320.08, "text": " different variants of the same sentence or of the same decoded output and you" }, { "start": 2320.08, "end": 2324.8799999999997, "text": " can then decide which one you like best and usually what you do is you then" }, { "start": 2324.88, "end": 2328.88, "text": " output the one that has the highest probability which is not the same as the" }, { "start": 2328.88, "end": 2334.48, "text": " greedy because sometimes the next token will be will look one next token will" }, { "start": 2334.48, "end": 2340.28, "text": " look very good in a greedy way but you'd better take the second most likely" }, { "start": 2340.28, "end": 2345.96, "text": " because the next to next token is going to sort of make up for that to make the" }, { "start": 2345.96, "end": 2352.76, "text": " entire sequence even more likely so more beam size basically means you can keep" }, { "start": 2352.76, "end": 2358.76, "text": " more hypotheses of the output in memory until the end so if you just do the" }, { "start": 2358.76, "end": 2363.32, "text": " greedy decoding you see you already get fairly close to these baselines it's" }, { "start": 2363.32, "end": 2370.44, "text": " very very cool very interesting and if you up the beam size you surpass these" }, { "start": 2370.44, "end": 2376.4, "text": " baselines now the way they up the beam size here I find to be a bit let's call" }, { "start": 2376.4, "end": 2380.0400000000004, "text": " it a bit cheaty because when they say beam five what they mean is they keep" }, { "start": 2380.04, "end": 2385.72, "text": " the five hypotheses and then at the end I as I understand it if any of the five" }, { "start": 2385.72, "end": 2392.6, "text": " hypotheses passes all the unit tests or the most they keep it right so basically" }, { "start": 2392.6, "end": 2398, "text": " they give themselves the freedom to say whichever one of the five we output is" }, { "start": 2398, "end": 2403.4, "text": " the best that's the one we count and of course that's not really a match to the" }, { "start": 2403.4, "end": 2409.52, "text": " commercial or to the baseline system because it can output one thing now" }, { "start": 2409.52, "end": 2415.08, "text": " it is maybe a good practical application to give the human that you know you input" }, { "start": 2415.08, "end": 2419.4, "text": " a function you give the human five options to choose from and it can choose" }, { "start": 2419.4, "end": 2427.48, "text": " and thereby decide which one the human likes best but it is sort of it is a" }, { "start": 2427.48, "end": 2432.48, "text": " wonky what I like more is this here the beam 10 top one this is what you would" }, { "start": 2432.48, "end": 2436.84, "text": " actually do so we could keep 10 hypotheses during decoding and that the" }, { "start": 2436.84, "end": 2443.1200000000003, "text": 
" end output the top one the top likely one and as you can see that is better" }, { "start": 2443.1200000000003, "end": 2447.2400000000002, "text": " than greedy but it is worse than where you you know give yourself the freedom" }, { "start": 2447.2400000000002, "end": 2452.7200000000003, "text": " to output multiple ones of course though they say that most of the errors that" }, { "start": 2452.7200000000003, "end": 2458.28, "text": " this top one makes come from compilation errors when the target language is Java" }, { "start": 2458.28, "end": 2464.44, "text": " or C++ it suggests that the beam and top one metric could easily be improved we" }, { "start": 2464.44, "end": 2469.56, "text": " leave this to future work which this again I find valid right so if you if" }, { "start": 2469.56, "end": 2474.68, "text": " your method is I'm going to keep the top 10 hypothesis until the end and" }, { "start": 2474.68, "end": 2480.16, "text": " then I'm going from the top and I simply compile them and I output the first one" }, { "start": 2480.16, "end": 2489.04, "text": " that compiles that that's not cheating right that's a valid thing again yeah so" }, { "start": 2489.04, "end": 2496.96, "text": " in that way I can I can understand what they're saying right here okay so they" }, { "start": 2496.96, "end": 2503, "text": " give some examples some of which I find very interesting so the first thing here" }, { "start": 2503, "end": 2509.7599999999998, "text": " is that oh yeah by the way I've said in the I've said that the tokenizer between" }, { "start": 2509.7599999999998, "end": 2513.72, "text": " the natural languages is shared they make a little tweak here in that they" }, { "start": 2513.72, "end": 2518.2, "text": " tokenize the different languages with their language respective tokenizers" }, { "start": 2518.2, "end": 2524.16, "text": " which will still end up tokenizing pretty much you know this the print" }, { "start": 2524.16, "end": 2531.2799999999997, "text": " statement in C++ or in Java no actually the print statement in Python is print" }, { "start": 2531.2799999999997, "end": 2534.7999999999997, "text": " and in Java it's println and so on but it will still like the all the if" }, { "start": 2534.7999999999997, "end": 2542.7599999999998, "text": " statements it will still tokenize into the same into the same word but it's" }, { "start": 2542.76, "end": 2554.92, "text": " simply not viable to to parse Python with a C++ parser okay so we have looked" }, { "start": 2554.92, "end": 2560.4, "text": " at this the results this is one of the results they look at their shared" }, { "start": 2560.4, "end": 2564.44, "text": " embedding space and this is a t-sneak plot so a 2d projection of this shared" }, { "start": 2564.44, "end": 2568.4, "text": " embedding space and you can see that this is actually happening so the" }, { "start": 2568.4, "end": 2574.28, "text": " different so null null and none are mapped to similar locations println" }, { "start": 2574.28, "end": 2580.88, "text": " and cout are mapped to similar locations in this space so this is exactly what we" }, { "start": 2580.88, "end": 2587, "text": " want this is sort of a verification that this method of embedding the different" }, { "start": 2587, "end": 2592.1600000000003, "text": " languages into the same space really turns out such that whatever means the" }, { "start": 2592.1600000000003, "end": 2596.76, "text": " same thing is mapped to the same place you can see here catch and accept two" }, { "start": 2596.76, "end": 
2600.48, "text": " very different tokens are mapped to the same place simply because they're used" }, { "start": 2600.48, "end": 2607, "text": " in the same sort of constructs across the languages very cool one of these" }, { "start": 2607, "end": 2611.96, "text": " examples here is quite impressive and kind of shows the difference between" }, { "start": 2611.96, "end": 2619.2400000000002, "text": " this and and rule-based translation in this function right here you have a C" }, { "start": 2619.2400000000002, "end": 2624.36, "text": " plus plus function that takes a character pointer to that is called str" }, { "start": 2624.36, "end": 2632.08, "text": " str in as an input now in C++ strings are at least in old versions of C++" }, { "start": 2632.08, "end": 2637.4, "text": " strings are handled as character arrays so a string is indistinguishable from a" }, { "start": 2637.4, "end": 2643.52, "text": " character array and in this case usually what you do is you don't input the array" }, { "start": 2643.52, "end": 2649.6400000000003, "text": " because that will cause a copy you input a pointer to the first to you input a" }, { "start": 2649.64, "end": 2656.7999999999997, "text": " pointer to the array and that would define the string okay so if you" }, { "start": 2656.7999999999997, "end": 2662.24, "text": " translate this again this the type of this is simply character array if you" }, { "start": 2662.24, "end": 2667, "text": " translate this with this transcoder system that they've built into Java in" }, { "start": 2667, "end": 2672.4, "text": " Java there is a type called string right there's a native type called string and" }, { "start": 2672.4, "end": 2677.92, "text": " is that true I think oh yeah that's and then that's handled really weirdly in" }, { "start": 2677.92, "end": 2685.8, "text": " the JVM I think yes so there is it at least there is a type called string so" }, { "start": 2685.8, "end": 2690.32, "text": " it would map that it would recognize are you mean a string therefore I'm going to" }, { "start": 2690.32, "end": 2694.76, "text": " put a string here and it uses all the string method like string length string" }, { "start": 2694.76, "end": 2700.4, "text": " character at and so on where in C++ this is just an array and you just have array" }, { "start": 2700.4, "end": 2706.28, "text": " accesses now they take this same C++ function and only change one thing they" }, { "start": 2706.28, "end": 2711.44, "text": " change the name of the parameter everything else is the same but now the" }, { "start": 2711.44, "end": 2717.32, "text": " character array is called our okay and they put it through the same system and" }, { "start": 2717.32, "end": 2723.88, "text": " that system now outputs a function that takes in a character array called our" }, { "start": 2723.88, "end": 2730.2400000000002, "text": " instead of a string and it uses you know here the property length it uses array" }, { "start": 2730.2400000000002, "end": 2735.6400000000003, "text": " access instead of this car character at method so simply by changing the name" }, { "start": 2735.64, "end": 2741.3599999999997, "text": " and this is something where I believe the rule-based systems can this can be" }, { "start": 2741.3599999999997, "end": 2746.72, "text": " an advantage over rule-based system because what this here does is it simply" }, { "start": 2746.72, "end": 2753.64, "text": " says oh I've seen a lot of humans in my code base that use this use like stir as" }, { "start": 2753.64, "end": 2759.96, "text": " a as 
a variable name and that usually means that the constructs here are like" }, { "start": 2759.96, "end": 2766.32, "text": " the constructs in Java where people use strings and I've seen other places where" }, { "start": 2766.32, "end": 2771.56, "text": " people use you know names like this right here and usually that is used in" }, { "start": 2771.56, "end": 2777.2400000000002, "text": " the same context as in Java people use character arrays right so it in" }, { "start": 2777.2400000000002, "end": 2782.28, "text": " programming it's not only important what the the code actually does but a lot of" }, { "start": 2782.28, "end": 2786.7200000000003, "text": " programming goes via naming of things like other programmers will read your" }, { "start": 2786.72, "end": 2791.08, "text": " code and by reading stir right here they will sort of assume that this is a" }, { "start": 2791.08, "end": 2796.8799999999997, "text": " string whereas if they read our right here they will assume you're a pirate" }, { "start": 2796.8799999999997, "end": 2802.8799999999997, "text": " and you are referring to a character array and they will treat the code the" }, { "start": 2802.8799999999997, "end": 2807.9399999999996, "text": " code means something different and these systems right here these neural machine" }, { "start": 2807.9399999999996, "end": 2812.56, "text": " translation systems can actually understand that part because they do" }, { "start": 2812.56, "end": 2817.7599999999998, "text": " statistical inference on code that humans wrote if you change this back to" }, { "start": 2817.7599999999998, "end": 2823.7999999999997, "text": " say input then again it goes back to a string and uses all the string functions" }, { "start": 2823.7999999999997, "end": 2830.44, "text": " so that's fairly impressive in my mind and it yeah definitely an advantage over" }, { "start": 2830.44, "end": 2835.32, "text": " rule-based systems of course the disadvantage over rule-based systems is" }, { "start": 2835.32, "end": 2839.04, "text": " that in rule-based systems you can almost get on like sometimes you can" }, { "start": 2839.04, "end": 2844, "text": " even guarantee that the code does the same thing here you can't they give some" }, { "start": 2844, "end": 2850.48, "text": " examples of failed translations where so now you get you run into this problem" }, { "start": 2850.48, "end": 2855.2, "text": " where the min function in Python is overloaded it can either give you the" }, { "start": 2855.2, "end": 2860.7599999999998, "text": " minimum of a sequence or it can give you the minimum of two things now this is" }, { "start": 2860.7599999999998, "end": 2867.56, "text": " translated to Java right here and math dot min is not overloaded in Java it" }, { "start": 2867.56, "end": 2873.44, "text": " only gives you the minimum of two things and not the minimum of an array and it" }, { "start": 2873.44, "end": 2877.92, "text": " still outputs that now given enough data probably could learn because these" }, { "start": 2877.92, "end": 2882.84, "text": " things are all context dependent but this is one of the this is one of the" }, { "start": 2882.84, "end": 2891.6, "text": " failure cases of these models of course all right so this was this paper I" }, { "start": 2891.6, "end": 2898.24, "text": " I've read that the code of this and the unit tests will be output will be put" }, { "start": 2898.24, "end": 2905.5, "text": " online at some times they are not right now if I if I hear about it I can link to" }, { "start": 2905.5, "end": 
2909.92, "text": " it or let you know about it let me know what you think of this paper in the" }, { "start": 2909.92, "end": 2922.4, "text": " comments share it out and subscribe if you haven't yet and bye bye" } ]
pZyxlf6l0N8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Thinking While Moving: Deep Reinforcement Learning with Concurrent Control
[ "Science & Technology" ]
[ "deep learning", "machine learning", "reinforcement learning", "vector to go", "vtg", "continuous", "control", "robot", "concurrent", "deep rl", "deep neural networks", "berkeley", "google", "grasping", "qlearning" ]
Classic RL "stops" the world whenever the Agent computes a new action. This paper considers a more realistic scenario where the agent is thinking about the next action to take while still performing the last action. This results in a fascinating way of reformulating Q-learning in continuous time, then introducing concurrency and finally going back to discrete time. https://arxiv.org/abs/2004.06089 Abstract: We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving". Authors: Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. So if you look at these two robots, the left one labeled blocking and the right one labeled concurrent, the blocking robot, as you can see, always has these little pauses in its movement where it does nothing and then kind of continues with its motion, while the one on the right performs one continuous motion. The reasoning here is that the robot has a camera, and the camera takes some time to register what's going on. And then the robot also has a computer inside, and the computer also takes some time to decide what to do based on what the camera saw. While all of this is happening, the robot on the left just freezes. So it performs an action and then it freezes, because it takes time to register a state and compute a new action. The robot on the right takes the same amount of time to do these things, it also takes time to register the state and to compute an action, but it does that as it is executing the last action. So it does this in parallel, and once it has computed a new action, it executes that new action right on top of the old action. And that gives this one big fluid motion. This requires a new formulation of reinforcement learning, and that's what this paper does: Thinking While Moving: Deep Reinforcement Learning with Concurrent Control, by people from Google Brain, UC Berkeley and X. So they have a nice diagram here in the supplementary material to show you what is going on in their framework. In classic reinforcement learning, you have this dichotomy between agent and environment, right? So the agent and the environment. The agent is supposed to act in the environment in the following manner: the environment will send an observation to the agent, and the observation in this case is the picture from the camera. The agent then thinks about what to do with the observation, which is called a policy. The policy pi will take an observation, output an action of what to do, and send the action back to the environment. In classic RL, you assume that this part here kind of freezes time. So the environment will output an observation, and the process of registering the observation, of computing the action and of sending the action back happens in zero time. Of course, it doesn't actually happen in zero time, but in our reinforcement learning problems, for example in the OpenAI Gym, the environment just stops until it gets the next action. Then it performs the action in the environment, and by that the environment changes and time happens. And then it stops again as we think about the next action. This is what we usually call one step in the classic formulation of RL. The only point where time happens is when the action is executed; no time happens when the state is registered or when the action is computed. And that's what you see here on the left. In blue, you have the state registration; this is, for example, the camera. The camera has some time in order to register and store the image that it has taken, maybe post-process it a little bit. But in our classic formulation, as you can see here, if this is time, it happens instantaneously, all at the same time. And then this is the policy, this is thinking about what to do, right? This is the evaluation of your neural network.
If this is a neural network, I'm drawing a small neural network here, this also happens instantaneously in these formulations, and only as the action is executed does time happen. Then until here, time freezes again, and only once the action is determined does time happen again. In the new formulation now, as you've already seen, we have this kind of continuous framework where, let's say you're here, it actually takes time for the camera to post-process the image. It takes more time for you to think about what to do. And then once you decide on an action, that action is going to happen. But we can say, for example, at this point you tell the camera to take a new picture of the state. That takes time, and while that's happening, the old action is still ongoing. In fact, you don't even have to say the action is still ongoing; the point is that the world is still moving. The world is still changing while you think, while you post-process and while you evaluate your policy. The world keeps moving, and only after some time, after this lag time here, have you decided on a new action. Then you can break off that old action and perform your new action. All of this is happening in time. So this is the new framework, and now you see the problem: you base your decision on the state as it was captured back here, at the earlier time. That's what you store and think about. But you perform the action at this later point in time. So there is a considerable difference, because the world has now changed. The action you perform is based on old knowledge of the world, and you basically have no way of making the action depend on the current state of the world, because that would require you to capture the current state, and that takes time, and in that time the world has already shifted again. So the agent is required to think ahead about the action it is currently performing and how the world changes during that. This new formulation of reinforcement learning puts this in a formal way, and we'll quickly go through that. So they go into the very basics here, and introduce the usual quantities: the policy pi, the transition distribution, the reward, and the Q and value functions. We'll just quickly go over these. You have the agent and you have the environment. The environment has this transition function. The transition function says: OK, I'm in this state and the agent does this action, and here is the probability distribution over the next state. So it says that your little spaceship is here and the meteors are here, and if you push the button for shoot, then you'll be in the same place, the meteors will still be here, but you'll have a little shot coming out of your spaceship. That's what the environment does. You give it a state and an action, and it gives you the next state. It also gives you a reward. The reward works the same way: it is a second output that tells you, let's say, negative one if you die, zero if nothing happens, or plus one if you shoot a meteor. That's the reward.
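To make this blocking assumption concrete, here is a minimal sketch of the classic agent-environment loop, assuming the older Gym-style reset/step API (4-tuple return; newer versions of the library differ), with a random placeholder policy standing in for the neural network:

```python
import gym

env = gym.make("CartPole-v1")

def policy(obs):
    # Placeholder for the agent's "thinking", e.g. a neural network
    # forward pass. In the blocking formulation, the simulated world is
    # frozen while this function runs, no matter how long it takes.
    return env.action_space.sample()

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = policy(obs)                        # thinking: zero simulated time
    obs, reward, done, info = env.step(action)  # only here does the world move
    total_reward += reward
```

The whole point of the paper is that on a real robot there is no such step boundary: the camera, the network evaluation and the actuators all consume wall-clock time while the world keeps evolving.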
These two quantities, transition and reward, you can think of as the real world; that's how you model the environment. Then the agent has these quantities: first, the policy. What pi does is much like the transition function, but pi takes in a state and gives you an action. This is the agent deciding, this is thinking. The policy can take various forms, but for now it's just a function. The agent also has a Q function and a V function, and these are quite similar. The Q function does the following: if you are in state S and you have several options of what to do, say action one, action two and action three, then the Q function of S and A1, with superscript pi, tells you: what is my expected reward if I'm in state S, perform action A1, and after that follow the policy pi for all subsequent steps? So right now I take action A1, ignoring my policy, but after that I follow the policy pi; what is my expected reward going to be until the end of the episode? That's the Q function. The value function is very similar, but it only cares about the state: if I'm in state S and I just follow the policy pi, including in the first step, what is my expected reward going to be over the course of the episode? Those are the Q and value functions, and these are the things you actually want to learn. You can see why Q-learning is popular: if you have a good Q function, you can simply plug every action into it and take the action with the maximum Q value. If your policy is to always take the action with the maximum Q value, then, in a somewhat self-referential way, taking the maximum Q value under that policy will be optimal. All right, that was convoluted, but let's start with modeling the environment in this continuous framework. Instead of having the next state be determined by the current state and action, in the continuous framework they do this via a differential equation. The dS is how the environment changes, and that change is determined by two functions, F and G. F is your classic environment function: it takes in a state and an action at time t, these are now functions of time, and it outputs how the state changes. And G here goes with a Wiener process, which is there to introduce stochasticity, as I understand it: in the classic formulation the transition model gives you a probability distribution, and this Wiener process is responsible for introducing that probabilistic nature into the differential equation. But ultimately it simply tells you how the state changes depending on the current state and the action I perform. The reward is also pretty simple: tau here is a trajectory, and a trajectory is simply the state and action over time. You integrate, from time zero to infinity or to the end of the episode, the reward function at each point in time. So I go through my episode and I get high reward here, not so high there, and so on.
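Written out, a hedged reconstruction of these two equations would look something like this; the exact notation in the paper may differ, for example whether f and G depend on t explicitly, or whether a discount factor appears:

```latex
% Continuous-time dynamics: deterministic drift f plus Wiener-process
% noise entering through G (a sketch, not copied from the paper).
\begin{align}
  \mathrm{d}s &= f(s_t, a_t)\,\mathrm{d}t + G(s_t, a_t)\,\mathrm{d}W_t \\
  R(\tau)     &= \int_{0}^{T} r(s_t, a_t)\,\mathrm{d}t
\end{align}
```

Here W_t denotes a standard Wiener process and T the (possibly random) end of the episode.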
So the integral under this reward curve will be my total reward, just like we sum up the rewards of individual steps in the discrete case. In the continuous case, you can think of each infinitesimal time step giving you a tiny bit of reward, so the entire reward is just an integral. Then we get to the value function for a given state at time t. Think about what this is: the value function of a state is the reward I can expect when starting in this particular state and then following policy pi until the end of the episode. And that is the expectation, over all trajectories that come from my policy, of the reward of that trajectory. If I'm here, my policy is also a distribution, so it can produce multiple trajectories, and I want the expected value of the reward over all of the trajectories starting from state s_t. Now here I have a bit of a problem with the notation, because they integrate from t equals zero, but the t already appears as the starting time. I believe this should be an integral over t prime starting from t, with the reward evaluated at t prime. In any case, I think it should start from this state here and not from time zero, but I might be missing something; I'm not the biggest integrator in the world. All right, then you have the Q function. Think of what the Q function is: in the discrete case, the Q function tells you, if I'm in state s and perform action a, what is my expected reward going to be? Here we have to introduce something new: if I'm in state s and I take action a at time t, I also have to say how long I'm going to perform that action for, until I pick the next action. So H is the time until you perform the next action. And so the Q function is going to be the integral from time t to time t plus H, that's how long you perform the action, of the reward of performing that action given the state, plus the value function at the end of that. So you're in s_t, you perform action a, this here is your state at time t plus H, and from there on you could perform many more actions. But in the original notion of the Q function, it tells you: if I'm here, I perform this action, and after that I act according to policy pi, what is my expected reward? And there's a classic recurrence relation in reinforcement learning that says the Q function of s_t and a is the reward I get from performing a in state s_t, plus the value function at the next state, because the value function is exactly the reward you would get by following policy pi from that next state on, and the Q function means I perform a now and after that I follow pi. This is the continuous analog: you perform the action for H time, and after that H time you just go with your policy, and that part is the value function.
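Again as a hedged sketch (discounting omitted, and the integration variable written as t prime to avoid the notation issue mentioned above), the continuous value and Q functions described here would be roughly:

```latex
% Continuous-time value function, and Q function with one action held
% for duration H; a reconstruction from the spoken description.
\begin{align}
  V^{\pi}(s_t) &= \mathbb{E}_{\tau \sim \pi}\!\left[
      \int_{t}^{T} r(s_{t'}, a_{t'})\,\mathrm{d}t' \right] \\
  Q^{\pi}(s_t, a, H) &= \mathbb{E}\!\left[
      \int_{t}^{t+H} r(s_{t'}, a)\,\mathrm{d}t'
      + V^{\pi}(s_{t+H}) \right]
\end{align}
```

The second line mirrors the discrete recurrence Q(s, a) = r(s, a) + V(s'): execute one action for a while, then let the value function account for everything after.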
So this is the continuous formulation of the problem, and now they can introduce these lagging times. In their diagram up here, they define these notions. You have your state s_t right here; after this time, you capture the new state, you decide on an action, and then you perform it for H time, until here. Is that correct? Is the (i minus 1)-th action performed at this time and the i-th action at this time? No, that reading makes no sense, so let's read it carefully. This here is when you capture the state, and then you need time to think, right, this is thinking. And then you perform the new action at that point; this in between is the lag time. So you want to know: if I perform this action until this time here, what happens? The new Q function takes this into account. It tells me: I'm in state s, and thinking leads me to here, while this, the old action, is still happening; I observe the state while the old action is still running. So it means: this old action is happening right now, and after thinking, I do the new one. I'm at time t, the old action is still ongoing, and then after thinking, at t plus t_AS, I perform the new action, until time t plus H. What's my Q function? My Q function is going to be the integral from time t, where I start observing the state and start thinking, until t plus t_AS; during that stretch I am still performing the old action, so this part is the reward in the state given the old action. And then at that time I switch over to the new action, so from there until time t plus H, I now perform the new action. This entire part, executing the old action until thinking is done and then executing the new action, takes the place of the first part of the classic Q function. Before, it was simply executing one action, because we didn't have this concurrency yet: execute the action, and after that it's the value function. Now it's executing two actions: first finish the old action, then, once you're done thinking, execute the new action, and then it's the value function from there on. I hope this is clear; it wasn't clear to me until just now either. Written out, a sketch of it is below.
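Here is that concurrent Q function as one hedged formula; I'm writing t_AS for the thinking (action selection) lag, a_{i-1} for the still-running old action and a_i for the new one, which may not match the paper's exact symbols:

```latex
% Concurrent Q function: the old action runs while the agent thinks
% (from t to t + t_AS), then the new action runs until t + H, and the
% value function covers everything afterwards. A reconstruction.
\begin{equation}
  Q^{\pi}(s_t, a_{i-1}, a_i, t_{AS}, H) =
  \mathbb{E}\!\left[
      \int_{t}^{t+t_{AS}} r(s_{t'}, a_{i-1})\,\mathrm{d}t'
    + \int_{t+t_{AS}}^{t+H} r(s_{t'}, a_{i})\,\mathrm{d}t'
    + V^{\pi}(s_{t+H})
  \right]
\end{equation}
```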
All right. They then define the Monte Carlo estimator, where you estimate this with samples of trajectories instead of expectations, and they define the Bellman backup operator. The Bellman backup operator is an important quantity in value-based reinforcement learning, because it is basically what I talked about before: your policy is to always select the action with the maximum Q value, that's what's down here, and for the policy you arrive at that way you can give certain optimality guarantees. In essence, this operator is a so-called contraction. A contraction means that if you have an operator T and two points X1 and X2 that are some distance apart, then after you apply the operator, T X1 and T X2 are closer together. That basically means the Q functions of the individual states get closer and closer together, and you converge to a single Q function: given enough time and enough data, there is one fixed-point Q function you converge to, and under assumptions you can show in classic RL that this is the optimal, the true, Q function. So they first prove this, and then they go back to discrete time, but now a discrete-time formulation with this lag in it, and they prove that this Bellman operator is also a contraction. The contraction part basically means that if you perform Q-learning, you are going to arrive at a solution; that's what it means to be a contraction. In classic RL that solution is the optimal Q function, but here, I actually don't know whether the same optimality statement holds. So they try this out, and they introduce one last important concept, what they call vector-to-go. It basically means that at the point where they start thinking, and this is a good place to show it, they also pass along the last action. So at this point right here, where they capture the state, the state also contains information about what part of the action you started here is still outstanding. They illustrate this down here: maybe your action was to move your robot arm from down here to up here; that was your planned action at this point in time. Now, if you perform the action and here you start capturing the next state, then you would also give this particular remaining vector to the agent. So not only do you tell it, hey, by the way, my last action was a t minus one, as you would need for the Q value; you also say, and this much of it is still outstanding: I wanted to move my arm right here, and I still have to do this part of the motion. You can see why the algorithm learns much better given that information, because otherwise it would basically have to infer that vector by differencing the commanded action against what probably happened in the meantime (a tiny sketch of what such an observation could look like follows at the end). So they test this out, and what results is the robot videos you've seen before, where they can recover the performance of the original Q-learning in this continuous framework. Here on the left side you have blocking actions, and where it says yes, that is the old framework; you see a grasp success of about 92 percent. If you go to non-blocking actions but use none of the concurrent information, the grasp success suffers. But you can recover the grasp success if you do give the concurrent information: introduce a time-step penalty, give the vector-to-go and the information about the previous action. You can also see that the episode duration is much lower with the continuous actions than in the old framework, naturally, because you don't need to pause. That was the simulated robotics; in the real-world robotic grasping results you see similar numbers, in that with blocking actions your grasp success is higher than without, but the duration of your policy is cut in half, so maybe this is a trade-off worth considering. I think this is a pretty cool framework, and I think there is going to be a lot of work still outstanding here. I invite you to check out the paper and look at their videos and their ablation studies of what's important and what's not. And with that, bye bye.
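As that promised sketch, here is one way the augmented observation with vector-to-go could be assembled; all names here are hypothetical illustrations, not code from the paper:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ConcurrentObservation:
    state: np.ndarray            # e.g. image features captured at time t
    previous_action: np.ndarray  # the action that is still executing
    vector_to_go: np.ndarray     # the part of that action not yet executed

def make_observation(state: np.ndarray,
                     prev_action_target: np.ndarray,
                     current_position: np.ndarray) -> ConcurrentObservation:
    # Vector-to-go as the remaining displacement of the still-running
    # action: "I wanted to move the arm to prev_action_target, and from
    # current_position this much of that motion is still outstanding."
    vtg = prev_action_target - current_position
    return ConcurrentObservation(state, prev_action_target, vtg)

# Usage: the arm was told to move to (1, 1, 1) and is halfway there.
obs = make_observation(
    state=np.zeros(16),
    prev_action_target=np.array([1.0, 1.0, 1.0]),
    current_position=np.array([0.5, 0.5, 0.5]),
)
print(obs.vector_to_go)  # -> [0.5 0.5 0.5]
```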
[ { "start": 0, "end": 7, "text": " Hi there. So if you look at these two robots, the left one labeled blocking, the right one labeled concurrent," }, { "start": 7, "end": 15, "text": " the blocking robot, as you can see, always has these little pauses in its movement where it does nothing" }, { "start": 15, "end": 24, "text": " and then it kind of continues with its motion, while the one on the right is one continuous motion that it does." }, { "start": 24, "end": 33, "text": " So the reasoning here is that the robot has a camera and the camera takes some time to register what's going on." }, { "start": 33, "end": 41, "text": " And then also the robot has a computer inside and the computer also takes some time to decide what to do based on what the camera saw." }, { "start": 41, "end": 48, "text": " And while all of this is happening, the robot on the left just freezes. So it performs an action and then it freezes" }, { "start": 48, "end": 58, "text": " because it takes time to register a state and compute a new action, whereas the robot on the right, it takes the same amount of time to do these things." }, { "start": 58, "end": 66, "text": " It also takes a time to register the state and to compute an action, but it does that as it is executing the last action." }, { "start": 66, "end": 75, "text": " So it does this in parallel and then it executes the action. Once it has computed a new action, it executes that new action right on top of the old action." }, { "start": 75, "end": 83, "text": " And that gives this one big fluid motion. So this requires a new formulation of reinforcement learning." }, { "start": 83, "end": 93, "text": " And that's what this paper does. Thinking while moving deep reinforcement learning with concurrent control by people from Google Brain, UC Berkeley and X." }, { "start": 93, "end": 103, "text": " So they have a nice diagram here in the supplementary material to show you what is going on in their framework." }, { "start": 103, "end": 111, "text": " So in classic reinforcement learning right here, in classic reinforcement learning, you have this dichotomy between agent and environment, right?" }, { "start": 111, "end": 120, "text": " So the agent and the environment. Now, the agent is supposed to kind of act in the environment in the following manner." }, { "start": 120, "end": 129, "text": " The environment will send an observation to the agent. The observation in this case is the picture of the camera." }, { "start": 129, "end": 139, "text": " So it sends an observation and the agent will think what to do with the observation, which is called a policy." }, { "start": 139, "end": 149, "text": " The policy pie will take an observation and then output an action of what to do and will send back the action to that environment." }, { "start": 149, "end": 160, "text": " So in classic RL, you assume that this part here is kind of freezes time." }, { "start": 160, "end": 174, "text": " So the environment will output an observation and as and the process of registering the observation of computing the action and of sending the action back is happens in zero time." }, { "start": 174, "end": 186, "text": " Of course, it doesn't happen in zero time, but in our reinforcement learning problems, for example, the OpenAI Gym, the environment just stops until it gets the next action." }, { "start": 186, "end": 194, "text": " And then it performs the action, right? It performs the action in the environment and by that the environment changes and time happens." 
}, { "start": 194, "end": 206, "text": " And then it stops again as we think of the next action, right? So this is we usually call this one step in the in the kind of classic formulation of RL." }, { "start": 206, "end": 216, "text": " The only point that time happens is when the action is executed. No time happens when the state is registered or when the action is computed." }, { "start": 216, "end": 226, "text": " And that's what you see here on the left. So in blue, you have the state registration. This is, for example, the camera." }, { "start": 226, "end": 236, "text": " The camera has some time in order to register and store the image that it has taken, right? Maybe post process it a little bit." }, { "start": 236, "end": 248, "text": " So that's what the camera does. But in our classic formulation, as you can see here, if this is time, it happens instantaneously all at the same time." }, { "start": 248, "end": 256, "text": " And also, this is the policy. This is thinking what to do, right? This is your evaluation of your neural network." }, { "start": 256, "end": 272, "text": " If this is a neural network, I'm drawing a small neural network here. This happens instantaneously in these formulations and only as the action is executed, time happens, right?" }, { "start": 272, "end": 280, "text": " And then until here, time freezes again and only once the action is determined, time happens again." }, { "start": 280, "end": 292, "text": " In the new formulation now, as you've already seen, what we have here is this kind of continuous framework where, let's say you're here," }, { "start": 292, "end": 300, "text": " it actually takes time for the camera to post process the image. It takes more time for you to think about what to do." }, { "start": 300, "end": 304, "text": " And then once you decide on an action, that action is going to happen, right?" }, { "start": 304, "end": 314, "text": " But we can say, for example, at this point, you tell the camera to take a new picture of the state, right?" }, { "start": 314, "end": 320, "text": " But that takes time. And while that's happening, the old action is still ongoing, right?" }, { "start": 320, "end": 326, "text": " So you don't even have to say the action is still ongoing, but the world is still moving, right?" }, { "start": 326, "end": 333, "text": " The world is still changing while you think, while you post process and while you evaluate your policy." }, { "start": 333, "end": 338, "text": " The world is thinking and only after some time, right?" }, { "start": 338, "end": 343, "text": " After this lag time here, have you decided on a new action?" }, { "start": 343, "end": 349, "text": " And then you can break that old action and kind of perform your new action." }, { "start": 349, "end": 355, "text": " And all of this is happening in time. So this is the new framework." }, { "start": 355, "end": 366, "text": " Now you see the problem here. The problem is that you base your decisions on the state and time T on time, sorry, time H here." }, { "start": 366, "end": 371, "text": " You base your decisions on a state as it was at that time, right?" }, { "start": 371, "end": 376, "text": " That's what you use to think right here. That's what you store and think about." }, { "start": 376, "end": 381, "text": " But you perform. So you perform the action at this point in time." }, { "start": 381, "end": 386, "text": " So there is a considerable difference here because the world has now changed." 
}, { "start": 386, "end": 392, "text": " So you see the problem. The action you perform is based on an old knowledge of the world." }, { "start": 392, "end": 398, "text": " And you have basically no way of making the action dependent on the current state of the world," }, { "start": 398, "end": 403, "text": " because that would require you to capture the current state and that takes time." }, { "start": 403, "end": 406, "text": " And in that time, the world has already shifted again." }, { "start": 406, "end": 417, "text": " So the agent is kind of required to think ahead about the action that it is currently performing and how the world changes according to that." }, { "start": 417, "end": 425, "text": " So this new formulation of reinforcement learning formulates this in a formal way." }, { "start": 425, "end": 432, "text": " It formulates it in a formal way and will quickly go through that." }, { "start": 432, "end": 440, "text": " Yes, so they go into the very basics here. We'll quickly, quickly go through them." }, { "start": 440, "end": 453, "text": " So they introduce these quantities like the policy pi, the transition distribution, the reward and the Q and value function." }, { "start": 453, "end": 459, "text": " Now we'll just quickly go over these. So you have the agent. Sorry about that." }, { "start": 459, "end": 467, "text": " You have the agent and you have the environment. Hello." }, { "start": 467, "end": 480, "text": " So if you think of the agent and the environment, the environment has this transition function." }, { "start": 480, "end": 489, "text": " The transition function, it takes it says, OK, I'm in this state and the agent does this action." }, { "start": 489, "end": 494, "text": " And here is the probability distribution over the next state." }, { "start": 494, "end": 501, "text": " So it says that your little spaceship is here and the meteors are here." }, { "start": 501, "end": 514, "text": " And then you push the button. If you push the button for shoot, then you'll be in the same place." }, { "start": 514, "end": 520, "text": " The meteors will still be here, but you'll have a little shot coming out of your spaceship." }, { "start": 520, "end": 527, "text": " That's what the environment does. Right. So you give it a state and action and it will give you the next state." }, { "start": 527, "end": 532, "text": " It will also give you a reward. Right." }, { "start": 532, "end": 548, "text": " The reward in the same thing, reward here will be a second output here that tells you either, let's say, negative one if you die or zero if nothing happens or plus one if you shoot a meteor." }, { "start": 548, "end": 557, "text": " That's the reward. So this you can think of as the real world. So these two quantities are in the real world in the environment." }, { "start": 557, "end": 564, "text": " So that's how you model the environment. Then the agent has these quantities called the policy." }, { "start": 564, "end": 579, "text": " So what pi does is much like the transition, but pi takes in a state and gives you an action. Right. So this is now the agent deciding this is thinking." }, { "start": 579, "end": 589, "text": " The policy takes in a state and gives an action. And this contains various forms, but it's just a function for now." }, { "start": 589, "end": 596, "text": " The agent also has a Q and V function, and these are quite, quite similar." 
}, { "start": 596, "end": 607, "text": " So the Q function, what the Q function will do if if you were in a state and you have several options of what to do, right, you have action one, action two and action three." }, { "start": 607, "end": 620, "text": " You're in state S. The Q function of S and A one superscript pi would tell you the following." }, { "start": 620, "end": 629, "text": " It would tell you what's my expected reward if I'm in state S. That's here. And perform action A." }, { "start": 629, "end": 642, "text": " So A one. So if I now take this path and after this path, I follow the policy pi, right, the policy pi for each of the following." }, { "start": 642, "end": 649, "text": " So it's like right now I take action A one. I don't care about my policy. But after that, I follow the policy pi." }, { "start": 649, "end": 655, "text": " What is my expected reward going to be until the end of the episode? That's the Q function." }, { "start": 655, "end": 661, "text": " And the value function here, very similar, but it only cares about the state." }, { "start": 661, "end": 672, "text": " It says if I'm in state S and I just follow the policy pi, even in the first step, right, I just follow this policy pi." }, { "start": 672, "end": 681, "text": " What is my expected reward going to be over the course of the episode? That is the Q and the value functions." }, { "start": 681, "end": 689, "text": " You can see why Q learning is popular. If you have a good Q function and the Q and the value function, these are the things that you actually want to learn, right?" }, { "start": 689, "end": 703, "text": " If you have a good Q function, you can simply always plug in every action into your Q function and then simply take the maximum, the action that has the maximum Q value," }, { "start": 703, "end": 712, "text": " because that will give you the best reward if your policy pi, right, is kind of self-referential." }, { "start": 712, "end": 726, "text": " If your policy is to always take the maximum Q value, then taking the maximum Q value with the policy, given that you take the maximum Q value, will be optimal." }, { "start": 726, "end": 736, "text": " All right, this was very convoluted, but all right, so let's start off with modeling the environment in this continuous framework." }, { "start": 736, "end": 745, "text": " So instead of having the next state be determined by the current state in action, in the continuous framework, they do this via differential equation." }, { "start": 745, "end": 754, "text": " So the DS is how does the environment change? This is the change in the environment that is determined by two functions, F and G." }, { "start": 754, "end": 762, "text": " So F is your classic environment function. It takes in a state and an action at time t, right?" }, { "start": 762, "end": 768, "text": " These are now functions and it will output how the state changes." }, { "start": 768, "end": 783, "text": " And the G here is, this is a Wiener process, is to introduce stochasticity, as I understand it, because in the classic formulation, the transition model gives you a probability up here, a probability distribution." }, { "start": 783, "end": 793, "text": " So this Wiener process is responsible for introducing that probabilistic nature into this differential equation." }, { "start": 793, "end": 801, "text": " But ultimately, it simply tells you how does the state change depending on my state, current state and action that I perform." 
}, { "start": 801, "end": 808, "text": " So the reward function now is also pretty simple." }, { "start": 808, "end": 815, "text": " The tau here is a trajectory and the trajectory is simply the state and action over time." }, { "start": 815, "end": 826, "text": " So if I integrate from time zero to infinity or to the end of the episode, my reward function at each point in time, right?" }, { "start": 826, "end": 832, "text": " So I go through my episode and I get high reward, not so high and so on." }, { "start": 832, "end": 843, "text": " So the integral under this curve will be my total reward, just like we sum up the reward of individual steps in the discrete case." }, { "start": 843, "end": 850, "text": " In the continuous case, you can think of each infinitesimal time step giving you a tiny bit of reward." }, { "start": 850, "end": 855, "text": " So the entire reward is just an integral." }, { "start": 855, "end": 863, "text": " Then we go on the value function for a given state at time t." }, { "start": 863, "end": 877, "text": " So think about what this is. The value function for a state means what reward can I expect starting in this particular state and then following policy pi until the end of the episode." }, { "start": 877, "end": 888, "text": " And that here is the expectation over all trajectories that come from my policy of the reward in that trajectory." }, { "start": 888, "end": 894, "text": " So I can, you know, if I'm here, my policy now is also a distribution." }, { "start": 894, "end": 900, "text": " It can go multiple trajectories, right? And I want to have the expected value of the reward." }, { "start": 900, "end": 910, "text": " So each one of these has a reward, the expected value of the reward over all trajectories starting from state s t." }, { "start": 910, "end": 924, "text": " And again, here you say that that is the integral over the now here I have a bit of a problem because here they say t equals zero going from here and here." }, { "start": 924, "end": 944, "text": " But here the t is already here. So I believe this should be this should be t equals t prime and then t prime t prime t prime up here." }, { "start": 944, "end": 955, "text": " And t minus t prime or something like this. In any case, I think it should actually start from this state here and not from time zero." }, { "start": 955, "end": 961, "text": " But I might be missing something. I'm not the biggest integrator in the world." }, { "start": 961, "end": 967, "text": " So, you know, all right, then you have the Q function." }, { "start": 967, "end": 977, "text": " Now think of it what the Q function is in the discrete case, the Q function tells you if I'm in state s and perform action a, what is my expected reward going to be?" }, { "start": 977, "end": 995, "text": " I have to introduce some different things here. They say if I'm in state s and I act action a at time t until time h right now, you have to say how long you're going to perform the action for" }, { "start": 995, "end": 1003, "text": " until you perform the next action. Right. So h is your your lag time here until you perform the next action." }, { "start": 1003, "end": 1010, "text": " So this now I actually agree with this formulation with the integral here." }, { "start": 1010, "end": 1016, "text": " So this is going to be the integral from time t to time t plus h." }, { "start": 1016, "end": 1030, "text": " That's how long you perform the action. Your reward of performing that action, right. 
Given the state plus the value function at the end of that." }, { "start": 1030, "end": 1037, "text": " So you're here. You're in s t and you perform action a right." }, { "start": 1037, "end": 1049, "text": " And then this is your state at time t plus h and then you're here. And from there on, you could perform many, many, many actions." }, { "start": 1049, "end": 1061, "text": " But in the original notion of the Q function, the Q function tells you if I'm here and I perform this action and after that, I act according to policy pi." }, { "start": 1061, "end": 1088, "text": " What is my what is my expected reward? And there's a classic recurrence relation in reinforcement learning where you can say the Q function in s t given to a is the reward that I get from performing a in state s plus the value function at state s at the next state." }, { "start": 1088, "end": 1095, "text": " Because the value function is exactly the reward that you would get by following policy pi in that next state." }, { "start": 1095, "end": 1101, "text": " And the Q function means I perform a now and after that I perform pi." }, { "start": 1101, "end": 1104, "text": " So this is the continuous analog." }, { "start": 1104, "end": 1110, "text": " That's why you have this part here where you perform the action for each time." }, { "start": 1110, "end": 1118, "text": " After each time you just go after go with your policy and that will be the value function." }, { "start": 1118, "end": 1124, "text": " So this is the continuous formulation of the of the problem." }, { "start": 1124, "end": 1129, "text": " Right. And now they can introduce these these lagging times." }, { "start": 1129, "end": 1135, "text": " So in their diagram up here, they define these notions." }, { "start": 1135, "end": 1141, "text": " So you have your state s t right here." }, { "start": 1141, "end": 1147, "text": " Then after this time, you capture the new state." }, { "start": 1147, "end": 1159, "text": " Right. So after that time, you capture the new state and decide on an action and then you perform it for each time." }, { "start": 1159, "end": 1161, "text": " Is that correct?" }, { "start": 1161, "end": 1163, "text": " Until here." }, { "start": 1163, "end": 1175, "text": " So the the the I minus one of action is performed at this time and the I action is performed at this time." }, { "start": 1175, "end": 1178, "text": " No, that makes no sense." }, { "start": 1178, "end": 1185, "text": " So let's read it." }, { "start": 1185, "end": 1193, "text": " So this is when you capture the state and you need to time to perform to think." }, { "start": 1193, "end": 1197, "text": " Right. This is thinking." }, { "start": 1197, "end": 1202, "text": " And then you perform this action at that time." }, { "start": 1202, "end": 1204, "text": " This is the lag time now." }, { "start": 1204, "end": 1207, "text": " And you perform this action." }, { "start": 1207, "end": 1215, "text": " You want to know you want to know if I perform this action until this time here, what is what is happening?" }, { "start": 1215, "end": 1222, "text": " So this is the new Q function takes into account this thing." }, { "start": 1222, "end": 1234, "text": " It tells you if I'm in state s and I think this is thinking leads me to here." }, { "start": 1234, "end": 1238, "text": " This is the old action, right?" }, { "start": 1238, "end": 1243, "text": " This is the old action that's still happening while I observe this state." }, { "start": 1243, "end": 1254, "text": " Right. 
So it means if I do this right now and after thinking, I do this." }, { "start": 1254, "end": 1256, "text": " Right." }, { "start": 1256, "end": 1260, "text": " So I'm at state. I'm at time t." }, { "start": 1260, "end": 1263, "text": " And this is still happening." }, { "start": 1263, "end": 1274, "text": " And then after I think thinking leads me here t plus t a s." }, { "start": 1274, "end": 1277, "text": " I perform this new action." }, { "start": 1277, "end": 1280, "text": " I'm out of colors." }, { "start": 1280, "end": 1286, "text": " I perform this new action at that point until time H." }, { "start": 1286, "end": 1288, "text": " What's my Q function?" }, { "start": 1288, "end": 1302, "text": " So my Q function is going to be the integral time t where I start observing the state and start thinking until t plus t a s." }, { "start": 1302, "end": 1305, "text": " That's when I still perform the old action." }, { "start": 1305, "end": 1311, "text": " Right. So this is going to be the reward in the state given the old action." }, { "start": 1311, "end": 1314, "text": " And then at that time, I switch over to the new action." }, { "start": 1314, "end": 1315, "text": " Right." }, { "start": 1315, "end": 1322, "text": " So at that time until time H, now I perform the new action." }, { "start": 1322, "end": 1340, "text": " So this entire part here, this part until here is taking the place of this first part here in the Q function of this first part." }, { "start": 1340, "end": 1343, "text": " Right. So because before it was simply executing one action," }, { "start": 1343, "end": 1345, "text": " we didn't have this concurrency yet." }, { "start": 1345, "end": 1347, "text": " So executing the action." }, { "start": 1347, "end": 1349, "text": " And after that, it's going to be the value function." }, { "start": 1349, "end": 1352, "text": " And now it's executing two actions." }, { "start": 1352, "end": 1354, "text": " First, execute the old action." }, { "start": 1354, "end": 1357, "text": " Then once you're done thinking, execute the new action." }, { "start": 1357, "end": 1361, "text": " And then it's the value function from there on." }, { "start": 1361, "end": 1364, "text": " I hope this is clear." }, { "start": 1364, "end": 1367, "text": " It wasn't clear to me until just now as well." }, { "start": 1367, "end": 1368, "text": " All right." }, { "start": 1368, "end": 1378, "text": " So they define the Monte Carlo estimator where you can do this with just samples of the trajectories instead of expectations." }, { "start": 1378, "end": 1383, "text": " And then they define the Bellman operator, the Bellman backup operator." }, { "start": 1383, "end": 1393, "text": " Now, the Bellman backup operator is an important quantity in value based reinforcement learning because the Bellman backup operator is basically what I talked about before." }, { "start": 1393, "end": 1406, "text": " It tells you that if your policy is to always select the maximum, the action with the maximum Q value, right?" }, { "start": 1406, "end": 1407, "text": " That's what's down here." }, { "start": 1407, "end": 1419, "text": " After you do this action, then the policy you arrive at and you can give certain optimality guarantees." }, { "start": 1419, "end": 1424, "text": " But in essence, this is so-called a contraction." }, { "start": 1424, "end": 1435, "text": " So if you always do that and you calculate your Q function that way, it will mean that in the contraction is defined as if you have an operator." 
}, { "start": 1435, "end": 1446, "text": " If you have two things that are X1 and X2 that are some apart from each other, then after you apply the operator, this T here," }, { "start": 1446, "end": 1460, "text": " X1 minus T X2, they will be closer together, which basically means that the Q to Q functions of the individual states will be closer together." }, { "start": 1460, "end": 1465, "text": " And you'll converge to a single Q function." }, { "start": 1465, "end": 1470, "text": " So given enough time and enough data, you'll converge on one Q function." }, { "start": 1470, "end": 1474, "text": " There's one fixed point Q function that you'll converge to." }, { "start": 1474, "end": 1484, "text": " And you can show under assumption in classic RL that this is going to be the optimal Q function, the true, let's say, Q function." }, { "start": 1484, "end": 1491, "text": " So they first prove this and then they prove a now they go back to discrete time." }, { "start": 1491, "end": 1492, "text": " So now they were in continuous time." }, { "start": 1492, "end": 1499, "text": " They go back to discrete time, but now they have a discrete time formulation with this lag here." }, { "start": 1499, "end": 1504, "text": " And also they prove that that Bellman operator is a contraction." }, { "start": 1504, "end": 1512, "text": " So the contraction part basically means that if you perform Q learning, you're going to arrive at a solution." }, { "start": 1512, "end": 1514, "text": " That's what this means to be contraction." }, { "start": 1514, "end": 1521, "text": " But now, obviously, that solution in classic RL is going to be the optimal Q function." }, { "start": 1521, "end": 1524, "text": " But here, I actually don't know." }, { "start": 1524, "end": 1533, "text": " All right, so they try this out and they introduce one last important concept here, what they call vector to go," }, { "start": 1533, "end": 1545, "text": " which basically means that at the point where they start thinking," }, { "start": 1545, "end": 1556, "text": " where is a good thing to show this at the point where they start thinking, they give a they give the last action with." }, { "start": 1556, "end": 1566, "text": " So at this point right here, where they capture the state," }, { "start": 1566, "end": 1581, "text": " they also sort of the state contains a information about what part of the action that you started here is still outstanding." }, { "start": 1581, "end": 1586, "text": " So maybe your action was and they illustrate this down here." }, { "start": 1586, "end": 1594, "text": " Maybe your action was to move your robot arm from down here to up here." }, { "start": 1594, "end": 1598, "text": " That was your planned action at this point in time." }, { "start": 1598, "end": 1608, "text": " Now, if you are at step, if you perform the action here and here you start capturing the next state," }, { "start": 1608, "end": 1615, "text": " then you would also give this particular vector here to the to the to the agent." }, { "start": 1615, "end": 1622, "text": " So not only will you tell it, hey, by the way, my last action was a T minus one, as you would need in the Q value." }, { "start": 1622, "end": 1627, "text": " You will also say, and this much is outstanding." }, { "start": 1627, "end": 1631, "text": " This is much is where as I still have to do that much." }, { "start": 1631, "end": 1638, "text": " So basically you're saying I wanted to move my arm right here and I still have to do this part of the action." 
}, { "start": 1638, "end": 1645, "text": " Now you can see why the algorithm is able to learn much better given that information," }, { "start": 1645, "end": 1660, "text": " because otherwise it has it would have to basically infer that vector from kind of differencing the action minus the what probably happened in the meantime." }, { "start": 1660, "end": 1661, "text": " So they test this out." }, { "start": 1661, "end": 1671, "text": " And what results is the robot videos you've seen before where they say they can recover the original" }, { "start": 1671, "end": 1675, "text": " Q learning in this continuous framework." }, { "start": 1675, "end": 1682, "text": " So here on the left side, you have blocking actions and it says when it says yes here," }, { "start": 1682, "end": 1685, "text": " it is kind of the old old framework." }, { "start": 1685, "end": 1694, "text": " You see the grasp success at like 92 percent, where as if you go to non blocking actions," }, { "start": 1694, "end": 1701, "text": " but do none of the none of the concurrent information, the grasp success suffers." }, { "start": 1701, "end": 1715, "text": " But you can recover the grasp success if you if you give these concurrent information like introduce time step penalty and you give this vector to go and the information about the previous action." }, { "start": 1715, "end": 1728, "text": " And you can also see that the episode duration here is much lower when you go for the continuous actions than when you are in the old framework," }, { "start": 1728, "end": 1730, "text": " naturally, because you don't need to pause." }, { "start": 1730, "end": 1734, "text": " Right." }, { "start": 1734, "end": 1735, "text": " In this." }, { "start": 1735, "end": 1739, "text": " So this is the simulated robotics and the real world robotic grasping results." }, { "start": 1739, "end": 1749, "text": " You see kind of similar results in that if you do have blocking actions, your grasp success is higher than if you don't." }, { "start": 1749, "end": 1756, "text": " But your duration of your of your policy is cut in half." }, { "start": 1756, "end": 1760, "text": " So maybe this is a trade off worth considering." }, { "start": 1760, "end": 1763, "text": " I think this is a is pretty cool framework." }, { "start": 1763, "end": 1768, "text": " And I think there's going to be a lot of work still outstanding here." }, { "start": 1768, "end": 1776, "text": " And I invite you to check out the paper and look at their videos and their ablation studies of what's important and what not." }, { "start": 1776, "end": 1799, "text": " And with that, bye bye." } ]
5skIqoO3ku0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
OpenAI Embeddings (and Controversy?!)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "natural language processing", "mlnews", "openai", "openai embeddings", "nils reimers", "beir dataset", "beir benchmark", "text similarity", "neural embeddings", "gpt-3 embeddings", "gpt 3", "openai api", "openai gpt embeddings", "splade", "sentencebert", "neural retrieval", "neural search engine", "vector search engine", "inner product search", "semantic search engine", "gpt-3 search", "faiq dataset", "how good is openai" ]
#mlnews #openai #embeddings COMMENTS DIRECTLY FROM THE AUTHOR (thanks a lot for reaching out Arvind :) ): 1. The FIQA results you share also have code to reproduce the results in the paper using the API: https://twitter.com/arvind_io/status/1488257004783112192?s=20&t=gB3c79VEX8hGJl6WfZa2iA There's no discrepancy AFAIK. 2. We leave out 6 not 7 BEIR datasets. Results on msmarco, nq and triviaqa are in a separate table (Table 5 in the paper). NQ is part of BEIR too and we didn't want to repeat it. Finally, the 6 datasets we leave out are not readily available and it is common to leave them out in prior work too. For examples, see SPLADE v2 (https://arxiv.org/pdf/2109.10086.pdf) also evaluates on the same 12 BEIR datasets. 3. Finally, I'm now working on time travel so that I can cite papers from the future :) END COMMENTS FROM THE AUTHOR OpenAI launches an embeddings endpoint in their API, providing high-dimensional vector embeddings for use in text similarity, text search, and code search. While embeddings are universally recognized as a standard tool to process natural language, people have raised doubts about the quality of OpenAI's embeddings, as one blog post found they are often outperformed by open-source models, which are much smaller and with which embedding would cost a fraction of what OpenAI charges. In this video, we examine the claims made and determine what it all means. OUTLINE: 0:00 - Intro 0:30 - Sponsor: Weights & Biases 2:20 - What embeddings are available? 3:55 - OpenAI shows promising results 5:25 - How good are the results really? 6:55 - Criticism: Open models might be cheaper and smaller 10:05 - Discrepancies in the results 11:00 - The author's response 11:50 - Putting things into perspective 13:35 - What about real world data? 14:40 - OpenAI's pricing strategy: Why so expensive? 
Sponsor: Weights & Biases https://wandb.me/yannic Merch: store.ykilcher.com ERRATA: At 13:20 I say "better", it should be "worse" References: https://openai.com/blog/introducing-text-and-code-embeddings/ https://arxiv.org/pdf/2201.10005.pdf https://beta.openai.com/docs/guides/embeddings/what-are-embeddings https://beta.openai.com/docs/api-reference/fine-tunes https://twitter.com/Nils_Reimers/status/1487014195568775173?s=20&t=NBF7D2DYi41346cGM-PQjQ https://medium.com/@nils_reimers/openai-gpt-3-text-embeddings-really-a-new-state-of-the-art-in-dense-text-embeddings-6571fe3ec9d9 https://mobile.twitter.com/arvind_io/status/1487188996774002688 https://twitter.com/gwern/status/1487096484545847299 https://twitter.com/gwern/status/1487156204979855366 https://twitter.com/Nils_Reimers/status/1487216073409716224 https://twitter.com/gwern/status/1470203876209012736 https://www.reddit.com/r/MachineLearning/comments/sew5rl/d_it_seems_openais_new_embedding_models_perform/ https://mobile.twitter.com/arvind_io/status/1488257004783112192 https://mobile.twitter.com/arvind_io/status/1488569644726177796 Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello everyone, welcome to a special edition of ML News, we have something to discuss. OpenAI just released an embeddings endpoint to their API. This is accompanied by a blog post called Introducing Text and Code Embeddings in the OpenAI API. Now, after the, let's call them, big successes of GPT-3 and Codex, which is the model that powers GitHub's Copilot, OpenAI pushes forward into the domain of embeddings. Hold on, this video is sponsored by Weights & Biases. Weights & Biases is your one-stop shop for all your machine learning needs. It will track your experiments with a single line of code, automatically uploading all your logs, all your configurations, everything to your cloud. It will automatically grab all the output, all the metrics, all the configurations of your experiments, and store that in one neat location. So you can see your experiments, you can track them wherever they run, you can compare among the experiments, but you can go further: you can then tune your hyperparameters according to the results of those experiments. And all of this is done automatically in a distributed way. You can literally sit on your toilet with your smartphone and tune your hyperparameters and start new experiments. But it's not only experiment tracking and hyperparameter tuning. Weights & Biases has tools for the entire pipeline of machine learning research, from the initial idea up until the deployment, and beyond that, when you actually want to track what you've deployed. Weights & Biases has cool methods to track all of your datasets and their dependencies on each other, as well as your models and all kinds of other artifacts that you might produce, and very powerful visualizations for all the inputs and outputs of your pipelines, as well as the models themselves. All of this runs in the cloud, but if you're concerned about privacy, there are options to self-host. The system is free for personal use and for academics, and they have great plans for enterprises; small teams, large teams, doesn't matter. So thank you very much, Weights & Biases, for sponsoring this video. If you don't know them yet, absolutely check them out. It's free, and it'll make your life a whole lot easier. Now let's get into the video. So, briefly said, an embedding model associates a piece of text with a fixed-size vector. The fixed-size vector can then be used to do semantic similarity search in high-dimensional spaces, among other things. They have a toy depiction of these embeddings right here. Now, as this clearly shows, furries and football fans are in fact linearly separable. So, you know, thanks OpenAI. In order to get these embeddings, you'd interact with the OpenAI API as you would otherwise: you instantiate it, you call it, you get back a vector. They have three different modes available. One is for text similarity, which essentially means that you can put in pieces of text, and if the vectors are close together, that means the texts are in some way similar. The second one is for text search, where they have a separate encoder for documents, which are, I guess, longer pieces of content, and for queries, which are shorter pieces of content. And the idea is that you would rank document vectors against a query vector, and then whichever ones fall closest together, those would be the relevant documents to retrieve for that query.
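To make that concrete, here is a minimal sketch of how such a similarity search typically works once you have the vectors. The `get_embedding` function is a hypothetical stand-in for whatever client call returns the vector; I'm not reproducing OpenAI's exact client signature here.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # embeddings are compared by angle; magnitude carries little meaning here
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_documents(query_vec: np.ndarray, doc_vecs: list) -> list:
    # return document indices sorted from most to least similar to the query
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

# `get_embedding` is a hypothetical stand-in for the actual API call:
# query_vec = get_embedding("how do I cancel my order", model="query-encoder")
# doc_vecs  = [get_embedding(d, model="doc-encoder") for d in documents]
# best = rank_documents(query_vec, doc_vecs)[0]
```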
It's a bit similar to text similarity; the differences are in the length of the things that you put into the models, and also a little bit in the semantics, although I don't think there's too much of a difference. The last one is code search, which is essentially the same as text search, but for code. What's also to be said is that these come in different sizes, Ada being the smallest and DaVinci being the largest. DaVinci is the original 175-billion-parameter GPT-3 model size. They do release a paper along with it on how they trained this thing and what the results are. And the brief summary is that on various datasets and various tasks, they do beat previous state-of-the-art results. For example, in linear probe classification, which is where you take embeddings and then train just a small linear layer on top with a labeled dataset, they outperform the previous state of the art. They also do so in text search tasks on the BEIR retrieval benchmark. And lastly, they outperform on code search quite a bit. The paper goes into more detail on how the model was trained; they explain that it is a contrastive loss that they've used. Essentially, what you want to do is encode pieces of text through the encoder, and then make similar things closer to each other and negatives, in this case in-batch negatives, further apart from each other. This does require quite large batch sizes to actually get an accurate distribution of negatives. But you know, it's OpenAI, so they can do it. As I said, their models go from 300 million parameters for the smallest to 175 billion for the largest, with the embedding dimensions going from 1024 up to a ridiculous 12,288. Now you might think the larger dimension is a good thing, but this is not necessarily the case right here; this is one of the criticisms that's going to come up in a short while. You can also see right here that, yeah, indeed, the batch size is pretty large. The paper itself goes into a little bit more detail on the results, and here we kind of see the first cracks in what people are now saying about this model, namely that it doesn't seem to perform that well. Now, while the average results that they have presented, mostly from their extra-large models, do outperform other things, very often they don't outperform them by that much. And if you actually look at selected tasks, then it's not even clear they're the best model. They also seem to compare sometimes to quite outdated baselines; as you can see, these papers are sometimes from 2021, and last I checked, it's 2022. So you know, OpenAI, get your crap in order. Now, by far the biggest controversial point right here is the price. As they say in their documentation, encoding 1000 tokens with a DaVinci model will cost you 60 cents. Now, 60 cents doesn't sound like a lot, but corpora often have a lot more than 1000 tokens. Remember that tokens are not even words, they're kind of subwords. And that means that this model is quite expensive. Now, this gets drastically cheaper if you go down to the smaller models: as you can see, the Curie embeddings are already 10 times cheaper, and Babbage and Ada another factor of eight or so. So pretty shortly after, this Twitter thread here by Nils Reimers blew up, who says: GPT-3 embeddings by OpenAI were announced this week. I was excited and tested them on 20 datasets. Sadly, they are worse than open models that are 1000 times smaller, and running the OpenAI models can be 1 million times more expensive.
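As a rough illustration of that training objective, here is a generic in-batch-negatives contrastive loss in PyTorch. This is a sketch of the standard technique, not the paper's exact implementation; the temperature value and the symmetric form are assumptions.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(text_emb: torch.Tensor,
                              pair_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """text_emb, pair_emb: [B, D] embeddings of B positive pairs.

    Row i of text_emb should match row i of pair_emb; the other
    B - 1 rows in the batch act as negatives.
    """
    x = F.normalize(text_emb, dim=-1)
    y = F.normalize(pair_emb, dim=-1)
    logits = x @ y.t() / temperature        # [B, B]; diagonal = positives
    labels = torch.arange(x.size(0), device=x.device)
    # symmetric cross-entropy over rows and columns
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
```

The larger the batch, the more negatives each positive is contrasted against, which is why large batch sizes matter for this loss.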
This is accompanied by a Medium post called "OpenAI GPT-3 Text Embeddings: Really a New State of the Art in Dense Text Embeddings?", where he elaborates on a lot of the points that I've mentioned previously: that they seem not to compare to the most recent and best-performing baselines, that their results don't seem to be that far ahead of the competition, especially if you consider the smaller models, and also that they made weird selections of the datasets they evaluated on. For example, the BEIR benchmark has 18 datasets, and they have chosen to just test on 11 of them and report average performance across those 11. So Nils assembled his own benchmark of tasks and tested these models against some openly available models. And the most shocking conclusion is that it seems to be that, for some tasks at least, you can get much better performance with the open models at astonishingly low cost. As you can see in this table here, which lists performance against the cost of encoding 1 million documents: even the smallest OpenAI model costs $800, and it goes up to $60,000 for the largest one. On the open models, the most expensive one tested right here will cost you $6.80, and the best-performing one $2.40. Now, it is to be said that these prices are probably presented such that the largest possible shock effect is achieved. Very often when he mentions prices, he says that, well, this is the cost on something like a preemptible T4 GPU, which, I guess, first of all comes with the difficulty of being preemptible, which you don't get with OpenAI, and second of all, good luck finding quota for a T4 anywhere on the planet right now. But point taken, the open models can be significantly cheaper. The blog post also explores the results from the paper itself a bit more, again pointing out that the advantages aren't that large, something like 0.1 F1 score, and oftentimes the OpenAI models are even behind the open models. Another point he makes is that the high dimensionality of the embeddings might actually work against you if you're looking to implement anything, because higher-dimensional vectors, if you want to build a search index, for example, require a much more memory-intensive index structure, which will cost you more money. And even disregarding money, searching through a higher-dimensional space can be a lot slower than searching through a low-dimensional space. And he points out that it's not really an option to compress these high-dimensional embeddings using something like PCA, as that deteriorates their performance quite quickly. Now, that claim is just made right here, but I think he must have some experience or references from somewhere. I guess the same would hold for downsampling methods such as random projections, but I don't know; I guess that's still out there to try. Now, it is to be said that when the author here tried to use the OpenAI API to reproduce the numbers in the paper, it resulted in different numbers, which makes one wonder: did they change the model since the paper, or maybe is there something wrong with this evaluation? Curiously, if I read this correctly, the numbers of the current API are actually better than the numbers in the paper, which is weird. But also, people have pointed out minor issues that can creep in and really destroy your results, such as Gwern right here pointing out that you cannot have newlines in your embedding queries, otherwise the embeddings become almost unusable, which is a thing that OpenAI discusses in their API documentation.
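To put those two cost arguments, API price and index memory, into numbers, here is a small back-of-the-envelope sketch. The per-document token count is an assumption; the prices are the ones quoted above.

```python
def embedding_cost_usd(n_docs: int, avg_tokens: float,
                       usd_per_1k_tokens: float) -> float:
    # API cost scales linearly with the total number of tokens embedded
    return n_docs * avg_tokens * usd_per_1k_tokens / 1000

def flat_index_gb(n_vectors: int, dim: int, bytes_per_float: int = 4) -> float:
    # a brute-force float32 index stores n * d * 4 bytes
    return n_vectors * dim * bytes_per_float / 1e9

# 1M documents at an assumed ~100 tokens each, DaVinci at $0.60 per 1k tokens:
print(embedding_cost_usd(1_000_000, 100, 0.60))   # 60000.0 -- the $60k above

# index memory for 10M vectors: 12288 dims vs a typical 768-dim open model
print(flat_index_gb(10_000_000, 12288))           # ~491.5 GB
print(flat_index_gb(10_000_000, 768))             # ~30.7 GB
```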
However, Reimers responded to this and said that yes, indeed, he had replaced the newlines; he'd actually used the exact code that he found in an OpenAI website snippet. So these results do look pretty legit. In fact, one of the main authors of the paper has put out a response, I guess. I mean, it's not responding to anything; it's just a Twitter thread. But it comes kind of in the light of these criticisms about how they evaluate their embedding models in the OpenAI API. It goes into more detail on the evaluation, mainly reciting points from the paper, but being a little bit more honest that, yeah, they don't always achieve the best results possible, whereas the blog post just shows average numbers and says, well, we're state of the art pretty much everywhere. But if you look into the details a little bit more, the picture becomes a bit more murky. I'll link all the threads here in the description. I think one point to be mentioned right here, which is made by the author here and also by the blog post, is that... hello, this is Yannic from the future. I've waited on this story a bit because we have some new developments. The authors quasi-responded again; they didn't really bring anything new to the table, but they put the things being said into context, in that they do point out that on many of the information retrieval tasks, so the search tasks, the embeddings are actually performing really well, and that zero-shot, keep that in mind, including, for example, on the FiQA dataset, where they outperform something like BM25 or other models by a wide margin. On top of that, they also put the cost in perspective, saying that for this example dataset, and this is a fairly, let's say, average dataset, the cost of embedding the documents and the queries is $80. So the blog post always compared the cost of embedding so-and-so many millions of tokens, but if you go to an actual dataset, yes, the embeddings are still going to be more expensive, but the absolute cost might actually not be as high as the blog post makes it seem. Of course, that depends entirely on how large your dataset is, but spending 80 bucks for a 62% relative improvement seems to be a nice deal. So it seems to really depend on the dataset at hand, and you might have to try it out on a subset of your data. This was then greeted by a response to that response, saying that, yes, but a much smaller and much cheaper model is just 0.1 of a score point worse than the largest GPT-3 model. Then Nils asked why the evaluation was done on just 11 out of the 18 datasets. We don't have a response yet to that, but it's been a week, so I don't expect we'll get one. And that is where it stands currently. Back to Yannic in the past. In their experience, these embeddings seem to do quite well when you have to transfer them to a new domain. A lot of these openly available models are trained on specific datasets, you know, with specific benchmarks in mind and all of that. So they kind of come from the academic world, for the academic world, and therefore might overperform even on a different dataset, since it is still a clean dataset that has been assembled to be a benchmark, and so on. Whereas what OpenAI is saying is that if you take these embeddings and actually go to the real world, their customers see big improvements in their own applications. Now, of course, there's no way to verify that.
And the blog post lists three examples of customers saying, oh look, they are able to find six to ten times more relevant examples for something, or they pumped their performance from 64% to 89%. Again, there's no way to verify that, but I wouldn't actually be surprised if that were the case. Real-world data is a lot messier than any of the academic datasets, and therefore, I guess, only trying it out will actually tell you whether it's useful or not. I do have to wonder about the price, though. There are essentially two possibilities. One: OpenAI has done market research and so on, and this is what they think people will pay for this; this is how much value they think they bring with their API. Or, on the other hand, this is kind of their operating cost plus some margin to make the shareholders happy. Now, I really can't tell. Apparently they do have customers, so someone must be willing to pay all of this. On the other hand, it does seem outrageously expensive for such a small improvement, at least on these academic datasets. So let me know what you think: is this even profitable for OpenAI? Does anyone have any estimates on what it costs them to develop these new models and to keep them running? It must be a massive endeavor. In any case, that was it for this special episode of ML News. Merch is still available, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 11, "text": " Hello, everyone, welcome to a special edition of ML news, we have something to discuss." }, { "start": 11, "end": 15.24, "text": " Open AI just released an embeddings endpoint to their API." }, { "start": 15.24, "end": 21.52, "text": " This is a company by blog post called introducing text and code embeddings in the Open AI API." }, { "start": 21.52, "end": 28, "text": " Now after the let's call them big successes of GPT three, and codecs, which is the model" }, { "start": 28, "end": 33.4, "text": " that powers GitHub scope pilot, Open AI pushes forward into the domain of embeddings." }, { "start": 33.4, "end": 38.2, "text": " Hold on, this video is sponsored by weights and biases, weights and biases is your one" }, { "start": 38.2, "end": 43.56, "text": " stop shop for all your machine learning needs, it will track your experiments with a single" }, { "start": 43.56, "end": 49.120000000000005, "text": " line of code will upload automatically all your logs, all your configurations, everything" }, { "start": 49.120000000000005, "end": 55.04, "text": " to your cloud, it will automatically grab all the output, all the metrics, all the configurations" }, { "start": 55.04, "end": 59.32, "text": " of your experiments, and store that in one neat location." }, { "start": 59.32, "end": 64.18, "text": " So you can see your experiments, you can track them wherever they run, you can compare among" }, { "start": 64.18, "end": 68.7, "text": " the experiments, but you can go further, you can then tune your hyper parameters according" }, { "start": 68.7, "end": 70.98, "text": " to the results of those experiments." }, { "start": 70.98, "end": 75.56, "text": " And all of this is done automatically in a distributed way, you can literally sit on" }, { "start": 75.56, "end": 80.94, "text": " your toilet on your smartphone and tune your hyper parameters and start new experiments." }, { "start": 80.94, "end": 85.64, "text": " But it's not only experiment tracking and hyper parameter tuning weights and biases" }, { "start": 85.64, "end": 91.22, "text": " has tools for the entire pipeline of machine learning research from the initial idea up" }, { "start": 91.22, "end": 95.96, "text": " until the deployment and beyond that when you actually want to track what you've deployed" }, { "start": 95.96, "end": 100.82, "text": " weights and biases has cool methods to track all of your data set and their dependencies" }, { "start": 100.82, "end": 104.84, "text": " to each other, as well as your models and all kinds of other artifacts that you might" }, { "start": 104.84, "end": 110.82, "text": " produce a very powerful visualizations for all the inputs and outputs of your pipelines," }, { "start": 110.82, "end": 112.66, "text": " as well as the models themselves." }, { "start": 112.66, "end": 114.33999999999999, "text": " All of this runs in the cloud." }, { "start": 114.33999999999999, "end": 119.03999999999999, "text": " But if you're concerned about privacy, there are options to self host the system is free" }, { "start": 119.03999999999999, "end": 124.72, "text": " for personal use and for academics and they have great plans for enterprises, small teams," }, { "start": 124.72, "end": 126.36, "text": " large teams doesn't matter." }, { "start": 126.36, "end": 129.48, "text": " So thank you very much weights and biases for sponsoring this video." }, { "start": 129.48, "end": 132.48, "text": " If you don't know them yet, absolutely check them out." 
}, { "start": 132.48, "end": 135.22, "text": " It's free, it'll make your life a whole lot easier." }, { "start": 135.22, "end": 140.62, "text": " Now let's get into the video." }, { "start": 140.62, "end": 146.96, "text": " So briefly said an embedding model associates a piece of text with a fixed size vector." }, { "start": 146.96, "end": 152.08, "text": " The fixed size vector can then be used to do semantic similarity search in high dimensional" }, { "start": 152.08, "end": 153.88, "text": " spaces among other things." }, { "start": 153.88, "end": 158.16, "text": " They have a toy depiction of these embeddings right here." }, { "start": 158.16, "end": 164.94, "text": " Now as this clearly shows, furries and football fans are in fact linearly separable." }, { "start": 164.94, "end": 167.1, "text": " So you know, thanks open AI." }, { "start": 167.1, "end": 171.92, "text": " In order to get these embeddings, you'd interact with the open API, as you would else you'd" }, { "start": 171.92, "end": 176.85999999999999, "text": " instantiate it, you call it you get back a vector, they have three different modes available." }, { "start": 176.85999999999999, "end": 181.54, "text": " One is for text similarity, which essentially means that you can put in pieces of text." }, { "start": 181.54, "end": 186.26, "text": " And if the vectors are close together, that means the text are in some way similar." }, { "start": 186.26, "end": 190.76, "text": " The second one is for text search where they have a separate encoder for documents, which" }, { "start": 190.76, "end": 197.01999999999998, "text": " are, I guess, longer pieces of content, and queries, which are shorter pieces of content." }, { "start": 197.02, "end": 202.5, "text": " And the idea is that you would rank document vectors against query vector, and then whichever" }, { "start": 202.5, "end": 207.86, "text": " ones fall closest together, those would be the relevant documents to retrieve for that" }, { "start": 207.86, "end": 208.86, "text": " query." }, { "start": 208.86, "end": 212.84, "text": " It's a bit similar to text similarity, the differences are in the length of the things" }, { "start": 212.84, "end": 216.74, "text": " that you put into the models, and also a little bit of the semantics, although I don't think" }, { "start": 216.74, "end": 218.66000000000003, "text": " there's too much of a difference." }, { "start": 218.66000000000003, "end": 224.5, "text": " The last one is code search, which is essentially the same as text search for code." }, { "start": 224.5, "end": 229.3, "text": " What's also to be said is that these come in different sizes, Ada being the smallest," }, { "start": 229.3, "end": 236.82, "text": " and DaVinci being the largest DaVinci is the original 175 billion parameter GPT three model" }, { "start": 236.82, "end": 242.22, "text": " size, they do release a paper along with it on how they train this thing and what the" }, { "start": 242.22, "end": 243.3, "text": " results are." 
}, { "start": 243.3, "end": 248.82, "text": " And the brief summary is that in various data sets and various tasks, they do beat previous" }, { "start": 248.82, "end": 253.74, "text": " state of the art results, for example, in linear probe classification, which is where" }, { "start": 253.74, "end": 258.78000000000003, "text": " you take embeddings, and then you train just a small linear layer on top with a label data" }, { "start": 258.78000000000003, "end": 263.86, "text": " set, they outperform previous state of the art, they also do so in text search tasks" }, { "start": 263.86, "end": 266.5, "text": " in the buyer retrieval benchmark." }, { "start": 266.5, "end": 269.54, "text": " And lastly, they outperform on code search quite a bit." }, { "start": 269.54, "end": 273.22, "text": " The paper goes into more details on how the model was trained, they explained that it" }, { "start": 273.22, "end": 275.86, "text": " is a contrastive loss that they've used." }, { "start": 275.86, "end": 280.74, "text": " Essentially, what you want to do is you want to encode pieces of text through the encoder," }, { "start": 280.74, "end": 286.08, "text": " and then make similar things closer to each other and negatives, in this case, in batch" }, { "start": 286.08, "end": 288.68, "text": " negatives further apart from each other." }, { "start": 288.68, "end": 294.06, "text": " This does require quite large batch sizes to actually get an accurate distribution of" }, { "start": 294.06, "end": 295.06, "text": " negatives." }, { "start": 295.06, "end": 298.46000000000004, "text": " But you know, it's open AI, so they can do it." }, { "start": 298.46000000000004, "end": 303.7, "text": " As I said, their models go from 300 million parameters for the smallest to 175 billion" }, { "start": 303.7, "end": 312.42, "text": " for the largest with the embedding dimensions going from 1024 up to a ridiculous 12,288." }, { "start": 312.42, "end": 316.21999999999997, "text": " Now you might think the larger dimension is a good thing." }, { "start": 316.21999999999997, "end": 319.21999999999997, "text": " But this is not necessarily the case right here." }, { "start": 319.21999999999997, "end": 323.09999999999997, "text": " This is one of the criticisms that's going to come up in a short while." }, { "start": 323.09999999999997, "end": 327.38, "text": " You can also see right here that yeah, indeed, the batch size is pretty large, the paper" }, { "start": 327.38, "end": 331.78, "text": " itself goes into a little bit more detail into the results." }, { "start": 331.78, "end": 338.85999999999996, "text": " And here we kind of see the first scratches in what people are now saying about this model," }, { "start": 338.85999999999996, "end": 342.73999999999995, "text": " namely that it doesn't seem to perform that well." }, { "start": 342.73999999999995, "end": 347.41999999999996, "text": " Now while these average results that they have presented, mostly from their extra large" }, { "start": 347.41999999999996, "end": 353.82, "text": " models do outperform other things is very often that they don't outperform them by" }, { "start": 353.82, "end": 354.82, "text": " that much." }, { "start": 354.82, "end": 359.21999999999997, "text": " And if you actually look in selected tasks, then it's not even clear they're the best" }, { "start": 359.21999999999997, "end": 360.21999999999997, "text": " model." }, { "start": 360.22, "end": 363.70000000000005, "text": " They seem to compare sometimes to quite outdated baselines." 
}, { "start": 363.70000000000005, "end": 367.38000000000005, "text": " As you can see, these papers are sometimes from 2021." }, { "start": 367.38000000000005, "end": 369.98, "text": " And last I checked, it's 2022." }, { "start": 369.98, "end": 373.70000000000005, "text": " So you know, opening, I get your crap in order." }, { "start": 373.70000000000005, "end": 379.46000000000004, "text": " Now by far the biggest controversial point right here is the price." }, { "start": 379.46000000000004, "end": 384.68, "text": " As they say in their documentation, encoding 1000 tokens with a DaVinci model will cost" }, { "start": 384.68, "end": 385.94000000000005, "text": " you 60 cents." }, { "start": 385.94, "end": 392.78, "text": " Now 60 cents doesn't sound like a lot, but corpora often have a lot more than 1000 tokens." }, { "start": 392.78, "end": 397.56, "text": " Remember that tokens are not even words, they're kind of sub words." }, { "start": 397.56, "end": 401.54, "text": " And that means that this model is quite expensive." }, { "start": 401.54, "end": 405.78, "text": " Now this gets drastically cheaper if you go down to the smaller models, as you can see," }, { "start": 405.78, "end": 411.65999999999997, "text": " the query embeddings are already 10 times smaller and Babbage and Ada another factor" }, { "start": 411.65999999999997, "end": 413.32, "text": " of eight or so." }, { "start": 413.32, "end": 419.78, "text": " So pretty shortly, this Twitter thread here blew up by Niels Reimers, who says GPT-3 embeddings" }, { "start": 419.78, "end": 424.58, "text": " by OpenAI was announced this week, I was excited and tested them on 20 datasets." }, { "start": 424.58, "end": 431.42, "text": " Sadly, they are worse than open models that are 1000 times smaller and running open AI" }, { "start": 431.42, "end": 434.78, "text": " models can be at 1 million times more expensive." }, { "start": 434.78, "end": 439.78, "text": " This is accompanied by a medium post called open AI GPT-3 text embeddings, really a new" }, { "start": 439.78, "end": 444.94, "text": " state of the art in dense text embeddings, where he leverages a lot of these points that" }, { "start": 444.94, "end": 451.21999999999997, "text": " I've said previously, like they seem to not compare to the most recent and most performing" }, { "start": 451.21999999999997, "end": 457.53999999999996, "text": " baselines and their results don't seem to be that far ahead of the competition, especially" }, { "start": 457.53999999999996, "end": 463.78, "text": " if you consider the smaller models and also that they did weird selections of data sets" }, { "start": 463.78, "end": 464.85999999999996, "text": " that they've trained on." }, { "start": 464.86, "end": 470.5, "text": " For example, the buyer benchmark has 18 data sets and they have chosen to just test on" }, { "start": 470.5, "end": 474.58000000000004, "text": " 11 of them and report average performance across those 11." }, { "start": 474.58000000000004, "end": 481.1, "text": " So Niels assembled his own benchmark of tasks and tested these models against some openly" }, { "start": 481.1, "end": 482.44, "text": " available models." }, { "start": 482.44, "end": 487.3, "text": " And the most shocking conclusion is that it seems to be that for some tasks, at least," }, { "start": 487.3, "end": 493.5, "text": " you can get much better performance with the open models at astonishingly low cost." 
}, { "start": 493.5, "end": 498.14, "text": " As you can see in this table here, this lists performance against the cost of encoding 1" }, { "start": 498.14, "end": 505.4, "text": " million documents, which even for the smallest open AI model costs $800 goes up to $60,000" }, { "start": 505.4, "end": 506.7, "text": " for the largest one." }, { "start": 506.7, "end": 512.94, "text": " And on the open models, well, the most expensive tested right here will cost you $6.80 and" }, { "start": 512.94, "end": 514.9, "text": " the best performing one $2.40." }, { "start": 514.9, "end": 521.22, "text": " Now it is to be said that these prices are probably made such that the largest possible" }, { "start": 521.22, "end": 523.48, "text": " shock effect is achieved." }, { "start": 523.48, "end": 528.46, "text": " Very often when he mentions prices, he says that, well, this is the cost of like a preemptable" }, { "start": 528.46, "end": 534.74, "text": " t4 GPU, which I guess first of all, you get the difficulty of being preemptable, which" }, { "start": 534.74, "end": 536.5, "text": " you don't get with open AI." }, { "start": 536.5, "end": 541, "text": " And second of all, good luck finding quota for a t4 anywhere on the planet right now." }, { "start": 541, "end": 544.9200000000001, "text": " But point taken, the open models can be significantly cheaper." }, { "start": 544.9200000000001, "end": 550.38, "text": " And the blog post explores the results from the paper itself also a bit more, again, pointing" }, { "start": 550.38, "end": 553.0600000000001, "text": " out that the advantages aren't that much." }, { "start": 553.06, "end": 558.9799999999999, "text": " And something like point one f1 score, and oftentimes even behind the open models." }, { "start": 558.9799999999999, "end": 563.66, "text": " Another point he makes is that the high dimensionality of the embeddings might actually work against" }, { "start": 563.66, "end": 568.38, "text": " you if you're looking to implement anything, because higher dimensional vectors, if you" }, { "start": 568.38, "end": 572.7399999999999, "text": " want to build a search index, for example, they require a much more memory intensive" }, { "start": 572.7399999999999, "end": 575.64, "text": " index structure, which will cost you more money." }, { "start": 575.64, "end": 580.9399999999999, "text": " And even disregarding money, searching through a higher dimensional space can be a lot slower" }, { "start": 580.94, "end": 583.2600000000001, "text": " than searching through a low dimensional space." }, { "start": 583.2600000000001, "end": 587.2600000000001, "text": " And he points out that is not really an option to compress these high dimensional embeddings," }, { "start": 587.2600000000001, "end": 592.32, "text": " they are using something like PCA, as that deteriorates their performance quite quickly." }, { "start": 592.32, "end": 597.1400000000001, "text": " Now the claim is just made right here, but I think he must have some experience or references" }, { "start": 597.1400000000001, "end": 598.1400000000001, "text": " from somewhere." }, { "start": 598.1400000000001, "end": 602.72, "text": " So I guess that would also count for down sampling methods such as random projections." }, { "start": 602.72, "end": 606.0600000000001, "text": " But I don't know, I guess that's still open out there to try." 
}, { "start": 606.0600000000001, "end": 610.5400000000001, "text": " Now it is to be said that when the author here tried to use the open AI API to reproduce" }, { "start": 610.54, "end": 616.2199999999999, "text": " the numbers in the paper, it resulted in different numbers, which makes one wonder, did they" }, { "start": 616.2199999999999, "end": 618.42, "text": " change the model since the paper?" }, { "start": 618.42, "end": 621.8199999999999, "text": " Or maybe is there something wrong with this evaluation?" }, { "start": 621.8199999999999, "end": 627.8199999999999, "text": " Now curiously, if I read this correctly, actually, the numbers of the current API used are better" }, { "start": 627.8199999999999, "end": 631.38, "text": " than the numbers that are in the paper, which is weird." }, { "start": 631.38, "end": 636.06, "text": " But also people have pointed out minor issues that can creep in and really destroy your" }, { "start": 636.06, "end": 641.28, "text": " results, such as Gwern right here pointing out that you cannot have new lines in your" }, { "start": 641.28, "end": 646.8599999999999, "text": " embedding queries, otherwise the embeddings become almost unusable, which is a thing that" }, { "start": 646.8599999999999, "end": 650.4599999999999, "text": " open AI discusses in their API documentation." }, { "start": 650.4599999999999, "end": 655.3, "text": " However, Reimer's responded to this and said that yes, indeed, he had replaced the new" }, { "start": 655.3, "end": 660.3, "text": " lines, he'd actually use the exact code that he found in an open AI website snippet." }, { "start": 660.3, "end": 662.4599999999999, "text": " So these results do look pretty legit." }, { "start": 662.46, "end": 667.74, "text": " In fact, one of the main authors of the paper has put out a response, I guess." }, { "start": 667.74, "end": 669.7800000000001, "text": " I mean, it's not responding to anything." }, { "start": 669.7800000000001, "end": 671.7, "text": " It's just a Twitter thread." }, { "start": 671.7, "end": 677.58, "text": " But it comes kind of in the light of these criticisms about how they evaluate their embedding" }, { "start": 677.58, "end": 679.96, "text": " models in open AI API." }, { "start": 679.96, "end": 685.5, "text": " This goes into more detail on the evaluation, mainly reciting points from the paper, but" }, { "start": 685.5, "end": 691.6600000000001, "text": " being a little bit more, yeah, we don't always achieve the best results possible than the" }, { "start": 691.66, "end": 696.5799999999999, "text": " blog post is because the blog post just shows average numbers and says, well, we're state" }, { "start": 696.5799999999999, "end": 698.5, "text": " of the art pretty much everywhere." }, { "start": 698.5, "end": 703.18, "text": " But if you look into detail a little bit more, the picture becomes a bit more murky." }, { "start": 703.18, "end": 705.6999999999999, "text": " I'll link all the threads here in the description." }, { "start": 705.6999999999999, "end": 709.92, "text": " I think one point to be mentioned right here, which is made by the author here and also" }, { "start": 709.92, "end": 714.3, "text": " by the blog post is that hello, this is Yannick from the future." 
}, { "start": 714.3, "end": 719.78, "text": " I've waited on this story a bit because we have some new development, the authors quasi" }, { "start": 719.78, "end": 725.3399999999999, "text": " responded again and not really brought anything new to the table, but just put sort of the" }, { "start": 725.3399999999999, "end": 731.6999999999999, "text": " things being said into context here in that they do point out that on many of the information" }, { "start": 731.6999999999999, "end": 737.14, "text": " retrieval, so the search tasks, the embeddings are actually performing really well." }, { "start": 737.14, "end": 741.98, "text": " And that on zero shot, keep that in mind, including, for example, the FIQA data set" }, { "start": 741.98, "end": 747.74, "text": " where they outperform something like BM25 or other models by a wide margin." }, { "start": 747.74, "end": 752.22, "text": " On top of that, they also put the cost in perspective saying that for this example data" }, { "start": 752.22, "end": 757.1800000000001, "text": " set, and this is a fairly, let's say average data set, the cost of embedding the documents" }, { "start": 757.1800000000001, "end": 758.98, "text": " and the queries is $80." }, { "start": 758.98, "end": 764.54, "text": " So the blog post always compared costs of embedding X many millions of tokens." }, { "start": 764.54, "end": 768.98, "text": " But if you go to actual data set, yes, the embeddings are still going to be more expensive," }, { "start": 768.98, "end": 773.94, "text": " but the absolute cost might actually not be as much as the blog post might seem." }, { "start": 773.94, "end": 777.62, "text": " Of course, that depends entirely on how large your data set is." }, { "start": 777.62, "end": 783.62, "text": " But spending 80 bucks for a 62% relative improvement seems to be a nice deal." }, { "start": 783.62, "end": 788.14, "text": " So it seems to really depend on the data set at hand, and you might have to try it out" }, { "start": 788.14, "end": 789.98, "text": " on a subset of your data." }, { "start": 789.98, "end": 796.72, "text": " This was then greeted by a response response, saying that, yes, but the much smaller model" }, { "start": 796.72, "end": 803.26, "text": " and much cheaper model is just point one of a score better than the largest GPT-3 model." }, { "start": 803.26, "end": 807.86, "text": " So Niels asked why the evaluation was just done on 11 out of the 18 data sets, we don't" }, { "start": 807.86, "end": 812.5, "text": " have a response yet to that, but it's been a week, so I don't expect we'll get one." }, { "start": 812.5, "end": 816.14, "text": " And that is where it stands currently back to Yannick in the past." }, { "start": 816.14, "end": 821.54, "text": " In their experience, these embeddings seem to do quite well when you have to transfer" }, { "start": 821.54, "end": 822.7, "text": " them to a new domain." }, { "start": 822.7, "end": 828.58, "text": " A lot of these openly available models, they are trained on specific data sets, you know," }, { "start": 828.58, "end": 831.4, "text": " with specific benchmarks in mind and all of that." }, { "start": 831.4, "end": 835.78, "text": " So they kind of come from the academic world for the academic world, and therefore might" }, { "start": 835.78, "end": 840.9, "text": " overperform even on a different data set, it is still a clean data set that has been" }, { "start": 840.9, "end": 844.02, "text": " assembled kind of to be a benchmark and so on." 
}, { "start": 844.02, "end": 847.6999999999999, "text": " While what OpenAI is saying that if we take these embeddings and actually go to the real" }, { "start": 847.6999999999999, "end": 852.8199999999999, "text": " world, our customers see big improvements in their own applications." }, { "start": 852.8199999999999, "end": 855.9399999999999, "text": " Now, of course, there's no way to verify that." }, { "start": 855.9399999999999, "end": 860.9, "text": " And the blog posts lists three examples of customers saying, Oh, look, they are able" }, { "start": 860.9, "end": 866.04, "text": " to find like six to 10 times more relevant examples for something or they pump their" }, { "start": 866.04, "end": 869.26, "text": " performance from 64% to 89%." }, { "start": 869.26, "end": 873.86, "text": " Again, there's no way to verify that but I wouldn't actually be surprised if that is" }, { "start": 873.86, "end": 874.86, "text": " the case." }, { "start": 874.86, "end": 879.26, "text": " Real world data is a lot more messy than any of the academic data sets." }, { "start": 879.26, "end": 883.78, "text": " And therefore, I guess only trying it out will actually tell you whether it's useful" }, { "start": 883.78, "end": 884.78, "text": " or not." }, { "start": 884.78, "end": 886.54, "text": " I do have to wonder about the price though." }, { "start": 886.54, "end": 889.3, "text": " Like there are two possibilities essentially." }, { "start": 889.3, "end": 892.0999999999999, "text": " One OpenAI has done market research and so on." }, { "start": 892.0999999999999, "end": 895.66, "text": " And this is what they think people will pay for this." }, { "start": 895.66, "end": 899.9, "text": " Like this is how much value they think they bring with their API." }, { "start": 899.9, "end": 904.52, "text": " Or on the other hand, this is kind of their operating cost plus some margin to make the" }, { "start": 904.52, "end": 905.78, "text": " shareholders happy." }, { "start": 905.78, "end": 908.9399999999999, "text": " Now I really can't tell apparently they do have customers." }, { "start": 908.9399999999999, "end": 911.7199999999999, "text": " So someone must be willing to pay all of this." }, { "start": 911.7199999999999, "end": 917.26, "text": " On the other hand, it does seem outrageously expensive for such a small improvement, at" }, { "start": 917.26, "end": 919.4399999999999, "text": " least in these academic data sets." }, { "start": 919.4399999999999, "end": 923.86, "text": " So let me know what you think is this even profitable for OpenAI?" }, { "start": 923.86, "end": 928.58, "text": " Like does anyone have any estimates on what it costs them to develop these new models" }, { "start": 928.58, "end": 930.26, "text": " and to keep them running?" }, { "start": 930.26, "end": 931.9, "text": " It must be massive endeavor." }, { "start": 931.9, "end": 936.9399999999999, "text": " In any case, that was it for the special episode of ML news." }, { "start": 936.9399999999999, "end": 938.7, "text": " Merch is still available." }, { "start": 938.7, "end": 939.7, "text": " And I'll see you next time." }, { "start": 939.7, "end": 955.74, "text": " Bye bye." } ]
j4xgkjWlfL4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
OpenAI DALL·E: Creating Images from Text (Blog Post Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gpt", "gpt-3", "visual transformer", "transformer", "transformers", "attention mechanism", "vqvae", "vq vae", "vq-vae", "codebook", "relaxation", "gumbel", "text", "images", "nlp", "natural language processing", "autoregressive", "grid", "encoder", "decoder", "gpt3", "avocado chair", "porcupine sphere", "animations", "fisheye", "text to image", "image captioning", "openai", "sutskever", "dali", "dalle", "walle", "vector quantized", "hierarchical", "gan", "generative", "likelihood" ]
#openai #science #gpt3 OpenAI's newest model, DALL·E, shows absolutely amazing abilities in generating high-quality images from arbitrary text descriptions. Like GPT-3, the range of applications and the diversity of outputs is astonishing, given that this is a single model, trained on a purely autoregressive task. This model is a significant step towards the combination of text and images in future AI applications. OUTLINE: 0:00 - Introduction 2:45 - Overview 4:20 - Dataset 5:35 - Comparison to GPT-3 7:00 - Model Architecture 13:20 - VQ-VAE 21:00 - Combining VQ-VAE with GPT-3 27:30 - Pre-Training with Relaxation 32:15 - Experimental Results 33:00 - My Hypothesis about DALL·E's inner workings 36:15 - Sparse Attention Patterns 38:00 - DALL·E can't count 39:35 - DALL·E can't global order 40:10 - DALL·E renders different views 41:10 - DALL·E is very good at texture 41:40 - DALL·E can complete a bust 43:30 - DALL·E can do some reflections, but not others 44:15 - DALL·E can do cross-sections of some objects 45:50 - DALL·E is amazing at style 46:30 - DALL·E can generate logos 47:40 - DALL·E can generate bedrooms 48:35 - DALL·E can combine unusual concepts 49:25 - DALL·E can generate illustrations 50:15 - DALL·E sometimes understands complicated prompts 50:55 - DALL·E can pass part of an IQ test 51:40 - DALL·E probably does not have geographical / temporal knowledge 53:10 - Reranking dramatically improves quality 53:50 - Conclusions & Comments Blog: https://openai.com/blog/dall-e/ Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
A sphere made of Swiss cheese. A sphere with a texture of Swiss cheese. And there you have it. Beautiful, very appetizing Swiss cheese balls. My Swiss heart just skipped a beat out of this monstrosity. What's even cooler than a sphere made of Swiss cheese is a torus made of denim. These images are so cool. A torus made of denim. And the point here is that these images aren't photoshopped or sort of human created. They are AI generated. And they are generated by this new model that OpenAI released a blog post about. It's called DALL·E. And what it can do is it can take a piece of text, such as the one on top here. The fact that I can select here is simply because they don't give you access to the model; they just give you access to a bunch of things that they've tried. But the model can take any piece of text and it can output a picture that matches that text. So here you got a torus made of toothpaste. And the quality of these images is super astounding. And what's even more astounding is sort of the range of capabilities that this model has. So the model can do various things, such as, in here, the input is an illustration of a baby daikon radish in a tutu walking a dog. And you see an illustration of a baby daikon radish in a tutu walking a dog. The outputs are just adorable. These are generated by the AI. The same for an armchair in the shape of an avocado, a storefront that has the word OpenAI written on it. I've tried reverse image searching some of these images and I could not find them on the internet. So it's definitely not just a model sort of outputting an image it found somewhere. These are actually generated images. And the astounding thing is that it's the same model that outputs all of these different images. It's not one model here trained on illustrations and one model trained on chairs. It's a single model that can take in a piece of text, and optionally part of an image or none of an image, and it will output the image: either it continues the image you already give part of, or it just generates the image by itself. So the model is called DALL·E. And this is just a blog post for now by OpenAI. They say they'll follow this up with a paper. And if the paper brings substantially new things, I think I'll make a video on it. But today we're just going to look at what this model can do, how it works, how it probably works. And we can take some guesses of what we can read in the paper once it's out. In fact, OpenAI has brought out two new models along with this DALL·E model. They've also released a blog post and a paper about a model called CLIP, which is more of a sort of a classifier; not exactly a classifier, it sort of connects text and images in a different way. It's not a generative model. And we're going to look at that in a different video. But you can see the clear trend right here is that OpenAI is looking into connecting text and images. So they say DALL·E, which is, I think, an homage to Salvador Dalí mixed with the character WALL·E. So they say it's a 12 billion parameter version of GPT-3. So, you know, it's not quite GPT-3; that was more than 10 times larger. But it's a 12 billion parameter version of GPT-3, trained to generate images from text descriptions, using a data set of text image pairs.
We found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text and applying transformations to existing images. So a lot of the things they don't tell us here, especially the data set, like how did they get the data set? Nobody knows. They don't say this. They simply say it's a data set of text image pairs. And they sort of allude to the fact that they have large pieces of data, especially in the CLIP paper. There they allude to the fact that you can just find data that connects text and images on the internet. And it's true: if you search, if you scrape the correct websites, and do it in sort of a smart fashion, you can find a lot of data where there's an image and there's a piece of text describing that image. And we have to assume that they sort of scraped the internet for something like this. I don't think they have a lot of explicitly human labeled data for this type of thing. So we'll just assume that they have like a huge data set. And of course, they train a huge model on it, a 12 billion parameter version of GPT-3. GPT-3 is the famous model, the famous text generation model by OpenAI. And you can sort of see the same things right here. So for GPT-3, my hypothesis was that it sort of smartly mixes the training data; rather than remember the training data, it sort of remembers it and then smartly interpolates between it. And I think you can sort of see the same kind of things right here, in that these are all definitely pictures that you could imagine in the real world. But they have, for example, the storefront changed to OpenAI in here, and there are surely chairs that sort of look like this. So it just kind of mixes a chair with an avocado in a plausible way. I'm not saying this to denigrate the model, I'm saying that, I mean, this is seriously cool, the fact that it can do that. So they say, like GPT-3, DALL·E is a transformer language model. Now, this is very, very interesting, the fact that it's a transformer language model. It receives both the text and the image as a single stream of data containing up to 1280 tokens, and it's trained using maximum likelihood to generate all of the tokens one after another. Okay, this training procedure allows DALL·E not only to generate images from scratch, but also to regenerate any rectangular region of an existing image that extends to the bottom right corner, in a way that is consistent with the text prompt. And they say a little bit more here on the right. And they also say a little bit more down on the bottom. So I'm going to try to take a stab at explaining how this model works, with the full knowledge that I might be wrong once the paper comes out. And for that, we have to go back a little bit and look at the models it draws from, namely the VQ-VAE. So the vector quantized VAE literature; so VQ-VAE, we'll consider this to be sort of the inspiration of, or one of the necessary ingredients of, this model. So if we combine VQ-VAE with something like GPT-3, we get DALL·E. That's my hypothesis for today. Why combine these two models? So GPT-3 is extremely good at modeling language, right? So if I have a piece of text, let's go down here for a minute. And let's say I have: a cat sat on the mat. A transformer will be very good at understanding this sentence and being able to complete it.
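Before going on, here is what that "single stream of data, trained using maximum likelihood" could look like in code. This is a minimal PyTorch sketch, not OpenAI's implementation: the 1280-token total comes from the blog post, the 256/1024 text/image split and the vocabulary sizes are assumptions, and the tiny GRU model is only a stand-in for the real 12 billion parameter transformer.

```python
import torch
import torch.nn.functional as F

TEXT_LEN, IMAGE_LEN = 256, 1024          # assumed split; 256 + 1024 = 1280 tokens
TEXT_VOCAB, IMAGE_VOCAB = 16384, 8192    # text vocab size is a made-up number

class TinyLM(torch.nn.Module):
    # Left-to-right stand-in for the real decoder-only transformer.
    def __init__(self, vocab=TEXT_VOCAB + IMAGE_VOCAB, dim=64):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.rnn = torch.nn.GRU(dim, dim, batch_first=True)  # causal by construction
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return self.out(h)

def training_step(model, text_tokens, image_tokens):
    # Maximum likelihood over one stream: concatenate text and image tokens
    # (image ids offset so both vocabularies share one embedding table) and
    # predict every token from the tokens before it.
    stream = torch.cat([text_tokens, image_tokens + TEXT_VOCAB], dim=1)  # (B, 1280)
    logits = model(stream[:, :-1])
    targets = stream[:, 1:]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

model = TinyLM()
text = torch.randint(0, TEXT_VOCAB, (2, TEXT_LEN))
image = torch.randint(0, IMAGE_VOCAB, (2, IMAGE_LEN))
print(training_step(model, text, image))
```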
So if I cross out this and ask a transformer to continue the sentence, it will be able to continue the sentence just fine, if it is trained well. And that's exactly how GPT-3 works. Now imagine that I don't have a piece of text, but I have some sort of a description of an image, right? And let's say I have a box. Here is a box. And the box, which is going to be a VQ-VAE, can take in a description of an image in words, but not exactly words that humans understand. But let's say there is an image language, sort of like a programming language, okay. And you input symbols into the box; let's say it's a bit like Egyptian hieroglyphs, maybe. So here is the, this, this hieroglyph thing, and then there is the sun, the sun thing. And then there is the tree, the word for tree, like the hieroglyph for tree. And I input that here. And the output will be an image where, I don't know, the sun is shining. Yes, I draw like a child, it has a little smile, okay, deal with it. And there is a tree, maybe not exactly the tree from the hieroglyphs, but like some sort of tree that fits. And then there is some human in the scene, maybe the human sits here, the human sits at the tree, you know, relaxing, chilling. Okay, so now the image on the right consists of pixels, right. And modeling pixels with a transformer is very, very hard, because in the case of our model right here, it's something like 256 by 256 pixels. That would mean the transformer would have to generate 256 times 256 values, which is like 2 to the 16. This is just too much for a transformer to model the pixels individually. So there are multiple ways around this, for example modeling little regions right here, which are not really satisfactory. So what this model does is, it doesn't try to model the picture as such; it tries to predict these hieroglyphs right here. It tries to predict sort of a language that this box can understand and produce a picture from, okay. So its task is going to be, given some sort of a text prefix, so: a human in a sunny field, sunny day, or on a sunny day, chilling under a tree. So this piece of text, followed... so the model is trained to take this piece of text and output this sequence of hieroglyphs. Okay, so this sequence of hieroglyphs is output from this piece of text. And that's something a transformer can do if you have a vocabulary right here. So if you have a fixed list of hieroglyphs that you could use, right; so in there, there is the human, that's the word human in Egyptian. And then the pyramid is in here as well, some that you need, some that you don't need. So if there is a vocabulary, the transformer is going to be pretty, pretty good at generating this thing. So you need two parts. The first part right here is a transformer language model, a GPT-3 thing, that can input a sequence of text, and it can output a sequence of text, which is just in a different vocabulary, namely this picture vocabulary. And then in step two, you need a box that takes in this picture vocabulary and actually produces an image right here. So as I already said, this part is taken over by GPT-3, like the custom GPT model they built for this. And this part is taken over by something like a VQ-VAE, the generator part of it. So what is a VQ-VAE? A VQ-VAE is, and you will be able to see that... so the box that we're going to need is this box right here, from here up to where the image is.
And this thing right here is going to be that vocabulary. So what does a VQ-VAE do? It takes the image here on the left, you can see that here's the encoder; it takes the image, it encodes it into a latent space. Now what a VAE would do, or what an autoencoder would do, is it would encode the image into a latent space, and then it would decode it again and try to reproduce the same image. And then you assume that whatever is in the middle right here is a sensible representation, a latent representation of that image, right? If you can train this model, you're going to get some sort of a representation in the middle that describes the image, otherwise you couldn't reproduce the image. And there have been many models built on this concept. Now for this model right here, it turns out that the classic autoencoder doesn't work too well. But this model works quite formidably. So what you're going to have is this vocabulary right here. It's also called a codebook. Let's call it a codebook. So the codebook is also the vocabulary. So what you're saying is that you can't just output any latent encoding. So the encoder outputs a continuous vector. But what you're saying is it has to be one of those. Like, there are a number of vectors that you have at your disposal, Mr. or Miss Encoder or Mrs. Encoder. There is a number of vectors that you have at your disposal. You can only choose those. You can't choose any vector that you want, right? So in your latent space, you can't just choose any point. There's this, there's this, there's this, there's this; you have to choose one of them. And if you choose something in between, which you inevitably will, because all of our neural networks output continuous values, we're just going to clamp you: we're just going to find the nearest one in our codebook, and we'll just make it such that it's as if you had output that one. So the encoder can only hit one of those codebook vectors. And then you feed these codebook vectors to the decoder. And the decoder just decodes from these codebook vectors. Okay. And this trick, forcing the encoder to choose from a fixed codebook, turns out to be much, much, much better than simply doing the autoencoder thing continuously. So imagine that this codebook vocabulary is sort of like a vocabulary of image words. So, this is a cat. And you don't just encode this into one of these words. What you do is you split the image up into a grid. It's not as fine as pixels; the cells are fairly large. So in their experiments, they're going to use something like 32 by 32 grids, which is also what DALL·E uses. Every image is described by 1024 tokens. That's 32 by 32 tokens. And then you're going to make an encoder such that when this grid goes through the encoder, this thing here corresponds to one of the code vectors and this thing here corresponds to another one. So you have your big vocabulary right here. And this is the red vector, this is the blue vector, this is the green vector, and you're going to just describe the image regions with these codebook vectors, like such. Now, the fact is that you have a lot of these vectors; in fact, you have 8192 vectors in DALL·E. And the image only consists of 1024 tokens.
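The codebook lookup just described is short in code. Here is a minimal PyTorch sketch: the 8192 codes and the 32 by 32 grid follow the numbers above, while the vector dimension and all names are illustrative assumptions.

```python
import torch

def quantize(z, codebook):
    # Snap each encoder output vector to its nearest codebook entry.
    # z: (B, H, W, D) continuous encoder outputs, e.g. H = W = 32.
    # codebook: (K, D), e.g. K = 8192 as in the transcript.
    flat = z.reshape(-1, z.shape[-1])           # (B*H*W, D)
    dists = torch.cdist(flat, codebook)         # distances to every code vector
    indices = dists.argmin(dim=1)               # nearest code id per grid cell
    z_q = codebook[indices].reshape(z.shape)    # quantized vectors for the decoder
    return z_q, indices.reshape(z.shape[:-1])

codebook = torch.randn(8192, 16)                # 16 dims chosen arbitrarily
z = torch.randn(1, 32, 32, 16)
z_q, ids = quantize(z, codebook)
print(ids.shape)  # (1, 32, 32): the 1024 "hieroglyph" ids describing one image
```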
So, you know, it's conceivable; like, it's not like here where you have to reuse the same token over and over again. But one of these tokens could, for example, be sky. So maybe this is the thing that sort of describes sky. So what you'll have is, like, this thing and this thing and this thing and this thing should be approximately sky. Right. And then maybe the red one is, I don't know, animal. And the blue one is vegetation. And the green one is something else. So you can see, if you feed this to a model that has to make a picture from it, it can just look at this, and it's sort of like a description, a low resolution description of an image. It's not exactly a downsampled image; it's a description, because these things here contain a lot of information by themselves. OK, it's just that you can't choose any vector in latent space. You have to choose one of those vectors in the codebook. So that's a vector quantized VAE. And they train everything at the same time. So they train the encoder and decoder with this straight-through estimator, because this nearest neighbor computation isn't exactly differentiable. They also train the codebook to match the outputs of the encoder. So you can train that, or you can just take the exponential average of the encoder outputs. And that's the VQ-VAE, which is developed more in VQ-VAE-2. So this is VQ-VAE-2, I've linked the papers. VQ-VAE-2, the version two of it, does the same thing, but in multi-scale. So here you can see that in the encoder, you take the image and you put it at multiple resolutions. So this is large resolution. This is low resolution. Then you use the vector quantization to encode this into this grid and encode this into the codebook vectors. So again, here maybe you have red, red, red. This is red and this is the green one and so on. So each square has to choose one of these eight thousand vectors to represent itself. And then you do this sort of hierarchical thing where you use the decoder on this level to produce a slightly higher resolution image. But then you quantize again and you use a decoder at the next level to produce an even higher resolution image. So you can see that these hierarchical models, usually, if you want good high resolution images, you sort of need them. So you can see that the top decoder here outputs something quite blocky. And then every additional one adds sort of details to the image. It's pretty impressive as such. And you can see the training right here of the VQ-VAE. These are papers from last year or the years before. So this has been known. What DALL·E does is, from what I can gather from the blog post right here: the images are preprocessed to 256 by 256 during training. Similar to VQ-VAE, each image is compressed to a 32 by 32 grid of discrete latent codes using a discrete VAE that we pretrained using a continuous relaxation. OK, there's a lot of stuff here. So the VAE is pre-trained. And they're saying, also down here, that their model uses maximum likelihood to generate all of the tokens one after another. It's decoder only, and so on. So probably this whole pipeline here is pre-trained. They pre-train a discrete VAE. And then the DALL·E model simply has to learn how to produce the tokens. Right. The DALL·E model simply has to learn how to produce these hieroglyphs. And the box is fixed. The box is not changed.
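The straight-through estimator mentioned above is a one-liner in practice. Here is a hedged sketch of it, together with the usual VQ-VAE auxiliary losses; the beta weighting and all names are the conventional ones from the VQ-VAE literature, not something stated in the blog post, and as noted the codebook term can instead be replaced by an exponential-moving-average update.

```python
import torch

def straight_through(z, z_q):
    # Forward pass uses the quantized z_q; the backward pass copies gradients
    # to the encoder output z, since nearest-neighbor lookup has no gradient.
    return z + (z_q - z).detach()

def vq_losses(z, z_q, beta=0.25):
    # Usual VQ-VAE auxiliary terms: pull the codes toward the encoder outputs,
    # and commit the encoder to the codes it picked.
    codebook_loss = ((z.detach() - z_q) ** 2).mean()
    commitment_loss = ((z - z_q.detach()) ** 2).mean()
    return codebook_loss + beta * commitment_loss

z = torch.randn(4, 16, requires_grad=True)
z_q = torch.randn(4, 16)  # pretend this came from a codebook lookup
straight_through(z, z_q).sum().backward()
print(z.grad is not None)  # True: gradients reach the encoder despite quantization
```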
It's possible that they also train the decoder here, so the decoder; but I don't know, I can't tell this from the blog post. What's certain is that they don't train the encoder. So what you would do in a single step of DALL·E is, you would have your text right here, blah, blah, blah, and you would have a partial image. OK, you would input this text and the partial image to DALL·E. The partial image is any image where you've blacked out the bottom right. And they do the bottom right simply... it's the same as you do left to right with text. So you do sort of top left to bottom right. And yeah, it's good, because you can always flip an image. Maybe not, actually. But it's just a bias that you have to provide the model with in order to do autoregressive training. Right. So here is the image of that cat, right? And you black out the bottom right. You can black out the whole image if you want the model to produce images unconditionally. All right. So you black all of this out. Cool. So now what you do is: these here, they are already words, right? You tokenize those, token, token, token, and you go into your vocabulary of text, right? So there is a vocabulary of text somewhere, there's blah. And you encode all of these using that vocabulary. So this is maybe word 34. So this is word 34, 34, 34. You go to your image. You rasterize this according to your definition. OK. And then you go and run this through this encoder that you trained. So you run it through the box, and for each of these grid outputs, the box will tell you, well, in my vocabulary of image pieces, this here is number two. This here is number four. This is two again. This is 35, and so on. So you do this left to right, top to bottom, and then you put it right here. OK. So this is followed by an image of: two, four, two, 35. And what you ask the model to do is simply to predict from all of this. And the model knows that this is text and this is images. From all of this, predict the next token, which would be this token right here. So you want to predict this one right here. What is it? And that's how you train the model. Right. And once it gets that, you can ask it to predict the next one and so on. And in this way, you can let it generate an entire image at inference time. And you know, you can train this. They say all these tokens are generated autoregressively. Now, in my understanding, this is all the model does, because once you have that token, so if the model says this is number seven, you go back to your box and you say, please... no, wait, it's a different box. That was the encoder, the encoder of the VQ-VAE. Now you go to your decoder that you've also pre-trained. Right. So this is a different box. And you ask it: I have this image, right? I have two, four, two, 35 and seven. Please generate an image for me from that. Or maybe you want to wait until you have the complete image. Right. So you have the complete image and you give this to your decoder. These are now these hieroglyphs, right? So you have the box and the box produces an image. And the box says, well, OK, this cat here, it probably reproduces the ears fairly well, because you can describe them sort of exactly. Maybe you also want to copy that over or something. But then it says, well, it's a cat. So I'm going to, you know, maybe... if the model has done a good job, there should be some sort of a cat. Right.
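That inference loop, sample the 1024 image tokens one by one and then hand the grid to the pre-trained decoder, can be sketched as follows. Everything here is an illustrative assumption: the vocabulary sizes, the sampling temperature, and especially the random-logits `model`, which only stands in for the trained transformer so the snippet runs.

```python
import torch

TEXT_VOCAB, IMAGE_VOCAB, GRID = 16384, 8192, 32  # illustrative sizes

@torch.no_grad()
def generate_image_tokens(model, text_tokens, temperature=1.0):
    # Sample the 32 x 32 = 1024 image tokens one by one, left to right and
    # top to bottom, conditioned on the text (and on any already-fixed image
    # tokens, for completions). `model` maps a token stream to logits.
    stream = text_tokens.clone()
    for _ in range(GRID * GRID):
        logits = model(stream)[:, -1]
        logits = logits[:, TEXT_VOCAB:] / temperature  # only image tokens are valid now
        nxt = torch.multinomial(torch.softmax(logits, dim=-1), 1) + TEXT_VOCAB
        stream = torch.cat([stream, nxt], dim=1)
    ids = stream[:, -GRID * GRID:] - TEXT_VOCAB        # back to codebook ids
    return ids.reshape(-1, GRID, GRID)                 # hand these to the VQ decoder

# Stand-in that returns random logits for the last position only,
# which is all the loop above actually uses.
model = lambda ids: torch.randn(ids.size(0), 1, TEXT_VOCAB + IMAGE_VOCAB)
prompt = torch.randint(0, TEXT_VOCAB, (1, 256))
print(generate_image_tokens(model, prompt).shape)  # (1, 32, 32)
```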
And the model, you know, maybe in these hieroglyphs it's even described how the cat looks. The cat looks straight ahead, has whiskers, has eyes, and so on. OK. So I'm going to guess that the part on top is what's trained and the part on the bottom is pre-trained, with the option that the decoder part could also be trained at training time, at the same time as they train this language model on top. So they make some further inferences right here. They say each image is compressed into latent codes using a discrete VAE that we pre-trained using a continuous relaxation. We found that training using the relaxation obviates the need for an explicit codebook, EMA loss or tricks like dead code revival, and can scale up to large vocabulary sizes. And this is the part where I am a bit confused. So clearly they say they have a vocabulary in the visual domain. OK, there are 8192, well, I don't know my powers of two, 8192 different words in the codebook. So there must be a codebook. But they say this obviates the need for an explicit codebook. So I don't really know what to make of that. I can tell you what a continuous relaxation might look like. So this is from a different paper that they linked, on these concrete random variables. So if you have an operation such as this, like a discrete random variable, you need to take an argmax of it. What you'll have is some sort of logits, right? They may be like this, and you take the argmax of it, which means that you put it into a distribution where it's just one value. And this is sort of the same operation as we do in the VQ-VAE, where we assign each output of the encoder to the nearest codebook vector. We say you can only have one of the codebook vectors. That's it. Right. Now, what you want to do when you relax this is to say, well, instead of that, what you could do is just kind of take that codebook vector a lot, but also, you know, take a little bit of the others. So rather than doing a hard assignment to a codebook vector, right, so here would be the output of your encoder and you hard assign it to the nearest neighbor, you want to say, well, I'm going to soft assign it to all of them. It's sort of like the difference between k nearest neighbor and a Gaussian mixture model, as I understand. Not what they do here, but it's analogous to that. And with that, they don't need an explicit codebook. And I don't know what that means. What I can imagine is that they don't actually train the codebook vectors. Maybe they just quantize to some prefixed schema, or I just don't understand what they do. Yeah, here is an illustration of these discrete random variables. So you want to get to a point where, when you sample the variable, as you drop your temperature, it more and more approaches this fixed sampling. Like, you can be either here or here or here, with the sort of masses that are indicated by the size of the circle. But as you increase the temperature, you go more to a mixture. So yeah, you can be at the corner, but you can also be kind of in this region or in this region or in this region. As you increase the temperature, you can see the distribution becomes more of a mixture distribution. And any mixture distribution with a temperature other than zero, of course, now all of a sudden has sort of a defined gradient. Whereas these discrete random variables, they do not have a gradient.
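The standard form of the relaxation being described is the Gumbel-softmax (concrete) distribution; whether OpenAI used exactly this is an assumption, but it matches the temperature behavior discussed above. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature):
    # Relaxed discrete sample: high temperature gives a soft mixture over the
    # codebook; as temperature -> 0 it approaches a hard one-hot argmax, but
    # for any temperature > 0 it stays differentiable.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-9) + 1e-9)
    return F.softmax((logits + gumbel) / temperature, dim=-1)

codebook = torch.randn(8, 4)        # 8 hypothetical codes of dimension 4
logits = torch.randn(8)             # encoder scores over the codes
for t in [5.0, 1.0, 0.1]:
    weights = gumbel_softmax_sample(logits, t)
    soft_code = weights @ codebook  # differentiable "soft" codebook vector
    print(t, weights.max().item())  # the winning weight hardens as t drops
```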
And that's the reason why the VQ-VAE needs to do this straight-through estimator right here, because this hard assignment to the codebook does not have a gradient defined. With the soft relaxation, you do have a gradient. And maybe they just mean they don't need this hard assignment to the codebook. I'm not sure. Or maybe they just quantize in a different way. Maybe they go back to a continuous latent space. Yeah, I can imagine they might go back to a continuous latent space, but somehow they still do a form of quantization. This could be a fixed quantization. Like, you say, OK, you can choose any of the basis vectors and some mixtures that we define between them. Or they define it via moving averages, or they define it via batch statistics, or, I don't know. If you know, let me know in the comments to the video. Right. So this was my take on what the model does and what is probably behind it. Now, let's look at some more examples right here, because these are fun. So they say it can sort of control attributes. So you see these; it's, for example, a pentagonal green clock. And you see it's not always pentagonal. It's sometimes hexagonal and sometimes heptagonal and whatnot. But in general, what it does well is sort of color and also kind of object description. So lunch box, it gets, and green, it gets. What it can't do super well is stuff like counting. So I have sort of a hypothesis. I have multiple hypotheses about here. Just see, in all of these examples, how the text prompt is phrased. So it says a pentagonal green lunchbox, a green lunchbox in the shape of a pentagon. This is a quite unusual way to phrase the prompt. And by the way, all these criticisms that I'm leveling here, most of them are actually admitted and discussed in this blog post. It's actually pretty cool and pretty, let's say, self-critical of them. So, I thought of these things, and then I read the little text, and they already describe what I concluded. It's sad, but yeah, it's pretty cool of them, because the current climate is sort of: make your research look as cool and flawless as possible. This goes a bit against it. So they say that the images here aren't cherry picked. And I totally believe this. So they have a little trick that they do. They output, I think, five hundred and twelve images from their model, because they can sample, and then they re-rank them using this other model that they've released, this CLIP model. And this CLIP model is a pretty good re-ranker. So you give it a piece of text and an image, and it sort of tells you how well they fit together. And so the outputs that you see here are re-ranked by this model. So what you see are strictly the best outputs according to that model. So it's not cherry picked by humans, but it's cherry picked by a very good model. And the second thing is that the text prompt here is absolutely cherry picked, right, by the way this is phrased. You can see that it is very, very brittle. Probably... I can't test it, but probably it's very brittle in how exactly you phrase this text prompt. And I'm going to guess they have tried a lot of things before they've released these few examples right here that they show. And they've made sure that they work. So, yeah, just keep in mind that this is very brittle. And we already know this from like GPT-3. We know that the input might seem the same to a human, just phrased differently in some cases.
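The reranking step just described is simple to sketch: generate many candidates, score each against the prompt with a text-image model, keep the best. In this sketch, `clip_score` is a hypothetical stand-in for the released CLIP model, and the images are random arrays, just so the snippet runs.

```python
import numpy as np

def rerank(candidates, prompt, clip_score, top_k=32):
    # Score every generated image against the prompt, keep the top_k best.
    scores = np.array([clip_score(prompt, img) for img in candidates])
    order = np.argsort(-scores)
    return [candidates[i] for i in order[:top_k]]

# Toy stand-ins: 512 random "images" (the count mentioned above) and a dummy
# scorer instead of the real CLIP model.
rng = np.random.default_rng(0)
images = [rng.normal(size=(32, 32)) for _ in range(512)]
dummy_score = lambda prompt, img: float(img.mean())
best = rerank(images, "a pentagonal green clock", dummy_score)
print(len(best))  # 32 best-scoring samples, as judged by the scorer
```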
And yet the model will output completely different things. And we know that a lot of these GPT-3 examples are very, very constructed in terms of the input prompt. So, yeah, the other thing is, the model, as I said, it can do colors and textures pretty well. So we've already seen the things made of things. So the sphere made of noodles, that actually probably exists; the sphere made of guacamole. However, it's not super good at counting, for example. And I have sort of multiple hypotheses. So these image models, they tend to be very good at sort of style and texture. Style and texture are the domain of these image models, like anywhere where there's like a convolution. And by the way, they use, in the VQ-VAE model... no, not in the VQ-VAE, in this transformer for images, they don't do full attention. What they do is, each one of the image tokens can attend to each of the text tokens, such as this. But the image tokens, they can only sort of attend in the grid, layer by layer. In one layer, they can attend sort of to the row of other image elements. In another layer, they can attend to the same column. And in even another layer, they can attend to sort of the surroundings of them, like a convolution. So they can attend to, let's say, a couple of neighbors right here. So it's not full attention, yet in every layer, every image token can attend to all the text tokens. So yeah, in these models, what you typically see is that texture and style is pretty good. However, global correspondences are not as good. And that's what you see a lot in these face models, where the left and the right earring don't match, and things like this. So global correspondences are not so good. And you would actually expect that objects aren't as good as well. Right. So here, this is still a clock. This is still a light bulb. This is still a stop sign. Right. So it somehow gets the objects correct, which, in my hypothesis, it shouldn't, because this is some sort of a global structure. However, I think that's just a matter of how the data set is collected. The data sets are probably... we humans, we take pictures of objects. Right. So the fundamental structure in these data sets is the object. So it makes sense that it learns that. We humans, when we take pictures, we often don't describe the count in them. So I can get that the model has a harder time to learn that, and actually focuses just on the object as a global thing. The count would be a global thing. Right. But it's not that prominent in the data. And the rest is a local thing, like the color, the texture and so on. Yeah. The cube made of porcupine. So you can see here, this counting: so two is often quite good. Actually, here it mixes up glasses and glasses. Right. So two often works. However, if you go past two, it often gets it wrong. So five, you'll get anything from three to seven clocks and so on. So I'm going to also guess it's very brittle. Like, here, yes, they're sitting on a table. But if you take an object that's not that often on a table, like a clock, you'll see that it's pretty unrecognizable whether or not it's on a table. Five, four clocks. So, you know, the model is prone to ignoring part of its input if the likelihood in another part is larger. Also, it can't do things like this. You know, a stack of three cubes, a red cube is on the top, sitting on a green cube. It often gets the order wrong, like it gets the cubes on top of each other.
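The sparse attention pattern described here, image tokens seeing all text tokens but only their own grid row or column among image tokens, can be sketched as a boolean mask per layer. The exact patterns DALL·E uses may differ; this is an illustrative construction with tiny made-up sizes.

```python
import numpy as np

def axial_attention_mask(kind, text_len=4, side=4):
    # Boolean mask (True = may attend) for one layer over a stream of
    # `text_len` text tokens followed by a side x side grid of image tokens.
    # Every image token may attend to all text tokens; among image tokens, a
    # "row" layer sees its own grid row and a "col" layer its own grid column.
    n = text_len + side * side
    mask = np.zeros((n, n), dtype=bool)
    mask[:, :text_len] = True                      # everyone sees the text
    for q in range(side * side):
        qr, qc = divmod(q, side)
        for k in range(side * side):
            kr, kc = divmod(k, side)
            same = (qr == kr) if kind == "row" else (qc == kc)
            mask[text_len + q, text_len + k] = same
    # A causal mask (attend only to earlier positions) would be AND-ed on top,
    # and a convolution-like layer would instead test abs(qr - kr) <= 1 and
    # abs(qc - kc) <= 1 for a local neighborhood.
    return mask

print(axial_attention_mask("row").astype(int))
```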
However, it often gets it wrong when it comes to, you know, the order, the global things. As I said, anything global that is not what the object is tends to be weak. Anything local tends to be strong in these models. And that's just a matter of how they're built and how the data is. So they say the model can render new views. And here is where I'm not as convinced. So here you have like an extreme close up view of a capybara... sorry, of a fox. They're close up. Sometimes they're extreme close up. Right. You can see that it gets like forest. It gets it pretty well. But then you say, OK, a ground level view, like... and then you say, OK, an aerial view. Maybe some of them are aerial views, some of them aren't. What's pretty cool is things like, OK, a fish-eye lens view. I mean, that's pretty cool. And they have some of them, a bottom view or a rear view. Yeah, the rear view works better. So it does understand these kinds of things, like what's the rear of a fox and what's the front of a fox. Though, as you can also see, not always. Texture, it's very good at texture. So here, something made of voxels, it can do that perfectly. An owl made of voxels like this looks like it comes straight from Minecraft. Right. Absolutely, absolutely cool. Even X-ray, it sometimes doesn't always get the bones right. But yeah, as I said, style, structure. Very cool. So here is an example of a completion. So they give the text prompt, a photograph of a bust of Homer, and the image, the top part of the image. And they say, well, it can, describing a well-known figure, it can complete the figure. I don't agree that it completes Homer. Like, it probably just sees this bust, and it just completes whatever fits. I have not studied Homer as a historic person, or busts of him. But, you know, I disagree that this depicts largely the same person very often. You can see here, sometimes there is even, you know, completely unrelated stuff. There is that lady with the pearl earring by Vermeer somewhere in there, and so on. And what I also like in this one: you know the game Draw Something, or, you know, Pictionary and so on? There are people, when they can't draw something, they just kind of write it on the picture. It's like, ah, screw it. Now, this is Homer, right? This is Homer. Now, I don't care what you say. This is Homer. But, you know, it does... so when you say Cleopatra, it goes more into sort of the female direction. Medusa, it has some, though I'm pretty sure Medusa has the snake hair. No? Maybe Venus. Yeah, somewhat. They test a lot of things, like: can it do mirror reflections? And you can see right here, they say it can do reflections on the ground pretty well, but it can't do reflections, for example, in a mirror, because in a lot of these pictures, the object, like here, would actually have to be in front of the mirror. However, in the fewest of the pictures, the object mirrored is actually also in front of the mirror. So this kind of global correspondence isn't given as much. However, there is a fair bit of reflection on the ground, so to say. So, you know, that's pretty cool, but it's also probably very, very common in datasets. Yeah, cross section view of a walnut. So they sort of implore, sorry, explore the model, what it can do.
Yeah, a cross-section view of a walnut. So they sort of explore what the model can do. And here you can see that if something is common in the dataset, like the cross-section view of a human head, there are a lot of pictures of that in the dataset, and it works well. However, if it comes to the cross-section view of, where did I see the airplane, there is an airplane somewhere, it's less so. Here it probably doesn't really know how that looks, because even on the whole Internet, pictures of cross-sections of airplanes, or any sections of airplanes, are just not that common. So it sort of just focuses on airplane, and then with cross-section it probably knows that it should somehow display some of the interior, so it just produces some stuff that matches. As I said, if it can't make the likelihood of all of the things high, what it tends to do is just focus on one of the things and make that likelihood high, which is reasonable for a model. Macro photographs of stuff: these are pretty cool, this is what you would find in some image galleries, absolutely. Then it can do various things like style transfer, and here is where it shines. So you can have paintings of different objects in different styles. Here you can have an owl sitting in the forest in the morning, and you can have this as a painting, as a painting in the pop-art style, and so on. It's very, very impressive. I absolutely love these, actually, like the one as a postage stamp; these are absolutely amazing. And you can have stuff like stained-glass windows, and this is, yeah, where the model shines. And even here, a storefront that has the word OpenAI written on it. Just look at how convoluted this text prompt has to be for them to get this to work. It's impressive, but the text prompt has to be repeated and reformulated a bunch of times and so on. My personal favorite is the PyTorch chips: they're crunchy, you get a piece of backprop in every package. You can see it sometimes misses, like this one says perch chips, and so on. But it is pretty cool that it basically can do OCR, or rather reverse OCR: you give it a piece of text and it makes a picture with that text on it. Very, very impressive, even though, as we said, the global correspondences are not always there. They do explore fashion, like a skirt, here the yellow skirt on these mannequins. And here they have a loft bedroom with a white bed next to a nightstand, with a fish tank standing beside the bed, and they give sort of the beginning of the image, and here's what the model comes up with. You can imagine that there are a lot of pictures like this in the dataset, so the model might be pretty good at stuff like this. Though I have found, with their king bed next to the nightstand with the telescope beside the bed, that the beside is loose: there's a telescope, sometimes it's on the bed, sometimes it's next to it, and there are some weird telescopes around. Well, this is a lot of telescopes. That's a weird telescope. But, you know, the quality is pretty impressive; this is absolutely nitpicking that I'm doing here. Combining unrelated concepts: we've already seen the armchair in the shape of an avocado. They also have a snail made of harp. Though my personal favorite is the penguin made of garlic.
The penguin made of garlic. This? Perfect, right? Absolutely adorable. And just qualitatively, you would pay a highly skilled Photoshop artist quite a bit of money to get this sort of output. And these models shine at this sort of style-transfer, texture stuff. And here you have the illustrations. You can have any kind of illustration, like the illustration of a baby shark with a mustache, holding, there's one holding an umbrella somewhere, playing something, running, riding a unicycle. It's just nice. And as I said, this is the same model that can do all of this stuff. And these are samples, they're just samples, they're not cherry-picked; however, they are re-ranked, remember that. So it can do hybrids of images, hybrids of different animals, a giraffe and a turtle, and so on. And they do sort of explore the model a little bit more, where, as I said, they give this cat on the top and they say they want the exact same cat on the top as a photo, colored blue, on the bottom. So you can see that doesn't always work, right? But in a surprising number of cases it actually does work; sometimes it's just like a blue pot. So you can see it's not the finished model yet. However, it is a step in a direction that shows us this is definitely possible. It can even do some of these progressive matrices, where it fills in the bottom right. However, they do mention it's very, very finicky with respect to, for example, whether you invert the colors. So if you look at the bottom right of any of these things: if I invert the colors, the output sort of changes, and it's often also not right. However, sometimes it is actually right, which is crazy, because for some of these you have to do some crazy sort of inference; these are the things we usually do in IQ tests. So, I don't know, the debate about what is intelligence goes on. They say it has geographic knowledge. However, I'm not sure it actually has geographic knowledge, as it might just associate words with particular images. Like, they say, OK, this is a photo of food of China. I'm just not sure this classifies as geographic knowledge. Then there's also this temporal knowledge: a photo of a phone from the 20s, OK, and then the different time periods, 60s, 70s, 80s, future and so on, like distant future. Like, wow, these phones. Usually this stuff is pretty OK, right? But it's not really temporal knowledge; it just associates a bunch of tokens with some sort of style of computer: today's computer, the future computer, the distant-future computer. Please, no. Please, please, please don't give me that. I don't want that. I love the action movie poster, because the style is correct, but it just says action movie in the future. Yeah, it does get some of the styles; it just literally writes action movie, like a nagging child: I'm hungry. Hi Hungry, I'm Dad. All right. They also have a summary right here, and they do show what it means that they use this CLIP model to re-rank. So on the left here, you can see just eight samples straight from the model, and they're not too bad. But you increase the quality by sampling more and then taking the best eight, as you go to the right here, according to the re-ranker.
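Since this re-ranking trick is doing a lot of work for the sample quality here, a minimal sketch of the idea, with random-projection stand-ins in place of CLIP's actual text and image encoders so the sketch runs as-is; the real pipeline would call the trained CLIP model instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy stand-ins for CLIP's text and image towers; they just make
# the sketch runnable and carry no real semantics.
def embed_text(caption):
    return rng.normal(size=512)

def embed_image(image):
    return rng.normal(size=512)

def rerank(caption, images, top_k=8):
    """Keep the top_k images whose embedding best matches the caption."""
    t = embed_text(caption)
    t = t / np.linalg.norm(t)
    scores = []
    for im in images:
        v = embed_image(im)
        scores.append(float(v @ t) / np.linalg.norm(v))   # cosine similarity
    best = np.argsort(scores)[::-1][:top_k]               # best matches first
    return [images[i] for i in best]

# Toy call: the "images" here are just placeholders for 512 model samples.
best8 = rerank("a torus made of denim", images=list(range(512)))
```

So the generator proposes, 512 samples in their case, and the contrastive model disposes, keeping only the top eight; worth keeping in mind when you judge the gallery images.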
So I'm going to guess they decided on 512 because that already gives you pretty diverse, pretty good, pretty high-quality outputs right here. All right. So just lastly, a shout-out to the authors right here: the primary authors are Aditya Ramesh, Mikhail Pavlov, Gabriel Goh and Scott Gray, with, I guess, the secondary supporting authors and much of OpenAI behind them. I don't know exactly how they divided the work. I would encourage you to go look at the model; it's pretty cool. Try out all these inputs. As I said, the inputs are restricted simply because they don't trust you with their model yet. In the real model, you can input any piece of text that you want and you will get out an image. The fact that you have to select the stuff here is simply because that's the stuff they tried, that's the stuff their PR department has signed off on. And so you get to see that. Because, as I said, this is at the same time a PR dilemma when you release a generative model: they discuss this a little bit in the blog post, it could produce very problematic images. With a classifier, that risk is not as pronounced; a classifier is also sometimes dangerous, but not as dangerous as a generative model. That's the first thing. And the second thing is, there is definitely money to be made in this. So, you know, we'll see whether or not we get the full model. All right. With that, that was it for me. I hope you enjoyed the blog post, I hope you enjoyed the video. If you did, let me know. Share it out, subscribe if you haven't, and bye bye.
[ { "start": 0, "end": 9, "text": " A sphere made of Swiss cheese. A sphere with a texture of Swiss cheese." }, { "start": 9, "end": 17.76, "text": " And there you have it. Beautiful, very appetizing Swiss cheese balls. My Swiss heart had just" }, { "start": 17.76, "end": 25.04, "text": " skipped a beat out of this monstrosity. What's even cooler than a sphere made of Swiss cheese" }, { "start": 25.04, "end": 35.84, "text": " is a torus made of denim. These images are so cool. A torus made of denim. And the point here is" }, { "start": 35.84, "end": 43.120000000000005, "text": " that these images aren't photoshopped or sort of human created. They are AI generated. And they are" }, { "start": 43.120000000000005, "end": 51.120000000000005, "text": " generated by this new model that OpenAI released a blog post about. It's called Dali. And it can," }, { "start": 51.12, "end": 57.28, "text": " what it can do is it can take a piece of text such as the one on top here. The fact that I can select" }, { "start": 57.28, "end": 62.239999999999995, "text": " is simply the fact that they don't give you access to the model. They just give you access" }, { "start": 62.239999999999995, "end": 67.92, "text": " of a bunch of things that they've tried. But the model can take any piece of text and it can output" }, { "start": 67.92, "end": 78.72, "text": " a picture that matches that text. So here you got a torus made of toothpaste. And the quality" }, { "start": 78.72, "end": 85.6, "text": " of these images is super astounding. And what's even more astounding is sort of the range of" }, { "start": 85.6, "end": 94.4, "text": " capabilities that this model has. So the model can do various things such as so in here the input is" }, { "start": 94.4, "end": 100.64, "text": " an illustration of a baby daikon radish in a tutu walking a dog. And you see an illustration of a" }, { "start": 100.64, "end": 108.4, "text": " baby daikon radish in a tutu walking a dog. The outputs are just adorable. These are generated" }, { "start": 108.4, "end": 115.2, "text": " by the AI. The same for an armchair in the shape of an avocado, a storefront that has the word" }, { "start": 115.2, "end": 121.76, "text": " OpenAI written on it. I've tried reverse image searching some of these images and I could not" }, { "start": 121.76, "end": 130.16, "text": " find them on the internet. So it's definitely not just a model sort of outputting an image it found" }, { "start": 130.16, "end": 136.32, "text": " somewhere. These are actually generated images. And the astounding thing is that it's the same" }, { "start": 136.32, "end": 141.6, "text": " model that outputs all of these different images. It's not one model here trained on illustrations" }, { "start": 141.6, "end": 149.04, "text": " and one model trained on chairs. It's a single model that can take in a piece of text and" }, { "start": 149.04, "end": 157.35999999999999, "text": " optionally part of an image or none of an image and it will output the image either it continues" }, { "start": 157.35999999999999, "end": 163.84, "text": " the image you already give part of or it just generates the image by itself. So the model is" }, { "start": 163.84, "end": 172.72, "text": " called Dali. And this is just a blog post for now by OpenAI. They say they'll follow this up with a" }, { "start": 172.72, "end": 180.48000000000002, "text": " paper. And if the paper brings substantially new things, I think I'll make a video on it. 
But today" }, { "start": 180.48000000000002, "end": 185.76, "text": " we're just going to look at what this model can do, how it works, how it probably works. And we" }, { "start": 185.76, "end": 192.16, "text": " can take some guesses of what we can read in the paper once it's out. In fact, OpenAI has brought" }, { "start": 192.16, "end": 198.72, "text": " out two new models along with this Dali model. They've also released a blog post and a paper" }, { "start": 198.72, "end": 204.96, "text": " about a model called Clip, which is more of a sort of a classifier, not exactly a classifier." }, { "start": 204.96, "end": 210.88, "text": " It's sort of a it connects text and images in a different way. It's not a generative model." }, { "start": 211.76, "end": 217.12, "text": " And we're going to look at that in a different video. But you can see the clear trend right here" }, { "start": 217.12, "end": 225.12, "text": " is that OpenAI is looking into connecting text and images. So they say Dali, which is an, this is a," }, { "start": 225.12, "end": 232.88, "text": " and I think an homage to Salvador Dali and mixed with the character Wally. So they say it's a 12" }, { "start": 232.88, "end": 240.72, "text": " billion parameter version of GPT-3. So you know, it's more like, it's more like not GPT-3. That was" }, { "start": 240.72, "end": 247.52, "text": " more than 10 times larger, but it's a 12 billion parameter version of GPT-3 trained to generate" }, { "start": 247.52, "end": 254.4, "text": " images from text descriptions using a data set of text image pairs. We found that it has diverse" }, { "start": 254.4, "end": 259.2, "text": " set of capabilities, including creating anthropomorphized versions of animals and" }, { "start": 259.2, "end": 265.44, "text": " objects, combining unrelated concepts in plausible ways, rendering text and applying transformations" }, { "start": 265.44, "end": 272.71999999999997, "text": " to existing images. So a lot of the things they don't tell us here, especially the data set," }, { "start": 272.71999999999997, "end": 277.84, "text": " like how did they get the data set? Nobody knows. They don't say this. They simply say it's a data" }, { "start": 277.84, "end": 285.04, "text": " set of text image pairs. And they sort of allude to the fact that they have large pieces of data," }, { "start": 285.04, "end": 292, "text": " especially in the clip. Then they allude to the fact that you can just find data that connects" }, { "start": 292, "end": 298.08, "text": " text and images on the internet. And it's true if you if you search, if you scrape the correct" }, { "start": 298.08, "end": 304.16, "text": " websites, and do it in sort of a smart fashion, you can find a lot of data where there's an image" }, { "start": 304.16, "end": 312.16, "text": " and there's a piece of text describing that image. And we have to assume that they sort of scrape" }, { "start": 312.16, "end": 317.44, "text": " the internet for something like this. I don't think they have a lot of human explicitly human" }, { "start": 317.44, "end": 324.71999999999997, "text": " labeled data for this type of thing. So we'll just assume that they have like a huge data set." }, { "start": 324.71999999999997, "end": 331.44, "text": " And of course, they train a huge model on it, a 12 billion parameter version of GPT three GPT three" }, { "start": 331.44, "end": 340, "text": " is the famous model, the famous text generation model by open AI. 
And you can sort of see the" }, { "start": 340, "end": 347.92, "text": " same things right here. So GPT three, my hypothesis was that it sort of smartly mixes the training" }, { "start": 347.92, "end": 354.4, "text": " data rather than remember the training data, it sort of remembers it and then smartly interpolates" }, { "start": 354.4, "end": 360.8, "text": " between it. And I think you can sort of see the same kind of things right here in that these are" }, { "start": 360.8, "end": 366.08, "text": " all definitely pictures that you could imagine in the real world. But they have, you know, they have," }, { "start": 366.08, "end": 372.08, "text": " for example, they're changed to open AI in here, there are surely chairs that sort of look like" }, { "start": 372.08, "end": 377.28, "text": " this. So it just kind of mixes a chair with an avocado in a plausible way. I'm not saying this to" }, { "start": 377.28, "end": 383.28, "text": " denigrate the model, I'm saying that, I mean, this is seriously cool, the fact that it can do that." }, { "start": 384.32, "end": 392.15999999999997, "text": " So they say like GPT three, Dulli is a transformer language model. Now, this is very," }, { "start": 392.16, "end": 398.8, "text": " very interesting, the fact that it's a transformer language model, it receives both the text and the" }, { "start": 398.8, "end": 407.20000000000005, "text": " image as a single stream of data containing up to 1000 and 1280 tokens, and it's trained using maximum" }, { "start": 407.20000000000005, "end": 413.84000000000003, "text": " likelihood to generate all of the tokens one after another. Okay, this training procedure allows Dulli" }, { "start": 413.84000000000003, "end": 419.76000000000005, "text": " not only to generate images from scratch, but also to regenerate any rectangular region of an existing" }, { "start": 419.76, "end": 425.28, "text": " image that extends to the bottom right corner in a way that is consistent with the text prompt." }, { "start": 426.96, "end": 433.2, "text": " And they say a little bit more here on the right. And they also say a little bit more down on the" }, { "start": 433.2, "end": 440.32, "text": " bottom. So I'm going to try to take a stab of explaining how this model works with the full" }, { "start": 440.32, "end": 446.32, "text": " knowledge that I might be wrong once the paper comes out. And for that, we have to go back a" }, { "start": 446.32, "end": 454.4, "text": " little bit and look at the models it draws from, namely the VQ VAE. So the vector quantized VAE" }, { "start": 454.4, "end": 464.64, "text": " literature, so VQ VAE will consider this to be sort of the inspiration of or one of the necessary" }, { "start": 464.64, "end": 475.2, "text": " ingredients of this model. So if we combine VQ VAE with something like GPT three, we get Dulli." }, { "start": 475.2, "end": 483.59999999999997, "text": " That's my that's my hypothesis for today. Why combining these two models? So GPT three is" }, { "start": 483.59999999999997, "end": 491.03999999999996, "text": " extremely good at modeling language, right? So if I have a piece of text, let's go down here for a" }, { "start": 491.03999999999996, "end": 504.48, "text": " minute. And let's say I have a cat set on the mat. A transformer will be very good at understanding" }, { "start": 504.48, "end": 511.04, "text": " this sentence and being able to complete it. 
So if I cross out this and ask a transformer to continue" }, { "start": 511.04, "end": 516.24, "text": " the sentence, it will be able to continue the sentence just fine if it is if it is trained" }, { "start": 516.24, "end": 523.52, "text": " well. And that's exactly how GPT three works. Now imagine that I don't have a piece of text," }, { "start": 523.52, "end": 532.08, "text": " but I have some sort of a description of an image, right? And let's say I have, I have a box." }, { "start": 532.08, "end": 542.1600000000001, "text": " Here is a box. And the box which is going to be a VQ VAE can take in a description of an image in" }, { "start": 542.1600000000001, "end": 547.76, "text": " words, but not exactly words that humans understand. But let's say there is an image language," }, { "start": 547.76, "end": 555.2, "text": " sort of like a programming language, okay. And you input symbols into the image, let's say," }, { "start": 555.2, "end": 564.96, "text": " it's a bit like Egyptian hieroglyphs, maybe. So here is the here is the this, this hieroglyph thing," }, { "start": 564.96, "end": 572.48, "text": " and then there is the sun, the sun thing. And then there is the tree, the word for tree, like the" }, { "start": 572.48, "end": 580.32, "text": " hieroglyph for tree. And I input that here. And the output will be an image where I don't know," }, { "start": 580.32, "end": 586.6400000000001, "text": " there the sun is shining. Yes, I draw some like a child, it has a little smile, okay, deal with it." }, { "start": 587.44, "end": 592.5600000000001, "text": " And there is a tree, maybe not exactly the tree from the hieroglyphs, but like some sort of some" }, { "start": 592.5600000000001, "end": 598.8000000000001, "text": " sort of tree that fits. And then there is some human in the scene, maybe the human sits here," }, { "start": 598.8, "end": 611.4399999999999, "text": " the human sits at the tree, you know, relaxing, chilling. Okay, so this, now the image on the" }, { "start": 611.4399999999999, "end": 618.24, "text": " right is consistent of pixels, right. And modeling pixels with a transformer is very, very hard," }, { "start": 618.24, "end": 626.9599999999999, "text": " because in the case of our model right here, it's something like 256 by 256 pixels. That would mean" }, { "start": 626.96, "end": 634.88, "text": " the transformer would have to generate 256 times 256, which is like two to the two to the 16. This" }, { "start": 634.88, "end": 642.32, "text": " is just too much for a transformer to model the pixels individually. So there are multiple ways" }, { "start": 642.32, "end": 648.24, "text": " around this, for example, modeling little regions right here, which are not really satisfactory." }, { "start": 649.76, "end": 656, "text": " So what this model does is it sort of it doesn't try to model the picture as such, it tries" }, { "start": 656, "end": 665.52, "text": " to predict to predict these hieroglyphs right here, it tries to predict sort of a language" }, { "start": 665.52, "end": 672.64, "text": " that this box can understand and produce a picture from, okay, so its task is going to be given some" }, { "start": 672.64, "end": 685.4399999999999, "text": " sort of a given some sort of a text prefix. So a human in a sunny field, sunny day or on a sunny day," }, { "start": 687.12, "end": 697.52, "text": " chilling under a tree. So this piece of text followed. 
So the model is trained to take this" }, { "start": 697.52, "end": 704.8, "text": " piece of text and output this sequence of hieroglyphs. Okay, so this sequence of hieroglyphs" }, { "start": 704.8, "end": 712.96, "text": " outputting from this piece of text. And that's something a transformer can do if you have a" }, { "start": 712.96, "end": 719.52, "text": " vocabulary right here. So if you have a fixed list of hieroglyphs that you could use, right," }, { "start": 719.52, "end": 728.88, "text": " so in there there is the human is in there. That's a worse Egyptian. And then the pyramid" }, { "start": 728.88, "end": 733.6, "text": " is in here as well, some that you need, some that you don't need. So if there is a vocabulary," }, { "start": 733.6, "end": 738.48, "text": " the transformer is going to be pretty, pretty good at generating this thing. So you need" }, { "start": 739.76, "end": 747.04, "text": " two parts. The first part right here is a transformer language model, a GPT-3 thing that can" }, { "start": 747.04, "end": 753.4399999999999, "text": " input a sequence of text, and it can output a sequence of text, which is just in a different" }, { "start": 753.4399999999999, "end": 759.36, "text": " vocabulary, namely this picture vocabulary. And then in the step two, you need a box that takes" }, { "start": 759.36, "end": 764.3199999999999, "text": " in this picture vocabulary and actually produces an images and image right here. So as I already" }, { "start": 764.3199999999999, "end": 772.64, "text": " said, this part is taken over by GPT, GPT-3, like the custom GPT model they built for this." }, { "start": 772.64, "end": 781.1999999999999, "text": " And this part is taken over by something like a VQVAE, the generator part of it. So what is" }, { "start": 781.1999999999999, "end": 791.1999999999999, "text": " a VQVAE? A VQVAE is, and you will be able to see that. So the box that we're going to need is" }, { "start": 792.64, "end": 799.76, "text": " this box right here, from here up to where the image is. And this thing right here is going to" }, { "start": 799.76, "end": 805.76, "text": " be that vocabulary. So what does a VQVAE do? It takes the image here on the left, you can see" }, { "start": 805.76, "end": 811.76, "text": " that here's the encoder, it takes the image, it encodes it into a latent space. Now what a" }, { "start": 812.64, "end": 819.4399999999999, "text": " VAE would do, or what an autoencoder would do, is it would encode the image into a latent space," }, { "start": 819.4399999999999, "end": 826.4, "text": " and then it would decode it again into and try to reproduce the same image. And then you assume" }, { "start": 826.4, "end": 832.8, "text": " that whatever is in the middle right here is a sensible representation, a latent representation" }, { "start": 832.8, "end": 837.84, "text": " of that image, right? If you can train this model, you're going to get some sort of a" }, { "start": 837.84, "end": 843.76, "text": " representation in the middle that describes the image, otherwise you couldn't reproduce the image." }, { "start": 844.64, "end": 851.04, "text": " And there have been many models built on this concept. Now this model right here, it turns out" }, { "start": 851.04, "end": 858.3199999999999, "text": " that the classic autoencoder doesn't work too well. But this model works quite formidably. So" }, { "start": 858.3199999999999, "end": 864.16, "text": " what you're going to have is you're going to have this vocabulary right here. 
It's also called a" }, { "start": 864.16, "end": 870.56, "text": " codebook. Let's call it a codebook. So the codebook is also the vocabulary." }, { "start": 870.56, "end": 882.7199999999999, "text": " So what you're saying is that you can't just output any latent encoding. So the encoder outputs a" }, { "start": 882.7199999999999, "end": 889.4399999999999, "text": " continuous vector. But what you're saying is it has to be one of those. Like there are a number" }, { "start": 889.4399999999999, "end": 897.1999999999999, "text": " of vectors that you have at your disposal, Mr. or Miss Encoder or Mrs. Encoder. There is a number of" }, { "start": 897.2, "end": 903.36, "text": " vectors that you have at your disposal. You can only choose those. You can't choose any vector" }, { "start": 903.36, "end": 908.8000000000001, "text": " that you want, right? So in your latent space, you can't just choose any latent space. There's this," }, { "start": 908.8000000000001, "end": 912.88, "text": " there's this, there's this, there's this, there's this, there's this, you have to choose one of them." }, { "start": 912.88, "end": 919.6800000000001, "text": " And if you choose something in between, which you inevitably will because all of our" }, { "start": 919.6800000000001, "end": 925.36, "text": " neural networks output continuous values, we're just going to have to use the same codebook." }, { "start": 925.36, "end": 931.6, "text": " And in this case, we're just going to clamp you, we're just going to find the nearest one in our" }, { "start": 931.6, "end": 938, "text": " codebook. And we'll just say, well, we just make it such that it's as if you had output that" }, { "start": 938, "end": 944.72, "text": " one. So the encoder can only hit one of those codebook vectors. And then you feed these codebook" }, { "start": 944.72, "end": 952, "text": " vectors to the decoder. And the decoder just decodes from these codebook vectors. Okay. And" }, { "start": 952, "end": 957.28, "text": " this might sound like a strange restriction. But then we can" }, { "start": 957.84, "end": 965.44, "text": " simply run the decoder in the usual way. And this turns out" }, { "start": 965.44, "end": 972.56, "text": " to be much, much, much better than simply doing the autoencoder thing" }, { "start": 972.56, "end": 980, "text": " continuously. So imagine that this codebook vocabulary is sort of like a vocabulary of" }, { "start": 980, "end": 982, "text": " This is a cat." }, { "start": 982, "end": 987, "text": " And you don't just encode this into one of these words." }, { "start": 987, "end": 992, "text": " What you do is you split the image up into a grid." }, { "start": 992, "end": 994, "text": " It's not as fine as pixels." }, { "start": 994, "end": 996, "text": " It's fairly, it's okay large." }, { "start": 996, "end": 1002, "text": " So in their experiments, they're going to use something like 32 by 32 grids," }, { "start": 1002, "end": 1005, "text": " which is also what Dali uses." }, { "start": 1005, "end": 1012, "text": " Every image is described by 1024 tokens. That's 32 by 32 tokens."
}, { "start": 1012, "end": 1018, "text": " And then you're going to encode, you're going to make an encoder such that" }, { "start": 1018, "end": 1022, "text": " when this grid is through the encoder," }, { "start": 1022, "end": 1027, "text": " this thing here corresponds to one of the code vectors" }, { "start": 1027, "end": 1030, "text": " and this thing here corresponds to another one." }, { "start": 1030, "end": 1034, "text": " So you have your big vocabulary right here." }, { "start": 1034, "end": 1039, "text": " And this is the red vector, this is the blue vector," }, { "start": 1039, "end": 1041, "text": " this is the green vector," }, { "start": 1041, "end": 1051, "text": " and you're going to just describe the image regions with these codebook vectors, like such." }, { "start": 1051, "end": 1056, "text": " Now, the fact that you have a lot of these vectors," }, { "start": 1056, "end": 1061, "text": " you have in fact, you have 8092 vectors in Dolly." }, { "start": 1061, "end": 1067, "text": " And the image only consists of 1024 tokens." }, { "start": 1067, "end": 1073, "text": " So, you know, it's conceivable, like, it's not like here where you have to reuse the same token over and over again." }, { "start": 1073, "end": 1076, "text": " But one of these tokens could, for example, be sky." }, { "start": 1076, "end": 1080, "text": " So maybe this is the thing that sort of describes sky." }, { "start": 1080, "end": 1085, "text": " So what you'll have is like this thing and this thing and this thing and this thing should be approximately sky." }, { "start": 1085, "end": 1092, "text": " Right. And then maybe the red one is is, I don't know, animal." }, { "start": 1092, "end": 1095, "text": " And the blue one is vegetation." }, { "start": 1095, "end": 1098, "text": " And the green one is some something else." }, { "start": 1098, "end": 1103, "text": " So you can see if you feed this to a model that has to make a picture from it," }, { "start": 1103, "end": 1111, "text": " it can just look at this and it's sort of like a description, a low resolution description of an image is not exactly a down sampled image." }, { "start": 1111, "end": 1117, "text": " It's a it's a description because these things here contain a lot of information by themselves." }, { "start": 1117, "end": 1122, "text": " OK, it's just that you can't choose any vector in latent space." }, { "start": 1122, "end": 1126, "text": " You have to choose one of those vectors in the codebook." }, { "start": 1126, "end": 1129, "text": " So that's a vector quantized VAE." }, { "start": 1129, "end": 1131, "text": " And they train everything at the same time." }, { "start": 1131, "end": 1140, "text": " So they train the encoder and decoder with this straight through estimator because this nearest neighbor computation isn't exactly differentiable." }, { "start": 1140, "end": 1145, "text": " They also train the codebook to match the outputs of the encoder." }, { "start": 1145, "end": 1152, "text": " So you can train that or you can just take the the exponential average of the encoder outputs." }, { "start": 1152, "end": 1158, "text": " And that's the VQVAE, which is developed more in VQVAE 2." }, { "start": 1158, "end": 1162, "text": " So this is VQVAE 2. I've linked the papers." }, { "start": 1162, "end": 1165, "text": " VQVAE." }, { "start": 1165, "end": 1172, "text": " What's the writing of 3 to the version two of it does the same thing, but in multi scale." 
}, { "start": 1172, "end": 1180, "text": " So here you can see that in the encoder, you you you take the image and you put it at multiple resolutions." }, { "start": 1180, "end": 1182, "text": " So this is large resolution." }, { "start": 1182, "end": 1184, "text": " This is low resolution." }, { "start": 1184, "end": 1192, "text": " Then you use the vector quantization to encode this into this grid and encode this into the codebook vectors." }, { "start": 1192, "end": 1195, "text": " So again, here maybe you have red, red, red." }, { "start": 1195, "end": 1198, "text": " This is red and this is the green one and so on." }, { "start": 1198, "end": 1204, "text": " So you each square has to choose one of these eight thousand vectors to represent itself." }, { "start": 1204, "end": 1215, "text": " And then you do this sort of hierarchical thing where you use the deep a decoder on this level to produce a slightly higher resolution image." }, { "start": 1215, "end": 1222, "text": " But then you quantize again and you use a decoder at a next level to produce an even higher resolution image." }, { "start": 1222, "end": 1231, "text": " So you can see that this hierarchical models, usually these hierarchical models, if you want good high resolution images, you sort of need them." }, { "start": 1231, "end": 1238, "text": " So you can see that the the top decoder here outputs something quite blocky." }, { "start": 1238, "end": 1246, "text": " And then every every additional one adds a sort of details to the image." }, { "start": 1246, "end": 1249, "text": " It's pretty impressive as such." }, { "start": 1249, "end": 1254, "text": " And you can see that the training right here of the VQVA." }, { "start": 1254, "end": 1258, "text": " These are these are papers from last year or the years before." }, { "start": 1258, "end": 1260, "text": " So this has been known." }, { "start": 1260, "end": 1270, "text": " What Dali does is from what I can gather from the blog post right here." }, { "start": 1270, "end": 1289, "text": " The images are preprocessed to 256 to 256 during training, similar to VQVA each image is compressed to a 32 by 32 grid of discrete latent codes using a discrete VAE that we pre trained using a continuous relaxation." }, { "start": 1289, "end": 1295, "text": " OK, there's a lot of there's a lot of stuff here." }, { "start": 1295, "end": 1300, "text": " So the VAE is pre trained." }, { "start": 1300, "end": 1311, "text": " And they're saying they're saying also down here that their model uses maximum likelihood to to generate all of the tokens one after another." }, { "start": 1311, "end": 1313, "text": " It's decoder only and so on." }, { "start": 1313, "end": 1318, "text": " So probably this whole pipeline here is pre trained." }, { "start": 1318, "end": 1323, "text": " They pre train a VAE a discrete VAE." }, { "start": 1323, "end": 1330, "text": " And then they simply the Dali model simply has to learn how to produce the tokens." }, { "start": 1330, "end": 1333, "text": " Right. The Dali model simply has to learn how to produce these hieroglyphs." }, { "start": 1333, "end": 1335, "text": " And the box is fixed." }, { "start": 1335, "end": 1337, "text": " The box is not changed." }, { "start": 1337, "end": 1341, "text": " It's possible that they also train the decoder here." }, { "start": 1341, "end": 1344, "text": " So the decoder." }, { "start": 1344, "end": 1348, "text": " But I don't know, I can't tell this from the blog post." 
}, { "start": 1348, "end": 1356, "text": " What's certainly is that they what's certain is that they don't train the encoder." }, { "start": 1356, "end": 1362, "text": " So what you would do in a single step of Dali is you would have your text right here." }, { "start": 1362, "end": 1365, "text": " Blah, blah, blah." }, { "start": 1365, "end": 1367, "text": " And you would have a partial image." }, { "start": 1367, "end": 1373, "text": " OK, you would input this text and the partial image to Dali." }, { "start": 1373, "end": 1379, "text": " The partial image is any image where you've blacked out the bottom right." }, { "start": 1379, "end": 1382, "text": " And they do the bottom right simply." }, { "start": 1382, "end": 1385, "text": " It's the same as you do left to right by text." }, { "start": 1385, "end": 1388, "text": " So you do sort of top left to bottom right." }, { "start": 1388, "end": 1392, "text": " And yeah, it's good because you can always flip an image." }, { "start": 1392, "end": 1394, "text": " Maybe not actually." }, { "start": 1394, "end": 1400, "text": " But it's just a bias that you have to provide the model with in order to do autoregressive training." }, { "start": 1400, "end": 1401, "text": " Right." }, { "start": 1401, "end": 1405, "text": " So here is the image of that cat." }, { "start": 1405, "end": 1407, "text": " Right." }, { "start": 1407, "end": 1409, "text": " I." }, { "start": 1409, "end": 1411, "text": " And you black out the bottom right." }, { "start": 1411, "end": 1416, "text": " You can black out the whole image if you want the model to produce images unconditionally." }, { "start": 1416, "end": 1417, "text": " All right." }, { "start": 1417, "end": 1421, "text": " So you black all of this out." }, { "start": 1421, "end": 1423, "text": " Cool." }, { "start": 1423, "end": 1430, "text": " So now what you do is these here, they are already." }, { "start": 1430, "end": 1431, "text": " They are already words." }, { "start": 1431, "end": 1432, "text": " Right." }, { "start": 1432, "end": 1437, "text": " You tokenize those token, token, token, and you go into your vocabulary of text." }, { "start": 1437, "end": 1438, "text": " Right." }, { "start": 1438, "end": 1440, "text": " So there is a vocabulary of text somewhere." }, { "start": 1440, "end": 1441, "text": " There's blah." }, { "start": 1441, "end": 1445, "text": " And you encode all of these using that vocabulary." }, { "start": 1445, "end": 1447, "text": " So this is maybe word 34." }, { "start": 1447, "end": 1452, "text": " So this is word 34, 34, 34." }, { "start": 1452, "end": 1454, "text": " You go to your image." }, { "start": 1454, "end": 1461, "text": " You rasterize this according to your definition." }, { "start": 1461, "end": 1462, "text": " OK." }, { "start": 1462, "end": 1467, "text": " And then you go and run this through this encoder that you trained." }, { "start": 1467, "end": 1475, "text": " So you run it through the box and the box will tell you for each of this grid outputs," }, { "start": 1475, "end": 1482, "text": " the box will tell you, well, in my in my vocabulary of image pieces," }, { "start": 1482, "end": 1485, "text": " this here is number two." }, { "start": 1485, "end": 1487, "text": " This here is number four." }, { "start": 1487, "end": 1488, "text": " This is two again." }, { "start": 1488, "end": 1490, "text": " This is 35 and so on." }, { "start": 1490, "end": 1496, "text": " So you do this left to right, top to bottom, and then you put it right here." 
}, { "start": 1496, "end": 1497, "text": " OK." }, { "start": 1497, "end": 1504, "text": " So this is followed by an image of two, four, two, 35." }, { "start": 1504, "end": 1509, "text": " And what you ask the model to do is simply to predict from all of this." }, { "start": 1509, "end": 1513, "text": " And the model knows that this is text and this is images." }, { "start": 1513, "end": 1519, "text": " From all of this, predict the next token, which would be this token right here." }, { "start": 1519, "end": 1523, "text": " So you want to predict this one right here." }, { "start": 1523, "end": 1525, "text": " What is it?" }, { "start": 1525, "end": 1526, "text": " And that's how you train the model." }, { "start": 1526, "end": 1527, "text": " Right." }, { "start": 1527, "end": 1532, "text": " And once it gets that, you can ask it to predict the next one and so on." }, { "start": 1532, "end": 1537, "text": " And in this way, you can let it generate an entire image at inference time." }, { "start": 1537, "end": 1539, "text": " And you know, you can train this." }, { "start": 1539, "end": 1542, "text": " They say all these tokens are generated autoregressively." }, { "start": 1542, "end": 1547, "text": " Now, in my understanding, this is all the model does, because once you have that token," }, { "start": 1547, "end": 1554, "text": " so if the model says this is number seven, you go back to your box and you say, please." }, { "start": 1554, "end": 1555, "text": " It's a different box." }, { "start": 1555, "end": 1557, "text": " This is the encoder." }, { "start": 1557, "end": 1560, "text": " This is the encoder of the VQVA." }, { "start": 1560, "end": 1564, "text": " And now you go to your decoder that you've also pre-trained." }, { "start": 1564, "end": 1565, "text": " Right." }, { "start": 1565, "end": 1568, "text": " So this is a different box." }, { "start": 1568, "end": 1572, "text": " And you ask it, I have this image, right?" }, { "start": 1572, "end": 1576, "text": " I have two, four, two, 35 and seven." }, { "start": 1576, "end": 1580, "text": " Please generate an image for me for that." }, { "start": 1580, "end": 1584, "text": " Or maybe you want to wait until you have the complete image." }, { "start": 1584, "end": 1585, "text": " Right." }, { "start": 1585, "end": 1589, "text": " So you have the complete image and you give this to your decoder." }, { "start": 1589, "end": 1591, "text": " These are now that these hieroglyphs, right?" }, { "start": 1591, "end": 1594, "text": " So you have the box and the box produces an image." }, { "start": 1594, "end": 1602, "text": " And the box says, well, OK, this cat here probably reproduces the ears fairly well," }, { "start": 1602, "end": 1605, "text": " because you can describe them sort of exactly." }, { "start": 1605, "end": 1608, "text": " Maybe you also want to copy that over or something." }, { "start": 1608, "end": 1610, "text": " But then it says, well, it's a cat." }, { "start": 1610, "end": 1614, "text": " So I'm going to, you know, maybe this." }, { "start": 1614, "end": 1620, "text": " If the model has done a good job, there should be some sort of a cat." }, { "start": 1620, "end": 1621, "text": " Right." }, { "start": 1621, "end": 1625, "text": " And the model, you know, maybe in these hieroglyphs, it's even described how the cat looks like." }, { "start": 1625, "end": 1629, "text": " The cat looks straight ahead as whiskers, as eyes and so on." }, { "start": 1629, "end": 1630, "text": " OK." 
}, { "start": 1630, "end": 1641, "text": " So I'm going to guess that the part on top that is trained and the part on bottom is pre-trained." }, { "start": 1641, "end": 1646, "text": " With the option that the decoder part could also be trained at training time." }, { "start": 1646, "end": 1651, "text": " At the same time, they train this language model on top." }, { "start": 1651, "end": 1655, "text": " So they make some further inferences right here." }, { "start": 1655, "end": 1664, "text": " They say each image is compressed in latent codes using a discrete V that we pre-trained using a continuous relaxation." }, { "start": 1664, "end": 1669, "text": " We found that training using the relaxation obviates the need for an explicit codebook," }, { "start": 1669, "end": 1675, "text": " EMA loss or tricks like dead code revival and can scale up to large vocabulary sizes." }, { "start": 1675, "end": 1680, "text": " And this is the part where I am a bit confused." }, { "start": 1680, "end": 1685, "text": " So clearly they say they have a vocabulary individual domain." }, { "start": 1685, "end": 1688, "text": " OK, there are 8192." }, { "start": 1688, "end": 1696, "text": " Well, I don't know my powers of two 8192 different words in the codebook." }, { "start": 1696, "end": 1698, "text": " So there must be a codebook." }, { "start": 1698, "end": 1704, "text": " But they say there this obviates the need for an explicit codebook." }, { "start": 1704, "end": 1708, "text": " So I don't really know what to make of that." }, { "start": 1708, "end": 1712, "text": " I can tell you what a continuous relaxation might look like." }, { "start": 1712, "end": 1718, "text": " So this is from a different paper that they linked of the concrete random variables." }, { "start": 1718, "end": 1725, "text": " So if you have an operation such as this, like a discrete random variable, you need to take an argmax of it." }, { "start": 1725, "end": 1729, "text": " What you'll have is you'll have some sort of logits, right?" }, { "start": 1729, "end": 1741, "text": " There may be like this and you take the argmax of it, which means that you put it into a distribution where it's just one value." }, { "start": 1741, "end": 1751, "text": " And this is sort of the same operation as we do in the VQVAE, where we assign each each output of the encoder to the nearest codebook vector." }, { "start": 1751, "end": 1753, "text": " We say you can only have one of the codebook vectors." }, { "start": 1753, "end": 1755, "text": " That's it. Right." }, { "start": 1755, "end": 1771, "text": " Now, what you want to do when you relax this is you want to say, well, instead of that, what you could do is you could just kind of take that codebook vector a lot, but also, you know, take a little bit of the others." }, { "start": 1771, "end": 1777, "text": " So more than doing a hard assignment to a codebook vector, right?" }, { "start": 1777, "end": 1784, "text": " So here would be the output of your encoder and you hard assign it to the nearest neighbor." }, { "start": 1784, "end": 1789, "text": " You want to say, well, I'm going to soft assign it to all the ones." }, { "start": 1789, "end": 1795, "text": " It's sort of like the difference between k nearest neighbor and a Gaussian mixture model, as I understand." }, { "start": 1795, "end": 1799, "text": " Not what they do here, but it's analogous to that." }, { "start": 1799, "end": 1803, "text": " And with that, they don't need an explicit codebook." 
}, { "start": 1803, "end": 1805, "text": " And I don't know what that means." }, { "start": 1805, "end": 1810, "text": " What I can imagine is that they don't actually train the codebook vectors." }, { "start": 1810, "end": 1820, "text": " Maybe they just quantized to some prefixed schema, or I just don't understand what they do." }, { "start": 1820, "end": 1823, "text": " Yeah, here is an illustration of these discrete random variables." }, { "start": 1823, "end": 1833, "text": " So you want to get to a point when when you sample the variable, as you drop your temperature, it more and more approaches this fixed sampling." }, { "start": 1833, "end": 1840, "text": " Like you can be either here or here or here with the sort of masses that are indicated by the size of the circle." }, { "start": 1840, "end": 1844, "text": " But as you increase the temperature, you go more to a mixture." }, { "start": 1844, "end": 1850, "text": " So yeah, you can be at the corner, but you can also be kind of in this region or in this region or in this region." }, { "start": 1850, "end": 1858, "text": " As you increase the temperature, you can see the the distribution becomes more of a mixture distribution." }, { "start": 1858, "end": 1869, "text": " And the mixture distribution, any mixture distribution with a temperature other than zero, of course, now all of a sudden has sort of a defined gradient." }, { "start": 1869, "end": 1873, "text": " Whereas these discrete random variables, they do not have a gradient." }, { "start": 1873, "end": 1884, "text": " And that's the reason why the VQVAE needs to do this straight through estimator right here, because this hard assignment to the codebook does not have a gradient defined." }, { "start": 1884, "end": 1888, "text": " With the soft relaxation, you do have a gradient." }, { "start": 1888, "end": 1896, "text": " And maybe they just mean they don't need they don't need this hard assignment to the codebook." }, { "start": 1896, "end": 1900, "text": " I'm not sure. Or maybe they just they quantize in a different way." }, { "start": 1900, "end": 1906, "text": " Maybe they go back to a continuous latent space." }, { "start": 1906, "end": 1910, "text": " Yeah, I can imagine they they might go back to a continuous latent space." }, { "start": 1910, "end": 1917, "text": " But somehow, somehow, they still do this a form of quantization." }, { "start": 1917, "end": 1919, "text": " This could be a fixed quantization." }, { "start": 1919, "end": 1927, "text": " Like you say, OK, you can choose any of the bases vectors and some mixtures that we define between them." }, { "start": 1927, "end": 1935, "text": " Or they define it via moving averages or they define it via batch statistics or I don't know." }, { "start": 1935, "end": 1939, "text": " If you know, let me know in the comments to the video." }, { "start": 1939, "end": 1944, "text": " Right. So this was my take on what the model does and what is probably behind it." }, { "start": 1944, "end": 1949, "text": " Now, let's look at some more examples right here, because these are fun." }, { "start": 1949, "end": 1953, "text": " So they they say it can sort of control attributes." }, { "start": 1953, "end": 1958, "text": " So you see these, it's, for example, a pentagonal green clock." }, { "start": 1958, "end": 1960, "text": " And you see it's not always pentagonal." }, { "start": 1960, "end": 1965, "text": " It's sometimes hexagonal and sometimes heptagonal and whatnot." 
}, { "start": 1965, "end": 1971, "text": " But in general, what it does well is sort of color and also kind of object description." }, { "start": 1971, "end": 1974, "text": " So lunch box, it gets and green it gets." }, { "start": 1974, "end": 1980, "text": " What it can't do super well is stuff like counting." }, { "start": 1980, "end": 1984, "text": " So I have sort of a hypothesis." }, { "start": 1984, "end": 1986, "text": " I have multiple hypotheses about here." }, { "start": 1986, "end": 1991, "text": " Just see what's in all of these examples, how the text prompt is phrased." }, { "start": 1991, "end": 1997, "text": " So it says a pentagonal green lunchbox, a green lunchbox in the shape of a pentagon." }, { "start": 1997, "end": 2001, "text": " This is quite unusual way to phrase the prompt." }, { "start": 2001, "end": 2008, "text": " And by the way, all these criticisms that I'm leveraging here, most of them are actually admitted and discussed in this blog post." }, { "start": 2008, "end": 2013, "text": " It's actually it's pretty cool and pretty self, let's say, self critical of them." }, { "start": 2013, "end": 2019, "text": " So it's this is I've you know, I thought of these things and then I read the little text." }, { "start": 2019, "end": 2023, "text": " And then they they already describe what I concluded." }, { "start": 2023, "end": 2035, "text": " It's sad. But yeah, it's pretty cool of them because the current climate is sort of make your research look as as cool and flawless as possible." }, { "start": 2035, "end": 2037, "text": " This goes a bit against it." }, { "start": 2037, "end": 2042, "text": " So they say that the images here aren't cherry picked." }, { "start": 2042, "end": 2044, "text": " And I totally believe this." }, { "start": 2044, "end": 2047, "text": " So they have a little trick that they do." }, { "start": 2047, "end": 2057, "text": " They output, I think, five hundred and twelve images from their model because they can sample and then they re rank them using this other model that they've released this clip model." }, { "start": 2057, "end": 2060, "text": " And this clip model is a pretty good re ranker." }, { "start": 2060, "end": 2065, "text": " So you give it a piece of text and an image and sort of tells you how well they fit together." }, { "start": 2065, "end": 2069, "text": " And so the outputs that you see here are re ranked by this model." }, { "start": 2069, "end": 2073, "text": " So you see are strictly the best outputs according to that model." }, { "start": 2073, "end": 2077, "text": " So it's not cherry picked by humans, but it's cherry picked by a very good model." }, { "start": 2077, "end": 2082, "text": " And the second thing is that the text prompt here is absolutely cherry picked." }, { "start": 2082, "end": 2084, "text": " Right." }, { "start": 2084, "end": 2086, "text": " By the way, this is phrased." }, { "start": 2086, "end": 2089, "text": " You can see that it is very, very brittle." }, { "start": 2089, "end": 2090, "text": " Probably the model." }, { "start": 2090, "end": 2097, "text": " I can't test it, but probably it's very brittle in how exactly you phrase this text prompt." }, { "start": 2097, "end": 2106, "text": " And I'm going to guess they have tried a lot of things before they've released these few examples right here that they show." }, { "start": 2106, "end": 2108, "text": " And they've made sure that they work." }, { "start": 2108, "end": 2114, "text": " So, yeah, just keep in mind that this is very brittle." 
}, { "start": 2114, "end": 2117, "text": " And we already know this from like GPT three." }, { "start": 2117, "end": 2125, "text": " We know that the input might seem the same to a human, just phrased differently in some cases." }, { "start": 2125, "end": 2128, "text": " And yet the model will output completely different things." }, { "start": 2128, "end": 2136, "text": " And we know that a lot of these GPT three examples are very, very constructed in terms of the input prompt." }, { "start": 2136, "end": 2145, "text": " So, yeah, the other thing is the model, as I said, it can do colors and it can do colors and textures pretty well." }, { "start": 2145, "end": 2150, "text": " So we've already seen the things made of things." }, { "start": 2150, "end": 2157, "text": " So the sphere made of noodles that actually probably exists, the sphere made of guacamole." }, { "start": 2157, "end": 2161, "text": " However, it's not super good at counting, for example." }, { "start": 2161, "end": 2164, "text": " And I have a sort of multiple hypothesis." }, { "start": 2164, "end": 2169, "text": " So these image models, they tend to be very good at sort of style and texture." }, { "start": 2169, "end": 2176, "text": " Style and texture are the domain of these image models, like anywhere where there's like a convolution." }, { "start": 2176, "end": 2181, "text": " And by the way, they use in the VQVAE model." }, { "start": 2181, "end": 2187, "text": " No, not in the VQVAE. In this transformer for images, they don't do full attention." }, { "start": 2187, "end": 2194, "text": " What they do is each one of the image tokens can attend to each of the text tokens such as this." }, { "start": 2194, "end": 2202, "text": " But the image tokens, they can only sort of attend in the grid layer by layer." }, { "start": 2202, "end": 2209, "text": " In one layer, they can attend sort of to the row of other image elements." }, { "start": 2209, "end": 2213, "text": " In another layer, they can attend to the same column." }, { "start": 2213, "end": 2220, "text": " And in even another layer, they can attend to sort of the the surroundings of them, like a convolution." }, { "start": 2220, "end": 2224, "text": " So they can attend to, let's say, their couple of neighbors right here." }, { "start": 2224, "end": 2232, "text": " So it's not full attention, yet in every layer, every image token can attend to all the text tokens." }, { "start": 2232, "end": 2241, "text": " So yeah, in these models, what you typically see is that textures and style is pretty good." }, { "start": 2241, "end": 2245, "text": " However, global correspondences are not as good." }, { "start": 2245, "end": 2252, "text": " And that's what you see a lot in these face models where the left and the right earring don't match and things like this." }, { "start": 2252, "end": 2254, "text": " So global correspondences are not so good." }, { "start": 2254, "end": 2259, "text": " And you would actually expect that objects aren't as good as well." }, { "start": 2259, "end": 2262, "text": " Right. So here, this is still a clock." }, { "start": 2262, "end": 2264, "text": " This is still a light bulb." }, { "start": 2264, "end": 2266, "text": " This is still a stop sign." }, { "start": 2266, "end": 2274, "text": " Right. So it somehow gets the objects correct, which in my hypothesis, it shouldn't because this is some sort of a global structure." }, { "start": 2274, "end": 2278, "text": " However, I think that's just a matter of how the data set is collected." 
}, { "start": 2278, "end": 2283, "text": " The data sets are probably we humans, we take pictures of objects." }, { "start": 2283, "end": 2289, "text": " Right. So the fundamental structures in these data sets is the object." }, { "start": 2289, "end": 2298, "text": " So it makes sense that it learns that we humans, we don't we don't take pictures and we often don't describe the count in them." }, { "start": 2298, "end": 2305, "text": " So I can get that the model has a harder time to learn that and actually focuses just on the object as a global thing." }, { "start": 2305, "end": 2310, "text": " The count would be a global thing. Right. But it's not that prominent in the data." }, { "start": 2310, "end": 2316, "text": " And the rest is a local thing like the color, the texture and so on." }, { "start": 2316, "end": 2319, "text": " Yeah. The cube made of porcupine." }, { "start": 2319, "end": 2322, "text": " So you can see here that this this counting." }, { "start": 2322, "end": 2325, "text": " So two is often quite good." }, { "start": 2325, "end": 2330, "text": " Actually, here it mixes up glasses and glasses. Right." }, { "start": 2330, "end": 2332, "text": " So two often works." }, { "start": 2332, "end": 2338, "text": " However, if you go if you go past two, it often gets it wrong." }, { "start": 2338, "end": 2344, "text": " So five, you'll get anything from three to seven clocks and so on." }, { "start": 2344, "end": 2348, "text": " So I'm going to also guess it's very brittle." }, { "start": 2348, "end": 2350, "text": " Like they're not here." }, { "start": 2350, "end": 2351, "text": " Yes, they're sitting on a table." }, { "start": 2351, "end": 2358, "text": " But if you take a object that's not that often on a table like a club," }, { "start": 2358, "end": 2365, "text": " you'll see that it's pretty unrecognizable whether or not it's on a table." }, { "start": 2365, "end": 2368, "text": " Five, four clubs." }, { "start": 2368, "end": 2377, "text": " So, you know, the model is prone to ignoring part of its input if the likelihood in another part is larger." }, { "start": 2377, "end": 2380, "text": " Also, it can't do things like this." }, { "start": 2380, "end": 2384, "text": " You know, a stack of three cubes, a red cube is on the top sitting on a green cube." }, { "start": 2384, "end": 2389, "text": " It often gets the order wrong, like it gets the cubes on top of each other." }, { "start": 2389, "end": 2395, "text": " However, it often gets it wrong when it comes to, you know, the order, the global things." }, { "start": 2395, "end": 2401, "text": " As I said, anything global that is not what the object is tends to be weak." }, { "start": 2401, "end": 2404, "text": " Anything local tends to be strong in these models." }, { "start": 2404, "end": 2408, "text": " And that's just a matter of how they're built and how the data is." }, { "start": 2408, "end": 2413, "text": " So they say the image can render new views." }, { "start": 2413, "end": 2415, "text": " And here is where I'm not as convinced." }, { "start": 2415, "end": 2423, "text": " So here you have like an extreme close up view of a cubby cub, cabby bar, sorry, of a fox." }, { "start": 2423, "end": 2429, "text": " They're close up. Sometimes they're extreme close up. Right." }, { "start": 2429, "end": 2433, "text": " You can see that it gets like forest. It gets it gets pretty well." }, { "start": 2433, "end": 2441, "text": " But then you say, OK, a ground level view like, and then you say, OK, an aerial view." 
}, { "start": 2441, "end": 2446, "text": " Maybe some of them are aerial views. Some of them aren't." }, { "start": 2446, "end": 2453, "text": " What's pretty cool is things like a OK, a fish eye lens view." }, { "start": 2453, "end": 2456, "text": " I mean, that's that's pretty cool." }, { "start": 2456, "end": 2462, "text": " And a they have some of them, a bottom view or a rear view." }, { "start": 2462, "end": 2464, "text": " Yeah, the rear view works better." }, { "start": 2464, "end": 2470, "text": " So it does understand these these kind of things like what's the rear of a fox and what's the front of a fox." }, { "start": 2470, "end": 2475, "text": " Though, as you can also see, not always texture." }, { "start": 2475, "end": 2477, "text": " It's very good at texture." }, { "start": 2477, "end": 2483, "text": " So here something made of voxels can do that perfectly." }, { "start": 2483, "end": 2489, "text": " An owl made of voxels like this looks like it comes straight from Minecraft." }, { "start": 2489, "end": 2493, "text": " Right. Absolutely, absolutely cool." }, { "start": 2493, "end": 2497, "text": " Even X-Ray sometimes doesn't always get the bones right." }, { "start": 2497, "end": 2501, "text": " But yeah, as I said, style structure." }, { "start": 2501, "end": 2505, "text": " Very cool. So here is an example of a completion." }, { "start": 2505, "end": 2514, "text": " So they give the text prompt a photograph of a bust of Homer and the image, the top part of the image." }, { "start": 2514, "end": 2519, "text": " And they say, well, it can describing a well-known figure." }, { "start": 2519, "end": 2522, "text": " It can complete the figure." }, { "start": 2522, "end": 2525, "text": " I don't agree that it completes Homer." }, { "start": 2525, "end": 2534, "text": " Like it completes it probably just sees this bust and this and it just completes whatever fits." }, { "start": 2534, "end": 2541, "text": " I don't I have not studied Homer as a historic person or busts of him." }, { "start": 2541, "end": 2550, "text": " But, you know, I disagree that this depicts largely the same person very often." }, { "start": 2550, "end": 2558, "text": " You can see here there is sometimes there is even, you know, there's completely unrelated stuff." }, { "start": 2558, "end": 2564, "text": " There is that lady with the pearl earring by Vermeer somewhere in there and so on." }, { "start": 2564, "end": 2571, "text": " And what I also like in this kind of this this one, you know, the game draw something where or, you know," }, { "start": 2571, "end": 2577, "text": " pictionary and so on, there are people when they can't draw something, they just kind of write it on the picture." }, { "start": 2577, "end": 2580, "text": " It's like, ah, screw it. Now, this is right." }, { "start": 2580, "end": 2582, "text": " This is Homer. This is Homer." }, { "start": 2582, "end": 2584, "text": " Now, I don't care what you say." }, { "start": 2584, "end": 2589, "text": " This is Homer. But, you know, it does, you know, it does." }, { "start": 2589, "end": 2599, "text": " So when you say Cleopatra, it it goes more into the into sort of the female direction Medusa." }, { "start": 2599, "end": 2601, "text": " It has some though." }, { "start": 2601, "end": 2607, "text": " I'm pretty sure Medusa has the snake, the snake hair." }, { "start": 2607, "end": 2616, "text": " No, maybe Venus. Yeah, somewhat somewhat." 
}, { "start": 2616, "end": 2621, "text": " It they test a lot of things like can it do mirror reflections?" }, { "start": 2621, "end": 2626, "text": " And you can see right here, they say it can do reflections on the ground pretty well," }, { "start": 2626, "end": 2631, "text": " but it can't do reflections, for example, in a mirror, because in a lot of these pictures," }, { "start": 2631, "end": 2635, "text": " the object like here would actually have to be in front of the mirror." }, { "start": 2635, "end": 2643, "text": " However, in the fewest amount of pictures, the object mirrored is actually also in front of the mirror." }, { "start": 2643, "end": 2647, "text": " So this kind of global correspondence isn't given as much." }, { "start": 2647, "end": 2652, "text": " However, there is a fair bit of reflection on the ground, so to say." }, { "start": 2652, "end": 2659, "text": " So, you know, that's pretty cool, but it's also probably very, very common in datasets." }, { "start": 2659, "end": 2661, "text": " Yeah, cross section view of a walnut." }, { "start": 2661, "end": 2667, "text": " So they sort of implore, sorry, explore the model, what it can do." }, { "start": 2667, "end": 2672, "text": " And here you can see that, you know, if something is common in the dataset, you know," }, { "start": 2672, "end": 2678, "text": " like the cross section view of human head, there are a lot of pictures of that right in the dataset." }, { "start": 2678, "end": 2685, "text": " However, if it comes to cross section view of a where, where did I see the airplane?" }, { "start": 2685, "end": 2687, "text": " There is an airplane somewhere." }, { "start": 2687, "end": 2691, "text": " It's less, it's less so." }, { "start": 2691, "end": 2695, "text": " So you can see that this is still it is." }, { "start": 2695, "end": 2702, "text": " So here it probably doesn't really know how that looks, because, you know, they probably on the image," }, { "start": 2702, "end": 2707, "text": " on the Internet, even on the whole Internet, pictures of cross sections of airplanes or any sections of airplanes." }, { "start": 2707, "end": 2711, "text": " Are not really distributed often." }, { "start": 2711, "end": 2714, "text": " So it sort of just focuses on airplane." }, { "start": 2714, "end": 2719, "text": " And then with cross section, it probably knows that it should somehow display some of the interior." }, { "start": 2719, "end": 2726, "text": " So it just kind of produces some stuff that matches this thing." }, { "start": 2726, "end": 2734, "text": " As I said, if if it can't make the likelihood high of all of the things," }, { "start": 2734, "end": 2739, "text": " what it tends to do is just focus on one of the things and just make that likelihood high," }, { "start": 2739, "end": 2743, "text": " which is reasonable for a model." }, { "start": 2743, "end": 2747, "text": " A macro photo, macro photographs of stuff." }, { "start": 2747, "end": 2748, "text": " These are pretty cool." }, { "start": 2748, "end": 2752, "text": " This is what you would find in some image galleries." }, { "start": 2752, "end": 2755, "text": " Absolutely." }, { "start": 2755, "end": 2758, "text": " Then it can do various things like style transfer." }, { "start": 2758, "end": 2759, "text": " And here is where it shines." }, { "start": 2759, "end": 2761, "text": " Right." }, { "start": 2761, "end": 2766, "text": " So you can have different paintings of different objects in different styles." 
}, { "start": 2766, "end": 2774, "text": " So here you can like have an owl sitting in the forest in the morning." }, { "start": 2774, "end": 2779, "text": " And you can have this as a painting, as a painting in the pop art style and so on." }, { "start": 2779, "end": 2781, "text": " It's very, very impressive." }, { "start": 2781, "end": 2785, "text": " So I absolutely glory actually, too, like as a postage stamp." }, { "start": 2785, "end": 2789, "text": " These are these are these are absolutely amazing." }, { "start": 2789, "end": 2793, "text": " And yeah, you can have stuff like stained glass windows." }, { "start": 2793, "end": 2795, "text": " And this is yeah, it's where the model shines." }, { "start": 2795, "end": 2799, "text": " And even here a storefront that has the word Opnea written on it." }, { "start": 2799, "end": 2806, "text": " So just right now, just look at how convoluted this text prompt has to be for them to get this to work." }, { "start": 2806, "end": 2808, "text": " It's impressive." }, { "start": 2808, "end": 2814, "text": " But the text prompt has to be repeated and reformulated a bunch of times and so on." }, { "start": 2814, "end": 2819, "text": " My personal favorite is the pie torch chips." }, { "start": 2819, "end": 2821, "text": " They're crunchy." }, { "start": 2821, "end": 2825, "text": " You get a piece of back prop in every package." }, { "start": 2825, "end": 2831, "text": " So you can see it sometimes misses like this is perch, perch chips and so on." }, { "start": 2831, "end": 2833, "text": " It sometimes misses." }, { "start": 2833, "end": 2837, "text": " But it is pretty cool that it basically can do OCR, right." }, { "start": 2837, "end": 2839, "text": " Or reverse OCR." }, { "start": 2839, "end": 2845, "text": " You can you give it a piece of text and it sort of makes a picture with that on it." }, { "start": 2845, "end": 2854, "text": " It's very, very impressive, even though, as we said, like the global the global correspondences are not always there." }, { "start": 2854, "end": 2866, "text": " They do implore like fashion, a skirt like here that the yellow skirt, then, you know, these mannequins." }, { "start": 2866, "end": 2871, "text": " And here they have a loft bedroom with a white bed next to a nightstand." }, { "start": 2871, "end": 2875, "text": " There is a fish tank standing beside the bed and they give sort of the beginning of the image." }, { "start": 2875, "end": 2877, "text": " And here's what the model comes up with." }, { "start": 2877, "end": 2883, "text": " And, you know, you can imagine that there are a lot of pictures like this in the data set." }, { "start": 2883, "end": 2895, "text": " So the model might be pretty good at stuff like this, though I have found their king bed next to, yeah, let's say the nightstand with the telescope." }, { "start": 2895, "end": 2901, "text": " The telescope beside the bed, it just, you know, that beside like there's a telescope." }, { "start": 2901, "end": 2903, "text": " Sometimes it's on the bed." }, { "start": 2903, "end": 2904, "text": " Sometimes it's next to it." }, { "start": 2904, "end": 2906, "text": " There are some weird telescopes around." }, { "start": 2906, "end": 2909, "text": " Well, this is a lot of telescopes." }, { "start": 2909, "end": 2911, "text": " That's a weird telescope." }, { "start": 2911, "end": 2914, "text": " But, you know, the quality is pretty impressive." }, { "start": 2914, "end": 2918, "text": " This is absolutely nitpicking that I'm doing here." 
}, { "start": 2918, "end": 2920, "text": " Combining unrelated concepts." }, { "start": 2920, "end": 2924, "text": " We've already seen the armchair in the shape of an avocado." }, { "start": 2924, "end": 2926, "text": " They also have a snail made of harp." }, { "start": 2926, "end": 2933, "text": " Though my personal favorite is the penguin made of garlic." }, { "start": 2933, "end": 2937, "text": " The penguin made of garlic." }, { "start": 2937, "end": 2939, "text": " This." }, { "start": 2939, "end": 2941, "text": " Perfect, right?" }, { "start": 2941, "end": 2943, "text": " Absolutely adorable." }, { "start": 2943, "end": 2946, "text": " And just qualitatively like this." }, { "start": 2946, "end": 2952, "text": " This would take a human like you would pay a high quality," }, { "start": 2952, "end": 2959, "text": " highly educated Photoshop artist quite a bit of money to get this sort of output." }, { "start": 2959, "end": 2960, "text": " Right." }, { "start": 2960, "end": 2968, "text": " And these models, they shine at this sort of style transfer texture stuff." }, { "start": 2968, "end": 2971, "text": " And here you have the illustrations." }, { "start": 2971, "end": 2981, "text": " You can have any kind of illustrations like the illustration of a baby shark with a mustache." }, { "start": 2981, "end": 2983, "text": " Holding." }, { "start": 2983, "end": 2987, "text": " There's holding an umbrella somewhere." }, { "start": 2987, "end": 2989, "text": " Playing it." }, { "start": 2989, "end": 2990, "text": " Running." }, { "start": 2990, "end": 2994, "text": " Riding a unicycle." }, { "start": 2994, "end": 2996, "text": " It's just it's just nice." }, { "start": 2996, "end": 3000, "text": " And as I said, this is the same model that can do all of this stuff." }, { "start": 3000, "end": 3002, "text": " And these are samples." }, { "start": 3002, "end": 3003, "text": " They're just samples." }, { "start": 3003, "end": 3004, "text": " They're not cherry picked." }, { "start": 3004, "end": 3005, "text": " However, they are re-ranked." }, { "start": 3005, "end": 3008, "text": " Remember that." }, { "start": 3008, "end": 3017, "text": " So they can do hybrids of images, hybrids of different giraffe and turtle and so on." }, { "start": 3017, "end": 3031, "text": " And they do sort of implore the model a little bit more where they, as I said, they give this cat on the top and they say they want the exact same cat on the top as a photo colored blue on the bottom." }, { "start": 3031, "end": 3035, "text": " So you can see that doesn't always work." }, { "start": 3035, "end": 3036, "text": " Right." }, { "start": 3036, "end": 3043, "text": " But in a surprising amount of times, it actually does work." }, { "start": 3043, "end": 3045, "text": " Sometimes it's just like a blue pot." }, { "start": 3045, "end": 3052, "text": " But you can you can see it's not the finished model yet." }, { "start": 3052, "end": 3058, "text": " However, it is a step into the direction that shows us that this is definitely, definitely possible." }, { "start": 3058, "end": 3063, "text": " It can even do some of these progressive matrices where it fills in the bottom right." }, { "start": 3063, "end": 3070, "text": " However, they do mention it's very, very finicky with respect to whether or not, for example, if you invert the color." }, { "start": 3070, "end": 3081, "text": " So if you look at the bottom right of any of these things, if I invert the colors, the output sort of changes and it's often also not right." 
}, { "start": 3081, "end": 3092, "text": " However, sometimes it is actually right, which is crazy because in some of these things, you have to do some crazy sort of inference that" }, { "start": 3092, "end": 3095, "text": " we usually we usually do these things in IQ tests." }, { "start": 3095, "end": 3101, "text": " So I don't know the debate about what is intelligence goes on." }, { "start": 3101, "end": 3104, "text": " They say it has geographic knowledge." }, { "start": 3104, "end": 3114, "text": " However, I'm not sure it has geographic knowledge as it just associates words with particular images like they say, OK, this is a photo of food of China." }, { "start": 3114, "end": 3119, "text": " OK, maybe you just not sure this classifies as geographic knowledge." }, { "start": 3119, "end": 3126, "text": " He said he's yeah, also this temporal knowledge, a photo of a phone from the 20s." }, { "start": 3126, "end": 3133, "text": " OK, you know, and then the different time periods, 60s, 70s, 80s, future and so on, like distant future." }, { "start": 3133, "end": 3142, "text": " Like, wow, these phones, I particularly so I like the usually this stuff." }, { "start": 3142, "end": 3143, "text": " It's it's pretty OK, right?" }, { "start": 3143, "end": 3145, "text": " But it's not temporal knowledge." }, { "start": 3145, "end": 3151, "text": " It just associates a bunch of tokens with some sort of style of computer." }, { "start": 3151, "end": 3155, "text": " Today's computer, the future computer, the distant future computer." }, { "start": 3155, "end": 3159, "text": " Please know, please, please, please don't give me that." }, { "start": 3159, "end": 3161, "text": " I don't want to. I don't want that." }, { "start": 3161, "end": 3168, "text": " I love the action movie poster because so the style is correct." }, { "start": 3168, "end": 3175, "text": " But it just says action movie in the future." }, { "start": 3175, "end": 3180, "text": " Yeah, they do get sort of the kind of some of the styles." }, { "start": 3180, "end": 3188, "text": " It just it just says action movie like this is like a like a naggy, naggy child like I'm hungry." }, { "start": 3188, "end": 3191, "text": " Hi, hungry. I'm dead." }, { "start": 3191, "end": 3199, "text": " All right. So they also have a summary right here and they do show what it means that they they use this clip to rerank." }, { "start": 3199, "end": 3205, "text": " So on the left here, you can see just eight samples straight up from the model." }, { "start": 3205, "end": 3217, "text": " And they're not too bad. But, you know, you increase the quality by sort of sampling more and then taking the best eight as you go to the right here, according to the reranker." }, { "start": 3217, "end": 3229, "text": " So I'm going to guess they decided on five twelve because that was sort of, you know, it gives you already pretty diverse, pretty good, pretty high quality outputs right here." }, { "start": 3229, "end": 3234, "text": " All right. So just lastly, shout out to the the the authors right here." }, { "start": 3234, "end": 3248, "text": " The primary authors are deter mesh, Mikhail Pavlov, Gabrielle Goh and Scott Ray with a I guess the secondary supporting authors and most of open eye behind them." }, { "start": 3248, "end": 3254, "text": " I don't know how they work. I would encourage you to go look at the model." }, { "start": 3254, "end": 3263, "text": " It's pretty cool. Try out all these inputs. 
As I said, these are the inputs are simply restricting you because they don't trust you with their model." }, { "start": 3263, "end": 3272, "text": " Yet, right in the real model, you can input any piece of text that you want and you will get out an image." }, { "start": 3272, "end": 3278, "text": " And the fact that you have to select the stuff here is simply because that's the stuff they tried." }, { "start": 3278, "end": 3283, "text": " That's the stuff their PR department has signed off on. Right." }, { "start": 3283, "end": 3300, "text": " And so so you get to see that because as I said, they're not like this is at the same time, this is a PR dilemma when you release a generative model because it could release." }, { "start": 3300, "end": 3309, "text": " They discussed this a little bit in the blog post. You know, it could release like very problematic images in a classifier." }, { "start": 3309, "end": 3316, "text": " It's not as pronounced. It's also sometimes dangerous, but not as dangerous as if you have a generative model." }, { "start": 3316, "end": 3327, "text": " That's the first thing. And the second thing is there is I mean, there is money in this definitely, definitely money to be made in this." }, { "start": 3327, "end": 3333, "text": " So, you know, we'll see whether or not we get the full model or not." }, { "start": 3333, "end": 3339, "text": " All right. With that, that was it for me. I hope you enjoy the blog post. I hope you enjoyed the video." }, { "start": 3339, "end": 3367, "text": " If you did, let me know. Shared out. Subscribe if you haven't and bye bye." } ]
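The sample-then-rerank loop described in the transcript above can be sketched in a few lines, assuming OpenAI's open-source CLIP package (github.com/openai/CLIP); `sample_images` is a purely hypothetical stand-in for the unreleased DALL-E sampler:

```python
# Sketch of "sample 512, re-rank with CLIP, keep the best". Assumes
# `pip install git+https://github.com/openai/CLIP.git` and candidates
# given as a list of PIL images.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def rerank(prompt, candidates, top_k=8):
    """Score candidate images against the prompt with CLIP, keep the best top_k."""
    text = clip.tokenize([prompt]).to(device)
    images = torch.stack([preprocess(im) for im in candidates]).to(device)
    with torch.no_grad():
        img_feats = model.encode_image(images)
        txt_feats = model.encode_text(text)
        # cosine similarity between each image and the prompt
        img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
        txt_feats = txt_feats / txt_feats.norm(dim=-1, keepdim=True)
        scores = (img_feats @ txt_feats.T).squeeze(-1)
    best = scores.argsort(descending=True)[:top_k]
    return [candidates[i] for i in best]

# usage (hypothetical sampler):
# candidates = sample_images("an armchair in the shape of an avocado", n=512)
# top8 = rerank("an armchair in the shape of an avocado", candidates)
```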
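Likewise, the per-layer attention pattern the transcript describes (row, column, and local "convolutional" attention over image tokens, with every image token also attending to all text tokens) can be illustrated as boolean masks; this is a simplified sketch of the pattern, not OpenAI's exact sparse-attention implementation:

```python
# Build an attention mask over n_text text tokens followed by h*w flattened
# image tokens. True means attention is allowed. Simplified: no causal
# masking, text tokens here only attend to text.
import torch

def image_attention_mask(h, w, n_text, kind, radius=1):
    n_img = h * w
    n = n_text + n_img
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:, :n_text] = True  # every token may attend to all text tokens
    ys = torch.arange(h).repeat_interleave(w)  # row index of each image token
    xs = torch.arange(w).repeat(h)             # column index of each image token
    for q in range(n_img):
        if kind == "row":       # attend along the same row
            ok = ys == ys[q]
        elif kind == "column":  # attend along the same column
            ok = xs == xs[q]
        elif kind == "local":   # a (2*radius+1)^2 neighborhood, like a convolution
            ok = ((ys - ys[q]).abs() <= radius) & ((xs - xs[q]).abs() <= radius)
        else:
            raise ValueError(kind)
        mask[n_text + q, n_text:] = ok
    return mask

# e.g. alternate kinds across layers: "row", "column", "local", "row", ...
```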
yPjuAo53uNI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Rant] The Male Only History of Deep Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "neural networks", "history", "groups", "ideology" ]
This casting of our field in terms of ideological narrow-sighted group-think is disgusting. Keep Science about ideas! https://twitter.com/timnitGebru/status/1252752743942328321 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, so instead of reviewing a paper today, I thought I might review this thing. So this person on Twitter posted this link to an article called Brief History of Deep Learning from 1943 to 2019 of Machine Learning Knowledge.ai. So let's look at this. Actually let's look at the tweet first. Because this is... I just saw this. The male-only history of deep learning, where you say AlexNet makes history, but ImageNet doesn't, because women's contributions don't count. And contributions from anyone except for white and white-adjacent people for that matter. That is the tweet, and it has 109 retweets, over 400 likes, and people generally agreeing with this sentiment. So the person is expressing concerns that this article is only going over one particular group of people. So let's look at the article. They basically go over the history of neural networks, of deep learning in an algorithmic sense. So let's check it out. So first we go into neurons, starting at 1943. And the Perceptron paper right here. The first backpropagation algorithm from Kelly. This actually, I think people like Schmidhuber would be proud. As far as I can tell, this is kind of more of a forgotten history, or some of these things are more of a forgotten history. Of course, Minsky's paper, very famous. But here backpropagation attributed to this paper, and so on. And you can see things people like Hinton only coming up later here, the Boltzmann machine, backpropagation in neural networks now. So this, as far as I can tell, it's just a take on kind of the history of algorithmic development. And you can see here, it really is about algorithms. The algorithms behind deep learning. So here is the vanishing gradient problem, the LSTM as an architectural component, deep belief networks. Then you have GPUs for training. Again, vanishing gradients, AlexNet, then GANs, AlphaGo. So we're now going a bit faster. And then the end, it says, the godfathers win the Turing Award for their immense contribution in advancements in area of deep learning and artificial intelligence. This is a defining moment for those who had worked relentlessly on neural networks when the entire machine learning community had moved away from it in the 1970s. So the article clearly is focused on algorithmic developments in deep learning. And that's why AlexNet is here. Now this person rags that AlexNet is here, but ImageNet isn't. And clearly you can see from the article, ImageNet is a data set. It was not made with deep learning in mind. It was simply made as a data set. It's not an algorithmic development. So GANs are here as well, right? But CelebA isn't. CIFAR-10 isn't. MNIST isn't. The Penn Treebank isn't. So I think we've skipped a lot of architectural advancements here, like transformers or all kinds of things here. But the history is clearly about the algorithmic developments. And to reframe this, it clearly states ImageNet doesn't because women's contributions don't count. The insinuation here, absolutely, I find this to be absolutely intellectually dishonest. And they say, and contributions from anyone except for white and white-adjacent people for that matter. At this point you just have to laugh. Because of course the narrative that the person wanted to tell was that it's only white people that count. But then you scroll and you're like, arrrrr, it doesn't fit my narrative, right? This GPU is not a white person. So to make it fit your narrative you have to call white-adjacent... What is white-adjacent? It's like, whatever I don't like I now call white. 
But people just agreeing with this, I find this absolutely disgusting. And I find the article to be okay. I don't know better. But if you have a problem with... I definitely think there is misattribution in science throughout, even systematic. But to say that ImageNet wasn't included because women's contributions don't count, that is just a straight out lie. And to call people white-adjacent is like, how do you not have a bell in your head that goes ding ding ding ding ding when you do something like this? So I find this to be dishonest, either willfully or just because people have become so used to seeing the world in one particular frame. I think these calls only get big whenever there is money and attention going into a field, right? If you look at any field where it's just a bunch of weirdos doing their thing, the weirdos don't care who's there. They just care about the ideas that people have. And I believe we should take that view in science in general. I don't care who has the idea. And these people do. And I disagree. All right, that was it. Keep pushing back on these things if you agree as well. And keep science for ideas. Thanks.
[ { "start": 0, "end": 6.26, "text": " Alright, so instead of reviewing a paper today, I thought I might review this thing." }, { "start": 6.26, "end": 13.22, "text": " So this person on Twitter posted this link to an article called Brief History of Deep" }, { "start": 13.22, "end": 21.1, "text": " Learning from 1943 to 2019 of Machine Learning Knowledge.ai." }, { "start": 21.1, "end": 24.28, "text": " So let's look at this." }, { "start": 24.28, "end": 26.04, "text": " Actually let's look at the tweet first." }, { "start": 26.04, "end": 28.12, "text": " Because this is..." }, { "start": 28.12, "end": 29.64, "text": " I just saw this." }, { "start": 29.64, "end": 36, "text": " The male-only history of deep learning, where you say AlexNet makes history, but ImageNet" }, { "start": 36, "end": 40.72, "text": " doesn't, because women's contributions don't count." }, { "start": 40.72, "end": 47.84, "text": " And contributions from anyone except for white and white-adjacent people for that matter." }, { "start": 47.84, "end": 56.96, "text": " That is the tweet, and it has 109 retweets, over 400 likes, and people generally agreeing" }, { "start": 56.96, "end": 58.14, "text": " with this sentiment." }, { "start": 58.14, "end": 68.72, "text": " So the person is expressing concerns that this article is only going over one particular" }, { "start": 68.72, "end": 70.68, "text": " group of people." }, { "start": 70.68, "end": 73.2, "text": " So let's look at the article." }, { "start": 73.2, "end": 80.84, "text": " They basically go over the history of neural networks, of deep learning in an algorithmic" }, { "start": 80.84, "end": 81.84, "text": " sense." }, { "start": 81.84, "end": 83.12, "text": " So let's check it out." }, { "start": 83.12, "end": 88.28, "text": " So first we go into neurons, starting at 1943." }, { "start": 88.28, "end": 93.52000000000001, "text": " And the Perceptron paper right here." }, { "start": 93.52000000000001, "end": 97.36, "text": " The first backpropagation algorithm from Kelly." }, { "start": 97.36, "end": 101.96000000000001, "text": " This actually, I think people like Schmidhuber would be proud." }, { "start": 101.96000000000001, "end": 108.44, "text": " As far as I can tell, this is kind of more of a forgotten history, or some of these things" }, { "start": 108.44, "end": 109.72, "text": " are more of a forgotten history." }, { "start": 109.72, "end": 113.44, "text": " Of course, Minsky's paper, very famous." }, { "start": 113.44, "end": 120.6, "text": " But here backpropagation attributed to this paper, and so on." }, { "start": 120.6, "end": 127.6, "text": " And you can see things people like Hinton only coming up later here, the Boltzmann machine," }, { "start": 127.6, "end": 132.44, "text": " backpropagation in neural networks now." }, { "start": 132.44, "end": 139.36, "text": " So this, as far as I can tell, it's just a take on kind of the history of algorithmic" }, { "start": 139.36, "end": 140.36, "text": " development." }, { "start": 140.36, "end": 144.84, "text": " And you can see here, it really is about algorithms." }, { "start": 144.84, "end": 147.60000000000002, "text": " The algorithms behind deep learning." }, { "start": 147.60000000000002, "end": 154.16000000000003, "text": " So here is the vanishing gradient problem, the LSTM as an architectural component, deep" }, { "start": 154.16000000000003, "end": 155.52, "text": " belief networks." }, { "start": 155.52, "end": 158.72000000000003, "text": " Then you have GPUs for training." 
}, { "start": 158.72000000000003, "end": 163.88000000000002, "text": " Again, vanishing gradients, AlexNet, then GANs, AlphaGo." }, { "start": 163.88000000000002, "end": 166.20000000000002, "text": " So we're now going a bit faster." }, { "start": 166.2, "end": 171.72, "text": " And then the end, it says, the godfathers win the Turing Award for their immense contribution" }, { "start": 171.72, "end": 174.72, "text": " in advancements in area of deep learning and artificial intelligence." }, { "start": 174.72, "end": 179.32, "text": " This is a defining moment for those who had worked relentlessly on neural networks when" }, { "start": 179.32, "end": 183.83999999999997, "text": " the entire machine learning community had moved away from it in the 1970s." }, { "start": 183.83999999999997, "end": 191.95999999999998, "text": " So the article clearly is focused on algorithmic developments in deep learning." }, { "start": 191.95999999999998, "end": 194.04, "text": " And that's why AlexNet is here." }, { "start": 194.04, "end": 200.32, "text": " Now this person rags that AlexNet is here, but ImageNet isn't." }, { "start": 200.32, "end": 205.64, "text": " And clearly you can see from the article, ImageNet is a data set." }, { "start": 205.64, "end": 208.35999999999999, "text": " It was not made with deep learning in mind." }, { "start": 208.35999999999999, "end": 210.5, "text": " It was simply made as a data set." }, { "start": 210.5, "end": 212.76, "text": " It's not an algorithmic development." }, { "start": 212.76, "end": 215.32, "text": " So GANs are here as well, right?" }, { "start": 215.32, "end": 217.39999999999998, "text": " But CelebA isn't." }, { "start": 217.39999999999998, "end": 219.2, "text": " C410 isn't." }, { "start": 219.2, "end": 222.07999999999998, "text": " MNIST isn't." }, { "start": 222.08, "end": 224.60000000000002, "text": " The PantryBank isn't." }, { "start": 224.60000000000002, "end": 232.72000000000003, "text": " So I think we've skipped a lot of architectural advancements here, like transformers or all" }, { "start": 232.72000000000003, "end": 234, "text": " kinds of things here." }, { "start": 234, "end": 237.88000000000002, "text": " But the history is clearly about the algorithmic developments." }, { "start": 237.88000000000002, "end": 245.4, "text": " And to reframe this, it clearly states ImageNet doesn't because women's contributions don't" }, { "start": 245.4, "end": 247.24, "text": " count." }, { "start": 247.24, "end": 254, "text": " The insinuation here, absolutely, I find this to be absolutely intellectually dishonest." }, { "start": 254, "end": 257.96000000000004, "text": " And they say, and contributions from anyone except for white and white-adjacent people" }, { "start": 257.96000000000004, "end": 259.72, "text": " for that matter." }, { "start": 259.72, "end": 262.44, "text": " At this point you just have to laugh." }, { "start": 262.44, "end": 268.8, "text": " Because of course the narrative that the person wanted to tell was that it's only white people" }, { "start": 268.8, "end": 270.2, "text": " that count." }, { "start": 270.2, "end": 275.92, "text": " But then you scroll and you're like, arrrrr, it doesn't fit my narrative, right?" }, { "start": 275.92, "end": 281.6, "text": " This GPU is not a white person." }, { "start": 281.6, "end": 286.36, "text": " So to make it fit your narrative you have to call white-adjacent..." }, { "start": 286.36, "end": 288.56, "text": " What is white-adjacent?" 
}, { "start": 288.56, "end": 294.96000000000004, "text": " It's like, whatever I don't like I now call white." }, { "start": 294.96000000000004, "end": 301, "text": " But people just agreeing with this, I find this absolutely disgusting." }, { "start": 301, "end": 303.48, "text": " And I find the article to be okay." }, { "start": 303.48, "end": 304.88, "text": " I don't know better." }, { "start": 304.88, "end": 306.24, "text": " But if you have a problem with..." }, { "start": 306.24, "end": 311.6, "text": " I definitely think there is misattribution in science throughout, even systematic." }, { "start": 311.6, "end": 317.04, "text": " But to say that ImageNet wasn't included because women's contributions don't count, that is" }, { "start": 317.04, "end": 319.71999999999997, "text": " just a straight out lie." }, { "start": 319.71999999999997, "end": 325.15999999999997, "text": " And to call people white-adjacent is like, how do you not have a bell in your head that" }, { "start": 325.15999999999997, "end": 330.26, "text": " goes ding ding ding ding ding when you do something like this?" }, { "start": 330.26, "end": 338.15999999999997, "text": " So I find this to be dishonest, either willfully or just because people have so become used" }, { "start": 338.15999999999997, "end": 344.36, "text": " to seeing the world in one particular frame." }, { "start": 344.36, "end": 350.76, "text": " I think these calls only get big whenever there is money and attention going into a" }, { "start": 350.76, "end": 351.96, "text": " field, right?" }, { "start": 351.96, "end": 359, "text": " If you look at any field where it's just a bunch of weirdos doing their thing, the weirdos" }, { "start": 359, "end": 360.4, "text": " don't care who's there." }, { "start": 360.4, "end": 364.56, "text": " They just care about the ideas that people have." }, { "start": 364.56, "end": 370, "text": " And I believe we should take that view in science in general." }, { "start": 370, "end": 372.96, "text": " I don't care who has the idea." }, { "start": 372.96, "end": 376.08, "text": " And these people do." }, { "start": 376.08, "end": 377.08, "text": " And I disagree." }, { "start": 377.08, "end": 379.24, "text": " All right, that was it." }, { "start": 379.24, "end": 383.32, "text": " Keep pushing back on these things if you agree as well." }, { "start": 383.32, "end": 385.76, "text": " And keep science for ideas." }, { "start": 385.76, "end": 389.4, "text": " Thanks." } ]