Columns:
video_id: string, length 11
text: string, length 361-490
start_second: int64, 0-11.3k
end_second: int64, 18-11.3k
url: string, length 48-52
title: string, length 0-100
thumbnail: string, length 0-52
PXOhi6m09bA
results that were well beyond the previous state of the art, and these are some of the plots showing how many training sentences are needed in order to surpass this kind of system. So you can see that you would need probably somewhere around half a million parallel sentences for it to surpass the flat line, which is the unsupervised machine translation result. So that's that for
8,069
8,108
https://www.youtube.com/watch?v=PXOhi6m09bA&t=8069s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
that method. And so again, none of the principles that we have talked about so far are bulletproof, but I think what's really interesting is that you can seemingly extract training signal out of nowhere by just carefully considering the problem, what are the invariants that you can exploit, and especially in NLP, really thinking about how you think about it, not at a random
8,108
8,137
https://www.youtube.com/watch?v=PXOhi6m09bA&t=8108s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
_X0mgOOSpLU
The power of yet. I heard about a high school in Chicago where students had to pass a certain number of courses to graduate, and if they didn't pass a course, they got the grade "Not Yet." And I thought that was fantastic, because if you get a failing grade, you think, I'm nothing, I'm nowhere. But if you get the grade "Not Yet", you understand that you're on a learning curve.
0
40
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=0s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
It gives you a path into the future. "Not Yet" also gave me insight into a critical event early in my career, a real turning point. I wanted to see how children coped with challenge and difficulty, so I gave 10-year-olds problems that were slightly too hard for them. Some of them reacted in a shockingly positive way. They said things like, "I love a challenge,"
40
78
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=40s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
or, "You know, I was hoping this would be informative." They understood that their abilities could be developed. They had what I call a growth mindset. But other students felt it was tragic, catastrophic. From their more fixed mindset perspective, their intelligence had been up for judgment, and they failed. Instead of luxuriating in the power of yet, they were gripped in the tyranny of now.
78
119
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=78s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
So what do they do next? I'll tell you what they do next. In one study, they told us they would probably cheat the next time instead of studying more if they failed a test. In another study, after a failure, they looked for someone who did worse than they did so they could feel really good about themselves. And in study after study, they have run from difficulty.
119
151
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=119s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
Scientists measured the electrical activity from the brain as students confronted an error. On the left, you see the fixed-mindset students. There's hardly any activity. They run from the error. They don't engage with it. But on the right, you have the students with the growth mindset, the idea that abilities can be developed. They engage deeply. Their brain is on fire with yet.
151
184
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=151s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
They engage deeply. They process the error. They learn from it and they correct it. How are we raising our children? Are we raising them for now instead of yet? Are we raising kids who are obsessed with getting As? Are we raising kids who don't know how to dream big dreams? Their biggest goal is getting the next A, or the next test score? And are they carrying this need for constant validation with them
184
225
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=184s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
into their future lives? Maybe, because employers are coming to me and saying, "We have already raised a generation of young workers who can't get through the day without an award." So what can we do? How can we build that bridge to yet? Here are some things we can do. First of all, we can praise wisely, not praising intelligence or talent. That has failed. Don't do that anymore.
225
262
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=225s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
But praising the process that kids engage in, their effort, their strategies, their focus, their perseverance, their improvement. This process praise creates kids who are hardy and resilient. There are other ways to reward yet. We recently teamed up with game scientists from the University of Washington to create a new online math game that rewarded yet. In this game, students were rewarded for effort, strategy and progress.
262
300
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=262s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The usual math game rewards you for getting answers right, right now, but this game rewarded process. And we got more effort, more strategies, more engagement over longer periods of time, and more perseverance when they hit really, really hard problems. Just the words "yet" or "not yet," we're finding, give kids greater confidence, give them a path into the future that creates greater persistence.
300
339
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=300s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
And we can actually change students' mindsets. In one study, we taught them that every time they push out of their comfort zone to learn something new and difficult, the neurons in their brain can form new, stronger connections, and over time, they can get smarter. Look what happened: In this study, students who were not taught this growth mindset continued to show declining grades over this difficult school transition,
339
376
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=339s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
but those who were taught this lesson showed a sharp rebound in their grades. We have shown this now, this kind of improvement, with thousands and thousands of kids, especially struggling students. So let's talk about equality. In our country, there are groups of students who chronically underperform, for example, children in inner cities, or children on Native American reservations.
376
414
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=376s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
And they've done so poorly for so long that many people think it's inevitable. But when educators create growth mindset classrooms steeped in yet, equality happens. And here are just a few examples. In one year, a kindergarten class in Harlem, New York scored in the 95th percentile on the national achievement test. Many of those kids could not hold a pencil when they arrived at school.
414
457
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=414s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
In one year, fourth-grade students in the South Bronx, way behind, became the number one fourth-grade class in the state of New York on the state math test. In a year, to a year and a half, Native American students in a school on a reservation went from the bottom of their district to the top, and that district included affluent sections of Seattle. So the Native kids outdid the Microsoft kids.
457
505
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=457s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
This happened because the meaning of effort and difficulty were transformed. Before, effort and difficulty made them feel dumb, made them feel like giving up, but now, effort and difficulty, that's when their neurons are making new connections, stronger connections. That's when they're getting smarter. I received a letter recently from a 13-year-old boy. He said, "Dear Professor Dweck,
505
544
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=505s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
I appreciate that your writing is based on solid scientific research, and that's why I decided to put it into practice. I put more effort into my schoolwork, into my relationship with my family, and into my relationship with kids at school, and I experienced great improvement in all of those areas. I now realize I've wasted most of my life." Let's not waste any more lives,
544
587
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=544s
The power of believing that you can improve | Carol Dweck
https://i.ytimg.com/vi/_…axresdefault.jpg
9-o2aAoN0rY
hi there, today we're looking at Fast Reinforcement Learning with Generalized Policy Updates by André Barreto, Shaobo Hou, Diana Borsa, David Silver and Doina Precup. So on a high level, this paper proposes a framework for reinforcement learning where you have many tasks at the same time, and where you learn many policies at the same time that
0
26
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=0s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
can or cannot correspond to these tasks and then their argument is that if you now have a new task that you haven't seen before you can easily construct a solution to that task from your old policies basically mixing what you learned about your old tasks and it's a pretty general framework and we're going to look at it in my opinion it's it's pretty cool for certain settings
26
51
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=26s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
however i think it kind of breaks down the the more general you go which i guess is expected um of such a framework but uh it's as you can see it's kind of math heavy but we'll get into the examples and um what it's potentially useful for all right so that was it on a high level if you like content like this don't hesitate to subscribe to the channel and share it out leave a like and tell
51
79
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=51s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
me in the comments what you think i'm still reading all of them uh so i will see it cool let's dive in so they say the combination of reinforcement learning with deep learning is a promising approach to tackle important sequential decision making problems that are currently intractable well they're taking they're talking about you know things like um mostly these game playing ais like go
79
109
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=79s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
and things like this so we're this combination of deep learning with reinforcement learning has really shined or shun whatever one obstacle to overcome is the amount of data needed by learning systems of this type so again if you look at these systems like alphago they need a simulator and they need to collect enormous amounts of data um even more so with systems like
109
136
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=109s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
the Dota AI, OpenAI Five, or the StarCraft-playing AlphaStar, they need so many simulations in order to learn the tasks because they always start from scratch. In this article they say: we propose to address this issue through a divide and conquer approach. We argue that complex decision problems can be naturally decomposed into multiple tasks
136
164
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=136s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
that unfold in sequence or in parallel by associating each task with a reward function this problem decomposition can be seamlessly accommodated within the standard reinforcement learning formalism okay so what are they saying right here they are basically saying that if you have a task let's say you want to get whoopsie from here to here and that's very complicated
164
191
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=164s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
let's make it complicated super duper complicated you can basically subdivide that task into multiple subtasks right so here it's like left turn right turn go straight left turn go straight right turn and so on and each of these subtasks you can see the two right turns here might share a lot of common information there could also be tasks that are at the same time like you need to go
191
218
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=191s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
forward and jump can be decomposed into going forward and to jump now they're saying is if each of these tasks now has its separate reward function in the environment like for some reason the environment tells you this by the way is task task one and you're gonna get a positive reward if you do a right turn and this down here is task two the the left turn task and you're
218
245
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=218s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
gonna get a positive reward if for that task so the entire task state can be decomposed into a vector so in our case here we have maybe a vector with three elements okay the three elements correspond to turn right go straight and turn left and now your this this right here is your reward vector so we're no longer talk in this framework we're no longer talking about
245
275
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=245s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
just a reward we're talking about a reward vector now each of these tasks is going to give you its own individual reward so let's say you're here and you're actually turning right this is going to give you a reward of one for this task but reward of zero for the other task okay so the environment will somehow tell you which tasks you you get reward for now there is a notion
275
305
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=275s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
where you can map this back to a single number and that is the second thing they introduce here so the second thing they introduce here is this thing they call w so w is going to be a mixing vector w is going to be a vector i will call w right here this is the reward vector w is going to be the vector that tells you your final reward so here we're going to do an inner product
305
333
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=305s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
so we're going to transpose this and multiply by w and w mixes these rewards and comes up with your final reward right here so this this is maybe the reward vector this is the reward number how we're going to call this reward number so in this case w would have to look something like this let's say this is an example so the task right here would be to only do
333
363
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=333s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
right turns now this is not a really nice example we're going to see some nicer examples later on but you can see that now the environment is specified as a vector of rewards and you can create the specific tasks like turning right simply by adjusting how you mix these different things by this vector w and this is going to be the key ingredient here so they discuss
363
389
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=363s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
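As a concrete illustration of the reward vector and the mixing vector w described above, here is a minimal sketch (my own, not the paper's code): the scalar reward of a task is the inner product of the per-step reward vector with the task's w. The array values are illustrative.

```python
import numpy as np

# per-step reward vector: [turned right, went straight, turned left]
r_vec = np.array([1.0, 0.0, 0.0])         # the agent just made a right turn

w_right_only = np.array([1.0, 0.0, 0.0])  # task: reward only right turns
reward = r_vec @ w_right_only             # inner product -> 1.0
```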
9-o2aAoN0rY
your general reinforcement learning, the reinforcement learning lingo, and I think we've gone through this a number of times. Just very quickly: in reinforcement learning you're given these transitions, you are in a state, you take an action, and that leads you to get a reward r prime and you get into a state s prime, the next state. They say the reward is given by the
389
420
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=389s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
reward function so the reward is purely a function of where you are and what you do and where you get to now most reinforcement learning problems you can actually kind of forget about this part right here because well it isn't it is kind of important but you could um most reinforcement learning problems the reward is simply a matter of where you are and what you do
420
444
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=420s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
and this can be a random variable there can be randomness but maybe it's easier if you for now think about the reward simply as a function of these two things so what you want to discover is a policy pi where you input you input where you are and the output is going to be what should you do in that situation okay uh that is a policy and associated with each policy is this
444
471
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=444s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
thing called a q function so you can see right here the q function of a policy um is going to be a function of where you are and what you do and this is a bit confusing but it basically means that you are in state s so you are here and you have let's say three options action one action two action three to do now the q function tells you the q function this is s and the a's are
471
500
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=471s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
the numbers okay so let's say we plug in the state s and for a we plug in number two what it will tell you is if i am in state s and i perform action number two then how valuable is that for me and value is defined by all the reward that i'm going to pick up from now until the end of time or the end of the episode it depends um but let's say until the end of time
500
530
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=500s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
well how much how much reward am i going to pick up from now until the end of time is a bit of a vague not a vague question but a difficult question i can tell you how much i could estimate how much reward i'm going to pick up in the next step because i know what action i'm doing i'm performing action number two but what happens after that who knows so that's where this policy right here
530
555
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=530s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
comes in this policy right here says so the full definition of the q function is if i'm in state s and i perform action a right now and after that i follow policy pi what is my reward going to be well now it's well defined so right now you do action a and after that you do whatever action the policy tells you in that specific situation okay so that's the q function and you can pretty easily
555
585
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=555s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
see that if you have a q function right if you have an accurate q function you can get a good policy by simply always going with the action that gives you the highest q value because um it's because of a recurrence relationship called the the bellman equation uh this thing right here so your q function basically decomposes into the reward in the next step as we
585
610
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=585s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
said plus whatever happens after that and whatever happens after that is just by the nature of how the things are defined is going to be the q function of whatever the policy is telling you so you can get a pretty good policy by always doing whatever action your q function tells you is best this step of calculating the q function is called a policy evaluation and this paper here
610
642
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=610s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
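To make the policy-evaluation / policy-improvement loop just described concrete, here is a hedged tabular sketch, not the paper's implementation; Q, R, P and pi are assumed array shapes for a small MDP.

```python
import numpy as np

def policy_evaluation_sweep(Q, R, P, pi, gamma=0.9):
    # Bellman backup for a fixed policy pi:
    # Q[s, a] <- R[s, a] + gamma * sum_s' P[s, a, s'] * Q[s', pi[s']]
    next_values = Q[np.arange(Q.shape[0]), pi]   # Q(s', pi(s'))
    return R + gamma * P @ next_values           # P: (S, A, S'), R: (S, A)

def policy_improvement(Q):
    # act greedily with respect to the current Q estimate
    return Q.argmax(axis=1)
```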
9-o2aAoN0rY
is going to generalize these notions um sorry so this is a policy evaluation and then the act of selecting an action is going to be a policy improvement these are just names okay but we need to know them because the paper introduces two new things i'm going to where do i highlight policy evaluation i don't know but here they say this is the policy improvement
642
673
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=642s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
okay, ah, here: policy evaluation, policy improvement, these are the two steps. So the first step is to calculate the q function, the second step is to select an action, and you can see how these things interlock, namely we can calculate the q function of a given policy and we can improve that policy by selecting whatever action is best according to the q function. This paper generalizes this, and you can
673
705
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=673s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
see that there is a little a little r right here so the r is just a specific way to reference the reward function used right here okay and you can see it here as well now usually we have one policy and one reward right and so what we do is we improve the policy and that leads us to better evaluate the q function for a given reward function and that leads us to improve the policy
705
740
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=705s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
now this paper is going to transform this into the following we have many policies so we have policy one policy two and so on until policy i don't know p and we also have many reward functions reward 1 reward 2 reward 3 and so on until reward let's call that r so we have many different tasks right here and we have many policies now in essence they don't need to have some anything to
740
774
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=740s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
do with each other for the theory of this paper but i can simplify this a bit of how they see the world so let's say you have an agent and the agent has been trained on simply that first task right here and has been trained using classic q learning reinforcement learning what not and that results in this particular policy and then the agent just from scratch you restarted again
774
806
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=774s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
you run reinforcement learning just on reward number two and obtained policy number two and so on so you do this for all these rewards individually okay so you give the agent a new task and you ask it to learn a policy for that task now you're in a situation where if you are have a new task so are new the question is do you again need to train a new policy and the answer for this paper is no
806
839
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=806s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
because we have all these policies we don't need to train a new we can simply mix and match these policies that we already know to obtain a good solution for the new task so how does the paper do it it does it yeah it does it in the following it defines the successor features okay maybe it since maybe it's better if we first go to an example so the example they give here is the
839
872
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=839s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
following otherwise this i guess this might sound just a bit too abstract okay so you have this world here the agent is the thing here in yellow and it can just move so its actions are move left up right down this this is one step okay in the environment there are two different objects one object is a triangle and one object is a square okay so um there are a number of tasks we can
872
903
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=872s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
define right now in this thing so we define tasks according to a reward function so the reward let's say the reward one is going to be um one if if it picks up a square sorry the square and zero else just if it picks up a square on any given step we give it a reward of one it we don't care about the blue triangles okay and then reward two is going to be the opposite it's
903
939
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=903s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
going to be one not the opposite but one if it picks up a triangle and zero else so you can see the um good policies right here so pi one is a is a good policy for reward one because it just goes and and collects these red things doesn't care about the blue things just goes and collects them pi two it goes and collects the blue things doesn't care about the red things
939
966
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=939s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
okay so let's imagine that you have run reinforcement learning twice once for reward one and once for reward two and now you have two policies okay so you have two policies this will lead to pi one this will lead to pi two and now i give you the third task now the third task is a bit special it's one if you pick up a square and it's um it's zero else except it's negative one
966
1,007
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=966s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
if you pick up a blue thing well the order of these is kind of wrong but it just for visual representation okay so now you're asked to um pick up the red things but avoid the blue things okay pick up as many red things as you can avoid the blue things and again as we said the question is do you now have to run reinforcement learning again in this agent with your simulator using
1,007
1,037
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1007s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
like q learning or something like this from the start or can you come up with a solution just given these two policies that will perform well on the on this new task okay and we're going to see how they do it so what they do is they use successor features so these successor features um i've done a video about successor features and i'll link to that you can look at that but
1,037
1,073
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1037s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
essentially essentially the successor features are defined like this and for that we need to know what this thing is right here they simply call this a feature function okay it's very it's very um ambiguous term a feature function is a function that takes in a transition so state action next state and maps it to a high dimensional vector note this is almost the same as a reward function
1,073
1,104
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1073s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
except the reward function simply maps it to a number now this is mapped to a higher dimensional thing again i wanna i kind of wanna leave out the next state right here just to make things easier on you so a feature here can be many many things but the structure of the features is going to be such that the reward function is going to be this feature times this w
1,104
1,141
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1104s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
vector so it was a bit a bit not correct before when i said the reward is now a vector the reward of a particular task w can be seen as the inner product between the features and the task vector so w specifies the task and the features well they specify the features in our case it can be it can be fairly simple namely yes i was i was definitely wrong at the beginning so the feature
1,141
1,176
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1141s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
functions right here is which object do you pick up okay so we define the feature function as 1 0 if you pick up a square and we define the feature function as 0 1 if you pick up a triangle and now you can and we define it as we define it as 0 0 if you pick up nothing and now you can fairly easily see that the reward of each task can be simply calculated by mixing the features
1,176
1,210
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1176s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
accordingly okay so reward one is going to be simply the feature a 1 0 which is the w vector so i can specify a task by giving the appropriate w vector and now you can see that if this is my reward function my agent can go out into the world if it collects a square it is going to be rewarded right here if it collects a triangle even though the features indicate that it collected a triangle it
1,210
1,244
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1210s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
doesn't care about it because the w is 0 right here if i now want to give it the new tag the same is true for r2 if and i want to give it the new task r3 right and you remember the reward function right there i can achieve that reward function by simply multiplying the same features the exact same feature functions by this vector right here okay remember there is a slight difference between the
1,244
1,275
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1244s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
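A small sketch of the grid-world feature function and task vectors described here; the pickup encoding and names are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def phi(picked_up):
    # feature vector for one step: [picked up a square, picked up a triangle]
    if picked_up == 'square':
        return np.array([1.0, 0.0])
    if picked_up == 'triangle':
        return np.array([0.0, 1.0])
    return np.array([0.0, 0.0])      # picked up nothing

w1 = np.array([1.0, 0.0])    # task 1: collect squares
w2 = np.array([0.0, 1.0])    # task 2: collect triangles
w3 = np.array([1.0, -1.0])   # task 3: collect squares, avoid triangles

assert phi('square') @ w3 == 1.0 and phi('triangle') @ w3 == -1.0
```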
9-o2aAoN0rY
reward function and the feature function in this particular example the idea of the paper is that the feature function can be rich in in expressivity and you know tell you all sorts of things about your current state and the reward function is just a number right and then the the reward is specified by simply linearly mixing these features so the structure imposed by the
1,275
1,301
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1275s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
paper here is that there are such a thing as a feature and any task can be described by mixing these same features okay that's that's the issue right here so the features are going to be constant across tasks whereas the w defines the task all right so the the goal here is that if you have learned many many things um during your tasks what you want to do is you
1,301
1,343
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1301s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
want to learn this feature representation that is the same across all tasks and then you want to simply have the w specify how to mix these features to get the reward now of course this is a very strict very very definition not not a lot of things will fall into this unless you make the features like exponentially big of course um however they do discuss whenever a task doesn't fall
1,343
1,371
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1343s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
into that so i hope you're with me so far this is the first kind of restriction we impose on our worlds that we can tackle with this framework namely that all of our worlds have all of our tasks in this world have to be a linear mix of the same features if that's given then our um then we can derive policies for tasks that we have never seen we can derive good policies by
1,371
1,402
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1371s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
doing zero learning simply by specifying the task we can have a good policy for that task from the policies we've already learned for the other tasks okay so the reward three is now simply this and yeah notice it's not the same as the reward function because the reward function had one if you pick up the square negative one if you pick up the triangle and zero else so the zero we don't have to
1,402
1,430
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1402s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
specify here because it's not part of our features right so you can see that the reward function is given simply by that and we can now as i said derive a good policy for this reward by looking at the other policies even though none of these policies has ever learned to avoid anything so it makes it defines these successor features right here so the successor features
1,430
1,463
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1430s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
is much like the q function you can see the signature is almost the same so as a q function tells you um how much reward you're going to get if you do the action a and then follow policy pi the successor features almost the same thing however it doesn't tell you what rewards you're going to get it tells you which features you're going to get and which features by that we mean the
1,463
1,492
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1463s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
sum of future features now you can see this some this a little bit this uh it of course it comes from the fact of the linearity up here so it's not really an additional restriction but simply to clarify what this means for your environment your environment has to be able to be looked at in terms of these features and these features they need to be cumulative
1,492
1,520
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1492s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
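To make "sum of future features" concrete, here is a hedged Monte Carlo sketch of a successor-feature estimate along one rollout; env, policy and phi are hypothetical interfaces, and a real implementation would average over many rollouts or learn this with a TD update.

```python
import numpy as np

def successor_features_rollout(env, policy, phi, state, action,
                               gamma=0.99, horizon=200, n_features=2):
    # psi^pi(s, a) ~= sum_t gamma^t * phi_t, following pi after the first action
    psi = np.zeros(n_features)                 # n_features must match phi's output
    for t in range(horizon):
        next_state, picked_up, done = env.step(state, action)  # assumed API
        psi += (gamma ** t) * phi(picked_up)
        if done:
            break
        state, action = next_state, policy(next_state)
    return psi
```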
9-o2aAoN0rY
again that comes from the fact that it's linear but to see so a feature like i want an an even number of steps or something like this would be terrible uh because and they're going into things like this later but it would be terrible because here we have the sum and um as soon as you if you have a feature that is very high if you have an even number of steps then
1,520
1,548
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1520s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
um or if you have a feature that counts the steps you will never be able to to do well because if you have a feature that counts the steps it simply counts up and up and up and up depending on how many steps you do and your reward can never be specified in terms of a mix of these features and therefore your successor features are going to be useless but in our case where it's where feature
1,548
1,577
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1548s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
one is, sorry, let me rephrase: our feature one is whether or not you pick up a square in a particular step, therefore if we sum it up, our successor feature one is going to be the number (this is a pound sign, #) of squares that you pick up. Okay, similarly, our feature two is whether or not you pick up a triangle in a particular step, so our successor feature number two is
1,577
1,615
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1577s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
going to be the number of triangles that you pick up over time you can see that the successor features is kind of the analogous of your q function but it is not in terms of a single number the reward it is going to be in terms of these features which is an entire vector okay and because we've constructed this in a linear way you can also pretty clearly see that the
1,615
1,640
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1615s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
q function is inherently related to the uh to the successor features you can obtain the q function by simply multiplying the successor features by your task vector w now a lot of you might be wondering where does this w come from and in our initial case we're just going to frame everything as being given right so we're we're given this this w we're we're defining everything from
1,640
1,674
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1640s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
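The relation just stated, obtaining the q function from the successor features and the task vector w, is literally one inner product; a tiny illustrative sketch with made-up numbers:

```python
import numpy as np

psi_pi = np.array([[1.3, 0.2],    # psi^pi(s, a1): expected future squares, triangles
                   [0.4, 1.1]])   # psi^pi(s, a2)
w = np.array([1.0, -1.0])         # task: squares are good, triangles are bad

q_pi = psi_pi @ w                 # Q^pi(s, a) for each action: [1.1, -0.7]
```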
9-o2aAoN0rY
our god-like perspective for now so don't think all of this is learned by now um yeah all right so how can you now derive this this magical new policy okay so we let's say we have this policy one and we have the policy two and they and you have the this features that you've learned constantly over both tasks in fact your it's given right it this pi function we give it
1,674
1,706
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1674s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
we impose that feature one is whether you pick up a red square, feature two is whether you pick up a blue triangle. Then we know that the reward functions can be achieved via the w, so this here, your w, is going to be one zero, and your w here is going to be zero one, and now we want a good policy for task three, and we know we can achieve this with the one negative one
1,706
1,734
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1706s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
w, how can we derive a good policy? And this is this algorithm, this generalized policy evaluation and generalized policy improvement. So it assumes, as we said, that you have many different policies, so here you can see policy one, here's policy two, and so on. It assumes that you have many different features and therefore many different
1,734
1,766
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1734s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
successor features in fact you have a vector of them right so here you can see feature one feature two and so on and it also assumes that you're in a current state and you have many actions at your disposal right now action one action two and so on okay so this is all the past you've already defined your features you have learned these policies and now you're given a new w w
1,766
1,794
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1766s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
new in our case it's this one negative one and we want the best action so we are in state s we are given this w we want the best action now here is a method where we can simply calculate the best action in terms by by not reinforcement learning at all in this new task so by structuring things like this here so what does it really say here it this thing says we are going to evaluate
1,794
1,830
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1794s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
all of these different cells of this tensor right here so we're going to determine what is the successor feature number two for policy pi one um in state s if i right now do a2 this is very abstract so let's say you're here and action action two is actually going to the right okay so you're here oh this was yellow it doesn't matter so this is so this is action one this is action two
1,830
1,863
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1830s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
so action two is you go to the right okay you can you can see that this will let you pick up um we'll let you pick up a triangle now here that's action three and so on okay so what's this number going to be so we are in state s as we said and we do action two so action two is going to pick up a triangle the triangle the picking up of a triangle means that our pi for this step
1,863
1,904
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1863s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
or sorry, our phi for this step is going to be zero one. Okay, so our successor features, this is not the features itself, this is the successor features, the successor features decompose into the next step plus all the next steps that follow. Okay, so all the steps that will come. So what are these features going to be? It's going to be the sum over that plus everything that follows
1,904
1,936
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1904s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
and i can take a little bit of a guess here which means that this number so we only care about feature two right here this feature feature two this number is going to be one for the next step because we are going to pick up a triangle if we do action two but then after that we're going to follow policy one and policy one has been trained to pick up the red squares and not care
1,936
1,965
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1936s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
about triangles so i'm going to guess that every now and then it will kind of step over a triangle but it won't fall we won't you know explicitly go look for them so let's say the episode was 10 more steps but the board has like 100 squares so and it has like three triangles on it so let's say that's like three-tenths um in expectation okay so this is going to be this is going to
1,965
1,998
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1965s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
be the number that we're looking for we're doing this for every single one of these cells okay this this thing is going to do for every single one of these cells and this is very similar to evaluating q functions except we're evaluating an entire vector right here that's the difference to simply learning many q functions so if you were to evaluate only a q function then you would only
1,998
2,026
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1998s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
have this first matrix this first block right here okay but you have feature one feature two and so on so you calculate everything in terms of these features and then by linearity you can mix it with that vector so in our case this is going to be the one negative one which will give you the q functions right from what we've seen before you obtain a q function by simply
2,026
2,054
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2026s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
mixing your successor features with your um with this task vector and if you have a q function you can pretty easily determine uh which action you should take now you have here a q function with respect to every single policy but you can simply take the max right so the max across all of this will determine um will determine so you could take the max across all the
2,054
2,083
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2054s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
policies which will give you the q function for a particular action over all policies that you consider and then you can simply take the arg max of that and determine the action you should take okay so it's a pretty big evaluation but if you do this that means you don't have to do reinforcement learning on this task it simply determines which action right now is the best given everything
2,083
2,113
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2083s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
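The max-then-argmax step described here, generalized policy improvement over the successor features of all known policies, can be sketched as follows; shapes and names are assumptions for illustration.

```python
import numpy as np

def gpi_action(successor_feats, w_new):
    # successor_feats[i, a, :] = psi^{pi_i}(s, a) for the current state s
    q = successor_feats @ w_new           # (n_policies, n_actions)
    return int(q.max(axis=0).argmax())    # best action across all old policies
```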
9-o2aAoN0rY
that i know from these old policies about the task and that's not going to be like the optimal policy uh per se but it's going to be one policy that's pretty pretty good and you can actually prove some things across that so they do this right here and you can see that here is what q learning does on this new task of picking up the squares and avoiding the triangles q learning
2,113
2,146
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2113s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
takes a while to get there however if you do what they are suggesting and you know you give the w you can supply the w almost from the beginning you see right here almost from the beginning it is at a high reward now q learning surpasses it eventually but um it's pretty impressive that without doing any learning you are immediately good right now the caveat here of course is that
2,146
2,177
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2146s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
they already need these policy pi 1 and pi 2 given to the algorithm and that comes from previous reinforcement learning trials and they say that they give these trials as many steps as q learning uses so they give them this these amounts of steps on these other tasks so the comparison here is a bit shaky if you ask me but the point made is that if you have a new task right now
2,177
2,206
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2177s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
you can obtain very good solutions uh and you don't have to do anything okay and these solutions can be the basis for new reinforcement learning right you could start q learning off right here and then get here much faster potentially and so on so the next objective right here is that now we have defined the tasks and we had we know what these features are and we know how
2,206
2,232
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2206s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
to mix these features as imposed by the task. So what happens if we only have the reward function, we specify the task only in terms of the reward function, but we're kind of looking at the features and we're like, agent, please figure out yourself how to apply these features in order to make the reward high. And that's what this thing is right here, this GPE and GPI with regressed w. So you
2,232
2,261
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2232s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
don't no longer tell it what the w is um it needs to infer it through reinforcement learning right and it's not really reinforcement learning but what it does where is it yeah it simply it because all of this is linear and this thing here is given so always remember this thing here is given and these are the rewards that you obtain you can simply do a regression
2,261
2,289
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2261s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
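The regression mentioned here is an ordinary least-squares fit of w from observed (feature, reward) pairs, since r is approximately phi · w; a minimal sketch, not the paper's code:

```python
import numpy as np

def regress_w(phis, rewards):
    # phis: (n_transitions, d) observed feature vectors
    # rewards: (n_transitions,) observed scalar rewards
    w, *_ = np.linalg.lstsq(np.asarray(phis), np.asarray(rewards), rcond=None)
    return w
```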
9-o2aAoN0rY
to figure out the w of the task now that's going to take some time but as you can see right here it is going to take um a lot less time than sim than doing q learning from scratch notably because you have good features so this is some this is this gets closer and closer to transfer learning right if you imagine that this right here is your pre-trained neural network and
2,289
2,318
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2289s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
you simply learn the last layer of it um you you freeze this you do transfer learning fine tune the last layer here we are so um it gets closer and closer and you'll see this trend right here so it's pretty cool what you can do but basically i think it's a lot of math around a framework and the more and more you relax the kind of impositions uh that they need for their framework the more it
2,318
2,352
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2318s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
gets back to simply well we do reinforcement learning at least in my um estimation so before we look at that this here is a pretty pretty cool experiment where they they look at how the how the different tasks can be achieved if you give different policies so you'll have noticed that we have always given these two two tasks one zero and zero one these were our tasks that we trained on
2,352
2,386
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2352s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
and then one negative one is the task we evaluated on. Okay, and you might object and say, wait a minute, these two tasks, you know, they're pretty good as, let's say, pre-training tasks, because it's basically the standard basis, right, and any other task can be mixed from those, so these are orthogonal vectors in this vector space. So you're being pretty generous to this
2,386
2,414
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2386s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
system what happens if we're not as generous so that's what they do here so they have different um policies and they evaluate how much you can learn with these different policies so the way you have to read this diagram is right here is going to be the one zero axis as they well they label it right here and this is going to be the zero one axis and this is evaluation
2,414
2,441
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2414s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
so every direction on this circle defines a task for example this task right here as you can see is going to define the task of picking up both the squares and the triangles right whatever you pick up you get a reward however the task down here is going to be please pick up the squares but avoid the triangles at all cost okay and now they're going to look what happens if we supply different
2,441
2,472
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2441s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
policies to choose from. Remember, we're again in this situation where we give everything: we give initial policies, we give the task vector, and now it's about deriving a good policy just from looking at the old policies, so no learning. As a baseline you have q learning, which for a given direction tells you basically how long q learning takes, or how
2,472
2,500
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2472s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg
9-o2aAoN0rY
far q learning gets with a given amount of steps, indicated by this one, two, three, four and so on. Yeah, you see, how far q learning gets with these amounts of steps is the dotted lines right here, so q learning gets this far with 10 to the, I don't know, 4, and then this far with 10 to the 5, and so on. So these are comparisons, you can see
2,500
2,532
https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2500s
Fast reinforcement learning with generalized policy updates (Paper Explained)
https://i.ytimg.com/vi/9…axresdefault.jpg