Dataset columns (from the viewer header):
video_id: string (length 11)
text: string (lengths 361 to 490)
start_second: int64 (range 0 to 11.3k)
end_second: int64 (range 18 to 11.3k)
url: string (lengths 48 to 52)
title: string (lengths 0 to 100)
thumbnail: string (lengths 0 to 52)
ZPewmEu7644
Hi, this is Jeff Heaton. Welcome to Applications of Deep Neural Networks at Washington University. In this video we're going to look at how we can use GANs to generate additional training data. For the latest on my AI course and projects, click subscribe and the bell next to it to be notified of every new video. GANs have a wide array of uses beyond just the face generation that you
0
20
https://www.youtube.com/watch?v=ZPewmEu7644&t=0s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
often see them used for. They can definitely generate other types of images, but they can also work on tabular data and really any sort of data where you are attempting to have a neural network that is generating data that could be classified as real or fake. The key element to having something be a GAN is having that discriminator that tells the difference,
20
41
https://www.youtube.com/watch?v=ZPewmEu7644&t=20s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
and the generator that actually generates the data. Another area that we are seeing GANs used for a great deal is semi-supervised training. So let's first talk about what semi-supervised training actually is and see how a GAN can be used to implement it. First, let's talk about supervised training and unsupervised training, which you've probably seen in previous machine
41
64
https://www.youtube.com/watch?v=ZPewmEu7644&t=41s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
learning literature, but just in case you haven't: supervised training is what we've been doing up to this point. I would say probably the vast majority of this class is in the area of supervised learning. This is where you have multiple x's in the case of tabular data, or grids and other things in the case of image data, but you have some sort of input coming in, which is the x, and you
64
89
https://www.youtube.com/watch?v=ZPewmEu7644&t=64s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
know what the correct y's are. You are going to train the model to produce these y's when you have these x's, because later on you're going to have x's coming in where you don't know what the y is, and that's where you want the neural network, or other model, to give you some estimate of what the y value is actually going to be. Unsupervised training is where we have
89
115
https://www.youtube.com/watch?v=ZPewmEu7644&t=89s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
the x's. It could look just like this; it would work with image data, tabular data, or really just about anything, but there is no y. We're letting the neural network, or whatever model it is, do the work, and by the way, you don't typically use neural networks for unsupervised training; this is usually the area of things like k-means clustering and other classical methods.
115
141
https://www.youtube.com/watch?v=ZPewmEu7644&t=115s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
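A minimal illustration of that kind of clustering, assuming scikit-learn; the synthetic data and cluster count are assumptions for the demo, not from the course:

```python
# Tiny k-means illustration: cluster unlabeled rows, no y anywhere.
import numpy as np
from sklearn.cluster import KMeans

x = np.random.rand(200, 4)                       # 200 unlabeled rows, 4 features
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(x)
print(clusters[:10])                             # cluster index per row, learned without labels
```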
ZPewmEu7644
Your classic unsupervised training is just going to take the inputs and cluster them in such a way that similar ones are together. These could be similar images, these could be similar inputs in tabular data, a variety of things. Semi-supervised training is actually much closer to supervised training, I would say, than unsupervised, and this is where GANs really shine. In semi-supervised training you have x's,
141
166
https://www.youtube.com/watch?v=ZPewmEu7644&t=141s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
just like you have in these others, but you don't have a label or a y for every single one of them. You might have labels for a small number of them, but by no means is the complete data set labeled. Traditionally, what would be done is that the values that were not labeled would be left out, because there was no way to feed them into traditional supervised learning, or you would train
166
188
https://www.youtube.com/watch?v=ZPewmEu7644&t=166s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
the model on the ones that you did have y's for, with classic backpropagation or however you were training that particular model. Then you would create predictions, y predictions, for all the missing values, and then retrain the whole thing on the predicted values together with the others. In practice I never had a great deal of success with that technique, but there is some theoretical basis for it.
188
213
https://www.youtube.com/watch?v=ZPewmEu7644&t=188s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
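A minimal sketch of that classic self-training (pseudo-labeling) loop, assuming scikit-learn; the array names and the classifier choice are assumptions for illustration, not from the video:

```python
# Hypothetical illustration of classic self-training (pseudo-labeling).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(x_labeled, y_labeled, x_unlabeled):
    # 1. Train only on the rows that actually have labels.
    model = LogisticRegression(max_iter=1000)
    model.fit(x_labeled, y_labeled)
    # 2. Predict labels for the unlabeled rows.
    y_pseudo = model.predict(x_unlabeled)
    # 3. Retrain on real labels plus pseudo-labels.
    x_all = np.vstack([x_labeled, x_unlabeled])
    y_all = np.concatenate([y_labeled, y_pseudo])
    model.fit(x_all, y_all)
    return model
```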
ZPewmEu7644
With semi-supervised training and GANs, we'll see that there's a way that we are able to actually make use of these. Now, semi-supervised training does make sense from a biological standpoint. Think about a child who is seeing all sorts of vehicles as they go about their daily life with their parents or whoever they're with, and they're seeing all these vehicles as they pass on the
213
240
https://www.youtube.com/watch?v=ZPewmEu7644&t=213s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
street, and they're not labeled; nobody is telling them "that's a vehicle." Seeing just a barrage of images as they grow up, they learn edges, they learn other sorts of things, they learn how to classify whether something is on top of something else, just by observing; there are no particular labels. Then eventually somebody says, hey, that's a bus, that's a train,
240
262
https://www.youtube.com/watch?v=ZPewmEu7644&t=240s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
that's a bicycle. Using that small handful of labels that they're given, when somebody actually tells them what they're looking at or they verify it independently, that is semi-supervised training, because it is building on those years and years of unlabeled data where they didn't know what they were looking at, but they knew they were looking at something, and it gives them an
262
286
https://www.youtube.com/watch?v=ZPewmEu7644&t=262s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
additional reference. It's exactly the same thing with semi-supervised training: these values, even though we don't have y's for them, are still valuable for the neural network to be learning structure in this data as it is learning to predict the ones that we do in fact have the y's for. So let's look at the structure for this. This is the structure of a normal image-generating GAN,
286
310
https://www.youtube.com/watch?v=ZPewmEu7644&t=286s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
the baseline, so to speak, where the research started. We saw this before, but just to quickly review: we have actual images that go into a discriminator, and we have the generated images from the generator. The cyan pieces, those are the two neural networks. Random seed values are causing that generator to generate images; the discriminator is learning to better and better discriminate between actual and generated images.
310
333
https://www.youtube.com/watch?v=ZPewmEu7644&t=310s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
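A minimal sketch of that baseline GAN training step in Keras; the layer sizes, image shape, and training details are illustrative assumptions, not the exact model from the course:

```python
# Baseline-GAN sketch: discriminator learns real vs. generated,
# generator learns to fool it. Sizes here are placeholder assumptions.
import numpy as np
from tensorflow.keras import layers, models

LATENT = 100  # size of the random seed vector

generator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(LATENT,)),
    layers.Dense(28 * 28, activation="sigmoid"),     # a fake flattened "image"
])

discriminator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(28 * 28,)),
    layers.Dense(1, activation="sigmoid"),           # P(real)
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model used only to train the generator (discriminator frozen here).
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_images, batch=32):
    # real_images: array of shape (batch, 784)
    noise = np.random.normal(size=(batch, LATENT))
    fake_images = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(real_images, np.ones((batch, 1)))   # real -> 1
    discriminator.train_on_batch(fake_images, np.zeros((batch, 1)))  # fake -> 0
    gan.train_on_batch(noise, np.ones((batch, 1)))  # generator: make fakes look real
```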
ZPewmEu7644
The generator is learning to create better and better images that fool the discriminator. Now, once this is all done, you keep the generator, because it generates images for you, and you likely throw away the discriminator; it was just there for the generator to practice against. We'll see that this flips for semi-supervised learning. In semi-supervised learning we
333
357
https://www.youtube.com/watch?v=ZPewmEu7644&t=333s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
care about the discriminator and not so much the generator; we typically throw the generator away. This is how you would train a semi-supervised classification neural network. It's very, very similar to the diagram that we just looked at. In this case we're looking at how we would train it on tabular data, say a medical record. The discriminator would learn to tell the difference between a fake
357
380
https://www.youtube.com/watch?v=ZPewmEu7644&t=357s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
medical record, or whatever the generator is generating, and a real one. This part is all the same as the previous one, as is this part. The difference is that we're now training it to tell not just the difference between fake and real (these are the real ones, and this is fake); we're teaching it to learn classes. So there are four different classes of, say, medical record that we're looking at, maybe four
380
404
https://www.youtube.com/watch?v=ZPewmEu7644&t=380s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
different health levels. We're teaching it, as a classification neural network, to classify between five things: the four classes that we're actually interested in, and whether it is a fake. Once we're done training the whole thing, we now have this discriminator that can tell the difference between fake and what the classes are. We also have the generator that is able to generate these fake medical records.
404
425
https://www.youtube.com/watch?v=ZPewmEu7644&t=404s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
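A minimal Keras sketch of that K+1-class discriminator, assuming four real classes plus one "fake" class; the feature width and layer sizes are placeholders, not from the course:

```python
# Hypothetical K+1-class discriminator for semi-supervised GAN training:
# 4 real classes + 1 extra "fake" class, as described above.
from tensorflow.keras import layers, models

N_FEATURES = 64   # width of one (tabular) medical record -- assumed
N_CLASSES = 4     # real classes we actually care about

discriminator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(N_FEATURES,)),
    layers.Dense(64, activation="relu"),
    # N_CLASSES + 1 outputs: the extra softmax unit is the "fake" class.
    layers.Dense(N_CLASSES + 1, activation="softmax"),
])
discriminator.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy")

# Labeled real records use labels 0..3; generated records use label 4 ("fake").
# After training, this network is the classifier you keep.
```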
ZPewmEu7644
But we can then throw away the generator, and we'll use the discriminator truly as our actual neural network. Now, for the medical records where we don't have the y, so we're missing this, we still feed those in; it's just that now we're evaluating it not based on whether it classified them correctly, but just on whether it knew the difference between fake and real. The
425
449
https://www.youtube.com/watch?v=ZPewmEu7644&t=425s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
Street View House Numbers data set is an image data set that is often used to demonstrate semi-supervised GAN learning, and I have a link to a Keras example, external to this class, that demonstrates this if you're interested in this sort of technology. What this does is you have data on these addresses, from images that were taken on the sides of buildings, and not all of those are
449
474
https://www.youtube.com/watch?v=ZPewmEu7644&t=449s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
labeled, or you simulate them not all being labeled, and you see that the GAN is capable of learning to classify these ten digit types even though it doesn't have labels on each of those. Now, if you want to do the same thing for regression, it becomes very similar. You have two outputs, so you have a multi-output neural network: one is the actual regression value that you're trying to
474
499
https://www.youtube.com/watch?v=ZPewmEu7644&t=474s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
train it on, and the other is the probability that it's a fake record being generated. Again, I'm using tabular data as just the example; these could be medical records, and perhaps the regression output would be a health level, or maybe a guess at how old the patient is, or some other value, perhaps a prediction of whether they have a current disease or not. So it's doing the same
499
525
https://www.youtube.com/watch?v=ZPewmEu7644&t=499s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
two things. When you feed in medical records where we don't know the y output, then we only look at this fake-record probability: when we're feeding in values where we have the medical record but don't have the y, we just want to make sure that the fake-record probability comes out right, and that's built into the training. We don't so much care about what it's regressing
525
549
https://www.youtube.com/watch?v=ZPewmEu7644&t=525s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
on, or what the regression output is. For ones where we do have it, we're penalizing it based on how close or how far away it was from the expected y. And just like the classification one, when we're all done with this, we throw away the generator, and the discriminator becomes the semi-supervised neural network that was trained on this.
549
572
https://www.youtube.com/watch?v=ZPewmEu7644&t=549s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
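A minimal Keras sketch of that two-output discriminator for the regression variant; zeroing the regression loss for rows with no y (via per-output sample weights) is one simple way to express "only the fake head counts" for unlabeled records, and all names and sizes here are assumptions:

```python
# Hypothetical two-output discriminator for semi-supervised regression:
# one head regresses y, the other estimates P(record is fake).
import numpy as np
from tensorflow.keras import layers, models

N_FEATURES = 64  # assumed record width
inputs = layers.Input(shape=(N_FEATURES,))
h = layers.Dense(128, activation="relu")(inputs)
h = layers.Dense(64, activation="relu")(h)
y_out = layers.Dense(1, name="regression")(h)                    # predicted y
fake_out = layers.Dense(1, activation="sigmoid", name="fake")(h) # P(fake)

disc = models.Model(inputs, [y_out, fake_out])
disc.compile(optimizer="adam",
             loss={"regression": "mse", "fake": "binary_crossentropy"})

def train_batch(x, y, is_fake, has_label):
    # Rows without a y (and generated rows) get regression weight 0,
    # so they are judged only on real-vs-fake, as described above.
    disc.train_on_batch(
        x, {"regression": y, "fake": is_fake},
        sample_weight={"regression": has_label.astype("float32"),
                       "fake": np.ones(len(x), "float32")})
```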
ZPewmEu7644
Now, if you want to go further with this semi-supervised learning technique, I've given you a couple of links to articles that I found useful for this. There is a link to the actual house numbers data set; that's a pretty interesting data set to look at. It has all those house numbers, and you can deal with it in several ways: you can classify the individual digits (they give you the bounding rectangles around the digits), and they also
572
592
https://www.youtube.com/watch?v=ZPewmEu7644&t=572s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
give you just the bounding rectangle of the entire set of digits if you want. So you can be classifying digits, or you can be classifying the entire address; it just depends on how you want to set up the problem. The examples that I give you here were using individual digits. This is the original paper that first started looking at this: Unsupervised Representation Learning with Deep
592
616
https://www.youtube.com/watch?v=ZPewmEu7644&t=592s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
ZPewmEu7644
Convolutional Generative Adversarial Networks. I have a link to this paper in the module. Thank you for watching this video. In the next video we're going to take a look at some of the most cutting-edge and current research into GANs; it's a very active area of research and this content changes often, so subscribe to the channel to stay up to date on this course and other
616
639
https://www.youtube.com/watch?v=ZPewmEu7644&t=616s
GANS for Semi-Supervised Learning in Keras (7.4)
https://i.ytimg.com/vi/Z…axresdefault.jpg
g4M7stjzR1I
You're one of the only people who dared boldly to try to formalize the idea of artificial general intelligence, to have a mathematical framework for intelligence, as we mentioned, termed AIXI. So let me ask the basic question: what is AIXI? Okay, so let me first say what it stands for, because what it stands for, actually that's probably the
0
33
https://www.youtube.com/watch?v=g4M7stjzR1I&t=0s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
more basic question. But the first question is usually how it's pronounced; finally I put it on the website how it's pronounced, and you figured it out. Yeah. The name comes from AI, artificial intelligence, and the XI is the Greek letter xi, which I use for Solomonoff's distribution, for quite stupid reasons which I'm not willing to repeat here in front of the camera, so it
33
61
https://www.youtube.com/watch?v=g4M7stjzR1I&t=33s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
just happened to be more or less arbitrary that I chose the xi. But it also has nice other interpretations: there are actions and perceptions in this model, an agent has actions and perceptions over time, so this is a-index-i, x-index-i, so action at time i, followed by perception at time i. We'll go with that; I like the first part. I'm just kidding. I have some more
61
87
https://www.youtube.com/watch?v=g4M7stjzR1I&t=61s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
interpretations. So at some point, maybe five or ten years ago, I discovered in Barcelona, it was in a big church, there was some text engraved in stone, and the word "aixi" appeared there. I was very surprised, and happy about it, and I looked it up. It is Catalan, and it means, with some interpretation, "that's it," "that's the right thing to do." Yeah.
87
117
https://www.youtube.com/watch?v=g4M7stjzR1I&t=87s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
Eureka! Oh, so it's almost like destiny; it somehow came to you in a dream. There's also a Chinese word "ai xi," written like AIXI if you transcribe it in pinyin. Then the final one is that it is AI crossed with induction, and that's going more to the content now: good old-fashioned AI is more about, you know, planning in known deterministic worlds, and induction is more
117
143
https://www.youtube.com/watch?v=g4M7stjzR1I&t=117s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
about, often, you know, i.i.d. data and inferring models, and essentially what this AIXI model does is combine these two. And actually, I also recently heard that in Japanese "ai" means love, so if you can combine AIXI somehow with that, I think there might be some interesting ideas there. So let's then take the next step: can you maybe talk at the big level of what is this
143
171
https://www.youtube.com/watch?v=g4M7stjzR1I&t=143s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
mathematical framework? Yeah, so it consists essentially of two parts: one is the learning and induction and prediction part, and the other one is the planning part. So let's come first to the learning, induction, and prediction part, which I essentially explained already before. What we need for any agent to act well is that it can somehow predict what happens. I mean, if you have no idea what
171
198
https://www.youtube.com/watch?v=g4M7stjzR1I&t=171s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
your actions do, how can you decide which actions are good or not? So you need to have some model of what your actions effect. What you do is you have some experience, you build models, like scientists, you know, of your experience, then you hope these models are roughly correct, and then you use these models for prediction. And the model is, sorry to interrupt, the model is based on your perception of
198
221
https://www.youtube.com/watch?v=g4M7stjzR1I&t=198s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
the world, of how your actions will affect the world. That's not so important for now; I mean, it is technically important, but at this stage we can just think about predicting, say, stock market data, weather data, or IQ-test sequences: one, two, three, four, five, what comes next? Yeah. So of course our actions affect what we're doing, but I'll come back to that in a second. And I'll keep interrupting,
221
245
https://www.youtube.com/watch?v=g4M7stjzR1I&t=221s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
so just to draw a line between prediction and planning: what do you mean by prediction in this way? Is it trying to predict the environment without your long-term action in the environment? What is prediction? Okay, if you want to put the actions in now, okay, then let's put them in now. Yes. So, another question. Okay, so the simplest form of prediction is that you just have data
245
276
https://www.youtube.com/watch?v=g4M7stjzR1I&t=245s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
which you passively observe, and you want to predict what happens without, you know, interfering. As I said: weather forecasting, stock market, IQ-test sequences, or just anything. Okay, and Solomonoff's theory of induction is based on compression. So you look for the shortest program which describes your data sequence, and then you take this program, run it, and it reproduces your data
276
299
https://www.youtube.com/watch?v=g4M7stjzR1I&t=276s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
sequence by definition, and then you let it continue running, and then it will produce some predictions. And you can rigorously prove that for any prediction task, this is essentially the best possible predictor. Of course, if there's a prediction task, or a task which is unpredictable, like, you know, fair coin flips, yeah, I cannot predict the next coin flip. But Solomonoff says,
299
322
https://www.youtube.com/watch?v=g4M7stjzR1I&t=299s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
okay, the next head is probably 50%; it's the best you can do. So if something is unpredictable, Solomonoff will also not magically predict it, but if there is some pattern and predictability, then Solomonoff induction will figure that out eventually, and not just eventually but rather quickly, and you can have proven convergence rates, whatever your data is. So there's pure magic in a sense.
322
347
https://www.youtube.com/watch?v=g4M7stjzR1I&t=322s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
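As a rough formalization of what was just described (the notation here is assumed for illustration, not from the transcript): Solomonoff's prior weights every program that reproduces the data, shorter programs more heavily, and prediction is the resulting conditional.

```latex
% Solomonoff's universal prior over sequences x, with U a universal Turing
% machine and \ell(p) the length of program p in bits; the sum runs over
% programs whose output starts with x:
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
% Prediction of the next symbol is then the conditional probability
M(x_{t+1} \mid x_{1:t}) = \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}
```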
g4M7stjzR1I
What's the catch? Well, the catch is that it is not computable, and we'll come back to that later. You cannot just implement it, even with Google's resources here, and run it and, you know, predict the stock market and become rich. I mean, Ray Solomonoff already, you know, tried that at the time. But the basic task is, you know, you're in the environment and you interact with the environment to try to
347
367
https://www.youtube.com/watch?v=g4M7stjzR1I&t=347s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
learn a model of the environment, and the model is in the space of all these programs, and your goal is to get a bunch of programs that are simple. Yes. So let's go to the actions now. Actually, good that you asked; usually I skip this part, although it is also a minor contribution which I made. So, the action part; usually I sort of just jump to the decision part. So let me
367
385
https://www.youtube.com/watch?v=g4M7stjzR1I&t=367s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
explain the action part. Thanks for asking. So you have to modify it a little bit: now you're not just predicting a sequence which comes to you, but you have an observation, then you act somehow, and then you want to predict the next observation based on the past observation and your action. Then you take the next action; you don't care about predicting it because
385
409
https://www.youtube.com/watch?v=g4M7stjzR1I&t=385s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
you're doing it. And then you get the next observation, and before you get it you want to predict it again, based on your past action and observation sequence; you just condition extra on your actions. There's an interesting alternative: that you also try to predict your own actions, if you want. Oh, in the past or the future? Your future actions.
409
434
https://www.youtube.com/watch?v=g4M7stjzR1I&t=409s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
Wait, let me wrap my head around that. I think my brain just broke. We should maybe discuss that later, after I've explained the AIXI model. That's an interesting variation. But it is a really interesting variation. And a quick comment, I don't know if you want to insert that here, but in terms of observations, you're looking at the entire, the big history, the long history of the
434
456
https://www.youtube.com/watch?v=g4M7stjzR1I&t=434s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
observations. Exactly, that's very important: the whole history, from birth, sort of, of the agent, and we can come back to why this is important. Often, you know, in RL you have MDPs, Markov decision processes, which are much more limiting. Okay, so now we can predict conditioned on actions, even if they influence the environment. But prediction is not all we want to do, right? We also want
456
478
https://www.youtube.com/watch?v=g4M7stjzR1I&t=456s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
to really act in the world, and the question is how to choose the actions. And we don't want to greedily choose the actions, you know, just what is best in the next time step. First I should say, you know, how do we measure performance? We measure performance by giving the agent reward; that's the so-called reinforcement learning framework. So
478
499
https://www.youtube.com/watch?v=g4M7stjzR1I&t=478s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
every time step you can give it a positive reward or a negative reward, or maybe no reward. It could be very scarce, right? Like if you play chess, just at the end of the game you give +1 for winning or -1 for losing. In the AIXI framework that's completely sufficient. So occasionally you give a reward signal, and you ask the agent to maximize reward, but not greedily, sort of, you know, the
499
519
https://www.youtube.com/watch?v=g4M7stjzR1I&t=499s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
next one, next one, because that's very bad in the long run if you're greedy. But over the lifetime of the agent: so let's assume the agent lives for m time steps, it dies in, sort of, a hundred years sharp; that's just, you know, the simplest model to explain. So it looks at the future reward sum and asks, what is my action sequence, or actually more precisely my policy, which
519
540
https://www.youtube.com/watch?v=g4M7stjzR1I&t=519s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
leads in expectation because of the know the world to the maximum reward some let me give you an analogy in chess for instance we know how to play optimally in theory it's just a minimax strategy I play the move which seems best to me under the assumption that the opponent plays the move which is best for him so best so worst for me and the assumption that he I play again the best move and
540
567
https://www.youtube.com/watch?v=g4M7stjzR1I&t=540s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
then you have this expecting max three to the end of the game and then you back propagate and then you get the best possible move so that is the optimal strategy which for norman already figured out a long time ago for playing adversarial games luckily or maybe unluckily for the theory it becomes harder the world is not always adversarial so it can be if the other
567
590
https://www.youtube.com/watch?v=g4M7stjzR1I&t=567s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
agents are humans, but in cooperative settings, or with nature, it's usually... I mean, nature is stochastic: you know, things just happen randomly, or don't care about you. So what you have to take into account is noise, and not necessarily adversariality. So you replace the minimum on the opponent's side by an expectation, which is general enough to also include adversarial cases. So now,
590
614
https://www.youtube.com/watch?v=g4M7stjzR1I&t=590s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
instead of a minimax strategy, you have an expectimax strategy. So far so good. So that is well known; it's called sequential decision theory. But the question is, on which probability distribution do you base that? If I have the true probability distribution, like, say, I play backgammon, right, there's dice and there's certain randomness involved, you know, I can calculate probabilities
614
634
https://www.youtube.com/watch?v=g4M7stjzR1I&t=614s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
and feed them into the expectimax algorithm, the sequential decision theory, and come up with the optimal decision if I have enough compute. But in the real world we don't know that. You know, what is the probability that the driver in front of me brakes? I don't know. It depends on all kinds of things, and especially in new situations I don't know. So it's this unknown thing about prediction, and
634
656
https://www.youtube.com/watch?v=g4M7stjzR1I&t=634s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
that's where Solomonoff comes in. So what you do is, in sequential decision theory, you just replace the true distribution, which we don't know, by this universal distribution. I didn't explicitly talk about it yet, but this is what is used for universal prediction, and you plug it into the sequential decision mechanism, and then you get the best of both worlds. You have a long-term planning agent, but it doesn't need to know anything about the world, because the Solomonoff induction part learns.
656
679
https://www.youtube.com/watch?v=g4M7stjzR1I&t=656s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
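In symbols, the agent just described picks actions by expectimax over the universal mixture; this is a sketch in the style of Hutter's AIXI literature, and the exact notation is an assumption for illustration:

```latex
% AIXI: at time t, choose the action maximizing expected future reward up to
% the horizon m, with the expectation taken under the universal mixture \xi:
a_t = \arg\max_{a_t} \sum_{o_t r_t} \max_{a_{t+1}} \sum_{o_{t+1} r_{t+1}} \cdots
      \max_{a_m} \sum_{o_m r_m} \; (r_t + \cdots + r_m)\,
      \xi(o_t r_t \cdots o_m r_m \mid a_1 o_1 r_1 \cdots a_{t-1} o_{t-1} r_{t-1}\, a_t \cdots a_m)
```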
g4M7stjzR1I
Can you explicitly try to describe the universal distribution and how Solomonoff induction plays a role here? I'm trying to understand it. Yeah. So what it does is, in the simplest case, I said take the shortest program describing your data, run it, and have a prediction, which would be deterministic. Yes. Okay, but you
679
705
https://www.youtube.com/watch?v=g4M7stjzR1I&t=679s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
should not just take the shortest program, but also consider the longer ones, and give them lower a-priori probability. So in the Bayesian framework you say that a priori any distribution, which is a model or stochastic program, has a certain a-priori probability, which is 2 to the minus the description length of this program, so longer programs are punished a priori. And
705
733
https://www.youtube.com/watch?v=g4M7stjzR1I&t=705s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
then you multiply it with the so-called likelihood function, which, as the name suggests, is how likely this model is given the data at hand. So if you have a very wrong model, it's very unlikely that this model is true, so it is a very small number; even if the model is simple, it gets penalized by that. And then what you do is take just the weighted sum, or the average, over it, and this gives you a probability distribution: the universal distribution, or Solomonoff distribution.
733
757
https://www.youtube.com/watch?v=g4M7stjzR1I&t=733s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
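The prior-times-likelihood sum just described, written out as a sketch (here ν ranges over the computable models, i.e. the stochastic programs mentioned above):

```latex
% Universal distribution as a Bayesian mixture: each model \nu gets prior
% weight 2^{-K(\nu)}, where K(\nu) is the length of the shortest program
% computing \nu (shorter description => larger weight), and \nu(x) plays
% the role of the likelihood of the data x under model \nu:
\xi(x) = \sum_{\nu} 2^{-K(\nu)}\, \nu(x)
```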
g4M7stjzR1I
So it's weighted by the simplicity of the program and the likelihood. Yes. It's kind of a nice idea. Yeah. So okay, and then you said you're playing n or m, I forgot the letter, steps into the future. So how difficult is that problem? What's involved there?
757
784
https://www.youtube.com/watch?v=g4M7stjzR1I&t=757s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
Okay, so here's a computation problem. What do we do? Yeah, so you have a planning problem up to the horizon m, and that's exponential time in the horizon m, which is, I mean, computable but intractable. I mean, even for chess it's already intractable to do that exactly. And, you know, it could also be a discounted kind of framework or so; having a hard horizon, you know, at a fixed number of years, is just for simplicity
784
807
https://www.youtube.com/watch?v=g4M7stjzR1I&t=784s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
of discussing the model, and also sometimes the math is simpler. But there are lots of variations; it's actually a quite interesting parameter. There's nothing really problematic about it, but it's very interesting. So, for instance, you think, no, let's let the parameter m tend to infinity, right? You want an agent which lives forever, right? If you do it naively, you have
807
830
https://www.youtube.com/watch?v=g4M7stjzR1I&t=807s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
two problems. First, the mathematics breaks down, because you have an infinite reward sum which may give infinity: getting reward 0.1 in every time step gives infinity, and getting reward 1 in every time step also gives infinity, so they're equally good; not really what we want. Another problem is that if you have an infinite life, you can be lazy for as long as you want, for ten years, yeah, and then catch up with
830
853
https://www.youtube.com/watch?v=g4M7stjzR1I&t=830s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
the same expected reward. And, you know, think about yourself, or maybe, you know, some friends or so: if they knew they lived forever, you know, why work hard now? Just enjoy your life, you know, and then catch up later. So that's another problem with the infinite horizon. And, as you mentioned, yes, we can go to discounting, but then the standard discounting is the so-called
853
875
https://www.youtube.com/watch?v=g4M7stjzR1I&t=853s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
geometric discounting. So one dollar today is worth about as much as, you know, one dollar and five cents tomorrow. If you do this geometric discounting, you have introduced an effective horizon: the agent is now motivated to look ahead a certain amount of time effectively; it's like a moving horizon. And for any fixed effective horizon, there is a problem to solve which requires a larger
875
901
https://www.youtube.com/watch?v=g4M7stjzR1I&t=875s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
horizon. So if I look ahead, you know, five time steps, I'm a terrible chess player, right? I need to look ahead longer. If I play Go, I probably have to look ahead even longer. So for every fixed horizon, there is a problem which this horizon cannot solve. Yes. But I introduced the so-called near-harmonic horizon, where the discount goes down like one over t rather than
901
922
https://www.youtube.com/watch?v=g4M7stjzR1I&t=901s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
exponentially in t, which produces an agent which effectively looks into the future proportionally to its age. So if it's five years old, it plans for five years; if it's a hundred years old, it plans for a hundred years. Interesting, and a little bit similar to humans, right? As children we don't look ahead very long; when we get adult we look ahead longer.
922
941
https://www.youtube.com/watch?v=g4M7stjzR1I&t=922s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
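A hedged formalization of the two discounting schemes being contrasted; the exact exponent is an assumption (any discount decaying like t^{-(1+ε)} keeps the reward sum finite), not a quote from the interview:

```latex
% Discounted value: V = \sum_{t=1}^{\infty} \gamma_t r_t, requiring \sum_t \gamma_t < \infty.
% Geometric:     \gamma_t = \gamma^t, \; 0<\gamma<1
%                => fixed effective horizon \approx 1/(1-\gamma), independent of age.
% Near-harmonic: \gamma_t = t^{-(1+\varepsilon)}, \; \varepsilon > 0
%                => tail \sum_{k \ge t} \gamma_k \approx t^{-\varepsilon}, which decays
%                   so slowly that the effective horizon grows in proportion to the age t.
```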
g4M7stjzR1I
Maybe when we get very old, I mean, we know that we don't live forever, and maybe then the horizon shrinks again. So, just adjusting the horizon: is there some mathematical benefit to that? Or is it just a nice... I mean, intuitively, empirically, it's probably a good idea to sort of push the horizon back, to extend the horizon as you experience more of the world. But is
941
967
https://www.youtube.com/watch?v=g4M7stjzR1I&t=941s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
there some mathematical conclusion here that is beneficial? With Solomonoff induction, for the prediction part, we have extremely strong finite-time, finite-data results: if you have so-and-so much data, then you lose so-and-so much, so the theory there is really great. With the AIXI model, with the planning part, many results are only asymptotic. Well, what does asymptotic mean? You can
967
992
https://www.youtube.com/watch?v=g4M7stjzR1I&t=967s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
prove, for instance, that in the long run, if the agent, you know, acts long enough, then it performs optimally, or some nice things happen, but you don't know how fast it converges. So it may converge fast, but we're just not able to prove it because the proof is typically hard; or maybe there's a bug in the model, so that it is really dead slow. So that is what asymptotic
992
1,015
https://www.youtube.com/watch?v=g4M7stjzR1I&t=992s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
means: sort of, eventually, but we don't know how fast. And if I give the agent a fixed horizon m, then I cannot prove asymptotic results, right? I mean, if it dies in a hundred years, then in a hundred years it's over; I cannot say "eventually." So this is the advantage of the discounting, that I can prove asymptotic results. So, just to clarify: okay, I've built up a model;
1,015
1,042
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1015s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
now, in a moment, I have this way of looking several steps ahead. How do I pick what action I will take? It's like with playing chess, right? You do this minimax; in this case here you do expectimax, based on the Solomonoff distribution. You propagate back, and then an action falls out: the action which maximizes the future expected reward under the Solomonoff distribution. And
1,042
1,070
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1042s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
then you take this action, and then repeat. You get a new observation, and you feed it in, this action and observation, then you repeat. And the reward, and so on. Yeah, the reward too. And then maybe you can even predict your own action. I love the idea. But okay, this big framework, what is it? I mean, it's kind of a beautiful mathematical framework to think about artificial general intelligence.
1,070
1,093
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1070s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
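A toy stand-in for that perceive-plan-act loop. Since the Solomonoff mixture is incomputable, this sketch replaces it with a tiny finite model class (two biased-coin "environments"); everything here, including the simplification that mixture weights are not re-conditioned during lookahead, is an illustrative assumption, not AIXI itself:

```python
# Toy expectimax over a weighted model mixture (see the formula above).
ACTIONS = [0, 1]
PERCEPTS = [(0, 0.0), (1, 1.0)]            # (observation, reward) pairs

class BiasedCoin:
    """Hypothetical model: emits observation 1 (reward 1) with probability theta."""
    def __init__(self, theta):
        self.theta = theta
    def prob(self, percept, history, action):
        return self.theta if percept[0] == 1 else 1.0 - self.theta

def mixture_prob(models, weights, history, action, percept):
    # Percept probability under the weighted (Bayesian) mixture of models.
    return sum(w * m.prob(percept, history, action)
               for m, w in zip(models, weights))

def expectimax(models, weights, history, depth):
    # Max over actions of the expected (reward + future value) over percepts.
    if depth == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for a in ACTIONS:
        value = 0.0
        for percept in PERCEPTS:
            p = mixture_prob(models, weights, history, a, percept)
            if p > 0.0:
                future, _ = expectimax(models, weights,
                                       history + [(a, percept)], depth - 1)
                value += p * (percept[1] + future)
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

# Plan three steps ahead under a uniform prior over the two candidate worlds.
models, weights = [BiasedCoin(0.2), BiasedCoin(0.8)], [0.5, 0.5]
print(expectimax(models, weights, history=[], depth=3))
```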
g4M7stjzR1I
What does it help you intuit about how to build such systems? Or maybe, from another perspective, what does it help us with in understanding AGI? So, when I started in the field, I was always interested in two things. One was, you know, AGI (the name didn't exist then; it was called strong AI), and physics, the theory of everything. So I switched back and forth
1,093
1,125
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1093s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
between computer science and physics quite often. You said the theory of everything; the theory of everything and AGI are like the two biggest problems before all of humanity. Yeah, I can explain, if you want, at some later time, you know, why I'm interested in these two questions. And on a small tangent: if it was one to be solved, which one would you... if you were,
1,125
1,153
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1125s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
if an apple fell on your head and there was a brilliant insight, and you could arrive at the solution to one, would it be AGI or the theory of everything? Definitely AGI, because once the AGI problem is solved, I can ask the AGI to solve the other problem for me. Yeah, brilliantly put. Okay, so, as you were saying: okay, so the reason why I didn't settle, I mean, this thought
1,153
1,179
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1153s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
that, you know, once you have solved AGI, it solves all kinds of other problems, not just the theory of everything but all kinds of more useful problems for humanity, is very appealing to many people, and, you know, I had this thought also. But I was quite disappointed with the state of the art of the field of AI. There was some theory, you know, about logical reasoning, but I was never convinced that this would
1,179
1,203
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1179s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
fly, and then there were these more holistic approaches with neural networks, and I didn't like those heuristics. And also, I didn't have any good idea myself. So that's the reason why I toggled back and forth quite a while, and I even worked four and a half years in a company developing software, something completely unrelated. But then I had this idea about the AIXI model. And so what it gives you:
1,203
1,229
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1203s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
it gives you a gold standard. I have proven that this is the most intelligent agent which anybody could "build," in quotation marks, right, because it's just mathematical and you need infinite compute. Yeah. But this is the limit, and it is completely specified. It's not just a framework; you know, every year tens of frameworks are developed which are just skeletons, and then pieces
1,229
1,257
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1229s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
are missing, and usually these missing pieces, you know, turn out to be really, really difficult. So this is completely and uniquely defined, and we can analyze it mathematically. And we've also developed some approximations; I can talk about that a little bit later. That would be sort of the top-down approach, like, say, von Neumann's minimax theory: that's the theoretically optimal
1,257
1,279
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1257s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
play of games, and now we need to approximate it, put heuristics in, prune the tree, blah blah blah, and so on. We can do that also with the AIXI model, but for general AI. It can also inspire the bottom-up work, and most researchers go bottom-up, right? They have their systems and try to make them more general, more intelligent; it can inspire in which direction to go. What do you mean by that?
1,279
1,302
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1279s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
So, if you have some choice to make, right? How should I evaluate my system if I can't do cross-validation? How should I do my learning if my standard regularization doesn't work well? So the answer is always: well, we have a system which does everything; that's AIXI. It's, you know, completely in the ivory tower, completely useless from a practical point of view, but you can look
1,302
1,325
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1302s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
at it and see: oh yeah, maybe, you know, I can take some aspects, and, you know, instead of Kolmogorov complexity, just take some compressors which have been developed so far. And for the planning, well, we have UCT here, which is also, you know, being used in Go. At least it inspired me a lot to have this formal definition. And if you look at other fields, you know, like, I
1,325
1,350
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1325s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
always come back to physics because I have a physics background. Think about the phenomenon of energy: that was for a long time a mysterious concept, and at some point it was completely formalized, and that really helped a lot. And you can point out a lot of these things which were first mysterious and vague, and then they were rigorously formalized. Speed and acceleration had been confused, right,
1,350
1,371
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1350s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
until they were formally defined; there was a time like this, and people, you know, who don't have any physics background, you know, still confuse them. So the AIXI model, or the intelligence definition, which is sort of the dual to it (we'll come back to that later), formalizes the notion of intelligence uniquely and rigorously. So, in a sense, it serves as kind of the
1,371
1,395
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1371s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
light at the end of the tunnel. So, yeah, I mean, there's a million questions I could ask here. So maybe, kind of, okay, let's feel around in the dark a little bit. So there have been, here at DeepMind but in general, a lot of breakthrough ideas, just like we've been saying, around reinforcement learning. So how do you see the progress in reinforcement learning as different? Like, which subset of AIXI
1,395
1,420
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1395s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
does it occupy currently? Like you said, maybe the Markov assumption is made quite often in reinforcement learning; there are these other assumptions made in order to make the system work. What do you see as the difference and connection between reinforcement learning and AIXI? So the major difference is that essentially all other approaches make stronger assumptions. So in
1,420
1,449
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1420s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
reinforcement learning, the Markov assumption is that the next state or next observation only depends on the previous observation, and not the whole history, which of course makes the mathematics much easier, rather than dealing with histories. Of course, they profit from it also, because then you have algorithms which run on current computers and do something practically useful.
1,449
1,469
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1449s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
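The contrast just drawn, in symbols; the notation is an assumption for illustration, not from the interview:

```latex
% Markov (MDP) assumption: the next observation/state depends only on the last:
P(o_{t+1} \mid o_t, a_t)
% History-based setting, as in AIXI: it may depend on the entire interaction so far:
P(o_{t+1} \mid o_1 a_1\, o_2 a_2 \cdots o_t a_t)
```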
g4M7stjzR1I
But for general RL, all the assumptions which are made by other approaches, we know already that they are limiting. So, for instance, usually you need an ergodicity assumption in the MDP framework in order to learn. Ergodicity essentially means that you can recover from your mistakes and that there are no traps in the environment. And if you make this assumption, then essentially you can,
1,469
1,494
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1469s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
you know, go back to a previous state, go there a couple of times, and then learn the statistics and what the state is like, and then in the long run perform well in this state. So there are no fundamental problems. But in real life we know, you know, there can be one single action, you know, one second of being inattentive while driving a car fast, that can ruin the rest of my life; I
1,494
1,519
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1494s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
can become quadriplegic or whatever, and there's no recovery anymore. So the real world is not ergodic, I always say; there are traps and there are situations we cannot recover from, and very little theory has been developed for this case. What do you see, in the context of AIXI, as the role of exploration? Sort of... you mentioned, you know, in the
1,519
1,548
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1519s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
real world we can get into trouble, and we make the wrong decisions and really pay for it. But exploration seems to be fundamentally important for learning about this world, for gaining new knowledge. So is exploration baked in? Another way to ask it: what are the parameters of AIXI that can be controlled? Yeah, I say the good thing is that there are no parameters to control,
1,548
1,572
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1548s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
though some other people would like knobs to control, and you can do that; I mean, you can modify AIXI so that you have some knobs to play with if you want to. But the exploration is directly baked in, and that comes from the Bayesian learning and the long-term planning. These together already imply exploration. You can nicely and explicitly prove that for simple problems,
1,572
1,604
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1572s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
like so-called bandit problems. To give a really good example: say you have two medical treatments, A and B. You don't know their effectiveness; you try A a little bit and B a little bit, but you don't want to harm too many patients, so you have to sort of trade off exploring, and at some point you want to exploit. And you can do the mathematics and figure out the optimal strategy.
1,604
1,632
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1604s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
These are Bayesian agents; there are also non-Bayesian agents. But it shows that this Bayesian framework, by taking a prior over possible worlds, doing the Bayesian mixture, then the Bayes-optimal decision with long-term planning (that is important), automatically implies exploration, also to the proper extent: not too much exploration and not too little.
1,632
1,655
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1632s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
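A small sketch of that two-treatment bandit with a Bayesian agent. This uses Thompson sampling, which is one simple Bayesian strategy, not the Bayes-optimal long-term planner discussed in the interview; the true success rates are made-up assumptions:

```python
# Two-armed Bernoulli bandit (treatments A and B) with Thompson sampling:
# a simple Bayesian strategy that trades off exploring and exploiting.
import random

TRUE_RATES = {"A": 0.6, "B": 0.4}          # unknown to the agent (assumed)
posterior = {"A": [1, 1], "B": [1, 1]}     # Beta(successes+1, failures+1)

for patient in range(1000):
    # Sample an effectiveness estimate for each treatment from its posterior
    # and pick the treatment whose sample is highest.
    samples = {t: random.betavariate(a, b) for t, (a, b) in posterior.items()}
    choice = max(samples, key=samples.get)
    # Observe the (simulated) outcome and update that treatment's posterior.
    success = random.random() < TRUE_RATES[choice]
    posterior[choice][0 if success else 1] += 1

print(posterior)   # most trials should have gone to the better treatment, A
```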
g4M7stjzR1I
In these very simple settings, and in the AIXI model, I was also able to prove a self-optimizing theorem, or asymptotic optimality theorems, although only asymptotic, not finite-time bounds. It seems like the long-term planning is really important; the long-term part of the planning is really important. Yes. And also, I mean, maybe as a quick tangent: how important do you think is removing the Markov
1,655
1,675
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1655s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
assumption and looking at the full history? Sort of, intuitively, of course it's important, but is it, like, fundamentally transformative to the entirety of the problem? What's your sense of it? Because we make that assumption quite often; it's just throwing away the past. I think it's absolutely crucial. The question is whether there's a way to deal with it in
1,675
1,701
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1675s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
a way that is more holistic and still works sufficiently well. I have to come up with an example on the fly: you have, say, some key event in your life, you know, a long time ago, in some city or something, and you realized that's a really dangerous street or whatever, right, and you want to remember that forever, in case you come back there. It's kind of a selective kind of
1,701
1,724
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1701s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
memory. So you remember all the important events in the past, but somehow selecting the important ones, you see, that's very hard. Yeah. And I'm not concerned about, you know, just storing the whole history. You can just calculate: you know, a human life, say 30 or 100 years, doesn't matter, how much data comes in through the vision system and the auditory system? You compress it a little
1,724
1,747
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1724s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
g4M7stjzR1I
bit, in this case losslessly, and store it; we soon have the means of just storing it. Yeah, but you still need the selection for the planning part, and the compression for the understanding part. The raw storage I'm really not concerned about, and I think if you develop an agent, preferably you should just store all the interaction history, and then you build
1,747
1,774
https://www.youtube.com/watch?v=g4M7stjzR1I&t=1747s
Marcus Hutter: What is AIXI? | AI Podcast Clips
https://i.ytimg.com/vi/g…axresdefault.jpg
P0yVuoATjzs
[Music] So, Wolfgang helpfully laid out the dichotomy between industry people and academics, and experimentalists and computational people, and if you're wondering which one I am, the answer is yes. So I'm going to mostly describe work that happened in my lab at Harvard, and work by Bill Lotter, who is actually in industry now, doing a start-up, because that's what everyone
0
36
https://www.youtube.com/watch?v=P0yVuoATjzs&t=0s
Predictive Coding Models of Perception
https://i.ytimg.com/vi/P…axresdefault.jpg