This changes everything, at least many people say so.
ChatGPT, our lord and savior, has arrived.
It is a new model by OpenAI that has been fine-tuned on human feedback.
It is amazing at pretty much any task people throw at it and it can do so much more than previous models.
Or is it just that it's easier to make it do so much more? We don't know.
We're gonna look at the stuff it can do today, the stuff where it maybe also fails a little bit, and the jailbreaks.
Yes, the jailbreaks.
I know, AIs have jailbreaks.
Now this is a crazy timeline.
So join me diving into ChatGPT, and let's see what this model can do.
Today's video is sponsored by Weights & Biases, but don't click away yet.
I want to tell you about a new feature that you might be interested in.
This is the reports API, which is just launching like right now.
What it does is it generates reports programmatically.
So you might be familiar with Weights & Biases: you can track your experiments, track your models, and make everything reproducible.
And these reports have been a really core part of Weights & Biases, where you can take pretty much everything that you do and present it in a nice write-up to share with someone, like your supervisor, co-workers, or team members, or with the entire world by making it public.
So here I have a quick example.
All I do is import the Reports API, then create a new report and call save.
So I will have an empty report to start with.
And now I can add stuff to that report via the API.
For example, right here, I'm going to add a header, a paragraph, an image, and another paragraph.
And as you can see here, this is a report by me and everything is here.
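In code, that looks roughly like the following sketch using the wandb Reports API (`wandb.apis.reports`); exact class and argument names may have shifted between versions, and the project name and image URL are placeholders:

```python
# Sketch of creating and populating a report programmatically.
# "my-project" and the image URL are placeholders; check the current
# wandb docs for the exact class and argument names.
import wandb.apis.reports as wr

# Create a new, empty report and save it.
report = wr.Report(
    project="my-project",
    title="My programmatic report",
)
report.save()

# Now add content to the report via the API: a header, a paragraph,
# an image, and another paragraph.
report.blocks = [
    wr.H1(text="Results"),
    wr.P(text="A short introduction to this experiment."),
    wr.Image(url="https://example.com/figure.png"),
    wr.P(text="Some closing remarks."),
]
report.save()
```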
Now obviously, this gets really powerful once you pair it with the experimental data that I've created before.
Here, I'm going to add some plots and some charts that come straight from my experimental runs.
So here you can see a pretty basic chart that compares four of my runs.
But there's more: I've also added this run-compare panel right here, which you might know from Weights & Biases.
So this is a table that compares the different runs among themselves.
I can then immediately compare that to the plots above and make very good decisions about what happened here.
Naturally, I can change pretty much anything that I could do in the UI also via the API.
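Continuing the hypothetical sketch from above, attaching the charts and the run comparer might look roughly like this (again, placeholder entity and project names, and the exact panel classes may differ by version):

```python
# Attach a panel grid that pulls a chart and a run-compare table
# straight from the experimental runs. Names are illustrative.
report.blocks += [
    wr.PanelGrid(
        runsets=[wr.Runset(entity="my-entity", project="my-project")],
        panels=[
            wr.LinePlot(y=["loss"]),  # basic chart comparing the runs
            wr.RunComparer(),         # table comparing runs among themselves
        ],
    ),
]
report.save()
```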
Now this is fully fledged: I can embed code and Markdown and math and lists and YouTube videos and images and songs.
And I got all the goodies right here.
I got the tables, I got the plots, I got the numbers, I got the compare charts, I got the hyperparameter importance plots, and so on.
You get the idea.
So imagine that overnight, you run experiments on some new data or with a new method that you've devised and so on.
And then in the morning, once these things are done, you don't have to go, you know, to your experiments and filter and so on.
You get a nicely prepared report with only exactly the things that you are interested in.
All of this can be fully automated with the full power of a Turing-complete programming language.
I think this very much opens up new possibilities in the world of MLOps, in the world of reproducible and understandable machine learning experimentation and deployment.
And I absolutely invite you to check this out.
That being said, thank you so much to Weights & Biases for sponsoring this video.
Please check them out.
Use the link in the description, it's wandb.me/yannic, to let them know that I've sent you, and now let's get into the video.
ChatGPT is a really awkward word to say, because I want to, like, start with a G: "Jat GPT."
Can't they just call it, like, Alan or something easy? All right, so there's a blog post by OpenAI called "ChatGPT: Optimizing Language Models for Dialogue."
And this is a model that is able to interact with you.
So here, for example, the user says, "Help me write a short note to introduce myself to my neighbor," and ChatGPT says, "Hi there, my name is [your name], I just moved next door."
And then the user does something special: it asks, or they ask, "Can you make it more formal?" And it says, "Dear neighbor, I am writing to introduce myself."
So this already tells us that there is a large degree of interactivity to this model.
So the model is able to sort of follow the conversation and adjust what it says based on what came previously.
And this is not too different from previous GPT models.
But now it's explicitly in a chat structure.
So the user says something, the model says something, the user says something.
Previously, you had to prompt that structure out of the model.
And now it's directly fine-tuned into the model; we're going to look in a bit at exactly how they arrived at this point.
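To illustrate the difference (this prompt is my own hypothetical example, not from the blog post):

```python
# Hypothetical: with a plain completion model, the chat structure had to
# be written into the prompt itself, and the model simply continued it.
prompt = (
    "The following is a conversation with a helpful AI assistant.\n"
    "User: Help me write a short note to introduce myself to my neighbor.\n"
    "Assistant:"
)
# ChatGPT no longer needs this scaffolding: the user/assistant turn
# structure is fine-tuned directly into the model.
```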
The next prompt by the user here is also very interesting.
"Can you send it for me?" And ChatGPT says, "I'm sorry, but I'm a text-based AI assistant and do not have the ability to send a physical letter for you.
I can only provide suggestions and advice on what to write; it's up to you to take that advice and send the letter yourself."
And we're gonna see that over and over again.
This is what I want to call something like a safety mechanism that they have built in.
Well, "built in" is an overstatement, because, again, you cannot really build stuff into these large language models.
All you can do is either use an external system to detect something bad going on, something you don't want, like the user asking ChatGPT to do something physical, or you can fine-tune it into the model.
So you give it lots of examples where it's being asked to do something it can't do, and then train it to respond:
I'm sorry, I'm just an AI assistant.
I can't do that for you.
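Purely as an illustration of that second option, such fine-tuning data might look something like this (hypothetical examples, not OpenAI's actual data):

```python
# Hypothetical fine-tuning pairs: requests the model cannot fulfill,
# each paired with the desired refusal. Illustrative format only.
refusal_examples = [
    {
        "prompt": "Can you send this letter to my neighbor for me?",
        "response": "I'm sorry, but I'm a text-based AI assistant and do "
                    "not have the ability to send a physical letter for you.",
    },
    {
        "prompt": "Can you water my plants while I'm away?",
        "response": "I'm sorry, I'm just an AI assistant. I can't do that "
                    "for you.",
    },
]
```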
I'm getting super strong Space Odyssey vibes from this model.
So in the method section, we go on a bit, and it says they train this model using reinforcement learning from human feedback (RLHF).
This is a technique OpenAI and others have previously described, where you use human feedback in order to improve these language models.
Now this isn't super easy though, because usually you need like giant data sets to train these models.
And also reinforcement learning isn't exactly the most stable training paradigm there is.
So the current approach goes something like this.
In step one, they collect demonstration data from humans and train a supervised policy.
Now this isn't yet the final product.
This is simply the first stepping stone into the direction of more human alignment.
Then the second step is to simply let this model produce a lot of stuff and have a human rank the outputs.
So the human says: this is good, this is better, this is really bad.
And that data is being used not to train the model itself, but to train a reward model.
So the way you get the main amount of human data is not by letting humans produce data, because that's really slow; you do only a little bit of that.
It is much more scalable to let the humans just consume data and rate it.
And that's what you use to build the reward model.
So this is a model that takes in a bunch of pieces of text and just tells you this is really good, this is really bad.
And now, in step three, you can use reinforcement learning, here proximal policy optimization (PPO), in order to train a model against your reward model.
So this technique has to be one of the more scalable ways in which you can use human feedback with reinforcement learning.
So: first, make an initial policy from human demonstrations; you need a little data for that.
Then let humans annotate the quality of outputs, which is more data, but the humans are more efficient at it.
And then use that to train a reward model to run the reinforcement learning against.
So the human knowledge is essentially distilled via the reward model into the model that then trains using reinforcement learning.
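To make those three steps concrete, here is a tiny, self-contained toy sketch of the pipeline. This is my own illustration, not OpenAI's code: single tokens stand in for whole responses, random tensors stand in for the human data, and plain REINFORCE stands in for PPO.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, N = 100, 32, 64  # toy vocabulary, embedding size, batch size

# A toy "language model": embeds a prompt token, predicts a response token.
class ToyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, prompt):                 # prompt: (N,) token ids
        return self.head(self.embed(prompt))   # logits over responses

policy = ToyLM()
prompts = torch.randint(0, VOCAB, (N,))

# Step 1: supervised fine-tuning on human demonstrations (a little data).
demos = torch.randint(0, VOCAB, (N,))  # stand-in for labeler-written responses
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(100):
    loss = F.cross_entropy(policy(prompts), demos)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2: train a reward model on human *rankings* of model outputs.
# Pairwise loss: the response ranked higher should get a higher score.
reward_model = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, 1))
better = torch.randint(0, VOCAB, (N,))  # stand-in for the preferred response
worse = torch.randint(0, VOCAB, (N,))   # stand-in for the dispreferred one
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
for _ in range(100):
    margin = reward_model(better) - reward_model(worse)
    loss = -F.logsigmoid(margin).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Step 3: reinforcement learning against the reward model. The real thing
# uses PPO with a KL penalty toward the step-1 policy; plain REINFORCE is
# used here just to show the direction of the update.
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
for _ in range(100):
    dist = torch.distributions.Categorical(logits=policy(prompts))
    responses = dist.sample()                               # sampled outputs
    rewards = reward_model(responses).squeeze(-1).detach()  # scored by RM
    loss = -(dist.log_prob(responses) * rewards).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The key point is exactly the distillation described above: the humans' preferences live in the reward model, and the policy only ever sees the reward model's scores.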
Here they say ChatGPT is fine-tuned from a model in the GPT-3.5 series.
And in a different blog post, they go into what they mean by GPT-3.5 models.
They say it's a series of models that was trained on a blend of text and code from before Q4 2021.
The following models are in the GPT-3.5 series.
So there's code-davinci-002, which is the basis for something like Copilot.
Actually, we don't know that, but we can suspect it.
Then there's text-davinci-002, which was the previously newest GPT-3 model, and which they say is an InstructGPT model based on code-davinci-002.
Which is really interesting, right? So the newer text models are actually fine-tuned, or trained, on top of a code model, not a pure language model.
And then they say text-davinci-003 is an improvement on text-davinci-002.
How do they improve? We don't know.
Are these models the ones described in the papers? No, they are trained similarly to the ones from the InstructGPT paper.
Do you have a thorough understanding of what OpenAI is doing or what's happening? No? Me neither.
Don't worry, OpenAI has you covered because here is their development and deployment lifecycle of something they call iterative improvement.
So this goes from initial development to alignment, where they fine-tune using instructions and alignment evaluations; then they red-team and user-test; then they give the model to a private beta; then they look at use cases in pilots; then they do risk assessments and a retrospective impact assessment; and then the loop closes and they go again and develop a newer model.
And in this loop, OpenAI hopes to improve their models and make them more human aligned, which is all fine and good.
But you know what I don't see here? You ever getting that model.
But in any case, let's move on.
So this latest model, text-davinci-003, dropped just a few days before ChatGPT came out.
And people have already tested it and found that in many places, it is actually better than, or at least on par with, the previous GPT-3 model, that is, text-davinci-002.
But now, let's dive into ChatGPT.
What can it do? Well, it can write a short essay in favor of the statement that a good model of cognitive function needs to implement biological detail.
